1 |
A framework for data loss prevention using document semantic signature. Alhindi, Hanan, 22 November 2019.
The theft and exfiltration of sensitive data (e.g., state secrets, trade secrets, company records) represent one of the most damaging threats that malicious insiders can carry out against institutions and organizations, because they can seriously diminish the confidentiality, integrity, and availability of the organization's data. Data protection and insider threat detection and prevention are therefore significant steps for any organization seeking to enhance its internal security. In the last decade, data loss prevention (DLP) has emerged as one of the key mechanisms organizations use to detect and block unauthorized data transfer across the organization's perimeter. However, existing DLP approaches face several practical challenges, such as their relatively low accuracy, which in turn limits their prevention capability. Current DLP approaches are also ineffective at handling unstructured data or at searching and comparing content semantically when confronted with evasion tactics in which sensitive content is rewritten without changing its semantics. In this dissertation, we present a new DLP model that tracks sensitive data using a summarized version of the content semantics called the document semantic signature (DSS). The DSS can be updated dynamically as the protected content changes, and it is resilient against evasion tactics such as content rewriting. We use domain-specific ontologies to capture content semantics and track conceptual similarity and relevancy using adequate metrics to identify data leaks from sensitive documents. The evaluation of the DSS model on two public datasets from different domains of interest achieved very encouraging results in terms of detection effectiveness. / Graduate
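As a rough illustration of the idea (not the dissertation's actual DSS construction), a semantic signature can be modeled as a weighted bag of ontology concepts and compared with cosine similarity, so that a rewritten leak with different surface words but the same concepts still scores high. The mini-ontology and all term mappings below are invented for the example:

```python
from collections import Counter
from math import sqrt

# Hypothetical mini-ontology: surface terms mapped to domain concepts.
# The dissertation's DSS is far richer; this only illustrates comparing
# documents at the concept level rather than the word level.
ONTOLOGY = {
    "salary": "compensation", "wage": "compensation", "pay": "compensation",
    "merger": "acquisition", "takeover": "acquisition",
    "patent": "intellectual_property", "invention": "intellectual_property",
}

def semantic_signature(text: str) -> Counter:
    """Map tokens to ontology concepts and count occurrences."""
    return Counter(ONTOLOGY[t] for t in text.lower().split() if t in ONTOLOGY)

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[c] * b[c] for c in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

protected = semantic_signature("patent filing and invention disclosure with salary data")
# A rewritten leak: different words, largely the same concepts.
outbound = semantic_signature("the takeover includes every wage and pay record plus a patent")
print(round(cosine_similarity(protected, outbound), 2))
```

Despite sharing no sensitive keyword verbatim, the two texts map to overlapping concepts and score well above zero, which is the resilience-to-rewriting property the abstract describes.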
|
2 |
Enhancing Existing Disaster Recovery Plans Using Backup Performance Indicators. White, Gwen, 01 January 2017.
Companies that perform data backup lose valuable data because they lack reliable data backup or restoration methods. The purpose of this study was to examine the need for a Six Sigma data backup performance indicator tool that clarifies the current state of a data backup method using an intuitive numerical scale. The theoretical framework for the study included backup theory, disaster recovery theory, and Six Sigma theory. The independent variables were implementation of data backup, data backup quality, and data backup confidence; the dependent variable was the need for a data backup performance indicator. An adapted survey instrument that measured an organization's data backup plan, originally administered by InformationWeek, was used to survey 107 businesses with 15 to 250 employees in the Greater Cincinnati area. The results revealed that 69 of the 107 small businesses did not need a data backup performance indicator, and the binary logistic regression model indicated no significant relationship between the dependent and independent variables. The study concludes that many small businesses have not experienced a disaster and cannot see the importance of a data backup indicator that quantifies recovery potential in case of a disaster. Further research is recommended to determine whether this finding applies only to small businesses in the Greater Cincinnati area, through comparisons based on business size and location. This study contributes to positive social change through improvement of data backup, which enables organizations to recover quickly from a disaster, thereby saving jobs and contributing to the stability of city, state, and national economies.
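For context, a generic Six Sigma-style indicator can translate a backup failure rate into a sigma level on an intuitive numerical scale. The sketch below uses standard Six Sigma arithmetic with the conventional 1.5-sigma long-term shift; it is not the specific tool the study examined, and the example figures are invented:

```python
from statistics import NormalDist

def sigma_level(failed: int, attempted: int) -> float:
    """Convert a backup failure rate into a Six Sigma-style sigma level.

    Generic Six Sigma arithmetic (observed yield -> z-score, plus the
    conventional 1.5-sigma long-term shift); not the study's indicator.
    """
    if attempted <= 0 or failed >= attempted:
        return 0.0
    if failed == 0:
        return 6.0  # perfect observed yield: cap at the classic 6-sigma target
    yield_rate = 1 - failed / attempted
    return NormalDist().inv_cdf(yield_rate) + 1.5

# e.g. 7 failed restores out of 2,000 backup jobs:
print(round(sigma_level(7, 2000), 2))
```

A single number like this is the kind of "intuitive numerical scale" the study's research question concerns: it lets a non-specialist compare backup methods without reading raw failure logs.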
|
3 |
The Backup-Plan: En kvantitativ studie om säkerhetskopiering bland studenter på Uppsala universitet / A quantitative study of backup behavior among students at Uppsala University. Bennich-Björkman, Oscar; Nyström, Anton, January 2016.
Few people back up their files frequently enough, even though they risk losing important files. Why is this? Through a quantitative survey, this paper attempts to identify which factors have the greatest impact on this behavior and whether they are correlated. Data were collected via a questionnaire that received over 300 responses from students at Uppsala University. The results were analyzed using the theoretical framework Protection Motivation Theory (PMT) and then compared with similar research. The results show that laziness and forgetfulness are the two factors the respondents themselves consider most influential; the study programme the student attends also has an effect.
The results further show that both the assessed probability of data loss and the assessed severity of the resulting problems correlate positively with how often backups are performed, with assessed severity having the greater impact. This finding differs from some earlier research but is in line with what PMT predicts about this behavior.
|
4 |
Automatisk sparning: funktionen du aldrig visste fanns / Automatic saving: the feature you never knew existed. Leo, Magnus, January 2010.
Automatic saving in computer programs is examined from the perspective of how confident users feel that their work will be preserved while working on a computer, and how they react to a new type of design. There is a problem rooted in how computers are technically constructed compared with how computer users are accustomed to objects behaving in the real world. Current and older programs were reviewed to see how they have implemented saving. Mental models and metaphors serve as the theory for how users understand saving, and the limitations of human memory and attention are used to explain why saving should be handled automatically. A series of studies shows that the average computer user today claims to largely trust computers while working with them, yet exhibits behavior suggesting otherwise. Finally, a test was conducted with a malfunctioning program that always preserves the user's work. The results show that users can trust transparent programs that save in the background, without the user noticing anything.
|
5 |
Preventing data loss using rollback-recovery: A proof-of-concept study at Bolagsverket. Sjölinder, Max, January 2013.
This thesis investigates two alternative approaches, referred to as automatic and semi-automatic replay, which can be used to prevent data loss due to a certain set of unforeseen events at Bolagsverket, the Swedish Companies Registration Office. The approaches make it possible to recover the correct data from a database that belongs to a stateless distributed system and that contains erroneous or inaccurate information due to past faults. Both approaches utilize log-based rollback-recovery techniques but make different assumptions about the deterministic behaviour of Bolagsverket's systems. A stateless distributed system logs all received messages during failure-free operation. During recovery, automatic replay recovers the data by having the system re-process the logged messages. In contrast, semi-automatic replay recovers data by using the logged messages to let officials at Bolagsverket manually redo lost work in a controlled manner. Proof-of-concept implementations of the two replay approaches were developed on a simplified model that resembles one of Bolagsverket's electronic services, yet is general to any stateless system that communicates asynchronously using JMS messages and synchronously using XML sent over HTTP. The theoretical and performance evaluations were conducted with the aim of producing results general to any system with similar characteristics to those of the model. The results suggest that the failure-free overhead at Bolagsverket is approximately 100 milliseconds per logged message, and that around 3 gigabytes of data must be stored in order to recover one average day's operation. Further, automatic replay successfully recovers one average day's operation in around 70 minutes; semi-automatic replay is calculated to require at most one workday to recover the same amount of data.
It is assessed that automatic replay is a suitable solution for Bolagsverket if their systems are proven to be fully deterministic; otherwise, semi-automatic replay can be utilized. Further evaluation is recommended before either approach is implemented in a production environment.
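The log-then-replay scheme described above can be sketched in miniature. This is an illustration under the thesis's determinism assumption; the class and field names (`RegistrationService`, `company_id`) are invented for the example and are not Bolagsverket's actual interfaces:

```python
import json

class MessageLog:
    """Append-only log of inbound messages, written during failure-free operation."""
    def __init__(self):
        self.entries = []

    def record(self, message: dict) -> None:
        self.entries.append(json.dumps(message))  # persisted, replayable form

class RegistrationService:
    """Toy stateless handler: all state lives in the 'database' dict it is given."""
    def __init__(self, database: dict):
        self.database = database

    def handle(self, message: dict) -> None:
        # Deterministic processing: same log, same order -> same database state.
        self.database[message["company_id"]] = message["status"]

log = MessageLog()
live_db = {}
service = RegistrationService(live_db)
for msg in [{"company_id": "A1", "status": "registered"},
            {"company_id": "B2", "status": "dissolved"}]:
    log.record(msg)      # log first, then process
    service.handle(msg)

# Automatic replay after data loss: rebuild state from the log alone.
recovered_db = {}
for entry in log.entries:
    RegistrationService(recovered_db).handle(json.loads(entry))
print(recovered_db == live_db)
```

If `handle` were non-deterministic (timestamps, random IDs, external lookups), replay would diverge from the lost state, which is exactly why the thesis makes automatic replay conditional on proven determinism and offers manual, operator-driven replay as the fallback.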
|
7 |
Nasazení kontextového DLP systému v rámci zavádění ISMS / Deployment of the Context DLP System within ISMS Implementation. Imrich, Martin, January 2015.
This diploma thesis focuses on DLP implementation within a specific organization. It analyzes the current situation, uses the findings to select the most suitable DLP system, and finally describes the actual deployment of the chosen system within the organization.
|
8 |
Telemetry Post-Processing in the Clouds: A Data Security Challenge. Kalibjian, J. R., 10 1900.
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / As organizations move toward cloud [1] computing environments, data security challenges will begin to take precedence over network security issues. This will potentially impact telemetry post-processing in a myriad of ways. After reviewing how data security tools like Enterprise Rights Management (ERM), Enterprise Key Management (EKM), Data Loss Prevention (DLP), Database Activity Monitoring (DAM), and tokenization are impacting cloud security, the paper examines their effect on telemetry post-processing. An architecture is described detailing how these data security tools can be utilized to make telemetry post-processing environments in the cloud more robust.
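Of the tools listed, tokenization is the easiest to sketch: sensitive values are swapped for opaque tokens before data leaves the trusted boundary, and a vault that never moves to the cloud maps tokens back. A minimal illustration, not a production scheme; the record fields and the identifier value are invented:

```python
import secrets

class TokenVault:
    """Maps sensitive values to opaque tokens. The vault stays on-premises;
    only tokens travel to the cloud. Simplified illustration only."""
    def __init__(self):
        self._forward = {}   # value -> token
        self._reverse = {}   # token -> value

    def tokenize(self, value: str) -> str:
        if value not in self._forward:
            token = "tok_" + secrets.token_hex(8)
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

vault = TokenVault()
record = {"channel": 7, "operator": "555-01-4321"}   # fictitious identifier
# What would be sent to the cloud for post-processing:
safe_record = {**record, "operator": vault.tokenize(record["operator"])}
print(vault.detokenize(safe_record["operator"]) == record["operator"])
```

Because the same value always maps to the same token, cloud-side post-processing can still group and join records by the tokenized field without ever seeing the underlying sensitive value.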
|
9 |
Monitoring a analýza uživatelů systémem DLP / Monitoring and Analysis of Users Using DLP System. Pandoščák, Michal, January 2011.
The purpose of this master's thesis is to study the monitoring and analysis of users with a DLP (Data Loss Prevention) system: the definition of internal and external attacks, a description of the main parts of a DLP system, policy management, monitoring of user activities, and classification of data content. The thesis explains the difference between contextual and content analysis and describes their techniques. It covers the fundamentals of network and endpoint monitoring and describes the processes and user activities that may cause a data leak. Finally, we developed an endpoint protection agent that monitors activities on a terminal station.
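The content-versus-context distinction the thesis draws can be sketched as follows: content analysis inspects the payload bytes for sensitive patterns, while contextual analysis looks at metadata about the transfer (who, where, when). The detection rules, patterns, and metadata fields below are invented for illustration and are not the thesis's actual rule set:

```python
import re

# Content analysis: inspect the payload itself for sensitive patterns.
CONTENT_RULES = {
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def content_findings(payload: str) -> set:
    """Which content rules match the payload bytes."""
    return {name for name, rx in CONTENT_RULES.items() if rx.search(payload)}

def contextual_findings(metadata: dict) -> set:
    """Contextual analysis: look at who/where/when, not at the bytes."""
    findings = set()
    if metadata.get("destination") == "external" and metadata.get("after_hours"):
        findings.add("suspicious_transfer_context")
    return findings

payload = "invoice for card 4111 1111 1111 1111, contact ops@example.com"
metadata = {"destination": "external", "after_hours": True}
print(sorted(content_findings(payload) | contextual_findings(metadata)))
```

A real endpoint agent would combine both signals: content matches alone flag what is leaving, while contextual matches flag how and when it leaves, and policy decides which combinations to block.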
|
10 |
Návrh koncepce prevence ztráty dat / Design of conception of data loss prevention. Brejla, Tomáš, January 2011.
This work deals with creating a concept for implementing processes and software tools designed to prevent sensitive data from leaking out of an organization's infrastructure. It consists of three key parts. The first describes the theoretical basis of the work: what data loss prevention is, where it comes from, why it is necessary, and what its goals are. It also describes how DLP fits into the broader area of corporate ICT security. The risks associated with leakage of sensitive data are defined, along with possible solutions and the problems those solutions bring. The first part also analyzes the current state of data loss prevention in organizations: they are divided by size, the most common weaknesses and risks are listed for each group, and it is evaluated how organizations currently address data loss prevention both procedurally and in terms of software tools. The second part focuses directly on the software tools. It characterizes their operating principles, explains their network architecture, describes and evaluates current trends in the development of DLP tools, and outlines possible further development. The tools are divided into categories by the features they offer and by how those categories cover organizations' needs. At the end of the second part, software solutions from leading vendors on the market are compared against practical experience, focusing on their strengths and weaknesses. The third part presents the core contribution. It joins the two previous sections, resulting in an overall concept for implementing data loss prevention, broken down along several dimensions: processes, time, and company size.
The third part begins by describing what precedes a DLP implementation and what organizations should watch out for, and defines how organizations should set their expectations so that the project remains manageable. The main point is a step-by-step procedure for DLP implementation, from creating a strategy and choosing a solution through to deploying that solution and the related processes. The end of the third part deals with the legal and personnel issues closely related to DLP implementation: recommendations are made based on an analysis of legal standards and added to a framework approach for HR staff. Finally, the benefits of implementing data loss prevention are named, and the concept is summarized as a list of best practices.
|