81 |
Privacy enforcement with data owner-defined policies. Scheffler, Thomas. January 2013.
This thesis proposes a privacy protection framework for the controlled distribution and use of personal private data. The framework is based on the idea that privacy policies can be set directly by the data owner and can be automatically enforced against the data user.
Data privacy continues to be a very important topic: our dependency on electronic communication keeps growing, and private data is shared between multiple devices, users and locations. The growing amount and the ubiquitous availability of personal private data increase the likelihood of data misuse.
Early privacy protection techniques, such as anonymous email and payment systems, focused on data avoidance and the anonymous use of services. They did not take into account that data sharing cannot be avoided when people participate in electronic communication scenarios that involve social interactions. This leads to a situation where data is shared widely and uncontrollably, and in most cases the data owner has no control over the further distribution and use of his or her personal private data.
Previous efforts to integrate privacy awareness into data processing workflows have focused on extending existing access control frameworks with privacy-aware functions or have analysed specific individual problems, such as the expressiveness of policy languages. So far, very few implementations of integrated privacy protection mechanisms exist whose effectiveness for privacy protection can be studied. Second-level issues that stem from the practical application of the implemented mechanisms, such as usability, lifetime data management and changes in trustworthiness, have received very little attention so far, mainly because they require actual implementations to be studied.
Most existing privacy protection schemes silently assume that it is the privilege of the data user to define the contract under which personal private data is released. Such an approach simplifies policy management and policy enforcement for the data user, but leaves the data owner with a binary decision to submit or withhold his or her personal data based on the provided policy.
We wanted to empower the data owner to express his or her privacy preferences through privacy policies that follow the so-called Owner-Retained Access Control (ORAC) model. ORAC was proposed by McCollum et al. as an alternative access control mechanism that leaves the authority over access decisions with the originator of the data.
The data owner is given control over the release policy for his or her personal data and can set permissions or restrictions according to individually perceived trust values. Such a policy needs to be expressed in a coherent way and must allow deterministic policy evaluation by different entities.
The privacy policy also needs to be communicated from the data owner to the data user, so that it can be enforced. Data and policy are stored together as a Protected Data Object that follows the Sticky Policy paradigm as defined by Mont et al. and others.
We developed a unique policy combination approach that takes usability aspects of the creation and maintenance of policies into consideration. Our privacy policy consists of three parts: a Default Policy provides basic privacy protection if no specific rules have been entered by the data owner; an Owner Policy allows the customisation of the default policy by the data owner; and a so-called Safety Policy guarantees that the data owner cannot specify disadvantageous policies which, for example, exclude him or her from further access to the private data. The combined evaluation of these three policy parts yields the necessary access decision.
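To make the combination concrete, the following is a minimal sketch of how the three policy parts could be evaluated in priority order. All type and method names are illustrative assumptions, not code from the thesis; the deny-by-default fallback is likewise an assumption.

```java
import java.util.List;

// Illustrative sketch of the three-part policy combination described above.
// Names (PolicyPart, Effect, etc.) are hypothetical, not from the thesis.
enum Effect { PERMIT, DENY, NOT_APPLICABLE }

interface PolicyPart {
    Effect evaluate(String subject, String action);
}

final class CombinedPrivacyPolicy {
    private final PolicyPart safetyPolicy;   // highest priority: rights that must always be preserved
    private final PolicyPart ownerPolicy;    // owner-defined customisations
    private final PolicyPart defaultPolicy;  // baseline protection if the owner entered no rules

    CombinedPrivacyPolicy(PolicyPart safety, PolicyPart owner, PolicyPart def) {
        this.safetyPolicy = safety;
        this.ownerPolicy = owner;
        this.defaultPolicy = def;
    }

    // First applicable part wins, in priority order: safety > owner > default.
    Effect decide(String subject, String action) {
        for (PolicyPart part : List.of(safetyPolicy, ownerPolicy, defaultPolicy)) {
            Effect e = part.evaluate(subject, action);
            if (e != Effect.NOT_APPLICABLE) return e;
        }
        return Effect.DENY; // assumed deny-by-default when no rule matches
    }
}
```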
The automatic enforcement of privacy policies in our protection framework is supported by a reference monitor implementation. We started our work with the development of a client-side protection mechanism that allows the enforcement of data-use restrictions after private data has been released to the data user. The client-side enforcement component for data-use policies is based on a modified Java Security Framework. Privacy policies are translated into corresponding Java permissions that can be automatically enforced by the Java Security Manager.
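As an illustration of the enforcement idea described above, the sketch below shows how a data-use restriction could be expressed as a custom Java permission and checked through the Security Manager (the standard, pre-Java-17 mechanism; it is deprecated in recent JDKs). The class names and the object-store helper are assumptions for illustration, not the thesis's actual permission classes.

```java
import java.security.BasicPermission;

// Hypothetical permission type for owner-defined data-use rules.
class PrivateDataPermission extends BasicPermission {
    PrivateDataPermission(String dataId) {
        super(dataId);
    }
}

class ProtectedDataStore {
    // Guarded accessor: the Security Manager consults the installed policy
    // (derived from the owner's privacy policy) before releasing the data.
    byte[] read(String dataId) {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            sm.checkPermission(new PrivateDataPermission(dataId));
        }
        return loadFromObjectStore(dataId); // hypothetical helper
    }

    private byte[] loadFromObjectStore(String dataId) {
        return new byte[0]; // placeholder for the Protected Data Object payload
    }
}
```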
When we later extended our work to implement server-side protection mechanisms, we found several drawbacks to enforcing privacy through the Java Security Framework. We solved these problems by extending our reference monitor design to use Aspect-Oriented Programming (AOP) and the Java Reflection API to intercept data accesses in existing applications, providing a way to enforce data owner-defined privacy policies for business applications.
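The paragraph above describes the server-side interception approach only at a high level; the following is a minimal sketch of how such an interception could look with AspectJ, a common AOP framework for Java. The pointcut expression, class names and the PolicyEngine helper are illustrative assumptions, not the thesis's actual implementation.

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Sketch of server-side enforcement: advice wraps data accessors and consults
// the sticky policy before letting the original call proceed.
@Aspect
public class PrivacyEnforcementAspect {

    // Intercept every getter on (hypothetical) PrivateData objects.
    @Around("execution(* com.example.data.PrivateData.get*(..))")
    public Object enforcePolicy(ProceedingJoinPoint jp) throws Throwable {
        String attribute = jp.getSignature().getName();
        if (!PolicyEngine.permits(currentUser(), attribute)) { // assumed helper
            throw new SecurityException("Privacy policy denies access to " + attribute);
        }
        return jp.proceed(); // policy permits: run the original accessor
    }

    private String currentUser() {
        return System.getProperty("user.name"); // placeholder identity source
    }
}

class PolicyEngine {
    static boolean permits(String user, String attribute) {
        // Placeholder: evaluate the sticky policy attached to the data object.
        return false;
    }
}
```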
/ This dissertation developed a framework for enforcing policies for the protection of private data, built on the premise that these policies are created directly by the owners of the data and can be enforced automatically.
The protection of private data is a very important topic in electronic communication, and one that is gaining further importance through advancing device interconnection and the availability and use of private data in online services.
In the past, various techniques for the protection of private data have been developed: so-called Privacy Enhancing Technologies. Many of these technologies work on the principles of data minimisation and anonymisation and thus stand in opposition to modern network use in social media. This leads to a situation in which private data is distributed and used extensively without the data owner being able to exercise targeted control over the distribution and use of his or her private data.
Existing policy-based data protection techniques generally assume that the user of the data, not its owner, specifies the policies for handling private data. This approach simplifies the management and enforcement of access restrictions for the data user, but leaves the data owner only the alternatives of accepting the data user's policies or not passing on any data at all.
Our approach was therefore to strengthen the interests of the data owner by giving him or her the ability to formulate his or her own policies. The access control model used for this is also known as Owner-Retained Access Control (ORAC) and was formulated by McCollum et al. in 1990. The basic principle of this model is that the authority over access decisions always remains with the originator of the data.
Two challenges arise from this approach. First, the owner of the data, the data owner, must be put in a position to formulate meaningful and correct policies for the handling of his or her data. Since data owners are ordinary computer users, it must be assumed that they will also make mistakes when creating policies.
We solved this problem by splitting the privacy policies into three separate parts with different priorities. The part with the lowest priority defines basic protection properties. The data owner can override these properties with his or her own rules of medium priority. In addition, a part containing safety rules of high priority ensures that certain access rights always remain preserved.
The second challenge consists in the targeted communication of the policies and their enforcement against the data user.
To make the policies known to the data user, we use so-called Sticky Policies. This means that we attach the policies to the data to be protected using a suitable encoding, so that they can be referred to at any time and the owner's privacy requirements are preserved even when the data is distributed.
For the enforcement of the policies on the data user's system we developed two different approaches. We built a so-called reference monitor, which controls every access to the private data and decides, on the basis of the rules stored in the Sticky Policy, whether or not the data user is granted access to the data.
This reference monitor was implemented, on the one hand, as a client-side solution that builds on the security architecture of the Java programming language. On the other hand, a solution for servers was developed which can control access to specific methods of a program with the help of Aspect-Oriented Programming.
In the client-side reference monitor, privacy policies are translated into Java permissions and automatically enforced against arbitrary applications by the Java Security Manager. Since this approach requires an application restart whenever data with a different privacy policy is accessed, a different approach was chosen for the server-side reference monitor. Using the Java Reflection API and methods of Aspect-Oriented Programming, we were able to intercept data accesses in existing applications and to allow or deny access only after the privacy policy had been checked. Both solutions were tested for performance and represent an extension of the previously known techniques for the protection of private data.
|
82 |
Data ownership and interoperability for a decentralized social semantic web. SAMBRA, Andrei Vlad. 19 November 2013.
Ensuring personal data ownership and interoperability for decentralized social Web applications is currently a debated topic, especially when taking into consideration the aspects of privacy and access control. Since the user's data are such an important asset of the current business models for most social Websites, companies have no incentive to share data among each other or to offer users real ownership of their own data in terms of control and transparency of data usage. We have concluded therefore that it is important to improve the social Web in such a way that it allows for viable business models while still providing increased data ownership and data interoperability compared to the current situation. In this regard, we have focused our research on three different topics: identity, authentication and access control. First, we tackle the subject of decentralized identity by proposing a new Web standard called "Web Identity and Discovery" (WebID), which offers a simple and universal identification mechanism that is distributed and openly extensible. Next, we move to the topic of authentication, where we propose WebID-TLS, a decentralized authentication protocol that enables secure, efficient and user-friendly authentication on the Web by allowing people to log in using client certificates and without relying on Certification Authorities. We also extend the WebID-TLS protocol, offering delegated authentication and access delegation. Finally, we present our last contribution, the Social Access Control Service, which serves to protect the privacy of Linked Data resources generated by users (e.g. profile data, wall posts, conversations, etc.) by applying two social metrics: the "social proximity distance" and "social contexts".
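To make the WebID-TLS idea more tangible, here is a minimal Java sketch of the core verification step: instead of validating a CA chain, the server dereferences the WebID URI carried in the certificate's subjectAlternativeName and checks that the certificate's public key is published in that profile. The fetchProfileKeys helper is an assumed placeholder for fetching and parsing the RDF profile document; real implementations typically compare the RSA modulus and exponent rather than relying on PublicKey equality.

```java
import java.net.URI;
import java.security.PublicKey;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;
import java.util.Set;

// Sketch of the core WebID-TLS check; helper names are assumptions.
public class WebIdTlsVerifier {

    public boolean verify(X509Certificate clientCert) throws Exception {
        URI webId = extractWebId(clientCert);
        if (webId == null) return false;
        Set<PublicKey> publishedKeys = fetchProfileKeys(webId); // assumed helper
        return publishedKeys.contains(clientCert.getPublicKey());
    }

    private URI extractWebId(X509Certificate cert) throws Exception {
        Collection<List<?>> sans = cert.getSubjectAlternativeNames();
        if (sans == null) return null;
        for (List<?> san : sans) {
            // Type 6 = uniformResourceIdentifier in the SAN extension.
            if ((Integer) san.get(0) == 6) return URI.create((String) san.get(1));
        }
        return null;
    }

    private Set<PublicKey> fetchProfileKeys(URI webId) {
        // Placeholder: fetch the profile and parse cert:modulus / cert:exponent triples.
        return Set.of();
    }
}
```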
|
83 |
Secret sharing approaches for secure data warehousing and on-line analysis in the cloud. Attasena, Varunya. 22 September 2015.
Cloud business intelligence is an increasingly popular solution to deliver decision support capabilities via elastic, pay-per-use resources. However, data security issues are one of the top concerns when dealing with sensitive data. Many security issues are raised by data storage in a public cloud, including data privacy, data availability, data integrity, data backup and recovery, and data transfer safety. Moreover, security risks may come from both cloud service providers and intruders, while cloud data warehouses should be both highly protected and effectively refreshed and analyzed through on-line analysis processing.
Hence, users seek secure data warehouses at the lowest possible storage and access costs within the pay-as-you-go paradigm. In this thesis, we propose two novel approaches for securing cloud data warehouses by base-p verifiable secret sharing (bpVSS) and flexible verifiable secret sharing (fVSS), respectively. Secret sharing encrypts and distributes data over several cloud service providers, thus enforcing data privacy and availability. bpVSS and fVSS address five shortcomings in existing secret sharing-based approaches. First, they allow on-line analysis processing. Second, they enforce data integrity with the help of both inner and outer signatures. Third, they help users minimize the cost of cloud warehousing by limiting global share volume; moreover, fVSS balances the load among service providers with respect to their pricing policies. Fourth, fVSS improves secret sharing security by imposing a new constraint: no group of cloud service providers can hold enough shares to reconstruct or break the secret. Fifth, fVSS allows refreshing the data warehouse even when some service providers fail. To evaluate bpVSS' and fVSS' efficiency, we theoretically study the factors that impact our approaches with respect to security, complexity and monetary cost in the pay-as-you-go paradigm. Moreover, we validate the relevance of our approaches experimentally with the Star Schema Benchmark and demonstrate their superiority to related, existing methods.
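As background for the secret-sharing machinery that bpVSS and fVSS build on, here is a minimal Shamir-style (k, n) sharing sketch over a prime field. It illustrates only the basic share/reconstruct mechanism; the verifiability signatures, cost balancing and refresh features of bpVSS and fVSS are not shown, and all names are illustrative.

```java
import java.math.BigInteger;
import java.security.SecureRandom;
import java.util.Arrays;

// Minimal (k, n) secret sharing: split a secret into n shares so that any k
// shares reconstruct it, while fewer than k reveal nothing.
public class ShamirSketch {
    static final BigInteger P = BigInteger.probablePrime(128, new SecureRandom());

    static BigInteger[][] share(BigInteger secret, int k, int n) {
        SecureRandom rnd = new SecureRandom();
        BigInteger[] coeff = new BigInteger[k];
        coeff[0] = secret;                            // f(0) = secret
        for (int i = 1; i < k; i++) coeff[i] = new BigInteger(127, rnd);
        BigInteger[][] shares = new BigInteger[n][];  // pairs (x, f(x))
        for (int x = 1; x <= n; x++) {
            BigInteger y = BigInteger.ZERO;
            for (int i = k - 1; i >= 0; i--)          // Horner evaluation mod P
                y = y.multiply(BigInteger.valueOf(x)).add(coeff[i]).mod(P);
            shares[x - 1] = new BigInteger[]{BigInteger.valueOf(x), y};
        }
        return shares;
    }

    // Lagrange interpolation at x = 0 recovers the secret from any k shares.
    static BigInteger reconstruct(BigInteger[][] shares) {
        BigInteger secret = BigInteger.ZERO;
        for (int i = 0; i < shares.length; i++) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (int j = 0; j < shares.length; j++) {
                if (i == j) continue;
                num = num.multiply(shares[j][0].negate()).mod(P);
                den = den.multiply(shares[i][0].subtract(shares[j][0])).mod(P);
            }
            secret = secret.add(shares[i][1].multiply(num).multiply(den.modInverse(P))).mod(P);
        }
        return secret;
    }

    public static void main(String[] args) {
        BigInteger[][] s = share(BigInteger.valueOf(42), 3, 5);
        System.out.println(reconstruct(Arrays.copyOfRange(s, 0, 3))); // prints 42
    }
}
```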
|
84 |
Federated Learning for Natural Language Processing using Transformers (Evaluation of Federated Learning Applied to Transformers for Classification of Analyst Reports). Kjellberg, Gustav. January 2022.
The use of Machine Learning (ML) in business has increased significantly over the past years. Creating high-quality and robust models requires a lot of data, which is at times infeasible to obtain. As more people become concerned about their data being misused, data privacy regulation is increasingly strengthened: in 2018, the General Data Protection Regulation (GDPR) came into effect within the EU. Models trained on sensitive or personal data must obtain that data in accordance with regulations such as the GDPR. Another data-related issue is that enterprises who wish to collaborate on model building face problems when it requires them to share their private corporate data [36, 38]. In this thesis we investigate how one might overcome the issue of directly accessing private data when training ML models by employing Federated Learning (FL) [38]. The concept of FL is to allow several silos, i.e. separate parties, to train models with the same objective using their local data, and then to create a central model from the learned model parameters. The objective of the central model is to obtain the information learned by the separate models without ever accessing the raw data itself. This is achieved by averaging the separate models' weights into the central model. FL thus facilitates opportunities to train a model on large amounts of data from several sources, without the need of having access to the data itself. If one can create a model with this methodology that is not significantly worse than a model trained on the raw data, then positive effects such as strengthened data privacy, cross-enterprise collaboration and more could be attainable. In this work we have used a financial data set consisting of 25242 equity research reports, provided by Skandinaviska Enskilda Banken (SEB). Each report has a recommendation label, either Buy, Sell or Hold, making this a multi-class classification problem. To evaluate the feasibility of FL we fine-tune the pre-trained Transformer model AlbertForSequenceClassification [37] on the classification task. We create one baseline model using the entire data set and an FL model with different experimental settings, for which the data is distributed both uniformly and non-uniformly. The baseline model is used to benchmark the FL model. Our results indicate that the best FL setting suffers only a small reduction in performance: the baseline model achieves an accuracy of 83.5% compared to 82.8% for the best FL model setting. Further, we find that performance worsens as the number of clients increases, and that our FL model is not sensitive to non-uniform data distributions. All in all, we show that FL results in slightly worse generalisation compared to the baseline model, while strongly improving on data privacy, as the central model never accesses the clients' data.
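For concreteness, the sketch below shows the weight-averaging step at the heart of FL as described above: the central model is formed as a sample-size-weighted average of the clients' parameters (the plain average is the special case of equal sample counts). It is a toy Java illustration; the thesis's actual experiments fine-tune a Transformer inside a full FL framework.

```java
// Minimal federated-averaging sketch: combine locally trained weights into a
// central model without ever touching the clients' raw data.
public class FedAvgSketch {

    // clientWeights[c][i]: parameter i of client c; samples[c]: local dataset size.
    static double[] average(double[][] clientWeights, int[] samples) {
        int dims = clientWeights[0].length;
        double total = 0;
        for (int n : samples) total += n;
        double[] global = new double[dims];
        for (int c = 0; c < clientWeights.length; c++) {
            double share = samples[c] / total;       // client's fraction of the data
            for (int i = 0; i < dims; i++) {
                global[i] += share * clientWeights[c][i];
            }
        }
        return global;
    }

    public static void main(String[] args) {
        double[][] w = { {1.0, 2.0}, {3.0, 4.0} };
        int[] n = { 100, 300 };                       // non-uniform data split
        double[] g = average(w, n);
        System.out.printf("%.2f %.2f%n", g[0], g[1]); // 2.50 3.50
    }
}
```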
|
85 |
Improving the learning experience of decision support systems in entrepreneurship with 3D management simulation games. Gould, Olga. 12 April 2022.
Business simulation games are used in educational institutions and in various private- and public-sector industries to train students and employees in the principles of management and decision-making, providing a fail-safe environment and enabling learners to reflect on their simulation results. These games are generally advanced multiuser environments in which a user, or a group of users, has access to a virtual company for making business decisions. Some of these games are expensive and their licences are time-limited; typically, such a licence is only valid for the duration of the course. In general, these games are not available to the public as part of informal instructional courses.
In Canada, teaching informal courses involving data-driven decision making to immigrants and refugees, to prepare them for the challenges they might encounter as business owners, can be difficult, especially given language barriers and non-business backgrounds obtained outside of Canada. Furthermore, based on their decision-making styles, cognitive limitations, and past experiences, people may have an inaccurate perception of the problem or challenge they face. This can lead to poor decision making in the team they are part of and can therefore be reflected in the effectiveness of the organization as a whole.
The objective of this research is to enrich current teaching tools for decision-making processes in entrepreneurship courses for newcomers to Canada with a comprehensive, visual representation of the operational business problems involved in Business Intelligence and data analytics. More specifically, we designed and developed a 3D Business Simulation Game with randomized scenarios using modern technologies: Unreal Engine as the game engine, Adobe Fuse for character creation, Mixamo for character animation, and Substance Painter for the textures and materials of the assets.
The research was conducted with the participation of the students of the Business Creation and Project Management course at VIRCS (Victoria Immigrant and Refugee Centre Society), where we tested this game in each of the five units of the course. After designing, developing, and testing the 3D business simulation game, we conducted a comprehensive evaluation to investigate whether the decisions students made while playing were correct. We also evaluated whether they felt that the challenges were easier to understand, both as a team and individually, when they used the 3D business simulation game compared to only a written description of the problems.
The main results we obtained from our study are the following. After playing the business simulation game, students became more aware of the importance of making correct decisions in different business scenarios. They made sure that the whole team understood the problem, and they felt generally good about their understanding of the course content. We also noticed that when the animation was not part of the business simulation game, students seemed confused when following written instructions, which indicates that they depended on the animations for their decision-making.
We believe that, in some ways, the course and the 3D business simulation game we created for this research were a great opportunity to observe students becoming more confident in their future in Canada as entrepreneurs. We observed that, once the game had been used, the students became more participatory in class, discussion of the course material increased, and, in general, the students seemed to enjoy the course more.
|
86 |
Big Data in Student Data Analytics: Higher Education Policy Implications for Student Autonomy, Privacy, Equity, and Educational Value. Ham, Marcia Jean. January 2021.
No description available.
|
87 |
Popularizing implants: Exploring conditions for eliciting user adoption of digital implants through developers, enthusiasts and users. Ericsson Duffy, Mikael. January 2020.
Digital implants have become a new frontier for body hackers, technology enthusiasts and disruptive-innovation developers, who seek to provide this technology for themselves and for new users. This thesis has explored conditions for future user adoption of human body augmentation with digital implants. The conditions explored were mainly self-beneficial health optimization through technology, self-quantification, and convenience scenarios. Diffusion of Innovation theory, the Value-based Acceptance Model and research-through-design methods were applied. The process consisted of quantitative and qualitative data gathering and analysis, using interviews, surveys and iterative prototyping with evaluation. The results show mixed user attitudes towards implant usage, depending mainly on users' need for added benefits and on whether the user is a technology enthusiast actively using technology for self-beneficial gain or a casual everyday consumer of technology. Certain conditions could affect the adoption of implants into mainstream usage, mainly data privacy, regulation, convenience, self-quantification and health management. For implants to succeed as a mainstream technology, there need to be proper secure infrastructure, easy installation, and coordinated services that offer individual benefits of health or convenience, with high consumer confidence in the supported services, installation/removal, and devices. Several companies are working on offering such a service. To evaluate such a proposition, iterative prototypes were created to assess a health-management scenario as a streamlined consumer service, using a service design blueprint and a related interactive smartphone application prototype.
|
88 |
An Improved Utility Driven Approach Towards K-Anonymity Using Data Constraint Rules. Morton, Stuart Michael. 14 August 2013.
Indiana University-Purdue University Indianapolis (IUPUI) /
As medical data continues to transition to electronic formats, opportunities arise for researchers to use this microdata to discover patterns and increase knowledge that can improve patient care. Now more than ever, it is critical to protect the identities of the patients contained in these databases. Even after removing obvious “identifier” attributes, such as social security numbers or first and last names, that clearly identify a specific person, it is possible to join “quasi-identifier” attributes from two or more publicly available databases to identify individuals.
K-anonymity is an approach that has been used to ensure that no one individual can be distinguished within a group of at least k individuals. However, the majority of the proposed approaches implementing k-anonymity have focused on improving the efficiency of algorithms implementing k-anonymity; less emphasis has been put towards ensuring the “utility” of anonymized data from a researcher's perspective. We propose a new data utility measurement, called the research value (RV), which extends existing utility measurements by employing data constraint rules that are designed to improve the effectiveness of queries against the anonymized data.
To anonymize a given raw dataset, two algorithms are proposed that use predefined generalizations provided by the data content expert and their corresponding research values to assess an attribute's data utility as it is generalizing the data to ensure k-anonymity. In addition, an automated algorithm is presented that uses clustering and the RV to anonymize the dataset. All of the proposed algorithms scale efficiently when the number of attributes in a dataset is large.
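To ground the terminology, the sketch below shows the basic k-anonymity test that any such algorithm must satisfy, together with one toy generalization step (exact age to a decade range). The RV-guided selection of generalizations, which is the contribution described above, is not reproduced; all names and data are illustrative.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Records are reduced to their quasi-identifier tuples; the dataset is
// k-anonymous only if every identical tuple occurs at least k times.
public class KAnonymityCheck {

    static boolean isKAnonymous(List<List<String>> quasiIdentifiers, int k) {
        Map<List<String>, Long> groupSizes = quasiIdentifiers.stream()
                .collect(Collectors.groupingBy(q -> q, Collectors.counting()));
        return groupSizes.values().stream().allMatch(size -> size >= k);
    }

    // One example generalization step: replace an exact age with a decade range.
    static String generalizeAge(int age) {
        int low = (age / 10) * 10;
        return low + "-" + (low + 9);
    }

    public static void main(String[] args) {
        List<List<String>> raw = List.of(
                List.of("34", "46202"), List.of("36", "46202"), List.of("37", "46202"));
        System.out.println(isKAnonymous(raw, 3)); // false: three distinct tuples
        List<List<String>> generalized = raw.stream()
                .map(r -> List.of(generalizeAge(Integer.parseInt(r.get(0))), r.get(1)))
                .collect(Collectors.toList());
        System.out.println(isKAnonymous(generalized, 3)); // true: all become ["30-39", "46202"]
    }
}
```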
|
89 |
Practice-Oriented Cybersecurity Training Framework. Podila, Laxmi Mounika. January 2020.
No description available.
|
90 |
Investigating the Impact of Lean Six Sigma Principles on Establishing and Maintaining Data Governance Systems in SMEs: An Exploratory Study Using Grounded Theory and ISM Approach. Alduraibi, Manal. 29 April 2023.
Data Governance and Data Privacy are critical aspects of organizational management that are relevant across all organizational scales. However, this research focused specifically on the significance of Data Governance and Data Privacy in Small and Medium Enterprises (SMEs). While maintaining these systems is paramount in all organizations, the challenges SMEs face in doing so are greater because of their limited resources. These challenges include potential errors such as data leaks, the use of corrupted or insufficient data, and the difficulty of identifying clear roles and responsibilities for data handling. To address these challenges, this research investigated the impact of utilizing Lean Six Sigma (LSS) tools and practices to overcome the anticipated gaps and challenges in SMEs. The qualitative methodology utilized is a grounded theory design, chosen due to the limited understanding of the best LSS practices for achieving data governance and data privacy in SMEs and of how LSS can improve the adoption of data governance with respect to privacy in SMEs. Data were collected using semi-structured interview questions that were reviewed by an expert panel and pilot tested. The sampling method included purposive, snowball, and theoretical sampling, resulting in 20 participants being selected for interviews. Open, axial, and selective coding were performed, resulting in the development of a grounded theory. The obtained data were imported into NVivo, a qualitative analysis software program, to compare responses, categorize them into themes and groups, and develop a conceptual framework for Data Governance and Data Privacy. An iterative data collection and analysis approach was followed to ensure that all aspects were considered. The grounded-theory analysis yielded the themes used to generate a theory from the participants' descriptions of LSS, SMEs, data governance, and data privacy. Finally, the Interpretive Structural Modeling (ISM) technique was applied to identify the relationships between the concepts and factors resulting from the grounded theory; it helps arrange the criteria into levels, draw the relationships in a flowchart, and provide valuable insights to the researcher.
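For readers unfamiliar with ISM, the sketch below illustrates its core computation: building the transitive reachability matrix over the identified factors and partitioning them into levels for the flowchart. The example matrix and factor indices are illustrative only; they are not the study's actual factors.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// ISM level partitioning: a factor is top-level when every factor it can
// reach (among those still unplaced) can also reach it back.
public class IsmSketch {

    static List<List<Integer>> levelPartition(boolean[][] adj) {
        int n = adj.length;
        boolean[][] reach = new boolean[n][];
        for (int i = 0; i < n; i++) { reach[i] = adj[i].clone(); reach[i][i] = true; }
        for (int k = 0; k < n; k++)                  // Warshall transitive closure
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    reach[i][j] |= reach[i][k] && reach[k][j];

        List<List<Integer>> levels = new ArrayList<>();
        Set<Integer> remaining = new TreeSet<>();
        for (int i = 0; i < n; i++) remaining.add(i);
        while (!remaining.isEmpty()) {
            List<Integer> level = new ArrayList<>();
            for (int i : remaining) {
                boolean topLevel = true;
                for (int j : remaining) {
                    // i reaches j but j does not reach i back => i is not top-level
                    if (reach[i][j] && !reach[j][i]) { topLevel = false; break; }
                }
                if (topLevel) level.add(i);
            }
            levels.add(level);
            remaining.removeAll(level);
        }
        return levels;
    }

    public static void main(String[] args) {
        // Toy influence chain 0 -> 1 -> 2: factor 2 comes out on top.
        boolean[][] adj = { {false, true, false}, {false, false, true}, {false, false, false} };
        System.out.println(levelPartition(adj)); // [[2], [1], [0]]
    }
}
```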
|