51

Fighting the Biggest Lie on the Internet : Improving Readership of Terms of Service and Privacy Policies

Ziegenbein, Marius-Lukas January 2022 (has links)
When joining a new service, in order to access its features, users are often required to accept the terms of service and privacy policy. However, the readership of these documents is mostly non-existent, leaving an information asymmetry: an imbalance of knowledge between the two parties. Due to this, users sacrifice their online data privacy without being aware of the consequences. The purpose of this work is to investigate the readership of terms of service and privacy policies among users of social media services. We implemented a prototype called ‘ShareIt’, which resembles a photo-sharing platform, to gain insight into readership, behavior, and the effectiveness of our adjusted presentations of terms of service and privacy policies with regard to readership and comprehension. We conducted a survey experiment using the prototype with 31 participants and concluded that 80.6% of our participants did not spend more than ten seconds on our terms of service and privacy policy. The observed behavior suggests that social media users are accustomed to sharing information on the internet, which, in addition to their trust in online services, leads to the aforementioned low readership. We presented adjustments to the presentation of terms of service and privacy policies which showed a slight tendency toward higher engagement in comparison to the current way of accessing these documents. Due to the low readership observed among our participants, however, this result remains debatable and needs further investigation.
52

Understanding Data Practices in Private Corporations : Analysis of Privacy Policies, Cookies Statements and “Dark Patterns”

Mendes, Débora January 2022 (has links)
Introduction: We analyse the privacy policies of 15 private corporations to understand whether the data handling practices – data collection, storage, and sharing – described in the policies are ethical or unethical. The data we leave behind when we use the Internet are crucial for corporations: they provide valuable insights into our lives, thus helping corporations improve targeted marketing campaigns and increase their revenue. Method: Extensive literature review of peer-reviewed articles, written between 1993 and 2021, to examine how theoretical perspectives and empirical findings evolved over time, combined with empirical research to analyse the privacy policies and “dark patterns” of 15 companies. The companies were chosen at random and belong to different sectors to give a broader understanding of current privacy and data handling practices. Analysis: Discourse analysis of the privacy policies to evaluate the type of language used, whether it is clear and easy to understand, and whether the policy informs users about how their data are collected, shared, and stored; as well as a visual analysis to determine whether the company implements “dark patterns”. Results: The results indicate that most privacy policies use misleading terms, are not fully transparent about the company’s data handling practices, and often implement “dark patterns” to try to influence users’ decisions. Conclusion: Most companies have privacy policies available on their websites due to a clear influence of the GDPR legislation; however, there appears to be a conflicting relationship between wanting to comply with the GDPR and wanting to gather as much information as possible.
53

THE ROLE OF INTERFACE DESIGN AS AN ENABLER IN THE COMMUNICATION OF PERSONAL DATA PROCESSING

ANA LUIZA CASTRO GERVAZONI 30 October 2023 (has links)
The use of artificial intelligence models is changing the relationship between organizations and consumers, and the volume of digital products dependent on personal data is growing every day. Privacy policies are the primary tool for informing citizens about how their information will be handled by the companies with which they interact. However, the interfaces of these instruments currently do not communicate their information objectively. The present study demonstrates that applying design guidelines to privacy policies promotes a more satisfactory experience and faster acquisition of information from their content by users. The study methodology encompassed a bibliographical review, documentary research, adaptation of the Internet Users’ Information Privacy Concerns scale, usability testing, and content analysis. Literature from law and design was related to identify legal requirements that could be addressed more effectively through design, the participants’ level of privacy concern was assessed, and a comparative usability test was conducted. A replica of Facebook’s policy was compared to a new proposal that included elements representing design guidelines. The data showed a reduction in the time to locate information and in the error rate among users who accessed the new proposal, as well as a higher frequency of positive statements about this version. This research enhances the understanding of how interface design influences the construction of these instruments by showing that following best practices in this field facilitates information acquisition.
54

Artificial Integrity: Data Privacy and Corporate Responsibility in East Africa

Hansson, Ebba January 2023 (has links)
While digital connectivity in East Africa is quickly increasing, the region is underregulated when it comes to data protection, and many existing laws are more state-interest-focused than human-rights-based. When comprehensive regulations are not in place, greater regulatory pressure is put on the actors operating in the tech market. Theoretically and conceptually, this accountability can be described through models such as Corporate Social Responsibility (CSR) and Corporate Digital Responsibility (CDR). Organisations use the two frameworks to map and manage their impact on society from an economic, environmental, and societal perspective. While CSR addresses these effects from a more general point of view, CDR has recently emerged in the business ethics discourse to address the ethical considerations arising from the exponential growth of digital technologies and data. Through a multiple case study design, the main objective of this study was to provide practical insight into how actors manage data privacy-related issues in East Africa. A further aim was to evaluate the existing barriers that prevent the actors from fully implementing higher data responsibility ambitions. The results reveal that the observed actors are aware of the existing risks and mature enough to develop a comprehensive data responsibility agenda. However, there appears to be a gap between developing the policies and implementing them in practice, which can be explained by the lack of context-adjusted approaches to the CSR/CDR-related guidelines and actions.
55

Anonymizing Faces without Destroying Information

Rosberg, Felix January 2024 (has links)
Anonymization is a broad term, meaning that personal data, or rather data that identifies a person, is redacted or obscured. In the context of video and image data, the most palpable information is the face. Faces barely change compared to other aspects of a person, such as clothes, and we as people already have a strong sense for recognizing faces. Computers are also adroit at recognizing faces, with facial recognition models being exceptionally powerful at identifying and comparing them. It is therefore generally considered important to obscure the faces in video and image data when aiming to keep it anonymized. Traditionally this is done through blurring or masking, but this destroys useful information such as eye gaze, pose, expression, and the fact that it is a face at all. This is a particular issue because our society today is data-driven in many aspects. One obvious example is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Due to the data hunger of deep learning, in conjunction with society’s call for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), anonymization that preserves useful information becomes important. This Thesis investigates the potential and possible limitations of anonymizing faces without destroying the aforementioned useful information. The base approach is face swapping and face manipulation, where the current research focuses on changing the face (or identity) while keeping the original attribute information, all while being incorporated and consistent in an image and/or video. Specifically, this Thesis demonstrates how target-oriented and subject-agnostic face swapping methodologies can be utilized for realistic anonymization that preserves attributes. Through this, the Thesis presents several approaches that are: 1) controllable, meaning the proposed models do not naively change the identity; the kind and magnitude of identity change are adjustable and thus tunable to guarantee anonymization; 2) subject-agnostic, meaning the models can handle any identity; and 3) fast, meaning the models run efficiently and thus have the potential of running in real time. The end product is an anonymizer that achieved state-of-the-art performance on identity transfer, pose retention, and expression retention while providing realism. Apart from identity manipulation, the Thesis demonstrates potential security issues, specifically reconstruction attacks, in which a bad-actor model learns convolutional traces/patterns in the anonymized images in such a way that it is able to completely reconstruct the original identity. The bad-actor network can do this with simple black-box access to the anonymization model, by constructing a pair-wise dataset of unanonymized and anonymized faces. To alleviate this issue, different defense measures that disrupt the traces in the anonymized images were investigated. The main takeaway is that what qualitatively looks convincing at hiding an identity is not necessarily so, which makes robust quantitative evaluations important.
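The pair-wise dataset construction behind the reconstruction attack described above can be illustrated with a minimal sketch. The `anonymize` callable stands in for black-box query access to an arbitrary anonymization model; the function name and file layout are assumptions for illustration, not the thesis's actual code.

```python
from pathlib import Path
from PIL import Image

def build_attack_pairs(face_dir: str, out_dir: str, anonymize):
    """Query a black-box anonymizer to build (original, anonymized) pairs.

    `anonymize` is any callable mapping a PIL image to its anonymized
    version; a bad actor only needs this query access, not the model itself.
    """
    out = Path(out_dir)
    (out / "original").mkdir(parents=True, exist_ok=True)
    (out / "anonymized").mkdir(parents=True, exist_ok=True)

    pairs = []
    for i, path in enumerate(sorted(Path(face_dir).glob("*.png"))):
        original = Image.open(path).convert("RGB")
        anonymized = anonymize(original)          # black-box query
        original.save(out / "original" / f"{i:06d}.png")
        anonymized.save(out / "anonymized" / f"{i:06d}.png")
        pairs.append((original, anonymized))
    # The pairs can then supervise an image-to-image model that learns to
    # invert the anonymizer from the traces it leaves in its outputs.
    return pairs
```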
56

Testing Privacy and Security of Voice Interface Applications in the Internet of Things Era

Shafei, Hassan, 0000-0001-6844-5100 04 1900 (has links)
Voice User Interfaces (VUI) are rapidly gaining popularity, revolutionizing user interaction with technology through widespread adoption in devices such as desktop computers, smartphones, and smart home assistants, thanks to significant advancements in voice recognition and processing technologies. Over a hundred million users now utilize these devices daily, and smart home assistants have been sold in massive numbers, owing to their ease and convenience in controlling a diverse range of smart devices within the home IoT environment through the power of voice, such as controlling lights, heating systems, and setting timers and alarms. VUI enables users to interact with IoT technology and issue a wide range of commands across various services using their voice, bypassing traditional input methods like keyboards or touchscreens. With ease, users can inquire in natural language about the weather, the stock market, and online shopping, and access various other types of general information. However, as VUI becomes more integrated into our daily lives, it brings to the forefront issues related to security, privacy, and usability. Concerns such as the unauthorized collection of user data, the potential for recording private conversations, and challenges in accurately recognizing and executing commands across diverse accents, leading to misinterpretations and unintended actions, underscore the need for more robust methods to test and evaluate VUI services. In this dissertation, we delve into voice interface testing, the evaluation of privacy and security in VUI applications, the assessment of VUI proficiency in handling diverse accents, and an investigation into access control in multi-user environments. We first study the privacy violations of the VUI ecosystem. We introduce the definition of the VUI ecosystem, in which users must connect voice apps to corresponding services and mobile apps for them to function properly. The ecosystem can also involve multiple voice apps developed by the same third-party developers. We explore the prevalence of voice apps with corresponding services in the VUI ecosystem, assessing the landscape of privacy compliance among Alexa voice apps and their companion services, and we developed a testing framework for this ecosystem. We present the first study conducted on the Alexa ecosystem that focuses specifically on voice apps with account linking. Our framework analyzes the privacy policies of these voice apps together with those of their companion services, or the privacy policies of multiple voice apps published by the same developers. Using machine learning techniques, the framework automatically extracts data types related to data collection and sharing from these privacy policies, allowing for a comprehensive comparison. Next, to conduct privacy violation assessments, researchers study voice apps' behavior. Extracting this behavior requires interacting with the voice apps, where pre-defined utterances are input into a simulator to simulate user interaction. The set of pre-defined utterances is extracted from the skill's web page on the skill store. However, the accuracy of the testing analysis depends on the quality of the extracted utterances: an utterance or interaction that is not captured by the extraction process will not be detected, leading to inaccurate privacy assessment. Therefore, we revisited the utterance extraction techniques used by prior works to study skills' behavior for privacy violations.
We focused on analyzing the effectiveness and limitations of existing utterance extraction techniques and proposed a new technique that improves on them by taking the union of these techniques and adding human interaction. Our proposed technique uses a small set of human interactions to record the missing utterances, then expands that to test a more extensive set of voice apps. We also tested VUI with various accents by designing a testing framework that evaluates VUI on different accents, to assess how well VUI implemented in smart speakers caters to a diverse population. Recruiting individuals with different accents and instructing them to interact with a smart speaker while adhering to specific scripts is difficult. Thus, we proposed a framework known as AudioAcc, which facilitates evaluating VUI performance across diverse accents using YouTube videos. Our framework uses a filtering algorithm to ensure that the extracted spoken words used in constructing composite commands closely resemble natural speech patterns. Our framework is scalable; we conducted an extensive examination of VUI performance across a wide range of accents, encompassing both professional and amateur speakers. Additionally, we introduced a new metric called Consistency of Results (COR) to complement the standard Word Error Rate (WER) metric employed for assessing ASR systems. This metric enables developers to investigate and rewrite skill code based on the consistency of results, enhancing overall WER performance. Moreover, we looked into a special case related to the access control of VUI in multi-user environments. We proposed a framework for automated testing to explore access control weaknesses and determine whether the accessible data is of consequence. We used the framework to assess the effectiveness of voice access control mechanisms within multi-user environments, and we show that the convenience of using voice systems poses privacy risks as users' sensitive data becomes accessible. We identify two significant flaws within the access control mechanisms proposed by the voice system, which can be exploited to access users' private data. These findings underscore the need for enhanced privacy safeguards and improved access control systems within online shopping. We also offer recommendations to mitigate the risks associated with unauthorized access, shedding light on securing users' private data within voice systems. / Computer and Information Science
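As a rough illustration of the evaluation metrics mentioned in this abstract, the sketch below computes the standard Word Error Rate and a simple consistency measure over repeated transcriptions of the same command. The COR formula shown here (fraction of runs agreeing with the most common transcript) is an assumption for illustration, not necessarily the dissertation's exact definition.

```python
from collections import Counter

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def consistency_of_results(transcripts: list[str]) -> float:
    """Assumed COR: share of runs agreeing with the most common transcript."""
    top_count = Counter(t.strip().lower() for t in transcripts).most_common(1)[0][1]
    return top_count / len(transcripts)

# Example: the same command transcribed across three repeated runs.
runs = ["turn on the kitchen light", "turn on the kitchen light", "turn on the kitchen lights"]
print(word_error_rate("turn on the kitchen light", runs[2]))  # 0.2
print(consistency_of_results(runs))                           # ~0.67
```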
57

Secure and Privacy-aware Data Collection and Processing in Mobile Health Systems

Iwaya, Leonardo H January 2016 (has links)
Healthcare systems have assimilated information and communication technologies in order to improve the quality of healthcare and patients' experience at reduced costs. The increasing digitalization of people's health information, however, raises new threats regarding information security and privacy. Accidental or deliberate breaches of health data may lead to societal pressures, embarrassment and discrimination. Information security and privacy are paramount to achieve high-quality healthcare services and, further, to not harm individuals when providing care. With that in mind, we give special attention to the category of Mobile Health (mHealth) systems, that is, the use of mobile devices (e.g., mobile phones, sensors, PDAs) to support medical and public health. Such systems have been particularly successful in developing countries, taking advantage of the flourishing mobile market and the need to expand the coverage of primary healthcare programs. Many mHealth initiatives, however, fail to address security and privacy issues. This, coupled with the lack of specific legislation for privacy and data protection in these countries, increases the risk of harm to individuals. The overall objective of this thesis is to enhance knowledge regarding the design of security and privacy technologies for mHealth systems. In particular, we deal with mHealth Data Collection Systems (MDCSs), which consist of mobile devices for collecting and reporting health-related data, replacing paper-based approaches for health surveys and surveillance. This thesis consists of publications contributing to mHealth security and privacy in various ways: with a comprehensive literature review about mHealth in Brazil; with the design of a security framework for MDCSs (SecourHealth); with the design of an MDCS (GeoHealth); with the design of a Privacy Impact Assessment template for MDCSs; and with the study of ontology-based obfuscation and anonymisation functions for health data.
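The ontology-based obfuscation mentioned at the end of this abstract can be sketched as generalizing a sensitive value to an ancestor concept in a medical ontology. The toy hierarchy and function below are illustrative assumptions, not the thesis's actual implementation.

```python
# Toy is-a hierarchy: each concept maps to its more general parent.
PARENT = {
    "insulin-dependent diabetes": "diabetes mellitus",
    "diabetes mellitus": "endocrine disorder",
    "endocrine disorder": "chronic condition",
    "chronic condition": "health condition",
}

def obfuscate(concept: str, levels: int) -> str:
    """Generalize a concept by walking `levels` steps up the ontology."""
    for _ in range(levels):
        if concept not in PARENT:
            break  # already at the most general concept we know
        concept = PARENT[concept]
    return concept

print(obfuscate("insulin-dependent diabetes", 2))  # "endocrine disorder"
```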
58

Machine Learning with Reconfigurable Privacy on Resource-Limited Edge Computing Devices

Tania, Zannatun Nayem January 2021 (has links)
Distributed computing allows effective data storage, processing and retrieval, but it poses security and privacy issues. Sensors are the cornerstone of IoT-based pipelines, since they constantly capture data until it can be analyzed at the central cloud resources. However, these sensor nodes are often constrained by limited resources. Ideally, all the collected data features would be made private, but due to resource limitations this may not always be possible: making all the features private may cause overutilization of resources, which would in turn affect the performance of the whole system. In this thesis, we design and implement a system that is capable of finding the optimal set of data features to make private, given the device's maximum resource constraints and the desired performance or accuracy of the system. Using generalization techniques for data anonymization, we create user-defined injective privacy encoder functions to make each feature of the dataset private. Regardless of resource availability, some data features are defined by the user as essential features to make private; all other data features that may pose a privacy threat are termed non-essential features. We propose Dynamic Iterative Greedy Search (DIGS), a greedy search algorithm that takes the resource consumption of each non-essential feature as input and returns the optimal set of non-essential features that can be made private given the available resources; this set contains the features which consume the least resources. We evaluate our system on a Fitbit dataset containing 17 data features, 4 of which are essential private features for a given classification application. Our results show that we can provide 9 additional private features apart from the 4 essential features of the Fitbit dataset containing 1663 records. Furthermore, we can save 26.21% memory compared to making all the features private. We also test our method on a larger dataset generated with a Generative Adversarial Network (GAN). However, the chosen edge device, a Raspberry Pi, is unable to cater to the scale of the large dataset due to insufficient resources. Our evaluations using 1/8th of the GAN dataset result in 3 extra private features with up to 62.74% memory savings compared to making all data features private. Maintaining privacy not only requires additional resources, but also has consequences for the performance of the designed applications. However, we discover that privacy encoding has a positive impact on the accuracy of the classification model for our chosen classification application.
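The greedy selection idea behind DIGS can be sketched as follows: given per-feature resource costs and a remaining resource budget, repeatedly pick the cheapest non-essential feature that still fits. This is a simplified single-resource sketch under assumed inputs, not the thesis's actual algorithm, which also accounts for the desired application accuracy.

```python
def greedy_private_features(costs: dict[str, float], budget: float) -> list[str]:
    """Pick non-essential features to make private, cheapest first,
    until the remaining resource budget is exhausted."""
    selected = []
    for feature, cost in sorted(costs.items(), key=lambda kv: kv[1]):
        if cost <= budget:
            selected.append(feature)
            budget -= cost
    return selected

# Hypothetical per-feature memory costs (MB) left after encoding the
# essential private features, with 40 MB of headroom on the device.
non_essential_costs = {"steps": 5.0, "heart_rate": 12.0, "sleep": 9.0,
                       "calories": 7.0, "floors": 18.0}
print(greedy_private_features(non_essential_costs, budget=40.0))
# ['steps', 'calories', 'sleep', 'heart_rate'] -> 33 MB used, 'floors' dropped
```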
59

Composing DaaS web services : application to eHealth

Barhamgi, Mahmoud 08 October 2010 (has links)
In this dissertation, we propose a novel approach for the automatic composition of Data-as-a-Service (DaaS) Web services. Automatic DaaS Web service composition requires dealing with three major research thrusts: (i) describing the semantics of DaaS Web services, (ii) selecting and combining relevant DaaS Web services, and (iii) generating composite service descriptions (i.e. the compositions' plans). We first propose to model DaaS Web services as RDF views over domain ontologies. An RDF view allows capturing the semantics of the associated DaaS Web service in a "declarative" way, based on concepts and relationships whose semantics are formally defined in domain ontologies. The service description files (i.e. WSDL files) are annotated with the defined RDF views using the extensibility feature of the WSDL standard. We then propose to use query rewriting techniques for selecting and composing DaaS Web services. Specifically, we devised an efficient RDF-oriented query rewriting algorithm that selects relevant services based on their defined RDF views and combines them to answer a posed query. It also generates an execution plan for the obtained composition(s). Our algorithm takes into account the RDFS semantic constraints (i.e. "subClassOf", "subPropertyOf", "Domain" and "Range") and is able to address both specific and parameterized queries.
Since DaaS Web services may be used to access sensitive and private data, we also extended our DaaS service composition approach to handle data privacy concerns. Posed queries are modified to accommodate the pertaining privacy conditions from data privacy policies before their resolution by the core composition algorithm. Our proposed privacy preservation model takes the user's privacy preferences into account. The main application domain of our approach is eHealth, where DaaS Web services are used to share patients' medical records.
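A drastically simplified illustration of selecting services by their views: each service is described by the set of ontology predicates its view covers, and a query is answered by greedily combining services until every query predicate is covered. This is a toy sketch under assumed data structures; the thesis's RDF-oriented rewriting algorithm handles far more (variable mappings, RDFS constraints, execution plans, privacy-aware query modification).

```python
def compose_services(query_predicates: set[str],
                     service_views: dict[str, set[str]]) -> list[str]:
    """Greedy set-cover style selection: pick, at each step, the service
    whose view covers the most still-uncovered query predicates."""
    uncovered = set(query_predicates)
    composition = []
    while uncovered:
        best = max(service_views, key=lambda s: len(service_views[s] & uncovered))
        gain = service_views[best] & uncovered
        if not gain:
            raise ValueError(f"No service covers: {uncovered}")
        composition.append(best)
        uncovered -= gain
    return composition

# Hypothetical eHealth services and the ontology predicates their views cover.
views = {
    "S1_patient_record": {"hasPatient", "hasDiagnosis"},
    "S2_lab_results":    {"hasPatient", "hasLabTest", "hasResult"},
    "S3_prescriptions":  {"hasPatient", "hasDrug"},
}
query = {"hasPatient", "hasDiagnosis", "hasDrug"}
print(compose_services(query, views))  # ['S1_patient_record', 'S3_prescriptions']
```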
60

Using Blockchain Technology to Design a Green Supply Chain Management Information Platform Framework with Data Privacy

黃方佐 Unknown Date (has links)
Today, implementing green supply chain management is not only a matter of regulatory compliance; more proactively, it can bring enterprises additional competitive advantages, which makes green supply chain management an increasingly important topic for enterprises to examine. At present, green supply chain management platforms are generally built by relying on governments or third-party institutions to provide data-exchange mechanisms, or by connecting the different enterprise systems of the companies along the supply chain to achieve data exchange. However, this approach carries risks: material data is sensitive and valuable to enterprises, so companies must place a high degree of trust in the platform providing the storage and data-exchange services, and the overall system also suffers from the weaknesses of a centralized architecture. The characteristics of blockchain technology can address these problems, as blockchain provides a decentralized solution in which data is permanently preserved and tamper-proof. This study goes a step further by combining blockchain technology with an encryption and decryption mechanism so that data can be read only by its provider and by the queriers the provider has approved, giving the blockchain system stronger data privacy, and by designing an off-blockchain storage system that makes storage space easier to scale. In this way, the issues of storing and exchanging the large volume of valuable material data in a green supply chain, while guaranteeing that material data cannot be tampered with and that green certifications cannot be tampered with or forged, can be resolved.
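A minimal sketch of the combination described above (encrypted material data stored off-chain, an integrity hash anchored on-chain, and the data key shared only with approved queriers) could look as follows. It assumes the `cryptography` package for symmetric encryption; the record layout, class, and function names are illustrative assumptions, not the thesis's actual platform.

```python
import hashlib
import json
from cryptography.fernet import Fernet

class GreenSupplyChainRecord:
    """Encrypt material data for off-chain storage; anchor only its hash on-chain."""

    def __init__(self, material_data: dict):
        self._data_key = Fernet.generate_key()           # symmetric data key
        payload = json.dumps(material_data, sort_keys=True).encode()
        self.ciphertext = Fernet(self._data_key).encrypt(payload)       # off-chain blob
        self.onchain_hash = hashlib.sha256(self.ciphertext).hexdigest() # goes on-chain
        self._grants: dict[str, bytes] = {}              # approved querier -> wrapped key

    def grant_access(self, querier_id: str, querier_key: bytes) -> None:
        """Wrap the data key with the querier's key so only they can unwrap it."""
        self._grants[querier_id] = Fernet(querier_key).encrypt(self._data_key)

    def read(self, querier_id: str, querier_key: bytes, offchain_blob: bytes) -> dict:
        """Verify integrity against the on-chain hash, then decrypt if approved."""
        if hashlib.sha256(offchain_blob).hexdigest() != self.onchain_hash:
            raise ValueError("Off-chain data does not match the on-chain hash (tampered)")
        data_key = Fernet(querier_key).decrypt(self._grants[querier_id])
        return json.loads(Fernet(data_key).decrypt(offchain_blob))

# Example: a supplier shares a green certification with one approved buyer.
record = GreenSupplyChainRecord({"material": "recycled aluminium", "cert": "ISO 14001"})
buyer_key = Fernet.generate_key()
record.grant_access("buyer-42", buyer_key)
print(record.read("buyer-42", buyer_key, record.ciphertext))
```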
