41 |
O Processo penal e a busca pela verdade / Criminal procedure and the search for truth
Ferreira, Rosana Miranda 29 March 2006 (has links)
Previous issue date: 2006-03-29 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

In this dissertation we present the criminal proceeding as an instrument in the search for truth. To ground our understanding of truth we turn to philosophy, beginning in Greece with Socrates and ending on native soil with Miguel Reale, summarizing how each thinker formulates knowledge of the truth.
From there we present truth within the process. We single out "real truth" as unattainable and beyond reach, even for the presiding judge of the criminal prosecution, since a factual situation and its circumstances can never be reproduced exactly as they occurred.
We define the varieties of truth: formal, material, procedural, approximate, and verisimilar, noting the modern trend of seeking a certainty close to judicial truth, the latter arising not from the evidence but from a judgment, grounded in justice.
We stress that although attaining the truth is improbable for the criminal proceeding, persistence in seeking a true reconstruction of the facts is a value that legitimizes criminal prosecution itself.
From the historical synthesis presented, we survey how truth has been ascertained, from the most violent methods of the Inquisition to the present day, when a citizen may wait years for the state's reply. To illustrate the idea we turn to Franz Kafka, whose work portrays someone standing "Before the Law".
In discussing the fundamental right of access to justice, we point to the supremacy of the principle of human dignity, which must also be reflected in the process, given the State's duty to declare the law.
We describe some notions of proof, the allegations, the burdens, and some of the obstacles within the proceeding itself that stand as barriers to the search for truth. We consider the role of the judge, invested with the power and duty to weigh all the evidence raised, and even to order the production of further evidence deemed necessary.
The decision, finally, issued from free conviction supported by argument and transparency in its reasoning, represents the truth longed for and pursued; it also performs a social function, in giving effect to the law, exercising ethics, pacifying society, and seeking the common good.
|
42 |
整合資料在雲端環境上的分享與隱私保護-以電子病歷資料為例 / Sharing and Protection of Integrated Data in the Cloud: Electronic Health Record as an Example
楊竣展, Yang, Jiun Jan Unknown Date (has links)
Electronic Health Records (EHRs) have gradually replaced traditional paper records: they are faster and more convenient to share, and they integrate more effectively than paper. In recent years the rapid development of cloud computing has allowed health information systems to evolve more quickly, but it has also raised privacy problems; in today's fast-moving cloud environments, the privacy of data cannot yet be fully assured. Even where existing research lets data owners express their own privacy preferences as policies, the lack of semantic considerations in those designs opens a gap between the real meaning of a personal privacy preference and the policy that is actually enforced.
This research examines EHRs stored in the cloud and designs a three-layer integration platform that uses semantic technology, with ontologies expressed in OWL2, to integrate data from multiple parties over their databases. Ontology integration on the platform lets users quickly query integrated data from several medical centers: queries are rewritten by the platform, policies captured at the lower layer are passed to the upper layer for management and enforcement, and the data is finally retrieved from the databases. The result achieves the goals of sharing and integrating data in the cloud while honoring the data owners' privacy expectations. (A small sketch of this kind of semantic integration follows.)
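As a toy illustration of the kind of semantic integration described above, the sketch below merges EHR fragments from two sources into one RDF graph with rdflib and runs a SPARQL query that respects a consent flag. All namespaces, properties, and data values are hypothetical; the thesis's actual platform uses OWL2 ontologies and a three-layer architecture that this sketch does not reproduce.

```python
# A minimal sketch (not the thesis's platform) of merging EHR fragments
# from two hospitals into one RDF graph and querying it with rdflib.
# The ex: namespace, properties, and individuals are hypothetical.
from rdflib import Graph

hospital_a = """
@prefix ex: <http://example.org/ehr#> .
ex:patient1 ex:hasDiagnosis ex:Diabetes ; ex:allowsResearchUse true .
"""
hospital_b = """
@prefix ex: <http://example.org/ehr#> .
ex:patient1 ex:hasPrescription ex:Metformin .
ex:patient2 ex:hasDiagnosis ex:Asthma ; ex:allowsResearchUse false .
"""

g = Graph()
g.parse(data=hospital_a, format="turtle")
g.parse(data=hospital_b, format="turtle")  # union of both sources

# Return only records whose owners consented to research use.
q = """
PREFIX ex: <http://example.org/ehr#>
SELECT ?p ?d WHERE {
    ?p ex:hasDiagnosis ?d ; ex:allowsResearchUse true .
}
"""
for row in g.query(q):
    print(row.p, row.d)  # only patient1's record is returned
```

In a fuller design, the consent flag would be one of many policy terms captured at the lower layer and enforced during query rewriting.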
|
43 |
使用本體論與規則執行企業隱私保護規範 / Using ontologies and rules to enforce enterprise privacy protection policies
郭弘毅, Guo, Hong Yi Unknown Date (has links)
In today's increasingly ubiquitous e-commerce, customer data is collected from ever broader sources, and the disclosure of personal data can cause serious harm, from personal financial loss to damage to a company's reputation. This research builds, within an enterprise backbone environment, a privacy-protection policy framework enriched with ontologies and rules from the Semantic Web, so that privacy policies can be controlled and enforced at the semantic level. We identify and verify the advantages of an Ontologies + Rules architecture for expressing and managing policies, ensuring that enterprise server platforms honor the commitments negotiated with customers when their personal data was first collected. Finally, we propose a third-party platform to enforce the circulation, sharing, and protection of personal data. (A toy sketch of rule-based enforcement follows.)
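To make the idea of rule-based enforcement concrete, here is a minimal plain-Python sketch: each consent rule is a predicate over an access request, and a request is served only if every rule holds. The request fields and rules are invented for illustration; the thesis expresses such rules over ontologies rather than Python objects.

```python
# A minimal sketch of rule-based policy enforcement (plain Python, standing
# in for the ontology+rule machinery described above). Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    purpose: str        # why the data is requested
    data_category: str  # e.g. "email", "purchase_history"
    recipient: str      # who receives the data

# Rules derived from the consent a customer gave at collection time.
consent_rules = [
    lambda r: r.purpose in {"order_fulfilment", "support"},
    lambda r: r.data_category != "purchase_history" or r.recipient == "internal",
]

def allowed(request: Request) -> bool:
    """A request is permitted only if every consent rule holds."""
    return all(rule(request) for rule in consent_rules)

print(allowed(Request("support", "email", "internal")))       # True
print(allowed(Request("marketing", "email", "third_party")))  # False
```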
|
44 |
政府機關提高隱私保護信任機制之研究-以金融監理為例 / A study on improving the trust mechanism of privacy protection in government agencies: a case of the financial supervision system
林占山 Unknown Date (has links)
Personal data protection falls within the right to privacy. Modern governments constantly face the impact of changes in their internal and external policy environments; the rapid development and wide application of information technology, in particular, directly affects how governments position their policies, the scope of their services, their modes of operation, and their principles of governance. As modern states rethink the routines and old structures of administration, and face demands for information disclosure and administrative efficiency, the move toward e-government has become inevitable. On the other hand, the impact of the information revolution on privacy and personal data protection is stronger and deeper than ever before. The literature offers many reasons why citizens may distrust government, and these reasons are closely tied to the security, privacy, and integrity of their personal data. As government strives to develop e-government so that citizens can enjoy its convenience, the key question for safeguarding citizens' basic rights and realizing privacy protection is how to construct e-governance mechanisms and collaborative procedures for personal data privacy, within government agencies and between government and business, so as to strengthen government effectiveness, responsiveness, and accountability as indicators of public governance, ensure the reasonable flow of personal data while protecting privacy, and raise overall trust in government.
The new Personal Data Protection Act passed its third reading in the Legislative Yuan in April 2010. Once in force, it enlarges the range of parties subject to the Act, regulates the procedures for collecting and processing personal data, increases the custodial responsibility of those who hold personal data, and raises the ceiling on compensation for data breaches to NT$200 million, which is expected to increase the costs and liabilities that enterprises bear in collecting and using personal data. This study examines how government agencies can design effective mechanisms for administrative procedural control, accountability, and transparency, supported by a sustained IT governance framework and systems for privacy protection, with periodic disclosure and active dissemination. Giving individuals local control over how their own information is processed and used raises the transparency and accountability of government administration on one hand, and strengthens privacy protection and public trust on the other. Taking the financial supervision system as an example, the study asks how institutions should respond to the impact of the new Act by effectively adjusting their internal data-collection and information-security control processes. It builds a prototype supervisory mechanism for the banking sector to oversee the operations of the financial industry, aiming to serve customers well while avoiding the operational risks the new Act creates for financial institutions, and then recommends an IT architecture for privacy protection in government agencies, in the hope of providing proactive, secure, and convenient services that win citizens' trust in, and commitment to, their government.
|
45 |
Méthodes formelles pour le respect de la vie privée par construction / Formal methods for privacy by design
Antignac, Thibaud 25 February 2015 (has links)
Privacy by Design (PbD) is increasingly praised as a key approach to improving privacy protection. New information and communication technologies give rise to new business models and services, and these services often rely on the exploitation of personal data for the purpose of customization. While privacy is more and more at risk, the growing view is that technologies themselves should be used to propose more privacy-friendly solutions. Privacy Enhancing Technologies (PETs) have been extensively studied, and many techniques have been proposed, such as anonymizers or advanced encryption mechanisms. However, PbD goes beyond the use of PETs: the privacy requirements of a system should be taken into account from the early stages of the design, because they can have a large impact on the overall architecture of the solution. The PbD approach can thus be summed up as "prevent rather than cure". A number of principles related to the protection of personal data and privacy have been enshrined in law and soft regulations. They involve notions such as data minimization, control of personal data by the subject, transparency of the data processing, or accountability. However, these principles are not precise enough to be translated directly into technical features, and no method exists so far to support the design and verification of privacy-compliant systems. This thesis proposes a systematic process to specify, design, and verify system architectures, which helps designers explore the design space in a systematic way. It is complemented by a formal framework in which confidentiality and integrity requirements can be expressed. Finally, a computer-aided engineering tool enables non-expert designers to perform formal verification of the architectures. A case study illustrates the whole approach, showing how these contributions complement each other and can be used in practice. (A toy illustration of an architecture-level privacy check appears below.)
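As a toy illustration of an architecture-level check (not the formal framework developed in the thesis), the sketch below models an architecture as a directed graph of data flows and verifies by reachability that raw personal data never reaches an untrusted component. All component names are hypothetical.

```python
# A toy architecture-level privacy check: raw personal data must not flow to
# an untrusted component unless a sanitizer sits on the path. Hypothetical.
from collections import deque

flows = {  # directed data flows between architecture components
    "sensor": ["anonymizer", "local_store"],
    "anonymizer": ["analytics_service"],
    "local_store": [],
    "analytics_service": [],
}
sanitizers = {"anonymizer"}       # components that strip identifying data
untrusted = {"analytics_service"}

def raw_data_reaches_untrusted(source: str) -> bool:
    """BFS over flows, stopping at sanitizers where raw data is removed."""
    queue, seen = deque([source]), {source}
    while queue:
        node = queue.popleft()
        if node in untrusted:
            return True
        if node in sanitizers:    # raw data does not pass beyond this point
            continue
        for nxt in flows.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(raw_data_reaches_untrusted("sensor"))  # False: the anonymizer blocks it
```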
|
46 |
Informační chování žáků 8. a 9. tříd ve vztahu k ochraně vlastního soukromí na sociálních sítích / Information behavior of 8th and 9th grade pupils in relation to the protection of their own privacy on social networks
Filipová, Helena January 2021 (has links)
This diploma thesis deals with the information behavior of pupils in the 8th and 9th grades of primary schools in relation to the protection of their own privacy on the social networks Facebook and Instagram. The aim of the work is to find out:
- how interested pupils are in the issue of protecting their privacy on these networks, and where they get this information from (parents, school),
- whether they are aware of the associated risks, and how they take them into account in their behavior,
- whether they know the tools available to them on these networks,
- how the above affects their conduct on the selected social networks in terms of the content and form of the information they share.
The theoretical part of the thesis deals mainly with the characteristics of the selected social networks, Facebook and Instagram, in terms of privacy protection against possible threats. The practical part presents the results of quantitative research carried out in the form of a questionnaire among 8th and 9th grade elementary school pupils.
|
47 |
Local differentially private mechanisms for text privacy protection
Mo, Fengran 08 1900 (has links)
In Natural Language Processing (NLP) applications, training an effective model often requires a massive amount of data. However, text data in the real world is scattered across different institutions and user devices. Sharing it directly with an NLP service provider brings huge privacy risks, as text data often contains sensitive information, leading to potential privacy leakage. A typical way to protect privacy is to privatize the raw text directly and leverage Differential Privacy (DP) to protect the text at a quantifiable level of privacy protection. Protecting the intermediate computation results via a randomized text privatization mechanism is another available solution.
However, existing text privatization mechanisms fail to achieve a good privacy-utility trade-off due to the intrinsic difficulty of text privacy protection. Their limitations mainly include the following: (1) mechanisms that privatize text by applying the dχ-privacy notion are not applicable to all similarity metrics because of its strict requirements; (2) they privatize each token in the text equally, providing the same, excessively large output set, which results in over-protection; (3) current methods can guarantee privacy for either the training step or the inference step, but not both, because they lack DP composition and DP amplification techniques.
This poor utility-privacy trade-off impedes the adoption of current text privatization mechanisms in real-world applications. In this thesis, we propose two methods, from different perspectives, that cover both the training and inference stages while requiring no trust in the server's security. The first is a Customized differentially private Text privatization mechanism (CusText) that assigns each input token a customized output set, providing more advanced, adaptive privacy protection at the token level. It also overcomes the similarity-metric limitation caused by the dχ-privacy notion by adapting the mechanism to satisfy ϵ-DP. Furthermore, we provide two new text privatization strategies that boost the utility of privatized text without compromising privacy. (A minimal sketch of token-level privatization in this spirit appears after this abstract.) The second is a Gaussian-based local Differentially Private (GauDP) model that significantly reduces the calibrated noise power added to the intermediate text representations, based on an advanced privacy-accounting framework, and thus improves model accuracy by incorporating several components. The model consists of an LDP layer, sub-sampling and up-sampling DP amplification algorithms for training and inference, and DP composition algorithms for noise calibration. This novel solution guarantees, for the first time, privacy for both training and inference data.
To evaluate our proposed text privatization mechanisms, we conduct extensive experiments on several datasets of different types. The experimental results demonstrate that our mechanisms achieve a better privacy-utility trade-off and better practical application value than existing methods. In addition, we carry out a series of analyses to explore the crucial factors in each component, which can provide more insight into text protection and inform further exploration of privacy-preserving NLP.
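A minimal sketch of token-level privatization in the spirit of CusText is shown below: each token is replaced by a word drawn from a small, token-specific candidate set via the exponential mechanism, which satisfies ϵ-DP over that set. The candidate sets and similarity scores are toy values, not the thesis's; a real system would derive candidate sets from word embeddings.

```python
# A minimal sketch of token-level text privatization: the exponential
# mechanism samples a replacement from a per-token candidate set with
# probability proportional to exp(epsilon * utility / 2). Toy values only.
import numpy as np

rng = np.random.default_rng(0)

# Customized output set per input token, with similarity scores in [0, 1].
candidates = {
    "diabetes": {"diabetes": 1.0, "illness": 0.6, "condition": 0.5},
    "paris":    {"paris": 1.0, "lyon": 0.7, "city": 0.4},
}

def privatize(token: str, epsilon: float) -> str:
    """Sample a replacement with probability proportional to exp(eps*u/2)."""
    cand = candidates.get(token)
    if cand is None:  # out-of-vocabulary tokens pass unchanged in this toy;
        return token  # a real mechanism would privatize these as well
    words = list(cand)
    utilities = np.array([cand[w] for w in words])
    logits = epsilon * utilities / 2.0  # sensitivity of the utility is <= 1
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(words, p=probs)

print(" ".join(privatize(t, epsilon=2.0) for t in ["diabetes", "in", "paris"]))
```

Smaller ϵ flattens the sampling distribution (more privacy, less utility); larger ϵ concentrates it on the most similar word.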
|
48 |
Deep Neural Networks for Inverse De-Identification of Medical Case Narratives in Reports of Suspected Adverse Drug Reactions / Djupa neuronnät för omvänd avidentifiering av medicinska fallbeskrivningar i biverkningsrapporter
Meldau, Eva-Lisa January 2018 (has links)
Medical research requires detailed and accurate information on individual patients. This is especially so in the context of pharmacovigilance, which among other things seeks to identify previously unknown adverse drug reactions. Here, the clinical stories are often the starting point for assessing whether there is a causal relationship between the drug and the suspected adverse reaction. Reliable automatic de-identification of medical case narratives could allow this patient data to be shared without compromising the patient's privacy. Current research on de-identification has focused on labelling the tokens in a narrative with the class of sensitive information they belong to. In this Master's thesis project, we explore an inverse approach: de-identification of medical case narratives is instead understood as identifying the tokens which do not need to be removed from the text in order to ensure patient confidentiality. Our results show that this approach can lead to a more reliable method in terms of higher recall. We achieve a recall of sensitive information of 99.1% while precision is kept above 51% on the 2014-i2b2 benchmark data set. The model was also fine-tuned on case narratives from reports of suspected adverse drug reactions, where a recall of sensitive information of more than 99% was again achieved. Although precision was only at a level of 55%, which is lower than in comparable systems, an expert could still identify information useful for causality assessment in pharmacovigilance in most of the case narratives de-identified with our method; in more than 50% of the case narratives, no information useful for causality assessment was missing at all. (A toy sketch of the inverse labelling idea follows.)
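The inverse labelling idea can be sketched in a few lines: instead of tagging sensitive tokens, keep only the tokens a classifier is confident are safe and redact everything else, trading precision for very high recall. The confidence scores below are invented; the thesis uses a deep neural network to produce them.

```python
# A toy sketch of inverse de-identification: redact every token that is not
# confidently labelled safe. Confidence scores here are made up; a real
# system would obtain them from a trained sequence-labelling model.
def inverse_deidentify(tokens, safe_confidence, threshold=0.9):
    """Keep a token only if its 'safe' confidence clears the threshold."""
    return [t if safe_confidence[t] >= threshold else "[REDACTED]"
            for t in tokens]

tokens = ["patient", "Anna", "Berg", "developed", "rash", "after", "drug", "X"]
safe_confidence = {"patient": 0.99, "Anna": 0.30, "Berg": 0.20,
                   "developed": 0.98, "rash": 0.97, "after": 0.99,
                   "drug": 0.95, "X": 0.40}

print(" ".join(inverse_deidentify(tokens, safe_confidence)))
# -> patient [REDACTED] [REDACTED] developed rash after drug [REDACTED]
```

Raising the threshold pushes recall of sensitive information toward 100% at the cost of precision, which is exactly the trade-off the abstract reports.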
|
49 |
Beyond Privacy Concerns: Examining Individual Interest in Privacy in the Machine Learning Era
Brown, Nicholas James 12 June 2023
The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We use the enhanced APCO model as the theoretical lens to investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making. In a scenario-based experiment with 499 participants, we present various company privacy policies to participants to examine their trust and privacy considerations, then ask them to share reasons why they would or would not opt in to share their voice data to train a companies' voice recognition software. We find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns, which thereby mediate their decisions to share their voice data. Furthermore, we manipulate four factors of a privacy policy to operationalize various cognitive biases actively present in the minds of consumers and find that default trust and salience biases significantly affect participants' privacy decision making. Our results provide a deeper contextualized understanding of privacy-related concerns that may arise in human-augmented ML system configurations and highlight the managerial importance of considering the role of human involvement in supervised machine learning settings. Importantly, we introduce perceived human involvement as a new construct to the information privacy discourse.
Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. Researchers refer to this as the privacy paradox, and decades of information privacy research have identified a myriad of explanations why this paradox occurs. Yet the underlying crux of the explanations presumes privacy concern to be the appropriate proxy to measure privacy attitude and compare with actual privacy behavior. Often, privacy concerns are situational and can be elicited through the setup of boundary conditions and the framing of different privacy scenarios. Drawing on the cognitive model of empowerment and interest, we propose a multidimensional privacy interest construct that captures consumers' situational and dispositional attitudes toward privacy, which can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward reengaging particular behaviors that increase their information privacy. This construct comprises four dimensions—impact, awareness, meaningfulness, and competence—and is conceptualized as a consumer's assessment of contextual factors affecting their privacy perceptions and their global predisposition to respond to those factors. Importantly, interest was originally included in the privacy calculus but is largely absent in privacy studies and theoretical conceptualizations. Following MacKenzie et al. (2011), we developed and empirically validated a privacy interest scale. This study contributes to privacy research and practice by reconceptualizing a construct in the original privacy calculus theory and offering a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors. / Doctor of Philosophy / The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making and find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns. This thereby influences their decisions to share their voice data. Our results highlight the importance of understanding consumers' willingness to contribute their data to generate complete and diverse data sets to help companies reduce algorithmic biases and systematic unfairness in the decisions and outputs rendered by ML systems.
Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. This is referred to as the privacy paradox, and decades of information privacy research have identified a myriad of explanations why this paradox occurs. Yet the underlying crux of the explanations presumes privacy concern to be the appropriate proxy to measure privacy attitude and compare with actual privacy behavior. We propose privacy interest as an alternative to privacy concern and assert that it can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward reengaging particular behaviors that increase their information privacy. We found that privacy interest was more effective than privacy concern in predicting consumers' mobilization behaviors, such as publicly complaining about privacy issues to companies and third-party organizations, requesting to remove their information from company databases, and reducing their self-disclosure behaviors. By contrast, privacy concern was more effective than privacy interest in predicting consumers' behaviors to misrepresent their identity. By developing and empirically validating the privacy interest scale, we offer interest in privacy as a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors.
|
50 |
Ochrana obětí trestných činů a média: zveřejňování informací o týraných dětech před a po přijetí novely trestního řádu v roce 2009 / Protection of Crime Victims and the Media: Publishing of Mistreated Children Information before and after Passing the Law of Criminal Procedure Amendment in 2009
Hosenseidlová, Petra January 2013 (has links)
The thesis Protection of Crime Victims and the Media: Publishing of Mistreated Children Information before and after Passing the Law of Criminal Procedure Amendment in 2009 deals with the problem of secondary victimization caused by the media. More specifically, it focuses on mistreated children and on the publication of the kind of information about them that enables their identification. It is concerned with the nationwide daily press and compares the situation before and after the passing of the Criminal Procedure Amendment in 2009. This amendment introduced measures for better privacy protection of crime victims, with special regard to underage victims and victims of some exceptionally serious crimes. The thesis compares the occurrence of information enabling the identification of mistreated children in 2008 and 2011 in the three most popular nationwide dailies: Mlada fronta Dnes, Pravo, and Blesk. It considers the following information: names and surnames of the victims and their family members, residence location, and photos of the victims, their family members, and their residence location. Apart from that, it also examines where journalists get this information and these photos from. The main aim is to find out the impact of the amendment, that is, whether less identifying information...
|