
Establishing Verifiable Trust in Collaborative Health Research

Abstract

Collaborative health research environments usually involve sharing private health data among a number of participants, including researchers at different institutions. Including AI systems as participants in these environments allows predictive analytics to be applied to the research data, enabling better diagnoses. However, the growing number of researchers and AI systems working together raises the problem of protecting the privacy of data contributors and managing trust among participants, which affects the overall collaboration effort. In this thesis, we propose an architecture that utilizes blockchain technology to enable verifiable trust in collaborative health research environments, so that participants who do not necessarily trust each other can effectively collaborate to achieve a research goal. Provenance management of research data and privacy auditing are key components of the architecture that allow participants' actions, and their compliance with privacy policies, to be checked across the research pipeline. The architecture supports distributed trust between participants through a Linked Data-based blockchain model that allows tamper-proof audit logs to be created, preserving log integrity and participant non-repudiation. To maintain the integrity of the audit logs, we investigate state-of-the-art methods of generating cryptographic hashes for RDF datasets. We demonstrate an efficient method of computing integrity proofs that constructs a sorted Merkle tree for growing RDF datasets, keyed on timestamps extractable from the dataset. We evaluate our methods through experimental realizations and analyze their resiliency to common security threats.

Thesis / Master of Science (MSc)

Lay Abstract

Collaborative health research environments involve the sharing of private health data among a number of participants, including researchers at different institutions. Including AI systems as participants in these environments allows predictive analytics to be applied to the research data to provide better diagnoses. In such environments, where private health data is shared among diverse participants, maintaining trust between participants and auditing data transformations across the environment are important for protecting the privacy of data contributors. Preserving the integrity of these transformations is paramount for supporting transparent auditing processes. In this thesis, we propose an architecture for establishing verifiable trust and transparency among participants in collaborative health research environments, present a model for creating tamper-proof privacy audit logs that support the privacy management of data contributors, and analyze methods for verifying the integrity of all logged data activities in the research environment.
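The sorted Merkle tree construction mentioned above can be illustrated with a minimal sketch: leaves are hashes of timestamped RDF statements, sorted by timestamp before the tree is built, so that the same set of logged statements always yields the same root regardless of arrival order. All names here (`sorted_merkle_root`, the example quads) are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of a sorted Merkle tree over timestamped RDF
# statements (illustrative only; not the thesis's implementation).
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 digest used for both leaves and internal nodes
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root over a list of leaf hashes."""
    if not leaves:
        return h(b"")
    level = leaves
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def sorted_merkle_root(statements):
    """statements: iterable of (timestamp, rdf_quad_string) pairs.
    Sorting by the extractable timestamp key gives a canonical leaf
    order, so the root stays stable as the dataset grows."""
    ordered = sorted(statements, key=lambda s: s[0])
    leaves = [h(f"{ts}\t{quad}".encode()) for ts, quad in ordered]
    return merkle_root(leaves)

# Example audit-log entries (hypothetical PROV-style quads)
log = [
    ("2018-01-02T10:00:00Z", "<ex:r1> <prov:wasGeneratedBy> <ex:a1> <ex:g> ."),
    ("2018-01-01T09:00:00Z", "<ex:r0> <prov:used> <ex:d0> <ex:g> ."),
]
root = sorted_merkle_root(log)
# The root is independent of the order in which entries arrive:
assert root == sorted_merkle_root(list(reversed(log)))
```

Because leaf order is fixed by the timestamp key, a verifier holding only the published root can check a membership proof for any logged statement without trusting the party that maintains the log.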

Identifier: oai:union.ndltd.org:mcmaster.ca/oai:macsphere.mcmaster.ca:11375/23417
Date: January 2018
Creators: Sutton, Andrew
Contributors: Samavi, Reza, Computing and Software
Source Sets: McMaster University
Language: English
Detected Language: English
Type: Thesis
