341 |
The Invisibility of “Second Sight”: Double Consciousness in American Literature and Popular Culture / Dabbs, Ashlie C. 24 October 2011 (has links)
No description available.
|
342 |
Electrical Power and Storage for NASA Next Generation Aircraft / Al-Agele, Saif January 2017 (has links)
No description available.
|
343 |
Towards a Human Genomic Coevolution Network / Savel, Daniel M. 04 June 2018 (has links)
No description available.
|
344 |
The Influence of DNA Sequence and Post Translational Modifications on Nucleosome Positioning and Stability / Mooney, Alex M. 20 December 2012 (has links)
No description available.
|
345 |
Pipeline for Next Generation Sequencing data of phage displayed libraries to support affinity ligand discovery / Schleimann-Jensen, Ella January 2022 (has links)
Affinity ligands are important molecules used in affinity chromatography to purify significant substances from complex mixtures. Finding affinity ligands specific to important target molecules can be a challenging process. Cytiva uses the powerful phage display technique to find promising new affinity ligands. Phage display is run in several enrichment cycles; when developing new affinity ligands, a protein scaffold library with a diversity of up to 10^10-10^11 different scaffold variants is run through these cycles. The result from the phage display rounds is screened for target-molecule binding and then sequenced, usually with one of the conventional screening methods, ELISA or Biacore, followed by Sanger sequencing. However, the throughput of these analyses is very low, often covering only a few hundred screened clones. Next Generation Sequencing (NGS) has therefore become an increasingly popular screening method for phage display libraries, as it generates millions of sequences from each phage display round. This high throughput creates a need for a robust data analysis pipeline to interpret the large amounts of data. In this project, a pipeline for analyzing NGS data from phage displayed libraries was developed at Cytiva; its purpose is to find new affinity ligands for the purification of essential substances used in drugs. The pipeline is written in the programming language R and consists of several analyses covering the most important steps for finding promising results in the NGS data. With the developed pipeline the user can analyze the data at both the DNA and protein sequence level, view a per-position residue breakdown, and filter the data based on specific amino acids and positions.
This gives a robust and thorough analysis which can lead to promising results that can be used in the development of novel affinity ligands for future purification products.
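As an illustration of the per-position residue breakdown and amino-acid filtering described above, here is a minimal sketch in Python (the thesis's pipeline is in R; the function names are invented for illustration):

```python
from collections import Counter

def residue_breakdown(seqs):
    """Amino-acid frequency at each position of equal-length sequences."""
    return [Counter(s[i] for s in seqs) for i in range(len(seqs[0]))]

def filter_by_residue(seqs, position, residues):
    """Keep sequences carrying one of the given residues at a 0-based position."""
    return [s for s in seqs if s[position] in residues]

reads = ["AKWY", "AKFY", "GKWY"]
breakdown = residue_breakdown(reads)                  # breakdown[0] == Counter({'A': 2, 'G': 1})
tryptophan_hits = filter_by_residue(reads, 2, {"W"})  # ['AKWY', 'GKWY']
```

In a real analysis the same counting would run over millions of NGS reads per enrichment round, so positions whose residue distribution sharpens between rounds point at binding-relevant sites.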
|
346 |
Droplet-Based Microfluidics for High-Throughput Single-Cell Omics Profiling / Zhang, Qiang 06 September 2022 (has links)
Droplet-based microfluidics is a powerful tool permitting massive-scale single-cell analysis in pico-/nanoliter water-in-oil droplets. It has been integrated into various library preparation techniques to accomplish high-throughput scRNA-seq, scDNA-seq, scATAC-seq, scChIP-seq, as well as scMulti-omics-seq. These advanced technologies have provided unique and novel insights into both normal differentiation and disease development at the single-cell level. In this thesis, we develop four new droplet-based tools for single-cell omics profiling. First, Drop-BS is the first droplet-based platform to construct single-cell bisulfite sequencing libraries for DNA methylome profiling; it allows production of BS libraries of 2,000-10,000 single cells within 2 days. We applied the technology to separately profile mixed cell lines, mouse brain tissues, and human brain tissues to reveal cell-type heterogeneity. Second, the new Drop-ChIP platform requires only two steps of droplet generation to achieve multiple reaction steps in droplets, such as single-cell lysis, chromatin fragmentation, ChIP, and barcoding. Third, we aim to establish a droplet-based platform to accomplish high-throughput full-length RNA-seq (Drop-full-seq), which neither current tube-based nor droplet-based methods can realize. Last, we constructed an in-house droplet-based tool to assist single-cell ATAC-seq library preparation (Drop-ATAC), providing a low-cost and facile protocol for conducting scATAC-seq in laboratories without expensive instruments. / Doctor of Philosophy / Microfluidics is a collection of techniques for manipulating fluids at the micrometer scale. One such technique is droplet-based microfluidics, which can manipulate (e.g., generate, merge, sort, and split) pico-/nanoliter water-in-oil droplets. First, since the water phase is separated by the continuous oil phase, these droplets are discrete, individual reactors.
Second, droplet-based microfluidics can achieve highly parallel manipulation of thousands to millions of droplets. These two advantages make droplet-based microfluidics an ideal tool for performing single-cell assays. Over the past 10 years, various droplet-based platforms have been developed to study the single-cell transcriptome, genome, epigenome, as well as multi-ome. To expand droplet-based tools for single-cell analysis, we aim to develop four novel platforms in this thesis. First, Drop-BS, by integrating droplet generation and droplet fusion techniques, achieves high-throughput single-cell bisulfite sequencing library preparation. It can generate 10,000 single-cell BS libraries within 2 days, which is difficult to achieve with conventional library preparation in tubes/microwells. Second, we developed a novel and facile Drop-ChIP platform to prepare single-cell ChIP-seq libraries. It is easy to operate since it requires only two steps of droplet generation, and it generates higher-quality data compared to previous work. In addition, we are working on the development and characterization of two further droplet-based tools to achieve full-length single-cell RNA-seq and single-cell ATAC-seq.
|
347 |
Low-Input Multi-Omic Studies of Brain Neuroscience Involved in Mental Diseases / Zhu, Bohan 13 September 2022 (links)
Psychiatric disorders are believed to result from the combination of genetic predisposition and many environmental triggers. While a large number of disease-associated genetic variations have been recognized by previous genome-wide association studies (GWAS), the role of epigenetic mechanisms that mediate the effects of environmental factors on CNS gene activity in the etiology of most mental illnesses is still largely unclear. A growing body of evidence suggests that the abnormalities (changes in gene expression, formation of neural circuits, and behavior) involved in most psychiatric syndromes are preserved by epigenetic modifications identified in several specific brain regions. In this thesis, we developed the second generation of one of our microfluidic technologies (MOWChIP-seq) and used it to profile genome-wide histone modifications in three mental illness-related biological studies: the effect of psychedelics in mice, schizophrenia, and the effect of maternal immune activation in mouse offspring. The second generation of MOWChIP-seq was designed to generate histone modification profiles from as few as 100 cells per assay, with a throughput as high as eight assays per run. We then applied the new MOWChIP-seq and SMART-seq2 to profile the histone modification H3K27ac and the transcriptome, respectively, using NeuN+ neuronal nuclei from the mouse frontal cortex after administration of a single dose of a psychedelic. The epigenomic and transcriptomic changes induced by 2,5-dimethoxy-4-iodoamphetamine (DOI), a subtype of psychedelic, in mouse neuronal nuclei at various time points suggest that the long-lasting effects of the psychedelic are more closely related to epigenomic alterations than to changes in transcriptomic patterns. Next, we comprehensively characterized epigenomic and transcriptomic features from the frontal cortex of 29 individuals with schizophrenia and 29 individually matched controls (by gender and age).
We found that schizophrenia subjects exhibited thousands of neuronal vs. glial epigenetic differences at regions that include several genetic susceptibility loci, such as NRXN1, RGS4 and GRIN3A. Finally, we investigated the epigenetic and transcriptomic alterations induced by maternal immune activation (MIA) in the frontal cortex of mouse offspring. Pregnant mice were injected with influenza virus at GD 9.5, and the frontal cortex of the pups (10 weeks old) was examined later. The results offer insights into the contribution of MIA to the etiology of some mental disorders, such as schizophrenia and autism. / Doctor of Philosophy / While this field is still at an early stage, epigenetic studies of mental disorders promise to expand our understanding of how environmental stimuli, interacting with genetic factors, contribute to the etiology of various psychiatric syndromes, such as major depression and schizophrenia. Previous clinical trials suggested that psychedelics may represent a promising long-lasting treatment for patients with depression and other psychiatric conditions. This research demonstrated the therapeutic potential of psychedelic compounds for treating major depression, showing that psychedelics increase dendritic density and stimulate synapse formation. However, the molecular mechanisms mediating the clinical effectiveness of psychedelics remain largely unexplored. Our study revealed that epigenomic-driven changes in synaptic plasticity sustain psychedelics' long-lasting antidepressant action. Another serious mental illness is schizophrenia, which can affect how an individual feels, thinks, and behaves. Like most other mental disorders, schizophrenia results from a combination of genetic and environmental causes. Epigenetic marks allow a dynamic impact of environmental factors, including antipsychotic medications, on the access to genes and regulatory elements.
Despite this, no study so far has profiled cell-type-specific genome-wide histone modifications in postmortem brain samples from schizophrenia subjects, or the effect of antipsychotic treatment on such epigenetic marks. Here we show the first comprehensive epigenomic characterization of the frontal cortex of 29 individuals with schizophrenia and 29 matched controls. The process of brain development is surprisingly sensitive to many environmental insults. Epidemiological studies have recognized maternal immune activation as a risk factor that may change the normal developmental trajectory of the fetal brain and increase the odds of developing a range of psychiatric disorders, including schizophrenia and autism, later in life. Given the prevalence of the coronavirus, uncovering the molecular mechanisms underlying these phenotypic alterations has become more urgent than ever, for both prevention and treatment.
|
348 |
<b>Design, Implementation, and Evaluation of a Quantum-Infused Middle-School Level Science Teaching and Learning Sequence</b> / Zeynep Gonca Akdemir (19166221) 18 July 2024 (has links)
<p dir="ltr">This dissertation explores the integration of Quantum Information Science and Engineering (QISE) into the formal K-12 curriculum through a Design-Based Research (DBR) approach. The overarching purpose is to develop an NGSS-aligned, quantum-infused science curriculum unit for middle school students, aiming to enhance student understanding of and engagement with quantum randomness. The study emphasizes the sequential introduction of concepts (from radioactive decay to quantum computing), interdisciplinary inquiry-based learning, and alignment of content and assessment strategies by leveraging Learning Progressions (LPs) and Hypothetical Learning Trajectories (LTs). Methods employed in this DBR study included iterative design processes, teacher feedback, and teaching experiments with 10 participating in-service middle school science teachers, as well as quantitative assessment and evaluation of students' learning and engagement data. The study also focused on professional development for teachers, incorporating the NGSS and the Framework as foundational guidelines. Findings highlighted the importance of teacher feedback in refining educational strategies, the challenges of teaching advanced quantum concepts at the middle school level, and the benefits of using classical physics as a gateway to introducing quantum concepts. This study also demonstrates a structured teaching-learning pathway, guided by validated and hypothetical LPs, to support students' progression towards more sophisticated knowledge in QISE. Implications include the potential for enhancing the coordination and sequencing of QISE teaching at the K-12 level, contributing to the cultivation of a diverse and quantum-savvy workforce.
This DBR study aims to set a foundation for future research, emphasizing the need for comprehensive teacher training in K-12 QISE education and the transformative power of education in fostering deeper comprehension of and engagement with complex subjects.</p>
|
349 |
Improved Error Correction of NGS Data / Alic, Andrei Stefan 15 July 2016 (has links)
Tesis por compendio / [EN] The work done for this doctorate thesis focuses on error correction of Next Generation Sequencing (NGS) data in the context of High Performance Computing (HPC).
Due to the reduction in sequencing cost, the increasing output of the sequencers and the advancements in the biological and medical sciences, the amount of NGS data has increased tremendously.
Humans alone are not able to keep pace with this explosion of information; computers must therefore assist them in handling the deluge of information generated by the sequencing machines.
Since NGS is no longer just a research topic (used in clinical routine to detect cancer mutations, for instance), requirements in performance and accuracy are more stringent.
For sequencing to be useful outside research, the analysis software must work accurately and fast.
This is where HPC comes into play.
NGS processing tools should leverage the full potential of multi-core and even distributed computing, as those platforms are extensively available.
Moreover, as the performance of individual cores has hit a barrier, current computing tendencies focus on adding more cores and explicitly splitting the computation to take advantage of them.
This thesis starts with a deep analysis of all these problems in a general and comprehensive way (to reach out to a very wide audience), in the form of an exhaustive and objective review of the NGS error correction field.
We dedicate a chapter to this topic to introduce the reader gradually and gently into the world of sequencing.
It presents real problems and applications of NGS that demonstrate the impact this technology has on science.
The review results in the following conclusions: the need to understand the specificities of NGS data samples (given the high variety of technologies and features), and the need for flexible, efficient and accurate tools for error correction as a preliminary step of any NGS postprocessing.
As a result of the explosion of NGS data, we introduce MuffinInfo.
It is a piece of software capable of extracting information from the raw data produced by the sequencer to help the user understand the data.
MuffinInfo uses HTML5, therefore it runs in almost any software and hardware environment.
It supports custom statistics to mould itself to specific requirements.
MuffinInfo can reload the results of a run, which are stored in JSON format for easier integration with third-party applications.
Finally, our application uses threads to perform the calculations, to load the data from the disk and to handle the UI.
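The JSON interchange just described might be sketched as follows (the field names are illustrative, not MuffinInfo's actual schema):

```python
import json

# Hypothetical run statistics; the real tool's fields will differ.
results = {"reads": 1000000, "mean_quality": 34.2, "gc_content": 0.41}

# Store the results as JSON (MuffinInfo writes them to a file)...
serialized = json.dumps(results)

# ...and reload them later, e.g. from a third-party application.
reloaded = json.loads(serialized)
assert reloaded == results
```

Because JSON is language-agnostic, any downstream tool can consume the statistics without depending on MuffinInfo itself.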
In continuation to our research and as a result of the single core performance limitation, we leverage the power of multi-core computers to develop a new error correction tool.
The error correction of the NGS data is normally the first step of any analysis targeting NGS.
As we conclude from the review performed within the frame of this thesis, many projects in different real-life applications have opted for this step before further analysis.
In this sense, we propose MuffinEC, a multi-technology (Illumina, Roche 454, Ion Torrent and, experimentally, PacBio), any-type-of-error corrector handling mismatches, deletions, insertions and unknown values.
It surpasses other similar software by providing higher accuracy (demonstrated by three types of tests) while using fewer computational resources.
It follows a multi-step approach that starts by grouping all the reads using a k-mer-based metric.
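The thesis does not spell out the exact metric here, but a k-mer-based similarity between two reads might look like this minimal sketch (shared k-mer count; MuffinEC's actual metric may differ):

```python
from collections import Counter

def kmer_profile(read, k=3):
    """Multiset of overlapping k-mers in a read."""
    return Counter(read[i:i + k] for i in range(len(read) - k + 1))

def shared_kmers(a, b, k=3):
    """Number of k-mers two reads share; higher means more similar."""
    pa, pb = kmer_profile(a, k), kmer_profile(b, k)
    return sum(min(pa[m], pb[m]) for m in pa)

print(shared_kmers("ACGTACGT", "ACGTTCGT"))  # 3
```

Comparing k-mer profiles instead of full alignments keeps this grouping step cheap, which matters when millions of reads must be clustered before the more expensive alignment stage.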
Next, it employs the powerful Smith-Waterman algorithm to refine the groups and generate Multiple Sequence Alignments (MSAs).
These MSAs are corrected by taking each column and looking for the correct base, determined by a user-adjustable percentage.
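The column-wise correction can be sketched as follows (a simplified version that ignores alignment gaps; the threshold mirrors the user-adjustable percentage):

```python
from collections import Counter

def correct_msa(msa, threshold=0.6):
    """Replace each column with its consensus base when the most frequent
    base reaches the given fraction of reads; otherwise leave it untouched."""
    corrected = [list(read) for read in msa]
    for col in range(len(msa[0])):
        counts = Counter(read[col] for read in msa)
        base, n = counts.most_common(1)[0]
        if n / len(msa) >= threshold:
            for read in corrected:
                read[col] = base
    return ["".join(read) for read in corrected]

print(correct_msa(["ACGT", "ACGA", "ACGT"]))  # ['ACGT', 'ACGT', 'ACGT']
```

Raising the threshold makes the corrector more conservative: ambiguous columns are left alone rather than forced to a majority base.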
This manuscript is structured in chapters based on material that has been previously published in prestigious journals indexed by the Journal of Citation Reports (in outstanding positions) and in relevant congresses. / Alic, AS. (2016). Improved Error Correction of NGS Data [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/67630 / Compendio
|
350 |
Blockchain-enabled Secure and Trusted Personalized Health Record / Dong, Yibin 20 December 2022 (links)
Longitudinal personalized electronic health record (LPHR) provides a holistic view of health records for individuals and offers a consistent patient-controlled information system for managing the health care of patients. Except for patients in the Veterans Affairs health care service, however, no LPHR is available for the general population in the U.S. that can integrate a patient's existing electronic health records throughout the life of care. Such a gap may be attributable mainly to the fact that existing patients' electronic health records are scattered across multiple health care facilities and often not shared due to privacy and security concerns from both patients and health care organizations. The main objective of this dissertation is to address these roadblocks by designing a scalable and interoperable LPHR with patient-controlled and mutually-trusted security and privacy.
Privacy and security are complex problems. Specifically, without a set of access control policies, encryption alone cannot secure patient data against insider threats. Moreover, in a distributed system like LPHR, a so-called race condition occurs when access control policies are centralized while decision-making processes are localized. We propose a formal definition of secure LPHR and develop a blockchain-enabled next generation access control (BeNGAC) model. The BeNGAC solution focuses on patient-managed secure authorization for access, and NGAC operates in open-access surroundings where users can be centrally known or unknown. We also propose permissioned blockchain technology - Hyperledger Fabric (HF) - to mitigate the race-condition shortcoming in NGAC, which in return enhances the weak confidentiality protection in HF. Built upon BeNGAC, we further design a blockchain-enabled secure and trusted (BEST) LPHR prototype in which data are stored in a distributed yet decentralized database. The unique feature of the proposed BEST-LPHR is the use of blockchain smart contracts allowing BeNGAC policies to govern security, privacy, confidentiality, data integrity, scalability, sharing, and auditability. Interoperability is achieved by using a health care data exchange standard called Fast Healthcare Interoperability Resources (FHIR).
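As a rough illustration of the NGAC-style, attribute-based decisions that BeNGAC builds on (a hypothetical sketch, not the dissertation's actual policy model; all names are invented):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Association:
    user_attr: str      # e.g. a role the patient has granted
    operation: str      # e.g. "read", "write"
    record_attr: str    # e.g. a section of the health record

def is_permitted(policy, user_attrs, operation, record_attrs):
    """Grant access only if some association links one of the requester's
    attributes to the requested operation on one of the record's attributes."""
    return any(
        a.user_attr in user_attrs
        and a.operation == operation
        and a.record_attr in record_attrs
        for a in policy
    )

policy = [Association("cardiology_staff", "read", "cardiac_history")]
print(is_permitted(policy, {"cardiology_staff"}, "read", {"cardiac_history"}))  # True
print(is_permitted(policy, {"billing"}, "read", {"cardiac_history"}))           # False
```

In BeNGAC the patient would manage such associations, and encoding the decision logic in blockchain smart contracts is what removes the single centralized policy decision point that causes the race condition.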
We demonstrated the feasibility of the BEST-LPHR design through use case studies. Specifically, a small-scale BEST-LPHR was built as a sharing platform among a patient and health care organizations. In the study setting, patients have raised additional ethical concerns related to consent and granular control of the LPHR. We engineered a Web-delivered BEST-LPHR sharing platform with patient-controlled consent granularity, security, and privacy realized by BeNGAC. Health organizations holding the patient's electronic health record (EHR) can join the platform with trust based on validation from the patient. The mutual trust is established through a rigorous validation process by both the patient and the built-in HF consensus mechanism. We measured system scalability and showed millisecond-range performance of LPHR permission changes.
In this dissertation, we report the BEST-LPHR solution for electronically sharing and managing patients' electronic health records from multiple organizations, focusing on privacy and security concerns. While the proposed BEST-LPHR solution cannot, expectedly, address all problems in LPHR, this prototype aims to increase the EHR adoption rate and reduce LPHR implementation roadblocks. In the long run, the BEST-LPHR will contribute to improving health care efficiency and the quality of life for many patients. / Doctor of Philosophy /
We propose a formal definition of secure LPHR and develop a novel blockchain-enabled next generation access control (BeNGAC) model that can protect the security and privacy of LPHR. Built upon BeNGAC, we further design a blockchain-enabled secure and trusted (BEST) LPHR prototype in which data are stored in a distributed yet decentralized database. The health records on BEST-LPHR are personalized to the patients with patient-controlled security, privacy, and granular consent. The unique feature of the proposed BEST-LPHR is the use of blockchain technology allowing BeNGAC policies to govern security, privacy, confidentiality, data integrity, scalability, sharing, and auditability. Interoperability is achieved by using a health care data exchange standard.
We demonstrated the feasibility of the BEST-LPHR design by the use case studies. Specifically, a small-scale BEST-LPHR is built for sharing platform among a patient and health care organizations. We engineered a Web-delivered BEST-LPHR sharing platform with patient-controlled consent granularity, security, and privacy realized by BeNGAC. Health organizations that holding the patient's electronic health record (EHR) can join the platform with trust based on the validation from the patient. The mutual trust is established through a rigorous validation process by both the patient and built-in blockchain consensus mechanism. We measured system scalability and showed millisecond-range performance of LPHR permission changes.
|