41

Learning Analytics in Relation to Open Access to Research Data in Peru. An Interdisciplinary Comparison

Biernacka, Katarzyna, Huaroto, Libio 01 October 2020 (has links)
Conference held as part of the "III Conferencia Latinoamericana de Analíticas de Aprendizaje LALA2020 Project", 1–2 October 2020, Cuenca, Ecuador. / The aim of this paper is to investigate the perceptions of learning analytics researchers in Peru about the barriers to publication of their research data. A review of the relevant legislation was conducted. Semi-structured interviews were used as the research method, the focus being on the presumed conflict between the publication of research data and the protection of personal data. The results show a range of individual factors that influence the behaviour of scientists in relation to the publication of research data, emphasizing the barriers related to data protection in different disciplines.
42

Impact of GDPR on Data Sharing Behavior of Smart Home Users

Dahl, Victor, Österlin, Marco January 2020 (has links)
The number of connected Internet of Things devices is expected to surpass 30 billion in 2020. The unprecedented levels of personal data sharing are drastically increasing the complexity of privacy challenges. This kindled efforts such as the General Data Protection Regulation (GDPR), which came into effect in May 2018 to establish user data rights. These new user data rights have had a considerable impact both for users and for the data controllers and third parties that are liable to effectuate the new requirements, such as privacy-by-design and explicit consent. In this thesis, we explore this impact of the GDPR, specifically on self-disclosure of personal data through smart home devices, in order to gain insights for smart home practitioners. In doing so, we answer two research questions. Our first research question helps us understand opinions and attitudes, specifically those of Swedish residents: an online survey helps us understand their willingness and fears around adopting smart home devices. For our second research question, we apply a semi-systematic literature review to study how the GDPR has influenced self-disclosure through smart home devices, and which factors have had the most significant effect on its users. The survey (n=131) showed that while trust towards data controllers is cumulatively the highest priority of users, consistent product and service quality was more likely to be the first priority (28%). Some users struggle to find smart home devices useful, so the perceived benefit currently exceeds the cost and perceived risk mainly for lead adopters. Since the GDPR came into effect, we have seen a rise in user awareness and perceived control. Notably, this led to increased skepticism towards smart home devices. The literature review showed promise in systems that help negotiate and suggest privacy preferences between users and data controllers. We also found an exacerbation of the concern for information privacy, and that trust is a major factor for users when deciding to adopt smart home devices. We conclude that some factors are more important than others, and provide insights for smart home practitioners on future avenues for research and prototyping.
43

Advancing Cyberinfrastructure for Collaborative Data Sharing and Modeling in Hydrology

Gan, Tian 01 December 2019 (has links)
Hydrologic research is increasingly data and computationally intensive, and often involves hydrologic model simulation and collaboration among researchers. With the development of cyberinfrastructure, researchers are able to improve the efficiency, impact, and effectiveness of their research by utilizing online data sharing and hydrologic modeling functionality. However, further efforts are still needed to improve the capability of cyberinfrastructure to serve the hydrologic science community. This dissertation first presents the evaluation of a physically based snowmelt model as an alternative to a temperature index model to improve operational water supply forecasts in the Colorado River Basin. It then presents the design of the functionality to share multidimensional space-time data in the HydroShare hydrologic information system. It then describes a web application developed to facilitate input preparation and model execution of a snowmelt model and the storage of these results in HydroShare. The snowmelt model evaluation provided use cases to evaluate the cyberinfrastructure elements developed. This research explored a new approach to advance operational water supply forecasts and provided potential solutions for the challenges associated with the design and implementation of cyberinfrastructure for hydrologic data sharing and modeling.
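For orientation, a temperature-index (degree-day) snowmelt model of the kind the dissertation compares against can be sketched in a few lines. The sketch below is a generic illustration under stated assumptions; the melt factor, variable names, and daily time step are illustrative choices, not those used in the operational forecasts.

```python
import numpy as np

def degree_day_melt(temp_c, swe_mm, melt_factor=3.0, base_temp_c=0.0):
    """Illustrative temperature-index snowmelt step.

    temp_c      : daily mean air temperature series (deg C)
    swe_mm      : initial snow water equivalent (mm)
    melt_factor : degree-day factor (mm per deg C per day), assumed value
    base_temp_c : threshold temperature above which melt occurs
    Returns daily melt (mm/day), limited by the available snowpack.
    """
    melt = np.zeros_like(temp_c, dtype=float)
    for i, t in enumerate(temp_c):
        potential = melt_factor * max(t - base_temp_c, 0.0)  # degree-day relation
        melt[i] = min(potential, swe_mm)                     # cannot melt more than the pack
        swe_mm -= melt[i]
    return melt

# Example: a week of warming temperatures acting on a 50 mm snowpack
print(degree_day_melt(np.array([-2.0, 0.5, 1.0, 3.0, 4.0, 6.0, 8.0]), swe_mm=50.0))
```

A physically based alternative would replace the single degree-day relation with an energy balance over the snowpack, which is what motivates the heavier data and computation requirements discussed above.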
44

Blockchain Technology for Data Sharing in the Banking Sector

Norvill, Robert E. January 2020 (has links)
Know Your Customer compliance costs have never been higher for banks in Europe. This thesis looks at the application of blockchain technology to reduce Know Your Customer compliance costs. The work aims to utilise the strengths of blockchain technology in order to reduce the costs of compliance for banks. This is done through collaboration with industry partners, resulting in a system designed to meet banks’ needs. The contributions of this work are: 1) a system which enables data sharing between banks, enabling 2) a reduction of costs by at least 45%, 3) a reduction or elimination of over-reliance on third parties, 4) an exploration of how to price data within the system, made in order to help banks further reduce their costs, 5) a reduction of chain size, achieved by reducing the size of contract creation transactions in Ethereum by 90% for standard users, and lastly, 6) a better understanding of the functionality and purpose of smart contracts. The system is the first of its kind to remove the requirement of third-party storage solutions, and is the first to explore pricing aspects in detail.
45

Issues regarding the sharing of interim results by the Data Safety Monitoring Board of a trial with those responsible for the conduct of the trial.

Borg Debono, Victoria January 2018 (has links)
Background and Objectives: Sharing of interim results by the Data Safety Monitoring Board (DSMB) with non-DSMB members is an important issue that can affect trial integrity. The objective of this dissertation was to determine the views of stakeholders on what kind of interim results can or should be shared by the DSMB, why, and with whom among those responsible for the conduct of a trial. Methods: We first conducted a systematic search of the literature to assess views and current evidence on sharing interim results. Secondly, we conducted two cross-sectional surveys aimed at those involved in trials to solicit their views on what type of interim results should be shared by the DSMB with non-DSMB members, with whom, and under what circumstances. Thirdly, we assessed potential associations of demographic factors with the sharing of certain interim results and their perceived usefulness, using regression analysis. Results: Mixed views on interim result sharing practices exist in the literature. The surveys yielded the following findings. What to share: Based on the results of our cross-sectional survey (Chapter 4), the interim control event rate (IControlER), the adaptive conditional power (ACP) and the unconditional power (UCP) should not be shared. Most respondents to this survey thought the interim combined event rate (ICombinedER) could be shared provided proper conditions and provisions are in place. However, our cross-sectional scenario-based survey (Chapter 3) demonstrated that the ICombinedER, when shared at interim, is compatible with three possible interim results (Drug X doing better than placebo, worse than placebo, or performing the same as placebo). Why share or not share: Respondents indicated that the ICombinedER can be shared because it does not unmask relative effects between groups and keeps the steering committee (SC) informed about the trial’s progress, on the condition that sharing this type of result, and its purpose, is specified a priori and remains at the DSMB’s discretion, especially if the control group rate is known from the literature. However, it is important to note that the ICombinedER, as demonstrated by evidence from our cross-sectional scenario-based survey (Chapter 3), is compatible with three possible interim results and should not be shared because it has low usefulness and is flawed due to multiple interpretations. The IControlER and the ACP should not be shared because they can unmask interim results. It was noted that the ICombinedER is usually known by the SC and sponsor, making it easy to determine group rates if the IControlER is known. The UCP should not be shared because it is a technical measure that can be misleading about interim results. With whom to share: Survey results from Chapter 4 indicated that the ICombinedER can be shared with the SC, and that the IControlER, the ACP, and the UCP should not be shared with any non-DSMB members by the DSMB. However, evidence from Chapter 3 also indicates that the ICombinedER should not be shared with any non-DSMB member. Factors associated with sharing: Experience with more than 15 trials with private industry sponsorship was found to be associated with not endorsing sharing of the IControlER and with an increase in the perceived usefulness of sharing the ACP.
Though some other demographic factors were found to be associated with sharing the ICombinedER and the UCP, they were sensitive to missing data in our sensitivity analysis and will require further validation. Conclusions: Although the extensive literature review found mixed views on interim result sharing practices, survey evidence from this dissertation suggests that the ICombinedER, IControlER, ACP and UCP should not be shared with any non-DSMB member. The IControlER and ACP can unmask interim results, and the UCP is a technical measure that is potentially misleading. We agree with this reasoning. The majority of respondents to the survey in Chapter 4 indicated that the ICombinedER can be shared with the SC because it does not unmask relative effects between groups; however, it was also stipulated that sharing this measure, and its purpose, should be specified a priori and remain at the DSMB’s discretion, especially if the control group rate is known from the literature. Even though the majority in our second survey (Chapter 4) endorsed sharing the ICombinedER with the SC, we do not recommend sharing the ICombinedER at interim with any non-DSMB member because, as demonstrated by our cross-sectional scenario-based survey in Chapter 3, this measure is compatible with three possible interim results, potentially introducing trial bias at interim through the interpretations of those privy to this measure. Based on the findings of the Chapter 4 survey, there appears to be a lack of awareness of how sharing the ICombinedER is flawed, of low usefulness, and potentially dangerous; the perceived desire to have this measure shared seems misguided. Experience with more than 15 trials with private industry sponsorship was found to be associated with not endorsing the sharing of the IControlER and with an increase in the perceived usefulness of the DSMB sharing the ACP at interim. With regard to implications for future research, this characteristic should be further evaluated to see whether this subgroup has insight into interim trial management practices that protect against trial bias. Results from this research have implications for practice and for guidelines concerning trial design, protocols, and DSMB charters. These results can also help in assessing the need for proper safeguards around sharing an interim result, when deemed appropriate by the DSMB and at its discretion, that prevent the introduction of bias that could alter the final trial results. / Thesis / Doctor of Philosophy (PhD)
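For readers unfamiliar with the power measures discussed above, a standard group-sequential formulation is sketched below purely for orientation; this is a generic textbook definition and an assumption on our part, since the dissertation's exact definitions of the ACP and UCP may differ. At information fraction t with interim z-statistic Z_t and assumed standardized drift theta:

```latex
% Conditional power given the interim data (generic B-value formulation):
\[
  \mathrm{CP}(\theta \mid Z_t, t) \;=\;
  \Phi\!\left( \frac{Z_t\sqrt{t} \;-\; z_{1-\alpha} \;+\; \theta\,(1-t)}{\sqrt{1-t}} \right),
\]
% whereas the unconditional power ignores the interim data altogether:
\[
  \mathrm{Power}(\theta) \;=\; \Phi\!\left( \theta - z_{1-\alpha} \right).
\]
```

The conditional quantity depends directly on the interim statistic Z_t, which is one way to see why sharing it can unmask interim results, whereas the unconditional quantity does not.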
46

Establishing Verifiable Trust in Collaborative Health Research

Sutton, Andrew January 2018 (has links)
Collaborative health research environments usually involve sharing private health data between a number of participants, including researchers at different institutions. The inclusion of AI systems as participants in this environment allows predictive analytics to be applied to the research data and better diagnoses to be provided. However, the growing number of researchers and AI systems working together raises the problem of protecting the privacy of data contributors and managing the trust among participants, which affects the overall collaboration effort. In this thesis, we propose an architecture that utilizes blockchain technology for enabling verifiable trust in collaborative health research environments, so that participants who do not necessarily trust each other can effectively collaborate to achieve a research goal. Provenance management of research data and privacy auditing are key components of the architecture that allow participants’ actions, and their compliance with privacy policies, to be checked across the research pipeline. The architecture supports distributed trust between participants through a Linked Data-based blockchain model that allows tamper-proof audit logs to be created, preserving log integrity and participant non-repudiation. To maintain the integrity of the audit logs, we investigate state-of-the-art methods of generating cryptographic hashes for RDF datasets. We demonstrate an efficient method of computing integrity proofs that constructs a sorted Merkle tree for growing RDF datasets based on timestamps (as a key) that are extractable from the dataset. Evaluations of our methods through experimental realizations, and analyses of their resiliency to common security threats, are provided. / Thesis / Master of Science (MSc) / Collaborative health research environments involve the sharing of private health data between a number of participants, including researchers at different institutions. The inclusion of AI systems as participants in this environment allows predictive analytics to be applied to the research data to provide better diagnoses. In such environments, where private health data is shared among diverse participants, the maintenance of trust between participants and the auditing of data transformations across the environment are important for protecting the privacy of data contributors. Preserving the integrity of these transformations is paramount for supporting transparent auditing processes. In this thesis, we propose an architecture for establishing verifiable trust and transparency among participants in collaborative health research environments, present a model for creating tamper-proof privacy audit logs that support the privacy management of data contributors, and analyze methods for verifying the integrity of all logged data activities in the research environment.
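The timestamp-sorted Merkle construction mentioned above can be illustrated with a short, self-contained sketch. The statement format, hashing choices, and odd-node convention below are illustrative assumptions rather than the exact scheme evaluated in the thesis.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a Merkle root over a list of leaf hashes, duplicating the last
    node when a level has odd length (a common convention)."""
    if not leaves:
        return h(b"")
    level = leaves
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical RDF audit statements annotated with extractable timestamps.
statements = [
    ("2018-03-02T10:15:00", "<ex:log2> <ex:accessedBy> <ex:researcherB> ."),
    ("2018-03-01T09:00:00", "<ex:log1> <ex:accessedBy> <ex:researcherA> ."),
    ("2018-03-03T11:30:00", "<ex:log3> <ex:accessedBy> <ex:aiSystem1> ."),
]

# Sort by timestamp (the key), hash each statement, then build the tree.
leaves = [h(stmt.encode()) for _, stmt in sorted(statements)]
print("integrity proof (Merkle root):", merkle_root(leaves).hex())

# Appending newer statements only extends the right side of the sorted leaf
# sequence, which is what makes the construction convenient for growing datasets.
```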
47

Promoting Systematic Practices for Designing and Developing Edge Computing Applications via Middleware Abstractions and Performance Estimation

Dantas Cruz, Breno 09 April 2021 (has links)
Mobile, IoT, and wearable devices have been transitioning from passive consumers to active generators of massive amounts of user-generated data. Edge-based processing eliminates network bottlenecks and improves data privacy. However, developing edge applications remains hard, and developers often have to employ ad-hoc software development practices to meet their requirements. In doing so, they introduce low-level and hard-to-maintain code into the codebase, which is error-prone, expensive to maintain, and vulnerable in terms of security. The thesis of this research is that modular middleware abstractions, exemplar use cases, and ML-based performance estimation can make the design and development of edge applications more systematic. To prove this thesis, this dissertation comprises three research thrusts: (1) understand the characteristics of edge-based applications in terms of their runtime, architecture, and performance; (2) provide exemplary use cases to support the development of edge-based applications; (3) innovate in the realm of middleware to address the unique challenges of edge-based data transfer and processing. We provide programming support and performance estimation methodologies to help edge-based application developers improve their software development practices. This dissertation is based on three conference papers, presented at MOBILESoft 2018, VTC 2020, and IEEE SMDS 2020. / Doctor of Philosophy / Mobile, IoT, and wearable devices are generating massive volumes of user data. Processing this data can reveal valuable insights. For example, a wearable device collecting its user's vitals can use the collected data to provide health advice. Typically the collected data is sent to remote computing resources for processing. However, due to the vastly increasing volumes of such data, it becomes infeasible to transfer it efficiently over the network. Edge computing is an emerging system architecture that employs nearby devices for processing and can be used to alleviate this data transfer problem. However, it remains hard to design and develop edge computing applications, making the task one reserved for expert developers. This dissertation is concerned with democratizing the development of edge applications, so that the task becomes accessible to regular developers. The overriding idea is to make the design and implementation of edge applications more systematic by means of programming support, exemplary use cases, and methodologies.
48

User-Centric Security and Privacy Mechanisms in Untrusted Networking and Computing Environments

Li, Ming 13 July 2011 (has links)
"Our modern society is increasingly relying on the collection, processing, and sharing of digital information. There are two fundamental trends: (1) Enabled by the rapid developments in sensor, wireless, and networking technologies, communication and networking are becoming more and more pervasive and ad hoc. (2) Driven by the explosive growth of hardware and software capabilities, computation power is becoming a public utility and information is often stored in centralized servers which facilitate ubiquitous access and sharing. Many emerging platforms and systems hinge on both dimensions, such as E-healthcare and Smart Grid. However, the majority information handled by these critical systems is usually sensitive and of high value, while various security breaches could compromise the social welfare of these systems. Thus there is an urgent need to develop security and privacy mechanisms to protect the authenticity, integrity and confidentiality of the collected data, and to control the disclosure of private information. In achieving that, two unique challenges arise: (1) There lacks centralized trusted parties in pervasive networking; (2) The remote data servers tend not to be trusted by system users in handling their data. They make existing security solutions developed for traditional networked information systems unsuitable. To this end, in this dissertation we propose a series of user-centric security and privacy mechanisms that resolve these challenging issues in untrusted network and computing environments, spanning wireless body area networks (WBAN), mobile social networks (MSN), and cloud computing. The main contributions of this dissertation are fourfold. First, we propose a secure ad hoc trust initialization protocol for WBAN, without relying on any pre-established security context among nodes, while defending against a powerful wireless attacker that may or may not compromise sensor nodes. The protocol is highly usable for a human user. Second, we present novel schemes for sharing sensitive information among distributed mobile hosts in MSN which preserves user privacy, where the users neither need to fully trust each other nor rely on any central trusted party. Third, to realize owner-controlled sharing of sensitive data stored on untrusted servers, we put forward a data access control framework using Multi-Authority Attribute-Based Encryption (ABE), that supports scalable fine-grained access and on-demand user revocation, and is free of key-escrow. Finally, we propose mechanisms for authorized keyword search over encrypted data on untrusted servers, with efficient multi-dimensional range, subset and equality query capabilities, and with enhanced search privacy. The common characteristic of our contributions is they minimize the extent of trust that users must place in the corresponding network or computing environments, in a way that is user-centric, i.e., favoring individual owners/users."
49

Vers l'émergence d'un connectome sémantique cérébral humain par le biais de l'IRM et de la tractographie / Toward the emergence of a semantic human connectome using MRI and tractography

Moreau, Tristan 26 June 2015 (has links)
The human brain contains a great number of interconnected neurons forming white matter fiber bundles that transmit nerve impulses between different regions. In this thesis, several aspects of anatomical brain connectivity were studied using Magnetic Resonance Imaging (MRI) and tractography. Tractography is currently the only method that allows white matter fiber bundles to be reconstructed, in part, in the living human brain in a non-invasive way. (1) A first study aimed to characterize quantitatively the short fronto-parietal association bundles reconstructed by tractography between the precentral and postcentral gyri (the central region) in twenty healthy subjects. (2) A second study aimed to define a new parcellation scheme (i.e., to subdivide the brain into different macroscopic regions) using connectivity patterns reconstructed by tractography as the main structural criterion. (3) Lastly, a third study aimed to create a neuroanatomical ontology in order to represent macroscopic gray matter regions connected by white matter fiber bundles reconstructed by tractography and to annotate human connectomics datasets automatically. The use of common DL (Description Logic) reasoners allowed new axioms to be inferred automatically, concerning in particular part-whole, connectivity, and spatial neighborhood relationships.
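As an illustration of the kind of connectivity-driven parcellation described in study (2), the sketch below groups voxels by the similarity of their tractography-derived connectivity profiles. The matrix shape, number of clusters, and use of k-means are illustrative assumptions, not the method actually developed in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical connectivity matrix: one row per seed voxel, one column per
# target region, entries = streamline counts from tractography.
rng = np.random.default_rng(0)
connectivity = rng.poisson(lam=5.0, size=(500, 40)).astype(float)

# Normalize each voxel's profile so clustering reflects the pattern of
# connections rather than the overall streamline count.
profiles = connectivity / connectivity.sum(axis=1, keepdims=True)

# Group voxels with similar connectivity fingerprints into parcels.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)
print("voxels per parcel:", np.bincount(labels))
```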
50

Une approche PLM pour supporter les collaborations et le partage des connaissances dans le secteur médical : Application aux processus de soins par implantation de prothèses / A PLM based approach for supporting collaboration and knowledge management in the medical domain : Application to the treatment process requiring prosthesis implantation

Ngo, Thanh Nghi 27 June 2018 (has links)
The medical sector is a dynamic domain in constant evolution, requiring continuous improvement of its business processes and intelligent assistance to the actors involved. This research focuses on the medical treatment process requiring prosthesis implantation. The specificity of such a process is that it brings into connection two lifecycles belonging respectively to the medical and engineering domains, which implies several collaborative actions between stakeholders from heterogeneous disciplines. However, problems of communication and knowledge sharing may occur because of the heterogeneity of the semantics used and the business practices specific to each domain. In this context, this PhD work is interested in the potential of knowledge engineering and product lifecycle management (PLM) approaches to cope with these problems. To do so, a conceptual framework is proposed for analyzing the links between the disease (medical domain) and prosthesis (engineering domain) lifecycles. Based on this analysis, a semantic ontology model for the medical domain is defined as part of a global knowledge-based PLM approach. The application of the proposal is demonstrated through the implementation of useful functions in the AUDROS PLM software.
