41

Tangled in a complex web of relationships: college athletic/academic advisors' communicative management of student-athletes' private disclosures

Thompson, Jason J. January 2008 (has links)
Thesis (Ph.D.)--University of Nebraska-Lincoln, 2008. Title from title screen (site viewed Jan. 15, 2009). PDF text: iii, 252 p.; 916 K. UMI publication number: AAT 3323494. Includes bibliographical references. Also available in microfilm and microfiche formats.
42

On the linguistic constitution of research practices

Carlin, Andrew Philip January 1999 (has links)
This thesis explores sociologists' routine research activities, including observation, participant observation, interviewing, and transcription. It suggests that the constitutive activities of sociological research methods (writing field-notes, doing looking and categorising, and the endogenous structure of members' ordinary language transactions) are suffused with culturally methodic, i.e. ordinary-language, activities. "Membership categories" are the ordinary organising practices of description that society-members, including sociologists, routinely use in assembling the sense of settings. This thesis addresses the procedural bases of activities which are constituent features of the research: disguising the identities of informants, reviewing literature, writing up research outcomes, and compiling bibliographies. These activities are themselves loci of practical reasoning. Whilst these activities are assemblages of members' cultural methods, they have not been recognised as "research practices" by methodologically ironic sociology. The thesis presents a series of studies in Membership Categorisation Analysis. Using both sequential and membership-categorisational aspects of Conversation Analysis, as well as textual analysis of published research, this thesis examines how members' cultural practices coincide with research practices. Data are derived from a period of participant observation in an organisation, video-recordings of the organisation's work, and interviews following the 1996 bombing in Manchester. A major, cumulative theme within this thesis is confidentiality: within an organisation, within a research project, and within sociology itself. Features of confidentiality are explored through ethnographic observation, textual analysis and Membership Categorisation Analysis. Membership Categorisation Analysis brings seen-but-unnoticed features of confidentiality into relief.
Central to the thesis are the works of Edward Rose, particularly his ethnographic inquiries of Skid Row, and Harvey Sacks, on the cultural logic shared by society-members. Rose and Sacks explicate the visibility and recognition of members' activities to other members, and research activities as linguistic activities.
43

A privacy protection model to support personal privacy in relational databases.

Oberholzer, Hendrik Johannes 02 June 2008 (has links)
Today's individual insists on more protection of his/her personal privacy than a few years ago. During the last few years, rapid technological advances, especially in the field of information technology, have directed most attention and energy to the privacy protection of the Internet user. A vast body of research has been done, and is still being done, on protecting the privacy of transactions performed on the Internet. However, almost no research has been done on protecting the privacy of personal data stored in the tables of a relational database. Until now the individual has had no say in the way his/her personal data may be used, no way to indicate who may or may not access the data, and no way to indicate the level of sensitivity he/she attaches to the use of his/her personal data or exactly what he/she consented to. Therefore, the primary aim of this study was to develop a model to protect the personal privacy of the individual in relational databases in such a way that the individual can specify how sensitive he/she regards the privacy of his/her data. This aim culminated in the development of the Hierarchical Privacy-Sensitive Filtering (HPSF) model. A secondary aim was to test the model by implementing it in query languages, and thereby to determine the potential of query languages to support the implementation of the HPSF model. Oracle SQL served as an example of a text- or command-based query language, while Oracle SQL*Forms served as an example of a graphical user interface. The study showed that SQL could support implementation of the model only partially, but that SQL*Forms was able to support implementation of the model completely. An overview of the research approach employed to realise the objectives of the study: Firstly, the concepts of privacy were studied to narrow the field of study down to personal privacy and its definition.
Problems relating to the violation or abuse of the individual's personal privacy were researched. Secondly, the right to privacy was researched at national and international level. Based on guidelines set by organisations such as the Organisation for Economic Co-operation and Development (OECD) and the Council of Europe (COE), requirements were determined for protecting the personal privacy of the individual. Thirdly, existing privacy protection mechanisms, such as privacy administration, self-regulation, and automated regulation, were studied to see what mechanisms are currently available and how they function in the protection of privacy. Probably the most sensitive data about an individual is his/her medical data. Therefore, to conclude the literature study, the privacy of electronic medical records and the mechanisms proposed to protect the personal privacy of patients were investigated; the protection of patients' personal privacy seemed to serve as the best example to use in the development of a privacy model. Eventually, the Hierarchical Privacy-Sensitive Filtering model was developed and introduced, and the potential of Oracle SQL and Oracle SQL*Forms to implement the model was investigated. The conclusion at the end of the dissertation summarises the study and suggests further research topics. / Prof. M.S. Olivier
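The abstract does not describe the internals of the HPSF model. As a rough, hypothetical illustration of the general idea it names (filtering query results against per-field sensitivity levels chosen by the data subject), one might sketch the following; all field names, levels, and the clearance scheme are assumptions for illustration, not the model as defined in the dissertation:

```python
# Hypothetical sketch of hierarchical privacy-sensitive filtering.
# The hierarchy, field names, and clearance labels are illustrative
# assumptions, not the HPSF model from the dissertation.

# Sensitivity hierarchy: a higher number means more sensitive.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def filter_record(record, sensitivity, clearance):
    """Return a copy of `record` in which every field whose
    subject-assigned sensitivity exceeds the requester's clearance
    is masked. Unlabelled fields default to the most sensitive level."""
    allowed = LEVELS[clearance]
    return {
        field: (value
                if LEVELS[sensitivity.get(field, "secret")] <= allowed
                else "<withheld>")
        for field, value in record.items()
    }

patient = {"name": "A. Smith", "city": "Pretoria", "diagnosis": "asthma"}
# The data subject labels the sensitivity of each field of his/her data.
prefs = {"name": "internal", "city": "public", "diagnosis": "secret"}

print(filter_record(patient, prefs, "internal"))
```

With an "internal" clearance the name and city are returned, while the diagnosis, labelled "secret" by the data subject, is withheld.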
44

Genetic information and the family : a challenge to medical confidentiality

Lacroix, Mireille, 1971- January 2003 (has links)
No description available.
45

Interviewer Trustworthiness and Intended Self-Disclosure as a Function of Verbal and Nonverbal Assurances of Confidentiality

Jordan, Randall G. 01 January 1985 (has links) (PDF)
This study attempted to clarify to what degree assurances of confidentiality and interviewer behavior protective of confidentiality affected an interviewee's trust of an interviewer and subsequent willingness to self-disclose. Ninety-six undergraduates were asked interview questions. Male and female subjects were divided into four conditions: confidentiality statement/protective behavior, confidentiality statement/nonprotective behavior, neutral statement/protective behavior, and neutral statement/nonprotective behavior. The Intended Self-Disclosure Questionnaire and the Counselor Rating Form were used to measure self-disclosure and trustworthiness levels. Results did not support the main hypothesis that protective behavior would have a greater impact on self-disclosure and trustworthiness than verbal assurances of confidentiality. However, assurances of confidentiality did lead to significantly higher trust levels. Responses to a post-questionnaire revealed over-reporting of confidentiality instructions. Implications for therapy and future research are discussed.
46

Datenaustausch zwischen Arbeitgeber und Versicherung: Probleme der Bearbeitung von Gesundheitsdaten der Arbeitnehmer bei der Begründung des privatrechtlichen Arbeitsverhältnisses [Data exchange between employer and insurer: problems in the processing of employees' health data when establishing a private-law employment relationship]

Pärli, Kurt. January 2003 (has links)
Thesis (doctoral)--Universität St. Gallen, 2003. Includes bibliographical references (p. xxxiii-l).
47

Die anwaltliche Verschwiegenheitspflicht in Deutschland und Frankreich: unter besonderer Beachtung der sich aus dem grenzüberschreitenden Rechtsverkehr ergebenden Kollisionsfälle [The lawyer's duty of confidentiality in Germany and France: with particular attention to conflicts arising from cross-border legal practice]

Wild, Maximiliane-Stephanie. January 2008 (has links)
Also published as: doctoral dissertation, Universität Kiel, 2008. Includes bibliographical references.
48

Confidential Computing in Public Clouds: Confidential Data Translations in hardware-based TEEs: Intel SGX with Occlum support

Yulianti, Sri January 2021 (has links)
As enterprises migrate their data to cloud infrastructure, they increasingly need a flexible, scalable, and secure marketplace for collaborative data creation, analysis, and exchange. Security is a prominent research challenge in this context, with a specific question of how two mutually distrusting data owners can share their data. Confidential Computing helps address this question by allowing data computation to be performed inside hardware-based Trusted Execution Environments (TEEs), referred to here as enclaves: secure memory regions allocated by the CPU. Examples of hardware-based TEEs are Advanced Micro Devices (AMD) Secure Encrypted Virtualization (SEV), Intel Software Guard Extensions (SGX), and Intel Trust Domain Extensions (TDX). Intel SGX is considered the most popular hardware-based TEE, since it is widely available in processors targeting desktop and server platforms. Intel SGX can be programmed using a Software Development Kit (SDK) as the development framework and Library Operating Systems (Library OSes) as runtimes. However, communication with software inside the enclave, such as the Library OS, through system calls may result in performance overhead. In this project, we design confidential data transactions among multiple users, using Intel SGX as the TEE hardware and Occlum as the Library OS. We implement the design by allowing two clients, as data owners, to share their data with a server that owns an Intel SGX-capable platform. On the server side, we run machine learning model inference with inputs from both clients inside an enclave. We aim to evaluate Occlum as a memory-safe Library OS that enables secure and efficient multitasking on Intel SGX by measuring two aspects: performance overhead and security benefits. To evaluate the measurement results, we compare Occlum with two other runtimes: baseline Linux and Graphene-SGX.
The evaluation results show that our design with Occlum outperforms Graphene-SGX by 4x in terms of performance. To evaluate the security aspects, we propose 11 threat scenarios potentially launched by both internal and external attackers against the design on the SGX platform. The results show that Occlum's security features succeed in mitigating 10 of the 11 threat scenarios.
49

Företagshemlighet eller personligt kunnande? En uppsats om problematiken med och behovet av företagshemligheter och konkurrensklausuler [Trade secret or personal know-how? An essay on the problems with, and the need for, trade secrets and non-compete clauses]

Jönsson, Elin January 2016 (has links)
The need to keep confidential business information within the company is increasing in today's knowledge-based society. Trade secrets are now an asset for entrepreneurs and important for competitiveness. These secrets are sometimes shared with employees, and the more the secrets spread, the more vulnerable the employer becomes. To prevent trade secrets from being disclosed, there is a law on confidential information, and non-compete clauses can be included in employment contracts. Nevertheless, the need to protect confidential information must be weighed against the right of workers to freely use their skills. This paper aims to examine the legal situation and the legal balance between both parties under the law on confidential information and non-compete clauses, on the basis of a legal science method. It also aims to examine the use of non-compete agreements from a gender perspective. The purpose of the paper has led to the following research questions: How can the legal framework of trade secrets and non-compete clauses be understood from an employer and an employee perspective, and what are its consequences? From a gender perspective, what consequences does the balance between the employer's need to protect confidential information and the employee's need to remain competitive on the labor market after employment have? The paper shows that there are weaknesses in the law on confidential information from the employer's perspective, and that the law does not stifle employees' competitiveness. Non-compete agreements, however, may restrict the mobility of employees and are often regarded as unfair by Swedish courts. The problem, though, is that freedom of contract prevails, and such agreements are valid until an arbitration tribunal or court rules otherwise. The study indicates that it is mostly men who are subject to non-compete clauses, which can lead to an improvement of women's position in the labor market.
50

Multiple Imputation Methods for Nonignorable Nonresponse, Adaptive Survey Design, and Dissemination of Synthetic Geographies

Paiva, Thais Viana January 2014 (has links)
This thesis presents methods for multiple imputation that can be applied to missing data and to data with confidential variables. Imputation is useful for missing data because it results in a data set that can be analyzed with complete-data statistical methods. The missing data are filled in by values generated from a model fit to the observed data. The model specification depends on the observed data pattern and the missing data mechanism. For example, when the reason the data are missing is related to the outcome of interest, that is, nonignorable missingness, we need to alter the model fit to the observed data to generate the imputed values from a different distribution. Imputation is also used for generating synthetic values for data sets with disclosure restrictions. Since the synthetic values are not actual observations, they can be released for statistical analysis. The interest is in fitting a model that approximates well the relationships in the original data, keeping the utility of the synthetic data, while preserving the confidentiality of the original data. We consider applications of these methods to data from the social sciences and epidemiology.

The first method is for imputation of multivariate continuous data with nonignorable missingness. Regular imputation methods have been used to deal with nonresponse in several types of survey data. However, in some of these studies, the assumption of missing at random is not valid, since the probability of missingness depends on the response variable. We propose an imputation method for multivariate data sets when there is nonignorable missingness. We fit a truncated Dirichlet process mixture of multivariate normals to the observed data under a Bayesian framework to provide flexibility. With the posterior samples from the mixture model, an analyst can alter the estimated distribution to obtain imputed data under different scenarios. To facilitate this, I developed an R application that allows the user to alter the values of the mixture parameters and visualize the imputation results automatically. I demonstrate this process of sensitivity analysis with an application to the Colombian Annual Manufacturing Survey. I also include a simulation study to show that the correct complete-data distribution can be recovered if the true missing data mechanism is known, thus validating that the method can be meaningfully interpreted to do sensitivity analysis.

The second method uses the imputation techniques for nonignorable missingness to implement a procedure for adaptive design in surveys. Specifically, I develop a procedure that agencies can use to evaluate whether or not it is effective to stop data collection. This decision is based on utility measures that compare the data collected so far with potential follow-up samples. The options are assessed by imputation of the nonrespondents under different missingness scenarios considered by the analyst. The variation in the utility measures is compared to the cost induced by the follow-up sample sizes. We apply the proposed method to the 2007 U.S. Census of Manufactures.

The third method is for imputation of confidential data sets with spatial locations using disease mapping models. We consider data that include fine geographic information, such as census tract or street block identifiers. This type of data can be difficult to release as public use files, since fine geography provides information that ill-intentioned data users can use to identify individuals. We propose to release data with simulated geographies, so as to enable spatial analyses while reducing disclosure risks. We fit disease mapping models that predict areal-level counts from attributes in the file, and sample new locations based on the estimated models. I illustrate this approach using data on causes of death in North Carolina, including evaluations of the disclosure risks and analytic validity that can result from releasing synthetic geographies.

Dissertation
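The multiple-imputation workflow the abstract describes (fill in missing values from a model fit to the observed data, repeat M times, then pool the analyses) can be sketched minimally as follows. This is only the skeleton under strong simplifying assumptions: a univariate normal model and data missing at random, not the truncated Dirichlet process mixture or the nonignorable mechanisms developed in the thesis, and a textbook-proper implementation would also draw the model parameters from their posterior before each imputation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a normal sample in which roughly 30% of values are missing.
y = rng.normal(loc=10.0, scale=2.0, size=200)
missing = rng.random(200) < 0.3
y_obs = y[~missing]

M = 20                      # number of imputed data sets
means, variances = [], []
for _ in range(M):
    # Fit a simple model to the observed data (here a normal) and draw
    # imputations from it.  (A fully proper MI would redraw mu and sigma
    # from their posterior each round; the thesis fits a DP mixture.)
    mu_hat, sigma_hat = y_obs.mean(), y_obs.std(ddof=1)
    y_imp = y.copy()
    y_imp[missing] = rng.normal(mu_hat, sigma_hat, size=missing.sum())
    # Analyze each completed data set with complete-data methods.
    means.append(y_imp.mean())
    variances.append(y_imp.var(ddof=1) / len(y_imp))

# Pool across imputations with Rubin's rules.
q_bar = np.mean(means)              # pooled point estimate
u_bar = np.mean(variances)          # within-imputation variance
b = np.var(means, ddof=1)           # between-imputation variance
total_var = u_bar + (1 + 1 / M) * b
print(q_bar, total_var)
```

The between-imputation term b is what distinguishes multiple from single imputation: it propagates the uncertainty about the missing values into the pooled variance, which single imputation understates.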
