About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Demand response of domestic consumers to dynamic electricity pricing in low-carbon power systems

McKenna, Eoghan January 2013 (has links)
The ability of domestic consumers to provide demand response to dynamic electricity pricing will become increasingly valuable for integrating the high penetrations of renewables that are expected to be connected to electricity networks in the future. The aim of this thesis is to investigate whether domestic consumers will be willing and able to provide demand response in such low-carbon futures. A broad approach is presented in this thesis, with research contributions on subjects including data privacy, behavioural economics, and battery modelling. The principal argument of the thesis is that studying the behaviour of consumers with grid-connected photovoltaic ('PV') systems can provide insight into how consumers might respond to dynamic pricing in future low-carbon power systems, as both experience irregular electricity prices that are correlated with intermittent renewable generation. Through a combination of statistical and qualitative methods, this thesis investigates the demand response behaviour of consumers with PV systems in the UK. The results demonstrate that these consumers exhibit demand response behaviour by increasing demand during the day and decreasing demand during the evening. Furthermore, this effect is more pronounced on days with higher irradiance. The results are novel in three ways. First, they provide quantified evidence that suggests that domestic consumers with PV systems engage in demand response behaviour. Second, they provide evidence of domestic consumers responding to irregular electricity prices that are correlated with intermittent renewable generation, thereby addressing the aim of this thesis, and supporting the assumption that consumers can be expected to respond to dynamic pricing in future markets with high penetrations of renewables. Third, they provide evidence of domestic consumers responding to dynamic pricing that is similar to real-time pricing, while prior evidence of this is rare and confined to the USA.
42

Data Privacy in Call Centers: A Survey of the Recording and Use of Personal Data

Hrach, Christian, Alt, Rainer 25 January 2012 (has links) (PDF)
Service providers in the telecommunications sector are obliged, not least from a legal standpoint, to handle personal data sensitively. This applies not only to customer data but equally to employee-related data used in running a call center. Depending on the situation and use case, the permissible uses of this data in call centers are governed by the general right of personality and the German Federal Data Protection Act (BDSG). For the development and deployment of call-center-specific application systems (e.g., campaign management systems, dialers), this creates the challenge of ensuring compliance with legal provisions on the one hand, while meeting the often highly detailed information needs of call center management on the other. Besides legal restrictions on the handling of customer data, the limits and grey areas concerning the use of performance data for employee monitoring and appraisal (e.g., covert listening-in or call recording) must be taken into account.
43

Geotagging in social media : exploring the privacy paradox

Menfors, Martina, Fernstedt, Felicia January 2015 (has links)
Increasingly, online social media networks allow users to use geotagging. This method of adding location data to various content shared in real time has introduced privacy-related issues and threats to the users of such networks. Previous research presents opposing findings on whether users actually care about their location privacy or not, and it has also been shown that users often display behaviour inconsistent with their concerns. When asked, users tend to report high privacy concerns, but then do not let those concerns affect or limit their behaviour online; the privacy paradox is a description of this dichotomy. The problem, however, is not only that location privacy seems to be a paradoxical issue; the sharing of location data provides users with new possibilities that can potentially have negative consequences for them, such as someone else being able to identify one's identity, home location, habits or other sensitive information. Social media network users communicate that part of this is due to the lack of control over which information they share, with whom and where. This study employs a qualitative method, using unstructured interviews in a pre-study and a self-completion questionnaire. The purpose of the study is to examine and gain a better understanding of how the privacy paradox can help to better explain users' location data disclosure preferences in the context of social media networking, and to help social media network developers reduce privacy-related issues in social media networking applications with geotagging capabilities. The findings indicate that the paradox is indeed evident in users' stated geotagging behaviour, and that users are slightly more worried about their location privacy than their overall online privacy.
The conclusions offer a couple of different explanations for the paradox, and we argue that the contradiction of the paradox can be seen as a constant trade-off between benefits and risks of geotagging. We also give some examples of such advantages and disadvantages.
44

Public Awareness of Data Privacy and its Effects

Sichel, Grant 04 May 2021 (has links)
No description available.
45

Fighting the Biggest Lie on the Internet : Improving Readership of Terms of Service and Privacy Policies

Ziegenbein, Marius-Lukas January 2022 (has links)
When joining a new service, in order to access its features, users are often required to accept the terms of service and privacy policy. However, the readership of these documents is mostly non-existent, leaving an information asymmetry: an imbalance of knowledge between two parties. Due to this, users are sacrificing their online data privacy without being aware of the consequences. The purpose of this work is to investigate the readership of terms of service and privacy policies among users of social media services. We implemented a prototype called 'ShareIt', which resembles a photo-sharing platform, to gain insight into readership, behavior, and the effectiveness of our adjusted presentations of terms of service and privacy policies with regard to readership and comprehension. We conducted a survey experiment using the prototype with 31 participants and concluded that 80.6% of our participants did not spend more than ten seconds on our terms of service and privacy policy. The observed behavior suggests that social media users are used to sharing information on the internet, which, in addition to their trust in online services, leads to the aforementioned low readership. We presented adjustments to the presentation of terms of service and privacy policies which showed a slight tendency toward higher engagement in comparison to the current way of accessing these documents. Due to the low overall readership among our participants, however, this result remains tentative and needs further investigation.
46

[en] THE ROLE OF INTERFACE DESIGN AS AN ENABLER IN THE COMMUNICATION OF PERSONAL DATA PROCESSING

ANA LUIZA CASTRO GERVAZONI 30 October 2023 (has links)
[en] The use of artificial intelligence models is changing the relationship between organizations and consumers, and the volume of digital products dependent on personal data is growing every day. Privacy policies are the primary tool for informing citizens about how their information will be handled by the companies with which they interact. However, currently, the interfaces of these instruments do not objectively communicate their information. The present study demonstrates that the application of design guidelines in privacy policies promotes a more satisfactory experience and faster acquisition of information from their content by users. The study methodology encompassed a bibliographical review, documentary research, adaptation of the Internet Users' Information Privacy Concerns scale, usability testing, and content analysis. Literature from the law and design fields was interconnected to identify legal requirements that could be addressed more effectively through design, the level of privacy concern of the study's participants was verified, and a comparative usability test was conducted. A replica of Facebook's policy was compared to a new interface, which included elements that represented design guidelines. The data showed a reduction in both the time to find information and the error rate among users who accessed the new proposal, as well as a higher frequency of positive statements regarding this version. This research enhances the understanding of how interface design affects the creation of such instruments by showing that following best practices in this area facilitates information acquisition.
47

Artificial Integrity: Data Privacy and Corporate Responsibility in East Africa

Hansson, Ebba January 2023 (has links)
While digital connectivity in East Africa is quickly increasing, the region is underregulated regarding data protection. Moreover, many existing laws are more state-interest-focused than human-rights-based. When comprehensive regulations are not in place, greater regulatory pressure is placed on the actors operating in the tech market. Theoretically and conceptually, this accountability can be described through models such as Corporate Social Responsibility (CSR) and Corporate Digital Responsibility (CDR). Organisations use the two frameworks to map and manage their impact on society from an economic, environmental, and societal perspective. While CSR addresses their effects from a more general point of view, CDR has recently emerged in the business-ethics discourse to address the ethical considerations arising from the exponential growth of digital technologies and data. Through a multiple-case-study design, the main objective of this study was to provide practical insight into how actors manage data privacy-related issues in East Africa. Furthermore, the aim was also to evaluate the existing barriers that prevent the actors from fully implementing higher data responsibility ambitions. The results reveal that the observed actors are aware of the existing risks and mature enough to develop a comprehensive data responsibility agenda. However, there seems to be a gap between developing the policies and implementing them in practice. The lack of context-adjusted approaches to the CSR/CDR-related guidelines and actions can explain this gap.
48

Anonymizing Faces without Destroying Information

Rosberg, Felix January 2024 (has links)
Anonymization is a broad term, meaning that personal data, or rather data that identifies a person, is redacted or obscured. In the context of video and image data, the most palpable information is the face. Faces barely change compared to other aspects of a person, such as clothes, and we as people already have a strong sense of recognizing faces. Computers are also adroit at recognizing faces, with facial recognition models being exceptionally powerful at identifying and comparing faces. It is therefore generally considered important to obscure the faces in video and images when aiming to keep them anonymized. Traditionally this is done simply through blurring or masking. But this destroys useful information such as eye gaze, pose, expression, and the fact that it is a face. This is a particular issue, as our society today is data-driven in many aspects. One obvious such aspect is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Due to the data hunger of deep learning, in conjunction with society's call for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), anonymization that preserves useful information becomes important. This thesis investigates the potential and possible limitations of anonymizing faces without destroying the aforementioned useful information. The base approach to achieve this is through face swapping and face manipulation, where current research focuses on changing the face (or identity) while keeping the original attribute information, all while being incorporated and consistent in an image and/or video. Specifically, this thesis demonstrates how target-oriented and subject-agnostic face swapping methodologies can be utilized for realistic anonymization that preserves attributes.
Through this, the thesis points out several approaches that are: 1) controllable, meaning the proposed models do not naively change the identity; rather, the kind and magnitude of the identity change are adjustable, and thus tunable to guarantee anonymization; 2) subject-agnostic, meaning that the models can handle any identity; and 3) fast, meaning that the models run efficiently and thus have the potential of running in real time. The end product consists of an anonymizer that achieved state-of-the-art performance on identity transfer, pose retention, and expression retention while providing realism. Apart from identity manipulation, the thesis demonstrates potential security issues, specifically reconstruction attacks, where a bad-actor model learns convolutional traces/patterns in the anonymized images in such a way that it is able to completely reconstruct the original identity. The bad-actor network is able to do this with simple black-box access to the anonymization model, by constructing a pair-wise dataset of unanonymized and anonymized faces. To alleviate this issue, different defense measures that disrupt the traces in the anonymized image were investigated. The main takeaway from this is that something that qualitatively looks convincing at hiding an identity does not necessarily do so, making robust quantitative evaluations important.
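The black-box reconstruction attack described in this abstract begins by querying the anonymizer to assemble paired training data. A minimal sketch of that data-collection step follows; `anonymize` here is a hypothetical stand-in (a moving-average "blur" over flat pixel lists), not the thesis's actual model:

```python
import random

def anonymize(face):
    """Hypothetical black-box anonymizer: a moving-average 'blur' over a
    flat list of pixel values (stands in for the real face-swapping model)."""
    return [sum(face[max(0, i - 1):i + 2]) / len(face[max(0, i - 1):i + 2])
            for i in range(len(face))]

def build_attack_dataset(faces):
    """Pair each anonymized output with its original; a bad-actor model
    would then be trained on these pairs to invert the anonymization."""
    return [(anonymize(face), face) for face in faces]

faces = [[random.random() for _ in range(16)] for _ in range(8)]  # toy "faces"
pairs = build_attack_dataset(faces)
print(len(pairs), len(pairs[0][0]))
```

The attack needs nothing but query access, which is why the thesis's proposed defenses target the convolutional traces in the anonymizer's output rather than the access channel.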
49

Testing Privacy and Security of Voice Interface Applications in the Internet of Things Era

Shafei, Hassan, 0000-0001-6844-5100 04 1900 (has links)
Voice User Interfaces (VUI) are rapidly gaining popularity, revolutionizing user interaction with technology through widespread adoption in devices such as desktop computers, smartphones, and smart home assistants, thanks to significant advancements in voice recognition and processing technologies. Over a hundred million users now utilize these devices daily, and smart home assistants have been sold in massive numbers, owing to their ease and convenience in controlling a diverse range of smart devices within the home IoT environment through the power of voice, such as controlling lights, heating systems, and setting timers and alarms. VUI enables users to interact with IoT technology and issue a wide range of commands across various services using their voice, bypassing traditional input methods like keyboards or touchscreens. With ease, users can inquire in natural language about the weather, stock market, and online shopping, and access various other types of general information. However, as VUI becomes more integrated into our daily lives, it brings to the forefront issues related to security, privacy, and usability. Concerns such as the unauthorized collection of user data, the potential for recording private conversations, and challenges in accurately recognizing and executing commands across diverse accents, leading to misinterpretations and unintended actions, underscore the need for more robust methods to test and evaluate VUI services. In this dissertation, we delve into voice interface testing, evaluation of privacy and security associated with VUI applications, assessment of the proficiency of VUI in handling diverse accents, and investigation into access control in multi-user environments. We first study the privacy violations of the VUI ecosystem. We introduced the definition of the VUI ecosystem, where users must connect the voice apps to corresponding services and mobile apps to function properly.
The ecosystem can also involve multiple voice apps developed by the same third-party developers. We explore the prevalence of voice apps with corresponding services in the VUI ecosystem, assessing the landscape of privacy compliance among Alexa voice apps and their companion services. We developed a testing framework for this ecosystem. We present the first study conducted on the Alexa ecosystem specifically focusing on voice apps with account linking. Our framework analyzes both the privacy policies of these voice apps and their companion services, or the privacy policies of multiple voice apps published by the same developers. Using machine learning techniques, the framework automatically extracts data types related to data collection and sharing from these privacy policies, allowing for a comprehensive comparison. Prior researchers have also studied voice apps' behavior to conduct privacy violation assessments. An interaction approach with voice apps is needed to extract this behavior, where pre-defined utterances are input into a simulator to simulate user interaction. The set of pre-defined utterances is extracted from the skill's web page on the skill store. However, the accuracy of the testing analysis depends on the quality of the extracted utterances: an utterance or interaction that was not captured by the extraction process will not be detected, leading to inaccurate privacy assessment. Therefore, we revisited the utterance extraction techniques used by prior works to study skills' behavior for privacy violations. We focused on analyzing the effectiveness and limitations of existing utterance extraction techniques. We proposed a new technique that improves on prior extraction techniques by taking the union of those techniques and adding human interaction. Our proposed technique makes use of a small set of human interactions to record all missing utterances, then expands that to test a more extensive set of voice apps.
We also tested VUI across various accents by designing a testing framework that evaluates how well the VUI implemented in smart speakers caters to a diverse population. Recruiting individuals with different accents and instructing them to interact with the smart speaker while adhering to specific scripts is difficult. Thus, we proposed a framework known as AudioAcc, which facilitates evaluating VUI performance across diverse accents using YouTube videos. Our framework uses a filtering algorithm to ensure that the extracted spoken words used in constructing composite commands closely resemble natural speech patterns. Our framework is scalable; we conducted an extensive examination of VUI performance across a wide range of accents, encompassing both professional and amateur speakers. Additionally, we introduced a new metric called Consistency of Results (COR) to complement the standard Word Error Rate (WER) metric employed for assessing ASR systems. This metric enables developers to investigate and rewrite skill code based on the consistency of results, enhancing overall WER performance. Moreover, we looked into a special case related to the access control of VUI in multi-user environments. We proposed a framework for automated testing to explore access control weaknesses and determine whether the accessible data is of consequence. We used the framework to assess the effectiveness of voice access control mechanisms within multi-user environments. We show that the convenience of using voice systems poses privacy risks, as the user's sensitive data becomes accessible. We identify two significant flaws within the access control mechanisms proposed by the voice system, which can expose the user's private data. These findings underscore the need for enhanced privacy safeguards and improved access control systems within online shopping.
We also offer recommendations to mitigate risks associated with unauthorized access, shedding light on securing the user's private data within the voice systems. / Computer and Information Science
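The Word Error Rate metric referenced in this abstract is the standard ASR evaluation measure: word-level edit distance divided by the number of reference words. A generic sketch follows (this is not code from the dissertation, and the thesis-specific COR metric is not reproduced):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One deletion ("the") and one substitution ("lights" -> "light") out of 5 words.
print(wer("turn on the kitchen lights", "turn on kitchen light"))
```

A lower WER indicates a closer transcription; COR, as described above, complements this by measuring whether repeated queries yield consistent transcriptions.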
50

Secure and Privacy-aware Data Collection and Processing in Mobile Health Systems

Iwaya, Leonardo H January 2016 (has links)
Healthcare systems have assimilated information and communication technologies in order to improve the quality of healthcare and the patient's experience at reduced costs. The increasing digitalization of people's health information, however, raises new threats regarding information security and privacy. Accidental or deliberate breaches of health data may lead to societal pressures, embarrassment, and discrimination. Information security and privacy are paramount to achieving high-quality healthcare services and, further, to not harming individuals when providing care. With that in mind, we give special attention to the category of Mobile Health (mHealth) systems: that is, the use of mobile devices (e.g., mobile phones, sensors, PDAs) to support medical and public health. Such systems have been particularly successful in developing countries, taking advantage of the flourishing mobile market and the need to expand the coverage of primary healthcare programs. Many mHealth initiatives, however, fail to address security and privacy issues. This, coupled with the lack of specific legislation for privacy and data protection in these countries, increases the risk of harm to individuals. The overall objective of this thesis is to enhance knowledge regarding the design of security and privacy technologies for mHealth systems. In particular, we deal with mHealth Data Collection Systems (MDCSs), which consist of mobile devices for collecting and reporting health-related data, replacing paper-based approaches for health surveys and surveillance. This thesis consists of publications contributing to mHealth security and privacy in various ways: with a comprehensive literature review about mHealth in Brazil; with the design of a security framework for MDCSs (SecourHealth); with the design of a MDCS (GeoHealth); with the design of a Privacy Impact Assessment template for MDCSs; and with the study of ontology-based obfuscation and anonymisation functions for health data.
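Ontology-based obfuscation of the kind studied in this thesis replaces a specific value with a broader ancestor in a concept hierarchy, trading precision for privacy. A minimal sketch over an invented toy hierarchy (the thesis's actual ontologies and function names are not reproduced here):

```python
# Toy is-a hierarchy, child -> parent (invented for illustration only).
HIERARCHY = {
    "type 2 diabetes": "diabetes",
    "diabetes": "endocrine disorder",
    "endocrine disorder": "chronic condition",
}

def generalize(term: str, levels: int = 1) -> str:
    """Obfuscate a value by climbing `levels` steps up the hierarchy;
    terms without a parent (the root, or unknown terms) are returned as-is."""
    for _ in range(levels):
        term = HIERARCHY.get(term, term)
    return term

print(generalize("type 2 diabetes"))            # one level of generalization
print(generalize("type 2 diabetes", levels=3))  # coarser, more private
```

The number of levels climbed is the privacy/utility knob: more generalization discloses less about the individual but leaves less useful data for health surveillance.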
