531 |
Social media addiction: The paradox of visibility & vulnerability. Kempa, Ewelina. January 2015 (has links)
We currently post a large amount of personal information about ourselves on social media sites. Often, however, users of these services are poorly aware of the terms and conditions they agree to. There are in fact many techniques available that ensure user privacy, yet few organizations make the effort to put them in place. Making a profit is what matters to companies, and information on users is highly valued. It is the lack of regulations on data collection that enables organizations to disregard their users' privacy. The data that can be collected is vast; it is important to understand that everything we do online (every search, click, purchase and page view) is stored, and the information is often sold on to third parties. Using information on users, companies can profit by, for example, making predictions about what users are interested in buying. It is nevertheless very difficult to make long-lasting regulations, as the web constantly changes and grows. A qualitative study was conducted to observe to what extent social media addiction and its consequences are being discussed and researched. Interviews with social media users were also conducted. An analysis of the findings makes clear that many users would in fact like to have more privacy online, yet they feel the need to accept the terms and conditions anyway. Many users also state that they would happily read the terms and conditions, had they been written in a different way.
|
532 |
Social Big Data and Privacy Awareness. Sang, Lin. January 2015 (has links)
Based on the rapid development of Big Data, data from online social networks has become a major part of it. Big Data has made social networks data-oriented rather than social-oriented. Taking this into account, this dissertation presents a qualitative study of how the data-oriented social network affects its users' privacy management today. Within this dissertation, an overview of Big Data and privacy issues on social networks is presented as a background study. We adapted communication privacy management theory as a framework for further analysing how individuals manage their privacy on social networks. We study social networks as an entirety in this dissertation, and selected Facebook as a case study to present the connection between social networks, Big Data and privacy issues. The data supporting the results of this dissertation was collected through face-to-face, in-depth interviews. As a consequence, we found that people divide social networks into different levels of openness, according to their privacy concerns, in order to avoid privacy invasions and violations. They reduce and transfer their sharing from an open social network to a more closed one. However, the risk of privacy problems actually rises, because people neglect to understand how data is processed on social networks. They focus on managing their everyday sharing but too easily allow other applications to access their personal data on the social network (such as the Facebook profile).
|
533 |
The Challenges of Personal Data Markets and Privacy. Spiekermann-Hoff, Sarah; Böhme, Rainer; Acquisti, Alessandro; Hui, Kai-Lung. January 2015 (has links) (PDF)
Personal data is increasingly conceived as a tradable asset. Markets for personal information are emerging and new ways of valuating individuals' data are being proposed. At the same time, legal obligations over protection of personal data and individuals' concerns over its privacy persist. This article outlines some of the economic, technical, social, and ethical issues associated with personal data markets, focusing on the privacy challenges they raise.
|
534 |
Fast Computation on Processing Data Warehousing Queries on GPU Devices. Cyrus, Sam. 29 June 2016 (has links)
Current database management systems use Graphics Processing Units (GPUs) as dedicated accelerators to process each individual query, which results in underutilization of the GPU. When a single-query data warehousing workload was run on an open-source GPU query engine, utilization of the main GPU resources was found to be less than 25%. This low utilization then leads to low system throughput. To resolve this problem, this paper suggests transferring all of the desired data into the GPU's global memory and keeping it there until all queries have been executed as one batch. The PCIe transfer time from CPU to GPU is thereby minimized, which results in faster overall query processing. Execution time improved by up to 40% when running multiple queries, compared to dedicated processing.
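The batched-transfer idea can be sketched in miniature; the cost figures and query names below are illustrative assumptions, not measurements from the paper:

```python
# Sketch: per-query vs. batched host-to-GPU transfer (simulated costs).
# All numbers are illustrative assumptions, not measurements from the paper.

TRANSFER_COST = 10  # simulated PCIe transfer cost per data shipment, arbitrary units
EXEC_COST = 3       # simulated kernel execution cost per query

def dedicated(queries):
    """Each query ships its data over PCIe, runs, then frees GPU memory."""
    return sum(TRANSFER_COST + EXEC_COST for _ in queries)

def batched(queries):
    """Ship the shared data once, keep it resident, run all queries."""
    return TRANSFER_COST + sum(EXEC_COST for _ in queries)

queries = ["q1", "q2", "q3", "q4"]
print(dedicated(queries))  # 52
print(batched(queries))    # 22
```

With four queries the simulated batched cost is less than half the dedicated cost, since the one-time transfer is amortized over the whole batch.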
|
535 |
Matrices efficientes pour le traitement du signal et l'apprentissage automatique / Efficient matrices for signal processing and machine learning. Le Magoarou, Luc. 24 November 2016 (has links)
Les matrices, en tant que représentations des applications linéaires en dimension finie, jouent un rôle central en traitement du signal et des images et en apprentissage automatique. L'application d'une matrice de rang plein à un vecteur implique a priori un nombre d'opérations arithmétiques de l'ordre du nombre d'entrées non-nulles que contient la matrice. Cependant, il existe des matrices pouvant être appliquées bien plus rapidement, cette propriété étant d'ailleurs un des fondements du succès de certaines transformations linéaires, telles que la transformée de Fourier ou la transformée en ondelettes. Quelle est cette propriété? Est-elle vérifiable aisément? Peut-on approcher des matrices quelconques par des matrices ayant cette propriété? Peut-on estimer des matrices ayant cette propriété? La thèse s'attaque à ces questions en explorant des applications telles que l'apprentissage de dictionnaire à implémentation efficace, l'accélération des itérations d'algorithmes de résolution de problèmes inverses pour la localisation de sources, ou l'analyse de Fourier rapide sur graphe. / Matrices, as natural representations of linear mappings in finite dimension, play a crucial role in signal processing and machine learning. Multiplying a vector by a full-rank matrix a priori costs, in terms of arithmetic operations, of the order of the number of non-zero entries in the matrix. However, there exist matrices that can be applied much faster, this property being crucial to the success of certain linear transformations, such as the Fourier transform or the wavelet transform. What is the property that allows these matrices to be applied rapidly? Is it easy to verify? Can we approximate arbitrary matrices with ones having this property? Can we estimate matrices having this property? This thesis investigates these questions, exploring applications such as dictionary learning with efficient implementations, accelerating the iterations of algorithms for inverse problems such as source localization, and fast Fourier transforms on graphs.
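The property in question can be illustrated with the discrete Fourier transform (a minimal NumPy sketch, not code from the thesis): the DFT matrix is dense, yet the FFT applies the very same linear map as a product of sparse factors, in O(n log n) rather than O(n²) operations.

```python
# Sketch: a dense matrix-vector product vs. its "fast" structured equivalent.
import numpy as np

n = 8
# Dense DFT matrix: applying it directly costs O(n^2) operations.
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)

x = np.random.default_rng(0).standard_normal(n)

dense = F @ x         # generic dense product
fast = np.fft.fft(x)  # same linear map, applied via sparse factors in O(n log n)

print(np.allclose(dense, fast))  # True
```

Both computations realize the same linear operator; only the factorized structure of the second makes it fast, which is exactly the property the thesis seeks to impose on arbitrary matrices.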
|
536 |
Digitalisering av energikartläggningar: Ett verktyg för energikartläggning av komplexa fastigheter / Digitization of energy surveys: A tool for energy surveys of complex properties. Thorell, Johan. January 2017 (has links)
Swedish law has established that all major companies must carry out an energy survey of their estate and operations every four years. Hence, there is a need for a digital tool to accomplish this work. In this project such a software tool was developed. The tool includes methods that generate a report and calculate key figures through several smart forms and databases. With this, the consultant may save time. The goal was to design a general digital framework to efficiently handle complex buildings. This digital tool should be useful when Sweco maps complex buildings in other assignments. The sub-objectives of the project were to create a preliminary Sweco report, create a smart framework that alerts the user if unreasonable values exist, import values from databases, and provide indicative information for energy surveys. The digital tool has been found to be useful. It still needs some improvement, but the architecture is more or less implemented. The database function succeeds for the case with the customer and generates valuable results, but no general solution was found before the project ended.
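The "alert on unreasonable values" behaviour could be sketched as follows; the field names and plausibility ranges are hypothetical illustrations, not taken from the Sweco tool:

```python
# Sketch: flagging unreasonable values in an energy-survey form.
# Field names and plausibility ranges are hypothetical assumptions,
# not taken from the tool described in the thesis.

PLAUSIBLE_RANGES = {
    "heated_area_m2": (10, 500_000),
    "annual_heating_kwh": (1_000, 50_000_000),
    "indoor_temp_c": (10, 30),
}

def validate(entry):
    """Return a list of warnings for values outside their plausible range."""
    warnings = []
    for field, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = entry.get(field)
        if value is not None and not lo <= value <= hi:
            warnings.append(f"{field}={value} outside [{lo}, {hi}]")
    return warnings

entry = {"heated_area_m2": 1200, "annual_heating_kwh": 180_000, "indoor_temp_c": 95}
print(validate(entry))  # ['indoor_temp_c=95 outside [10, 30]']
```

A form built on such checks can warn the consultant at data-entry time instead of letting an implausible figure propagate into the generated report.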
|
537 |
The impact of product, service and in-store environment perceptions on customer satisfaction and behaviour. Manikowski, Adam. 09 1900 (has links)
Much previous research concerning the effects of the in-store experience on customers’ decision-making has been laboratory-based. There is a need for empirical research in a real store context to determine the impact of product, service and in-store environment perceptions on customer satisfaction and behaviour.
This study is based on a literature review (Project 1) and a large-scale empirical study (Projects 2/3) combining two sources of secondary data from the largest retailer in the UK, Tesco, and their loyalty ‘Clubcard’ provider, Dunnhumby. The data includes customer responses to an online self-completion survey of the shopping experience, combined with demographic and behavioural data from the loyalty card programme for the same individual. The total sample comprised n=30,696 Tesco shoppers. The online survey measured aspects of the in-store experience; these items were subjected to factor analysis to identify the influences on the in-store experience, with four factors emerging: assortment, retail atmosphere, personalised customer service and checkout customer service. These factors were then matched for each individual with behavioural and demographic data collected via the Tesco Clubcard loyalty programme. Regression and sensitivity analyses were then conducted to determine the relative impact of the in-store customer experience dimensions on customer behaviour.
Findings include that perceptions of customer service have a strong positive impact on customers’ overall shopping satisfaction and spending behaviour. Perceptions of the in-store environment and product quality/availability positively influence customer satisfaction but negatively influence the amount spent during the shopping trip. Furthermore, personalised customer service has a strong positive impact on spend and overall shopping satisfaction, which in turn positively influences the number of store visits the following week. However, an increase in shopping satisfaction stemming from positive perceptions of the in-store environment and product quality/availability helps to reduce their negative impact on spend the following week.
A key contribution of this study is to suggest a priority order for investment: retailers should prioritise personalised customer service and checkout customer service, followed by the in-store environment together with product quality and availability. These findings are important in the context of the many initiatives most retail operators undertake. Many retailers focus on cost-optimisation plans such as implementing self-service checkouts or easy-to-operate, clinical in-store environments. This research shows clearly which approach should be followed and what really matters to customers. The findings are therefore important for both retailers and academics, contributing to and expanding knowledge and practice on the impact of the in-store environment on the customer experience.
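The factor-analysis-then-regression pipeline can be sketched on synthetic data (NumPy only, with PCA standing in for the factor analysis used in the study; the variables, factor count and data are illustrative, not the Tesco/Dunnhumby data set):

```python
# Sketch: reduce survey items to latent scores, then regress spend on them.
# Synthetic data; PCA stands in for factor analysis. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Simulate 8 survey items driven by two latent factors
# (think "customer service" and "store environment").
latent = rng.standard_normal((n, 2))
loadings = rng.standard_normal((2, 8))
items = latent @ loadings + 0.3 * rng.standard_normal((n, 8))

# Outcome: spend, driven mainly by the first latent factor.
spend = 20 + 5 * latent[:, 0] + rng.standard_normal(n)

# Step 1: extract two components from the survey items (PCA via SVD).
centered = items - items.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:2].T

# Step 2: regress spend on the component scores.
X = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(X, spend, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((spend - pred) ** 2) / np.sum((spend - spend.mean()) ** 2)
print(round(r2, 2))
```

When the latent factors genuinely drive the outcome, as simulated here, the two-step reduction-then-regression recovers most of the variance in spend.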
|
538 |
Data Masking, Encryption, and their Effect on Classification Performance: Trade-offs Between Data Security and Utility. Asenjo, Juan C. 01 January 2017
As data mining increasingly shapes organizational decision-making, the quality of its results must be questioned to ensure trust in the technology. Inaccuracies can mislead decision-makers and cause costly mistakes. With more data collected for analytical purposes, privacy is also a major concern. Data security policies and regulations are increasingly put in place to manage risks, but they often employ technologies that substitute and/or suppress sensitive details contained in the data sets being mined. Masking through substitution and/or encryption-based suppression of sensitive attributes can limit access to important details, and it is believed that their use can impact the quality of data mining results. This dissertation investigated and compared the causal effects of data masking and encryption on classification performance as a measure of the quality of knowledge discovery. A review of the literature found a gap in the body of knowledge, indicating that this problem had not previously been studied in an experimental setting. The objective of this dissertation was to gain an understanding of the trade-offs between data security and utility in the field of analytics and data mining. The research used a nationally recognized cancer incidence database to show how masking and encryption of potentially sensitive demographic attributes, such as patients’ marital status, race/ethnicity, origin, and year of birth, could have a statistically significant impact on the patients’ predicted survival. Performance measured by four different classifiers showed sizable variations, in the range of 9% to 10%, between a control group where the selected attributes were untouched and two experimental groups where the attributes were substituted or suppressed to simulate the effects of the data protection techniques.
In practice, this corroborates the potential risk involved in basing medical treatment decisions on data mining applications where attributes in the data sets are masked or encrypted for patient privacy and security reasons.
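The core experiment (suppress a sensitive attribute, then measure the drop in classification performance) can be sketched on synthetic data; the data and the single-feature threshold classifier below are illustrative, not the cancer registry data or the four classifiers used in the dissertation:

```python
# Sketch: how suppressing an informative attribute degrades a classifier.
# Synthetic data and a one-feature threshold classifier; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
label = rng.integers(0, 2, n)

# One attribute correlated with the label, one pure noise (the "masked" stand-in).
informative = label + 0.8 * rng.standard_normal(n)
noise = rng.standard_normal(n)

def stump_accuracy(feature, label):
    """Accuracy of a threshold-at-the-mean classifier on one feature."""
    pred = (feature > feature.mean()).astype(int)
    acc = (pred == label).mean()
    return max(acc, 1 - acc)

acc_full = stump_accuracy(informative, label)  # sensitive attribute intact
acc_masked = stump_accuracy(noise, label)      # attribute suppressed/substituted
print(acc_full > acc_masked)  # True: masking costs predictive power
```

The gap between the two accuracies is the utility price of the protection technique, which is the trade-off the dissertation quantifies on real data.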
|
539 |
Integrace Big Data a datového skladu / Integration of Big Data and data warehouse. Kiška, Vladislav. January 2017 (has links)
This master's thesis deals with the problem of data integration between a Big Data platform and an enterprise data warehouse. The main goal of the thesis is to create a complete transfer system to move data from a data warehouse to this platform using a suitable tool for the task. The system should also store and manage all metadata about previous transfers. The theoretical part focuses on describing the concepts of Big Data, gives a brief introduction to their history, and presents the factors which led to the need for this new approach. The next chapters describe the main principles and attributes of these technologies and discuss the benefits of their implementation within an enterprise. The thesis also describes the technologies known as Business Intelligence, their typical use cases and their relation to Big Data. A minor chapter presents the main components of the Hadoop system and the most popular related applications. The practical part of the work consists of the implementation of a system to execute and manage transfers from a traditional relational database, here representing a data warehouse, to a cluster of a few computers running Hadoop. This part also includes a summary of the most widely used applications for moving data into Hadoop and the design of a database metadata schema, which is used to manage these transfers and to store transfer metadata.
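A transfer-metadata store of the kind described could be sketched with SQLite; the table layout and field names below are hypothetical assumptions, not the schema designed in the thesis:

```python
# Sketch: a minimal metadata store for warehouse-to-Hadoop transfer runs.
# Table layout and fields are hypothetical, not the thesis schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transfer_log (
        id INTEGER PRIMARY KEY,
        source_table TEXT NOT NULL,
        target_path TEXT NOT NULL,
        rows_moved INTEGER,
        status TEXT,
        started_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def record_transfer(source_table, target_path, rows_moved, status):
    """Store one warehouse-to-cluster transfer run for later auditing."""
    conn.execute(
        "INSERT INTO transfer_log (source_table, target_path, rows_moved, status) "
        "VALUES (?, ?, ?, ?)",
        (source_table, target_path, rows_moved, status),
    )

record_transfer("dw.sales", "/data/raw/sales", 120000, "OK")
record_transfer("dw.customers", "/data/raw/customers", 45000, "OK")

count = conn.execute("SELECT COUNT(*) FROM transfer_log").fetchone()[0]
print(count)  # 2
```

Keeping every run in such a log is what lets the system answer "what was moved, when, and did it succeed" without re-inspecting the cluster.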
|
540 |
Procurement Automation / Automatizace nákupu. Cizner, Pavel. January 2017 (has links)
The research goal was to find out the current and possible level of procurement automation and its contribution to less routine and more creative jobs. The goal was accomplished through a literature review and data collection via a survey. The data collected covered enterprises in developing and developed countries. The research hypothesis, that developing countries automate more than developed ones, was not supported by the data when tested via the Mann-Whitney U test. The data was collected from 146 respondents from around the world; the conclusions are therefore subject to limitations. The thesis and its survey contribute to knowledge about the level of procurement automation.
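The Mann-Whitney U comparison can be sketched directly; the automation scores below are made-up illustration data, not the 146 survey responses from the thesis:

```python
# Sketch: Mann-Whitney U statistic for comparing two groups' automation scores.
# The scores are made-up illustration data, not the thesis survey responses.
import numpy as np

def mann_whitney_u(x, y):
    """U statistic for sample x: count of pairs where x beats y (ties count 0.5)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return greater + 0.5 * ties

developing = [3, 5, 4, 6, 2, 5]  # hypothetical automation scores
developed = [4, 5, 3, 6, 4, 5]

u = mann_whitney_u(developing, developed)
# Under "no difference", U is near len(x)*len(y)/2 = 18; here it is close to that.
print(u)  # 16.0
```

A U value near the midpoint, as here, is exactly the situation in which the hypothesis of a group difference fails to find support.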
|