561 |
The discourse of surveillance and privacy: biopower and panopticon in the Facebook-Cambridge Analytica scandal
Machova, Tereza January 2021 (has links)
The Facebook-Cambridge Analytica scandal came to light in 2018, revealing the problematic surveillance practices and privacy violations the companies had allowed. The EU introduced privacy legislation, the GDPR, which came into effect in 2018 shortly after the scandal erupted. Privacy is a key problem with modern technologies, as companies try to gain all possible data on individuals. The purpose of this thesis is to explore the surveillance-privacy nexus in the EU. The thesis asks the research question: How has surveillance, through emerging technologies, affected the EU's ability to protect the right to privacy? To answer it, the thesis applies a case study and post-structuralist discourse analysis to recordings of Alexander Nix, CEO of Cambridge Analytica, speaking at a marketing festival, and of Mark Zuckerberg at the European Parliament. Biopower and the panopticon serve as the core theoretical tools for analysing the recordings. Through these methods and theoretical tools, the findings point to the conclusion that the EU's ability to protect privacy from surveillance practices was not affected by modern surveillance technology, and the protection against exploitation of privacy therefore remains low.
|
562 |
Analysis of user density and quality of service using crowdsourced mobile network data
Panjwani, Nazma 07 September 2021 (has links)
This thesis analyzes end-user quality of service (QoS) in cellular mobile networks using device-side measurements. Quality of service in a wireless network is a significant factor in determining a user's satisfaction, and customers' perception of poor QoS is one of the core sources of customer churn for telecommunications companies. A core focus of this work is assessing how user density impacts QoS within cellular networks. Kernel density estimation is used to produce user density estimates for high, medium, and low density areas, and the QoS distributions are then compared across these areas. The k-sample Anderson-Darling test is used to determine the degree to which user densities vary over time. In general, it is shown that users in higher density areas tend to experience lower overall QoS levels than those in lower density areas, even though these higher density areas serve more subscribers. The conducted analyses highlight the value of mobile device-side QoS measurements in augmenting traditional network-side QoS measurements.
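For illustration, a minimal sketch of the two statistical tools the abstract names, kernel density estimation and the k-sample Anderson-Darling test, using SciPy. The synthetic locations, the QoS metric, and the high/low density split below are illustrative assumptions, not the thesis's crowdsourced measurement pipeline.

```python
import numpy as np
from scipy.stats import gaussian_kde, anderson_ksamp

rng = np.random.default_rng(0)
# Hypothetical user locations (e.g. projected x/y coordinates from device logs)
locations = rng.normal(loc=[0, 0], scale=[2, 1], size=(5000, 2))

# Kernel density estimate of user density over the service area
kde = gaussian_kde(locations.T)
density_at_users = kde(locations.T)  # higher value = denser area

# Split users into high- and low-density areas and compare a QoS metric
# (here: synthetic download throughput) across the two groups
median_density = np.median(density_at_users)
qos_high = rng.gamma(2.0, 5.0, size=(density_at_users >= median_density).sum())
qos_low = rng.gamma(2.5, 5.0, size=(density_at_users < median_density).sum())

# k-sample Anderson-Darling test: do the two QoS distributions differ?
result = anderson_ksamp([qos_high, qos_low])
print(result.statistic, result.significance_level)
```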
|
563 |
Where Are You Now: Privacy, Presence & Place in the Pervasive Computing Era
Weimer, Jason M. 10 September 2021 (has links)
No description available.
|
564 |
Multiple Learning for Generalized Linear Models in Big Data
Xiang Liu (11819735) 19 December 2021 (has links)
Big data is an enabling technology in digital transformation. It complements ordinary linear models and generalized linear models well, as training well-performing models of either kind requires huge amounts of data. With the help of big data, ordinary and generalized linear models can be well trained and thus offer better services to human beings. However, many challenges remain in training ordinary and generalized linear models on big data. One of the most prominent is computational: memory inflation and training inefficiency that occur when processing data and training models. Hundreds of algorithms have been proposed to alleviate or overcome the memory inflation issues, but the solutions obtained are locally optimal. Additionally, most of the proposed algorithms require loading the dataset into RAM many times when updating the model parameters, and when multiple hyper-parameters need to be computed and compared, e.g. in ridge regression, parallel computing techniques are applied in practice. Thus, multiple learning with sufficient statistics arrays is proposed to tackle the memory inflation and training inefficiency issues.
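For illustration, a minimal sketch of the sufficient-statistics idea for ridge regression, assuming the approach reduces to accumulating X'X and X'y in a single pass over data chunks; the function names, chunking scheme, and synthetic data are illustrative, not the thesis's implementation.

```python
import numpy as np

def accumulate_sufficient_stats(chunks, n_features):
    """One pass over the data: build X'X and X'y without holding all of X in RAM."""
    xtx = np.zeros((n_features, n_features))
    xty = np.zeros(n_features)
    for X, y in chunks:  # each chunk is a (design matrix, target) pair that fits in memory
        xtx += X.T @ X
        xty += X.T @ y
    return xtx, xty

def ridge_from_stats(xtx, xty, lambdas):
    """Fit ridge solutions for many penalties from the same statistics: no data re-reads."""
    p = xtx.shape[0]
    return {lam: np.linalg.solve(xtx + lam * np.eye(p), xty) for lam in lambdas}

# Hypothetical usage: stream chunks once, then sweep the hyper-parameter grid
rng = np.random.default_rng(0)
chunks = [(rng.normal(size=(1000, 5)), rng.normal(size=1000)) for _ in range(10)]
xtx, xty = accumulate_sufficient_stats(chunks, n_features=5)
coefs = ridge_from_stats(xtx, xty, lambdas=[0.1, 1.0, 10.0])
```

Because X'X and X'y are fixed-size regardless of the number of rows, comparing many ridge penalties needs only one pass over the data rather than one reload per hyper-parameter.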
|
565 |
Turbine Generator Performance Dashboard for Predictive Maintenance Strategies
Emily R Rada (11813852) 19 December 2021 (has links)
Equipment health is the root of productivity and profitability in a company; through the use of machine learning and advancements in computing power, a maintenance strategy known as Predictive Maintenance (PdM) has emerged. The predictive maintenance approach utilizes performance and condition data to forecast necessary machine repairs. Predicting maintenance needs reduces the likelihood of operational errors, aids in the avoidance of production failures, and allows for preplanned outages. The PdM strategy is based on machine-specific data, which proves to be a valuable tool: the machine data provides quantitative proof of operation patterns and production while offering machine health insights that may otherwise go unnoticed.

Purdue University's Wade Utility Plant is responsible for providing reliable utility services for the campus community. The plant has invested in an equipment monitoring system for a thirty-megawatt turbine generator, which records operational and performance data as the turbine generator supplies campus with electricity and high-pressure steam. Unplanned and surprise maintenance needs in the turbine generator hinder utility production and lessen the dependability of the system.

The work of this study leverages the turbine generator data the Wade Utility Plant records and stores to justify equipment care and provide early error detection at an in-house level. The research collects and aggregates operational, monitoring, and performance-based data for the turbine generator in Microsoft Excel, creating a dashboard which visually displays and statistically monitors variables for discrepancies. The dashboard records ninety days of hourly data, determines averages and extrema, and alerts the user as data approaches recommended warning levels. Microsoft Excel offers a low-cost, accessible platform for data collection and analysis, providing an adaptable and comprehensible collection of data from a turbine generator. The dashboard offers visual trends, simple statistics, and status updates using ninety days of user-selected data, giving the plant the ability to forecast maintenance needs, plan work outages, and adjust operations while continuing to provide reliable services that meet Purdue University's utility demands.
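For illustration, a hedged sketch of the dashboard's rolling-statistics and warning logic. The thesis implements this in Microsoft Excel; the pandas translation below, along with its column name, window lengths, and warning threshold, is an illustrative assumption rather than the study's actual workbook.

```python
import numpy as np
import pandas as pd

# Hypothetical hourly readings from the turbine generator's monitoring system
hours = pd.date_range("2021-01-01", periods=90 * 24, freq="h")  # 90 days, hourly
df = pd.DataFrame({
    "bearing_temp_c": 70 + np.random.default_rng(1).normal(0, 2, len(hours)),
}, index=hours)

WARN_LEVEL = 75.0  # assumed recommended warning level for this variable

# Simple statistics over the 90-day window, as on the dashboard
stats = df["bearing_temp_c"].agg(["mean", "min", "max"])

# Visual trend: 7-day rolling average of the hourly readings
trend = df["bearing_temp_c"].rolling("7D").mean()

# Status updates: flag readings approaching the warning level (within 5% here)
alerts = df[df["bearing_temp_c"] >= 0.95 * WARN_LEVEL]

print(stats)
print(trend.tail(3))
print(alerts.head())
```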
|
566 |
Algorithmic Ability Prediction in Video Interviews
Louis Hickman (10883983) 04 August 2021 (has links)
Automated video interviews (AVIs) use machine learning algorithms to predict interviewee personality traits and social skills, and they are increasingly being used in industry. The present study examines the possibility of expanding the scope and utility of these approaches by developing and testing AVIs that score ability from interviewee verbal, paraverbal, and nonverbal behavior in video interviews. To advance our understanding of whether AVI ability assessments are useful, I develop AVIs that predict ability (GMA, verbal ability, and interviewer-rated intellect) and investigate their reliability (i.e., inter-algorithm reliability, internal consistency across interview questions, and test-retest reliability). Then, I investigate the convergent and discriminant validity evidence as well as potential ethnic and gender bias of such predictions. Finally, based on the Brunswik lens model, I compare how ability test scores, AVI ability assessments, and interviewer ratings of ability relate to interviewee behavior. By exploring how ability relates to behavior and how ability ratings from both AVIs and interviewers relate to behavior, the study advances our understanding of how ability affects interview performance and the cues interviewers use to judge ability.
|
567 |
Platforma pro definici a zpracování dat / Platform for Defining and Processing of Data
Hala, Karel January 2017 (has links)
This diploma thesis deals with creating a platform for easy manipulation of large data sets. It describes the technical knowledge needed to understand web development, and then proposes approaches for making it as easy as possible for a user to define and work with large data sets. The platform is written and structured so that any part of it is easy to extend.
|
568 |
Zpracování síťové komunikace v prostředí Apache Spark / Network Traces Analysis Using Apache Spark
Béder, Michal January 2018 (has links)
The aim of this thesis is to show how to design and implement an application for network trace analysis using the Apache Spark distributed system. The implementation can be divided into three parts: loading data from distributed HDFS storage, analysis of the supported network protocols, and distributed data processing. The web-based notebook Apache Zeppelin is used as the data visualization tool. The resulting application is able to analyze individual packets as well as entire flows, and it supports JSON and pcap as input data formats. The goal of the application is to enable Big Data processing. The input data format and the allocation of the available cores have the greatest impact on its performance.
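For illustration, a minimal PySpark sketch of the kind of distributed flow-level aggregation described; the HDFS path, JSON schema, and field names are assumptions, not the application's actual input format.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("network-traces").getOrCreate()

# Load flow records exported as JSON from HDFS (pcap input would first be
# decoded into structured records by a protocol-analysis step)
flows = spark.read.json("hdfs:///traces/flows.json")

# Distributed aggregation: traffic volume per source address and protocol
summary = (flows
           .groupBy("src_ip", "protocol")
           .agg(F.count("*").alias("packets"),
                F.sum("bytes").alias("total_bytes"))
           .orderBy(F.desc("total_bytes")))

summary.show(20)
```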
|
569 |
Environmental Information Modeling: An Integration of Building Information Modeling and Geographic Information Systems for Lean and Green Developments
Ezekwem, Kenechukwu Chigozie January 2016 (has links)
Building Information Modeling (BIM), used by many for building design and construction, and Geographic Information Systems (GIS), used for city planning, contain large amounts of spatial and attribute data which could be used for lean and green city planning and development. However, there exists a systematic gap and an interoperability challenge between BIM and GIS that creates a disjointed workflow between city planning data in GIS and building data in BIM. This hinders the seamless analysis of data across BIM and GIS for lean and green developments. This study targets the creation of a system which integrates BIM and GIS data. The method involves the establishment of a novel Environmental Information Modeling (EIM) framework to bridge the gap, implemented in Microsoft Visual C#. The application of this framework shows the potential of the concept. The research results provide an opportunity for further analysis for lean and green construction planning, development, and management.
|
570 |
Nej tack till onödig reklam! : En studie om riktad marknadsföring via Big Data från ett konsumentperspektiv / No thanks to unnecessary advertising! : A study on targeted marketing via Big Data in a consumer perspective
Carlsson, Ricky, Vilhelmsson, Alexander January 2021 (has links)
Title: No thanks to unnecessary advertising! - A study on targeted marketing via Big Data in a consumer perspective Authors: Ricky Carlsson and Alexander Vilhelmsson Supervisor: Anders Parment Key words: Targeted marketing, Big Data, Customer segmentation, Buying process, Integrity concern, Customer relationship management, Marketing communication, Strategic management, Big Data management, Online Behavioural Targeting Introduction: In a world that is globalizing and where digital development is advancing, companies have had to adapt. In recent times, with the increasingly digital world, technology has become an ever more relevant factor, not least in marketing. A digital method that has emerged is Big Data, which makes it possible for companies to collect large amounts of information about consumers. By analysing the information extracted from Big Data, it is easier to find and understand consumers' needs and what motivates their buying process. It is important that companies analyse the information correctly so that they do not run the risk of creating negative effects from targeted marketing via Big Data. Purpose: To investigate Swedish consumers' attitudes towards targeted marketing via Big Data and to find out how companies that sell goods and services to consumers can improve their use of Big Data in targeted marketing from a consumer perspective. Method: The study is a cross-sectional study of a qualitative and quantitative nature. The qualitative empirical data consists of 11 semi-structured interviews with students in Sweden. The quantitative empirical data consists of 203 survey answers collected from consumers around Sweden. The study is based on an abductive approach and has a hermeneutic approach. Conclusion: The result of the study shows that there are both opportunities and challenges for companies when using Big Data in targeted marketing. Targeted marketing with the help of Big Data that is performed correctly should have only a positive impact and create value for both consumers and companies, but this is not the case today. The population of the study perceives that marketing often does not match their needs; this shows that companies must become better at analysing the data. If the data extracted from Big Data is analysed in a better way, the segmentation of consumers will also be better. / Title: No thanks to unnecessary advertising! - A study on targeted marketing via Big Data in a consumer perspective. Authors: Ricky Carlsson and Alexander Vilhelmsson Supervisor: Anders Parment Background: In a world that is globalizing and where digital development moves forward, companies have had to adapt. Recently, in step with an ever more digitalized world, technology has become an increasingly relevant factor, not least in marketing. A digital method that has emerged is Big Data, through which companies can collect large amounts of information about consumers. By analysing the information extracted from Big Data, it becomes easier to find and understand consumers' needs and what motivates their buying process. It is important that companies analyse the information correctly so as not to run the risk of creating negative effects from the targeted marketing via Big Data.
Purpose: To investigate Swedish consumers' attitudes towards targeted marketing via Big Data and to find out how companies that sell goods or services to consumers can improve their use of Big Data in targeted marketing from a consumer perspective. Method: The study is a cross-sectional study of a qualitative and quantitative nature. The qualitative empirical material consists of 11 semi-structured interviews with students in Sweden. The quantitative empirical material consists of 203 survey responses collected from consumers around Sweden. The study is based on an abductive approach and has a hermeneutic perspective. Conclusions: The results of the study show that there are both opportunities and challenges for companies when using Big Data in targeted marketing. Targeted marketing with the help of Big Data that is carried out correctly should have only a positive impact on the targeted marketing and create value for consumers and companies, but this is not the case today. As the study's population perceives that targeted marketing often does not match their needs, companies should become better at analysing data. If the data extracted from Big Data is analysed in a better way, the segmentation of consumers will also be better.
|