591
Bodies of Data: The Social Production of Predictive Analytics. Madisson Whitman. 26 June 2020.
Bodies of Data challenges the promise of big data in knowing and organizing people by explicating how data are made and theorizing mismatches between actors, data, and institutions. Situated at a large public university in the United States that hosts approximately 30,000 undergraduate students, this research ethnographically traces the development and deployment of an app for student success that draws from traditional (demographic information, enrollment history, grade distributions) and non-traditional (WiFi network usage, card swipes, learning management systems) student data to anticipate the likelihood of graduation within a four-year period. The app, which offers an interface for students based on nudging, is the product of collaborations between actors who specialize in educational technology. As these actors manage the app, they must also interpret data against the students who generate those data, many of whom do not neatly mirror their data counterparts. The central question animating this research asks how the designers of the app create order—whether through material bodies that are knowable to data collection or reorganized demographic groupings—as they render students into data.

To address this question and investigate practices of making data, I conducted 12 months of ethnographic fieldwork, using participant observation and interviews with university administrators, data scientists, app developers, and undergraduate students. Through a theoretical approach informed by anthropology, science and technology studies, critical data studies, and feminist theory, I analyze how data and the institution make each other through the modeling of student bodies and the reshaping of subjectivity. I leverage technical glitches—slippages between students and their data—and failure at large at the institution as analytics, both to expose otherwise hidden processes of ordering and to productively read failure as an opportunity for imagining what data could do. Predictive projects that derive from big data are increasingly common in higher education as institutions look to data to understand populations. Bodies of Data provides empirical evidence of how data are made through sociotechnical processes in which data are not for understanding but for ordering. As universities look to big data to inform decision-making, the findings of this research contradict assumptions that data provide neutral and objective ways of knowing students.
592
Method for Collecting Relevant Topics from Twitter supported by Big Data. Silva, Jesús; Senior Naveda, Alexa; Gamboa Suarez, Ramiro; Hernández Palma, Hugo; Niebles Núñez, William. 7 January 2020.
There is a rapid increase in information and data generation in virtual environments due to microblogging sites such as Twitter, a social network that produces an average of 8,000 tweets per second and up to 550 million tweets per day. As a result, this and many other social networks are overloaded with content, making it difficult for users to identify information topics among the large number of tweets related to different issues. To address this uncertainty, this study proposes a method for inferring the most representative topics that occurred in a one-day period through the selection of user profiles of experts in sports and politics. A topic's relevance is calculated from the number of times it is mentioned by experts in their timelines. The experiment used a dataset extracted from Twitter containing 10,750 tweets related to sports and 8,758 tweets related to politics. All tweets were obtained from the timelines of users selected by the researchers, who were considered experts in their respective subjects based on the content of their tweets. The results show that effective selection of users, together with the relevance index implemented for the topics, can help to more easily find important topics in both sports and politics.
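A minimal sketch of the count-based relevance ranking described above, assuming tweets have already been collected from the selected expert timelines over a one-day window; the hashtag-based topic extraction and the expert-breadth weighting are illustrative assumptions, not the authors' exact index:

```python
from collections import Counter, defaultdict

def rank_topics(expert_tweets, top_n=10):
    """Rank topics by how often experts mention them in one day.

    expert_tweets: iterable of (expert_id, tweet_text) pairs collected
    from the selected expert timelines over a one-day window.
    """
    mention_counts = Counter()
    experts_per_topic = defaultdict(set)
    for expert_id, text in expert_tweets:
        # Simplified topic extraction: hashtags stand in for topics.
        topics = {tok.lower() for tok in text.split() if tok.startswith("#")}
        for topic in topics:
            mention_counts[topic] += 1
            experts_per_topic[topic].add(expert_id)
    # Hypothetical relevance index: raw mentions scaled by how many
    # distinct experts discussed the topic (breadth of agreement).
    relevance = {t: mention_counts[t] * len(experts_per_topic[t])
                 for t in mention_counts}
    return sorted(relevance.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

tweets = [("expert_a", "Great match tonight #football"),
          ("expert_b", "#football transfer rumors heating up"),
          ("expert_a", "#election debate recap")]
print(rank_topics(tweets))
```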
593
Defining, analyzing and determining power losses due to icing on wind turbine blades. Canovas Lotthagen, Zandra. January 2020.
The wind power industry is one of the fastest-growing renewable energy industries in the world. Since more energy can be extracted from wind when the air density is higher, many of the investments in wind power are made in cold climates. But with cold climates come harsh weather conditions such as icing. Icing on wind turbine rotor blades causes the aerodynamic properties of the blade to shift, and with further ice accretion the wind power plant can come to a standstill, causing a loss of power until the ice has melted. How big these losses are depends greatly on site-specific variables such as elevation, temperature, and precipitation. The literature claims these ice-related losses can correspond to 10-35% of the annual expected energy output. Some studies have attempted to standardize an ice-loss determination method for use by the industry, yet no standardized way of calculating these losses exists; this thesis therefore investigates the different methods in use. Using historical Supervisory Control and Data Acquisition (SCADA) data for two sites located in Sweden, a robust ice-detection code was created to identify ice losses. Nearly 32 million data points were analyzed, provided by Siemens Gamesa, one of the biggest companies in the wind power industry. A sensitivity analysis showed that a reference dataset spanning May to September over four years could be used to clearly identify ice losses. To find the ice losses, three scenarios with different temperature intervals were tested: scenario 1 considers all data points below 0 degrees, while scenarios 2 and 3 extend this threshold to 3 degrees and below and 5 degrees and below, respectively. Scenario 3, which filtered the raw data so that only data points with a temperature below five degrees were used, was found to be the optimal way to identify ice losses. For the two sites investigated, ice losses were found to lower the annual energy output by 5-10%. Further, the correlation between temperature, precipitation, and ice losses was investigated, and low temperature combined with high precipitation was found to be strongly correlated with ice losses.
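A sketch of the scenario-3 filtering logic described above, assuming 10-minute SCADA records and a warm-month reference power curve; the column names, the 1 m/s wind-speed binning, and the median-based expected power are assumptions for illustration, not the thesis' exact implementation:

```python
import pandas as pd

def ice_losses_scenario3(scada: pd.DataFrame) -> float:
    """Estimate icing-related production shortfall from SCADA data.

    Expects columns 'timestamp', 'wind_speed' (m/s), 'power' (kW) and
    'temperature' (deg C); these names are illustrative assumptions.
    """
    scada = scada.copy()
    scada["timestamp"] = pd.to_datetime(scada["timestamp"])
    scada["ws_bin"] = scada["wind_speed"].round()  # 1 m/s wind-speed bins

    # Ice-free reference power curve built from May-September data only.
    warm = scada[scada["timestamp"].dt.month.between(5, 9)]
    expected = warm.groupby("ws_bin")["power"].median()

    # Scenario 3: only points below 5 deg C are candidates for icing losses.
    cold = scada[scada["temperature"] < 5.0].copy()
    cold["expected"] = cold["ws_bin"].map(expected)
    shortfall = (cold["expected"] - cold["power"]).clip(lower=0)
    # Sum of kW deficits over 10-minute points; divide by 6 for kWh.
    return float(shortfall.sum() / 6)
```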
594
The Implementation of Social CRM: Key features and significant challenges associated with the practical implementation of social Customer Relationship Management. Kansbod, Julia. January 2022.
The rise of social media has challenged the traditional notion of CRM and introduced a new paradigm, known as social CRM. While there are many benefits and opportunities associated with the integration of social data in CRM systems, a majority of companies are failing their social CRM implementation. Since social CRM is still considered a young phenomenon, knowledge regarding its implementation and functionalities is limited. The purpose of this study is to contribute to the current state of knowledge regarding the factors which influence the practical implementation of social CRM. In order to capture state-of-the-art knowledge on this topic, a literature review was conducted. In addition, interviews with CRM experts working within five Swedish companies were included in order to gain additional insights from practice. Findings indicate that the key features needed for social CRM implementation revolve around the real-time monitoring, collection, processing, storing and analyzing of social data. Advanced technical tools, such as Big Data technology, are deemed necessary in order to handle large volumes of data and properly transform them into valuable knowledge. The most significant challenges identified revolve around limited knowledge as well as various technical and organizational limitations. Additionally, findings indicate that many of practitioners' uncertainties concern data legislation and privacy. Hence, while social CRM can entail a multitude of benefits, a significant number of challenges seem to stand in the way of unlocking its full potential. In order for social CRM implementation to be made more accessible for organizations in the future, there is a need for more knowledge and clarity regarding factors such as technical solutions, organizational changes and legislation.
595
The adoption of Industry 4.0 technologies in manufacturing: a multiple case study. Nilsen, Samuel; Nyberg, Eric. January 2016.
Innovations such as combustion engines, electricity and assembly lines have all played a significant role in manufacturing, where the past three industrial revolutions have changed the way manufacturing is performed. Technical progress within the manufacturing industry continues at a high rate, and today's progress can be seen as part of the fourth industrial revolution, exemplified by "Industrie 4.0", the German government's vision of future manufacturing. Previous studies have investigated the benefits, progress and relevance of Industry 4.0 technologies, but little emphasis has been put on differences in implementation and relevance of these technologies across and within industries. This thesis investigates the adoption of Industry 4.0 technologies among and within selected industries and the patterns that exist among them. Using a qualitative multiple case study of firms from the aerospace, heavy equipment, automation, electronics and motor vehicle industries, we gain insight into how leading firms are implementing the technologies. In order to identify the factors determining how Industry 4.0 technologies are implemented and what common themes can be found, we introduce the concept of production logic, which is built upon the connection between the competitive priorities: quality, flexibility, delivery time, cost efficiency and ergonomics. This thesis makes two contributions. First, we categorize Industry 4.0 technologies into two bundles: the Human-Machine Interface (HMI) bundle and the connectivity bundle. The HMI bundle includes devices for assisting operators in manufacturing activities, such as touchscreens, augmented reality and collaborative robots. The connectivity bundle includes systems for connecting devices and for collecting and analyzing data from the digitalized factory. The results indicate that the adoption of elements from the technology bundles differs depending on a firm's or industry's production logic. Firms where flexibility is dominant tend to implement elements from the HMI bundle to a larger degree. At the other end, firms with few product variations, where quality and efficiency dominate the production logic, tend to implement elements from the connectivity bundle in order to tightly monitor and improve quality in their assembly. Regardless of production logic, firms are implementing elements from both bundles, but with different compositions and applications. The second contribution is to the literature on technological transitions, where we study the rise and development of the HMI bundle in the light of Geels' (2002) Multi-Level Perspective (MLP). We conclude that increased pressure at the landscape level, in the form of changes in the consumer market and in attitudes within the labor force, has created a gradual spread of the HMI bundle within industries. The bundles are also studied through Rogers' (1995) five attributes of innovation, where the lack of trialability and observability prevents increased application of M2M interfaces, and the high complexity of big data and analytics prevents that technology from being further applied. As the HMI bundle involves a number of technologies with large differences in properties, it is hard to draw conclusions from the attributes of innovation about what limits their application.
596
Increasing susceptibility to branded content through hyper-personalization: A user study of the target audience for digital periodicals in popular culture. Sombo, Alexandros. January 2015.
In this study I investigate how branded content is received by the target audience of digital periodicals in popular culture through a prominent web-personalization technique: hyper-personalization. The target group for this study is young opinion formers who consume content from digital periodicals such as Nöjesguiden. Branded content, or sponsored content, is content created so that a brand becomes associated with a creator's audience. A brand might, for example, ask Nöjesguiden to create editorial content that appeals to the magazine's target audience, so that the audience forms a new impression of, or retains a positive view of, the brand. Hyper-personalization is a technique applied to target content, services or products to individuals within an audience with accurate relevance. The technique requires a large amount of collected social data, so it is interesting to discuss how the target group reacts to the fact that such an extensive collection of data can be carried out; an ethical discussion of the technique is also included in the report. To answer whether susceptibility to branded content through hyper-personalization is good or not, a quantitative study in the form of a survey and a qualitative study with a user experiment were conducted. The survey was answered by 87 people from the target group, and four people from the target group took part in the user experiment. The choice to use several methods was made in order to allow a broad discussion of the research question.
597
Accurately measuring content consumption on a modern Play service. Cederman, Mårten. January 2015.
This research represents an attempt to define and accurately measure user consumption of content on a modern, ad-funded VOD service (AVOD), known in Sweden as a Play service. Building on previous research on VOD and AVOD services, the characteristics and flaws of these types of platforms are discussed to shed light on factors that might concern Play services. Optimizing the vast content inventory offered on these services is crucial for long-term profitability, and to achieve this, content providers need to understand how to measure consumption properly. A content-centric approach was used to focus on individual formats (e.g. TV shows) and the factors that can describe their consumption. A macro perspective was initially applied to investigate global factors that dictate consumption, and analysis on a micro level was carried out on tracking data collected over a full year from one of the biggest Play services in Sweden, TV3play.se. Ultimately, a new method of measuring consumption called the Consumption Volume Score (CVS) is proposed, introduced as an alternative to the traditional unit of measurement, the number of video starts (VS). Its validity was evaluated by comparing rank differences for individual formats under both methods and different criteria. The results show that measuring consumption with CVS yields little to no difference in the ranking of highly popular formats, while less consumed formats show a more varied change in rank. Further analysis of some of these formats indicated that they might have a dedicated niche audience, where content editors might see potential gains from handpicking them to optimize consumption further. The findings give reason to believe that CVS as a unit of measuring consumption can help to further understand how individual formats perform, especially less consumed and potentially niched ones. Future research on CVS is recommended to discern its feasibility in a live context.
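The abstract does not spell out the CVS formula, so the sketch below illustrates only the rank-difference comparison between the two units; the `cvs` column and all numbers are hypothetical placeholders rather than the thesis' actual scores:

```python
import pandas as pd

# Hypothetical per-format totals; 'video_starts' is the traditional VS
# unit, 'cvs' stands in for the (undisclosed here) Consumption Volume Score.
formats = pd.DataFrame({
    "format": ["Show A", "Show B", "Show C", "Show D"],
    "video_starts": [120_000, 45_000, 8_000, 7_500],
    "cvs": [118_000, 47_000, 12_000, 5_000],
})

# Rank every format under each metric (1 = most consumed) and compare.
formats["rank_vs"] = formats["video_starts"].rank(ascending=False).astype(int)
formats["rank_cvs"] = formats["cvs"].rank(ascending=False).astype(int)
formats["rank_diff"] = formats["rank_cvs"] - formats["rank_vs"]
print(formats.sort_values("rank_vs"))
# Popular formats tend to keep their rank; movement concentrates in the
# long tail, which is where the thesis finds potentially niched formats.
```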
598
Big Data Analytics of City-Wide Building Energy Declarations. Ma, Yixiao. January 2015.
This thesis explores the building energy performance of the domestic sector in the city of Stockholm based on the building energy declaration database. The aims are to analyze the big data sets of around 20,000 buildings in the Stockholm region and to explore the correlation between building energy performance and different internal and external factors affecting building energy consumption, such as building energy systems and building vintages. By using a clustering method, buildings with different energy consumption levels can be easily identified. Thereafter, the energy saving potential is estimated by setting step-by-step targets, and feasible energy saving solutions are proposed in order to improve building energy performance at the city level. A brief introduction to several key concepts (energy consumption in buildings, building energy declarations and big data) serves as background information and clarifies the motivation for this master thesis. The methods used include data processing, descriptive analysis, regression analysis, clustering analysis and energy saving potential analysis. The provided building energy declaration data is first processed in MS Excel and then reorganized in MS Access. For the data analysis, IBM SPSS is introduced for descriptive analysis and graphical representation. By defining different energy performance indicators, the descriptive analysis presents the energy consumption and its composition for different building classifications; the results also detail the use of different ventilation systems in different building types. Thereafter, the correlation between building energy performance and five different independent variables is analyzed using a linear regression model. Clustering analysis is further performed on the studied buildings in order to target low-energy-efficiency groups, and buildings with various energy consumption levels are identified and grouped based on their energy performance. This shows that clustering is quite useful in big data analysis, although some parameters in the clustering process need further adjustment to achieve more satisfactory results. The energy saving potential for the studied buildings is calculated as well. The conclusion is that the maximal potential for energy savings in the studied buildings is estimated at 43% (2.35 TWh) for residential buildings and 54% (1.68 TWh) for non-residential premises; the saving potential is also calculated for different building categories and different clusters.
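A minimal sketch of the kind of clustering step described above, here using k-means on two illustrative features; the feature choice, the synthetic data and the number of clusters are assumptions, not the thesis' actual setup:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per declared building, with an
# energy performance indicator (kWh/m2/year) and construction year.
# Real declarations hold more fields; two features keep the sketch short.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(150, 40, 500),       # specific energy use, kWh/m2/year
    rng.integers(1900, 2015, 500),  # building vintage
])

# Standardize so both features contribute comparably, then cluster.
X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

# Buildings in the highest-consumption cluster are candidates for
# step-by-step saving targets, as in the thesis' potential analysis.
for label in range(4):
    members = X[kmeans.labels_ == label]
    print(f"cluster {label}: {len(members)} buildings, "
          f"mean {members[:, 0].mean():.0f} kWh/m2/year")
```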
599
Big Data Analytics towards a Retrofitting Plan for the City of Stockholm. van der Heijde, Bram. January 2014.
This thesis summarises the outcomes of a big data analysis performed on a set of hourly district heating energy consumption data from 2012 for nearly 15,000 buildings in the City of Stockholm. The aim of the study was to find patterns and inefficiencies in the consumption data using KNIME, a big data analysis tool, and to initiate a retrofitting plan for the city to counteract these inefficiencies. By defining a number of energy saving scenarios, the potential for increased efficiency is estimated, and the resulting methodology can be used by other (smart) cities and policy makers to estimate savings potential elsewhere. In addition, the influence of weather circumstances, building location and building type is studied. The introduction gives a concise overview of the concepts Smart City and Big Data, together with their relevance to the energy challenges of the 21st century. Thereafter, a summary of the previous studies at the foundation of this research and a brief review of the less common methods used in this thesis are presented. The method consisted of first understanding and describing the dataset using descriptive statistics, studying the annual fluctuations in energy consumption, and clustering all consumer groups per building class according to total consumption, consumption intensity and time of consumption. After these descriptive steps, a more analytical part starts with the definition of a number of energy saving scenarios. They are used to estimate the maximal potential for energy savings, regardless of actual measures and financial or temporal aspects; a sketch of one such estimate follows this abstract. This hypothetical simulation is supplemented with a more realistic retrofitting plan that explores the feasibility of Stockholm's Climate Action Plan for 2012-2015, using a limited set of energy efficiency measures and a fixed investment horizon. The analytical part concludes with a spatial regression that sets out to determine the influence of wind velocity and temperature in different parts of Stockholm. The thesis concludes that the potential for energy savings in the studied data set can reach 59%, or 4.6 TWh. The financially justified savings are estimated at ca. 6% using favourable investment parameters; however, these savings quickly diminish because of a high sensitivity to the input parameters. The clustering analysis did not yield the anticipated results, but it can be used as a tool to target investments towards groups of buildings with a high return on investment.
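A sketch of how one such saving scenario might be estimated, capping each building at a class-level benchmark intensity; the column names, the quartile benchmark and the example figures are illustrative assumptions, not the scenarios defined in the thesis:

```python
import numpy as np
import pandas as pd

def scenario_savings(df: pd.DataFrame, benchmark_quantile: float = 0.25) -> float:
    """Hypothetical upper-bound scenario: every building is brought down
    to the consumption intensity of the best quartile within its class.

    Column names ('building_class', 'heated_area_m2', 'annual_kwh') are
    illustrative assumptions.
    """
    intensity = df["annual_kwh"] / df["heated_area_m2"]  # kWh/m2/year
    benchmark = intensity.groupby(df["building_class"]).transform(
        lambda s: s.quantile(benchmark_quantile))
    # A building already below its class benchmark keeps its consumption.
    target_kwh = np.minimum(intensity, benchmark) * df["heated_area_m2"]
    return float((df["annual_kwh"] - target_kwh).sum())  # kWh saved, ignoring cost

buildings = pd.DataFrame({
    "building_class": ["residential", "residential", "office", "office"],
    "heated_area_m2": [1_200, 900, 3_000, 2_500],
    "annual_kwh": [210_000, 110_000, 540_000, 300_000],
})
print(scenario_savings(buildings))
```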
600
Groundwater-stream connectivity from minutes to months across United States basins as revealed by spectral analysis. Clyne, Jacob B. January 2021.
No description available.