591
Defining, analyzing and determining power losses - due to icing on wind turbine blades. Canovas Lotthagen, Zandra, January 2020.
The wind power industry is one of the fastest-growing renewable energy industries in the world. Since more energy can be extracted from wind when the air density is higher, many wind power investments are made in cold climates. But cold climates bring harsh weather conditions such as icing. Icing on wind turbine rotor blades shifts the aerodynamic properties of the blade, and with further ice accretion the wind power plant can come to a standstill, causing a loss of power until the ice has melted. How large these losses are depends greatly on site-specific variables such as elevation, temperature, and precipitation. The literature claims these ice-related losses can correspond to 10-35% of the annual expected energy output. Some studies have attempted to standardize an ice-loss determination method for the industry, yet no standardized way of calculating these losses exists. It was therefore of interest for this thesis to investigate the different methods in use. Using historical Supervisory Control and Data Acquisition (SCADA) data for two sites located in Sweden, a robust ice-detection code was created to identify ice losses. Nearly 32 million data points were analyzed; the data were provided by Siemens Gamesa, one of the biggest companies in the wind power industry. A sensitivity analysis showed that a reference dataset spanning May to September over four years could be used to clearly identify ice losses. To find the ice losses, three scenarios with different temperature intervals were tested: scenario 1 investigates all data points below 0 degrees, scenario 2 all points at 3 degrees and below, and scenario 3 all points at 5 degrees and below. Scenario 3, which filtered the raw data so that only data points with a temperature below five degrees were used, was found to be the optimal way to identify the ice losses. For the two sites investigated, ice losses were found to lower the annual energy output by 5-10%. Further, the correlation between temperature, precipitation, and ice losses was investigated; low temperature and high precipitation were found to be strongly correlated with ice losses.
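As an illustration of the kind of filtering the abstract describes, the sketch below (Python) builds a warm-season reference power curve from 10-minute SCADA data and flags cold-weather production that falls well below it, roughly corresponding to Scenario 3 (temperature below 5 degrees). Column names, the wind-speed binning and the underperformance threshold are hypothetical assumptions; this is a minimal sketch of the general approach, not the thesis code.

```python
import pandas as pd

# Hypothetical SCADA columns: 'timestamp', 'wind_speed' (m/s),
# 'power' (kW), 'temperature' (deg C), sampled every 10 minutes.
def flag_ice_losses(scada: pd.DataFrame, temp_limit=5.0, deficit=0.8):
    df = scada.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df["ws_bin"] = df["wind_speed"].round()          # 1 m/s wind-speed bins

    # Reference power curve from largely ice-free months (May-September).
    warm = df[df["timestamp"].dt.month.between(5, 9)]
    ref_curve = warm.groupby("ws_bin")["power"].median()

    df["expected"] = df["ws_bin"].map(ref_curve)
    cold = df["temperature"] < temp_limit            # Scenario 3 filter
    underperforming = df["power"] < deficit * df["expected"]
    mask = cold & underperforming & df["expected"].notna()

    # 10-minute samples: energy deficit per flagged sample in kWh.
    df["ice_loss_kwh"] = 0.0
    df.loc[mask, "ice_loss_kwh"] = (df["expected"] - df["power"])[mask] / 6.0
    return df

# Annual ice losses as a share of expected output (sketch):
# losses = flag_ice_losses(scada)
# share = losses["ice_loss_kwh"].sum() / (losses["expected"].sum() / 6.0)
```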
592
The Implementation of social CRM : Key features and significant challenges associated with the practical implementation of social Customer Relationship Management. Kansbod, Julia, January 2022.
The rise of social media has challenged the traditional notion of CRM and introduced a new paradigm known as social CRM. While there are many benefits and opportunities associated with the integration of social data in CRM systems, a majority of companies are failing their social CRM implementation. Since social CRM is still considered a young phenomenon, knowledge regarding its implementation and functionalities is limited. The purpose of this study is to contribute to the current state of knowledge regarding the factors which influence the practical implementation of social CRM. In order to capture state-of-the-art knowledge on this topic, a literature review was conducted. In addition, interviews with CRM experts working within five Swedish companies were included in order to gain additional insights from practice. Findings indicate that the key features needed for social CRM implementation revolve around the real-time monitoring, collection, processing, storing and analyzing of social data. Advanced technical tools, such as Big Data technology, are deemed necessary in order to handle large volumes of data and properly transform them into valuable knowledge. The most significant challenges identified revolve around limited knowledge as well as various technical and organizational limitations. Additionally, findings indicate that many of the practitioners' uncertainties concern data legislation and privacy. Hence, while social CRM can entail a multitude of benefits, a significant number of challenges seem to stand in the way of unlocking its full potential. In order for social CRM implementation to become more accessible for organizations in the future, there is a need for more knowledge and clarity regarding factors such as technical solutions, organizational changes and legislation.
593
The adoption of Industry 4.0-technologies in manufacturing : a multiple case study. NILSEN, SAMUEL; NYBERG, ERIC, January 2016.
Innovations such as combustion engines, electricity and assembly lines have all had a significant role in manufacturing, where the past three industrial revolutions have changed the way manufacturing is performed. Technical progress within the manufacturing industry continues at a high rate, and today's progress can be seen as part of the fourth industrial revolution, exemplified by "Industrie 4.0", the German government's vision of future manufacturing. Previous studies have investigated the benefits, progress and relevance of Industry 4.0-technologies, but little emphasis has been put on differences in the implementation and relevance of these technologies across and within industries. This thesis aims to investigate the adoption of Industry 4.0-technologies among and within selected industries and what patterns exist among them. Using a qualitative multiple case study consisting of firms from the aerospace, heavy equipment, automation, electronics and motor vehicle industries, we gain insight into how leading firms are implementing the technologies. In order to identify the factors determining how Industry 4.0-technologies are implemented and what common themes can be found, we introduce the concept of production logic, which is built upon the connection between the competitive priorities quality, flexibility, delivery time, cost efficiency and ergonomics. This thesis has two contributions. In the first contribution, we categorize technologies within Industry 4.0 into two bundles: the Human-Machine-Interface (HMI) bundle and the connectivity bundle. The HMI bundle includes devices for assisting operators in manufacturing activities, such as touchscreens, augmented reality and collaborative robots. The connectivity bundle includes systems for connecting devices and for collecting and analyzing data from the digitalized factory. The results of this master thesis indicate that the adoption of elements from the technology bundles differs depending on a firm's or industry's production logic. Firms where flexibility is dominant tend to implement elements from the HMI bundle to a larger degree. At the other end, firms with few product variations, where quality and efficiency dominate the production logic, tend to implement elements from the connectivity bundle in order to tightly monitor and improve quality in their assembly. Regardless of production logic, firms are implementing elements from both bundles, but with different compositions and applications. The second contribution is within the literature on technological transitions. Here we study the rise and development of the HMI bundle in the light of Geels' (2002) Multi-Level Perspective (MLP). It can be concluded that increased pressure on the landscape level, in the form of changes in the consumer market and in attitudes within the labor force, has created a gradual spread of the HMI bundle within industries. The bundles have also been studied through Rogers' (1995) five attributes of innovation, where the lack of testability and observability prevents increased application of M2M interfaces, and the high complexity of Big Data and analytics prevents that technology from being further applied. As the HMI bundle involves a number of technologies with large differences in properties, it is hard to draw any conclusion, using the attributes of innovation, about what limits their application.
594
Increasing susceptibility to branded content through hyper-personalization : A user study of the target audience for digital popular-culture magazines. Sombo, Alexandros, January 2015.
In this study I investigate how branded content is received by the target audience for digital popular-culture magazines through a prominent web-personalization technique, hyper-personalization. The target group for this study is young opinion formers who consume content from digital magazines such as Nöjesguiden. Branded content (sponsored content) is content created so that a brand becomes associated with a creator's audience. A brand might, for example, ask Nöjesguiden to create editorial content that should appeal to the magazine's target audience, so that the audience forms a new impression of, or retains a positive view of, the brand. Hyper-personalization is a technique applied to target content, services or products to individuals within an audience with high relevance. The technique requires a large amount of collected social data, and it is therefore interesting to discuss how the target group reacts to the fact that such an enormous collection of data can be carried out. An ethical discussion of the technique is also included in the report. To answer whether the susceptibility to branded content through hyper-personalization is good or not, a quantitative study in the form of a survey and a qualitative study with a user experiment were conducted. The survey was answered by 87 people from the target group, and four people from the target group took part in the user experiment. The choice of combining several methods was made to allow a broad discussion of the research question.
595
Accurately measuring content consumption on a modern Play service. Cederman, Mårten, January 2015.
This research represents an attempt to define and accurately measure user consumption of content on a modern, advertising-funded VOD service (AVOD), known in Sweden as a Play service. Building on previous research in the area of VOD and AVOD services, the characteristics and flaws of these types of platforms are discussed to shed light on factors that might concern Play services. Optimizing the vast content inventory offered on these services is crucial for long-term profitability, and to achieve this, content providers need to understand how to measure consumption properly. A content-centric approach was used to focus on individual formats (e.g. TV shows) and the factors that can describe their consumption. A macro perspective was initially applied to investigate global factors that dictate consumption. Analysis on a micro level was then carried out on tracking data collected for a full year (2014) from one of the biggest Play services in Sweden, TV3play.se. Ultimately, a new method for measuring consumption, the Consumption Volume Score (CVS), is proposed as an alternative to the traditional unit of measurement, the number of video starts (VS). Its validity was evaluated by comparing rank differences for individual formats using both methods and different criteria. The results show that measuring consumption with CVS yields little to no difference in the ranking of highly popular formats, while less consumed formats show a more varied change in rank. Further analysis of some of these formats indicated that they might have a dedicated niche audience, and content editors might see potential gains from hand-picking them to optimize consumption further. The findings support the belief that CVS as a unit for measuring consumption can help to further understand how individual formats perform, especially less consumed and potentially niched ones. Future research on CVS is recommended to discern its feasibility in a live context.
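To make the rank-comparison step concrete, the sketch below (Python) ranks formats under video starts (VS) and under an alternative consumption score and looks at the rank shift, mirroring the validation approach described in the abstract. The CVS formula itself is not given here, so the score computed below is purely a hypothetical placeholder, and the format names and numbers are invented for illustration.

```python
import pandas as pd

# Hypothetical per-format aggregates; CVS is NOT defined in the abstract,
# so this placeholder simply weights starts by completion rate.
formats = pd.DataFrame({
    "format": ["News Show", "Drama A", "Reality B", "Niche Doc"],
    "video_starts": [120_000, 80_000, 45_000, 4_000],
    "avg_completion": [0.35, 0.60, 0.50, 0.85],   # share of each video watched
})
formats["cvs"] = formats["video_starts"] * formats["avg_completion"]  # placeholder score

# Rank each format under both measures and inspect the rank difference.
formats["rank_vs"] = formats["video_starts"].rank(ascending=False).astype(int)
formats["rank_cvs"] = formats["cvs"].rank(ascending=False).astype(int)
formats["rank_shift"] = formats["rank_vs"] - formats["rank_cvs"]
print(formats[["format", "rank_vs", "rank_cvs", "rank_shift"]])
```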
596
Big Data Analytics of City Wide Building Energy Declarations. MA, YIXIAO, January 2015.
This thesis explores the building energy performance of the domestic sector in the city of Stockholm based on the building energy declaration database. The aims of this master thesis are to analyze the big data set of around 20,000 buildings in the Stockholm region and to explore the correlation between building energy performance and different internal and external factors affecting building energy consumption, such as building energy systems and building vintages. Using a clustering method, buildings with different energy consumption levels can be easily identified. Thereafter, the energy saving potential is estimated by setting step-by-step targets, while feasible energy saving solutions can also be proposed in order to drive building energy performance improvements at the city level. A brief introduction to several key concepts (energy consumption in buildings, building energy declarations and big data) serves as background information and helps to clarify the motivation for this master thesis. The methods used in this thesis include data processing, descriptive analysis, regression analysis, clustering analysis and energy saving potential analysis. The provided building energy declaration data is first processed in MS Excel and then reorganized in MS Access. For the data analysis process, IBM SPSS is introduced for the descriptive analysis and graphical representation. By defining different energy performance indicators, the descriptive analysis presents the energy consumption and its composition for different building classifications. The results also give the application details of different ventilation systems in different building types. Thereafter, the correlation between building energy performance and five different independent variables is analyzed using a linear regression model. Clustering analysis is further performed on the studied buildings in order to target low-energy-efficiency groups, and buildings with various energy consumption levels are identified and grouped based on their energy performance. This shows that the clustering method is quite useful in big data analysis, although some parameters in the clustering process need to be further adjusted in order to achieve more satisfactory results. The energy saving potential for the studied buildings is calculated as well. The conclusion is that the maximum potential for energy savings in the studied buildings is estimated at 43% (2.35 TWh) for residential buildings and 54% (1.68 TWh) for non-residential premises; the saving potential is also calculated for different building categories and different clusters.
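As an illustration of the clustering step described above, the sketch below (Python, scikit-learn) groups buildings by energy-use intensity and vintage with k-means and flags the highest-intensity clusters as candidate low-efficiency groups. The column names, the synthetic data, and the choice of four clusters are assumptions for illustration, not values taken from the thesis.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical declaration records: energy use intensity (kWh/m2 per year)
# and construction year for a sample of buildings.
rng = np.random.default_rng(0)
buildings = pd.DataFrame({
    "kwh_per_m2": rng.normal(150, 40, 500).clip(40, 350),
    "build_year": rng.integers(1900, 2015, 500),
})

# Standardize features so both contribute comparably, then cluster.
X = StandardScaler().fit_transform(buildings)
buildings["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Clusters with the highest mean intensity are candidate low-efficiency groups.
print(buildings.groupby("cluster")["kwh_per_m2"].agg(["mean", "count"]).sort_values("mean"))
```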
597
Big Data Analytics towards a Retrofitting Plan for the City of Stockholm. van der Heijde, Bram, January 2014.
This thesis summarises the outcomes of a big data analysis performed on a set of hourly district heating energy consumption data from 2012 for nearly 15,000 buildings in the City of Stockholm. The aim of the study was to find patterns and inefficiencies in the consumption data using KNIME, a big data analysis tool, and to initiate a retrofitting plan for the city to counteract these inefficiencies. By defining a number of energy saving scenarios, the potential for increased efficiency is estimated, and the resulting methodology can be used by other (smart) cities and policy makers to estimate savings potential elsewhere. In addition, the influence of weather circumstances, building location and building types is studied. The introduction gives a concise overview of the concepts Smart City and Big Data, together with their relevance for the energy challenges of the 21st century. Thereafter, a summary of the previous studies at the foundation of this research and a brief review of the less common methods used in this thesis are presented. The method of this thesis consisted of first understanding and describing the dataset using descriptive statistics, studying the annual fluctuations in energy consumption, and clustering all consumer groups per building class according to total consumption, consumption intensity and time of consumption. After these descriptive steps, a more analytical part starts with the definition of a number of energy saving scenarios. These are used to estimate the maximal potential for energy savings, regardless of actual measures or of financial and temporal aspects. This hypothetical simulation is supplemented with a more realistic retrofitting plan that explores the feasibility of Stockholm's Climate Action Plan for 2012-2015, using a limited set of energy efficiency measures and a fixed investment horizon. The analytical part is concluded with a spatial regression that sets out to determine the influence of wind velocity and temperature in different parts of Stockholm. The conclusions of this thesis are that the potential for energy savings in the studied data set can reach 59%, or 4.6 TWh. The financially justified savings are estimated at ca. 6% using favourable investment parameters; however, these savings quickly diminish because of a high sensitivity to the input parameters. The clustering analysis did not yield the anticipated results, but its output can be used as a tool to target investments towards groups of buildings that have a high return on investment.
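To illustrate the kind of weather regression mentioned above, the sketch below (Python) fits a simple linear model of hourly district-heating demand against outdoor temperature and wind speed. It is a plain (non-spatial) simplification of the spatial regression the abstract describes, and the data and coefficients are synthetic placeholders, not results from the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic hourly observations for one building: colder and windier hours
# are assumed to show higher district-heating demand.
rng = np.random.default_rng(1)
temperature = rng.uniform(-15, 20, 2000)          # deg C
wind_speed = rng.uniform(0, 12, 2000)             # m/s
demand = 60 - 2.5 * temperature + 1.2 * wind_speed + rng.normal(0, 5, 2000)  # kWh

X = np.column_stack([temperature, wind_speed])
model = LinearRegression().fit(X, demand)

# The fitted coefficients recover the assumed temperature and wind effects.
print("kWh per deg C:", model.coef_[0], "kWh per m/s wind:", model.coef_[1])
print("R^2:", model.score(X, demand))
```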
598
Groundwater-stream connectivity from minutes to months across United States basins as revealed by spectral analysis. Clyne, Jacob B., January 2021.
No description available.
599
Experimental Investigation of Container-based Virtualization Platforms for a Cassandra Cluster. Sulewski, Patryk; Hallborg, Jesper, January 2017.
Context. Cloud computing is growing fast and has established itself as the next generation of software infrastructure. A major role in cloud computing is played by the virtualization of hardware to isolate systems from each other. This virtualization is often done with virtual machines that emulate both hardware and software, which in turn makes process isolation expensive. Newer techniques, known as microservices or containers, have been developed to deal with this overhead. The infrastructure is closely tied to storing, processing and serving vast and unstructured data sets. The overall cloud system needs to have high performance while providing scalability and easy deployment. Microservices can be introduced for all kinds of applications in a cloud computing network, and may be a better fit for certain products. Objectives. In this study we investigate how a small system consisting of a Cassandra cluster performs while encapsulated in LXC and Docker containers, compared to a non-virtualized structure. A specific loader is built to stress the cluster to find the limits of the containers. Methods. We constructed an experiment on a three-node Cassandra cluster. Test data is sent with the Cassandra-loader from another server in the network. The Cassandra processes are then deployed in the different architectures and tested. During these tests the metrics CPU, disk I/O and network I/O are monitored on the four servers. The data from the metrics is used in statistical analysis to find significant deviations. Results. Three experiments were conducted and monitored. The cluster test pointed out that the isolated Docker container shows major latency during disk reads. A local stress test further confirmed those results. The step-wise test, in turn, implied that the disk read latencies occur because isolated Docker containers need to read more data to handle these requests. All microservices introduce some overhead, but they fall behind the most for read requests. Conclusions. The results in this study show that virtualization of Cassandra nodes in a cluster introduces latency for write operations compared to a non-virtualized solution. However, those latencies can be neglected if scalability of the system is the main focus. For read operations all microservices had reduced performance, and isolated Docker containers brought the highest overhead. This is due to the file system used in those containers, which makes disk I/O slower compared to the other structures. If a Cassandra cluster is to be launched in a container environment, we recommend a Docker container with mounted disks to bypass Docker's file system, or an LXC solution.
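As an illustration of the statistical comparison described in the Methods, the sketch below (Python, SciPy) tests whether disk-read latency samples from a containerized deployment deviate significantly from a non-virtualized baseline. The latency numbers are synthetic placeholders, not measurements from the thesis, and the choice of a Mann-Whitney U test is an assumption about a suitable non-parametric comparison.

```python
import numpy as np
from scipy import stats

# Synthetic per-request disk read latencies (ms) for two deployments.
rng = np.random.default_rng(7)
bare_metal = rng.normal(2.0, 0.4, 1000)
docker_isolated = rng.normal(2.6, 0.5, 1000)   # assumed overhead from the container file system

# Non-parametric test: do the two latency distributions differ significantly?
stat, p_value = stats.mannwhitneyu(bare_metal, docker_isolated, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p_value:.3g}")
if p_value < 0.05:
    print("Latency deviation is statistically significant at the 5% level.")
```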
600
Mobile cloud computing models and architectures for the development of cyber-physical systems. Colom López, José Francisco, 14 July 2016.
The current state and foreseeable evolution of information and communication technologies are characterized by a flood of devices, many of them mobile, with high processing, storage and network communication capabilities, together with an explosion of services provided remotely under various cloud computing paradigms. This reality drives the development of new applications, many of them focused on running analytical processes and extracting knowledge from the large mass of data produced, among other sources, by this multiplicity of devices and services. These new applications present significant challenges, such as the need to satisfy certain quality-of-service parameters, the execution of part of their processing in real time, the permanent availability of services, the preservation of privacy and energy savings, among others. The general objective of this work is to advance the research and development of distributed models and architectures capable of taking advantage of this great multiplicity of local and remote resources, in order to support the construction of applications that bring value to the digital society. The research activity falls within the scope of three important converging trends that shape the present and future of information and communication technologies: the Internet of Things (IoT), cloud computing and big data. Within this broad range, the work focuses particularly on so-called cyber-physical systems, in which the emphasis is on the analysis and control of physical processes rather than on the interconnection of devices in the global Internet. From a methodological point of view, the work belongs to the area of experimental computer science. The activity starts from the observation and detection of opportunities for improvement in specific applications. From this observation, a characterization is made that covers a whole set of possible services. Models, architectures and methods are then designed that provide solutions to the problems identified. Finally, a process of experimental validation and subsequent revision is carried out through the deployment of prototypes and simulations. The main contribution of the research is the conception and development of models and architectures based on two fundamental pillars that constitute the working hypothesis: (1) monitoring the load of local devices and cloud services, and (2) calculating or estimating the impact of the cyber-physical system's processes on the computing resources available in the network. The proposed solutions make it possible to increase the available resources in a flexible way, deploying strategies that weigh the relevant variables, such as changing connectivity conditions, the cost of utility models or energy consumption, while maintaining the constraints under which the application processes must run, such as real-time requirements, permanent availability or cost.
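As a concrete reading of the two pillars above, the sketch below (Python) shows a simplified offloading decision that weighs an estimated local execution cost against a remote (cloud) cost built from monitored device load, network conditions and an energy penalty. All names, formulas, weights and example values are hypothetical illustrations of the general idea, not the architecture proposed in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cpu_cycles: float      # estimated work, in megacycles
    data_mb: float         # input data to ship if offloaded

@dataclass
class NodeState:
    cpu_mhz: float         # effective local CPU speed
    load: float            # current utilization, 0..1 (from monitoring)
    uplink_mbps: float     # measured network throughput
    cloud_speedup: float   # how much faster the cloud executes the task
    energy_weight: float   # relative importance of battery vs latency

def should_offload(task: Task, node: NodeState) -> bool:
    """Compare estimated local vs remote completion cost (time plus an energy penalty)."""
    local_time = task.cpu_cycles / (node.cpu_mhz * max(1e-3, 1.0 - node.load))
    transfer_time = (task.data_mb * 8.0) / node.uplink_mbps
    remote_time = transfer_time + local_time / node.cloud_speedup
    # Crude energy proxy: local computation is assumed to drain more battery than radio transfer.
    local_cost = local_time * (1.0 + node.energy_weight)
    remote_cost = remote_time * (1.0 + 0.3 * node.energy_weight)
    return remote_cost < local_cost

# Example: a heavily loaded device with a decent uplink tends to offload.
task = Task(cpu_cycles=4000, data_mb=2.0)
node = NodeState(cpu_mhz=1500, load=0.8, uplink_mbps=20, cloud_speedup=5, energy_weight=0.5)
print("offload" if should_offload(task, node) else "run locally")
```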