
Operational Monitoring of Wind Turbines / Driftövervakning av vindkraftverk

Rosendal, David, Hansson, Ivar January 2020 (has links)
Unlike today's fully connected wind turbines, older models still in operation often retain their original analog control and monitoring systems. These turbines are difficult and expensive to maintain, and a visit is often required to confirm that the turbine is working properly, or to investigate any problems. Elvira Vind AB owns a wind turbine in Halmstad that was built in 1992. The technology in the turbine is outdated and there is a high risk of doing more harm than good when replacing parts, so the choice was made to implement new additional sensors instead. This thesis describes the development of an operations monitoring system for this analog wind turbine. It describes the integration of various sensors that collect data on wind speed, rotor speed, and movement with a LoRa network that transmits the data, and the presentation of that data to the user in Node-RED. It is demonstrated that it is fully possible to connect a number of different sensors to operationally monitor such a wind turbine, but the report also discusses faults that can occur when designing systems like these, which rely on communication chains that are less than completely reliable.
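The abstract above describes transmitting sensor readings (wind speed, rotor speed, movement) over LoRa, which has very tight payload budgets. A minimal sketch of how such readings might be packed into a compact binary uplink frame; the field layout, scaling factors, and function names are illustrative assumptions, not taken from the thesis:

```python
import struct

# Hypothetical compact payload for one LoRa uplink frame:
# wind speed (m/s) and rotor speed (rpm) scaled x100 into unsigned
# 16-bit integers, plus a 1-byte movement flag. Big-endian layout.
PAYLOAD_FMT = ">HHB"  # uint16, uint16, uint8 -> 5 bytes total

def encode_reading(wind_ms: float, rotor_rpm: float, moving: bool) -> bytes:
    """Scale floats by 100 to keep two decimals in integer fields."""
    return struct.pack(PAYLOAD_FMT,
                       round(wind_ms * 100),
                       round(rotor_rpm * 100),
                       int(moving))

def decode_reading(payload: bytes) -> dict:
    """Inverse of encode_reading, as a gateway-side consumer would run."""
    wind, rotor, moving = struct.unpack(PAYLOAD_FMT, payload)
    return {"wind_ms": wind / 100, "rotor_rpm": rotor / 100,
            "moving": bool(moving)}
```

A dashboard such as Node-RED would apply the decoding step before presenting the values to the user.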

Churn Prediction

Åkermark, Alexander, Hallefält, Mattias January 2019 (has links)
Churn analysis is an important tool for companies, as it can reduce the costs related to customer churn. Churn prediction is the process of identifying users before they churn; this is done by applying methods to collected data in order to find patterns that can be helpful when predicting new churners in the future. The objective of this report is to identify churners using surveys collected from different golf clubs, their members, and guests. This was accomplished by testing several different supervised machine learning algorithms in order to find the different classes and to see which supervised algorithms are most suitable for this kind of data. The margin of success was to achieve a greater accuracy than the share of the majority class in the dataset. The data was processed using label encoding, one-hot encoding, and principal component analysis, and was split into 10 folds, 9 training folds and 1 testing fold, ensuring cross-validation when iterated 10 times while rearranging the test and training folds. Each algorithm processed the training data to create a classifier, which was then tested on the test data. The classifiers used for the project were k-nearest neighbours, support vector machine, multi-layer perceptron, decision trees, and random forest. The classifiers generally had an accuracy of around 72%, and the best classifier, random forest, had an accuracy of 75%. All the classifiers had an accuracy above the margin of success. K-folding, confusion matrices, classification reports, and other internal cross-validation techniques were applied to the data to ensure the quality of the classifiers. The project was a success, although there is a strong belief that the bottleneck was the quality of the data, given new legislation on collecting and storing data that results in redundant and faulty records.
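The evaluation protocol described above, 10 folds rotated so every fold is tested exactly once, with the majority-class share as the accuracy floor any useful model must beat, can be sketched in plain Python. The 1-nearest-neighbour classifier below is an illustrative stand-in for the thesis's model suite, not its actual implementation:

```python
import random
from collections import Counter

def k_fold_indices(n, k=10, seed=0):
    """Shuffle indices once, then deal them into k folds round-robin."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nn_predict(train_x, train_y, x):
    """1-nearest-neighbour by squared Euclidean distance."""
    best = min(range(len(train_x)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(train_x[i], x)))
    return train_y[best]

def cross_validate(xs, ys, k=10):
    """Train on k-1 folds, test on the held-out fold, rotate k times."""
    folds = k_fold_indices(len(xs), k)
    correct = 0
    for f in range(k):
        train = [i for g in range(k) if g != f for i in folds[g]]
        tx = [xs[i] for i in train]
        ty = [ys[i] for i in train]
        correct += sum(nn_predict(tx, ty, xs[i]) == ys[i] for i in folds[f])
    return correct / len(xs)

def majority_share(ys):
    """Share of the majority class: the accuracy a model must exceed."""
    return Counter(ys).most_common(1)[0][1] / len(ys)
```

On the thesis's data this baseline corresponds to the roughly 72% figure the classifiers had to clear.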

Live VM Migration : Principles and Performance / Livemigrering av Virtuella Maskiner : Principer och Prestanda

Svärd, Petter January 2012 (has links)
Virtualization is a key technology for cloud computing, as it allows several operating system instances to run on the same machine, enhances resource manageability, and enables flexible definition of billing units. Virtualization works by adding a software layer, a hypervisor, on top of the hardware platform. Virtual machines (VMs) run on top of the hypervisor, which provisions hardware resources to the VM guests. In addition to enabling higher utilization of hardware resources, the ability to move VMs from one host to another is an important feature. Live migration is the concept of migrating a VM while it is running and responding to requests. Since VMs can be relocated while running, live migration allows for better hardware utilization, because placement of services can be performed dynamically and not only when they are started. Live migration is also a useful tool for administrative purposes: if a server needs to be taken off-line for maintenance, it can be cleared of services by live migrating them to other hosts. This thesis investigates the principles behind live migration. The common live migration approaches in use today are evaluated, and common objectives are presented, as well as challenges that have to be overcome in order to implement an ideal live migration algorithm. The performance of common live migration approaches is also evaluated, and it is found that even though live migration is supported by most hypervisors, it has drawbacks that make the technique hard to use in certain situations. Migrating CPU- and/or memory-intensive VMs, or migrating VMs over low-bandwidth links, is a problem regardless of which approach is used. To tackle this problem, two improvements to live migration are proposed and evaluated: delta compression and dynamic page transfer reordering. Both improvements demonstrate better performance than the standard algorithm when migrating CPU- and/or memory-intensive VMs and when migrating over low-bandwidth links. Finally, recommendations are made on which live migration approach to use depending on the scenario, and on which improvements to the standard live migration algorithm should be used and when.
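Delta compression, one of the two proposed improvements, exploits the fact that a dirty memory page retransmitted during migration often differs from its previously sent copy in only a few bytes. The sketch below illustrates the core idea, XOR-ing page versions so the difference is mostly zeros and compresses extremely well; the 4 KiB page size and the zlib codec are assumptions for illustration, not the thesis's implementation:

```python
import zlib

PAGE_SIZE = 4096  # assumed page size for this sketch

def delta_page(old: bytes, new: bytes) -> bytes:
    """XOR old and new copies of a page; unchanged bytes become zeros."""
    return bytes(a ^ b for a, b in zip(old, new))

def compress_dirty_page(old: bytes, new: bytes) -> bytes:
    """What the source host would send instead of the full new page."""
    return zlib.compress(delta_page(old, new))

def apply_delta(old: bytes, blob: bytes) -> bytes:
    """Destination host reconstructs the new page from old + delta."""
    delta = zlib.decompress(blob)
    return bytes(a ^ b for a, b in zip(old, delta))
```

The win comes on low-bandwidth links: a page with a handful of changed bytes shrinks to a few dozen bytes on the wire instead of a full page.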

Comparison between OpenStack virtual machines and Docker containers in regards to performance

Bonnier, Victor January 2020 (has links)
Cloud computing is a fast-growing technology that more and more companies have started to use over the years. When deploying a cloud computing application it is important to know what kind of technology you should use. Two popular technologies are containers and virtual machines. The objective of this study was to find out how performance differs between Docker containers and OpenStack virtual machines in regards to memory usage, CPU utilization, boot time, and throughput from a scalability perspective, when scaling between two and four instances of containers and virtual machines. The comparison was done by having two different virtual machines running: one with Docker, which ran the containers, and another with OpenStack, which ran a stack of virtual machines. To gather data from the virtual machines the command "htop" was used, and to get the data from the containers, the command "docker stats". The results from the experiment favored the Docker containers: boot time for the virtual machines was between 280 and 320 seconds, while the containers booted in 5 to 8 seconds. Memory usage was more than double on the virtual machines compared to the containers. CPU utilization and throughput also favored the containers, and the performance gap increased when scaling the application out to four instances in all cases except throughput when adding information to a database. The conclusion that can be drawn is that Docker containers are favored over OpenStack virtual machines from a performance perspective. There are still other aspects to consider when choosing which technology to use for deploying a cloud application, such as security.

Safe Configurable Multitenant SaaS / Säker konfigurerbar multitenant SaaS

Leijonhufvud, Adam, Håkansson, Filip January 2020 (has links)
Cloud computing is a significant step forward in computer science. It enables customers to use applications on devices such as telephones, tablets, and computers over the internet. However, for some applications, moving to the cloud can be challenging. Enterprise Resource Planning (ERP) is one example of such an application. ERPs need to be configurable, since each company is different and has unique use cases. These configurations could be done by manipulating the logic and execution of programs, extending or modifying existing classes, essentially writing customized plugins. The customer or the vendor could easily configure a traditional offline single-tenant ERP in this way. Today, however, offering this level of customization in a cloud-based multi-tenant ERP system is not an easy task: since every customer shares the same application, though isolated from each other, changes made for one customer are made for every customer. Therefore, in this paper, we aim to find one or several answers to the question: how can you enable deep customization in multi-tenant SaaS systems in a secure way? A structured literature study is performed to analyze and investigate different solutions. The results gathered from the literature study showed that three solutions could be adapted: microservices, extensible programming, and static analysis tools. However, based on some requirements, extensible programming was found most suitable for the investigated ERP.

Docker forensics: Investigation and data recovery on containers / Dockerforensik: Undersökning och datautvinning av containers

Davidsson, Pontus, Englund, Niklas January 2020 (has links)
Container technology continuously grows in popularity, and the forensic area is less explored than other areas of research concerning containers. The aim of this thesis is, therefore, to explore Docker containers in a forensic investigation to test whether data can be recovered from deleted containers and how malicious processes can be detected in active containers. The results of the experiments show that, depending on which container is used, and how it is configured, data sometimes persists after the container is removed. Furthermore, file carving is tested and evaluated as a useful method of recovering lost files from deleted containers, should data not persist. Lastly, tests reveal that malicious processes running inside an active container can be detected by inspection from the host machine.
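File carving, the recovery method tested in the thesis, scans raw bytes for known file signatures rather than relying on filesystem metadata. The sketch below is a naive illustration for JPEG start/end markers; real carvers such as Foremost or Scalpel handle fragmentation and false positives that this toy version ignores:

```python
# JPEG images start with an SOI marker and end with an EOI marker.
JPEG_SOI = b"\xff\xd8\xff"
JPEG_EOI = b"\xff\xd9"

def carve_jpegs(blob: bytes) -> list:
    """Naive header/footer carving over a raw byte dump: find each
    SOI marker and cut at the next EOI marker that follows it."""
    carved, pos = [], 0
    while True:
        start = blob.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = blob.find(JPEG_EOI, start + len(JPEG_SOI))
        if end == -1:
            break
        carved.append(blob[start:end + len(JPEG_EOI)])
        pos = end + len(JPEG_EOI)
    return carved
```

Run over a dump of the storage driver's backing directory (or an image of the host disk), this style of scan can recover files even after the container that held them is removed.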

Development of a prototype framework for monitoring application events / Utveckling av ett monitoreringsramverk för applikationshändelser

Persson, Edvin January 2020 (has links)
Software rarely comes without maintenance after it is released. There can be bugs not caught in development, or performance that does not meet expectations; therefore, it is crucial to be able to collect data from running software in order to address such issues preemptively. A common way to monitor the general health of a system is through the users' perspective, so-called "black-box" monitoring. A more sophisticated analysis of software requires code that adds no functionality to the software itself, and whose sole purpose is to produce data about the software. A common way of creating such data is logging. While logging can be used in the general case, more specific solutions can offer an easier pipeline to work with, though they are not suited for tasks such as root-cause analysis. This study briefly looks at four different frameworks, each with a different approach to collecting and structuring data. It also covers the development of a proof-of-concept framework that creates structured events through logging, along with a SQL-server database to store the event data.
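The pipeline described above, structured events created through logging and persisted in a SQL database, might look roughly like this minimal sketch. The schema, the function names, and the use of SQLite in place of the thesis's SQL-server database are illustrative assumptions:

```python
import json
import sqlite3
import time

def open_event_store(path=":memory:"):
    """Create (or reopen) the event table. Schema is illustrative."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS events"
               " (ts REAL, name TEXT, payload TEXT)")
    return db

def emit(db, name, **fields):
    """Record one structured event: a timestamp, an event name,
    and arbitrary key/value fields serialized as JSON."""
    db.execute("INSERT INTO events VALUES (?, ?, ?)",
               (time.time(), name, json.dumps(fields)))
    db.commit()

def query(db, name):
    """Fetch the field payloads of all events with a given name."""
    rows = db.execute(
        "SELECT payload FROM events WHERE name = ? ORDER BY rowid", (name,))
    return [json.loads(p) for (p,) in rows]
```

Because every event carries a name and structured fields rather than free-form text, queries and aggregation stay trivial, which is the advantage over plain log lines that the abstract alludes to.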

Recommender System for Retail Industry : Ease customers’ purchase by generating personal purchase carts consisting of relevant and original products

CARRA, Florian January 2016 (has links)
In this study we explore the problem of purchase cart recommendation in the field of retail. How can we push the right customized purchase cart, one that considers both habit and serendipity constraints? Recommender system applications have largely been restricted to Internet service providers: movie recommendation, e-commerce, search engines. We brought algorithmic and technological breakthroughs to outdated retail systems while keeping their specificities in mind: purchase carts rather than single products, and restricted interactions between customers and products. After collecting ingenious recommendation methods, we defined two major directions, correctness and serendipity, that serve as discriminating aspects to compare the multiple solutions we implemented. We expect our solutions to have a beneficial impact on customers, saving them time and broadening their choices, and to gradually erode the separation between supermarkets and e-commerce platforms as far as customized experience is concerned.
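A cart-level recommender of the kind described above can be grounded in pair co-occurrence counts over historical carts. The sketch below is an illustrative baseline, not the method implemented in the study; it captures the correctness direction, while a serendipity term would additionally down-weight the most obvious co-purchases:

```python
from collections import Counter
from itertools import combinations

def co_occurrence(carts):
    """Count how often each product pair appears in the same cart."""
    pairs = Counter()
    for cart in carts:
        for a, b in combinations(sorted(set(cart)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(cart, carts, n=2):
    """Score candidate products by their co-occurrence with the
    current cart's items, excluding items already in the cart."""
    pairs = co_occurrence(carts)
    scores = Counter()
    for item in cart:
        for (a, b), count in pairs.items():
            if a == item and b not in cart:
                scores[b] += count
            elif b == item and a not in cart:
                scores[a] += count
    return [p for p, _ in scores.most_common(n)]
```

This respects the study's key specificity: the unit of interaction is a whole cart, not a single product rating.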

Data Centres vs Server Rooms: How sustainable are cloud-based solutions? / Datacenter vs Serverhallar : Hur hållbart är molnbaserade lösningar?

Larsson, Joacim January 2022 (has links)
This thesis examines the life cycles of a data centre and a local server to conclude which of them has the larger environmental impact. Data centres are warehouse-like buildings full of servers, hosting the so-called "cloud". Storing your data and information in a data centre and using its software and other resources over the internet is called cloud computing. The more traditional alternative to cloud computing is to have your own servers locally. You can access these without the internet, but you must operate and service them yourself. There are some benefits to using cloud computing instead of local servers; one often cited is energy efficiency, and with it a smaller environmental impact. This thesis sets out to confirm or refute that statement. To do so, a comparative Life Cycle Assessment (LCA) has been performed. A comparative LCA is a method for comparing two similar systems and how much environmental impact each has had during its lifetime. The amount of carbon dioxide equivalent emissions per virtual machine was used as the functional unit. The assessment was made on a hypothetical data centre and a hypothetical local server. The results showed that the local server had around 9.5 times more carbon dioxide emissions per virtual machine than the data centre, which confirms that the data centre has the lesser environmental impact of the two. This was mostly because of more efficient use of the servers and the fact that a data centre can recover some of its waste heat.
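The functional unit used in the comparison, carbon dioxide equivalent emissions per virtual machine over the system's lifetime, reduces to simple arithmetic once the inventory totals are known. The numbers below are placeholders chosen for illustration, not the thesis's inventory data:

```python
def co2_per_vm(embodied_kg, annual_use_kg, lifetime_years, vms):
    """kg CO2-eq per virtual machine over the lifetime: embodied
    (manufacturing) emissions plus use-phase emissions, divided by
    the number of VMs the system hosts."""
    return (embodied_kg + annual_use_kg * lifetime_years) / vms

# Placeholder inventories: a small local server hosting few VMs vs a
# data centre whose larger footprint is amortized over many VMs.
local_server = co2_per_vm(embodied_kg=500, annual_use_kg=300,
                          lifetime_years=5, vms=4)
data_centre = co2_per_vm(embodied_kg=2000, annual_use_kg=1500,
                         lifetime_years=5, vms=60)
```

Even with these made-up inputs, the amortization effect is visible: the per-VM figure for the data centre comes out far lower, which is the mechanism behind the thesis's result.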

Real-time View-dependent Triangulation of Infinite Ray Cast Terrain

Cavallin, Fritjof, Pettersson, Timmie January 2019 (has links)
Background. Ray marching is a technique that can be used to render images of infinite terrains defined by a height field, by sampling consecutive points along a ray until the terrain surface is intersected. However, this technique can be expensive, and it does not generate a mesh representation, which may be useful in certain use cases. Objectives. The aim of the thesis is to implement an algorithm for view-dependent triangulation of infinite terrains in real-time, without making use of any preprocessed data, and to compare the performance and visual quality of the implementation with that of a ray marched solution. Methods. Performance metrics for both implementations are gathered and compared. Rendered images from both methods are compared using an image quality assessment algorithm. Results. In all tests performed, the proposed method achieves a higher frame rate than the ray marched version. The visual similarity between the two methods depends highly on the quality setting of the triangulation. Conclusions. The proposed method can perform better than a ray marched version, but it is more reliant on CPU processing and can suffer from visual popping artifacts as the terrain is refined.
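The sampling loop described in the background, stepping along a ray until a sample dips below the height field, can be sketched as follows. The toy height function and the fixed step size are illustrative assumptions; production ray marchers use adaptive step sizes and level-of-detail tricks:

```python
import math

def height(x, z):
    """Toy procedural height field standing in for the terrain."""
    return math.sin(x * 0.5) * math.cos(z * 0.5)

def ray_march(origin, direction, max_t=100.0, step=0.1):
    """Sample consecutive points along the ray; return the ray
    parameter t at the first sample below the surface, or None."""
    t = 0.0
    while t < max_t:
        x = origin[0] + direction[0] * t
        y = origin[1] + direction[1] * t
        z = origin[2] + direction[2] * t
        if y < height(x, z):  # passed below the terrain surface
            return t
        t += step
    return None
```

The cost the thesis highlights is visible here: every pixel pays for potentially hundreds of height-field samples, and no triangle mesh falls out of the process.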
