About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Cloud computing ve firemním prostředí / Cloud Computing in Business

Šíma, Josef, January 2015
The thesis deals with the use of cloud computing in business. The aim is to present selected cloud services currently available on the market and then to compare and evaluate them against a set of criteria. After the characteristics of the technology, including its advantages and disadvantages, have been outlined, a detailed description of the chosen products follows: Microsoft Office 365, G Suite (formerly known as Google Apps) and Dropbox Business. In the next part of the thesis, the services are scored against each of the selected criteria using Kepner-Tregoe decision analysis. The criteria include, among others, usability of the user interface, security, and deployment in a company.
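The Kepner-Tregoe scoring step described above boils down to a weighted sum of criterion scores. A minimal sketch follows; the criteria weights and per-service scores are hypothetical placeholders, not values taken from the thesis.

    # Hypothetical Kepner-Tregoe style weighted scoring of the three cloud suites.
    # Weights (1-10) and per-service scores (1-10) are made-up illustration values.
    weights = {"ui_usability": 8, "security": 10, "deployment": 6}

    scores = {
        "Microsoft Office 365": {"ui_usability": 8, "security": 9, "deployment": 7},
        "G Suite":              {"ui_usability": 9, "security": 8, "deployment": 8},
        "Dropbox Business":     {"ui_usability": 7, "security": 8, "deployment": 9},
    }

    def weighted_total(service_scores):
        """Sum of score * weight over all criteria."""
        return sum(weights[c] * s for c, s in service_scores.items())

    for service, s in sorted(scores.items(), key=lambda kv: -weighted_total(kv[1])):
        print(f"{service}: {weighted_total(s)}")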
2

Specialization of an Existing Image Recognition Service Using a Neural Network

Ersson, Sara, Dahl, Oskar, January 2018
To help combat the environmental impact caused by humans, this project investigates one way to simplify the waste management process. The idea is to use image recognition to identify what material a recyclable object is made of. A large data set containing labeled images of trash, called Trashnet, was analyzed using Google Cloud Vision. Since this API is not written for material detection specifically, a feed-forward neural network was created using TensorFlow and trained on the output from Google Cloud Vision. The network thus learned how different word combinations from Google Cloud Vision indicated one of five materials: glass, plastic, paper, metal and combustible waste. The network checked for 518 unique words in the input and ran them through two hidden layers of 1000 nodes each, followed by a one-hot output layer. This neural network achieved an accuracy of around 60%, which beat Google Cloud Vision's accuracy of around 30%. An application with which the user can take pictures of the object he or she would like to recycle could be developed for an educational purpose: letting its user know what material the waste is made of and, with this information, throw it in the right bin.
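The classifier described above can be sketched in a few lines of TensorFlow. The layer sizes (518 input words, two hidden layers of 1000 nodes, five output classes) follow the abstract; the activation functions, optimizer and loss are assumptions, not details reported by the authors.

    import tensorflow as tf

    # Sketch of the described classifier: 518 binary word features in,
    # two hidden layers of 1000 nodes, one-hot output over five materials.
    NUM_WORDS = 518
    MATERIALS = ["glass", "plastic", "paper", "metal", "combustible"]

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1000, activation="relu", input_shape=(NUM_WORDS,)),
        tf.keras.layers.Dense(1000, activation="relu"),
        tf.keras.layers.Dense(len(MATERIALS), activation="softmax"),
    ])

    # Assumed training setup; the thesis may have used different choices.
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(word_vectors, one_hot_labels, epochs=10)  # inputs built from Cloud Vision labels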
3

Inferring models from cloud APIs and reasoning over them : a tooled and formal approach / Inférer des modèles à partir d'APIs cloud et raisonner dessus : une approche outillée et formelle

Challita, Stéphanie, 21 December 2018
With the advent of cloud computing, different cloud providers with heterogeneous cloud services and Application Programming Interfaces (APIs) have emerged. This heterogeneity complicates the implementation of an interoperable multi-cloud system. Among the multi-cloud interoperability solutions, Model-Driven Engineering (MDE) has proven to be quite advantageous and is the most widely adopted methodology for rising in abstraction and masking the heterogeneity of the cloud. However, most of the existing MDE solutions for the cloud are not representative of the cloud APIs and lack formalization. To address these shortcomings, I present in this thesis an approach based on the Open Cloud Computing Interface (OCCI) standard, MDE, and formal methods. I provide two major contributions implemented in the context of the OCCIware project. First, I propose a reverse-engineering approach to extract knowledge from the ambiguous textual documentation of cloud APIs and to enhance its representation using MDE techniques. This approach is applied to Google Cloud Platform (GCP), where I provide GCP Model, a precise model-driven specification for GCP that is automatically inferred from GCP textual documentation. Second, I propose the fclouds framework to achieve semantic interoperability in multi-clouds, i.e., to identify the common concepts between cloud APIs and to reason over them. The fclouds language is a formalization of OCCI concepts and operational semantics in the Alloy formal specification language. To demonstrate the effectiveness of the fclouds language, I formally specify thirteen case studies and verify their properties.
4

Separation and Extraction of Valuable Information From Digital Receipts Using Google Cloud Vision OCR

Johansson, Elias, January 2019
Automation is a desirable feature in many business areas. Manually extracting information from a physical object such as a receipt is something that can be automated to save resources for a company or a private person. This paper describes the process of combining an existing OCR engine with a Python script developed to extract valuable information from a digital image of a receipt. Values such as VAT, VAT%, date, and total, gross and net cost are considered valuable information. This feature has already been implemented in existing applications; however, the company this project was done for is interested in creating its own version. The project is an experiment to see whether it is possible to implement such an application using restricted resources and to develop a program that can extract the information mentioned above. The paper walks through the development of the program, as well as the mindset, the findings and the steps taken to overcome the problems encountered along the way. The program achieved a success rate of 86.6% in extracting the most valuable information (total cost, VAT% and date) from a set of 53 receipts originating from 34 separate establishments.
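The post-OCR extraction step can be sketched as simple pattern matching over the recognized text. In the sketch below the OCR output, field names and regular expressions are illustrative assumptions and do not reproduce the script developed in the paper.

    import re

    # Assumed example of OCR output from a receipt; real output varies per shop.
    ocr_text = "ICA Supermarket\n2019-03-14\nTotal 125.50\nMoms 12% 13.45\n"

    def extract_fields(text):
        """Pull the date, total cost and VAT out of raw OCR text with simple regexes."""
        fields = {}
        date = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)
        total = re.search(r"total\s+(\d+[.,]\d{2})", text, re.IGNORECASE)
        vat = re.search(r"moms\s+(\d+)\s*%\s+(\d+[.,]\d{2})", text, re.IGNORECASE)
        if date:
            fields["date"] = date.group(1)
        if total:
            fields["total"] = float(total.group(1).replace(",", "."))
        if vat:
            fields["vat_percent"] = int(vat.group(1))
            fields["vat"] = float(vat.group(2).replace(",", "."))
        return fields

    print(extract_fields(ocr_text))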
5

Aplikace pro komunikaci se ztraceným mobilním telefonem / Application for Communication with the Lost Mobile Phone

Sládek, Petr, January 2015
This thesis is dedicated to communication between a lost mobile device and its owner over the Internet, with the goal of making it possible to find, recover or lock the device. It focuses on an analysis of existing solutions and on the design of a custom mobile application for the Android platform with a supporting web application. The thesis also summarizes the basic principles of creating applications for Android OS and of communicating with the cloud-based service Google Cloud Messaging.
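A server-side push of the kind described could look roughly like the sketch below, which uses the legacy GCM/FCM HTTP interface; the server key, device token and payload fields are placeholders, and GCM itself has since been superseded by Firebase Cloud Messaging.

    import json
    import urllib.request

    # Placeholders; a real server API key and device registration token are required.
    SERVER_KEY = "YOUR_SERVER_API_KEY"
    DEVICE_TOKEN = "DEVICE_REGISTRATION_TOKEN"

    def send_locate_command():
        """Ask the lost device to report its position via a data push message."""
        payload = {
            "to": DEVICE_TOKEN,
            "data": {"command": "report_location"},  # interpreted by the Android app
        }
        request = urllib.request.Request(
            "https://fcm.googleapis.com/fcm/send",  # legacy GCM-compatible endpoint
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": "key=" + SERVER_KEY,
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(request) as response:
            return response.read().decode("utf-8")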
6

Android IP kamera / Android IP Camera

Chvála, Jan, January 2015
The goal of this thesis is to design a system that allows video data to be streamed from a mobile device and played back in real time in a standard web browser. The technological background and the implementation platform are both part of this thesis. Web Real-Time Communication (WebRTC) technology was used for acquiring multimedia data on the mobile device. This technology is natively supported in the latest major web browsers and in the WebView component (Android version 5.0 and above). Sending push notifications from a server to a mobile device to start the streaming is done with Google Cloud Messaging. The resulting system lets a user start the application on the mobile device and access it easily from a web browser. This starts the multimedia stream from the device, which can be parameterized and secured with a password. The benefit of this thesis is its overview of WebRTC technology and its demonstration; the IP camera implementation shows how easy it is to use WebRTC in real applications.
7

Voice-controlled order system

Höijer, David, Jansson, Hannes, January 2021
Ordering pick-up food from a computer or phone is nothing new. Food delivery companies such as FoodHero and Uber Eats, along with many others around the world, base their entire business on the food ordering and delivery process. Standing out in such a vast market can be tricky, and sometimes a company needs a niche to stand out in the crowd. This project aims to create such a niche in an order system prototype based on voice-controlled systems and conversation. The prototype allows users to place food orders using only natural speech and a voice assistant. It uses products and services from both Amazon and Google to build the order system structure, and it also takes advantage of the serverless architecture that both Amazon and Google provide. The end result of this project is a simple, convenient and user-friendly prototype.
8

En jämförelse i kostnad och prestanda för molnbaserad datalagring / A comparison in cost and performance for cloud-based data storage

Burgess, Olivia, Oucif, Sara, January 2024
As data volumes grow and the demands for scalability and availability within cloud services increase, the need for studies of their performance and cost-effectiveness grows with them. These analyses are crucial for optimizing services and for providing businesses with valuable recommendations so they can make well-grounded decisions about cloud data storage. This thesis examines cost and performance for relational and non-relational data storage solutions implemented on Microsoft Azure and Google Cloud Platform. The tool Hyperfine is used to measure latency, and the cost efficiency of each cloud service is calculated from this result together with its estimated monthly cost. The results for relational data storage indicate that Azure SQL Database initially exhibits low latency that then increases proportionally with the data volume, while Google Cloud SQL shows slightly higher latency at smaller data volumes but more consistent latency with more data. Azure SQL Database is more cost-effective, making it a more favorable option than Google Cloud SQL for companies seeking high performance at lower cost. Among the evaluated services for non-relational data storage, Azure Cosmos DB consistently demonstrates lower latency and superior cost efficiency compared to Google Cloud Datastore, making it the preferred solution for companies prioritizing economic efficiency in their database management.
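The abstract does not state how latency and monthly cost are combined into a single cost-efficiency figure, so the sketch below assumes a simple score in which lower latency and lower cost both raise the value; the latency and price numbers are invented for illustration.

    # Illustrative cost-efficiency score: inverse latency times inverse monthly cost.
    services = {
        # service: (mean latency in ms, monthly cost in USD) -- invented numbers
        "Azure SQL Database":     (12.0, 150.0),
        "Google Cloud SQL":       (15.0, 170.0),
        "Azure Cosmos DB":        (8.0,  120.0),
        "Google Cloud Datastore": (11.0, 140.0),
    }

    def cost_efficiency(latency_ms, monthly_cost):
        """Higher is better: scaled inverse of latency multiplied by cost."""
        return 1000.0 / (latency_ms * monthly_cost)

    for name, (latency, cost) in sorted(services.items(),
                                        key=lambda kv: -cost_efficiency(*kv[1])):
        print(f"{name}: {cost_efficiency(latency, cost):.3f}")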
9

Extending the Cutting Stock Problem for Consolidating Services with Stochastic Workloads

Hähnel, Markus, Martinovic, John, Scheithauer, Guntram, Fischer, Andreas, Schill, Alexander, Dargie, Waltenegus, 16 May 2023
Data centres and similar server clusters consume a large amount of energy, but not all of the consumed energy produces useful work. Servers consume a disproportionate amount of energy when they are idle, underutilised or overloaded. The effect of these conditions can be minimised by balancing the demand for and the supply of resources through careful prediction of future workloads and their efficient consolidation. In this paper we extend the cutting stock problem to consolidate workloads with stochastic characteristics. To this end, we employ the aggregate probability density function of co-located and simultaneously executing services to establish valid patterns; a valid pattern is one yielding an overall resource utilisation below a set threshold. We tested the scope and usefulness of our approach on a 16-core server with 29 different benchmarks. The workloads of these benchmarks were generated from the CPU utilisation traces of 100 real-world virtual machines obtained from a Google data centre hosting more than 32000 virtual machines. Altogether, we considered 600 different consolidation scenarios in our experiment. We compared the performance of our approach (system overload probability, job completion time and energy consumption) with four existing and proposed scheduling strategies. In each category our approach incurred a modest penalty with respect to the best-performing approach in that category, but overall it achieved a remarkable performance, clearly demonstrating its capacity to achieve the best trade-off between resource consumption and performance.
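The notion of a valid pattern can be illustrated with a small Monte Carlo check: sample the utilisation of each co-located service, sum the samples, and estimate how often the sum exceeds the capacity threshold. The per-service distributions, the threshold and the tolerance below are invented for illustration and are not taken from the paper.

    import random

    # Invented per-service CPU utilisation models (mean, std), as a fraction of capacity.
    services = {"web": (0.30, 0.10), "db": (0.45, 0.15), "batch": (0.25, 0.20)}
    THRESHOLD = 1.0            # available capacity
    OVERLOAD_TOLERANCE = 0.05  # accept patterns whose overload probability stays below 5%

    def overload_probability(pattern, samples=100_000):
        """Estimate P(sum of co-located utilisations > THRESHOLD) by sampling."""
        overloads = 0
        for _ in range(samples):
            total = sum(max(0.0, random.gauss(*services[s])) for s in pattern)
            if total > THRESHOLD:
                overloads += 1
        return overloads / samples

    pattern = ["web", "db", "batch"]
    p = overload_probability(pattern)
    print(f"overload probability {p:.3f}: "
          f"{'valid' if p <= OVERLOAD_TOLERANCE else 'invalid'} pattern")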
10

Assessing Query Execution Time and Implementational Complexity in Different Databases for Time Series Data / Utvärdering av frågeexekveringstid och implementeringskomplexitet i olika databaser för tidsseriedata

Jama Mohamud, Nuh, Söderström Broström, Mikael, January 2024
Traditional database management systems are designed for general-purpose data handling and fail to work efficiently with time-series data due to characteristics such as high volume, rapid ingestion rates and a focus on temporal relationships. However, which solution is best is not a trivial question to answer. This thesis therefore analyzes four different database management systems (DBMSs) to determine their suitability for managing time series data, with a specific focus on Internet of Things (IoT) applications. The DBMSs examined are PostgreSQL, TimescaleDB, ClickHouse and InfluxDB. The thesis evaluates query performance across varying dataset sizes and time ranges, as well as the implementational complexity of each DBMS. The benchmarking results indicate that InfluxDB consistently delivers the best performance, though it involves higher implementational complexity and time consumption. ClickHouse emerges as a strong alternative with the second-best performance and the simplest implementation. The thesis also identifies potential biases in the benchmarking tools and suggests that TimescaleDB's performance may have been affected by configuration errors. The findings provide significant insights into the performance metrics and implementation challenges of the selected DBMSs. Despite limitations in fully addressing the research questions, this thesis offers a valuable overview of the examined DBMSs in terms of performance and implementational complexity. These results should be considered alongside additional research when selecting a DBMS for time series data.
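A minimal sketch of how per-query latency can be sampled for such a comparison follows; the repetition count and the placeholder workload stand in for real queries against each DBMS and do not reproduce the thesis setup.

    import statistics
    import time

    def time_query(run_query, repetitions=30):
        """Run a query callable repeatedly and report mean and p95 latency in milliseconds."""
        latencies = []
        for _ in range(repetitions):
            start = time.perf_counter()
            run_query()  # e.g. cursor.execute(...) against the DBMS under test
            latencies.append((time.perf_counter() - start) * 1000.0)
        latencies.sort()
        return {
            "mean_ms": statistics.mean(latencies),
            "p95_ms": latencies[int(0.95 * (len(latencies) - 1))],
        }

    # Placeholder workload standing in for a real time-range query.
    def dummy_query():
        sum(i * i for i in range(10_000))

    print(time_query(dummy_query))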
