91

Interactive Mobile Augmented Reality For Fitness Activities

Koech, Irene January 2020 (has links)
Augmented reality (AR) has revolutionized the way people view the real world, and it has been used across a range of sectors. Recently, researchers have examined its possibilities for improving user experience. However, there are few studies on how AR users interact with online data resources. The aim of this study is to propose a mobile augmented reality interface through which users can interact and engage with online data. The prototype is based on the framework of the PEAR (Public Engagement Augmented Reality) initiative for further AR development. The PEAR framework provides an AR extension that enables users to engage with online information through an AR representation [1]. The prototype was developed and implemented using the Unity game engine, C#, and the Vuforia SDK on the front-end, with a Node.js server and MongoDB on the back-end. It was then tested and used in a 2-week user study to analyse and validate the framework.
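As a rough illustration of the back-end half of such a prototype, the sketch below pairs an Express route with the MongoDB Node.js driver; the endpoint, database, and collection names are hypothetical, not taken from the thesis.

```typescript
// Minimal sketch of a back-end that serves online data to an AR client.
// Endpoint, database, and collection names are hypothetical.
import express from "express";
import { MongoClient } from "mongodb";

const app = express();
const client = new MongoClient("mongodb://localhost:27017");

app.get("/api/activities", async (_req, res) => {
  // The AR front-end polls this endpoint and renders the documents
  // as overlays anchored to Vuforia image targets.
  const docs = await client
    .db("pear")
    .collection("activities")
    .find({})
    .limit(50)
    .toArray();
  res.json(docs);
});

async function main() {
  await client.connect();
  app.listen(3000, () => console.log("AR data API listening on :3000"));
}
main().catch(console.error);
```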
92

A comparison of latency for MongoDB and PostgreSQL with a focus on analysis of source code

Lindvall, Josefin, Sturesson, Adam January 2021 (has links)
The purpose of this paper is to clarify the differences in latency between PostgreSQL and MongoDB that follow from their differences in software architecture. This was achieved by benchmarking Insert, Read and Update operations with the tool "Yahoo! Cloud Serving Benchmark", and by analysing the source code of both database management systems (DBMSs). The overall structure of the architecture was examined using Big O notation as a tool for assessing the complexity of the source code. The benchmarking results show that latency for Insert and Update operations was lower for MongoDB, while latency for Read was lower for PostgreSQL. The source code analysis shows that both DBMSs have a complexity of O(n), but that multiple differences in their software architecture affect latency. The most important difference was the length of the parsing process, which was longer for PostgreSQL. The conclusion is that there are significant differences in both latency and source code, and that room exists for further research in the field. The biggest limitation of the experiment consists of factors such as background processes, which affected latency and could not be eliminated, resulting in low validity.
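YCSB itself is a Java tool; the TypeScript sketch below only mirrors the shape of its core loop, timing each Insert, Read, and Update individually against an abstract client, to make the benchmarking method concrete. The interface and field names are illustrative, not YCSB's actual code.

```typescript
// Conceptual sketch of a YCSB-style latency benchmark: the real tool is
// written in Java; this only illustrates the measure-per-operation idea.
interface DbClient {
  insert(key: string, value: Record<string, string>): Promise<void>;
  read(key: string): Promise<Record<string, string> | null>;
  update(key: string, value: Record<string, string>): Promise<void>;
}

async function timeOp(op: () => Promise<unknown>): Promise<number> {
  const start = performance.now();
  await op();
  return performance.now() - start; // latency in milliseconds
}

async function runWorkload(db: DbClient, ops: number): Promise<void> {
  const latencies: Record<"insert" | "read" | "update", number[]> = {
    insert: [],
    read: [],
    update: [],
  };

  for (let i = 0; i < ops; i++) {
    const key = `user${i}`;
    latencies.insert.push(await timeOp(() => db.insert(key, { field0: "x" })));
    latencies.read.push(await timeOp(() => db.read(key)));
    latencies.update.push(await timeOp(() => db.update(key, { field0: "y" })));
  }

  for (const [op, samples] of Object.entries(latencies)) {
    const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
    console.log(`${op}: avg ${avg.toFixed(3)} ms over ${samples.length} ops`);
  }
}
```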
93

Vývoj webové aplikace k testování znalostí uchazečů / Development of Web Application for Testing Candidate's Knowledge

Janík, Martin January 2018 (has links)
This master's thesis focuses on the analysis, design and development of an information system module for the company XYZ s. r. o. The design of the module's architecture, the identification of application objects and the development of the module, including its economic evaluation, are all based on requirements acquired from the company. The module is developed in a modern version of JavaScript and uses technologies such as React, Redux, REST, Node.js and MongoDB.
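For readers unfamiliar with the stack, a minimal Redux-style reducer of the kind such a testing module might use on the client is sketched below; the state shape and action names are hypothetical, not taken from the thesis.

```typescript
// Hypothetical Redux-style slice of the testing module's client state;
// action and field names are illustrative.
interface TestState {
  answers: Record<string, string>; // questionId -> selected answer
  submitted: boolean;
}

type TestAction =
  | { type: "ANSWER_SELECTED"; questionId: string; answer: string }
  | { type: "TEST_SUBMITTED" };

const initialState: TestState = { answers: {}, submitted: false };

function testReducer(state = initialState, action: TestAction): TestState {
  switch (action.type) {
    case "ANSWER_SELECTED":
      return {
        ...state,
        answers: { ...state.answers, [action.questionId]: action.answer },
      };
    case "TEST_SUBMITTED":
      return { ...state, submitted: true };
    default:
      return state;
  }
}
```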
94

Informační systém pro správu vizualizací geografických dat / Information System for Management of Geographical Data Visualizations

Grossmann, Jan January 2021 (has links)
The goal of this thesis is to create an information system for the visualization of geographical data. The main idea is to allow users to create visualizations from their own geographical data, which they can either import from files or supply by directly attaching their own database system as a data source, making use of the data in real time. The result is a new web information system that acts as a point of contact between users, geographical data, and visualizations.
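One way the "attach your own database" idea could be realized with MongoDB is sketched below, using an initial bulk read plus a change stream for real-time updates. All names are hypothetical, and change streams require a replica-set deployment; this is not the thesis's actual design.

```typescript
// Hypothetical sketch: open a user-supplied MongoDB connection and
// forward both existing documents and live changes to the
// visualization layer.
import { MongoClient } from "mongodb";

async function attachUserSource(
  connectionString: string,
  dbName: string,
  collectionName: string,
  onUpdate: (doc: unknown) => void,
): Promise<void> {
  const client = new MongoClient(connectionString);
  await client.connect();
  const collection = client.db(dbName).collection(collectionName);

  // Initial load: hand every existing document to the visualization.
  for await (const doc of collection.find({})) {
    onUpdate(doc);
  }

  // Real-time updates: re-render whenever the user's data changes.
  // Note: watch() requires the server to run as a replica set.
  const stream = collection.watch();
  stream.on("change", (change) => {
    if ("fullDocument" in change) onUpdate(change.fullDocument);
  });
}
```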
95

MongoDB jako datové úložiště pro Google App Engine SDK / MongoDB as a Datastore for Google App Engine SDK

Heller, Stanislav January 2013 (has links)
This thesis discusses use-cases of the NoSQL database MongoDB implemented as a datastore for user data that is normally stored by the Datastore stubs in the Google App Engine SDK. The existing stubs are not well optimized for higher load; they significantly slow down application development and testing when larger data sets need to be stored. The analysis focuses on the features of MongoDB, the Google App Engine NoSQL Datastore, and the SDK's interfaces for data manipulation, the Datastore Service Stub API. As a result, a new datastore stub was designed and implemented to solve the problems of the existing stubs. The new stub uses MongoDB as the database layer for storing test data and is fully integrated into the Google App Engine SDK.
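The SDK and its stubs are Python code; the conceptual sketch below only illustrates, in TypeScript, the idea of translating datastore put/get calls into MongoDB operations keyed by the encoded entity key. The types and class are invented for illustration.

```typescript
// Conceptual sketch only: the real stub is Python code against the
// Datastore Service Stub API. This illustrates mapping entity put/get
// onto MongoDB, using the entity kind as the collection name.
import { Collection, MongoClient } from "mongodb";

interface Entity {
  key: string;                         // encoded datastore key
  kind: string;                        // entity kind
  properties: Record<string, unknown>;
}

interface StoredEntity {
  _id: string;
  [prop: string]: unknown;
}

class MongoDatastoreStub {
  constructor(private client: MongoClient, private dbName: string) {}

  private coll(kind: string): Collection<StoredEntity> {
    return this.client.db(this.dbName).collection<StoredEntity>(kind);
  }

  // datastore put -> upsert keyed by the encoded entity key
  async put(entity: Entity): Promise<void> {
    await this.coll(entity.kind).replaceOne(
      { _id: entity.key },
      { ...entity.properties },
      { upsert: true },
    );
  }

  // datastore get -> lookup by key, null when the entity does not exist
  async get(kind: string, key: string): Promise<StoredEntity | null> {
    return this.coll(kind).findOne({ _id: key });
  }
}
```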
96

BigData řešení pro zpracování rozsáhlých dat ze síťových toků / BigData Approach to Management of Large Netflow Datasets

Melkes, Miloslav January 2014 (has links)
This master's thesis focuses on distributed processing of big data from network communication. It begins by exploring network communication based on the TCP/IP model, with a focus on the data units at each layer that must be processed during analysis. For the actual processing of big data, the thesis describes the MapReduce programming model, the architecture of Apache Hadoop, and its use for processing network flows on a computer cluster. The second part deals with the design and implementation of an application for processing network flows from network communication, discussing the main and most problematic parts of the implementation. The thesis concludes with a comparison with available applications for network analysis and an evaluation with a set of tests that confirmed linear growth of the acceleration.
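A single-process sketch of the MapReduce pattern applied to flow records is given below; Hadoop would distribute the same map and reduce functions across a cluster. The record fields are illustrative, not taken from the thesis.

```typescript
// Conceptual map/reduce pair for aggregating NetFlow-style records,
// mirroring what a distributed Hadoop job would compute.
interface FlowRecord {
  srcAddr: string;
  dstAddr: string;
  bytes: number;
}

// map: emit (source address, byte count) for every flow
function map(record: FlowRecord): [string, number] {
  return [record.srcAddr, record.bytes];
}

// reduce: sum the byte counts emitted for one key
function reduce(key: string, values: number[]): [string, number] {
  return [key, values.reduce((a, b) => a + b, 0)];
}

// Local, single-process simulation of the shuffle phase.
function runJob(records: FlowRecord[]): Map<string, number> {
  const groups = new Map<string, number[]>();
  for (const r of records) {
    const [key, value] = map(r);
    const list = groups.get(key) ?? [];
    list.push(value);
    groups.set(key, list);
  }
  const result = new Map<string, number>();
  for (const [key, values] of groups) {
    const [k, total] = reduce(key, values);
    result.set(k, total);
  }
  return result;
}
```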
97

Smart Clustering System for Filtering and Cleaning User Generated Content : Creating a profanity filter for Truecaller / System för filtrering och sanering av oönskad text i användarskapat innehåll

Moradi, Arvin January 2013 (has links)
This thesis focuses on investigating and creating an application for filtering user-generated content. The method was to examine how profanity and racist expressions are used and manipulated to evade the filtering processes of similar systems. Focus was also placed on studying different algorithms to make this process quick and efficient, i.e., able to process as many names as possible in the shortest amount of time, since the client needs to filter millions of new uploads every day. The results show that the application detects both plain and manipulated profanity. Data from the customer's database was also used for testing, and the results showed that the application works in practice. The performance test shows that the application has a fast execution time; this could be seen by approximating it with a linear function of the number of names entered. The conclusion was that the filter works and discovers profanity not detected earlier. Future updates to strengthen the decision process could introduce a third-party service, or a web interface where decisions can be controlled manually. Execution time is good and shows that 10 million names can be processed in about 6 hours. In the future, queries to the database could be parallelized so that multiple names can be processed simultaneously.
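A minimal sketch of one common countermeasure against manipulated profanity is shown below: normalizing look-alike characters and repeated letters before a blocklist lookup. The substitution table and blocklist are illustrative and do not reproduce the thesis's actual rules.

```typescript
// Normalize leetspeak substitutions, repeated characters, and
// separators, then check the cleaned name against a blocklist.
const substitutions: Record<string, string> = {
  "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s",
};

function normalize(name: string): string {
  return name
    .toLowerCase()
    .split("")
    .map((ch) => substitutions[ch] ?? ch)
    .join("")
    .replace(/(.)\1+/g, "$1")  // collapse repeated characters
    .replace(/[^a-z]/g, "");   // drop separators such as dots and dashes
}

function isBlocked(name: string, blocklist: Set<string>): boolean {
  const cleaned = normalize(name);
  // Substring match catches profanity embedded inside longer names.
  for (const word of blocklist) {
    if (cleaned.includes(word)) return true;
  }
  return false;
}

// Example: "b4.d-w0rd" normalizes to "badword" and is caught.
console.log(isBlocked("b4.d-w0rd", new Set(["badword"])));
```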
98

Data storage for a small lumber processing company in Sweden

Bäcklund, Simon, Ljungdahl, Albin January 2021 (has links)
The world is becoming increasingly digitized, and with this trend comes an increasing need for storing data for companies of all sizes. For smaller enterprises, this could prove to be a major challenge due to limitations in knowledge and financial assets. The purpose of this study is therefore to investigate how smaller companies can satisfy their needs for data storage, and which database management system to use, so as not to let their shortcomings hold their development and growth back. To fulfill this purpose, a small wood processing company in Sweden is examined and used as an example. To investigate and answer the problem, literary research is conducted to gain knowledge about data storage and the different options that exist. Microsoft Access, MySQL, and MongoDB are selected for evaluation, and their performance is compared in controlled experiments. The results of this study indicate that, due to the small amount of data that the example company possesses, the simplicity of Microsoft Access trumps the high performance of its competitors. However, with increasingly developed internet infrastructure, hosting a database in the cloud has become feasible. If hosting the database in the cloud is the desired solution, Microsoft Access has a higher operating cost than the other alternatives, making MySQL come out on top.
99

MySQL och MongoDB operationer med Node.js som ramverk / MySQL and MongoDB operations with Node.js used as framework

Zsambokrety Eliason, Adam January 2023 (has links)
Data is used in large quantities in today's society. Healthcare is no exception, and in that sector analyses of data can save lives or lead to new medicines. To make use of data in the modern era, a database system must be used. Choosing the right database system can be difficult and requires knowing why one is chosen over another. This study compares two databases to generate data on whether operation times differ between a relational database and a document-based database. A technical experiment was chosen in which MySQL serves as the relational database while MongoDB serves as the document-based database. Node.js is used as the framework for the applications in which the testing takes place. The data is taken from the U.S. Department of Health & Human Services (2023) and represents healthcare data from COVID. The operation types INSERT and SELECT are the two examined in this study. The results confirm both of the stated hypotheses: MongoDB was more efficient in both tests and produced lower operation times for INSERT and SELECT.
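A harness of the kind the study describes could look like the sketch below, which times one INSERT and one SELECT against each system using the mysql2 and mongodb drivers; connection details, table, collection, and field names are hypothetical.

```typescript
// Hypothetical INSERT/SELECT timing harness over MySQL and MongoDB.
import mysql from "mysql2/promise";
import { MongoClient } from "mongodb";

async function timeMs(op: () => Promise<unknown>): Promise<number> {
  const start = performance.now();
  await op();
  return performance.now() - start;
}

async function main() {
  const sql = await mysql.createConnection({
    host: "localhost", user: "root", database: "covid",
  });
  const mongo = new MongoClient("mongodb://localhost:27017");
  await mongo.connect();
  const cases = mongo.db("covid").collection("cases");

  const row = { state: "NY", date: "2023-01-01", patients: 120 };

  const mysqlInsert = await timeMs(() =>
    sql.execute(
      "INSERT INTO cases (state, date, patients) VALUES (?, ?, ?)",
      [row.state, row.date, row.patients],
    ),
  );
  const mongoInsert = await timeMs(() => cases.insertOne({ ...row }));

  const mysqlSelect = await timeMs(() =>
    sql.execute("SELECT * FROM cases WHERE state = ?", [row.state]),
  );
  const mongoSelect = await timeMs(() =>
    cases.find({ state: row.state }).toArray(),
  );

  console.log({ mysqlInsert, mongoInsert, mysqlSelect, mongoSelect });

  await sql.end();
  await mongo.close();
}
main().catch(console.error);
```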
100

Jämförande analys av frågor för enskilda och flera geometrityper för hämtning av geospatiala data i MySQL och MongoDB : Bedömning av frågeprestanda för platsbaserad information i MySQL och MongoDB / Comparative analysis of single and multiple geometric type queries for geospatial data retrieval in MySQL and MongoDB : Assessing fetch query performance for location-based information in MySQL and MongoDB

Larsson, William January 2023 (has links)
The use of databases for managing spatial data is widespread due to the efficiency of traditional SQL databases like Azure SQL. However, the exponential growth of data from sources like social media has driven the popularity of NoSQL databases such as MongoDB, which handle large volumes of data effectively. NoSQL databases, including MongoDB, have built-in support for geospatial queries, making them suitable for managing geospatial data. Geospatial data combines geometric and geographic information and is represented by spatial datatypes like Point, LineString, and Polygon. MySQL and MongoDB both support geospatial data, but few studies compare their performance on geospatial queries. An experiment was conducted to compare the fetch speed of geospatial data in the two databases. The results were analysed using graphs and related studies, and the conclusion was that MongoDB performed fetch requests more slowly than MySQL. Future studies could use more data points and different queries.
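The MongoDB side of such a fetch query might look like the sketch below: a 2dsphere index plus a $near filter over GeoJSON points. Names are hypothetical, and the MySQL counterpart would use spatial functions such as ST_Distance_Sphere.

```typescript
// Hypothetical geospatial fetch: find places within a radius of a point.
import { MongoClient } from "mongodb";

async function nearbyPlaces(lng: number, lat: number, meters: number) {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const places = client.db("geo").collection("places");

  // A 2dsphere index is required for $near on GeoJSON fields.
  await places.createIndex({ location: "2dsphere" });

  const results = await places
    .find({
      location: {
        $near: {
          $geometry: { type: "Point", coordinates: [lng, lat] },
          $maxDistance: meters,
        },
      },
    })
    .toArray();

  await client.close();
  return results;
}

nearbyPlaces(18.07, 59.33, 1000).then((r) => console.log(r.length));
```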
