41

Läsa och lagra data i JSON format för smart sensor : En jämförelse i svarstid mellan hybriddatabassystemet PostgreSQL och MongoDB / Read and store data in JSON format for smart sensor : A comparison in response time between the hybrid database PostgreSQL and MongoDB

Edman, Fredrik January 2018 (has links)
Social media generates large amounts of data that is stored in NoSQL database systems, but it is not the only source of such data; smart sensors that register electricity consumption are another. MongoDB is a NoSQL database system that stores its data in the JSONB data format. PostgreSQL, an SQL database system, has in its later releases also begun to handle JSONB. This makes PostgreSQL a kind of hybrid, since it handles operations for both SQL and NoSQL. In this study, an experiment was conducted to see how these database systems handle reading and writing JSON data for smart sensors. Response times were recorded in an attempt to answer the hypothesis of whether PostgreSQL can be suitable for reading and writing JSON data generated by a smart sensor. The experiment showed that PostgreSQL's response time does not increase markedly as the amount of data grows for inserts, whereas MongoDB's does. The answer to the hypothesis is that PostgreSQL may well be suitable for JSON data, but this is difficult to answer conclusively and further research is needed.
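As a hedged illustration of the comparison in this abstract (the table name, column names and sensor payload below are invented for this sketch, not taken from the thesis), a PostgreSQL JSONB table and the corresponding insert/read statements might look like this. The Python code only builds the payload and the SQL text; it does not connect to a server:

```python
import json

# Hypothetical smart-sensor reading (field names are assumptions for this sketch).
reading = {"sensor_id": "meter-7", "timestamp": "2018-03-01T12:00:00Z", "kwh": 0.42}
payload = json.dumps(reading)

# PostgreSQL side: a JSONB column accepts the serialized document directly.
create_sql = "CREATE TABLE readings (id serial PRIMARY KEY, doc jsonb);"
insert_sql = "INSERT INTO readings (doc) VALUES (%s);"  # parameter: payload
read_sql = "SELECT doc->>'kwh' FROM readings WHERE doc->>'sensor_id' = %s;"

# MongoDB side (pymongo): the dict would be inserted directly, with no
# explicit serialization step, e.g.:
#   collection.insert_one(reading)
#   collection.find_one({"sensor_id": "meter-7"})

print(payload)
```

The `->>` operator extracts a JSONB field as text, which is what a read query against sensor documents would typically use.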
42

Banco de dados geográfico da arborização de ruas utilizando softwares livres: o caso da cidade de Manaus, Amazonas

Santos, Jairo Rodrigues dos 17 December 2010 (has links)
Currently, several Brazilian municipal governments face problems with the condition of their cadastral systems for street trees, especially in small and medium-sized municipalities, where these systems are commonly found to be deficient, disorganized and outdated. This work aims to develop a cadastral model for trees and to analyze and query a street-tree database using four non-proprietary geotechnology software packages, for the management of, and decision-making on, street trees in Manaus, using the Centro district as a case study. In the data acquired at the UFAM Landscape Study Laboratory, 15 species were identified in a total of 902 trees, the most frequent being Oiti, Ficus, Flamboian, Castanhola and Manga. It can thus be concluded that free software is able to meet the demands of street-tree management.
43

Experimental Database Export/Import for InPUT

Karlsson, Stefan January 2013 (has links)
The Intelligent Parameter Utilization Tool (InPUT) is a format and API for the cross-language description of experiments, which makes it possible to define experiments and their contexts at an abstract level in the form of XML- and archive-based descriptors. By using experimental descriptors, programs can be reconfigured without having to be recoded and recompiled, and the experimental results of third parties can be reproduced independently of the programming language and algorithm implementation. Previously, InPUT has supported the export and import of experimental descriptors to/from XML documents, archive files and LaTeX tables. The overall aim of this project was to develop an SQL database design that allows for the export, import, querying, updating and deletion of experimental descriptors, to implement the design as an extension of the Java implementation of InPUT (InPUTj), and to verify the general applicability of the created implementation by modeling real-world use cases. The use cases covered everything from simple database transactions involving simple descriptors to complex database transactions involving complex descriptors. In addition, it was investigated whether queries and updates of descriptors are executed more rapidly if the descriptors are stored in databases in accordance with the created SQL schema and the queries and updates are handled by the DBMS PostgreSQL, or if the descriptors are stored directly in files and the queries and updates are handled by the default XML-processing engine of InPUTj (JDOM). The results of the test case indicate that the former usually allows for a faster execution of queries while the latter usually allows for a faster execution of updates. Using database-stored descriptors instead of file-based descriptors offers many advantages, such as making it significantly easier and less costly to manage, analyze and exchange large amounts of experimental data. However, database-stored descriptors complement file-based descriptors rather than replace them. The goals of the project were achieved, and the different types of database transactions involving descriptors can now be handled via a simple API provided by a Java facade class.
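The file-versus-database trade-off described in this abstract can be sketched with a minimal, hypothetical setup. SQLite stands in for PostgreSQL here, and the one-table schema and descriptor structure are invented for this sketch, not InPUT's actual design: a descriptor decomposed into database rows can be queried with SQL, whereas file-based access re-parses the XML on every lookup:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A toy XML descriptor (structure invented for this sketch).
descriptor = '<descriptor id="exp1"><param name="popSize" value="100"/></descriptor>'

# File/JDOM-style access: parse the XML document on every query.
def query_file(xml_text, param):
    root = ET.fromstring(xml_text)
    for p in root.iter("param"):
        if p.get("name") == param:
            return p.get("value")

# Database-style access: parameters are decomposed into rows once,
# then individual values are fetched with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE params (descriptor TEXT, name TEXT, value TEXT)")
root = ET.fromstring(descriptor)
for p in root.iter("param"):
    conn.execute("INSERT INTO params VALUES (?, ?, ?)",
                 (root.get("id"), p.get("name"), p.get("value")))

row = conn.execute(
    "SELECT value FROM params WHERE descriptor = 'exp1' AND name = 'popSize'"
).fetchone()
print(query_file(descriptor, "popSize"), row[0])
```

Updating a single value shows the flip side: the file version rewrites one attribute in memory, while the database version must issue an UPDATE and keep the decomposed rows consistent with the descriptor, which matches the abstract's finding that updates tend to favor the file-based path.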
44

Návrh vývoje webových aplikací s automatickým vytvářením databázového schématu / Web Application Development Scheme with Automatic Database Schema Creation

Prochocká, Kristína January 2015 (has links)
This thesis designs and implements an intermediary layer on both the backend and the frontend part of a web application, together with a user interface used to show analysis outputs and to manage the data. This layer was designed based on analysis of several existing solutions, in various languages and environments, and implemented using MongoDB and PostgreSQL databases, written in PHP on the server side and in JavaScript on the client side. It offers an immediately usable flexible persistence layer directly in the frontend, with analysis, feedback and an option to convert collections to PostgreSQL while keeping the same API, enabling a fluent transition.
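The "same API across backends" idea in this abstract can be sketched as follows. The class and method names are invented for illustration, and Python with in-memory stand-ins is used here, whereas the thesis's actual layer is written in PHP and JavaScript against MongoDB and PostgreSQL:

```python
class MemoryBackend:
    """Stand-in for a schemaless store such as MongoDB."""
    def __init__(self):
        self.docs = {}
    def save(self, key, doc):
        self.docs[key] = doc
    def load(self, key):
        return self.docs.get(key)

class SqlBackend(MemoryBackend):
    """Stand-in for a converted PostgreSQL-backed collection; in a real
    system this would issue SQL, but the API stays identical."""
    pass

class Collection:
    """Facade: callers never see which backend is active."""
    def __init__(self, backend):
        self.backend = backend
    def save(self, key, doc):
        self.backend.save(key, doc)
    def load(self, key):
        return self.backend.load(key)
    def convert_to(self, new_backend):
        # Migrate the data, then keep serving the same Collection API.
        for k, d in self.backend.docs.items():
            new_backend.save(k, d)
        self.backend = new_backend

c = Collection(MemoryBackend())
c.save("u1", {"name": "Ada"})
c.convert_to(SqlBackend())
print(c.load("u1"))  # the same call works after conversion
```

Because callers only ever touch the facade, a collection can be converted to the relational backend without any change to application code, which is the fluent transition the abstract describes.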
45

Webová aplikace HelpDesk a synchronizace dat / Web application HelpDesk and data synchronization

Balogh, Pavel January 2012 (has links)
The thesis deals with the development of a web-based HelpDesk application that supports communication with Internet users. The first section of the thesis contains a brief description of the individual means and tools used by the developer. The practical part describes the design and development of the individual layers of the web application concerned. The latter section also covers the development and usage of the required synchronization and code-generation tools.
46

Vyhledávání fotografií podle obsahu / Content Based Photo Search

Dvořák, Pavel January 2014 (has links)
This thesis covers the design and practical realization of a tool for quick search, based on image similarity, in large image databases containing from tens to hundreds of thousands of photos. The proposed technique uses various methods of descriptor extraction, the creation of Bag of Words dictionaries, and methods of storing image data in a PostgreSQL database. Further, experiments with the implemented software were carried out to evaluate search-time efficiency and the scaling possibilities of the designed solution.
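The Bag of Words step mentioned in this abstract can be sketched as follows. The descriptors and dictionary below are toy 2-D values for illustration; real systems use high-dimensional descriptors (e.g. SIFT) and large learned vocabularies. Each image descriptor is assigned to its nearest visual word, and the image is then represented by a word histogram that can be compared cheaply:

```python
def nearest_word(desc, dictionary):
    # Index of the visual word closest to this descriptor (squared Euclidean distance).
    return min(range(len(dictionary)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(desc, dictionary[i])))

def bow_histogram(descriptors, dictionary):
    # Count how many of the image's descriptors fall on each visual word.
    hist = [0] * len(dictionary)
    for d in descriptors:
        hist[nearest_word(d, dictionary)] += 1
    return hist

# Toy 2-D "descriptors" and a 3-word dictionary.
dictionary = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
image = [(0.1, 0.2), (0.9, 1.1), (4.8, 5.2), (5.1, 4.9)]
print(bow_histogram(image, dictionary))  # → [1, 1, 2]
```

Such fixed-length histograms are what can be stored in a database column and compared at query time, instead of the raw variable-length descriptor sets.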
47

Víceuživatelská mapová aplikace pro mobilní zařízení / Multiuser Mapping Application for Mobile Device

Utěkal, Jan January 2012 (has links)
This thesis presents the design and implementation of a multiuser mapping application for mobile devices. The aim is to create a coordination system intended primarily for controlling mobile units of the Police of the Czech Republic. First, existing solutions with a similar focus are described. Subsequently, the most suitable communication, mapping and database technologies are selected. The next part of this work describes the system design and its implementation. The effectiveness of the system is verified in a series of tests that address power and data consumption. The thesis concludes with possible extensions of the program and directions for future development.
48

Förbättring av webbportal för examensarbetsförslag / Improving Web Portal for Degree Project Proposals

Risendal, Kristoffer January 2012 (has links)
In a previous degree project at the School of Information and Communication Technology at KTH, a web portal called exjobbspoolen was created with the aim of making it possible for companies to advertise degree projects that students at the School of ICT could apply for. But in order to start using this web portal, some requested features had to be developed and some existing ones improved. For a more consistent feel with the rest of the KTH websites, and so that the page content could be presented more effectively, the layout of the web portal also had to be redone. This report discusses how the system has been developed further and why the selected methods have been used. The project was driven by the Feature-Driven Development method and is built with HTML5, PHP, JavaScript and jQuery, with a PostgreSQL database. The result of the project is a web portal that gives companies and institutions the ability to submit degree projects with the desired formatting. A project can then be searched for and booked by students who have been identified via the KTH login service.
49

Evaluating the effect of cardinality estimates on two state-of-the-art query optimizer's selection of access method / En utvärdering av kardinalitetsuppskattningens påverkan på två state-of-the-art query optimizers val av metod för att hämta data

Barksten, Martin January 2016 (has links)
This master's thesis concerns relational databases and their query optimizers' sensitivity to cardinality estimates, and the effect the quality of the estimate has on the number of different access methods used for the same relation. Two databases, PostgreSQL and MariaDB, are evaluated on a real-world dataset to provide realistic results. The evaluation was done via a tool implemented in Clojure, and tests were conducted on a query and subsets of it with varying sample sizes used when estimating cardinality. The results indicate that MariaDB's query optimizer is less sensitive to cardinality estimates and for all tests selects the same access methods, regardless of the quality of the cardinality estimate. This stands in contrast to PostgreSQL's query optimizer, which will vary between using an index or doing a full table scan depending on the estimated cardinality. Finally, it is also found that the predicate value used in the query affects the access method used. Both PostgreSQL and MariaDB are found to be sensitive to this property, with MariaDB having the largest number of different access methods used depending on predicate value.
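The index-versus-full-scan choice discussed in this abstract can be illustrated with a minimal, hypothetical example. SQLite is used here purely for illustration, since its planner output is easy to inspect; the thesis evaluates PostgreSQL and MariaDB, whose planners behave differently. With an index on the predicate column the planner reports an index search; without one it falls back to a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, i % 10) for i in range(1000)])

def plan(sql):
    # The detail column of EXPLAIN QUERY PLAN describes the chosen access method.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT * FROM t WHERE val = 3"
before = plan(query)  # no index yet: a full table scan
conn.execute("CREATE INDEX idx_val ON t(val)")
after = plan(query)   # index available: an index search
print(before)
print(after)
```

In PostgreSQL the analogous inspection is done with `EXPLAIN`, and there the choice additionally depends on the estimated row count for the predicate, which is exactly the sensitivity the thesis measures.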
50

Performance comparison between PostgreSQL, MongoDB, ArangoDB and HBase / Prestandajämförelse mellan PostgreSQL, MongoDB, ArangoDB och HBase

Dalström, Isak, Ericsson, Philip January 2022 (has links)
There is a large amount of data that needs to be stored today. Handling so much data efficiently is important, as minor performance differences can have significant effects on large systems. Knowing how a certain database management system performs is important for companies and organizations when deciding which database management system to use. There is currently a gap in the research regarding performance differences between different database management systems. We conducted a study that compares the average query response time of PostgreSQL, MongoDB, ArangoDB and HBase. We also compared the performance of using a single thread with that of using multiple threads. We compared how they perform with dataset sizes and operation counts of 10 000, 100 000 and 1 000 000, using insert, update and read queries. The results show that PostgreSQL has the lowest average query response time for read queries and that MongoDB has the lowest average query response time for insert and update queries. The results also showed a significant performance gain from using multiple threads instead of a single thread.
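A benchmark of the kind described in this abstract can be sketched as follows. The workload here is a toy in-memory stand-in, not a real database; the sketch only shows how average query response time might be collected for single- and multi-threaded runs:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

store = {i: i * 2 for i in range(10_000)}  # stand-in for a database table

def timed_read(key):
    # Time one "query" against the store and return its duration in seconds.
    start = time.perf_counter()
    _ = store[key]
    return time.perf_counter() - start

keys = list(range(1000))

# Single-threaded run.
single = mean(timed_read(k) for k in keys)

# Multi-threaded run with 4 workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    multi = mean(pool.map(timed_read, keys))

print(f"avg single-thread: {single:.2e}s, avg 4 threads: {multi:.2e}s")
```

Against a real DBMS, `timed_read` would issue an actual insert, update or read query over a connection per worker; whether multiple threads help then depends on the server's concurrency handling, which is what the study measures.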
