41

Experimental Database Export/Import for InPUT

Karlsson, Stefan January 2013 (has links)
The Intelligent Parameter Utilization Tool (InPUT) is a format and API for the cross-language description of experiments, which makes it possible to define experiments and their contexts at an abstract level in the form of XML- and archive-based descriptors. By using experimental descriptors, programs can be reconfigured without having to be recoded and recompiled, and the experimental results of third parties can be reproduced independently of the programming language and algorithm implementation. Previously, InPUT has supported the export and import of experimental descriptors to/from XML documents, archive files and LaTeX tables. The overall aim of this project was to develop an SQL database design that allows for the export, import, querying, updating and deletion of experimental descriptors, to implement the design as an extension of the Java implementation of InPUT (InPUTj), and to verify the general applicability of the created implementation by modeling real-world use cases. The use cases covered everything from simple database transactions involving simple descriptors to complex database transactions involving complex descriptors. In addition, it was investigated whether queries and updates of descriptors are executed more rapidly if the descriptors are stored in databases in accordance with the created SQL schema and the queries and updates are handled by the DBMS PostgreSQL, or if the descriptors are stored directly in files and the queries and updates are handled by the default XML-processing engine of InPUTj (JDOM). The results of the test cases indicate that the former usually allows for a faster execution of queries while the latter usually allows for a faster execution of updates. Using database-stored descriptors instead of file-based descriptors offers many advantages, such as making it significantly easier and less costly to manage, analyze and exchange large amounts of experimental data. However, database-stored descriptors complement file-based descriptors rather than replace them. The goals of the project were achieved, and the different types of database transactions involving descriptors can now be handled via a simple API provided by a Java facade class.
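The abstract contrasts two ways of querying a stored descriptor. As a rough illustration only — not the actual InPUTj schema or facade API, which the abstract does not spell out — the following Java sketch reads one parameter value either from a PostgreSQL-stored descriptor (using an xml column and PostgreSQL's built-in xpath() function) or from a descriptor file via JDOM. The table name, column names and descriptor structure are invented for the example.

    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import org.jdom2.Document;
    import org.jdom2.filter.Filters;
    import org.jdom2.input.SAXBuilder;
    import org.jdom2.xpath.XPathFactory;

    public class DescriptorQuerySketch {

        // Variant 1: descriptor stored in a PostgreSQL 'xml' column
        // (table and column names are hypothetical).
        static String queryViaPostgres(Connection conn, String descriptorId) throws Exception {
            String sql = "SELECT (xpath('/experiment/param[@id=''populationSize'']/@value', content))[1]::text "
                       + "FROM descriptors WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, descriptorId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }

        // Variant 2: descriptor stored as an XML file, queried with JDOM/XPath.
        static String queryViaJdom(String path) throws Exception {
            Document doc = new SAXBuilder().build(Paths.get(path).toFile());
            return XPathFactory.instance()
                    .compile("/experiment/param[@id='populationSize']/@value", Filters.attribute())
                    .evaluateFirst(doc)
                    .getValue();
        }

        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/input", "input", "secret")) {
                System.out.println(queryViaPostgres(conn, "ga-experiment-1"));
            }
            System.out.println(queryViaJdom("ga-experiment-1.xml"));
        }
    }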
42

Návrh vývoje webových aplikací s automatickým vytvářením databázového schématu / Web Application Development Scheme with Automatic Database Schema Creation

Prochocká, Kristína January 2015 (has links)
This thesis designs and implements an intermediary layer on both the backend and the frontend of a web application, together with a user interface for showing analysis outputs and managing the data. The layer was designed after analyzing several existing solutions in various languages and environments, and was implemented on top of MongoDB and PostgreSQL, written in PHP on the server side and in JavaScript on the client side. It offers an immediately usable, flexible persistence layer directly in the frontend, with analysis, feedback, and an option to convert collections to PostgreSQL while keeping the same API, enabling a smooth transition.
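The thesis itself is written in PHP and JavaScript; as a language-neutral sketch of its core idea — one collection API whose backing store can be swapped without changing callers — here is a minimal Java version in which Mongo-style schemaless documents are kept in a PostgreSQL jsonb column after conversion. All names are invented for illustration and do not reflect the thesis's actual API.

    import java.sql.*;
    import java.util.Optional;

    // One document-collection interface; the backing store can change
    // from MongoDB to PostgreSQL without touching callers.
    interface DocumentCollection {
        void insert(String id, String json);
        Optional<String> find(String id);
    }

    // PostgreSQL-backed variant: documents kept in a jsonb column,
    // so schemaless data survives the migration.
    class PostgresCollection implements DocumentCollection {
        private final Connection conn;
        private final String table;

        PostgresCollection(Connection conn, String table) throws SQLException {
            this.conn = conn;
            this.table = table;
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS " + table
                         + " (id text PRIMARY KEY, doc jsonb NOT NULL)");
            }
        }

        public void insert(String id, String json) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO " + table + " (id, doc) VALUES (?, ?::jsonb) "
                  + "ON CONFLICT (id) DO UPDATE SET doc = EXCLUDED.doc")) {
                ps.setString(1, id);
                ps.setString(2, json);
                ps.executeUpdate();
            } catch (SQLException e) { throw new RuntimeException(e); }
        }

        public Optional<String> find(String id) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT doc FROM " + table + " WHERE id = ?")) {
                ps.setString(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? Optional.of(rs.getString(1)) : Optional.empty();
                }
            } catch (SQLException e) { throw new RuntimeException(e); }
        }
    }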
43

Webová aplikace HelpDesk a synchronizace dat / Web application HelpDesk and data synchronization

Balogh, Pavel January 2012 (has links)
The thesis deals with the development of a web-based HelpDesk application that supports communication with Internet users. The first section of the thesis contains a brief description of the individual tools used by the developer. The practical part describes the design and development of the individual layers of the web-based application. The latter section also covers the development and usage of the required synchronization and code-generation tools.
44

Vyhledávání fotografií podle obsahu / Content Based Photo Search

Dvořák, Pavel January 2014 (has links)
This thesis covers the design and practical realization of a tool for quick similarity-based search in large image databases containing tens to hundreds of thousands of photos. The proposed technique uses various methods of descriptor extraction, creation of Bag of Words dictionaries, and ways of storing image data in a PostgreSQL database. Further, experiments with the implemented software were carried out to evaluate the search-time efficiency and the scalability of the designed solution.
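A minimal sketch of the Bag-of-Words step mentioned above: each local image descriptor is assigned to its nearest "visual word" (a cluster centre from a precomputed dictionary), and the image is summarized as a histogram of word counts that can then be stored and compared in the database. The dictionary and descriptors here are toy arrays; a real pipeline would use e.g. SIFT/SURF descriptors and a k-means-trained dictionary, and the thesis's exact method may differ.

    import java.util.Arrays;

    public class BagOfWordsSketch {

        // Index of the dictionary word closest to the descriptor (squared
        // Euclidean distance; no need for the square root when comparing).
        static int nearestWord(float[] descriptor, float[][] dictionary) {
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int w = 0; w < dictionary.length; w++) {
                double d = 0;
                for (int i = 0; i < descriptor.length; i++) {
                    double diff = descriptor[i] - dictionary[w][i];
                    d += diff * diff;
                }
                if (d < bestDist) { bestDist = d; best = w; }
            }
            return best;
        }

        // Histogram of visual-word occurrences: the image's BoW signature.
        static int[] histogram(float[][] descriptors, float[][] dictionary) {
            int[] hist = new int[dictionary.length];
            for (float[] d : descriptors) hist[nearestWord(d, dictionary)]++;
            return hist;
        }

        public static void main(String[] args) {
            float[][] dictionary = {{0f, 0f}, {1f, 1f}, {0f, 1f}};   // 3 visual words
            float[][] descriptors = {{0.1f, 0.0f}, {0.9f, 1.1f}, {0.8f, 0.9f}};
            System.out.println(Arrays.toString(histogram(descriptors, dictionary)));
            // prints [1, 2, 0]
        }
    }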
45

Víceuživatelská mapová aplikace pro mobilní zařízení / Multiuser Mapping Application for Mobile Device

Utěkal, Jan January 2012 (has links)
This thesis presents the design and implementation of a multiuser mapping application for mobile devices. The aim is to create a coordination system intended primarily for controlling mobile units of the Police of the Czech Republic. First, existing solutions with a similar focus are described. Subsequently, the most suitable communication, mapping and database technologies are selected. The next part of the work describes the system design and its implementation. The effectiveness of the system is verified in a series of tests dealing with power and data consumption. The thesis concludes with possible extensions of the program and directions for future development.
46

Förbättring av webbportal för examensarbetsförslag / Improving Web Portal for Degree Project Proposals

Risendal, Kristoffer January 2012 (has links)
In a previous degree project at the School of Information and Communication Technology (ICT) at KTH, a web portal called exjobbspoolen was created with the aim of making it possible for companies to advertise degree projects that students at the School of ICT could apply for. But in order to start using this web portal, some requested features had to be developed and some existing ones improved. To achieve a more consistent feel with the rest of the KTH websites, and to present the page content more effectively, the layout also had to be redone. This report discusses how the system has been further developed and why the selected methods have been used. The work followed the Feature-Driven Development project method, and the portal is built in HTML5, PHP, JavaScript and jQuery on top of a PostgreSQL database. The result of the project is a web portal that gives companies and institutions the ability to submit degree project proposals with the desired formatting. A proposal can then be searched for and booked by students who have identified themselves via KTH's login service.
47

Evaluating the effect of cardinality estimates on two state-of-the-art query optimizers' selection of access method / En utvärdering av kardinalitetsuppskattningens påverkan på två state-of-the-art query optimizers val av metod för att hämta data

Barksten, Martin January 2016 (has links)
This master's thesis concerns relational databases and their query optimizers' sensitivity to cardinality estimates, and the effect the quality of the estimate has on the number of different access methods used for the same relation. Two databases were evaluated — PostgreSQL and MariaDB — on a real-world dataset to provide realistic results. The evaluation was done via a tool implemented in Clojure, and tests were conducted on a query, and subsets of it, with varying sample sizes used when estimating cardinality. The results indicate that MariaDB's query optimizer is less sensitive to cardinality estimates and for all tests selects the same access methods, regardless of the quality of the cardinality estimate. This stands in contrast to PostgreSQL's query optimizer, which will vary between using an index or doing a full table scan depending on the estimated cardinality. Finally, it is also found that the predicate value used in the query affects the access method used. Both PostgreSQL and MariaDB are sensitive to this property, with MariaDB having the largest number of different access methods used depending on predicate value.
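The PostgreSQL behaviour described above can be observed directly: the per-column statistics target controls how much data ANALYZE samples for its estimates, and EXPLAIN shows which access method the planner chose. The sketch below (the orders table and the predicate are invented for illustration, and this is not the thesis's Clojure tool) varies the statistics target, re-runs ANALYZE, and prints the resulting plans.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PlanSensitivitySketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/test", "test", "secret");
                 Statement st = conn.createStatement()) {

                // Vary how many rows ANALYZE samples for the column, then
                // see whether the planner picks an index scan or a seq scan.
                for (int target : new int[] {1, 100, 10_000}) {
                    st.execute("ALTER TABLE orders ALTER COLUMN status "
                             + "SET STATISTICS " + target);
                    st.execute("ANALYZE orders");
                    try (ResultSet rs = st.executeQuery(
                            "EXPLAIN SELECT * FROM orders WHERE status = 'shipped'")) {
                        System.out.println("statistics target = " + target);
                        while (rs.next()) System.out.println("  " + rs.getString(1));
                    }
                }
            }
        }
    }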
48

Performance comparison between PostgreSQL, MongoDB, ArangoDB and HBase / Prestandajämförelse mellan PostgreSQL, MongoDB, ArangoDB och HBase

Dalström, Isak, Ericsson, Philip January 2022 (has links)
There is a large amount of data that needs to be stored today, and handling it efficiently matters because minor performance differences can have significant effects on large systems. Knowing how a given database management system performs is important for companies and organizations when deciding which one to use, and there is currently a gap in the research regarding performance differences between different database management systems. We conducted a study that compares the average query response time of PostgreSQL, MongoDB, ArangoDB and HBase, and also compared the performance of using a single thread against using multiple threads. We compared how the systems perform with dataset sizes and operation counts of 10 000, 100 000 and 1 000 000, using insert, update and read queries. The results show that PostgreSQL has the lowest average query response time for read queries and that MongoDB has the lowest average query response time for insert and update queries. The results also showed a significant performance gain from using multiple threads instead of a single thread.
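A minimal sketch of the kind of measurement loop such a comparison needs: the operations are spread over a pool of worker threads and the mean per-query latency is computed from per-operation timings. The JDBC URL and the kv table are placeholders, and this is not the study's actual harness; a real benchmark would also randomize keys, warm up, and report percentiles rather than just the mean.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class LatencyBenchSketch {
        public static void main(String[] args) throws Exception {
            int threads = 8, opsPerThread = 12_500;          // 100 000 ops total
            AtomicLong totalNanos = new AtomicLong();
            ExecutorService pool = Executors.newFixedThreadPool(threads);

            for (int t = 0; t < threads; t++) {
                final int worker = t;
                pool.submit(() -> {
                    // One connection per worker, as concurrent clients would have.
                    try (Connection c = DriverManager.getConnection(
                            "jdbc:postgresql://localhost/bench", "bench", "secret");
                         PreparedStatement ps = c.prepareStatement(
                            "INSERT INTO kv (k, v) VALUES (?, ?)")) {
                        for (int i = 0; i < opsPerThread; i++) {
                            long start = System.nanoTime();
                            ps.setLong(1, worker * 1_000_000L + i);   // unique key
                            ps.setString(2, "payload-" + i);
                            ps.executeUpdate();
                            totalNanos.addAndGet(System.nanoTime() - start);
                        }
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            System.out.printf("mean insert latency: %.1f µs%n",
                    totalNanos.get() / 1e3 / (threads * opsPerThread));
        }
    }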
49

Backend-utveckling av tidsredovisningsapplikation för Devize : Migrering av data via API och rapportsammanställning / Backend development of a time-reporting application for Devize: data migration via API and report compilation

Gillström, Felicia January 2022 (has links)
This report summarizes the independent work carried out in the final course DT140G. The project's task and main goal was to help the company involved prepare for a potential break from Harvest, the time-registration service they currently use. The task was divided into three distinct parts with different orientations but the same end goal. The first part involved data management for the consumed time-registration service, in terms of both exporting and importing data. The second part was about developing CRUD functionality that can be consumed in the frontend by another developer. The last part consisted of creating a report-compilation application in which data from the previous parts is processed into various reports, which could then be exported as Excel files. The work resulted in an application functionally very similar to the previous time-registration service, taking the company a step closer to its vision of a break from Harvest. This was accomplished with access to source code from a previous developer, who shared his repository via GitLab, and with the React Admin framework. The CRUD functionality was verified with the test tool ARC, and all code was developed in Visual Studio Code.
50

A Forensic Examination of Database Slack

Joseph W. Balazs 23 July 2021 (has links)
This research includes an examination and analysis of the phenomenon of database slack. Database forensics is an underexplored subfield of Digital Forensics, and the lack of research is becoming more important with every breach and theft of data. A small amount of research exists in the literature regarding database slack. This exploratory work examined what partial records of forensic significance can be found in database slack. A series of experiments performed update and delete transactions upon data in a PostgreSQL database, which created database slack. Patterns of hexadecimal indicators for database slack in the file system were found and analyzed. Despite limitations in the experiments, the results indicated that partial records of forensic significance are found in database slack. Significantly, partial records found in database slack may aid a forensic investigation of a database breach. The details of the hexadecimal patterns of the database slack fill in gaps in the literature, the impact of log findings on an investigation was shown, and complexity aspects back up existing parts of database forensics research. This research helped to lessen the dearth of work in the area of database forensics as well as database slack.
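To make the idea of slack concrete: in PostgreSQL, deleted and updated row versions linger inside the 8 kB heap pages of a table's data file until vacuum reclaims the space, so the raw page bytes can still hold readable fragments. Below is a minimal sketch — not the thesis's methodology, and with a hypothetical file path — that walks a heap file page by page and hex-dumps regions containing mostly printable text, roughly the kind of pattern inspection the experiments describe.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SlackScanSketch {
        static final int PAGE_SIZE = 8192;   // PostgreSQL default block size

        public static void main(String[] args) throws IOException {
            // Path to a table's heap file, e.g. $PGDATA/base/<dboid>/<relfilenode>
            byte[] file = Files.readAllBytes(Paths.get(args[0]));

            for (int page = 0; page * PAGE_SIZE < file.length; page++) {
                for (int off = 0; off < PAGE_SIZE; off += 16) {
                    int base = page * PAGE_SIZE + off;
                    int printable = 0;
                    StringBuilder hex = new StringBuilder(), txt = new StringBuilder();
                    for (int i = 0; i < 16 && base + i < file.length; i++) {
                        int b = file[base + i] & 0xFF;
                        hex.append(String.format("%02x ", b));
                        boolean p = b >= 0x20 && b < 0x7F;
                        if (p) printable++;
                        txt.append(p ? (char) b : '.');
                    }
                    // Only show lines that look like leftover record text.
                    if (printable >= 12) {
                        System.out.printf("page %d +%04x  %-48s %s%n", page, off, hex, txt);
                    }
                }
            }
        }
    }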
