51

Distribuční systém pro elektronické obchody / Distribution System for e-Shops

Gavenda, Martin January 2008 (has links)
The aim of my thesis is to create a model of an information system that supports internet stores. Together with creating this model, I will also analyse the possible use of new technologies such as AJAX and communication via XML. The final work will consist of two applications: the first a catalogue of products (goods) and the second an internet store. The main application will provide services for this store.
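As an illustration of the XML-based communication the abstract mentions, here is a minimal Python sketch of a product message that a catalogue application could send to an e-shop. The element names, attributes, and currency are hypothetical, chosen only to make the idea concrete; the thesis's actual message format is not described.

```python
# Build and parse a hypothetical XML product message; names are illustrative.
import xml.etree.ElementTree as ET

def product_to_xml(product_id: str, name: str, price: float) -> str:
    root = ET.Element("product", id=product_id)
    ET.SubElement(root, "name").text = name
    ET.SubElement(root, "price", currency="CZK").text = str(price)
    return ET.tostring(root, encoding="unicode")

def xml_to_product(payload: str) -> dict:
    root = ET.fromstring(payload)
    return {
        "id": root.get("id"),
        "name": root.findtext("name"),
        "price": float(root.findtext("price")),
    }

message = product_to_xml("42", "USB cable", 99.0)
print(message)                  # what the catalogue service would send
print(xml_to_product(message))  # what the e-shop would reconstruct
```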
52

Extending the Kubernetes operator Kubegres to handle database restoration from dump files

Bemm, Rickard January 2023 (has links)
The use of cloud-native technologies has grown in popularity in recent years. With its ability to take full advantage of cloud computing, cloud-native architecture has become a hot topic among developers and IT professionals. It refers to building and running applications using cloud services and architectures, including containerization, microservices, and automation tools such as Kubernetes, to enable fast and continuous delivery of software applications. In Kubernetes, the desired state of a resource is described declaratively, and Kubernetes then handles the details of how to get there. Databases are notoriously hard to deploy in such environments, and the Kubernetes operator pattern extends Kubernetes with custom resources and with the logic, called the reconcile function, that brings them to their desired state. Operators exist that manage PostgreSQL databases with backup and restore functionality, but some require a license. Kubegres is a free-to-use open-source operator, but it lacks restore functionality. This thesis aims to extend the Kubegres operator to support database restoration from dump files. It covers how to create the restore process in Kubernetes, what modifications must be made to the current architecture, and how to make the reconcile function robust and self-healing yet customizable enough to fit many different needs. The designs of other operators that already support database restoration were studied, and they inspired the design of the resource definition and the restoration process. A new resource definition was added to describe the desired state of the database restoration, together with a new reconcile function that defines how to act on it. The desired state is re-created each time the reconcile function is triggered. During restoration a new database is always the target; once the restoration completes, the resources used to perform it are deleted and only the PostgreSQL database is left. The performance impact of the modified operator relative to the original was measured to evaluate it. The tests consisted of operations both versions of the operator support, including PostgreSQL database creation, cluster scaling, and changing resource limits. The two collected metrics, CPU and memory usage, increased by 0.058-0.4 mvCPU (12-33%) and 8.2 MB (29%), respectively. A qualitative evaluation of the operator against qualities such as robustness, self-healing, customizability, and correctness showed that the design fulfils most of them.
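To make the reconcile idea concrete, here is a minimal Python sketch of a declarative reconcile loop for a restore resource. Kubegres itself is written in Go against the Kubernetes API; the types, fields, and steps below are hypothetical simplifications of the pattern the abstract describes, not the thesis's actual code.

```python
# A toy reconcile function: each call idempotently re-derives what is missing
# from the declared spec and fixes one step, until observed state is stable.
from dataclasses import dataclass

@dataclass
class RestoreSpec:            # desired state, as declared in a custom resource
    dump_file: str
    target_cluster: str

@dataclass
class ClusterState:           # observed state of the world
    exists: bool = False
    restored: bool = False

def reconcile(spec: RestoreSpec, state: ClusterState) -> ClusterState:
    if not state.exists:
        print(f"creating target cluster {spec.target_cluster}")
        state.exists = True
        return state          # requeue: the next call continues from here
    if not state.restored:
        print(f"restoring {spec.dump_file} into {spec.target_cluster}")
        state.restored = True
        return state
    print("cleanup: deleting restore resources, keeping only the database")
    return state

spec = RestoreSpec("backup.sql", "pg-restored")
state = ClusterState()
for _ in range(3):            # the controller retriggers until stable
    state = reconcile(spec, state)
```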
53

A Performance Comparison of SQL and NoSQL Database Management Systems for 5G Radio Base Station Configuration / En jämförelse av prestanda mellan SQL och NoSQL databashanteringssystem för konfiguration av 5G radiobasstationer

Goltsis, Alexandra January 2022 (has links)
The need to store large amounts of data is ever increasing, and this requires better solutions for storing and managing the data. This is often done using a Database Management System (DBMS), which helps manage all data. There are many options available today, each serving a different purpose, so it is important to choose the right DBMS for the data at hand. Furthermore, it is not enough to choose the best DBMS; the database then needs to be designed so that the data can be stored in a structured way, which can be done in many ways. Ericsson wants to implement a database solution in one of their systems to make the workflow more efficient. The system is used to store data for configuring 5G nodes in testing environments. To do this, an investigation into which DBMS fits this data best is carried out. For this purpose, PostgreSQL is chosen to represent SQL databases and MongoDB to represent NoSQL databases. Additionally, proposed designs for each DBMS are produced. These designs are compared with regard to their response times for common queries, as well as in a load test with the expected load on the database. The results show that the two DBMSs are strong in different respects: for example, PostgreSQL is faster when relationships between different tables are used, but MongoDB is faster when querying only one document. In conclusion, both implementations serve their purpose and have their benefits, but MongoDB is judged the better choice given the knowledge of how the system is to be used.
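Here is a minimal Python sketch of the kind of response-time comparison described: one relational query against PostgreSQL and one document lookup against MongoDB, timed over repeated runs. The schema, connection strings, and node-configuration names are assumptions for illustration; a real benchmark would control far more variables.

```python
# Time a join-based lookup in PostgreSQL against a single-document lookup in
# MongoDB; database names and schema are hypothetical.
import time
import psycopg2                      # pip install psycopg2-binary
from pymongo import MongoClient      # pip install pymongo

def timed(label, fn, runs=100):
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    print(f"{label}: {(time.perf_counter() - start) / runs * 1000:.2f} ms/query")

pg = psycopg2.connect("dbname=nodeconf user=postgres")
mongo = MongoClient("mongodb://localhost:27017")["nodeconf"]

def pg_query():
    with pg.cursor() as cur:
        # relational variant: node configuration split across joined tables
        cur.execute("""SELECT n.name, p.key, p.value
                       FROM node n JOIN parameter p ON p.node_id = n.id
                       WHERE n.name = %s""", ("node-001",))
        cur.fetchall()

def mongo_query():
    # document variant: the whole node configuration lives in one document
    mongo.nodes.find_one({"name": "node-001"})

timed("PostgreSQL", pg_query)
timed("MongoDB", mongo_query)
```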
54

Utveckling och design av WiGID / Development and design of WiGID

Altayr, Hydar, Adis, Michael January 2003 (has links)
The Center for Genomics and Bioinformatics (CGB) is an academic department at Karolinska Institute. Generally stated, the CGB department is committed to the generation and management of genetic information by approaches aimed at elucidating the connection between genes, proteins and function. WiGID is a genome information database that is available through WAP (Wireless Application Protocol). Our version of WiGID is based on WML and PHP, with PostgreSQL as the database server. One of the changes to the old WiGID application was the creation of a relational database with seven tables and one view, instead of the single file that represented the database in the old version; the database is no longer static and hard to manage. We also changed the scripting language from Python to PHP. Database maintenance is handled by an input script that reads information from a file and inserts it into the respective tables. The search engine has been extended with three new search alternatives for the user to choose from, six in total instead of the previous three, and each choice leads on to further, sometimes multiple, choices. A GUI has been created for the administrator to insert information into the database. The structure of the search engine primarily aims at narrowing down the search results on the phone display, thereby making searching efficient.
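A small Python sketch of the narrowing-search idea: each user choice adds a filter that shrinks the result list shown on the small WAP display. The view and column names are hypothetical; the actual WiGID schema (seven tables and one view, queried from PHP) is not reproduced here.

```python
# Each menu choice adds one filter; the query narrows step by step.
import psycopg2

def narrowed_search(conn, filters):
    """filters: dict of column -> value, built up one user choice at a time.
    Column names come from a fixed menu, never from free text input."""
    clauses = " AND ".join(f"{col} = %s" for col in filters)
    sql = f"SELECT name, description FROM genome_view WHERE {clauses} LIMIT 10"
    with conn.cursor() as cur:
        cur.execute(sql, tuple(filters.values()))
        return cur.fetchall()

conn = psycopg2.connect("dbname=wigid user=postgres")
# first choice: organism; the second choice narrows further by chromosome
print(narrowed_search(conn, {"organism": "H. sapiens"}))
print(narrowed_search(conn, {"organism": "H. sapiens", "chromosome": "X"}))
```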
55

Evaluating Mitigations For Meltdown and Spectre : Benchmarking performance of mitigations against database management systems with OLTP workload / Bedömning Av Mitigering Mot Spectre och Meltdown : Prestandamätningar av databashanteringssystem efter mitigering mot Spectre och Meltdown med OLTP arbetsbelastning

Nilsson, Victor January 2018 (has links)
With Spectre and Meltdown out in the public, a rushed effort was made by operating system vendors to patch these vulnerabilities. However, the mitigations against said vulnerabilities come with some form of performance impact. This study aims to find out how much of an impact the software mitigations against Spectre and Meltdown have on database management systems under an online transaction processing (OLTP) workload. An experiment was carried out to evaluate two popular open-source database management systems on a Linux machine and see how they were affected before and after the software mitigations against Spectre and Meltdown were applied. The study found an average impact of 4-5% on performance when the software mitigations are applied. The study also compared the two database management systems with each other and found that PostgreSQL can suffer a performance reduction of about 27% when both the hypervisor and the operating system are patched against Spectre and Meltdown.
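For illustration, a minimal Python sketch of an OLTP-style measurement: many small read/write transactions timed in a loop, run once before and once after the mitigations are enabled. The accounts schema and connection string are assumptions; real OLTP benchmarks such as sysbench or TPC-C-style suites do considerably more.

```python
# Run many small transactions and report throughput; compare mitigated vs
# unmitigated runs of the same script. Schema is hypothetical.
import time
import random
import psycopg2

conn = psycopg2.connect("dbname=oltp user=postgres")

def one_transaction():
    with conn:                       # commits on success, rolls back on error
        with conn.cursor() as cur:
            account = random.randint(1, 100_000)
            cur.execute("SELECT balance FROM accounts WHERE id = %s", (account,))
            cur.execute("UPDATE accounts SET balance = balance + 1 WHERE id = %s",
                        (account,))

n = 10_000
start = time.perf_counter()
for _ in range(n):
    one_transaction()
elapsed = time.perf_counter() - start
print(f"{n / elapsed:.0f} transactions/s")
```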
56

Jämförelse av svarstider för olika bilddatabaser för Javabaserade http-servrar / Benchmark of different image databases for Java-based http-servers

Bäcklin, Staffan January 2016 (has links)
This bachelor thesis concerns databases in Java-based imaging systems where the images are stored and retrieved as binary objects. In MySQL and in some other database management systems this format is called a Blob (Binary Large Object). For the imaging system to work well, it is necessary to use a fast database. The aim has been to designate, out of a sample of databases, the database that is fastest in terms of response times for downloading images stored as binary objects. The databases are the four well-known database management systems MySQL, MariaDB, PostgreSQL and MongoDB. The tests were conducted with the databases integrated into Java-based client-server modules in order to mirror, as much as possible, the conditions prevailing in an imaging system. The test tools used were JMeter, an advanced application for measuring response times, and PerfMon, which monitors the consumption of system resources. MongoDB was the fastest image database, but there are many uncertainties that must be considered, which are also explained in this thesis. Although many measures have been taken to counter the uncertainties, the measurement uncertainty remains large. Further measures must be taken to isolate the databases' share of the response times in a client-server system; proposed measures are described in this bachelor thesis.
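As a concrete illustration of Blob storage and retrieval, here is a small Python sketch against PostgreSQL's bytea type. The thesis used Java client-server modules; Python and psycopg2 are used here only for brevity, and the table, file, and database names are hypothetical.

```python
# Store an image as a binary object, then time fetching it back.
import time
import psycopg2

conn = psycopg2.connect("dbname=imagedb user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("""CREATE TABLE IF NOT EXISTS images
                   (id serial PRIMARY KEY, name text, data bytea)""")
    with open("sample.jpg", "rb") as f:
        cur.execute("INSERT INTO images (name, data) VALUES (%s, %s)",
                    ("sample.jpg", psycopg2.Binary(f.read())))

start = time.perf_counter()
with conn.cursor() as cur:
    cur.execute("SELECT data FROM images WHERE name = %s", ("sample.jpg",))
    blob = cur.fetchone()[0]
print(f"fetched {len(blob)} bytes in {(time.perf_counter() - start) * 1000:.1f} ms")
```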
57

Разработка инфраструктуры и серверного приложения для проекта «Мониторинг IT-конференций» : магистерская диссертация / Development of infrastructure and server application for the project "Monitoring IT conferences"

Сухарев, Н. В., Sukharev, N. V. January 2021 (has links)
The purpose of the work is to develop the server side of the application and the infrastructure components for the project "Monitoring IT conferences". Research methods: analysis, comparison, systematization and generalization of data on existing and developed infrastructure components, and testing of modern approaches to building infrastructure architecture. As a result of the work, two virtual machines were configured to run Kubernetes and Gitlab Runner; persistent data storage for PostgreSQL, RabbitMQ and an S3 store was set up on Rook Ceph; a Django-based application was created to provide an API to the client application; and a Gitlab CI configuration was written that builds the application image and deploys it to Kubernetes. The application provides content-management functionality for service administrators (uploading videos to the S3 store, labelling them with a tag system, linking conferences to speakers) and an HTTP API for the client application with registration, authentication via JWT tokens, hierarchical search over the tag system, and signed links to the S3 store for watching videos.
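A small Python sketch of the "signed links to the S3 store" idea, using boto3 against an S3-compatible endpoint (Rook Ceph exposes one through its object gateway). The endpoint, credentials, bucket, and key are assumptions for illustration; the thesis's actual Django code is not shown.

```python
# Generate a time-limited, signed download URL for a stored video.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rook-ceph-rgw.example.local",  # S3-compatible gateway
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The API would return this URL to the client instead of proxying the video.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "conference-videos", "Key": "pycon/keynote.mp4"},
    ExpiresIn=3600,  # link stays valid for one hour
)
print(url)
```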
58

En jämförelse av metoder och verktyg för datahantering och analys inom datalager / A comparison of methods and tools for data management and analysis within data warehouses

Aziz, Adeeba January 2024 (has links)
This bachelor's thesis presents a comparative analysis of methods and tools for data management and analysis within data warehouses. With the rapidly increasing volume of data and the development of cloud technologies, companies face the challenge of navigating various methods to choose the one most suitable for their specific data management and analysis needs. The report highlights the One Big Table (OBT) method and the Data Build Tool (dbt), examining their advantages and disadvantages in data warehouse environments. To gain a deeper understanding of their functionality and efficiency, they are compared across different use cases through performance tests on latency and concurrency using the Hyperfine tool. OBT is implemented using Google BigQuery as well as Google Cloud SQL for PostgreSQL, where latency and concurrency for analytical purposes are evaluated using Python scripts with SQL queries and dbt models, respectively. The scripts and dbt models are run against BigQuery and PostgreSQL, both implementing OBT. The results show that the SQL scripts exhibited lower latency than the dbt models when executed against both BigQuery and PostgreSQL. Another finding is that the latency of the SQL scripts was lower against PostgreSQL than against BigQuery, while the dbt models showed higher latency against PostgreSQL than against BigQuery. The SQL scripts also performed better than the dbt models in concurrent executions in both BigQuery and PostgreSQL.
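Here is a minimal sketch of the kind of Python script whose end-to-end latency Hyperfine could time, assuming a BigQuery "one big table" layout. The project, dataset, table, and column names are hypothetical, and the thesis's actual queries and dbt models are not reproduced.

```python
# Run one analytical query against a hypothetical OBT table in BigQuery.
from google.cloud import bigquery   # pip install google-cloud-bigquery

client = bigquery.Client(project="example-project")

SQL = """
SELECT order_date, SUM(order_total) AS revenue
FROM `example-project.analytics.one_big_table`
GROUP BY order_date
ORDER BY order_date
"""

rows = client.query(SQL).result()   # runs the query and waits for completion
for row in rows:
    print(row.order_date, row.revenue)
```

Hyperfine then times repeated invocations of the whole script, e.g. `hyperfine 'python obt_query.py'` (the script name is illustrative), which measures end-to-end latency including client startup rather than query time alone.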
59

Assessing Query Execution Time and Implementational Complexity in Different Databases for Time Series Data / Utvärdering av frågeexekveringstid och implementeringskomplexitet i olika databaser för tidsseriedata

Jama Mohamud, Nuh, Söderström Broström, Mikael January 2024 (has links)
Traditional database management systems are designed for general-purpose data handling and fail to work efficiently with time-series data, owing to characteristics such as high volume, rapid ingestion rates, and a focus on temporal relationships. However, which solution is best is not a trivial question to answer. Hence, this thesis analyzes four different Database Management Systems (DBMSs) to determine their suitability for managing time-series data, with a specific focus on Internet of Things (IoT) applications. The DBMSs examined are PostgreSQL, TimescaleDB, ClickHouse, and InfluxDB. This thesis evaluates query performance across varying dataset sizes and time ranges, as well as the implementational complexity of each DBMS. The benchmarking results indicate that InfluxDB consistently delivers the best performance, though it involves higher implementational complexity and time consumption. ClickHouse emerges as a strong alternative with the second-best performance and the simplest implementation. The thesis also identifies potential biases in the benchmarking tools and suggests that TimescaleDB's performance may have been affected by configuration errors. The findings provide significant insight into the performance metrics and implementation challenges of the selected DBMSs. Despite limitations in fully addressing the research questions, this thesis offers a valuable overview of the examined DBMSs in terms of performance and implementational complexity. These results should be considered alongside additional research when selecting a DBMS for time-series data.
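A sketch of a typical time-series query of the kind such benchmarks compare: an aggregation over a recent time range, here in Python against PostgreSQL (TimescaleDB speaks the same SQL dialect). The table and column names are hypothetical, and each examined DBMS has its own idiom; InfluxDB, for instance, uses InfluxQL or Flux instead of SQL.

```python
# Time an hourly-average aggregation over the last 24 hours of sensor data.
import time
import psycopg2

conn = psycopg2.connect("dbname=tsdb user=postgres")

SQL = """
SELECT date_trunc('hour', ts) AS bucket,
       device_id,
       avg(value) AS avg_value
FROM sensor_readings
WHERE ts >= now() - interval '24 hours'
GROUP BY bucket, device_id
ORDER BY bucket
"""

start = time.perf_counter()
with conn.cursor() as cur:
    cur.execute(SQL)
    rows = cur.fetchall()
print(f"{len(rows)} rows in {(time.perf_counter() - start) * 1000:.1f} ms")
```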
60

Automatická publikace metadat a dat pro mapové a katalogové systémy z rastrových podkladů v PostgreSQL / Automatic publication of data and metadata for map and catalogue systems from raster sources in PostgreSQL

Hettler, Jakub January 2012 (has links)
The main goal of the presented work is the design and implementation of an application for the automatic publication of raster data and metadata from the PostgreSQL database to map and catalogue services. The application exclusively utilizes open-source software and technologies. The fundamental component of the developed application is the PostgreSQL database with the PostGIS and PostGIS raster extensions. The presented work evaluates the possibilities of raster storage from different points of view, e.g. suitability for further data processing or for publication of the raster data. The most suitable structure for raster storage is then proposed with respect to analytical and publication usage of the stored data. The possibilities of open-source software for solving and implementing the presented problem are then inspected. GeoNetwork and GeoServer are utilized as the metadata and map server solution. The results of deploying these technologies are evaluated on real-world data and compared with other available related solutions. Keywords: PostGIS, PostGIS raster, GeoServer, GeoNetwork opensource, metadata, web map services, OGC,...
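For orientation, a minimal sketch of loading a GeoTIFF into PostGIS raster storage with the raster2pgsql command-line tool that ships with PostGIS, driven from Python. The file path, SRID, tile size, and table and database names are assumptions for illustration; the thesis's own loading and publication pipeline is not shown.

```python
# Pipe raster2pgsql's generated SQL into psql to load a tiled raster.
import subprocess

raster2pgsql = subprocess.Popen(
    ["raster2pgsql",
     "-s", "4326",        # SRID of the source raster
     "-t", "256x256",     # split into 256x256 tiles for efficient access
     "-I",                # create a spatial (GiST) index
     "-C",                # apply standard raster constraints
     "ortofoto.tif", "public.ortofoto"],
    stdout=subprocess.PIPE,
)
subprocess.run(["psql", "-d", "rasterdb"], stdin=raster2pgsql.stdout, check=True)
raster2pgsql.stdout.close()
raster2pgsql.wait()
```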
