51. Backend development of a time-reporting application for Devize: Migration of data via API and report compilation
Gillström, Felicia, January 2022
This report summarizes the independent project work in the final course DT140G. The project's task and main goal have been to help the company involved enable a potential break from Harvest, the time-registration service it currently consumes. The task was divided into three distinct parts with different orientations but a common end goal. The first part involved data management against the consumed time-registration service, both exporting and importing data. The second part concerned developing CRUD functionality that can be consumed in the frontend by another developer. In the last part, a report-compilation application was created that processes the data from the previous parts and produces various reports, which can then be exported as Excel files. The work resulted in an application that closely resembles the previous time-registration service in terms of functionality, taking the company a step closer to its vision of a break from Harvest. The work was carried out with access to source code from a previous developer, who shared his repository via GitLab, and with the React Admin framework. The CRUD functionality has been verified with the testing tool ARC, and all code development has taken place in the Visual Studio Code development environment.
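Harvest exposes a REST API (v2) for exporting time entries, which is the kind of export step the first part describes. A minimal sketch, assuming a personal access token and account id are available (the abstract does not detail the project's actual export code):

    import requests

    HARVEST_API = "https://api.harvestapp.com/v2/time_entries"

    def export_time_entries(token: str, account_id: str) -> list:
        """Fetch all time entries from Harvest, following pagination."""
        headers = {
            "Authorization": f"Bearer {token}",
            "Harvest-Account-Id": account_id,
            "User-Agent": "devize-export-sketch",
        }
        entries, url = [], HARVEST_API
        while url:
            response = requests.get(url, headers=headers, timeout=30)
            response.raise_for_status()
            payload = response.json()
            entries.extend(payload["time_entries"])
            url = payload["links"]["next"]  # None on the last page
        return entries

The returned list could then be imported into the company's own database and fed to the report-compilation application.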
52. A Forensic Examination of Database Slack
Balazs, Joseph W. (5930528), 23 July 2021
This research includes an examination and analysis of the phenomenon of database slack. Database forensics is an underexplored subfield of digital forensics, and the lack of research becomes more important with every breach and theft of data; only a small amount of literature exists regarding database slack. This exploratory work examined what partial records of forensic significance can be found in database slack. A series of experiments performed update and delete transactions upon data in a PostgreSQL database, which created database slack. Patterns of hexadecimal indicators for database slack in the file system were found and analyzed. Despite limitations in the experiments, the results indicated that partial records of forensic significance can be found in database slack. Significantly, partial records found in database slack may aid a forensic investigation of a database breach. The details of the hexadecimal patterns of database slack fill gaps in the literature, the impact of log findings on an investigation was shown, and the complexity findings corroborate existing database forensics research. This research helps to lessen the dearth of work in the areas of database forensics and database slack.
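The abstract does not give the exact procedure, but the core of such an experiment can be sketched: delete rows in PostgreSQL, then scan the underlying heap file for leftover tuple data. A minimal sketch, with table and marker names illustrative and assuming the script can read the PostgreSQL data directory:

    import psycopg2  # PostgreSQL driver

    conn = psycopg2.connect(dbname="slacktest", user="postgres")
    conn.autocommit = True
    cur = conn.cursor()

    # Create sample data, then delete it so dead tuples remain in the heap file.
    cur.execute("CREATE TABLE IF NOT EXISTS people (id int, name text)")
    cur.execute("INSERT INTO people VALUES (1, 'MARKER_ALICE')")
    cur.execute("DELETE FROM people")
    cur.execute("CHECKPOINT")  # flush dirty pages to disk

    # Locate the table's file on disk relative to the data directory.
    cur.execute("SELECT current_setting('data_directory'),"
                " pg_relation_filepath('people')")
    datadir, relpath = cur.fetchone()

    # Deleted row data typically remains in the page until VACUUM reuses the space.
    with open(f"{datadir}/{relpath}", "rb") as f:
        page = f.read()
    print("MARKER_ALICE" in page.decode("latin-1"))  # often True before VACUUM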
53. Distribution System for e-Shops
Gavenda, Martin, January 2008
The aim of my thesis is to create a model of an information system supporting internet stores. Together with creating this model, I will also analyse the possible use of new technologies such as AJAX and communication with the help of XML. The final work will consist of two applications: the first is a catalogue of products (goods) and the second is an internet store. The main application will provide services for this store.
54. Extending the Kubernetes operator Kubegres to handle database restoration from dump files
Bemm, Rickard, January 2023
The use of cloud-native technologies has grown in popularity in recent years. With its ability to take advantage of the full benefits of cloud computing, cloud-native architecture has become a hot topic among developers and IT professionals. It refers to building and running applications using cloud services and architectures, including containerization, microservices, and automation tools such as Kubernetes, to enable fast and continuous delivery of software. In Kubernetes, the desired state of a resource is described declaratively, and the system then handles the details of how to get there. Databases are notoriously hard to deploy in such environments, and the Kubernetes operator pattern extends the set of resources Kubernetes manages together with the logic for reaching their desired state, the so-called reconcile function. Operators exist that manage PostgreSQL databases with backup and restore functionality, but some require a license. Kubegres is a free-to-use open-source operator, but it lacks restore functionality. This thesis aims to extend the Kubegres operator to support database restoration from dump files. It covers how to create the restore process in Kubernetes, what modifications must be made to the current architecture, and how to make the reconcile function robust and self-healing yet customizable enough to fit many different needs. The designs of other operators that already support database restoration were studied, and they inspired the design of the resource definition and the restoration process. A new resource definition was added to describe the desired state of the database restoration, along with a new reconcile function that defines how to act on it. The state is re-created each time the reconcile function is triggered. During restoration, a new database is always the target; once the restoration completes, the resources used to perform it are deleted and only the PostgreSQL database is left. To evaluate the operator, the performance impact of the modified operator was measured against the original. The tests consisted of operations both versions support, including PostgreSQL database creation, cluster scaling, and changing resource limits. The two collected metrics, CPU and memory usage, increased by 0.058-0.4 mvCPU (12-33%) and 8.2 MB (29%), respectively. A qualitative evaluation against qualities such as robustness, self-healing, customizability, and correctness showed that the design fulfils most of them.
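Kubegres itself is written in Go, but the reconcile idea described here can be sketched compactly with the Python operator framework kopf. The KubegresRestore resource names, API group, and spec fields below are hypothetical stand-ins for the thesis's new resource definition, not its actual code:

    import kopf
    from kubernetes import client

    @kopf.on.create("kubegres.example.org", "v1", "kubegresrestores")
    def restore(spec, name, namespace, **kwargs):
        """Reconcile a hypothetical KubegresRestore resource: run a Job that
        restores a dump file into a freshly created target database."""
        job = client.V1Job(
            api_version="batch/v1",
            kind="Job",
            metadata=client.V1ObjectMeta(name=f"{name}-restore",
                                         namespace=namespace),
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="Never",
                        containers=[client.V1Container(
                            name="psql-restore",
                            image="postgres:15",
                            # The restore always targets a new database; the
                            # dump file is assumed to be mounted in the pod.
                            command=["psql", "-h", spec["targetHost"],
                                     "-U", "postgres", "-f", spec["dumpFile"]],
                        )],
                    )
                )
            ),
        )
        client.BatchV1Api().create_namespaced_job(namespace, job)

Once the Job succeeds, a cleanup handler would delete the restore resources so that only the PostgreSQL database remains, matching the flow the abstract describes.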
55. A Performance Comparison of SQL and NoSQL Database Management Systems for 5G Radio Base Station Configuration
Goltsis, Alexandra, January 2022
The need to store large amounts of data is ever increasing, which requires better solutions for storing and managing the data. This is often done with a Database Management System (DBMS). There are many options available today, each serving a different purpose, so it is important to choose the right DBMS for the data at hand. Furthermore, it is not enough to choose the best DBMS; the database must then be designed so that the data can be stored in a structured way, which can be done in many ways. Ericsson wants to implement a database solution in one of their systems to make the workflow more efficient. The system is used to store data for configuring 5G nodes in testing environments. To this end, an investigation was made into which DBMS fits this data best. PostgreSQL was chosen to represent SQL databases and MongoDB to represent NoSQL databases, and a proposed design was produced for each DBMS. These designs are compared with regard to their response times for common queries, as well as in a load test with the expected load on the database. The results show that the two DBMSs are strong in different respects: for example, PostgreSQL is faster when relationships between different tables are used, while MongoDB is faster when querying a single document. In conclusion, both implementations serve their purpose and have their benefits, but MongoDB is judged the better choice given the knowledge of how the system is to be used.
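The response-time comparison can be sketched as timing one representative query against each DBMS. Connection details and the node table/collection below are illustrative, not the thesis's actual schema:

    import time
    import psycopg2
    from pymongo import MongoClient

    def timed(fn, runs=100):
        """Average wall-clock time of fn over a number of runs."""
        start = time.perf_counter()
        for _ in range(runs):
            fn()
        return (time.perf_counter() - start) / runs

    pg = psycopg2.connect(dbname="nodeconfig", user="postgres").cursor()
    mongo = MongoClient("localhost", 27017)["nodeconfig"]["nodes"]

    pg_time = timed(lambda: (pg.execute(
        "SELECT * FROM nodes WHERE node_id = %s", ("gnb-001",)), pg.fetchall()))
    mongo_time = timed(lambda: mongo.find_one({"node_id": "gnb-001"}))

    print(f"PostgreSQL: {pg_time * 1000:.2f} ms, MongoDB: {mongo_time * 1000:.2f} ms")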
56. Development and design of WiGID
Altayr, Hydar; Adis, Michael, January 2003
The Center for Genomics and Bioinformatics (CGB) is an academic department at Karolinska Institute. Generally stated, the CGB department is committed to the generation and management of genetic information through approaches aimed at elucidating the connection between genes, proteins, and function. WiGID is a genome information database that is available through WAP (Wireless Application Protocol). Our version of WiGID is based on WML and PHP, with PostgreSQL as the database server. One of the changes to the old WiGID application was the creation of a relational database with seven tables and one view, replacing the flat file that served as the database in the old version. We also changed the scripting language from Python to PHP. The search engine has been extended with three new search alternatives for the user to choose from; each choice leads to further, sometimes multiple, choices. A GUI has been created so that the administrator can insert information into the database. The structure of the search engine primarily serves to narrow down the search results on the phone display, thereby making searching efficient.
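The insert functionality in the thesis is implemented in PHP; purely as an illustration, and in Python to keep the sketches here in one language, loading records from a file into one of the relational tables might look like the following. The table, columns, and file format are assumptions, not WiGID's actual schema:

    import csv
    import psycopg2

    conn = psycopg2.connect(dbname="wigid", user="wigid_admin")
    cur = conn.cursor()

    # Each row in the input file becomes one record in an assumed 'organism' table.
    with open("genomes.csv", newline="") as f:
        for row in csv.DictReader(f):
            cur.execute(
                "INSERT INTO organism (name, genome_size, gc_content) "
                "VALUES (%s, %s, %s)",
                (row["name"], row["genome_size"], row["gc_content"]),
            )

    conn.commit()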
57. Evaluating Mitigations For Meltdown and Spectre: Benchmarking performance of mitigations against database management systems with OLTP workload
Nilsson, Victor, January 2018
With Spectre and Meltdown out in the public, operating system vendors rushed to patch these vulnerabilities. However, the mitigations come with some form of performance impact. This study aims to find out how much of an impact the software mitigations against Spectre and Meltdown have on database management systems under an online transaction processing (OLTP) workload. An experiment was carried out to evaluate two popular open-source database management systems and see how they were affected before and after the software mitigations against Spectre and Meltdown were applied. The study found an average performance impact of 4-5% when the software mitigations are applied. The study also compared the two database management systems with each other and found that PostgreSQL's performance can drop by about 27% when both the hypervisor and the operating system are patched against Spectre and Meltdown.
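On Linux, the kernel reports whether such mitigations are active under /sys/devices/system/cpu/vulnerabilities/, which makes for a natural sanity check before running the patched and unpatched benchmark rounds. A small sketch:

    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    # Each file holds a one-line status such as "Mitigation: PTI" or "Vulnerable".
    for vuln in ("meltdown", "spectre_v1", "spectre_v2"):
        status = (VULN_DIR / vuln).read_text().strip()
        print(f"{vuln}: {status}")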
58. Benchmark of different image databases for Java-based HTTP servers
Bäcklin, Staffan, January 2016
This bachelor thesis concerns databases in Java-based image-handling systems where images are stored and retrieved as binary objects. In MySQL and some other database management systems, this format is called a BLOB (Binary Large Object). For the image-handling system to work well, a fast database is required. The aim has been to designate, out of a sample of databases, the one that is fastest in terms of response times for retrieving images stored as binary objects. The candidates are the four well-known database management systems MySQL, MariaDB, PostgreSQL, and MongoDB. The tests were conducted with the databases integrated into Java-based client-server modules, in order to mirror as closely as possible the conditions in an image-handling system. The test tools used were JMeter, an advanced application for measuring response times, and PerfMon, which monitors the consumption of system resources. MongoDB was the fastest image database, but there are many sources of uncertainty that must be considered, which are also described in this thesis. Although many measures were taken to counter them, the measurement uncertainty remains large. Further measures are needed to isolate the databases' share of the response times in a client-server system; proposals are described in this thesis.
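The thesis's benchmark clients are Java-based; as an illustration only (in Python, keeping these sketches in one language), storing an image as a binary object and timing its retrieval might look like the following, using PostgreSQL's bytea type and an assumed images table:

    import time
    import psycopg2

    conn = psycopg2.connect(dbname="imagedb", user="postgres")
    cur = conn.cursor()

    # Store one image as a binary object (bytea).
    cur.execute("CREATE TABLE IF NOT EXISTS images (id serial PRIMARY KEY, data bytea)")
    with open("sample.jpg", "rb") as f:
        cur.execute("INSERT INTO images (data) VALUES (%s) RETURNING id",
                    (psycopg2.Binary(f.read()),))
    image_id = cur.fetchone()[0]
    conn.commit()

    # Time a single retrieval of the stored object.
    start = time.perf_counter()
    cur.execute("SELECT data FROM images WHERE id = %s", (image_id,))
    blob = cur.fetchone()[0]
    print(f"fetched {len(blob)} bytes in {(time.perf_counter() - start) * 1000:.2f} ms")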
59. Development of infrastructure and server application for the project "Monitoring IT conferences" (master's thesis)
Сухарев, Н. В. (Sukharev, N. V.), January 2021
The purpose of the work is to develop the server side of the application and the infrastructure components for the project "Monitoring IT conferences". Research methods: analysis, comparison, systematization, and generalization of data on existing and newly developed infrastructure components, and the testing of modern approaches to building infrastructure architecture. As a result of the work, two virtual machines were configured to run Kubernetes and Gitlab Runner; persistent data storage components for PostgreSQL, RabbitMQ, and S3 storage based on Rook Ceph were configured; an application based on Django was created to provide an API to the client application; and a Gitlab CI configuration was written that builds the application image and deploys it to Kubernetes. The resulting application provides content management functionality for service administrators (uploading videos to S3 storage, tagging, and linking conferences to speakers) and an HTTP API for the client application with registration, authentication via JWT tokens, hierarchical search over the tag system, and signed links to the S3 storage for watching videos.
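The signed S3 links mentioned can be produced with any S3-compatible client. A minimal sketch with boto3, where the endpoint, credentials, bucket, and key are placeholders rather than the project's real configuration:

    import boto3

    # Works against any S3-compatible storage (here: a placeholder Ceph endpoint).
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.org",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # The signed URL grants temporary read access to a single video object.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "conference-videos", "Key": "pycon-2021/talk-01.mp4"},
        ExpiresIn=3600,  # one hour
    )
    print(url)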
60. A comparison of methods and tools for data management and analysis within data warehouses
Aziz, Adeeba, January 2024
This bachelor's thesis presents a comparative analysis of methods and tools for data management and analysis within data warehouses. With the rapidly increasing volume of data and the development of cloud technologies, companies face the challenge of navigating various methods to choose the most suitable one for their specific data management and analysis needs. The report highlights the One Big Table (OBT) method and the Data Build Tool (dbt), examining their advantages and disadvantages in data warehouse environments. To gain a deeper understanding of their functionality and efficiency, they are compared across different use cases through performance tests of latency and concurrency using the Hyperfine tool. OBT is implemented in both Google BigQuery and Google Cloud SQL for PostgreSQL, and latency and concurrency for analytical purposes are evaluated using Python scripts with SQL queries as well as dbt models. The scripts and dbt models are run against BigQuery and PostgreSQL, both implementing OBT. The results show that the SQL scripts exhibited lower latency than the dbt models when executed against both BigQuery and PostgreSQL. Another finding is that the latency of the SQL scripts was lower in PostgreSQL than in BigQuery, whereas the dbt models showed higher latency in PostgreSQL than in BigQuery. In both data warehouse environments, the SQL scripts also performed better than the dbt models under concurrent executions.
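The Python-scripts-with-SQL variant can be sketched as a single wide-table aggregate timed end to end; this is also the shape of command a tool like Hyperfine would invoke repeatedly. Table and column names below are assumptions, not the thesis's actual schema:

    import time
    import psycopg2

    # One Big Table: a single wide, denormalized table queried directly.
    QUERY = """
        SELECT customer_region, SUM(order_total)
        FROM one_big_table
        GROUP BY customer_region
    """

    conn = psycopg2.connect(dbname="warehouse", user="postgres")
    cur = conn.cursor()

    start = time.perf_counter()
    cur.execute(QUERY)
    rows = cur.fetchall()
    print(f"{len(rows)} groups in {time.perf_counter() - start:.3f} s")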