About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Assessing Query Execution Time and Implementational Complexity in Different Databases for Time Series Data / Utvärdering av frågeexekveringstid och implementeringskomplexitet i olika databaser för tidsseriedata

Jama Mohamud, Nuh, Söderström Broström, Mikael January 2024 (has links)
Traditional database management systems are designed for general-purpose data handling and fail to work efficiently with time-series data due to characteristics such as high volume, rapid ingestion rates, and a focus on temporal relationships. However, determining the best solution is not a trivial question. This thesis therefore analyzes four different database management systems (DBMSs) to determine their suitability for managing time series data, with a specific focus on Internet of Things (IoT) applications. The DBMSs examined are PostgreSQL, TimescaleDB, ClickHouse, and InfluxDB. The thesis evaluates query performance across varying dataset sizes and time ranges, as well as the implementational complexity of each DBMS. The benchmarking results indicate that InfluxDB consistently delivers the best performance, though it involves higher implementational complexity and time consumption. ClickHouse emerges as a strong alternative with the second-best performance and the simplest implementation. The thesis also identifies potential biases in the benchmarking tools and suggests that TimescaleDB's performance may have been affected by configuration errors. The findings provide significant insights into the performance metrics and implementation challenges of the selected DBMSs. Despite limitations in fully addressing the research questions, the thesis offers a valuable overview of the examined DBMSs in terms of performance and implementational complexity. These results should be considered alongside additional research when selecting a DBMS for time series data.
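A minimal sketch of the kind of query-latency measurement described above, assuming a hypothetical schema (a `sensor_data` table with `ts` and `value` columns) and the psycopg2 driver; this covers only the relational side (PostgreSQL/TimescaleDB), since InfluxDB and ClickHouse use their own clients and query languages.

```python
# Sketch: timing a time-range aggregation query with psycopg2.
# Table/column names and the connection string are illustrative assumptions,
# not the thesis's actual benchmark schema.
import time
import psycopg2

QUERY = """
    SELECT date_trunc('hour', ts) AS bucket, avg(value)
    FROM sensor_data
    WHERE ts BETWEEN %s AND %s
    GROUP BY bucket
    ORDER BY bucket;
"""

def time_query(conn, start, end, runs=10):
    """Return the mean execution time in seconds over several runs."""
    timings = []
    with conn.cursor() as cur:
        for _ in range(runs):
            t0 = time.perf_counter()
            cur.execute(QUERY, (start, end))
            cur.fetchall()
            timings.append(time.perf_counter() - t0)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=tsdb user=postgres")  # assumed connection string
    print(time_query(conn, "2024-01-01", "2024-01-07"))
    conn.close()
```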
62

Информационно-моделирующая система расчета показателей теплового режима при изменении режимных и конструктивных параметров доменной печи : магистерская диссертация / Information modeling system for calculating thermal condition indicators when changing the operating and design parameters of a blast furnace

Шамсимухаметов, П. Р., Shamsimukhametov, P. R. January 2024 (has links)
The master's thesis is devoted to the development of an information modeling system for the thermal regime of blast furnace smelting based on a three-tier client-server architecture. The main stages of software development are considered: problem statement; domain analysis; definition of requirements; selection of technology and means of project implementation; creation of the system architecture; design and implementation of the software, as well as its publication (deployment). The operation of the system is described using the example of calculating thermal parameters for the operating conditions of the blast furnace shop of PJSC Magnitogorsk Iron and Steel Works. The main functions of the software are: manual input and adjustment of source data based on a given template or a previously saved calculation option; saving and loading source data options for the base period; calculation of thermal state indicators of blast furnace smelting in the base and project periods; display of calculation results in tabular and graphical form; export of results to an external office document format; and maintenance of a block of regulatory and reference information. During development, an agile software development methodology was used with an iterative approach (sprints), based on the Microsoft Azure DevOps task tracker and on GitHub as a remote repository for storing and jointly developing the project's code. The scientific novelty of the results lies in: the development of methods for effectively organizing and managing the development and maintenance of the specialized information, algorithmic, and software support, including the database and the TeploAPI and TeploClient microservices; the use of an agile development methodology (Agile, Scrum) and the Microsoft Azure DevOps task tracker for project management, interaction with the customer during development, error tracking, visual display of tasks, and monitoring of their progress; and the use of collective code ownership based on the GitHub service (a remote repository). The practical significance of the results is that the developed software allows specialists of the blast furnace shop's engineering and technology group to carry out automated modeling of the gas-dynamic regime of blast furnace smelting and to reduce the time spent preparing reporting documents, and allows specialists of the information systems support department to reduce the labor costs of maintaining, improving, and extending the system in line with user requests, thanks to the microservice architecture. The developed software is intended for the engineering and technological personnel of blast furnace shops of metallurgical enterprises and for researchers studying the blast furnace process, and can also be used in the educational process for laboratory and practical work by students of metallurgical specialties at universities. The results of the work were presented and discussed at the X, XI and XII All-Russian Scientific and Practical Conferences of Students, Graduate Students and Young Scientists (Ekaterinburg, UrFU, 2022, 2023, 2024) and at the All-Russian Scientific and Practical Conference (with international participation) "Automation Systems (in Education, Science and Production): AS'2023". The results of the work are reflected in three publications, and two certificates of state registration of computer programs have been obtained.
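To make the three-tier, microservice-based design more concrete, below is a minimal sketch of a TeploAPI-style calculation endpoint written with Flask. The route, payload fields and the placeholder formula are illustrative assumptions only; they do not reproduce the thesis's actual thermal model, and the abstract does not specify the real system's technology stack.

```python
# Minimal sketch of a calculation endpoint in the spirit of TeploAPI (Flask).
# Route, field names and the placeholder formula are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/thermal", methods=["POST"])
def thermal_indicators():
    data = request.get_json(force=True)
    base = data["base"]        # base-period inputs (assumed structure)
    project = data["project"]  # project-period inputs (assumed structure)

    def heat_index(p):
        # Placeholder indicator: NOT the real blast-furnace heat balance.
        return p["coke_rate"] * 0.8 + p["blast_temperature"] * 0.01

    return jsonify({
        "base_heat_index": heat_index(base),
        "project_heat_index": heat_index(project),
    })

if __name__ == "__main__":
    app.run(port=8000)
```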
63

Automatická publikace metadat a dat pro mapové a katalogové systémy z rastrových podkladů v PostgreSQL / Automatic publication of data and metadata for map and catalogue systems from raster sources in PostgreSQL

Hettler, Jakub January 2012 (has links)
The main goal of the presented work is the design and implementation of an application for the automatic publication of raster data and metadata from the PostgreSQL database to map and catalogue services. The application should exclusively utilize open source software and technologies. The fundamental component of the developed application is the PostgreSQL database with the PostGIS and PostGIS Raster extensions. The presented work evaluates the possibilities of raster storage from different points of view, e.g. the suitability for further data processing or for the publication of the raster data. The most suitable structure for raster storage is then proposed with respect to analytical and publication usage of the stored data. The possibilities of open source software for solving and implementing the presented problem are then inspected. GeoNetwork and GeoServer are utilized as the metadata and map server solution. The results of deploying these technologies are then evaluated on real-world data and compared with other available related solutions. Keywords: PostGIS, PostGIS raster, GeoServer, GeoNetwork opensource, metadata, web map services, OGC,...
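As a small illustration of working with rasters stored in PostGIS, the sketch below reads raster metadata and summary statistics with psycopg2. The table name `dem_rasters`, the `rast` column and the connection string are assumptions; `ST_MetaData` and `ST_SummaryStats` are standard PostGIS raster functions.

```python
# Sketch: inspecting rasters stored in PostGIS from Python.
# Table/column names and DSN are illustrative assumptions.
import psycopg2

SQL = """
    SELECT rid,
           (ST_MetaData(rast)).*,
           (ST_SummaryStats(rast)).mean
    FROM dem_rasters
    LIMIT 5;
"""

with psycopg2.connect("dbname=gisdb user=postgres") as conn:  # assumed DSN
    with conn.cursor() as cur:
        cur.execute(SQL)
        for row in cur.fetchall():
            print(row)
```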
64

Otimização de desempenho na recuperação de imagens de um sistema de auxílio ao diagnóstico de pneumonias na infância / Performance optimization in an image retrieval system to aid diagnosis of pneumonia in children

Silva, Keila Sousa 23 September 2013 (has links)
This study aims to optimize the runtime performance of a system developed to aid the computer-assisted diagnosis of childhood pneumonia. This system, called Pneumocad (Macedo, 2012), identifies chest radiographs consistent with the disease using computational techniques for recognizing patterns in textures, through wavelet-transform decomposition, features extracted from the decompositions, and classification applied to the radiographs. To achieve this optimization of runtime performance when inserting new radiographs and retrieving their similar radiographs, a clustering architecture was deployed over the radiographs already stored in the Pneumocad database. In parallel, the functionality responsible for defining how similar one radiograph is to another was transferred from the Java source code to views in the database. The experiments were performed on databases containing 183, 2,568 and 10,200 radiographs, using the Pneumocad of Macedo (2012), the optimized Pneumocad with views and without grouping, and the optimized Pneumocad with views and grouping. The experiments and results show that the proposed optimization contributed to the evolution of Pneumocad and enhanced this tool to support the diagnosis of pneumonia in childhood.
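A rough sketch of the "similarity in a database view" idea follows, assuming a hypothetical `features` table with a few numeric wavelet-feature columns and a plain Euclidean distance; the real Pneumocad schema and similarity measure are not given in the abstract.

```python
# Sketch: pushing a similarity computation into a PostgreSQL view.
# Table, columns and distance function are illustrative assumptions.
import psycopg2

CREATE_VIEW = """
    CREATE OR REPLACE VIEW similar_radiographs AS
    SELECT a.id AS query_id,
           b.id AS candidate_id,
           sqrt(power(a.f1 - b.f1, 2) +
                power(a.f2 - b.f2, 2) +
                power(a.f3 - b.f3, 2)) AS distance
    FROM features a
    JOIN features b ON a.id <> b.id;
"""

FIND_SIMILAR = """
    SELECT candidate_id, distance
    FROM similar_radiographs
    WHERE query_id = %s
    ORDER BY distance
    LIMIT 10;
"""

with psycopg2.connect("dbname=pneumocad user=postgres") as conn:  # assumed DSN
    with conn.cursor() as cur:
        cur.execute(CREATE_VIEW)
        cur.execute(FIND_SIMILAR, (42,))
        print(cur.fetchall())
```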
65

Implementace elektronické evidence tržeb (EET) pro e-shopy / Implementation of EET for electronic shops

CÍGL, Jakub January 2017 (has links)
My diploma thesis is focused on creating an application for the Electronic Register of Sales (EET). The main purpose of the application is to facilitate implementation in e-shops built from scratch ("greenfield"). The application is supposed to help programmers in their work, because the most complex logic is handled by the app. Another big advantage is saving money for e-shop owners: implementing my solution takes less time than implementing the solution provided by the Financial Administration of the Czech Republic. The first part of the thesis concentrates on the Electronic Register of Sales itself, followed by a description of the technologies and tools used for developing the application; the selected technologies and tools are modern and widely used today. The theoretical part closes by introducing the e-shop Tisknisi.cz. In the practical part of the thesis, I describe the "E-EET" application and its structure in terms of front end and back end, and then explain its technical solution; code samples are accompanied by explanations. The implementation phase is described through the POST request method. I have also designed a pricing policy to ensure the profitability of the "E-EET" application.
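A minimal sketch of how an e-shop backend might hand a completed sale to an EET-registration service over HTTP POST. The endpoint URL, field names and response shape are hypothetical; the production EET interface of the Financial Administration is a signed SOAP/XML service, which this sketch does not attempt to reproduce.

```python
# Sketch: posting a completed sale to a hypothetical E-EET-style service.
# URL, payload fields and response keys are assumptions for illustration.
import requests

def register_sale(total_czk, vat_czk, shop_id, register_id):
    payload = {
        "shop_id": shop_id,          # hypothetical field
        "register_id": register_id,  # hypothetical field
        "total": total_czk,
        "vat": vat_czk,
    }
    resp = requests.post(
        "https://example.org/e-eet/api/sales",  # hypothetical endpoint
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("fik")  # fiscal identification code, if returned

if __name__ == "__main__":
    print(register_sale(1210.0, 210.0, "CZ1212121218", "register-1"))
```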
66

A comparison of latency for MongoDB and PostgreSQL with a focus on analysis of source code

Lindvall, Josefin, Sturesson, Adam January 2021 (has links)
The purpose of this paper is to clarify the differences in latency between PostgreSQL and MongoDB as a consequence of their differences in software architecture. This has been achieved through benchmarking of Insert, Read and Update operations with the tool "Yahoo! Cloud Serving Benchmark", and through source code analysis of both database management systems (DBMSs). The overall structure of the architecture has been researched with Big O notation as a tool to examine the complexity of the source code. The results from the benchmarking show that the latency for Insert and Update operations was lower for MongoDB, while the latency for Read was lower for PostgreSQL. The results from the source code analysis show that both DBMSs have a complexity of O(n), but that there are multiple differences in their software architecture affecting latency. The most important difference was the length of the parsing process, which was greater for PostgreSQL. The conclusion is that there are significant differences in latency and source code and that room exists for further research in the field. The biggest limitation of the experiment consists of factors such as background processes, which affected latency and could not be eliminated, resulting in low validity.
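A minimal sketch of a single-operation latency comparison in the spirit of the YCSB workloads above, using pymongo and psycopg2. Collection/table names and connection strings are assumptions, and a real benchmark should use YCSB or similar tooling rather than this simplified loop.

```python
# Sketch: naive insert/read latency comparison for MongoDB vs PostgreSQL.
import time
import psycopg2
from pymongo import MongoClient

N = 1000

def bench(fn):
    """Return mean seconds per call over N calls."""
    t0 = time.perf_counter()
    for i in range(N):
        fn(i)
    return (time.perf_counter() - t0) / N

# --- MongoDB (assumed local instance) ---
mongo = MongoClient("mongodb://localhost:27017")
col = mongo.benchdb.items
col.drop()
mongo_insert = bench(lambda i: col.insert_one({"_id": i, "payload": "x" * 100}))
mongo_read = bench(lambda i: col.find_one({"_id": i}))

# --- PostgreSQL (assumed local instance) ---
pg = psycopg2.connect("dbname=benchdb user=postgres")
pg.autocommit = True
cur = pg.cursor()
cur.execute("DROP TABLE IF EXISTS items; "
            "CREATE TABLE items (id int PRIMARY KEY, payload text);")
pg_insert = bench(lambda i: cur.execute(
    "INSERT INTO items VALUES (%s, %s)", (i, "x" * 100)))
pg_read = bench(lambda i: (cur.execute(
    "SELECT payload FROM items WHERE id = %s", (i,)), cur.fetchone()))

print(f"insert  mongo={mongo_insert*1e3:.3f} ms  postgres={pg_insert*1e3:.3f} ms")
print(f"read    mongo={mongo_read*1e3:.3f} ms  postgres={pg_read*1e3:.3f} ms")
```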
67

Optimalizace informačního systému pro sledování spotřeb energií / Optimization of Energy Consumption Monitoring System

Hrbek, Martin January 2019 (has links)
The goal of this thesis is to optimize the existing energy consumption monitoring system and expand the presentation options for the measured data in a way that is suitable for large volumes of data and long periods of time. The optimization concerns data processing and data presentation in particular.
68

Automatizovaná analýza a archivace dat z webu / Automated Web Analysis and Archivation

Kocman, Tomáš January 2019 (has links)
This thesis is focused on cybercrime, the acquisition of evidence, and the development of a platform for retrieving, analyzing and archiving web site data. The goal is to meet the needs of investigators and security experts of the Czech Police. The aim is to provide an open source platform that is freely distributable and compliant with the requirements of legal institutions. The output of the thesis is two versions of the platform: a full-fledged version fulfilling all the requirements set out in the thesis, and a light version for police investigators of the Czech Republic.
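A small sketch of the evidence-acquisition step described above: fetching a web page and storing it together with an integrity hash and a timestamp. The file layout and metadata fields are assumptions, not the platform's actual storage format.

```python
# Sketch: archiving a fetched page with a SHA-256 hash and UTC timestamp.
import hashlib
import json
import pathlib
from datetime import datetime, timezone

import requests

def archive(url, out_dir="archive"):
    resp = requests.get(url, timeout=30)
    body = resp.content
    digest = hashlib.sha256(body).hexdigest()

    target = pathlib.Path(out_dir) / digest
    target.mkdir(parents=True, exist_ok=True)
    (target / "content.bin").write_bytes(body)
    (target / "meta.json").write_text(json.dumps({
        "url": url,
        "fetched_at": datetime.now(timezone.utc).isoformat(),
        "status": resp.status_code,
        "sha256": digest,
        "headers": dict(resp.headers),
    }, indent=2))
    return digest

if __name__ == "__main__":
    print(archive("https://example.org"))
```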
69

Komplexní řešení prodeje zboží / A Complex Solution for Selling Merchandise

Krhovský, Patrik January 2020 (has links)
The aim of this thesis is to analyze, design and implement a solution for selling merchandise that sellers can use with commonly available hardware, that is free in its basic package, and whose setup they can handle themselves. As a result, sellers can avoid new operating costs. The system runs as a service on Heroku servers. The front end and back end are implemented in JavaScript; the front end also uses React. GraphQL is used for communication between the front end and the back end. The data is stored in the PostgreSQL relational database, and the Redis database is also used to run tasks in the background.
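To illustrate the GraphQL communication between front end and back end, here is a minimal client-side sketch issuing a query over HTTP. The endpoint path, query fields and schema are hypothetical; the thesis's actual API is implemented in JavaScript and is not reproduced here.

```python
# Sketch: querying a hypothetical GraphQL endpoint from a client.
import requests

QUERY = """
query Products($limit: Int!) {
  products(limit: $limit) {
    id
    name
    price
  }
}
"""

def fetch_products(limit=10):
    resp = requests.post(
        "https://example-shop.herokuapp.com/graphql",  # hypothetical endpoint
        json={"query": QUERY, "variables": {"limit": limit}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["products"]

if __name__ == "__main__":
    for product in fetch_products(5):
        print(product["name"], product["price"])
```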
70

Metody přístupu k databázím PostgreSQL v .NET Framework / Methods of access to PostgreSQL databases in .NET Framework

Henzl, Václav January 2009 (has links)
The results of this work are two major projects: NpgObjects and PagedDataGridView. NpgObjects is a simple ORM framework that enables mapping database tables to objects in the common language runtime. It contains a specially designed generator which generates C# classes from information obtained from the database. These classes map to the database tables one to one. NpgObjects supports all the basic database operations: SELECT, INSERT, UPDATE and DELETE. PagedDataGridView is a component for displaying tabular data. In cooperation with NpgObjects, it can paginate database data and manage the flow of data into the application. It provides a comfortable user interface in which users can easily navigate between different pages of data.
