71
Metody přístupu k databázím PostgreSQL v .NET Framework / Methods of access to PostgreSQL databases in .NET Framework
Henzl, Václav, January 2009
The results of this work are two major projects: NpgObjects and PagedDataGridView. NpgObjects is a simple ORM framework that maps database tables to objects in the common language runtime. It contains a specially designed generator that produces C# classes from information obtained from the database; these classes map to the database tables one-to-one. NpgObjects supports all the basic database operations: SELECT, INSERT, UPDATE and DELETE. PagedDataGridView is a component for displaying tabular data. In cooperation with NpgObjects it can paginate database data and manage the flow of data into the application, and it provides a comfortable user interface for navigating between pages of data.
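NpgObjects itself is a C#/.NET library, but the one-to-one table-to-class mapping and the paging described above can be illustrated with a minimal Python/psycopg2 sketch; the table, columns and connection string below are hypothetical.

```python
# Minimal sketch of the table-to-class mapping and paging idea described above.
# NpgObjects generates C# classes; this Python/psycopg2 version is only illustrative,
# and the 'customer' table, its columns and the DSN are hypothetical.
import psycopg2


class Customer:
    """One instance corresponds to one row of the hypothetical 'customer' table."""
    TABLE = "customer"
    COLUMNS = ("id", "name", "email")

    def __init__(self, id=None, name=None, email=None):
        self.id, self.name, self.email = id, name, email

    @classmethod
    def select_all(cls, conn):
        with conn.cursor() as cur:
            cur.execute(f"SELECT {', '.join(cls.COLUMNS)} FROM {cls.TABLE}")
            return [cls(*row) for row in cur.fetchall()]

    @classmethod
    def select_page(cls, conn, page, page_size=50):
        # Paging in the spirit of PagedDataGridView: fetch one page of rows at a time.
        with conn.cursor() as cur:
            cur.execute(
                f"SELECT {', '.join(cls.COLUMNS)} FROM {cls.TABLE} "
                f"ORDER BY id LIMIT %s OFFSET %s",
                (page_size, page * page_size),
            )
            return [cls(*row) for row in cur.fetchall()]

    def insert(self, conn):
        with conn.cursor() as cur:
            cur.execute(
                f"INSERT INTO {self.TABLE} (name, email) VALUES (%s, %s) RETURNING id",
                (self.name, self.email),
            )
            self.id = cur.fetchone()[0]
        conn.commit()


if __name__ == "__main__":
    conn = psycopg2.connect("dbname=shop user=postgres")  # hypothetical DSN
    Customer(name="Alice", email="alice@example.com").insert(conn)
    print([c.name for c in Customer.select_page(conn, page=0)])
```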
72
Aplikace pro zobrazení modelu bezdrátové sítě / Application for displaying a wireless network model
Žoldoš, Petr, January 2011
The first step of this Master's thesis was to gain knowledge of the Adobe Flex SDK and the Google Maps API. This knowledge was used to develop an application that lets users create, generate and modify a graphical model of a wireless network. The position and characteristics of each unit are monitored either in a map interface or in building plans. Data are gathered from forms filled in by the current user, from an external file, or periodically from a connected database system. The theoretical part explains the technologies used. It describes the development of the program and the design decisions that were made, along with examples of the source code. Screenshots of the graphical user interface are included, as well as a description of how it all works.
73
Temporální rozšíření pro PostgreSQL / A Temporal Extension for PostgreSQL
Jelínek, Radek, January 2015
This thesis focuses on the PostgreSQL database system. It introduces temporal databases and the PostgreSQL database system, proposes a temporal extension for PostgreSQL, and contains an implementation chapter with examples of using this extension. The thesis also covers existing temporal database systems and the use of temporal databases in practice.
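The abstract does not give the proposed extension's actual syntax, so the sketch below only illustrates the underlying valid-time idea using standard PostgreSQL range types, driven from Python with psycopg2; the table, columns and connection string are hypothetical.

```python
# Illustration only: a valid-time table built from standard PostgreSQL range types,
# not the syntax of the thesis's extension. Table, columns and DSN are hypothetical.
import psycopg2

STATEMENTS = [
    "CREATE EXTENSION IF NOT EXISTS btree_gist;",  # needed for the exclusion constraint
    "DROP TABLE IF EXISTS employee_salary;",
    """
    CREATE TABLE employee_salary (
        emp_id     integer,
        salary     numeric,
        valid_time tstzrange,  -- period during which this salary was in effect
        EXCLUDE USING gist (emp_id WITH =, valid_time WITH &&)  -- no overlapping periods
    );
    """,
    # One salary history entry for a hypothetical employee 42.
    "INSERT INTO employee_salary VALUES (42, 55000, tstzrange('2013-06-01', '2014-06-01'));",
]

# "What was the salary of employee 42 on 2014-01-01?"
QUERY = """
SELECT salary
FROM employee_salary
WHERE emp_id = %s
  AND valid_time @> %s::timestamptz;
"""

conn = psycopg2.connect("dbname=hr user=postgres")  # hypothetical DSN
with conn, conn.cursor() as cur:
    for stmt in STATEMENTS:
        cur.execute(stmt)
    cur.execute(QUERY, (42, "2014-01-01"))
    print(cur.fetchall())  # [(Decimal('55000'),)]
```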
74
High-Availability für ZOPE / High availability for ZOPE
Damaschke, Marko, 11 June 2005
This thesis investigates which mechanisms for ensuring the highest possible availability (high availability), which approaches to load distribution by means of the ZEO product or similar, and which caching strategies can sensibly be employed on a ZOPE server. In doing so, it examines the applicability of existing products from the ZOPE ecosystem and the possible need to implement further products in-house. The server infrastructure of the Bildungsmarktplatz Sachsen forms the setting of this work.
75
Sicheres Verteilen von Konfigurationsdaten und Migrationsstrategie zum Trennen von Diensten und Datenbasis / Secure distribution of configuration data and a migration strategy for separating services from the data store
Wehrmann, Sebastian, 01 August 2006
For historical reasons, the CSN database and the services accessing it have always resided on the same machine, partly for lack of money and partly because distributing the configuration and controlling access to the database was an unsolved problem. The task of this thesis is the physical and logical separation of the firewall (and the traffic shaper) from the database. To this end, a service must be created that provides the configuration information to the firewall and, potentially, to other applications. Access to this information must be protected from third parties. Furthermore, a migration strategy is to be designed that shows how the transition to the outlined solution can be accomplished.
76
Hantering av kursmjukvaror vid Linköpings universitet / Management of course software at Linköping University
Udd, Gustaf; Axelsson, Isak; Duvaldt, Jakob; Bergman, Oscar; Måhlén, Joar; Lundin, Oskar; Abrahamsen, Tobias; Sköldehag, Sara, January 2020
This bachelor's thesis report was written by eight students in the course Kandidatprojekt i mjukvaruutveckling (Bachelor's Project in Software Engineering), TDDD96, at Linköping University during the spring of 2020. The report includes a summary of the work carried out in the project. The project was commissioned by Digitala resursenheten (the Digital Resources Unit) at Linköping University. The assignment was to create a web application in which examiners and supervisors of courses at Linköping University can order software for their respective courses in a simple and intuitive way. The project group worked according to agile principles and built the project in Python and JavaScript. The application met all the goals set jointly by the customer and the project group, resulting in a working product that can be extended and modified according to the customer's needs. The report also includes individual contributions, written by each group member, that dive deeper into a specific topic or area.
77
TimescaleDB för lagring av OBD-II-data / TimescaleDB for OBD-II data storage
Svensson, Alex; Wichardt, Ulf, January 2022
All cars support reading diagnostic data from their control units via the On-Board Diagnostics II protocol. For companies with large vehicle fleets it may be valuable to analyze this diagnostic data, but large vehicle fleets produce large amounts of data. In this thesis we investigated whether the time-series database TimescaleDB is suitable for storing such data. To investigate this we tested and evaluated its insertion speed, query execution time and compression ratio. The results show that TimescaleDB is able to insert over 200 000 rows of data per second. They also show that its compression can speed up query execution by a factor of up to 134.5 and reach a compression ratio of 9.1. Considering these results, we conclude that TimescaleDB is a suitable choice for storing diagnostic data, but not necessarily the most suitable one.
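As a rough illustration of the kind of setup evaluated in this thesis, the sketch below creates a hypertable for OBD-II samples and enables TimescaleDB's native compression (assuming TimescaleDB 2.x); the schema, the chosen segment-by/order-by columns and the connection string are hypothetical, not taken from the thesis.

```python
# Rough sketch of a TimescaleDB schema for OBD-II samples with compression enabled.
# Assumes TimescaleDB 2.x is installed and preloaded; all names here are hypothetical.
import psycopg2

SETUP = [
    "CREATE EXTENSION IF NOT EXISTS timescaledb;",
    """
    CREATE TABLE IF NOT EXISTS obd_sample (
        time        timestamptz NOT NULL,
        vehicle_id  integer     NOT NULL,
        pid         smallint    NOT NULL,   -- OBD-II parameter ID
        value       double precision
    );
    """,
    "SELECT create_hypertable('obd_sample', 'time', if_not_exists => TRUE);",
    """
    ALTER TABLE obd_sample SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'vehicle_id',
        timescaledb.compress_orderby   = 'time DESC'
    );
    """,
    # Compress chunks older than a week.
    "SELECT add_compression_policy('obd_sample', INTERVAL '7 days', if_not_exists => TRUE);",
]

conn = psycopg2.connect("dbname=fleet user=postgres")  # hypothetical DSN
conn.autocommit = True  # run each statement on its own, outside an explicit transaction
with conn.cursor() as cur:
    for stmt in SETUP:
        cur.execute(stmt)
```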
78
Podpora pro práci s XML u databázového serveru Microsoft SQL Server 2008 / Support for XML in Microsoft SQL Server 2008
Bábíčková, Radka, Unknown Date
This thesis is focused on XML and related technologies. The XML language is closely linked to databases and to the support for XML within them. An overview of the XML support provided by various database products and systems is presented in this work. Support in MS SQL Server 2008 is discussed in more detail, starting with the mapping of relational data to XML and vice versa, continuing with the XML data type and querying it with XQuery. Some indexing techniques are also briefly presented. Finally, the support in MS SQL Server 2008 is demonstrated by means of a sample application, which verifies the theoretical knowledge in practice.
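A small, hedged illustration of the SQL Server 2008 features surveyed here: the xml data type, XQuery via the value() method, and relational-to-XML mapping with FOR XML, driven from Python with pyodbc; the connection string, temporary table and sample document are invented for the example.

```python
# Hedged illustration of SQL Server's XML support: xml column, XQuery via value(),
# and FOR XML for relational-to-XML mapping. Connection string and data are made up.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=demo;Trusted_Connection=yes"  # hypothetical connection string
)
cur = conn.cursor()

# Store a document in an xml column and query it with XQuery.
cur.execute("CREATE TABLE #orders (id int, doc xml)")
cur.execute(
    "INSERT INTO #orders VALUES (1, '<order><item sku=\"A1\" qty=\"3\"/></order>')"
)
cur.execute("SELECT doc.value('(/order/item/@qty)[1]', 'int') FROM #orders")
print(cur.fetchone()[0])  # 3

# Relational-to-XML mapping with FOR XML.
cur.execute("SELECT id FROM #orders FOR XML PATH('order'), ROOT('orders')")
print(cur.fetchone()[0])  # <orders><order><id>1</id></order></orders>
```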
79
Le résumé linguistique de données structurées comme support pour l'interrogation / Linguistic summarization of structured data as a support for querying
Voglozin, W. Amenel, 11 July 2007
The work presented in this thesis deals with the use of data summaries in querying. In the context of the linguistic summaries of the SaintEtiQ model, on which this thesis focuses, a summary is a description of the contents of a relational table. Thanks to the definition of linguistic variables, terms of natural language can be used to characterize the structured data in the table. Furthermore, organizing the summaries into a hierarchy provides several levels of granularity. We are interested in providing a concrete application for summaries that have already been built. On the one hand, we study how summaries can be used in descriptive querying, where the objective is to describe completely data of which some characteristics are known. We propose a concept-search procedure and an instantiation of that procedure. A study of flexible querying systems, some of which are, like SaintEtiQ, based on fuzzy set theory, then allows us to enrich the proposed procedure with more advanced features. On the other hand, we have integrated the linguistic summaries of SaintEtiQ into the PostgreSQL DBMS, with the objective of helping the DBMS identify records. We present a state of the art of indexing techniques, as well as the details of implementing summaries as an access method in PostgreSQL.
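The SaintEtiQ model is considerably richer than this, but the core notion it relies on, a linguistic variable that describes a numeric attribute with language terms through fuzzy membership functions, can be sketched as follows; the attribute, terms and breakpoints are invented for illustration.

```python
# Sketch of a linguistic variable in the fuzzy-set sense used by SaintEtiQ-style
# summaries. The attribute ("age"), the terms and the breakpoints are invented.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside (a, d), 1 on [b, c]."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)


# Linguistic variable "age" with three hypothetical terms.
AGE_TERMS = {
    "young":       lambda x: trapezoid(x, 0, 0, 25, 35),
    "middle-aged": lambda x: trapezoid(x, 25, 35, 50, 60),
    "old":         lambda x: trapezoid(x, 50, 60, 120, 120),
}


def describe(age):
    """Return the degree to which each linguistic term describes the given age."""
    return {term: round(mu(age), 2) for term, mu in AGE_TERMS.items() if mu(age) > 0}


print(describe(30))  # {'young': 0.5, 'middle-aged': 0.5}
print(describe(55))  # {'middle-aged': 0.5, 'old': 0.5}
```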
80
Efficiently Approximating Query Optimizer Diagrams
Dey, Atreyee, 08 1900
Modern database systems use a query optimizer to identify the most efficient strategy, called a “query execution plan”, to execute declarative SQL queries. The role of the query optimizer is especially critical for the complex decision-support queries featured in current data warehousing and data mining applications.
Given an SQL query template that is parametrized on the selectivities of the participating base relations and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. Complementary to the plan diagrams are cost and cardinality diagrams, which graphically plot the estimated execution costs and cardinalities, respectively, over the query parameter space. These diagrams are collectively known as optimizer diagrams. Optimizer diagrams have proved to be a powerful tool for the analysis and redesign of modern optimizers, and are gaining interest in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force approaches are used to produce fine-grained diagrams for high-dimensional query templates.
In this thesis, we investigate strategies for efficiently producing close approximations to complex optimizer diagrams. Our techniques are customized for different classes of optimizers, ranging from generic Class I optimizers that provide only the optimal plan for a query, through Class II optimizers that also support costing of sub-optimal plans, to Class III optimizers that additionally offer enumerated rank-ordered lists of plans.
For approximating plan diagrams for Class I optimizers, we first present database-oblivious techniques based on classical random sampling in conjunction with a nearest-neighbor (NN) inference scheme. Next, we propose grid-sampling algorithms that incorporate database-specific knowledge, such as (a) the structural differences between the operator trees of plans at the grid locations and (b) the parametric query optimization principle. These algorithms become more efficient when modified to exploit the sub-optimal plan costing feature available with Class II optimizers. The final algorithm, developed for Class III optimizers, assumes plan cost monotonicity and utilizes the rank-ordered lists of plans to efficiently generate completely accurate optimizer diagrams. Subsequently, we provide a relaxed variant that trades quality of approximation for a reduction in diagram-generation overhead. Our proposed algorithms are capable of terminating according to a user-given error bound on the plan diagram approximation.
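A hedged sketch of the database-oblivious random-sampling plus nearest-neighbor idea for a two-dimensional plan diagram; the plan_at oracle stands in for an actual optimizer call (e.g. an EXPLAIN on the parametrized query) and is purely hypothetical, and the grid-sampling algorithms of the thesis are considerably more refined than this.

```python
# Hedged sketch: approximate a 2-D plan diagram by optimizing only a random sample of
# grid points and assigning every other point the plan of its nearest sampled neighbor.
# `plan_at` is a hypothetical stand-in for a real optimizer call at a selectivity point.
import random

RESOLUTION = 100        # 100 x 100 grid over the two selectivity dimensions
SAMPLE_FRACTION = 0.1   # invoke the optimizer on only 10% of the grid points


def plan_at(sel_x, sel_y):
    """Placeholder: return the optimizer's plan choice at this selectivity point."""
    raise NotImplementedError


def approximate_plan_diagram(plan_oracle=plan_at):
    points = [(i / RESOLUTION, j / RESOLUTION)
              for i in range(RESOLUTION) for j in range(RESOLUTION)]
    sampled = random.sample(points, int(len(points) * SAMPLE_FRACTION))
    known = {p: plan_oracle(*p) for p in sampled}   # the expensive optimizer calls

    def nearest_sampled(p):
        return min(known, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

    # Non-sampled points inherit the plan of their nearest sampled neighbor.
    return {p: known[p] if p in known else known[nearest_sampled(p)] for p in points}
```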
For approximating cost diagrams, our strategy is based on linear least-squares regression performed on a mathematical model of plan-cost behavior over the parameter space, in conjunction with interpolation techniques. Game-theoretic and linear-programming approaches have been employed to further reduce the error in the cost approximation.
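A minimal sketch of the least-squares component of this strategy, assuming a simple bilinear cost model over two selectivity dimensions; the actual plan-cost model, the interpolation step and the game-theoretic and linear-programming refinements are not reproduced here.

```python
# Minimal sketch: fit a bilinear cost model over two selectivity dimensions with
# linear least squares. The basis terms and the synthetic data are assumptions,
# standing in for optimizer-reported costs at sampled points.
import numpy as np


def fit_cost_model(sel_x, sel_y, cost):
    """Fit cost ~ a + b*x + c*y + d*x*y on the sampled points."""
    A = np.column_stack([np.ones_like(sel_x), sel_x, sel_y, sel_x * sel_y])
    coeffs, *_ = np.linalg.lstsq(A, cost, rcond=None)
    return coeffs


def predict_cost(coeffs, sel_x, sel_y):
    a, b, c, d = coeffs
    return a + b * sel_x + c * sel_y + d * sel_x * sel_y


# Toy usage with synthetic samples.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
cost = 5 + 20 * x + 3 * y + 40 * x * y + rng.normal(0, 0.1, 200)
coeffs = fit_cost_model(x, y, cost)
print(predict_cost(coeffs, 0.5, 0.5))  # close to 5 + 10 + 1.5 + 10 = 26.5
```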
For approximating cardinality diagrams, we propose a novel parametrized mathematical model, expressed as a function of the selectivities, to characterize query cardinality behavior. The complete cardinality model is constructed by clustering the data points according to their cardinality values and then fitting the model separately for each cluster through linear least-squares regression. For non-sampled data points, the cardinality values are estimated by first determining the cluster they belong to and then interpolating the cardinality value according to the corresponding model.
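A sketch of the cluster-then-fit structure described above; the parametrized cardinality model of the thesis and its rule for assigning non-sampled points to clusters are not reproduced, and the clustering and regression choices below are assumptions.

```python
# Sketch of the cluster-then-fit idea: cluster sampled points by cardinality value,
# then fit one linear model of cardinality vs. selectivities per cluster. The model
# family and the number of clusters are assumptions, not taken from the thesis.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression


def fit_per_cluster_models(selectivities, cardinalities, n_clusters=2):
    # Cluster the sampled points according to their cardinality values.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        cardinalities.reshape(-1, 1)
    )
    # Fit a separate linear model for each cluster.
    models = {}
    for k in range(n_clusters):
        mask = labels == k
        models[k] = LinearRegression().fit(selectivities[mask], cardinalities[mask])
    return labels, models


# Toy usage with synthetic sampled points over two selectivity dimensions.
rng = np.random.default_rng(1)
sel = rng.uniform(0, 1, size=(300, 2))
card = np.where(sel[:, 0] < 0.5, 1e3 * sel[:, 1], 1e6 * sel[:, 0])  # two regimes
labels, models = fit_per_cluster_models(sel, card)
print({k: m.coef_ for k, m in models.items()})
```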
Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate optimizer diagrams while incurring no more than 20% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero-error optimizer diagrams, which usually require less than 10% of the overheads. Our results show that (a) the approximation is materially faithful to the features of the exact optimizer diagram, with the errors thinly spread across the picture and largely confined to the plan transition boundaries, and (b) the cost increase at non-sampled points due to the assignment of a sub-optimal plan is also limited.
These approximation techniques have been implemented in the publicly available Picasso optimizer visualizer tool. We have also modified PostgreSQL's optimizer to incorporate costing of sub-optimal plans and enumeration of rank-ordered lists of plans. In addition, we have designed estimators for predicting the time overhead involved in approximating optimizer diagrams with respect to user-given error bounds.
In summary, this thesis demonstrates that accurate approximations to exact optimizer diagrams can indeed be obtained cheaply and consistently, with typical overheads an order of magnitude lower than those of the brute-force approach. We hope that our results will encourage database vendors to incorporate the foreign-plan-costing and plan-rank-list features in their optimizer APIs.