71

Aplikace pro zobrazení modelu bezdrátové sítě / Application for displaying a wireless network model

Žoldoš, Petr January 2011 (has links)
The first step of this Master's thesis was to gain familiarity with the Adobe Flex SDK and the Google Maps API. That knowledge was then used to develop an application that lets users create, generate, and modify a graphical model of a wireless network. The position and characteristics of each unit are displayed either in a map interface or on building plans. Data are gathered from forms filled in by the current user, from an external file, or periodically from a connected database system. The theoretical part explains the technologies used. It describes the development of the program and the design decisions made, along with examples of the source code. Screenshots of the graphical user interface are included, together with a description of how the application works.
72

Temporální rozšíření pro PostgreSQL / A Temporal Extension for PostgreSQL

Jelínek, Radek January 2015 (has links)
This thesis focuses on the PostgreSQL database system. It introduces temporal databases and PostgreSQL, proposes a temporal extension for PostgreSQL, and describes the implementation of that extension, with examples of its use. The thesis also surveys existing temporal database systems and the use of temporal databases in practice.
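
The abstract does not reproduce the extension's API; the following is only a minimal sketch of the valid-time bookkeeping that temporal extensions automate, written against plain PostgreSQL via psycopg2 (the table and column names are hypothetical):

```python
# Sketch of the valid-time idea underlying temporal extensions, using
# plain PostgreSQL via psycopg2. Table and column names are hypothetical;
# the thesis's actual extension API is not reproduced here.
import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

# Each row carries the period during which its fact was true.
cur.execute("""
    CREATE TABLE employee_salary (
        emp_id     integer,
        salary     numeric,
        valid_from date NOT NULL,
        valid_to   date NOT NULL DEFAULT '9999-12-31'
    )
""")

# A "temporal update" closes the old row and opens a new one.
cur.execute("UPDATE employee_salary SET valid_to = %s "
            "WHERE emp_id = %s AND valid_to = '9999-12-31'",
            ("2015-01-01", 42))
cur.execute("INSERT INTO employee_salary (emp_id, salary, valid_from) "
            "VALUES (%s, %s, %s)",
            (42, 55000, "2015-01-01"))

# Time-slice query: what was true on a given day?
cur.execute("SELECT salary FROM employee_salary "
            "WHERE emp_id = %s AND %s >= valid_from AND %s < valid_to",
            (42, "2014-06-01", "2014-06-01"))
print(cur.fetchone())
conn.commit()
```

A temporal extension hides exactly this close-old-row/open-new-row choreography behind ordinary UPDATE statements.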
73

High-Availability für ZOPE / High Availability for ZOPE

Damaschke, Marko 11 June 2005 (has links)
This thesis examines which options for ensuring the highest possible availability (high availability), which load-balancing mechanisms based on the ZEO product or similar tools, and which caching strategies can sensibly be deployed on a ZOPE server. It evaluates the applicability of existing products from the ZOPE ecosystem and the possible need to implement further ones. The server infrastructure of the Bildungsmarktplatz Sachsen forms the setting of the work.
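
As a minimal sketch of the ZEO client/server split mentioned above, the following shows a ZODB client connecting to a shared ZEO storage server, the arrangement that lets several ZOPE instances serve the same data (host name and port are hypothetical):

```python
# Minimal sketch of the ZEO client/server split behind ZOPE load
# distribution: several ZOPE/ZODB clients share one storage server.
# Host name and port are hypothetical.
import ZODB
from ZEO import ClientStorage

# Connect to a central ZEO storage server; many clients may do this
# concurrently, each serving requests against the same object database.
storage = ClientStorage.ClientStorage(('zeo.example.org', 8100))
db = ZODB.DB(storage)
connection = db.open()
root = connection.root()
print(list(root.keys()))
connection.close()
db.close()
```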
74

Sicheres Verteilen von Konfigurationsdaten und Migrationsstrategie zum Trennen von Diensten und Datenbasis / Secure distribution of configuration data and a migration strategy for separating services from their database

Wehrmann, Sebastian 01 August 2006 (has links)
For historical reasons, the CSN database and the services accessing it have always run on the same machine: partly for lack of money, partly because distributing the configuration and controlling access to the database remained an unsolved problem. The task of this thesis is the physical and logical separation of the firewall (and the traffic shaper) from the database. To this end, a service must be created that provides the configuration information for the firewall and, potentially, other applications. Access to this information must be protected from third parties. Furthermore, a migration strategy is to be designed that describes how the transition to the outlined solution can be accomplished.
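
The abstract does not describe the service's actual design; the sketch below only illustrates the general pattern of fetching configuration over mutually authenticated TLS, with a hypothetical URL, certificate paths, and payload layout:

```python
# Generic sketch of the kind of access described above: a firewall host
# fetches its configuration over mutually authenticated TLS, so that
# third parties can neither read nor impersonate the config source.
# The URL, certificate paths, and JSON layout are hypothetical.
import json
import ssl
import urllib.request

context = ssl.create_default_context(cafile="/etc/csn/ca.pem")
# The client certificate authenticates the firewall host to the service.
context.load_cert_chain(certfile="/etc/csn/fw.crt", keyfile="/etc/csn/fw.key")

with urllib.request.urlopen("https://config.example.org/firewall",
                            context=context) as resp:
    config = json.load(resp)

for rule in config.get("rules", []):
    print(rule)
```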
75

Hantering av kursmjukvaror vid Linköpings universitet / Management of course software at Linköping University

Udd, Gustaf, Axelsson, Isak, Duvaldt, Jakob, Bergman, Oscar, Måhlén, Joar, Lundin, Oskar, Abrahamsen, Tobias, Sköldehag, Sara January 2020 (has links)
This bachelor's report was written by eight students in the course Kandidatprojekt i mjukvaruutveckling (Bachelor's Project in Software Engineering), TDDD96, at Linköping University in the spring of 2020. The report summarizes the work carried out in the project. The project was commissioned by Digitala resursenheten (the Digital Resource Unit) at Linköping University. The assignment was to create a web application in which examiners and supervisors of courses at Linköping University can order software for their respective courses in a simple and intuitive way. The project group worked according to agile methods and built the project in Python and JavaScript. The application met all goals set jointly by the customer and the project group, resulting in a working product that could be extended and modified according to the customer's needs. The report also includes individual contributions, written by each group member, that dive deep into a specific subject or area.
76

TimescaleDB för lagring av OBD-II-data / TimescaleDB for OBD-II data storage

Svensson, Alex, Wichardt, Ulf January 2022 (has links)
All cars support reading diagnostic data from their control units via the On-Board Diagnostics II protocol. For companies with large vehicle fleets it may be valuable to analyze this diagnostic data, but large vehicle fleets produce large amounts of data. In this thesis we investigated whether the time series database TimescaleDB is suitable for storing such data. In order to investigate this we tested and evaluated its insertion speed, query execution time and compression ratio. The results show that TimescaleDB is able to insert over 200 000 rows of data per second. They also show that the compression algorithm can speed up query execution by up to 134.5 times and reach a compression ratio of 9.1. Considering these results we conclude that TimescaleDB is a suitable choice for storing diagnostic data, but not necessarily the most suitable.
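
As a sketch of the two TimescaleDB features evaluated here, hypertables and native compression, the following uses TimescaleDB's documented SQL functions from Python via psycopg2; the OBD-II schema is hypothetical:

```python
# Sketch of the TimescaleDB features evaluated above: a hypertable for
# OBD-II samples plus native compression. The schema is hypothetical;
# the statements use TimescaleDB's documented SQL API.
import psycopg2

conn = psycopg2.connect("dbname=obd")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE obd_sample (
        time       timestamptz NOT NULL,
        vehicle_id integer     NOT NULL,
        pid        smallint    NOT NULL,   -- OBD-II parameter ID
        value      real
    )
""")
# Partition the table into time-based chunks.
cur.execute("SELECT create_hypertable('obd_sample', 'time')")

# Enable compression, segmenting by vehicle so per-vehicle scans stay
# fast, and compress chunks once they are older than seven days.
cur.execute("""
    ALTER TABLE obd_sample SET (
        timescaledb.compress,
        timescaledb.compress_segmentby = 'vehicle_id'
    )
""")
cur.execute("SELECT add_compression_policy('obd_sample', INTERVAL '7 days')")
conn.commit()
```

Segmenting by `vehicle_id` is one plausible choice for fleet data; the thesis's measured compression ratio and query speedups will depend on such schema decisions.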
77

Podpora pro práci s XML u databázového serveru Microsoft SQL Server 2008 / Support for XML in Microsoft SQL Server 2008

Bábíčková, Radka Unknown Date (has links)
This thesis focuses on XML and related technologies. Because the XML language is closely tied to databases, the level of XML support in database systems matters. An overview of the XML support provided by various database products and systems is presented in this work. Support in MS SQL Server 2008 is discussed in more detail, starting with the mapping of relational data to XML and vice versa, and continuing with the xml data type and how to work with it through XQuery. Some indexing techniques are also briefly presented. Finally, the support in MS SQL Server 2008 is demonstrated by means of a sample application, which verifies the theoretical knowledge in practice.
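
As a small sketch of the xml data type and XQuery methods discussed in the thesis, the following runs SQL Server's documented .query() and .value() methods from Python via pyodbc; the connection string, table, and document are hypothetical:

```python
# Sketch of the xml data type and XQuery methods discussed above, driven
# from Python via pyodbc. Connection string, table, and document are
# hypothetical; .query()/.value() are SQL Server's xml-type methods.
import pyodbc

conn = pyodbc.connect("DSN=mssql2008;UID=user;PWD=secret")
cur = conn.cursor()

cur.execute("CREATE TABLE orders (id int, doc xml)")
cur.execute("INSERT INTO orders VALUES (1, ?)",
            "<order><item sku='A1' qty='3'/><item sku='B2' qty='1'/></order>")

# XQuery over the xml column: fetch the item list and one sku attribute.
cur.execute("""
    SELECT doc.query('/order/item'),
           doc.value('(/order/item/@sku)[1]', 'varchar(10)')
    FROM orders
""")
print(cur.fetchone())
conn.commit()
```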
78

Le résumé linguistique de données structurées comme support pour l'interrogation / Linguistic summaries of structured data as a support for querying

Voglozin, W. Amenel 11 July 2007 (has links) (PDF)
The work presented in this thesis deals with the use of data summaries in querying. In the context of the linguistic summaries of the SaintEtiQ model, on which this thesis focuses, a summary is a description of the contents of a relational table. Thanks to the definition of linguistic variables, terms of natural language can be used to characterize the structured data in the table. In addition, the hierarchical organization of summaries offers various levels of granularity. We are interested in providing a concrete application for summaries that have already been built. On the one hand, we study how summaries can be used in descriptive querying, where the objective is to fully describe data of which some characteristics are known. We propose a concept-search approach and an instantiation of that approach. A study of flexible querying systems, some of which are, like SaintEtiQ, based on fuzzy set theory, then allows us to enrich the proposed approach with more advanced features. On the other hand, we have integrated the linguistic summaries of SaintEtiQ into the PostgreSQL DBMS, with the objective of helping the DBMS identify records. We present a state of the art of indexing techniques, as well as the details of implementing summaries as an access method in PostgreSQL.
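
SaintEtiQ's own implementation is not reproduced here; the following is only a minimal sketch of the underlying idea of a linguistic variable, mapping numeric values to language terms through trapezoidal fuzzy membership functions (the terms and breakpoints are hypothetical):

```python
# Minimal sketch of a linguistic variable as used in fuzzy-set-based
# summaries: numeric values map to language terms with a membership
# degree in [0, 1]. Terms and breakpoints are hypothetical.
def trapezoid(x, a, b, c, d):
    """Membership of x in a trapezoidal fuzzy set (a, b, c, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# Linguistic variable "salary" with three terms.
SALARY_TERMS = {
    "low":    (0, 0, 15000, 25000),
    "medium": (15000, 25000, 45000, 60000),
    "high":   (45000, 60000, 10**9, 10**9 + 1),
}

def describe(salary):
    """Return the language terms that apply to a salary, with degrees."""
    return {term: round(trapezoid(salary, *params), 2)
            for term, params in SALARY_TERMS.items()
            if trapezoid(salary, *params) > 0}

print(describe(20000))  # {'low': 0.5, 'medium': 0.5}
```

A summary hierarchy then groups tuples by the terms they satisfy, which is what makes natural-language descriptions of a table's contents possible.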
79

Efficiently Approximating Query Optimizer Diagrams

Dey, Atreyee 08 1900 (has links)
Modern database systems use a query optimizer to identify the most efficient strategy, called a “query execution plan”, to execute declarative SQL queries. The role of the query optimizer is especially critical for the complex decision-support queries featured in current data warehousing and data mining applications. Given an SQL query template that is parametrized on the selectivities of the participating base relations and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. Complementary to plan diagrams are cost and cardinality diagrams, which graphically plot the estimated execution costs and cardinalities, respectively, over the query parameter space. These diagrams are collectively known as optimizer diagrams. Optimizer diagrams have proved to be a powerful tool for the analysis and redesign of modern optimizers, and are gaining interest in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force approaches are used for producing fine-grained diagrams on high-dimensional query templates.

In this thesis, we investigate strategies for efficiently producing close approximations to complex optimizer diagrams. Our techniques are customized for different classes of optimizers, ranging from generic Class I optimizers that provide only the optimal plan for a query, to Class II optimizers that also support costing of sub-optimal plans, and Class III optimizers that, in addition to both of the former features, offer enumerated rank-ordered lists of plans. For approximating plan diagrams for Class I optimizers, we first present database-oblivious techniques based on classical random sampling in conjunction with a nearest-neighbor (NN) inference scheme. Next, we propose grid-sampling algorithms that exploit database-specific knowledge, namely (a) the structural differences between the operator trees of plans at the grid locations and (b) the parametric query optimization principle. These algorithms become more efficient when modified to exploit the sub-optimal plan costing feature available with Class II optimizers. The final algorithm, developed for Class III optimizers, assumes plan cost monotonicity and utilizes the rank-ordered lists of plans to efficiently generate completely accurate optimizer diagrams. Subsequently, we provide a relaxed variant that trades quality of approximation for a reduction in diagram-generation overhead. Our proposed algorithms are capable of terminating according to a user-given error bound for plan diagram approximation.

For approximating cost diagrams, our strategy is based on linear least-squares regression performed on a mathematical model of plan cost behavior over the parameter space, in conjunction with interpolation techniques. Game-theoretic and linear programming approaches have been employed to further reduce the error in cost approximation. For approximating cardinality diagrams, we propose a novel parametrized mathematical model, a function of the selectivities, for characterizing query cardinality behavior. The complete cardinality model is constructed by clustering the data points according to their cardinality values and subsequently fitting the model through linear least-squares regression separately for each cluster. For non-sampled data points, the cardinality values are estimated by first determining the cluster they belong to and then interpolating the cardinality value according to the suitable model.

Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate optimizer diagrams while incurring no more than 20% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero-error optimizer diagrams, which usually require less than 10% of the overheads. Our results show that (a) the approximation is materially faithful to the features of the exact optimizer diagram, with the errors thinly spread across the picture and largely confined to the plan transition boundaries, and (b) the cost increase at non-sampled points due to the assignment of sub-optimal plans is also limited. These approximation techniques have been implemented in the publicly available Picasso optimizer visualizer tool. We have also modified PostgreSQL's optimizer to incorporate costing of sub-optimal plans and enumeration of rank-ordered lists of plans. In addition, we have designed estimators for predicting the time overhead involved in approximating optimizer diagrams with regard to user-given error bounds.

In summary, this thesis demonstrates that accurate approximations to exact optimizer diagrams can indeed be obtained cheaply and consistently, with typical overheads being an order of magnitude lower than those of the brute-force approach. We hope that our results will encourage database vendors to incorporate the foreign-plan-costing and plan-rank-list features in their optimizer APIs.
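
As a minimal sketch of the random-sampling-plus-NN idea described for Class I optimizers, the following samples a selectivity grid and labels every unsampled point with the plan of its nearest sampled neighbor; `optimal_plan` is a hypothetical stand-in for an actual optimizer invocation:

```python
# Sketch of the random-sampling + nearest-neighbor (NN) approximation for
# Class I optimizers: invoke the optimizer at a sampled subset of
# selectivity-grid points, then label every other point with the plan of
# its nearest sampled neighbor. `optimal_plan` is a hypothetical stand-in.
import random

GRID = 50            # 50 x 50 selectivity grid
SAMPLE_FRACTION = 0.1

def optimal_plan(sel_x, sel_y):
    """Stand-in: ask the optimizer for the best plan at these selectivities."""
    return "P1" if sel_x + sel_y < 1.0 else "P2"   # toy two-plan space

points = [(i / GRID, j / GRID) for i in range(GRID) for j in range(GRID)]
sampled = random.sample(points, int(len(points) * SAMPLE_FRACTION))
labels = {p: optimal_plan(*p) for p in sampled}    # the expensive calls

def approx_plan(p):
    """Assign the plan of the nearest sampled grid point."""
    nearest = min(labels, key=lambda s: (s[0] - p[0])**2 + (s[1] - p[1])**2)
    return labels[nearest]

diagram = {p: labels.get(p) or approx_plan(p) for p in points}
print(sum(1 for v in diagram.values() if v == "P1"), "points labelled P1")
```

Because plan regions tend to be contiguous, NN errors concentrate at plan transition boundaries, which matches the error profile the thesis reports.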
80

Use of the general transit feed specification (GTFS) in transit performance measurement

Wong, James C. 13 January 2014 (has links)
Until recently, transit data lacked a common data format that could be used to share and integrate information among multiple agencies. In 2005, however, Google worked with Tri-Met in Oregon to create the General Transit Feed Specification (GTFS), an open data format now used by all transit agencies that participate in Google Maps. GTFS feeds contain data for scheduled transit service including stop and route locations, schedules and fare information. The broad adoption of GTFS by transit agencies has made it a de facto standard. Those agencies using it are able to participate in a host of traveler services designed for GTFS, most notably transit trip planners. Still, analysts have not widely used GTFS as a data source for transit planning because of the newness of the technology. The objectives of this project are to demonstrate that GTFS feeds are an efficient data source for calculating key transit service metrics and to evaluate the validity of GTFS feeds as a data source. To demonstrate GTFS feeds’ analytic potential, the author created a tool called GTFS Reader, which imports GTFS feeds into a database using open-source products. GTFS Reader also includes a series of queries that calculate metrics like headways, route lengths and stop-spacing. To evaluate the validity of GTFS feeds, annual vehicle revenue miles and hours from the National Transit Database (NTD) are compared to the calculated values from agencies whose GTFS feeds are available. The key finding of this work is that well-formed GTFS feeds are an accurate representation of transit networks and that the method of aggregation presented in this research can be used to effectively and efficiently calculate metrics for transit agencies. The daily aggregation method is more accurate than the weekly aggregation method, both introduced in this thesis, but practical limitations on processing time favor the weekly method. The reliability of GTFS feed data for smaller agencies is less conclusive than that of larger agencies because of discrepancies found in smaller agencies when their GTFS-generated metrics were compared to those in the NTD. This research will be of particular interest to transit and policy analysts, researchers and transit planners.
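
As a sketch of the kind of metric GTFS Reader computes, the following derives scheduled headways at one stop from a feed's stop_times.txt. The feed path and stop_id are hypothetical; the column names follow the GTFS specification, and the sketch ignores the per-service-day filtering that the thesis's daily and weekly aggregation methods handle:

```python
# Sketch of a GTFS-derived metric: scheduled headways at one stop,
# computed from stop_times.txt. Feed path and stop_id are hypothetical;
# column names follow the GTFS specification.
import csv

STOP_ID = "1234"   # hypothetical stop

def to_seconds(hhmmss):
    """GTFS times may exceed 24:00:00 for after-midnight service."""
    h, m, s = map(int, hhmmss.split(":"))
    return h * 3600 + m * 60 + s

with open("feed/stop_times.txt", newline="") as f:
    arrivals = sorted(
        to_seconds(row["arrival_time"])
        for row in csv.DictReader(f)
        if row["stop_id"] == STOP_ID and row["arrival_time"]
    )

# Headway = gap between consecutive scheduled arrivals at the stop.
headways = [b - a for a, b in zip(arrivals, arrivals[1:])]
if headways:
    print("mean headway: %.1f min" % (sum(headways) / len(headways) / 60))
```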
