About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Developing an XML-based, exploitable linguistic database of the Hebrew text of Gen. 1:1-2:3

Kroeze, J.H. (Jan Hendrik) 28 July 2008 (has links)
The thesis discusses a series of related techniques that prepare and transform raw linguistic data for advanced processing in order to unveil hidden grammatical patterns. A three-dimensional array is identified as a suitable data structure for building a data cube that captures multidimensional linguistic data in a computer's temporary storage. It also enables online analytical processing operations, such as slicing, to be executed on this data cube in order to reveal various subsets and presentations of the data. XML is investigated as a suitable mark-up language for the permanent storage of such an exploitable databank of Biblical Hebrew linguistic data. The concept is illustrated by tagging a phonetic transcription of Genesis 1:1-2:3 on various linguistic levels and manipulating this databank. Transferring the data set between an XML file and a three-dimensional array creates a stable environment that allows editing and advanced processing of the data in order to confirm existing knowledge or to mine for new, as yet undiscovered, linguistic features. Two experiments are executed to demonstrate possible text-mining procedures. Finally, visualisation is discussed as a technique that enhances interaction between the human researcher and the computerised technologies supporting the process of knowledge creation. Although the data set is very small, there are exciting indications that the compilation and analysis of aggregate linguistic data may assist linguists in performing rigorous research, for example regarding the definitions of semantic functions and the mapping of these functions onto the syntactic module. / Thesis (PhD (Information Technology))--University of Pretoria, 2008. / Information Science / unrestricted
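The core idea — a three-dimensional array as a data cube over linguistic levels, with OLAP-style slicing — can be sketched roughly as follows. This is an illustrative toy with invented labels, not the thesis's actual tag set or transcription scheme:

```python
# Hypothetical data cube: clauses x phrases x linguistic levels, where each
# phrase carries [transcription, part of speech, semantic function].
cube = [
    [  # one clause (Gen. 1:1, simplified transcription and labels)
        ["bereshit", "noun", "time"],
        ["bara", "verb", "action"],
        ["elohim", "noun", "agent"],
    ],
]

# An OLAP-style "slice": project out one linguistic level across all
# phrases of a clause — here, the semantic functions of the first clause.
semantic_slice = [phrase[2] for phrase in cube[0]]
# -> ["time", "action", "agent"]
```

Persisted to XML, each level would become an attribute or child element of a phrase element, and reading the file back would repopulate the array for further processing.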
242

Řešení Business Intelligence / Business Intelligence Solutions

Dzimko, Miroslav January 2017 (has links)
The diploma thesis presents an evaluation of the current state of the company's system, identifying critical areas and areas suitable for improvement. Based on theoretical knowledge and the results of the analysis, a commercial Business Intelligence solution is designed to enhance the quality and efficiency of the company's decision-support system and to introduce an advanced Quality Culture system. The thesis reveals critical points in the corporate environment and opens up space for designing improvements to the system.
243

Analýza veřejně dostupných dat Českého statistického úřadu / Analysis of Public Data of the Czech Statistical Office

Pohl, Ondřej January 2017 (has links)
The aim of this thesis is the analysis of Czech Statistical Office data concerning foreign trade. First, the reader is acquainted with Business Intelligence and data warehousing. Next, the basics of OLAP analysis and data mining are explained. The remaining parts of the thesis describe and analyse the foreign-trade data with the help of OLAP technology and data mining in MS SQL Server, including the implementation of selected analytical tasks.
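The kind of roll-up such an OLAP analysis performs can be sketched in a few lines (invented numbers, not the Czech Statistical Office's actual schema or figures):

```python
from collections import defaultdict

# Toy foreign-trade records: country, year, exports (invented values).
trade = [
    {"country": "DE", "year": 2016, "export_czk": 120},
    {"country": "DE", "year": 2017, "export_czk": 135},
    {"country": "SK", "year": 2017, "export_czk": 80},
]

# Roll the year dimension up, aggregating exports per partner country.
exports_by_country = defaultdict(int)
for row in trade:
    exports_by_country[row["country"]] += row["export_czk"]
# -> {"DE": 255, "SK": 80}
```

In an OLAP engine the same operation is a pre-aggregated cube query rather than a loop, but the dimensional logic — collapsing one axis while summing a measure — is the same.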
244

Návrh manažerského reportingu pro řízení výkonnosti společnosti / Design of Management Reporting for Business Performance

Koreňovský, Jakub January 2018 (has links)
The diploma thesis aims to create a management reporting application. The first part focuses on the analysis of various Business Intelligence solutions and the selection of the most suitable one with regard to functional and financial criteria and availability. The main part covers the preparation of the data, the design of the data model, an overview of the functions and metrics used, and the creation of the reporting application. The thesis concludes with an assessment of the application and its introduction into real business conditions.
245

Klient pro zobrazování OLAP kostek / Client for Displaying OLAP Cubes

Podsedník, Lukáš January 2010 (has links)
At the beginning, the project describes the basics and uses of data warehousing and the OLAP techniques and operations applied within data warehouses. A description of a commercial OLAP client follows; based on the features of this product, a requirement analysis for a freeware OLAP cube viewer is described, selecting the functionality to be implemented in the client. Based on the requirement analysis, the structural design of the application (including UML diagrams) is made, and the best of the compared libraries, frameworks and development environments is chosen for the design. The next chapter covers the implementation and the tools and frameworks used in it. Finally, the thesis assesses the achieved results and the options for further improvement.
246

Analýza globálních meteorologických dat / Global Meteorological Data Analysis

Gerych, Petr January 2012 (has links)
The thesis first describes data warehouses and knowledge discovery in databases in general, and then focuses on meteorological databases and their problems. The practical part describes the design of a data-mining project over the NOAA Global Surface Summary of the Day (GSOD) data set, which is then implemented in two different ways using the Pentaho tools. Finally, the two approaches are evaluated and compared.
247

Design von Stichproben in analytischen Datenbanken

Rösch, Philipp 17 July 2009 (has links)
Recent studies have shown fast, multi-dimensional growth in analytical databases: over the last four years the data volume has risen by a factor of 10, the number of users has increased by an average of 25% per year, and the number of queries has doubled every year since 2004. These queries are increasingly complex join queries with aggregations; they are often of an explorative nature and are submitted to the system interactively. One option for meeting the demand for interactivity under this strong, multi-dimensional growth is the use of samples and approximate query processing based on them. Such a solution offers significantly shorter response times as well as estimates with probabilistic error bounds.
Given that joins, groupings and aggregations are the main components of analytical queries, the following requirements for the design of samples in analytical databases arise: 1) the foreign-key integrity between the samples of foreign-key related tables has to be preserved; 2) any existing groups have to be represented appropriately; 3) aggregation attributes have to be checked for extreme values. For each of these sub-problems, the dissertation presents a sampling technique characterized by memory-bounded samples and low estimation errors. In the first approach, a correlated sampling process guarantees referential integrity while using only a minimum of additional memory. The second sampling technique takes the distribution of the data into account; as a result, all groups are represented appropriately and arbitrary groupings are supported. In the third approach, multi-column outlier handling leads to low estimation errors for any number of aggregation attributes. For all three approaches, the quality of the resulting samples is discussed and taken into account when computing memory-bounded samples. To keep the computational effort, and thus the system load, low, heuristics are provided for each algorithm; they are marked by high efficiency and minimal effects on sampling quality. Furthermore, the dissertation examines all possible combinations of the presented sampling techniques; such combinations allow estimation errors to be reduced further while at the same time widening the range of applicability of the resulting samples. Combining all three techniques yields a sampling technique that meets all requirements for the design of samples in analytical databases and merges the advantages of the individual solutions, making it possible to answer a wide range of queries approximately yet with high accuracy.
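The first requirement — samples of foreign-key related tables that remain joinable — can be sketched under simplified assumptions (uniform row sampling and a single dimension table; an illustration of the general idea, not the dissertation's actual algorithm):

```python
import random

def correlated_sample(fact_rows, dim_rows, key, rate, seed=0):
    """Sample the fact table, then keep exactly the dimension rows whose
    primary key is referenced by a sampled fact row, so that every
    foreign key in the sample can still be joined."""
    rng = random.Random(seed)
    fact_sample = [r for r in fact_rows if rng.random() < rate]
    referenced = {r[key] for r in fact_sample}
    dim_sample = [r for r in dim_rows if r[key] in referenced]
    return fact_sample, dim_sample

# Usage with toy tables: every foreign key in the fact sample resolves
# in the dimension sample, so joins over the samples lose no rows.
facts = [{"order_id": i, "cust_id": i % 10, "amount": i * 1.5} for i in range(100)]
dims = [{"cust_id": c, "region": "EU"} for c in range(10)]
fs, ds = correlated_sample(facts, dims, "cust_id", rate=0.2)
assert {f["cust_id"] for f in fs} == {d["cust_id"] for d in ds}
```

The "minimum of additional memory" aspect shows up here as well: the dimension sample stores only the rows actually referenced, never the whole dimension table.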
248

Tabulkové procesory jako zobrazení dat OLAP / OLAP Interface in Spreadsheet Form

Kužela, Alois Unknown Date (has links)
This thesis considers the possibilities of transporting data from portal applications into spreadsheets. The main goal is to find a way to export data in a form suitable for the MS Excel application. The thesis also presents the principles of OLAP: it describes the internal OLAP data model and shows how data can be exported from a website into spreadsheets. It elaborates on the SYLK and XML file formats, which are suitable for representing data in MS Office Excel spreadsheets.
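For illustration, a minimal SYLK writer might look like this — a simplified sketch covering only string and number cells (real SYLK defines many more record types, and this is not the thesis's actual export code):

```python
def to_sylk(rows):
    """Serialize a list of rows into a minimal SYLK document that a
    spreadsheet application such as MS Excel can open."""
    lines = ["ID;P"]  # file header record
    for y, row in enumerate(rows, start=1):
        for x, value in enumerate(row, start=1):
            # Cell record: C;Y<row>;X<column>;K<value>; strings are quoted.
            k = f'"{value}"' if isinstance(value, str) else str(value)
            lines.append(f"C;Y{y};X{x};K{k}")
    lines.append("E")  # end-of-file record
    return "\n".join(lines)

print(to_sylk([["name", "qty"], ["widget", 3]]))
```

Because SYLK is line-oriented plain text, it is easy to emit from a server-side portal application, which is what makes it attractive for the web-to-spreadsheet export the thesis investigates.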
249

Multiuživatelský systém pro podporu znovuvyužití materiálů / Multiuser System for Material Reusing

Kolarik, Petr January 2007 (has links)
This text documents a multi-user system supporting the reuse of materials. It deals with possible structures derived from the functional specification of the system and with its implementation in PHP using the MySQL database system. It traces the creation of the system from the ER diagram through the use-case diagram to the programming itself. The work shows how to design a web advertisement system that enables a user to define personal multi-level views of the data. The project could serve as the basis for a commercial project that would verify the usability of the designed structure of its individual parts.
250

Informační systém laboratoře inteligentních systémů / Information System of Laboratory of Intelligent Systems

Kundrát, Miloš January 2009 (has links)
The goal of my master's thesis is an online reservation and inventory information system for the property of the laboratory of the Department of Intelligent Systems at the Faculty of Information Technology (DIS FIT). The system records and manages all property of the laboratory and collects detailed information about it (description and utilization of the property, related photos and associated documents). It makes it possible to manage not only the property but also documents in electronic form, and it contains a full-featured reservation system for entering and handling reservations and loans. The system is divided into an administrator part and a user part. A search program unit is also part of the DIS FIT system: it can look up documents online on the Internet with minimal effort from the authorized user, based on keywords (author's name, document title, and others), and store the documents directly in the system's database.
