21 |
Návrh interaktivního WWW OLAP rozhraní pro analýzu produkce výrobních závodů / Design of Interactive WWW OLAP Interface. Mazáč, Pavel. January 2008.
This work focuses on OLAP analysis. It presents the important theoretical background and compares available OLAP systems from several perspectives. The main goal was to create a custom OLAP system; the design and implementation of this system are described in the thesis.
|
22 |
Skaitmeninės antžeminės televizijos paslaugos duomenų saugyklos ir OLAP galimybių taikymas ir tyrimas / Digital video broadcasting terrestrial service's data warehouse and OLAP opportunities: research and application. Juškaitis, Renatas. 04 March 2009.
This master's thesis investigates the capabilities of data warehouses and OLAP tools and their practical use in an organization that provides a DVB-T (Digital Video Broadcasting - Terrestrial) service to end users, focusing on the service's sales process. The work comprises a thorough analysis of the problem domain and the design and implementation of the data warehouse, data integration, and data cubes. The implementation was carried out with MS SQL Server 2005 Analysis and Integration Services.
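A data warehouse of this kind is typically organized as a star schema: a fact table for sales surrounded by dimension tables. The thesis does not publish its schema, so the following T-SQL sketch (SQL Server 2005 compatible) uses purely hypothetical table and column names:

    -- Minimal hypothetical star schema for DVB-T service sales.
    CREATE TABLE DimDate (
        DateKey      int      NOT NULL PRIMARY KEY,  -- e.g. 20090304
        CalendarDate datetime NOT NULL,
        MonthNo      tinyint  NOT NULL,
        YearNo       smallint NOT NULL
    );

    CREATE TABLE DimServicePackage (
        ServicePackageKey int          IDENTITY(1,1) PRIMARY KEY,
        PackageName       nvarchar(50) NOT NULL,
        ChannelCount      int          NOT NULL
    );

    CREATE TABLE FactSales (
        DateKey           int           NOT NULL REFERENCES DimDate (DateKey),
        ServicePackageKey int           NOT NULL REFERENCES DimServicePackage (ServicePackageKey),
        SubscriptionsSold int           NOT NULL,
        Revenue           decimal(12,2) NOT NULL
    );

An SSAS cube built over such a schema then aggregates the fact measures along the date and service-package dimensions.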
|
23 |
Intelektuali universiteto akademinių duomenų analizė MS SQL Server 2008 priemonėmis / Intelligent analysis of university data with MS SQL Server 2008 tools. Brukštus, Vaidotas. 23 July 2009.
This thesis investigates the factors that influence students' success in graduating from university, using data mining algorithms. A method was developed to predict whether a prospective student will successfully complete their studies, based on the student's admission scores and on data from an earlier generation of students. The first part surveys the application areas of data mining, the stages of the data mining process, and the most popular data mining tools. The data mining algorithms supported by Microsoft SQL Server 2008 are then examined in detail; four of them were successfully applied to the selected problem domain. An analytical system was built that can evaluate how admission scores and individual university courses influence the chance of graduating. Finally, an experiment was performed to determine which data mining algorithm is best suited to predicting student dropout.
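Predictions against an SSAS mining model of this kind are issued with DMX (Data Mining Extensions). The thesis does not list its model or column names, so everything in this singleton-prediction sketch is hypothetical:

    -- Hypothetical DMX singleton prediction: will this applicant graduate?
    SELECT
        Predict([Graduated])                 AS WillGraduate,
        PredictProbability([Graduated], 'Y') AS GraduationProbability
    FROM [StudentDropoutTree]                -- hypothetical decision-tree model
    NATURAL PREDICTION JOIN
    (SELECT 9.2 AS [EntranceScore],          -- hypothetical admission scores
            8.5 AS [MathsMark]) AS prospect;

NATURAL PREDICTION JOIN matches the input columns to the model's input columns by name, so the query returns the predicted outcome and its probability for the supplied scores.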
|
24 |
Databasoptimering för användning med Power BI : Hur indexering och kompression kan förbättra prestanda vid datahämtning / Database optimization for use with Power BI: how indexing and compression can improve performance in data retrieval. Lundström, Anton. January 2020.
In the measurement room at Sandvik Coromant there is a solution for visualizing machine health, measurement history, and service times for various measuring instruments. The visualization solution uses Power BI and is connected to Excel files. Once the data has been loaded, a number of modifications are made to the tables to produce visualizable data. These modifications, combined with the large number of Excel sheets, result in very long lead times when updating a Power BI report. The company therefore wants to move the data in these Excel files into a database and thereby improve the lead times, so a database was created from that data. Power BI lets the user import data from a database in two ways: Import Mode or DirectQuery. Import Mode loads all requested tables and stores them in memory; DirectQuery sends queries directly to the database as data is requested. Because of this difference, there are methods for optimizing the database the data is read from. The study examines how different types of indexing, and different types of compression of those indexes, affect the response time of queries issued by Power BI, in order to answer two research questions: How do different types of database indexing affect the data retrieval rate when using Power BI? How do different types of index compression affect the data retrieval rate when using Power BI? The study was carried out by examining execution plans and execution times for the queries Power BI issued against the database. With the help of T-SQL, the execution time of a specific query could be obtained. This execution time was then compared, for each type of index and compression, against the execution time of the same query on a table with no index at all. The tests were performed on tables with varying numbers of rows: 33 001, 50 081, 100 101, 500 017, and 1 000 217. The results show that for Import Mode the best type of index is a clustered rowstore index without compression, except at the largest table size (1 000 217 rows), where row compression performed better. For DirectQuery, non-clustered rowstore indexes performed best, but for compression the result was ambiguous, since each type of compression performed best at some table size. For tables with more than 500 017 rows, however, no compression performed best.
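As a rough illustration of the T-SQL measurement approach described above, the following creates one of the index variants and reports the execution time of a subsequent query; table, index, and column names are hypothetical, not taken from the study:

    -- One measured variant: a clustered rowstore index with row compression.
    CREATE CLUSTERED INDEX IX_Measurements_Id
        ON dbo.Measurements (Id)
        WITH (DATA_COMPRESSION = ROW);   -- alternatives: NONE (default), PAGE

    SET STATISTICS TIME ON;   -- makes SQL Server report CPU and elapsed time

    SELECT InstrumentId, AVG(MeasuredValue) AS AvgValue
    FROM dbo.Measurements
    GROUP BY InstrumentId;

    SET STATISTICS TIME OFF;

Repeating the same query over tables with and without each index/compression combination yields the comparable execution times the study analyzed.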
|
25 |
Řešení pro odchylkovou analýzu nákladů ve výrobní společnosti / Solution for Deviation Analysis of Cost in a Manufacturing Company. Dobeš, Radim. January 2021.
The diploma thesis first introduces the reader to business intelligence and to controlling in manufacturing companies. It then analyzes and evaluates the current state of the selected manufacturing company with respect to deviations in production. A controlling model is then built with MS SQL Server and SSAS, enabling the company to identify weaknesses in production unambiguously and quickly and to eliminate them. Finally, the real benefits of the project for the company are evaluated.
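At its core, deviation analysis compares planned and actual figures. A minimal T-SQL sketch of that logic follows; the thesis packages it in an SSAS controlling model, and all names here are hypothetical:

    -- Hypothetical cost deviation per cost centre: actual minus planned.
    SELECT
        CostCenter,
        SUM(PlannedCost)              AS PlannedCost,
        SUM(ActualCost)               AS ActualCost,
        SUM(ActualCost - PlannedCost) AS Deviation,
        CASE WHEN SUM(PlannedCost) <> 0
             THEN 100.0 * SUM(ActualCost - PlannedCost) / SUM(PlannedCost)
        END                           AS DeviationPct
    FROM dbo.CostFacts
    GROUP BY CostCenter
    ORDER BY Deviation DESC;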
|
26 |
Automatiserad dokumentation vid systemutveckling / Automated documentation in systems development. Andersson, Magnus. January 2012.
A well-known problem in the software development industry is the lack of high-quality system documentation, and the company Multisoft Consulting is no exception: developers spend an unnecessary amount of time familiarizing themselves with existing systems. As part of the solution, the company wants to generate documentation automatically. The generation is to be performed by the Softadmin® platform, which is used to build all customer systems. The platform is based on C# and Microsoft SQL Server and contains a set of ready-made components, each with its own functionality. To determine which documentation should be generated automatically, literature was studied and Multisoft Consulting developers were interviewed; to determine which documentation can be generated, Softadmin® itself was analyzed. The conclusion is that documentation which provides an overview of a system and shows how the system is used is both desirable and possible to generate with Softadmin®. A form of system overview, a tree structure, was already implemented in Softadmin®, but it lacked some desired details, so the focus of the study became implementing a prototype that completes this overview. As a result, information about the system's menu items (pages with different functionality) is now displayed in the overview.
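A minimal sketch of how such a tree overview could be produced from menu-item metadata with a recursive CTE; Softadmin®'s actual metadata schema is proprietary, so the table and columns below are hypothetical:

    -- Walk a hypothetical MenuItem table and emit an indented documentation tree.
    WITH MenuTree AS (
        SELECT m.MenuItemId, m.Name, m.ComponentType,
               0 AS Depth,
               CAST(m.Name AS nvarchar(max)) AS SortPath
        FROM dbo.MenuItem AS m
        WHERE m.ParentId IS NULL                  -- top-level menu items
        UNION ALL
        SELECT m.MenuItemId, m.Name, m.ComponentType,
               t.Depth + 1,
               t.SortPath + N'/' + m.Name
        FROM dbo.MenuItem AS m
        JOIN MenuTree AS t ON m.ParentId = t.MenuItemId
    )
    SELECT REPLICATE(N'  ', Depth) + Name AS Outline, ComponentType
    FROM MenuTree
    ORDER BY SortPath;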
|
27 |
Jämförelse mellan graf- och relationsdatabas : En studie av prestanda vid sökning av kortaste vägen mellan två givna platser i ett rälsbundet nätverk / Comparison between graph and relational database : A study of performance when searching for the shortest path between two given places in a rail network. Nilsson, Jimmy; Hansson, Johan. January 2021.
Traditional relational databases store data in tabular form and have existed for several decades. New requirements on data, such as high availability and scalability, have made NoSQL databases increasingly popular. NoSQL databases meet these requirements by handling and storing data in other ways; document databases and graph databases are two such variants. This study examined the difference in performance between the relational database SQL Server 2019 and the graph database Neo4j. An experiment with the hypothesis "graph databases have faster response times than relational databases when retrieving the shortest path between two given places" was performed by executing a function on a dataset provided by the study's partner, the Swedish Transport Administration (Trafikverket). The dataset represents Sweden's railway network and consists of 1320 places and 2788 associated connections. The function searched for the shortest path between two places for four selected routes in each database architecture. The observed and analyzed response times show that Neo4j has an average response time 50 times faster than SQL Server 2019, which supports the hypothesis. The response times from the two databases were also compared with a Wilcoxon test, which showed that the median response times differ at the 1 % significance level. In addition, the results show that the average response time for SQL Server 2019 grows faster than Neo4j's as more places and connections become involved in the search. Relational databases have slower response times than graph databases because they use join statements to find the relevant relationships between tables, which means they must search through all the data to find the shortest path between two places. Graph databases, by contrast, only follow relationships directly connected to the node the algorithm is currently visiting, which keeps response times low.
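On the relational side, such a search is often expressed as a recursive CTE that enumerates candidate paths, which also illustrates why joins scale poorly here. The thesis does not publish its SQL, so this T-SQL sketch assumes a hypothetical Connections table; on the Neo4j side, the equivalent search is a Cypher shortest-path traversal.

    -- Enumerate acyclic routes from a start place and keep the shortest.
    -- Hypothetical schema: Connections(FromPlace, ToPlace, Km), with each
    -- connection stored in both directions.
    WITH Routes AS (
        SELECT c.ToPlace,
               c.Km AS TotalKm,
               CAST('/' + c.FromPlace + '/' + c.ToPlace + '/' AS varchar(max)) AS Path
        FROM dbo.Connections AS c
        WHERE c.FromPlace = 'Karlstad'                 -- hypothetical start
        UNION ALL
        SELECT c.ToPlace,
               r.TotalKm + c.Km,
               r.Path + c.ToPlace + '/'
        FROM Routes AS r
        JOIN dbo.Connections AS c ON c.FromPlace = r.ToPlace
        WHERE r.Path NOT LIKE '%/' + c.ToPlace + '/%'  -- skip revisits (cycles)
    )
    SELECT TOP (1) Path, TotalKm
    FROM Routes
    WHERE ToPlace = 'Stockholm'                        -- hypothetical goal
    ORDER BY TotalKm
    OPTION (MAXRECURSION 0);                           -- allow deep recursion

Because the CTE must enumerate every acyclic path before picking the cheapest, its work grows quickly with the number of places and connections, consistent with the response-time gap the study observed.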
|
28 |
Generell DDL-generering: metodik för olika databashanterare : Undersökning av metoder för generisk DDL-kod-generering över olika databassystem / Generic DDL generation: methodology for different database management systems. An investigation of methods for generic DDL code generation across database systems. Gabrielsson, Andreas. January 2023.
This study aimed to develop a generic application capable of generating DDL scripts from three different databases (Oracle, SQL Server, and DB2) using only a JDBC connection. The need for the study arises from database administrators' and developers' need to manage databases efficiently across systems with varying syntax and structure. The work was carried out in the IntelliJ IDEA using the java.sql API for database operations. The results showed that, despite the differences between these databases, it was possible to develop a generic process for extracting DDL code with only a JDBC connection, although some adaptations specific to each database system were required. One notable observation concerned how primary keys and indexes are handled across the systems. The application has the potential to be developed further into a powerful database management tool that saves time and resources. Areas for future investigation include the handling of complex data types and structures, as well as performance with large databases.
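To give a flavor of the metadata-driven approach, here is a hypothetical T-SQL sketch that rebuilds a CREATE TABLE skeleton from SQL Server's catalog views, which carry the same information JDBC exposes through DatabaseMetaData.getColumns(). Oracle and DB2 keep their metadata in different catalog views, which is exactly the kind of per-system adaptation the study describes:

    -- Rebuild a CREATE TABLE skeleton from catalog metadata.
    -- Needs SQL Server 2017+ for STRING_AGG; table name is hypothetical.
    SELECT 'CREATE TABLE ' + TABLE_SCHEMA + '.' + TABLE_NAME + ' ('
         + STRING_AGG(CAST(
               COLUMN_NAME + ' ' + DATA_TYPE
             + CASE WHEN CHARACTER_MAXIMUM_LENGTH = -1 THEN '(max)'
                    WHEN CHARACTER_MAXIMUM_LENGTH IS NOT NULL
                    THEN '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(10)) + ')'
                    ELSE '' END
             + CASE WHEN IS_NULLABLE = 'NO' THEN ' NOT NULL' ELSE '' END
           AS varchar(max)), ', ') WITHIN GROUP (ORDER BY ORDINAL_POSITION)
         + ');' AS GeneratedDdl
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'Customers'                     -- hypothetical table
    GROUP BY TABLE_SCHEMA, TABLE_NAME;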
|
29 |
Investigating and Implementing a DNS Administration System. Brännström, Anders; Nilsson, Rickard. January 2007.
NinetechGruppen AB is an IT services company with about 30 employees, based primarily in Karlstad, Sweden. The company began to have problems with its DNS administration because the number of administrated domains had grown too large. A single employee was responsible for all the administration, and text editors were used to modify the DNS configuration files directly on the name servers. This was an error-prone process that also easily led to inconsistencies between the documentation and the real world.
NinetechGruppen AB decided to solve the administrative problems by adopting a DNS administration system, either by using an existing product or by developing a new system internally. This thesis describes the process of simplifying the DNS administration procedures of NinetechGruppen AB.
Initially, an investigation was conducted in which existing DNS administration tools were sought and evaluated against the company's requirements for the new system.
The system was to have a web administration interface, developed in ASP.NET 2.0 with C# as the programming language. The administration interface had to run on Windows, use SQL Server 2005 as the back-end database server, and base access control on Active Directory. Furthermore, the system had to be able to integrate customer handling with domain administration, and any changes to the system information had to follow the Information Technology Infrastructure Library change management process.
The name servers ran the popular name server software BIND on two different Linux distributions: Red Hat Linux 9 and SUSE Linux 10.0.
The investigation concluded that no existing system satisfied the requirements; hence a new system was developed, streamlined for use at NinetechGruppen AB. A requirement specification and a functional description were created and used as the basis for the development. The finished system satisfies all necessary requirements to some extent, and most of them fully.
|
30 |
Business Intelligence v MS Dynamics AX 2009 / Business Intelligence in MS Dynamics AX 2009. Hubáček, Filip. January 2010.
The subject of the diploma thesis "Business Intelligence in Microsoft Dynamics AX" is to analyze the functionality of the ERP system Microsoft Dynamics AX 2009 in the areas of business intelligence and reporting, reflecting the company's current market position. The thesis first defines the basic relationship between ERP and business intelligence systems, then describes the BI capabilities of MS Dynamics AX in terms of their practical use, along with the fundamental technological aspects. It also evaluates and defines the individual implementation steps based on the MS Sure Step 2010 methodology, together with a description of the deployment process. Where the standard product covers some of these areas insufficiently, the solution of the company Circon Circle Consulting is presented. In addition, a BI design for the AX cost accounting module is realized, including the data mart, ETL, and reports, with attention to specifics such as parent-child hierarchies and many-to-many relationships between fact tables and dimensions (sketched below). The work will be of value mainly to consultants of the system, who are given insight into important and, for users, attractive functionality, together with a possible implementation process. Technically oriented readers may appreciate the cost accounting solution and the possible approaches to designing the data mart and other areas where the above-mentioned aspects arise.
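The two modelling specifics mentioned map to well-known warehouse patterns: a parent-child hierarchy becomes a self-referencing dimension table, and a many-to-many fact-dimension relationship goes through a bridge table. A hypothetical T-SQL sketch (all names invented for illustration, not taken from the thesis):

    -- 1) Parent-child hierarchy as a self-referencing dimension table.
    CREATE TABLE DimCostCenter (
        CostCenterKey       int          NOT NULL PRIMARY KEY,
        ParentCostCenterKey int          NULL
            REFERENCES DimCostCenter (CostCenterKey), -- NULL = hierarchy root
        CostCenterName      nvarchar(60) NOT NULL
    );

    -- 2) Many-to-many fact-dimension relationship via a bridge table
    --    instead of a plain foreign key.
    CREATE TABLE BridgeCostAllocation (
        CostEntryKey  int          NOT NULL,  -- key of the fact row
        CostCenterKey int          NOT NULL
            REFERENCES DimCostCenter (CostCenterKey),
        AllocationPct decimal(5,2) NOT NULL,  -- share of the cost entry
        PRIMARY KEY (CostEntryKey, CostCenterKey)
    );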
|