  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Implementace SugarCRM a BI řešení s použitím opensource nástrojů / Implementing SugarCRM and BI solution using opensource tools

Ullrich, Jan January 2009 (has links)
This diploma thesis focuses on the implementation of a customer relationship management (CRM) and Business Intelligence (BI) solution in a small company using open-source technologies. The main objective is to implement the CRM and BI solutions and to evaluate their usability. The thesis describes the basic elements of both solutions; the Balanced Scorecard method is used to define metrics. The first part covers the theoretical background, defining basic terms from strategic management with the Balanced Scorecard, CRM, and Business Intelligence. The following part designs the complete BI solution with open-source technologies, including the data model, OLAP cubes, the data warehouse, and reports. The thesis demonstrates the creation and use of the whole BI solution and evaluates the difficulties encountered while implementing these products.
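The star-schema-plus-OLAP-cube pattern these theses build on can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual schema: table and column names (`dim_customer`, `fact_sales`, etc.) are assumptions chosen for the example.

```python
import sqlite3

# Hypothetical, minimal star schema: one fact table joined to two dimensions,
# roughly in the spirit of a small-company CRM/BI data warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, segment TEXT);
    CREATE TABLE dim_date     (date_id     INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE fact_sales (
        customer_id INTEGER REFERENCES dim_customer,
        date_id     INTEGER REFERENCES dim_date,
        amount      REAL
    );
""")
conn.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                 [(1, "SMB"), (2, "Enterprise")])
conn.executemany("INSERT INTO dim_date VALUES (?, ?)",
                 [(10, 2008), (11, 2009)])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(1, 10, 100.0), (1, 11, 150.0), (2, 11, 900.0)])

# An OLAP-style roll-up over the cube: total sales per segment per year.
rollup = conn.execute("""
    SELECT c.segment, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_id = f.customer_id
    JOIN dim_date d     ON d.date_id     = f.date_id
    GROUP BY c.segment, d.year
    ORDER BY c.segment, d.year
""").fetchall()
print(rollup)  # [('Enterprise', 2009, 900.0), ('SMB', 2008, 100.0), ('SMB', 2009, 150.0)]
```

The dimensions carry the descriptive attributes users group by; the fact table carries only keys and measures, which is what makes such aggregations cheap.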
112

Implementace finanční a majetkové analýzy municipalit pomocí open-source BI nástrojů / Implementation of the Financial and Property Analysis of Municipalities Using Open Source BI Tools

Černý, Ondřej January 2011 (has links)
The objective of this thesis is a complete implementation of the Financial and Property Analysis of Municipalities (FAMA) methodology using open-source Business Intelligence (BI) tools. The FAMA methodology was developed at the Institute of Public Administration and Regional Development at the University of Economics, Prague, and monitors a wide range of aspects of municipal management. The main objective of this work is to create an application that allows users to clearly analyze the management of municipalities using the indicators of this methodology, built with open-source BI tools. The theoretical part consists of two halves. The first is devoted to the analysis of municipal finances and describes the FAMA methodology itself; methods used by the Ministry of Finance to evaluate municipalities are also described. The second half introduces the principles and components used in BI, some of which are used in the actual implementation. The practical part initially deals with the selection of suitable open-source BI tools, which are subsequently used to create an application for analyzing municipal management. The implementation itself is divided into several parts. First, an initial study is performed, based on an analysis of the source data and user requirements. Based on this analysis, a data warehouse is designed. Subsequently, an ETL project is created to process the financial reports of municipalities and store them in the data warehouse. After filling the data warehouse, several OLAP cubes are created for multidimensional data analysis, and finally the presentation layer of the application is introduced and suitable graphical outputs for data presentation are designed. The main contribution of this thesis is the actual implementation of the FAMA methodology using the selected tools. The solution includes all indicators of the methodology and covers the financial data of all municipalities of the Czech Republic for the years 2001 to 2012. Despite the scope of this work, a complete, ready-to-use solution was produced.
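The ETL transform step described above — turning raw report rows into typed warehouse records with derived indicators — can be sketched as follows. The field names and the `debt_per_capita` indicator are illustrative assumptions, not the actual FAMA metrics.

```python
# Hypothetical ETL transform step: normalize raw municipal report rows
# (strings as extracted from source files) and derive one example indicator.
raw_rows = [
    {"municipality": "Praha", "year": "2011", "debt": "2500000", "population": "1250000"},
    {"municipality": "Brno",  "year": "2011", "debt": "380000",  "population": "380000"},
]

def transform(row):
    # Cast extracted strings to proper types and compute the derived measure.
    debt = float(row["debt"])
    population = int(row["population"])
    return {
        "municipality": row["municipality"],
        "year": int(row["year"]),
        "debt_per_capita": round(debt / population, 6),
    }

warehouse = [transform(r) for r in raw_rows]
print(warehouse[0]["debt_per_capita"])  # 2.0
```

In a real pipeline this step would sit between extraction from the ministry's report files and the load into the fact table; the point is that indicators are computed once, during ETL, rather than at query time.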
113

Spectroscopic Study of Highly Ionised Plasmas : Detailed and Statistical Approaches / Etude spectroscopique des plasmas hautement ionisés : approche détaillée et statistique

Na, Xieyu 16 November 2017 (has links)
The description of the spectral properties of highly ionized plasmas, such as those studied in stellar atmospheres, nuclear fusion facilities, or laser-plasma experiments, may require different types of interpretation: the detailed, line-by-line approach, which relies on diagonalization of the system Hamiltonian, and the statistical approach, based on characterizing spectral structures through their distribution moments. This PhD work aims at developing statistical methods for situations where abundant lines gather into Unresolved Transition Arrays (UTA). To this end, analytical and numerical studies have been carried out. On the one hand, high-order moments of the spin-orbit energy distribution have been derived, using averaging techniques based on second quantization and angular momentum algebra. On the other hand, after implementing a post-processing program for both the detailed and UTA outputs of the Flexible Atomic Code (FAC), emission and absorption spectra of tungsten plasmas have been studied under tokamak-equivalent thermodynamic conditions. The results of this thesis should stimulate further analysis of average computations involving complex transition processes.
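The core idea of the statistical (UTA) approach — summarizing a bundle of unresolved lines by the moments of its strength-weighted energy distribution instead of listing every line — can be illustrated numerically. The line energies and strengths below are made-up numbers, not FAC output.

```python
import math

# UTA-style characterization: describe a transition array by the moments of
# its strength-weighted energy distribution (illustrative values, e.g. in eV).
energies  = [998.0, 999.5, 1000.0, 1001.0, 1002.5]   # line energies
strengths = [0.10, 0.25, 0.30, 0.25, 0.10]           # relative line strengths

total = sum(strengths)
# First moment: the array's mean (center-of-gravity) energy.
mean = sum(s * e for s, e in zip(strengths, energies)) / total
# Second centered moment: the variance, whose square root is the array width.
var = sum(s * (e - mean) ** 2 for s, e in zip(strengths, energies)) / total
width = math.sqrt(var)
# Reduced higher moments: skewness (asymmetry) and excess kurtosis (sharpness).
skew = sum(s * (e - mean) ** 3 for s, e in zip(strengths, energies)) / total / width ** 3
kurt = sum(s * (e - mean) ** 4 for s, e in zip(strengths, energies)) / total / width ** 4 - 3.0

print(round(mean, 3), round(width, 3))
```

A detailed calculation would diagonalize the Hamiltonian and emit each line; the statistical description replaces that with a few numbers per array, which is what makes it tractable when millions of lines overlap.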
114

ResearchIQ: An End-To-End Semantic Knowledge Platform For Resource Discovery in Biomedical Research

Raje, Satyajeet 20 December 2012 (has links)
No description available.
115

Istar : um esquema estrela otimizado para Image Data Warehouses baseado em similaridade

Anibal, Luana Peixoto 26 August 2011 (has links)
A data warehousing environment supports the decision-making process through the investigation and analysis of data in an organized and agile way. However, current data warehousing technologies do not allow decisions to be based on the pictorial (intrinsic) features of images, since such analysis requires managing data about those intrinsic features and performing similarity comparisons, which a conventional data warehouse does not support. In this work, we propose a new data warehousing environment called iCube that enables the processing of OLAP perceptual-similarity queries over images, based on their pictorial (intrinsic) features. Our approach extends the three main phases of the traditional data warehousing process to allow the use of images as data. For the data integration (ETL) phase, we propose a process that represents each image by its intrinsic content (such as numerical color or texture descriptors) and integrates this data with conventional data in the DW. For the dimensional modelling phase, we propose a star schema, called iStar, that stores both the intrinsic and the conventional image data; at this stage, the schema is also modelled to represent and support different user-defined perceptual layers. For the data analysis phase, we propose an environment in which the OLAP engine uses image similarity as a query predicate, employing a filter mechanism to speed up query execution. The iStar schema was validated through performance tests evaluating both the building cost and the cost of processing IOLAP (Image OLAP) queries. The results showed an impressive performance improvement in IOLAP query processing: the gain of the iCube over the best related work (SingleOnion) was up to 98.21%.
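The similarity predicate at the heart of the analysis phase — keeping only images whose feature vectors fall within a distance threshold of a query image — can be sketched like this. The feature vectors, names, and radius are illustrative assumptions, not the iCube descriptors.

```python
import math

# Sketch of an image-similarity predicate: each image is reduced to a small
# numeric feature vector (e.g. a color descriptor), and a query keeps only
# images within a similarity radius of the query image.
features = {
    "img_a": [0.9, 0.1, 0.0],
    "img_b": [0.8, 0.2, 0.1],
    "img_c": [0.0, 0.1, 0.9],
}

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def similar_to(query, radius):
    # The predicate an IOLAP engine would evaluate: distance below threshold.
    q = features[query]
    return sorted(name for name, vec in features.items()
                  if euclidean(q, vec) <= radius)

print(similar_to("img_a", 0.5))  # ['img_a', 'img_b']
```

A real system would pair this with an index or filter structure so the distance is not computed against every stored image, which is the role of the filter mechanism the abstract mentions.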
116

Datové sklady - principy, metody návrhu, nástroje, aplikace, návrh konkrétního řešení / Data warehouses -- main principles, concepts and methods, tools, applications, design and building of data warehouse solution in real company

Mašek, Martin January 2007 (has links)
The main goal of this thesis is to summarize and introduce the general theoretical concepts of data warehousing using the systems approach. The thesis defines data warehousing and its main areas and delimits it within the higher-level field of Business Intelligence. It also describes the history of data warehousing and Business Intelligence, focuses on the key principles of data warehouse building, and explains practical applications of such solutions. The aim of the practical part is to evaluate these theoretical concepts and, based on them, to design and build a data warehouse in the environment of an existing company. The final solution includes the data warehouse design, hardware and software platform selection, loading with real data using ETL services, and building end-user reports. The practical part also demonstrates the power of this technology and contributes to the business decision-making process in the company.
117

Quantitative indicators of a successful mobile application

Skogsberg, Peter January 2013 (has links)
The smartphone industry has grown immensely in recent years. The two leading platforms, Google Android and Apple iOS, each feature marketplaces offering hundreds of thousands of software applications, or apps. The vast selection has facilitated a maturing industry, with new business and revenue models emerging. As an app developer, basic statistics and data for one's apps are available via the marketplace, but also via third-party data sources. This report examines how mobile software is evaluated and rated quantitatively by both end-users and developers, and which metrics are relevant in this context. A selection of freely available third-party data sources and app monitoring tools is discussed, followed by an introduction to several relevant statistical methods and data mining techniques. The main objective of this thesis project is to investigate whether findings from app statistics can provide understanding of how to design more successful apps that attract more downloads and/or more revenue. After the theoretical background, a practical implementation is discussed in the form of an in-house application statistics web platform. This was developed together with the app developer The Mobile Life, which also provided access to app data for 16 of its published iOS and Android apps. The implementation uses automated download and import from online data sources, and provides a web-based graphical user interface that displays the data using tables and charts. Using mathematical software, a number of statistical methods were applied to the collected dataset. Analysis findings include different categories (clusters) of apps, correlations between metrics such as an app's market ranking and its number of downloads, a long-tailed distribution of keywords used in app reviews, regression models for the distribution of downloads, and an experimental application of Pareto's 80-20 rule, which was found to fit the gathered dataset. 
Recommendations to the app company include embedding session-tracking libraries such as Google Analytics into future apps. This would allow collection of in-depth metrics such as session length and user retention, enabling more interesting pattern discovery.
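The Pareto "80-20" check mentioned in the abstract is simple to state concretely: what share of total downloads comes from the top 20% of apps? The download counts below are made up for illustration, not the thesis's dataset.

```python
# Sketch of a Pareto (80-20) check over app download counts: a long-tailed
# distribution means a small fraction of apps captures most downloads.
downloads = [50000, 12000, 3000, 1500, 800, 400, 200, 100, 60, 40]

downloads.sort(reverse=True)
top_n = max(1, len(downloads) // 5)            # the top 20% of apps
top_share = sum(downloads[:top_n]) / sum(downloads)
print(round(top_share, 3))  # 0.91: top 2 of 10 apps hold ~91% of downloads
```

When `top_share` is near or above 0.8, the distribution fits the 80-20 rule of thumb; the thesis reports that its gathered dataset did.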
118

Získávání znalostí z datových skladů / Knowledge Discovery over Data Warehouses

Pumprla, Ondřej January 2009 (has links)
This Master's thesis deals with the principles of the data mining process, especially the mining of association rules. It lays out the theoretical apparatus for the general description and principles of data warehouse creation. On the basis of this theoretical knowledge, an application for association rule mining is implemented. The application requires data either in transactional form or as multidimensional data organized in a star schema. The implemented algorithms for finding frequent patterns are Apriori and FP-tree, and the system allows various parameter settings for the mining process. Validation tests and efficiency evaluations were also performed. In terms of support for association rule searching, the resulting application is more usable and robust than the compared existing systems, SAS Miner and Oracle Data Miner.
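The Apriori algorithm the thesis implements works by counting candidate itemsets level by level and only extending those that meet the minimum support. A minimal sketch, with illustrative transactions (not the thesis's code or data):

```python
from itertools import combinations

# Minimal Apriori sketch: find all itemsets occurring in at least
# `min_support` transactions, growing candidates one item at a time.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def apriori(transactions, min_support):
    items = sorted({i for t in transactions for i in t})
    frequent, k = {}, 1
    current = [frozenset([i]) for i in items]
    while current:
        # Count the support of each candidate and keep the frequent ones.
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Generate (k+1)-candidates by joining pairs of surviving k-itemsets;
        # any itemset with an infrequent subset can never be frequent (pruning).
        keys = sorted(survivors, key=sorted)
        current = list({a | b for a, b in combinations(keys, 2)
                        if len(a | b) == k + 1})
        k += 1
    return frequent

freq = apriori(transactions, min_support=2)
print(sorted(tuple(sorted(s)) for s in freq))
```

Association rules are then read off the frequent itemsets (e.g. from `{bread, milk}` one can derive `bread → milk` with confidence support({bread, milk}) / support({bread})).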
119

A comparison of the impact of data vault and dimensional modelling on data warehouse performance and maintenance / Marius van Schalkwyk

Van Schalkwyk, Marius January 2014 (has links)
This study compares the impact of dimensional modelling and data vault modelling on the performance and maintenance effort of data warehouses. Dimensional modelling is a data warehouse modelling technique pioneered by Ralph Kimball in the 1980s that is much more effective at querying large volumes of data in relational databases than third normal form data models. Data vault modelling is a relatively new modelling technique for data warehouses that, according to its creator Dan Linstedt, was created in order to address the weaknesses of dimensional modelling. To date, no scientific comparison between the two modelling techniques has been conducted. A scientific comparison was achieved in this study through the implementation of several experiments. The experiments compared data warehouse implementations based on dimensional modelling techniques with implementations based on data vault modelling techniques in terms of load performance, query performance, storage requirements, and flexibility to changes in business requirements. An analysis of the results of each experiment indicated that the data vault model outperformed the dimensional model in terms of load performance and flexibility. However, the dimensional model required less storage space than the data vault model. With regard to query performance, no statistically significant differences existed between the two modelling techniques. / MSc (Computer Science), North-West University, Potchefstroom Campus, 2014
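The structural difference the study measures can be made concrete: a data vault separates business keys (hubs) from descriptive attributes (satellites), so attribute changes arrive as new timestamped satellite rows rather than as alterations to a dimension table. A sketch with illustrative names and fields, not the study's actual schema:

```python
from dataclasses import dataclass
from datetime import date

# Sketch of the core data vault constructs: hubs hold business keys,
# satellites hold versioned descriptive attributes with load dates.
# (A full vault also has links relating hubs; omitted here for brevity.)

@dataclass(frozen=True)
class Hub:
    business_key: str            # e.g. a customer number

@dataclass(frozen=True)
class Satellite:
    hub_key: str                 # business key of the parent hub
    attributes: tuple            # descriptive attributes as (name, value) pairs
    load_date: date              # when this version of the attributes arrived

customer = Hub("CUST-001")
history = [
    Satellite("CUST-001", (("city", "Potchefstroom"),), date(2013, 1, 1)),
    Satellite("CUST-001", (("city", "Pretoria"),), date(2014, 6, 1)),
]

# The current view is simply the latest satellite row for the hub;
# history is preserved without restructuring any table.
latest = max((s for s in history if s.hub_key == customer.business_key),
             key=lambda s: s.load_date)
print(dict(latest.attributes)["city"])  # Pretoria
```

This append-only versioning is a plausible source of both findings reported above: loads are fast and schema changes are cheap (flexibility), but every attribute version is retained, costing more storage than a dimensional model.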