181

Įmonės duomenų statistinės analizės, panaudojant DBVS, galimybių tyrimas / Research of Statistical Analysis of Enterprise Data using DBMS

Vasiliauskas, Žygimantas 26 August 2010 (has links)
Business analysis and decision making are very important in today's enterprise operations. To perform statistical analysis of their activities, companies buy expensive and complex products without examining what those products actually contribute to the business. One way to perform statistical data analysis efficiently without investing significant resources is to use the standard statistical facilities of a DBMS, which support methods such as linear regression, correlation analysis, predictive analytics, Pareto analysis, chi-square analysis and ANOVA in everyday work. This work reviews the advantages and disadvantages of existing statistical analysis tools for small and mid-sized enterprises, explores statistical analysis of enterprise data in a data warehouse, and surveys the statistical functions and graphical software integrated into existing database management systems; Oracle, Microsoft SQL Server and DB2 were analyzed. A new statistical analysis solution is proposed that analyzes existing data using the integrated statistical functions of the DBMS together with an integrated graphical tool. The solution was designed and implemented for the statistical analysis of an insurance company's data. Oracle was selected because it is the DBMS the insurance company already uses and offers a large number of integrated statistical functions, which enables more diverse and rapid analysis; Oracle Discoverer was chosen for graphical presentation to best exploit the data analysis potential of the Oracle DBMS. The proposed statistical analysis process is versatile, suitable for different business areas, and applicable to other DBMSs that provide integrated analytical functions and graphical tools for displaying results.
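The idea of pushing statistics into the DBMS can be sketched as follows. Oracle exposes built-in aggregates such as `CORR` and `REGR_SLOPE`; SQLite lacks them, so this minimal sketch (table name, columns and data are invented for illustration) computes the same regression slope from plain SQL aggregates inside the database:

```python
import sqlite3

# Hypothetical sales table. In Oracle one would call REGR_SLOPE(revenue,
# ad_spend) directly; here the slope is assembled from plain aggregates,
# keeping the computation inside the database rather than the application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (ad_spend REAL, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(1, 2), (2, 4), (3, 6), (4, 8)])

row = conn.execute("""
    SELECT (AVG(ad_spend * revenue) - AVG(ad_spend) * AVG(revenue))
           / (AVG(ad_spend * ad_spend) - AVG(ad_spend) * AVG(ad_spend))
           AS slope
    FROM sales
""").fetchone()
print(row[0])  # covariance / variance = slope of revenue on ad_spend
```

With the sample data (revenue = 2 × ad_spend) the query returns a slope of 2.0; the same pattern extends to correlation and other aggregate-expressible statistics.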
182

Semantische Integration von Data Warehousing und Wissensmanagement / Semantic Integration of Data Warehousing and Knowledge Management

Haak, Liane January 2007 (has links)
Zugl.: Oldenburg, Univ., Diss., 2007
183

Szenario-Technik mit einem future warehouse : ein Beitrag zur Zukunftssicherung von Unternehmensgründungen / Scenario technique with a future warehouse: a contribution to securing the future of business start-ups

Zühlsdorff, Diana. Unknown Date (has links)
Bremen, Universität, Diss., 2009.
184

Stammdatenmanagement zwischen Handel und Konsumgüterindustrie : Referenzarchitektur für die überbetriebliche Datensynchronisation / Master data management between retail and the consumer goods industry: a reference architecture for inter-company data synchronization

Schemm, Jan Werner. Unknown Date (has links)
Sankt Gallen, Universität, Diss., 2008.
185

Referenzprozesse für die Wartung von Data-Warehouse-Systemen / Reference processes for the maintenance of data warehouse systems

Herrmann, Clemens. January 2006 (has links) (PDF)
Diss. Nr. 3165 Wirtschaftswiss. St. Gallen, 2006. / Literaturverz.
186

Business intelligence aus Kennzahlen und Dokumenten : Integration strukturierter und unstrukturierter Daten in entscheidungsunterstützenden Informationssystemen / Business intelligence from key figures and documents: integrating structured and unstructured data in decision-support information systems

Bange, Carsten. January 2004 (has links)
Thesis (doctoral)--Universität, Würzburg, 2003.
187

Performance enhancements for advanced database management systems

Helmer, Sven. Unknown Date (has links) (PDF)
Mannheim, Universität, Diss., 2000.
188

Processo de indução e ranqueamento de árvores de decisão sobre modelos OLAP / Induction and ranking of decision trees over OLAP models

Colares, Peterson Fernandes January 2011 (has links)
Organizations in many markets use the benefits offered by Data Mining (DM) techniques as a complement to their strategic decision-support systems. For most organizations, however, a DM project proves infeasible due to factors such as project duration, high cost, and above all uncertainty about obtaining results that actually help the organization improve its business processes. In this context, this work presents a process, based on the Knowledge Discovery in Databases (KDD) process, that aims to identify opportunities for applying DM techniques through the induction and ranking of decision trees generated by the semi-automatic exploration of Online Analytical Processing (OLAP) models. The process uses information stored in an OLAP model built from the data used by the Customer Relationship Management (CRM) and Business Intelligence (BI) systems organizations typically employ to support strategic decision making. A series of experiments was run semi-automatically using DM techniques, and the results were collected and stored for later evaluation and ranking. The process was built, tested with a significant number of experiments, and subsequently evaluated by business experts at the large financial institution where this research was carried out.
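The induce-and-rank loop described above can be illustrated with a minimal sketch: induce one-level decision trees (stumps) over attributes of an OLAP-style fact table and rank them by training accuracy. The table schema, attribute names and data below are invented for illustration; the thesis itself works over real CRM/BI cubes.

```python
from collections import Counter, defaultdict

# Invented OLAP-style fact rows: two candidate splitting attributes
# and a boolean target to predict.
facts = [
    {"region": "south", "product": "loan", "churned": True},
    {"region": "south", "product": "card", "churned": True},
    {"region": "north", "product": "loan", "churned": False},
    {"region": "north", "product": "card", "churned": False},
]

def stump_accuracy(rows, attr, target="churned"):
    """Accuracy of a one-level tree: predict the majority target class
    within each value of `attr`, then count how often that is right."""
    by_value = defaultdict(list)
    for r in rows:
        by_value[r[attr]].append(r[target])
    correct = sum(Counter(labels).most_common(1)[0][1]
                  for labels in by_value.values())
    return correct / len(rows)

# Semi-automatic loop: try every candidate attribute, rank by accuracy.
candidates = ["region", "product"]
ranking = sorted(candidates, key=lambda a: stump_accuracy(facts, a),
                 reverse=True)
print(ranking)  # ['region', 'product']
```

Here `region` separates the target perfectly (accuracy 1.0) while `product` does not (0.5), so the ranking surfaces it as the most promising dimension for a full DM project.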
189

Processo de ETC orientado a serviços para um ambiente de gestão de PDS baseado em métricas / A service-oriented ETL process for a metrics-based software development process management environment

Silveira, Patrícia Souza January 2007 (has links)
The search for quality is a constant in corporate environments. Software development organizations therefore use metrics to measure the quality of their products, processes and services. These metrics must be collected, consolidated and stored in a single central repository, typically implemented as a Data Warehouse (DW). Defining the extraction, transformation and loading (ETL) process for the metrics stored in the DW is no trivial task, given the characteristics of software development environments: heterogeneity of sources, process models, project types and isolation levels. This work presents a data warehousing environment called SPDW+ that automates the metrics ETL process. The solution provides a comprehensive, streamlined analytical model for analyzing and monitoring metrics and is built on a service-oriented approach using Web Services (WS). SPDW+ also supports low-intrusion incremental loading with the high frequency and low latency that metrics collection requires. Its main components are specified, implemented and tested. The advantages of SPDW+ are: (i) flexibility and adaptability to the constant changes of business environments; (ii) support for monitoring through frequent, incremental loads; (iii) relieving projects of the complex, time-consuming task of capturing metrics; (iv) freedom of choice regarding the management models and support tools used in projects; and (v) cohesion and consistency of the information in the metrics repository, so that data from different projects can be compared.
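The incremental, low-latency loading idea can be sketched with a watermark: each run extracts only metrics recorded after the last high-water mark, so frequent loads stay small and non-intrusive. The source rows, field names and in-memory "warehouse" below are invented for illustration, not the SPDW+ implementation.

```python
import datetime

# Invented metric records from one project's collection service.
source = [
    {"metric": "loc",     "value": 120, "recorded": datetime.datetime(2007, 5, 1)},
    {"metric": "defects", "value": 3,   "recorded": datetime.datetime(2007, 5, 2)},
    {"metric": "loc",     "value": 140, "recorded": datetime.datetime(2007, 5, 3)},
]

def incremental_load(source_rows, warehouse, watermark):
    """Append only rows newer than the watermark; return the new watermark."""
    fresh = [r for r in source_rows if r["recorded"] > watermark]
    warehouse.extend(fresh)
    return max((r["recorded"] for r in fresh), default=watermark)

warehouse = []
wm = incremental_load(source, warehouse, datetime.datetime(2007, 5, 1))
# A second, high-frequency run against the same source extracts nothing new.
wm = incremental_load(source, warehouse, wm)
print(len(warehouse))  # 2: only rows after the initial watermark, loaded once
```

Repeated runs are idempotent with respect to already-loaded rows, which is what makes frequent loads cheap enough for continuous monitoring.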
190

A framework to support developers in the integration and application of linked and open data

Heuss, Timm January 2016 (has links)
In recent years, the number of freely available Linked and Open Data datasets has multiplied into the tens of thousands. The number of applications taking advantage of them, however, has not. Large portions of potentially valuable data thus remain unexploited and inaccessible to lay users, which makes the upfront investment in releasing data in the first place hard to justify. The lack of applications must be addressed so as not to undermine the effort put into Linked and Open Data. Existing research strongly indicates that the dearth of applications is due to a lack of pragmatic, working architectures that support these applications and guide developers. This thesis presents a new architecture for the integration and application of Linked and Open Data. Its fundamental design decisions are backed by two studies. First, characteristic properties are identified from real-world Linked and Open Data samples; a key finding is that large amounts of structured data exhibit tabular structures, lack clear licensing, and involve multiple different file formats. Second, building on that study, storage choices are compared in relevant query scenarios, covering the de facto standard storage choice in this domain, triple stores, as well as relational and NoSQL approaches. The results show significant performance deficiencies for some technologies in certain scenarios. Consequently, when integrating Linked and Open Data in scenarios with application-specific entities, relational databases are the first choice of storage. Combining these findings with related best practices from existing research, a prototype framework is implemented using Java 8 and Hibernate and employed, as a proof of concept, in an existing Linked and Open Data integration project. This shows that a best-practice architectural component can be introduced successfully while simplifying the development effort needed to implement specific program code. The present work thus provides an important foundation for the development of semantic applications based on Linked and Open Data and may lead to broader adoption of such applications.
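The storage conclusion can be sketched in miniature: tabular Linked and Open Data mapped onto an application-specific relational entity rather than generic subject-predicate-object rows. The dataset, table name and columns below are invented for illustration (the thesis's prototype uses Java 8 and Hibernate; SQLite stands in here for any relational store).

```python
import sqlite3

# Invented tabular open-data records: (Linked Data URI, name, population).
records = [
    ("http://example.org/city/1", "Berlin", 3645000),
    ("http://example.org/city/2", "Bremen", 569000),
]

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE city (
        uri TEXT PRIMARY KEY,   -- keep the Linked Data identifier
        name TEXT NOT NULL,
        population INTEGER
    )
""")
conn.executemany("INSERT INTO city VALUES (?, ?, ?)", records)

# Application-specific queries run against one typed relational entity
# instead of reassembling each object from many triple-store rows.
big = conn.execute(
    "SELECT name FROM city WHERE population > 1000000").fetchall()
print(big)  # [('Berlin',)]
```

Keeping the URI column preserves the link back to the original Linked Data graph while the application works against a conventional, indexable relational schema.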
