1

An Efficient Evaluation Scheme of Shielding Effectiveness of Rectangular Enclosures with Array of Small Apertures

Lin, Yu-Pin 30 July 2009
The ETL (Equivalent Transmission Line) method estimates the complex problem of a shielded cavity with an analytical formula based on a transmission-line approach. The dominant-mode resonance of a standard enclosure lies at several hundred megahertz, well below CPU operating frequencies, so higher-order modes must be considered. Most previous papers treat only a single aperture, which does not correspond to the practical arrays of apertures found on real cases. In this thesis, an array of apertures is evaluated with an analytical formula, and higher-order modes are taken into account. Because an aperture array is usually not centered on the metallic plate, the analytical formula is modified to obtain the ETL model of a shifted array of apertures. To accommodate industrial design and aesthetics, many possible aperture-array shapes are analyzed by adapting them to the ETL method. How to improve the shielding effectiveness of an array of apertures is treated in the latter half of the thesis. The FDTD (Finite-Difference Time-Domain) method is a full-wave algorithm and is not efficient for analyzing cavities; the advantage of the ETL method is that it is much faster. However, the traditional ETL model cannot predict shielding-effectiveness levels accurately without higher-order modes, so improved formulations are introduced to simulate practical arrays of apertures with good agreement and short computing time.
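For orientation, a sketch of the standard equivalent-circuit quantities on which such a transmission-line evaluation rests (notation assumed here, following the widely used Robinson-style model; the thesis's exact formulation may differ):

```latex
% Shielding effectiveness at an observation point p inside the enclosure,
% from the voltage V_p of the equivalent circuit relative to the source V_0:
\[
  \mathrm{SE} = -20\,\log_{10}\left|\frac{2V_p}{V_0}\right|
\]
% Each TE_{mn} mode of an a x b rectangular enclosure, treated as a
% waveguide section, contributes its own propagation constant and
% wave impedance:
\[
  k_{g,mn} = \sqrt{k_0^{2}-\left(\frac{m\pi}{a}\right)^{2}
                          -\left(\frac{n\pi}{b}\right)^{2}},
  \qquad
  Z_{g,mn} = \frac{k_0}{k_{g,mn}}\,Z_0,
\]
% and the shorted rear wall at distance l transforms along the guide to
\[
  Z_{\mathrm{in}} = j\,Z_{g,mn}\tan\left(k_{g,mn}\,l\right).
\]
```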
2

Desenvolvimento de aplicações ETL como uma proposta para redução de custos em projetos de Data Warehouse / Development of ETL Applications as a Proposal for Cost Reduction in Data Warehouse Projects

Oran Fonseca de Souza, Carla January 2003
Privileged access to strategic information has increasingly become a powerful weapon for keeping companies in the market. In general, organizations hold large volumes of data but lack mechanisms capable of processing that data and converting it into information relevant to decision making. Among the mechanisms employed to meet these needs, Data Warehousing stands out, a technology that began to spread in the 1980s with the concept of corporate databases. It certainly offers numerous advantages to the organization, but it demands high investments for its implementation. During the development of a Data Warehouse (DW), one of the most costly phases comprises the Extraction, Transformation and Loading (ETL) of the data, and one of the factors driving up its cost is the acquisition of tools to automate this process. The main objective of this work is to present a cost-reduction alternative for DW projects through the implementation of an application for extracting, transforming and loading data in environments with homogeneous databases. The technique adopted in this investigation was a case study, set in the Departamento de Tributação of the Prefeitura Municipal de Manaus. In that environment, an application was developed to automate the ETL process, demonstrating the feasibility of this data-movement phase without the organization having to spend resources on a commercial ETL tool.
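As an illustration of what such a hand-built ETL step can look like between homogeneous databases, a minimal sketch in Python (the table names, columns, and cleansing rule are hypothetical, not taken from the thesis):

```python
import sqlite3

# Minimal hand-rolled ETL between two same-engine databases, as an
# alternative to acquiring a commercial ETL tool. All names are
# illustrative only.

def etl(source_path: str, warehouse_path: str) -> int:
    src = sqlite3.connect(source_path)
    dw = sqlite3.connect(warehouse_path)
    dw.execute(
        """CREATE TABLE IF NOT EXISTS fact_tax_payment (
               taxpayer_id TEXT, year INTEGER, amount REAL)"""
    )

    # Extract: read the operational records.
    rows = src.execute(
        "SELECT taxpayer_id, year, amount FROM payments"
    ).fetchall()

    # Transform: drop records with missing amounts, normalize ids.
    clean = [
        (str(tid).strip().upper(), int(year), float(amount))
        for tid, year, amount in rows
        if amount is not None
    ]

    # Load: append the cleaned rows into the warehouse fact table.
    dw.executemany("INSERT INTO fact_tax_payment VALUES (?, ?, ?)", clean)
    dw.commit()
    src.close()
    dw.close()
    return len(clean)

if __name__ == "__main__":
    print(f"loaded {etl('operational.db', 'warehouse.db')} rows")
```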
3

Srovnání komerčních BI nástrojů s nástroji OpenSource / Comparison of Commercial BI Tools with OpenSource Tools

Okleštěk, Petr January 2006
The aim of this diploma thesis is to compare commercial and OpenSource Business Intelligence solutions for a medium-sized enterprise. For the comparison, the MS SQL Server database server with Analysis Services was chosen, along with two possible ways of deploying OpenSource technologies. The first is "The Bee", a complete BI solution from Insight Strategy a.s. The second is a solution assembled from individual applications found on the Internet, namely Kettle, Firebird, Mondrian and OpenI. The individual solutions are compared on the basis of predefined and described metrics. The main goal is to verify, using a fictitious company, whether OpenSource BI solutions work, whether they can replace commercial solutions, and whether doing so is financially advantageous.
4

Moderní přístupy tvorby datových skladů / Modern Approaches to Building Data Warehouses

Bednář, Luboš January 2010
No description available.
5

A Semantic Framework for Integrating and Publishing Linked Data on the Web

January 2016
The Semantic Web is a web of data that provides a common framework and technologies for sharing and reusing data across applications. In Semantic Web terminology, linked data denotes a method of exposing and connecting data on the web from different sources. The purpose of linked data is to publish data in an open, standard format and to connect it with existing data in the Linked Open Data Cloud. The goal of this thesis is a semantic framework for integrating and publishing linked data on the web. Traditionally, integrating data from multiple sources involves an Extract-Transform-Load (ETL) framework that generates datasets for analytics and visualization; this thesis proposes introducing a semantic component into the ETL framework to semi-automate the generation and publishing of linked data. Various existing ETL tools and data-integration techniques are analyzed and their deficiencies identified. A set of requirements for the semantic ETL framework is derived by manually integrating data from sources such as weather, holidays, airports, and flight arrivals, departures and delays. The research questions addressed are: (i) to what extent can the integration, generation, and publishing of linked data to the cloud be automated using a semantic ETL framework; (ii) does the use of semantic technologies produce a richer data model and better-integrated data? Details of the methodology, the data collection, and an application that uses the generated linked data are presented. Evaluation compares the traditional data-integration approach with the semantic ETL approach in terms of the integration effort, the data model produced, and querying of the generated data.
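To make the idea of a semantic transform step concrete, a minimal sketch using Python's rdflib (the vocabulary, namespace URI, and record shape are illustrative assumptions, not the thesis's actual schema):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical namespace for the integrated flight dataset.
EX = Namespace("http://example.org/flights/")

def record_to_triples(g: Graph, rec: dict) -> None:
    """Transform step of a semantic ETL: one extracted record becomes
    RDF triples instead of a warehouse row, so it can later be linked
    against other published datasets."""
    flight = URIRef(EX[f"flight/{rec['flight_id']}"])
    airport = URIRef(EX[f"airport/{rec['origin']}"])
    g.add((flight, RDF.type, EX.Flight))
    g.add((flight, EX.departsFrom, airport))
    g.add((flight, EX.delayMinutes,
           Literal(rec["delay"], datatype=XSD.integer)))

g = Graph()
g.bind("ex", EX)
record_to_triples(g, {"flight_id": "AA100", "origin": "PHX", "delay": 12})
# Load step: serialize (or push to a triple store) for publishing.
print(g.serialize(format="turtle"))
```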
6

EXTRACT TRANSFORM AND LOADING TOOL FOR EMAIL

Lawanghare, Amit Rajiv 01 September 2019
This project applies Extract, Transform and Load (ETL) operations to relational data exchanged via email. Email is an important form of communication for both personal and corporate use, enabling reliable and quick exchange, and many useful files are shared as attachments containing transactional or relational data. The tool allows a user to write filter conditions and lookup conditions on attachments and to define an attribute map from attachment fields to database table columns. Data cleansing can be performed per attribute by writing rules and their matching state, and a user can add custom functions for data transformation. Once the data has been loaded into the database, it is aggregated into reports. The tool needs a one-time setup per file template and is automated from that point on.
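A minimal sketch of this style of email-attachment ETL, using only Python's standard library (the attribute map, target table, and cleansing rule are hypothetical; the actual tool's design may differ):

```python
import csv
import email
import io
import sqlite3
from email import policy

# Hypothetical attribute map: CSV column in the attachment -> DB column.
ATTRIBUTE_MAP = {"Invoice No": "invoice_id", "Total": "amount"}

def load_csv_attachments(eml_path: str, db: sqlite3.Connection) -> int:
    with open(eml_path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)
    loaded = 0
    for part in msg.iter_attachments():
        if part.get_content_type() != "text/csv":
            continue  # filter condition: CSV attachments only
        text = part.get_payload(decode=True).decode("utf-8")
        for row in csv.DictReader(io.StringIO(text)):
            # Transform: apply the attribute map, then a cleansing rule.
            mapped = {dst: row[src].strip()
                      for src, dst in ATTRIBUTE_MAP.items()}
            if not mapped["amount"]:
                continue  # cleansing: skip rows with no amount
            db.execute(
                "INSERT INTO invoices (invoice_id, amount) VALUES (?, ?)",
                (mapped["invoice_id"], float(mapped["amount"])),
            )
            loaded += 1
    db.commit()
    return loaded

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invoices (invoice_id TEXT, amount REAL)")
print(load_csv_attachments("message.eml", con), "rows loaded")
```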
7

Metodología y Herramientas para Proyectos de Almacenamiento de Datos Históricos para Empresas de Retail en Chile / Methodology and Tools for Historical Data Warehousing Projects for Retail Companies in Chile

Villavicencio Théoduloz, Andrés Felipe January 2007
The objective of this degree project is to propose a methodology for building data warehouses (DW) for retail information, focused on the realities of companies in this sector in Chile. In addition, a basic but sufficient set of tools is developed to support its operation, making such projects repeatable and agile while guaranteeing high confidence in data quality.
8

Sbírka řešených příkladů pro výuku Business Intelligence pro studenty EF / Collection of Solved Problems of Business Intelligence for Students of the Faculty of Economics

KOZUBKOVÁ, Miroslava January 2016
The subject of the diploma thesis "Collection of Solved Problems of Business Intelligence for Students of the Faculty of Economics" is the creation and description of a series of solved Business Intelligence problems. The theoretical part covers the basic terms associated with Business Intelligence. The practical part demonstrates the use of MS SQL Server and MS SQL Server Data Tools to build the different parts of a Business Intelligence process.
9

Large Scale ETL Design, Optimization and Implementation Based On Spark and AWS Platform

Zhu, Di January 2017
Nowadays, the amount of data generated by the users of an Internet product is increasing exponentially: the clickstream of a web application with millions of users, geospatial information from GIS-based Android and iPhone apps, or sensor data from cars and other electronic equipment. Billions of such events may be produced every day, so it is unsurprising that valuable insights can be extracted from them, for monitoring systems, fraud detection, user-behavior analysis, feature verification, and more. Technical issues emerge accordingly: heterogeneity, sheer volume, and the varied requirements for using the data across different dimensions make it much harder to design data pipelines and to transform and persist the data in a data warehouse. There are, undeniably, traditional ways to build ETLs, from mainframe [1] and RDBMS systems to MapReduce and Hive. Yet with the emergence and popularization of the Spark framework and AWS, this procedure can evolve into a more robust, efficient, less costly and easier-to-implement architecture for collecting data, building dimensional models and running analytics on massive datasets. Drawing on the billions of user-behavior events that arrive daily at a car-transportation company, this thesis contributes an exploratory way of building and optimizing ETL pipelines based on AWS and Spark, and compares it with current mainstream data pipelines from several aspects.
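A minimal sketch of the kind of Spark-based ETL step described here (the bucket names, schema, and partitioning choice are illustrative assumptions, not the thesis's pipeline):

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative Spark ETL: read raw clickstream JSON from S3, derive a
# date partition, and persist as partitioned Parquet for the warehouse.
spark = (
    SparkSession.builder
    .appName("clickstream-etl")
    .getOrCreate()
)

events = spark.read.json("s3://example-raw-bucket/clickstream/2017/")

cleaned = (
    events
    .filter(F.col("user_id").isNotNull())  # drop anonymous noise
    .withColumn("event_date", F.to_date(F.from_unixtime(F.col("ts"))))
    .select("user_id", "event_type", "event_date")
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")  # lets downstream queries prune scans
    .parquet("s3://example-warehouse-bucket/fact_clickstream/")
)
```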
10

Integrering av befintliga operationella system för beslutsstöd / Systems Integration for Decision Support

Johansson, Peter, Stiernström, Peter January 2003
This work takes as its starting point the integrated operational systems of Tekniska Verken and Östkraft, which were developed to support decision processes for, among other things, physical and financial electricity trading. The integration was achieved by adopting an IRM-based solution, referred to within the organizations as a "data warehouse".

The deregulation of the electricity market placed greater demands on electricity suppliers in terms of flexibility and functionality once customers could choose their electricity company themselves. What contributes most to the complexity of electricity trading is the many different kinds of electricity contracts that can be signed and the constantly varying purchase price on the Nordic power exchange.

The data-warehouse solution of the case-study companies suffers from unusually poor performance. The purpose of this thesis is to identify, through a qualitative study, the primary factors behind these performance problems, and to shed light on how existing operational systems should be integrated to achieve good performance.

The conclusion of the work is that the performance problems can be traced both to the architectural and structural level and to the choice to develop in-house the logic that processes data by extracting, transforming and loading it into the data warehouse. A further factor is the high level of detail that characterizes the data in the warehouse.
