261

Data Warehouses na era do Big Data: processamento eficiente de Junções Estrela no Hadoop / Data Warehouses in the Big Data era: efficient processing of Star Joins in Hadoop

Jaqueline Joice Brito 12 December 2017
The era of Big Data is here: the combination of unprecedented amounts of data collected every day with the promotion of open source solutions for massively parallel processing has shifted the industry in the direction of data-driven solutions. From recommendation systems that help you find your next significant other to the dawn of self-driving cars, Cloud Computing has enabled companies of all sizes and areas to achieve their full potential with minimal overhead. In particular, the use of these technologies for Data Warehousing applications has decreased costs greatly and provided remarkable scalability, empowering business-oriented applications such as Online Analytical Processing (OLAP). One of the most essential primitives in Data Warehouses is the Star Join, i.e. the join of a central fact table with satellite dimension tables. As the volume of the database scales, Star Joins become impractical and may seriously limit applications. In this thesis, we proposed specialized solutions to optimize the processing of Star Joins. To achieve this, we used the Hadoop software family on a cluster of 21 nodes. We showed that the primary bottleneck in the computation of Star Joins on Hadoop lies in the excessive disk spill and the overhead due to network communication. To mitigate these negative effects, we proposed two solutions based on a combination of the Spark framework with either Bloom filters or the Broadcast technique, which reduced the computation time by at least 38%. Furthermore, we showed that the use of full scans may significantly hinder the performance of queries with low selectivity. Thus, we proposed a distributed Bitmap Join Index that can be processed as a secondary index with loose binding and can be used with random access in the Hadoop Distributed File System (HDFS). We also implemented three versions (one in MapReduce and two in Spark) of our processing algorithm that uses the distributed index, which reduced the total computation time by up to 88% for Star Joins with low selectivity from the Star Schema Benchmark (SSB). Because, ideally, the system should be able to perform both random access and full scans, our solution was designed to rely on a two-layer architecture that is framework-agnostic and enables the use of a query optimizer to select which approach should be used as a function of the query. Due to the ubiquity of joins as primitive queries, our solutions are likely to fit a broad range of applications. Our contributions not only leverage the strengths of massively parallel frameworks but also exploit more efficient access methods to provide scalable and robust solutions to Star Joins with a significant drop in total computation time. / The era of Big Data has arrived: the combination of the volume of data collected daily with the emergence of open-source solutions for massive data processing has changed the industry forever. From recommendation systems that help people find their romantic partners to the creation of self-driving cars, Cloud Computing has allowed companies of all sizes and areas to reach their full potential at reduced cost. In particular, the use of these technologies in Data Warehousing applications has lowered costs and provided high scalability for business-oriented applications such as Online Analytical Processing (OLAP). Star Joins are among the most essential primitives in Data Warehouses, i.e. queries that join fact tables with dimension tables. As the volume of data grows, Star Joins become costly and may limit application performance. This thesis proposes specialized solutions to optimize the processing of Star Joins. To this end, we used the Hadoop software family on a cluster of 21 nodes. We showed that the primary bottleneck in the computation of Star Joins on Hadoop lies in the excess of disk write operations (disk spill) and in the network overhead caused by excessive communication between nodes. To reduce these negative effects, two Spark-based solutions using the Bloom filter and Broadcast techniques are proposed, reducing the total computation time by at least 38%. In addition, we showed that performing a full table scan can significantly hurt the performance of queries with low selectivity. We therefore proposed a distributed Bitmap Join Index, implemented as a secondary index that can be combined with random access in the Hadoop Distributed File System (HDFS). We implemented three versions (one in MapReduce and two in Spark) of our processing algorithm based on this distributed index, which reduced the computation time by up to 77% for low-selectivity Star Joins from the Star Schema Benchmark (SSB). Since the system should ideally be able to perform both random access and full scans, we also proposed a generic architecture that allows the insertion of a query optimizer capable of selecting which approaches should be used depending on the query. Because join queries are frequent, our solutions are relevant to a wide range of applications. The contributions of this thesis not only strengthen the use of open-source processing frameworks but also exploit more efficient data access methods to deliver a significant improvement in the performance of Star Joins.
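To make the broadcast approach mentioned above concrete, the following is a minimal PySpark sketch of a broadcast-based Star Join; the SSB-style table names, column names and HDFS paths are assumptions made for the example, not the thesis's actual code.

```python
# Minimal PySpark sketch of a broadcast-based Star Join, assuming SSB-style
# table/column names and Parquet files on HDFS (paths are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star-join-broadcast").getOrCreate()

# Fact table (large) and dimension tables (small enough to broadcast).
lineorder = spark.read.parquet("hdfs:///ssb/lineorder")
date_dim  = spark.read.parquet("hdfs:///ssb/date_dim")
customer  = spark.read.parquet("hdfs:///ssb/customer")

dates_1994 = date_dim.filter(F.col("d_year") == 1994)
cust_amer  = customer.filter(F.col("c_region") == "AMERICA")

# Broadcasting the dimensions ships them to every executor, so the star join
# is evaluated locally on each fact partition and no fact rows are shuffled,
# which is the effect exploited to cut disk spill and network traffic.
result = (lineorder
          .join(F.broadcast(dates_1994),
                lineorder["lo_orderdate"] == dates_1994["d_datekey"])
          .join(F.broadcast(cust_amer),
                lineorder["lo_custkey"] == cust_amer["c_custkey"])
          .agg(F.sum("lo_revenue").alias("revenue")))

result.show()
```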
262

Avaliação do Star Schema Benchmark aplicado a bancos de dados NoSQL distribuídos e orientados a colunas / Evaluation of the Star Schema Benchmark applied to NoSQL column-oriented distributed database systems

Lucas de Carvalho Scabora 06 May 2016
With the growth in the volume of data handled by data warehousing applications, centralized solutions become very costly and face difficulties in coping with the scalability of the data volume. There is therefore a need both to store large volumes of data and to run analytical queries (i.e., OLAP queries) over these voluminous data efficiently. This can be facilitated by scenarios characterized by the use of NoSQL databases managed in parallel and distributed environments. Among the challenges related to these scenarios, a key one is the need to carry out a performance analysis of data warehousing applications that store the data warehouse (DW) in column-oriented NoSQL databases. The experimental and standardized analysis of different systems is performed by means of tools called benchmarks. However, benchmarks for DWs have been developed mostly for relational databases and centralized environments. This master's research investigates ways of extending the Star Schema Benchmark (SSB), a centralized DW benchmark, to the distributed, column-oriented NoSQL database HBase. Proposals and analyses are presented, mainly based on experimental performance tests covering each of the four steps of a benchmark, i.e. schema and workload, data generation, parameters and metrics, and validation. The main results of this work are: (i) the proposal of the FactDate schema, which optimizes queries that access few dimensions of the DW; (ii) an investigation of the applicability of different schemas to distinct business scenarios; (iii) the proposal of two additional queries for the SSB workload; (iv) an analysis of the distribution of the data generated by the SSB, verifying whether the data aggregated by OLAP queries are balanced across the nodes of a cluster; (v) an investigation of the influence of three important parameters of the Hadoop MapReduce framework on OLAP query processing; (vi) an evaluation of the relationship between OLAP query performance and the number of nodes in a cluster; and (vii) the proposal of hierarchical materialized views, implemented with the Spark framework, to optimize the processing of consecutive OLAP queries that analyse data at progressively more or less detailed levels. These results represent important findings intended to enable the future proposal of a benchmark for DWs stored in NoSQL databases within parallel and distributed environments. / Due to the explosive increase in data volume, centralized data warehousing applications become very costly and face several problems in dealing with data scalability. This is related to the fact that these applications need to store huge volumes of data and to perform analytical queries (i.e., OLAP queries) against these voluminous data efficiently. One solution is to employ scenarios characterized by the use of NoSQL databases managed in parallel and distributed environments. Among the challenges related to these scenarios, there is a need to investigate the performance of data warehousing applications that store the data warehouse (DW) in column-oriented NoSQL databases. In this context, benchmarks are widely used to perform standard and experimental analysis of distinct systems. However, most benchmarks for DWs focus on relational database systems and centralized environments. In this master's research, we investigate how to extend the Star Schema Benchmark (SSB), which was proposed for centralized DWs, to the distributed and column-oriented NoSQL database HBase. We introduce proposals and analyses mainly based on experimental performance tests considering each of the four steps of a benchmark, i.e. schema and workload, data generation, parameters and metrics, and validation. The main results are as follows: (i) the proposal of the FactDate schema, which optimizes queries that access few dimensions of the DW; (ii) an investigation of the applicability of different schemas to different business scenarios; (iii) the proposal of two additional queries for the SSB workload; (iv) an analysis of the data distribution generated by the SSB, verifying whether the data aggregated by OLAP queries are balanced across the nodes of a cluster; (v) an investigation of the influence of three important parameters of the Hadoop MapReduce framework on OLAP query processing; (vi) an evaluation of the relationship between OLAP query performance and the number of nodes in a cluster; and (vii) the employment of hierarchical materialized views, using the Spark framework, to optimize the processing of consecutive OLAP queries that require progressively more or less aggregated data. These results represent important findings that enable the future proposal of a benchmark for DWs stored in NoSQL databases and managed in parallel and distributed environments.
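As a rough illustration of how a star-schema row can be laid out in a column-oriented NoSQL store, here is a sketch using the happybase HBase client. The table name, column families, columns and row-key layout are assumptions for the example, not the FactDate schema itself; the example merely stores a fact row together with date-dimension attributes in one HBase row, in the spirit suggested by that schema's name.

```python
# Illustrative sketch: an SSB-style fact row and its date-dimension attributes
# stored in one HBase row, split across two column families. All names are
# assumptions made for the example.
import happybase

connection = happybase.Connection("hbase-master")  # placeholder host

# One-time table creation with two column families: facts and the date dimension.
# connection.create_table("ssb_factdate", {"fact": dict(), "date": dict()})

table = connection.table("ssb_factdate")

# Row key: order key + line number, so lines of the same order stay close together.
row_key = b"000000042-1"
table.put(row_key, {
    b"fact:lo_quantity": b"17",
    b"fact:lo_revenue":  b"2116823",
    b"fact:lo_discount": b"4",
    b"date:d_year":      b"1994",
    b"date:d_month":     b"January",
})

# An OLAP-style scan then touches only the families/columns a query actually needs.
for key, data in table.scan(columns=[b"fact:lo_revenue", b"date:d_year"]):
    print(key, data)
```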
263

Výběr a implementace informačního systému / Implementation of the information system

Horká, Tereza January 2020
This Master's thesis deals with the selection and implementation of an information system for the company SunSport s.r.o. The thesis first introduces the theoretical foundations necessary for understanding the topic and then analyses the current state of the company, with an emphasis on its information system. The third part of the thesis focuses on the selection of the system and its implementation, which is described using project-management techniques.
264

Qualitätsgetriebene Datenproduktionssteuerung in Echtzeit-Data-Warehouse-Systemen / Quality-driven data production control in real-time data warehouse systems

Thiele, Maik 31 May 2010
While data warehouse systems used to be employed mostly for data analysis in support of management decision-making, they have since evolved into the central platform for the integrated information supply of an enterprise. In particular, this includes the integration of the data warehouse into operational processes, which on the one hand require very up-to-date data and on the other hand demand fast query processing. At the same time, classical data warehouse applications continue to exist that require high-quality, refined data. The users of a data warehouse system therefore have different and partly conflicting requirements with respect to data freshness, query latency and data stability. In this dissertation, methods and techniques were developed that address and resolve this conflict. The overall goal was to develop a real-time data warehouse architecture that can cover the information supply in its full breadth, from historical to current data. First, a procedure for scheduling continuous update streams was developed; it takes the conflicting requirements of the data warehouse users into account and provably produces optimal schedules. In a next step, scheduling was examined in the context of multi-stage data production processes, analysing in particular the conditions under which scheduling can be applied profitably in such processes. To support the analysis of complex data warehouse processes, a visualization of the evolution of data states across the production processes was proposed, providing a tool with which data production processes can be examined exploratively for their optimization potential. A real-time data warehouse system that is subject to operational data changes leads to inconsistencies in report production; therefore, a decoupled data layer optimized for report production was developed. Furthermore, an aggregation concept was developed to speed up query processing, and the completeness of report queries is guaranteed by special query techniques. Two data warehouse case studies of large companies were presented and their specific challenges analysed. The concepts developed in this dissertation were examined for their usefulness and applicability in these practical scenarios.
Contents: 1 Introduction; 2 Case studies (UBS AG; GfK Retail and Technology); 3 Evolution of data warehouse systems and requirements analysis; 4 Data production control in single-stage systems; 5 Evaluation of load strategies in multi-stage data production processes; 6 Consistent data analysis in operational data production processes; 7 Summary and outlook.
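The abstract above mentions mapping the update-scheduling problem to the knapsack problem and solving it with dynamic programming (chapter 4). The sketch below is a minimal, illustrative version of that idea, selecting update batches under a time budget to maximize a freshness gain; the costs, gains and budget are invented numbers, not figures from the dissertation.

```python
# Minimal sketch of knapsack-style update scheduling: within a time budget,
# pick the update batches that maximize the gain in data freshness.
# Costs, gains and the budget are invented numbers for illustration only.

def schedule_updates(updates, time_budget):
    """updates: list of (name, cost, freshness_gain); classic 0/1 knapsack DP."""
    n = len(updates)
    best = [[0] * (time_budget + 1) for _ in range(n + 1)]
    for i, (_, cost, gain) in enumerate(updates, start=1):
        for t in range(time_budget + 1):
            best[i][t] = best[i - 1][t]
            if cost <= t:
                best[i][t] = max(best[i][t], best[i - 1][t - cost] + gain)
    # Backtrack to recover which updates were chosen.
    chosen, t = [], time_budget
    for i in range(n, 0, -1):
        if best[i][t] != best[i - 1][t]:
            name, cost, _ = updates[i - 1]
            chosen.append(name)
            t -= cost
    return best[n][time_budget], list(reversed(chosen))

gain, plan = schedule_updates(
    [("sales_delta", 4, 9), ("inventory_delta", 3, 5), ("clickstream", 5, 6)],
    time_budget=8)
print(gain, plan)  # -> 14 ['sales_delta', 'inventory_delta']
```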
265

Warehouse management – streamlining picking rounds / Lagerhantering – effektivisering av plockrundor

Blom, Amanda, Stenman, Sofia January 2021
In this study we have investigated how to optimize inventory management within logistics. The focus is on the picking rounds, since picking is the most time-consuming and expensive activity in a warehouse. Is it possible to minimize the handling time in order to create efficient picking rounds? As part of the research project, we have also investigated whether AI can be used to streamline current warehouse logistics. The purpose of this report is to investigate how to minimize the distance travelled in picking rounds for efficient warehouse management, and research questions were formulated to fulfil this purpose. The first method chosen was traditional data collection: with the help of other studies conducted in this area, we started to collect information. To compare this information with the chosen company, Care of Carl, a case study was performed on the current situation at the company and on what its current optimization is based. With the help of these two methods, a result emerged: the placement and categorization of products, as well as route planning, play a significant role in streamlining the picking process and minimizing picking time. For storing items in a warehouse, the most suitable options are free (floating) item placement or storage based on sales frequency, but it is important to acknowledge that both require support systems. When categorizing articles, it is crucial to combine the categorization with a suitable picking method; in the case study, combining ABC categorization with zone picking was a possible solution. In general, it may be a good idea to invest in AI in order to use the picking-position principle. With AI it is possible to analyse more complex data such as customer patterns, and if such an implementation succeeds it can bring great advantages to a warehouse and its picking processes. Since travelling distance constitutes most of the total picking time, it is important to have a routing method that works with how the items have been placed; this study shows that the optimal routing method is the one to use. The study also showed that there are many different strategies and methods on the current market. According to the case study, Care of Carl can make large savings by changing strategies and methods, because the company has been reactive when investing in IT support systems. In general, if a company wants to meet the increasing requirements that follow from globalization and the continuous changes within logistics operations, AI is the next step. The methods currently used are not sufficient; with the help of AI there is room for improvement in product allocation and route planning. / In this study we have examined how inventory management within logistics can be optimized. The focus has been on the picking rounds, as they are the most time-consuming and costly part of a warehouse. Is it possible to minimize the handling time and thereby make the picking rounds more efficient? The study has also been part of a research project investigating whether AI can be used to streamline warehouse management. The purpose of this report is to investigate how the distance in the picking rounds can be minimized in order to make warehouse management more efficient, and research questions linked to this purpose were formulated. Traditional data collection was the method used to get the study started. The theoretical frame of reference created in this report was based on other studies carried out in this area and on the research questions to be answered. A case study was also carried out at the company Care of Carl, with a description of the current situation and an explanation of how their current optimization was developed. To answer the purpose of the report and the research questions, the theoretical frame of reference was compared with the case study carried out in connection with this study. The result of the study was that the placement and categorization of products, as well as route planning, play a decisive role in streamlining the picking process in a warehouse. Regarding the storage method, it is most suitable to use floating article placement or, alternatively, storage based on sales frequency; it is important to note that both methods require a support system in order to be implemented. Regarding the categorization of articles, it is important to combine this with a suitable picking method; in the case study, a possible solution was to combine ABC categorization with zone picking. In general, AI is a worthwhile future investment, since it makes it possible to use the picking-position principle. AI enables the analysis of more complex data such as customer patterns, and if such an implementation succeeds it can lead to great advantages for a warehouse and for the picking process. It is also important to have a routing method that works together with the placement method used, since walking time and walking distance make up most of the total picking time. This study shows that the optimal routing method is the one that should be used, and this requires an investment in a support system. The study also showed that there are currently many different strategies and methods on the market. According to the case study, Care of Carl can make large savings simply by changing its strategies and methods, because the company has been reactive when investing in IT support systems. In general, if a company wants to meet the increasing requirements that follow from globalization and the continuous changes in logistics operations, AI is the next step to take. The methods currently in use are not sufficient, and with the help of AI there is room for improvement in product allocation and route planning.
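One of the placement strategies named above is storage by sales frequency combined with ABC categorization. The following is a minimal sketch of ABC classification by pick frequency, assuming the common 80/15/5 split; the thresholds and sample articles are invented for illustration.

```python
# Minimal sketch of ABC categorization by pick frequency: A-items account for
# roughly the top 80% of picks, B for the next 15%, C for the rest. Thresholds
# and the sample data are invented for illustration.

def abc_classify(pick_counts, a_share=0.80, b_share=0.95):
    total = sum(pick_counts.values())
    ranked = sorted(pick_counts.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for sku, picks in ranked:
        cumulative += picks / total
        if cumulative <= a_share:
            classes[sku] = "A"      # place closest to packing/dispatch
        elif cumulative <= b_share:
            classes[sku] = "B"
        else:
            classes[sku] = "C"      # place in the slow-moving zone
    return classes

print(abc_classify({"shirt-M": 520, "belt": 180, "cufflinks": 40, "tie-clip": 10}))
# -> {'shirt-M': 'A', 'belt': 'B', 'cufflinks': 'C', 'tie-clip': 'C'}
```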
266

Strategy and methodology for enterprise data warehouse development. Integrating data mining and social networking techniques for identifying different communities within the data warehouse.

Rifaie, Mohammad January 2010
Data warehouse technology has been successfully integrated into the information infrastructure of major organizations as a potential solution for eliminating redundancy and providing comprehensive data integration. Realizing the importance of a data warehouse as the main data repository within an organization, this dissertation addresses different aspects of data warehouse architecture and performance. Many data warehouse architectures have been presented by industry analysts and research organizations, varying from independent, physical business-unit-centric data marts to the centralised two-tier hub-and-spoke data warehouse. The operational data store is a third tier, offered later to address business requirements for intra-day data loading. While the industry-available architectures are all valid, I found them to be suboptimal in efficiency (cost) and effectiveness (productivity). In this dissertation, I advocate a new architecture (the Hybrid Architecture) which encompasses the industry-advocated architectures. The Hybrid Architecture demands the acquisition, loading and consolidation of enterprise atomic and detailed data into a single integrated enterprise data store (the Enterprise Data Warehouse), where business-unit-centric Data Marts and Operational Data Stores (ODS) are built in the same instance of the Enterprise Data Warehouse. To highlight the role of data warehouses for different applications, we describe an effort to develop a data warehouse for a geographical information system (GIS). We further study the importance of data practices, quality and governance for financial institutions by commenting on the RBC Financial Group case. The development and deployment of the Enterprise Data Warehouse based on the Hybrid Architecture spawned its own issues and challenges. Organic data growth and business requirements to load additional new data will significantly increase the amount of stored data; consequently, the number of users will increase significantly. Enterprise data warehouse obesity, performance degradation and navigation difficulties are chief among these issues and challenges. Association rule mining and social networks have been adopted in this thesis to address them. We describe an approach that uses frequent pattern mining and social network techniques to discover different communities within the data warehouse. These communities include sets of tables frequently accessed together, sets of tables retrieved together most of the time, and sets of attributes that mostly appear together in queries. We concentrate on tables in the discussion; however, the model is general enough to discover other communities. We first build a frequent pattern mining model by considering each query as a transaction and the tables as items. Then, we mine closed frequent itemsets of tables; these itemsets include tables that are mostly accessed together and hence should be treated as one unit in storage and retrieval for better overall performance. We utilize social network construction and analysis to find maximum-sized sets of related tables; this is a more robust approach than taking a union of overlapping itemsets. We derive the Jaccard distance between the closed itemsets and construct the social network of tables by adding links that represent distance above a given threshold. The constructed network is analyzed to discover communities of tables that are mostly accessed together. The reported test results are promising and demonstrate the applicability and effectiveness of the developed approach.
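A simplified sketch of the community-discovery idea follows: each query is treated as a transaction of tables, tables whose sets of queries are sufficiently similar (Jaccard) are linked, and connected components of the resulting graph approximate communities of tables accessed together. The thesis itself mines closed frequent itemsets and computes Jaccard distances between itemsets; this sketch links individual tables and uses an arbitrary threshold purely for illustration.

```python
# Simplified sketch of discovering "communities of tables" from a query log.
# Each query is a transaction of tables; tables whose query sets are similar
# enough (Jaccard) are linked, and connected components approximate communities.
import networkx as nx

queries = [                       # invented workload
    {"orders", "customer", "date"},
    {"orders", "customer"},
    {"orders", "date"},
    {"inventory", "warehouse"},
    {"inventory", "warehouse", "supplier"},
]

tables = sorted(set().union(*queries))
appears_in = {t: {i for i, q in enumerate(queries) if t in q} for t in tables}

def jaccard(a, b):
    return len(a & b) / len(a | b)

g = nx.Graph()
g.add_nodes_from(tables)
for i, t in enumerate(tables):
    for u in tables[i + 1:]:
        if jaccard(appears_in[t], appears_in[u]) >= 0.5:   # threshold is arbitrary
            g.add_edge(t, u)

for community in nx.connected_components(g):
    print(sorted(community))
# -> ['customer', 'date', 'orders'] and ['inventory', 'supplier', 'warehouse']
```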
267

Evaluation of Load Scheduling Strategies for Real-Time Data Warehouse Environments

Thiele, Maik, Lehner, Wolfgang 13 January 2023
The demand for so-called living or real-time data warehouses is increasing in many application areas, including manufacturing, event monitoring and telecommunications. In fields like these, users normally expect short response times for their queries and high freshness for the requested data. However, it is truly challenging to meet both requirements at the same time because of the continuous flow of write-only updates and read-only queries as well as the latency caused by arbitrarily complex ETL processes. To optimize the update flow in terms of data-freshness maximization and load minimization, we propose two algorithms, local and global scheduling, that operate on the basis of different system information. We discuss the benefits and drawbacks of both approaches in detail and derive recommendations regarding the optimal scheduling strategy for any given system setup and workload.
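The abstract does not spell out the two algorithms, so the sketch below only illustrates the general contrast between scheduling on local versus global system information: ranking pending partition updates by staleness alone versus by staleness blended with pending query demand. All names and numbers are invented, and the actual algorithms evaluated in the paper differ.

```python
# Deliberately simplified contrast between a "local" and a "global" view when
# ordering pending updates: the local policy only sees how stale each partition
# is, the global one also sees how many queued queries are waiting for it.
# All numbers are invented; the algorithms in the paper are different.

pending_updates = [
    # (partition, staleness in seconds, queries currently waiting on it)
    ("sales_2023_q4", 120, 7),
    ("returns",       300, 1),
    ("clickstream",    45, 12),
]

def local_order(updates):
    return sorted(updates, key=lambda u: u[1], reverse=True)   # stalest first

def global_order(updates, alpha=0.5):
    # Blend staleness with query demand; alpha trades freshness against latency.
    return sorted(updates,
                  key=lambda u: alpha * u[1] + (1 - alpha) * 60 * u[2],
                  reverse=True)

print([u[0] for u in local_order(pending_updates)])    # returns, sales, clickstream
print([u[0] for u in global_order(pending_updates)])   # clickstream, sales, returns
```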
268

Partition-based workload scheduling in living data warehouse environments

Thiele, Maik, Fischer, Ulrike, Lehner, Wolfgang 04 July 2023
The demand for so-called living or real-time data warehouses is increasing in many application areas such as manufacturing, event monitoring and telecommunications. In these fields, users normally expect short response times for their queries and high freshness for the requested data. However, meeting these fundamental requirements is challenging due to the high loads and the continuous flow of write-only updates and read-only queries that might conflict with each other. Therefore, we present the concept of workload balancing by election (WINE), which allows users to express their individual demands on the quality of service and the quality of data, respectively. WINE exploits this information to balance and prioritize both types of transactions, queries and updates, according to the varying user needs. A simulation study shows that our proposed algorithm outperforms competing baseline algorithms over the entire spectrum of workloads and user requirements.
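As a toy illustration of the "balancing by election" idea, the sketch below lets each queued query carry its own quality-of-service versus quality-of-data weighting and has the queue vote on whether the next slot goes to an update batch or a query; the voting rule and numbers are assumptions for the example, not the published WINE algorithm.

```python
# Toy sketch of "workload balancing by election": each queued query carries a
# preference between quality of service (answer fast) and quality of data
# (apply updates first). The queue votes, and the winner decides whether the
# next scheduling slot runs a query or an update batch. Numbers are invented.

queued_queries = [
    # (query id, qos_weight, qod_weight) with qos + qod == 1.0
    ("dashboard_kpis", 0.9, 0.1),   # wants an answer now
    ("daily_report",   0.2, 0.8),   # wants fresh data
    ("fraud_monitor",  0.3, 0.7),
]

def next_slot(queries, updates_pending):
    if not updates_pending:
        return "query"
    qos_votes = sum(q[1] for q in queries)
    qod_votes = sum(q[2] for q in queries)
    return "update" if qod_votes > qos_votes else "query"

print(next_slot(queued_queries, updates_pending=True))   # -> "update"
```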
269

Brister och åtgärder i en distributionscentrals produktionsprocess : En fallstudie på Staples AB i Växjö / Shortcomings and measures in the production process in a distribution center : A case study at Staples AB in Växjö

Gredelj, Melita, Sadikaj, Emine January 2019
Title: Shortcomings and measures in the production process in a distribution center - A case study at Staples AB in Växjö. Authors: Melita Gredelj & Emine Sadikaj. Tutor: Hana Hulthén. Examiner: Peter Berling. Background: The most time- and labor-intensive process in the warehouse is the picking process, which comprises retrieving items from storage locations to fulfil customer orders. The picking process also accounts for a large part of the warehouse's costs, and it is therefore important to examine it in order to increase the efficiency of the organization. A large part of that cost goes to the workforce that performs the picking manually, and there is today great potential for making the process more automated. Purpose: The purpose of the study is to show what measures exist for shortcomings in manual and automated picking processes within companies that supply office materials. Method: The study is a case study at Staples. Data were collected through semi-structured interviews at Staples and through questionnaires answered by Atea and Office Depot. The goal of the study has been to find out which shortcomings exist in a picking process and how they can be remedied. Conclusions: A company that supplies office materials can streamline its picking process and remedy its picking errors by using automated solutions such as a WMS. This helps such companies integrate their systems, which supports the daily warehouse work by, for example, showing real-time information about what is in the warehouse and where it is located. With this system, warehouse workers do not need to pick from memory or experience but can always rely on the system, and implementing it can reduce costs sharply. Keywords: Logistics, Warehouse, Automated warehouse, Manual warehouse, Picking process, Picking, WMS, Manual shortcomings, Automated shortcomings / Title: Shortcomings and measures in the production process in a distribution center - A case study at Staples AB in Växjö. Authors: Melita Gredelj & Emine Sadikaj. Tutor: Hana Hulthén. Examiner: Peter Berling. Background: The most time- and labor-intensive process in the warehouse is the picking process, which includes retrieving objects from storage locations to fulfill customer orders. The picking process also accounts for a large part of the warehouse's costs, and it is therefore important to examine it to increase the efficiency of the organization. Much of that cost goes to the workforce that performs the work manually, and today there is great potential for making this process more automated. Purpose: The aim of the study is to show what measures are available for shortcomings in manual and automated picking processes within companies that supply office supplies. Method: The study is a case study at Staples. Data have been collected through semi-structured interviews at Staples and through questionnaires answered by Atea and Office Depot. The aim has been to find out which shortcomings exist in a picking process and how these can be fixed. Conclusions: A company that supplies office supplies can streamline its picking process and fix its picking errors by using automated solutions such as a WMS. This helps these companies integrate their systems, which will support the daily inventory work by, for example, displaying real-time information about what is in the warehouse and where it is located. With such a system, warehouse workers do not need to pick from memory or experience and can always rely on the system, and implementing it can reduce costs sharply. Keywords: Logistics, Warehouse, Automated Warehouse, Manual Warehouse, Picking Process, Picking, WMS, Manual Shortcomings, Automated Shortcomings
270

Systém pro řízení skladu / Warehouse management system

Kusák, Václav January 2011
The diploma thesis aims to explain, demonstrate and verify the basic practical principles and methods of a warehouse management system. The thesis can be used as an advanced guide to warehouse management system software. It draws on the author's own knowledge and experience, acquired through practical and purposeful work carried out in cooperation with ORYX GROUP Ltd., one of the few companies in the Czech Republic developing similar systems.
