1 |
An Evaluation on Using Coarse-grained Events in an Event Sourcing Context and its Effects Compared to Fine-grained Events
Ye, Brian January 2017 (has links)
Introducing event sourcing to a system based on a model following Create, Read, Update and Delete (CRUD) operations can be challenging and requires an extensive rework of the current system. By introducing coarse-grained events it is possible to preserve the structure of the data in a CRUD model and still gain the benefits of event sourcing, avoiding an extensive rework of the system. This thesis investigates how large amounts of data can be handled with coarse-grained events while retaining the benefits of event sourcing, by comparing them with the conventional approach of using fine-grained events. The data examined is trade data fed into a data warehouse. Based on prior research, an event sourcing application is implemented for both coarse-grained and fine-grained events to measure the difference between the two event types. The comparison is limited to two metrics: latency and storage size. The application is verified with an error handler, example data and a profiler, to ensure that it has no unnecessary bottlenecks. The resulting performance of the two cases shows that fine-grained events have substantially higher latency than coarse-grained events in most cases, whereas the storage size is strictly smaller for fine-grained events.
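The granularity difference the thesis measures can be illustrated with a small sketch. The event shapes and field names below are illustrative assumptions, not the thesis implementation: a coarse-grained event carries a full row snapshot, while fine-grained events each record a single field change.

```python
# A trade row as it would appear in a CRUD table.
trade = {"id": 1, "instrument": "XYZ", "price": 100.0, "quantity": 10}

# Coarse-grained: one event carries the whole row snapshot.
coarse_event = {"type": "TradeUpdated", "payload": dict(trade)}

# Fine-grained: one event per changed field.
fine_events = [
    {"type": "TradePriceChanged", "trade_id": 1, "price": 101.5},
    {"type": "TradeQuantityChanged", "trade_id": 1, "quantity": 12},
]

def apply(state, event):
    """Replay a single event onto the current state."""
    if event["type"] == "TradeUpdated":  # coarse: replace the whole snapshot
        return dict(event["payload"])
    new_state = dict(state)              # fine: patch only the changed field
    for key in ("price", "quantity"):
        if key in event:
            new_state[key] = event[key]
    return new_state

state = {}
state = apply(state, coarse_event)
for ev in fine_events:
    state = apply(state, ev)
print(state)  # {'id': 1, 'instrument': 'XYZ', 'price': 101.5, 'quantity': 12}
```

The trade-off the thesis quantifies follows directly: one coarse event replaces many fine events (fewer appends, lower latency), but each coarse event stores the full row even for a one-field change (larger storage).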
|
2 |
MAINTAINING PARALLEL REALITIES IN CQRS AND EVENT SOURCING
Eschmann, Ehren Thomas 21 August 2017 (has links)
No description available.
|
3 |
Comparison between CRUD and CQRS in an event-driven system
Jansson, Rasmus January 2024 (has links)
In today's digitalised society, effective solutions for managing huge amounts of data are needed. An established design pattern used in many systems is CRUD. Handling data as events has become more popular over the years, but CRUD is not optimised for it. A possible replacement is CQRS, which is designed with events in mind. The purpose of this report is to examine whether CQRS can replace CRUD. The report shows that for an event-driven system using event sourcing, CQRS is recommended, because CQRS is more compatible with events than CRUD. CRUD is built around data-driven design and is therefore a better fit for other kinds of systems.
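The read/write split that distinguishes CQRS from CRUD can be sketched minimally as follows; the command, projection and query names are hypothetical, chosen only for illustration. Commands append events to a log, a projector builds a query-optimised view, and queries never touch the write side.

```python
# Write side: commands append events to an append-only log.
event_log = []

def handle_create_order(order_id, item):
    event_log.append({"type": "OrderCreated", "order_id": order_id, "item": item})

# Read side: a projection keeps a query-optimised view, updated from the log.
read_model = {}

def project(event):
    if event["type"] == "OrderCreated":
        read_model[event["order_id"]] = {"item": event["item"], "status": "open"}

# Query handler reads only from the read model, never the write model.
def get_order(order_id):
    return read_model.get(order_id)

handle_create_order(42, "tent")
for ev in event_log:
    project(ev)
print(get_order(42))  # {'item': 'tent', 'status': 'open'}
```

In a CRUD design the create call would instead write the row directly into the same table it is later queried from; the event log in between is what makes CQRS the natural fit for event-driven systems.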
|
4 |
Software Architecture for Optimizing the Use of Remotely Piloted Aircraft in Precision Agriculture Using Case-Based Reasoning
Mikami, Malcon Miranda 06 March 2017 (has links)
The use of remotely piloted aircraft (RPA) in precision agriculture (PA) is constantly evolving. It is a process comprising the following steps: determining the objective, capturing and processing images, and analysing the obtained data. Although software and hardware exist for those steps, the challenge remains the integration of the collected data, its reliability, and the interpretation of the resulting information for decision making. This work proposes a software architecture for the use of RPA in PA that allows all steps of the process to be performed, using the Domain-Driven Design and Command Query Responsibility Segregation architectural patterns. The architecture, when used by researchers, allows the integration of new image processing modules, using case-based reasoning (CBR) for their evaluation. The architecture was evaluated in a case study under the following aspects: the reliability of the implemented image processing methods and the adequacy of the CBR method in determining the best image processing algorithm for a specified objective. The results of the image processing algorithm estimating the Normalized Difference Vegetation Index (NDVI) were compared with the results obtained from field equipment (Greenseeker) and from processing with the Pix4D software. Both the software and the equipment produced similar results, with an average difference of about 7%. When simulating the choice of the best algorithm to be used automatically by the end user, the software correctly selected the algorithm with the highest probability of generating a correct outcome. In addition to the technical benefits, the developed architecture allows the results of field experiments, associated with the algorithms used, to feed the knowledge base of the software so that it generates better parametrizations in executions made by the end user. The architecture developed in this work made it possible to integrate the several steps involved in using RPAs in PA, making their use easier for researchers and end users. The joint use of the same environment by researchers and end users could be an interesting alternative for publishing new and better image processing methods, giving end users more reliable results and more information about their crops.
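The NDVI estimate compared against the Greenseeker and Pix4D results follows the standard formula NDVI = (NIR - RED) / (NIR + RED). The sketch below applies it to a single pixel pair; the reflectance values are illustrative, not data from the study.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel pair.

    Values range from -1 to 1; healthy vegetation typically falls
    roughly between 0.2 and 0.8.
    """
    denom = nir + red
    if denom == 0:
        return 0.0  # convention for pixels with no reflectance signal
    return (nir - red) / denom

# Illustrative near-infrared and red reflectance values.
print(round(ndvi(0.50, 0.08), 3))  # 0.724
```

In practice the computation runs per pixel over the whole orthomosaic, producing the index map whose averages were compared with the field equipment readings.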
|
5 |
Administrative Portal for Warehouse Software
Karabin, Štefan January 2019 (has links)
This diploma thesis focuses on designing and implementing a web solution to support the operation of a warehouse management system, as requested by the system provider. The theoretical part analyses existing approaches to the problematic parts of web application development, such as the design of the architecture and the method used to record settings and permissions. The final portal is built mainly with Microsoft technologies. The thesis concludes with an evaluation of the applicability of the solution from both technical and economic standpoints.
|
6 |
Sports Equipment Retailer Storage System Reengineering
Sváček, Radim January 2017 (has links)
The goal of this thesis is to analyse the processes in the company's warehouse, optimise them and create the backend of the warehouse information system. The application aims to track incoming goods, stock and dispatch. The system can communicate with the web services of delivery companies. It was implemented in PHP using Nette, Slim and Doctrine. The application was successfully implemented and tested.
|
7 |
A new programming model for enterprise software : Allowing for rapid adaption and supporting maintainability at scale
Höffl, Marc January 2017 (has links)
Companies are under constant pressure to adapt and improve their processes to stay competitive. Since most of their processes are handled by software, it also needs to change constantly. Those improvements and changes add up over time and increase the complexity of the system, which in turn prevents the company from adapting further. In order to change and improve existing business processes and their implementation within software, several stakeholders have to go through a long process. Current IT methodologies are not suitable for such a dynamic environment. The analysis of this change process shows that four software characteristics are important to speed it up: transparency, adaptability, testability and reparability. Transparency refers to the user's ability to understand what the system is doing, where and why. Adaptability is a mainly technical characteristic that indicates the capability of the system to evolve or change. Testability allows automated testing and validation of correctness without requiring manual checks. The last characteristic is reparability, which describes the possibility of bringing the system back into a consistent and correct state, even if erroneous software was deployed. An architecture and software development patterns are evaluated to build an overall programming model that provides these software characteristics. The overall architecture is based on microservices, which facilitate decoupling and maintainability for the software as well as for organisations. Command Query Responsibility Segregation (CQRS) decouples read from write operations and makes data changes explicit. With Event Sourcing, the system stores not only the current state but all historic events. It provides a built-in audit trail and is able to reproduce different scenarios for troubleshooting and testing. A demo process is defined and implemented in multiple prototypes. The design of the prototypes is based on the programming model. They are built in Javascript and implement microservices, CQRS and Event Sourcing. The prototypes show and validate how the programming model provides the software characteristics. Software built with the programming model allows companies to iterate faster at scale. Since the programming model is suited for complex processes, the main limitation is that the validation is based on a simpler demo process, and the benefits are hard to quantify.
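The Event Sourcing property described above, deriving state (including any past state) by replaying stored events, can be sketched as follows. The account-style events are an illustrative stand-in for the thesis's demo process, not its actual implementation.

```python
# Append-only event store: each event records what happened, never the
# current state. State is always derived by replaying events.
events = [
    {"seq": 1, "type": "Deposited", "amount": 100},
    {"seq": 2, "type": "Withdrawn", "amount": 30},
    {"seq": 3, "type": "Deposited", "amount": 50},
]

def replay(events, up_to=None):
    """Rebuild the balance from history; `up_to` yields the state at an
    earlier point in time, which is what enables the audit trail and
    reparability the thesis describes."""
    balance = 0
    for ev in events:
        if up_to is not None and ev["seq"] > up_to:
            break
        if ev["type"] == "Deposited":
            balance += ev["amount"]
        elif ev["type"] == "Withdrawn":
            balance -= ev["amount"]
    return balance

print(replay(events))           # 120 (current state)
print(replay(events, up_to=2))  # 70  (state as of event 2, for audit/repair)
```

Because the log is never mutated, deploying erroneous software cannot destroy history: a corrected `replay` can be run over the same events to bring the system back to a consistent state.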
|