31 |
Monólise: Uma técnica para decomposição de aplicações monolíticas em microsserviços / Monolise: A technique for decomposing monolithic applications into microservices. Rocha, Diego Pereira da. 17 September 2018 (has links)
A recorrente necessidade de as empresas entregarem seus softwares em curto espaço de tempo e de forma contínua, combinada ao alto nível de exigência dos usuários, está fazendo a indústria, de um modo geral, repensar como devem ser desenvolvidas as aplicações para o mercado atual. Nesse cenário, microsserviços é o estilo arquitetural utilizado para modernizar as aplicações monolíticas. No entanto, o processo para decompor uma aplicação monolítica em microsserviços é ainda um desafio que precisa ser investigado, já que, na indústria, atualmente, não há uma estrutura padronizada para fazer a decomposição das aplicações. Encontrar uma técnica que permita definir o grau de granularidade de um microsserviço também é um tema que desperta discussão na área de Engenharia de Software. Partindo dessas considerações, este trabalho propôs a Monólise, uma técnica que utiliza um algoritmo chamado Monobreak, que possibilita decompor uma aplicação monolítica a partir de funcionalidades e também definir o grau de granularidade dos microsserviços a serem gerados. Nesta pesquisa, a Monólise foi avaliada através de um estudo de caso. Tal avaliação consistiu na comparação da decomposição realizada pela Monólise com a decomposição executada por um especialista na aplicação-alvo utilizada no estudo de caso. Essa comparação permitiu avaliar a efetividade da Monólise através de oito cenários realísticos de decomposição. O resultado dessa avaliação permitiu verificar as semelhanças e diferenças ao decompor uma aplicação monolítica em microsserviços de forma manual e a partir de uma técnica semiautomática. O desenvolvimento deste trabalho demonstrou que a técnica de Monólise apresenta-se com uma grande potencialidade na área de Engenharia de Software referente à decomposição de aplicações. Além disso, as considerações do estudo evidenciaram que essa técnica poderá ser um motivador para encorajar desenvolvedores e arquitetos na jornada de modernização de suas aplicações monolíticas em microsserviços bem como diminuir possíveis erros cometidos nessa atividade por profissionais com pouca experiência em decomposição de aplicações. / The recurring need for companies to deliver software quickly and continuously, combined with users' high expectations, is making the industry rethink how applications should be developed for the current market. In this scenario, microservices is the architectural style used to modernize monolithic applications. However, the process of decomposing a monolithic application into microservices is still a challenge that needs to be investigated, since industry currently has no standardized framework for decomposing applications. Finding a technique that defines the degree of granularity of a microservice is also a topic of active discussion in Software Engineering. Based on these considerations, this work proposes Monolise, a technique built around an algorithm called Monobreak that decomposes a monolithic application from its functionalities and also defines the degree of granularity of the microservices to be generated. In this research, Monolise was evaluated through a case study in which its decomposition was compared with the decomposition performed by a specialist in the target application. This comparison made it possible to evaluate the effectiveness of Monolise across eight realistic decomposition scenarios and to verify the similarities and differences between decomposing a monolithic application manually and with a semi-automatic technique. The work shows that the Monolise technique has great potential in Software Engineering with respect to application decomposition. In addition, the study's findings suggest that the technique can encourage developers and architects in the journey of modernizing their monolithic applications into microservices, as well as reduce the mistakes made in this activity by professionals with little experience in application decomposition.
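The record contains no code; as a rough illustration of the kind of functionality-driven decomposition the abstract describes (not the Monobreak algorithm itself, and with hypothetical class names, call graph and feature map), the sketch below groups a monolith's classes into candidate microservices by walking the call graph from each functionality's entry points.

```python
# Hypothetical sketch: group a monolith's classes into candidate microservices
# by reachability from the entry points of each functionality selected for
# extraction. This is NOT Monobreak, only an illustration of the idea.
from collections import defaultdict

# Assumed inputs: a static call graph (class -> classes it uses) and a map from
# each functionality to its entry-point classes.
call_graph = {
    "OrderController": ["OrderService"],
    "OrderService": ["OrderRepository", "PaymentGateway"],
    "CatalogController": ["CatalogService"],
    "CatalogService": ["ProductRepository"],
}
functionalities = {
    "ordering": ["OrderController"],
    "catalog": ["CatalogController"],
}

def reachable(entry_points, graph):
    """All classes reachable from the given entry points (depth-first walk)."""
    seen, stack = set(), list(entry_points)
    while stack:
        cls = stack.pop()
        if cls not in seen:
            seen.add(cls)
            stack.extend(graph.get(cls, []))
    return seen

candidates = {name: reachable(entries, call_graph)
              for name, entries in functionalities.items()}

# Classes pulled into more than one candidate hint at shared code whose placement
# (or duplication) controls the granularity of the resulting microservices.
shared = defaultdict(list)
for service, classes in candidates.items():
    for cls in classes:
        shared[cls].append(service)

for service, classes in candidates.items():
    print(service, "->", sorted(classes))
print("shared classes:", {c: s for c, s in shared.items() if len(s) > 1})
```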
|
32 |
Optimalizace čtení dat z distribuované databáze / Optimization of data reading from a distributed database. Kozlovský, Jiří. January 2019 (has links)
This thesis focuses on optimizing data reads from the distributed NoSQL database Apache HBase with regard to the desired data granularity. The assignment originated as a product request from Seznam.cz, a.s. (Reklama division, Sklik.cz cost center) to improve user experience by letting advertiser web application users filter aggregated statistical data when viewing entity performance history.
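No code accompanies the abstract; the sketch below is a minimal, hypothetical example (table name, row-key layout, column family and host are assumptions, not Sklik.cz's actual schema) of how read granularity can be controlled in HBase by keying pre-aggregated statistics per entity and day and scanning only the requested key range.

```python
# Hypothetical sketch of granularity-aware reads from HBase via the happybase
# client: daily pre-aggregated statistics are keyed as "<entity_id>#<yyyymmdd>",
# so one scan over a key range returns exactly the requested period without
# touching finer-grained raw data.
import happybase

connection = happybase.Connection("hbase-thrift-host")   # assumed Thrift gateway
table = connection.table("ad_stats_daily")               # assumed table name

entity_id = "campaign-42"                                 # assumed entity
start, stop = "20190101", "20190201"                      # January 2019

rows = table.scan(
    row_start=f"{entity_id}#{start}".encode(),
    row_stop=f"{entity_id}#{stop}".encode(),
    columns=[b"m:clicks", b"m:impressions"],              # assumed family/qualifiers
)

clicks = sum(int(data.get(b"m:clicks", b"0")) for _, data in rows)
print(f"{entity_id} clicks in January 2019: {clicks}")

connection.close()
```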
|
33 |
The run-time impact of business functionality when decomposing and adopting the microservice architecture / Påverkan av körtid för system funktionaliteter då de upplöses och microservice architektur appliceras. Faradj, Rasti. January 2018 (has links)
As software grows, code bases are getting bigger and more complex, and the architectural patterns that systems rely upon are becoming increasingly important. Decomposed architectural styles have recently become a popular choice. This thesis explores system behavior with respect to the granularity of decomposition and the external communication between the resulting services. An e-commerce scenario was modeled and implemented at different granularity levels to measure response time, using both REST with HTTP and JSON and the gRPC framework to establish the communication. The results show that decomposition affects run-time behavior through the added external communication: the highest granularity level, implemented with gRPC, adds 10 ms. In the context of how the web behaves today this can be considered feasible, although whether it is theoretically desirable remains an open question. / I linje med de växande mjukvarusystemen blir kodbaserna större och mer komplexa. Arkitekturerna som systemen bygger på får allt större betydelse. Detta examensarbete utforskar hur upplösning av system som tillämpar mikroservicearkitektur beter sig, och hur de påverkas av kommunikationsupprättande bland de upplösta och resulterande tjänsterna. Ett e-handelsscenario modelleras i olika granularitetsnivåer där REST med HTTP och JSON samt gRPC används för att upprätta kommunikationen. Resultaten visar att upplösningen påverkar runtimebeteendet och den externa kommunikationen blir långsammare. En möjlig slutsats är att påverkan från den externa kommunikationen i förhållande till hur webben beter sig idag är acceptabel. Men om man ska ligga inom teoretiskt optimala gränser kan påverkan ses som för stor.
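As a rough companion to the measurement idea (endpoints, granularity levels and sample counts here are hypothetical, and only the REST/HTTP side is shown, not gRPC), the sketch below times an end-to-end request as the number of synchronous hops between decomposed services grows.

```python
# Hypothetical sketch of the measurement setup: time how much each extra
# synchronous REST hop between decomposed services adds to the end-to-end
# response. URLs are placeholders; in the thesis the services model an
# e-commerce flow.
import time
import statistics
import requests

# One URL per granularity level: each endpoint internally calls the next
# service in the chain, so deeper chains mean more external communication.
ENDPOINTS = {
    "monolith (0 hops)": "http://localhost:8000/checkout",
    "coarse (2 hops)": "http://localhost:8001/checkout",
    "fine (5 hops)": "http://localhost:8002/checkout",
}

def measure(url: str, samples: int = 50) -> float:
    """Median response time in milliseconds over a number of warm requests."""
    timings = []
    for _ in range(samples):
        t0 = time.perf_counter()
        requests.get(url, timeout=5).raise_for_status()
        timings.append((time.perf_counter() - t0) * 1000)
    return statistics.median(timings)

if __name__ == "__main__":
    for label, url in ENDPOINTS.items():
        print(f"{label}: {measure(url):.1f} ms median")
```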
|
34 |
Advanced middleware support for distributed data-intensive applications. Du, Wei. 12 September 2005 (has links)
No description available.
|
35 |
Tools and Techniques for Evaluating Reliability Trade-offs for Nano-Architectures. Bhaduri, Debayan. 20 May 2004 (has links)
It is expected that nano-scale devices and interconnections will introduce unprecedented levels of defects in the substrates, and architectural designs need to accommodate the uncertainty inherent at such scales. This consideration motivates the search for new architectural paradigms based on redundancy-based defect-tolerant designs. However, redundancy is not always a solution to the reliability problem; too much or too little redundancy may itself degrade reliability. The key challenge is determining the granularity at which defect tolerance is designed and the level of redundancy needed to achieve a specific level of reliability. Analytical probabilistic models for evaluating such reliability-redundancy trade-offs are error prone and cumbersome, and do not scale well for complex networks of gates. In this thesis we develop tools and techniques that evaluate reliability measures of combinational circuits and can be used to analyze reliability-redundancy trade-offs for different defect-tolerant architectural configurations. In particular, we have developed two tools: NANOPRISM, based on probabilistic model checking, and NANOLAB, a MATLAB-based tool. We also illustrate the effectiveness of these reliability analysis tools by pointing out counter-intuitive anomalies that they easily reveal, thereby providing better insight into defect-tolerant design decisions. We believe these tools will further research and pedagogical interests in this area, expedite the reliability analysis process and improve the accuracy of establishing reliability-redundancy trade-off points. / Master of Science
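As a toy illustration of the trade-off these tools explore (not NANOPRISM or NANOLAB themselves, and with made-up reliability values), the sketch below shows that majority-voted triple modular redundancy only improves on a single module once the module reliability is high enough; below that point the added redundancy hurts.

```python
# Minimal sketch of a reliability-redundancy trade-off: triple modular
# redundancy (TMR) with a majority voter only helps when the individual
# module reliability r exceeds 0.5. Values below are illustrative.

def tmr_reliability(r: float, voter_reliability: float = 1.0) -> float:
    """Probability that a TMR block produces the correct output.

    At least two of the three replicated modules must be correct,
    and the majority voter itself must not fail.
    """
    majority_ok = 3 * r**2 * (1 - r) + r**3   # exactly two correct, or all three
    return voter_reliability * majority_ok

if __name__ == "__main__":
    for r in (0.30, 0.50, 0.70, 0.90, 0.99):
        single = r
        tmr = tmr_reliability(r)
        verdict = "redundancy helps" if tmr > single else "redundancy hurts"
        print(f"module r={r:.2f}  single={single:.3f}  TMR={tmr:.3f}  {verdict}")
```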
|
36 |
多期數之信用風險違約機率驗證法 / The Calibration Method of Probability of Default under Multiple Periods. 林福文, Lin, Fu-Wen. Unknown Date (has links)
新巴賽爾資協定中,針對銀行風險管理具備三大支柱,支柱一管理信用風險、市場風險及作業風險,其中信用風險方法更分為標準法、基礎內部模型法與進階內部模型法。不論銀行採用何種內部模型法,銀行必須有估計違約機率之能力,並且送交監理機關審查核准。為了確保預測違約機率之適當,巴賽爾銀行監理委員會BCBS (2005) 對於不同資料長度與驗證期間分別建議二項檢定、卡方檢定、常態檢定與紅綠燈檢定。當資料期數足夠時,BCBS推薦使用紅綠燈檢定,但該檢定需要若干假設:違約事件間相互獨立且違約事件在時間上亦獨立,因此在BCBS (2005) 中之某些情境下,採用紅綠燈檢定驗證違約機率會受到違約事件之間並非獨立,造成中央極限定理不適當地近似標準化之違約機率至常態分配,且模擬之型一誤差亦有高估之結果。
在違約事件之間獨立且無時間相關性下，本文建議採用卜瓦松分配近似二項分配；在違約事件之間非獨立且具有時間相關性下，本文則建議採用二項分配，結合granularity adjustment，使違約事件間之相關性可以反映在不同顏色之分色點上。最後，由數量模擬結果顯示：本文建議採用之改良方法，皆可有效將型一誤差維持在設定之顯著水準上，並反映真實之檢定力。因此，不論對銀行或監理機關來說，改良之違約機率驗證方法係值得使用之方法。 / Basel II provides three methods (the Standardized Approach, the Foundation IRB Approach and the Advanced IRB Approach) for calculating capital charges. Banks that use an IRB approach must estimate the probability of default (PD). BCBS recommends four statistical methods for validating the PD: the Binomial test, the Chi-square test, the Normal test and the Extended Traffic Lights test (ETLT). If the data history is long enough, BCBS recommends the ETLT, under the assumptions that obligors are independent of one another and independent over time. Numerical results show that validating PDs with the ETLT overestimates the type I error and the statistical power.
We suggest two methods, for different scenarios, that keep the type I error close to the chosen significance level. First, when default events are independent and uncorrelated over time, we suggest approximating the Binomial distribution with a Poisson distribution combined with a randomization technique. Second, when default events are correlated, we combine the Binomial distribution with a granularity adjustment so that the correlation between obligors is reflected in the traffic-light cut-off points. Both methods not only control the type I error well but also reflect the true statistical power. For banks and supervisors alike, the improved PD validation methods are worth using, helping avoid unexpected increases in capital charges or in banks' operational risk.
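As a small numerical illustration of the validation setup (portfolio size, forecast PD and default count are invented, and the granularity adjustment for correlated obligors is omitted), the sketch below runs the exact binomial test and its Poisson approximation on a single rating bucket.

```python
# Hypothetical one-bucket PD validation: one-sided binomial test of observed
# defaults against the forecast PD, plus the Poisson approximation suggested
# for the independent, time-uncorrelated case.
from scipy.stats import binom, poisson

n_obligors = 1000        # rated obligors in the bucket (assumed)
pd_forecast = 0.02       # predicted probability of default (assumed)
observed_defaults = 31   # defaults actually observed (assumed)

# Exact binomial p-value: probability of seeing at least this many defaults
# if the forecast PD were correct.
p_binom = binom.sf(observed_defaults - 1, n_obligors, pd_forecast)

# Poisson approximation with the same mean, useful when n is large and PD small.
p_pois = poisson.sf(observed_defaults - 1, n_obligors * pd_forecast)

print(f"binomial p-value: {p_binom:.4f}, Poisson approximation: {p_pois:.4f}")
# A p-value below the chosen significance level flags the PD as underestimated.
```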
|
37 |
Návrh změn informačního systému firmy / Proposal for Changes in the Company Information System. Keclík, David. January 2011 (has links)
The main aim of this diploma thesis is to propose changes to the company's information system. The thesis focuses on the analysis, implementation and use of a data warehouse and on applying advanced optimization methods to deliver added value to users. The first part focuses on choosing a suitable data warehouse concept, its implementation and possible problems. The second part compares the benefits of this system with the costs of its development and administration.
|
38 |
Subtle Semblances of Sorrow: Exploring Music, Emotional Theory, and Methodology. Warrenburg, Lindsay Alison. January 2019 (has links)
No description available.
|
39 |
Allostas, interoception och emotionell granularitet i psykologisk behandling av emotionell problematik: en litteraturstudie / Allostasis, Interoception and Emotional Granularity in Psychological Treatment of Emotional Problems: A Literature Study. Cederfjärd, Christina; Schroderus, Ramona. January 2022 (has links)
För forskning om emotioner och emotionsbehandling är det en spännande tid. Nyare hjärnforskning har öppnat nya möjligheter för förståelsen av hjärnan och emotioner. En ny teori inom psykologisk konstruktionism, som utgår från hjärnforskningen, tvärvetenskapliga studier och bristerna i den dominerande affektteorin, basic emotion, är Theory of constructed emotion (TCE). Ännu finns ingen behandlingsmodell kopplad till TCE men utifrån dess fokus att beskriva hur hjärnan fungerar och emotioner skapas är det ändå intressant att undersöka om dess verksamma mekanismer. Detta är en översiktlig litteraturstudie med syfte att undersöka begreppen; allostas, interoception och emotionell granularitet i sammanhanget psykisk ohälsa och psykologisk behandling av emotionell problematik. Resultatet visar att allostas, kroppsbudgeten, är grunden i vår fysiska och psykiska mående, interoception och hjärnans prediktioner är viktiga för vår förmåga att förstå våra emotioner och emotionell granularitet hjälper oss att konstruera finkorniga emotionskoncept vilket hjälper oss att välja rätt handling vid rätt tillfälle till rätt emotion. Att träna upp interoception och emotionell granularitet hjälper oss att vidmakthålla psykisk hälsa och är bra psykoedukativa inslag i psykologisk behandling. Mer forskning behövs, främst kring hur man kan tillämpa begreppen i psykologisk behandling samt generellt för att bättre kunna integrera ny emotionsforskning med dominerande teorier för en gemensam förståelse för emotioner och psykoterapi. / This is an exciting time for research on emotions and emotion processing. Recent brain research has opened new possibilities for understanding the brain and emotions. The Theory of Constructed Emotion (TCE) is a new theory within psychological constructionism, based on brain research, interdisciplinary studies and the shortcomings of the dominant affect theory, basic emotion theory. No treatment model has yet been linked to TCE, but given its focus on describing how the brain works and how emotions are created, its active mechanisms are still worth investigating. This is a general literature study with the aim of examining the concepts of allostasis, interoception and emotional granularity in the context of mental illness and psychological treatment of emotional problems. The results show that allostasis, the body budget, is the foundation of our physical and mental well-being; interoception and the brain's predictions are important for our ability to understand our emotions; and emotional granularity helps us construct fine-grained emotion concepts, which helps us choose the right action at the right time for the right emotion. Training interoception and emotional granularity helps us maintain mental health and makes a good psychoeducational element in psychological treatment. More research is needed, primarily on how to apply these concepts in psychological treatment and, more generally, on integrating new emotion research with the dominant theories toward a common understanding of emotions and psychotherapy.
|
40 |
應用記憶體內運算於多維度多顆粒度資料探勘之研究―以醫療服務創新為例 / A Research Into In-memory Computing In Multidimensional, Multi-granularity Data Mining ― With Healthcare Services Innovation. 朱家棋, Chu, Chia Chi. Unknown Date (has links)
全球面臨人口老化與人口不斷成長的壓力下,對於醫療服務的需求不斷提升。醫療服務領域中常以資料探勘「關聯規則」分析,挖掘隱藏在龐大的醫學資料庫中的知識(knowledge),以支援臨床決策或創新醫療服務。隨著醫療服務與應用推陳出新(如,電子健康紀錄或行動醫療等),與醫療機構因應政府政策需長期保存大量病患資料,讓醫療領域面臨如何有效的處理巨量資料。
然而傳統的關聯規則演算法,其效能上受到相當大的限制。因此,許多研究提出將關聯規則演算法,在分散式環境中,以Hadoop MapReduce框架實現平行化處理巨量資料運算。其相較於單節點 (single-node) 的運算速度確實有大幅提升。但實際上,MapReduce並不適用於需要密集迭帶運算的關聯規則演算法。
本研究藉由Spark記憶體內運算框架，在分散式叢集上實現平行化挖掘多維度多顆粒度挖掘關聯規則，實驗結果可以歸納出下列三點。第一點，當資料規模小時，由於平行化將資料流程分為Map與Reduce處理，因此在小規模資料處理上沒有太大的效益。第二點，當資料規模大時，平行化策略模式與單機版有明顯大幅度差異，整體運行時間相差100倍之多；然而當項目個數大於1萬個時，單機版因記憶體不足而無法運行，但平行化策略依舊可以運行。第三點，整體而言Spark雖然在小規模處理上略慢於單機版的速度，但其運行時間仍小於Hadoop的4倍。大規模處理速度上Spark依舊優於Hadoop版本。因此，在處理大規模資料時，就運算效能與擴充彈性而言，Spark都為最佳化解決方案。 / Facing global population aging and continued population growth, the demand for healthcare services keeps rising. Healthcare commonly applies association rule mining to uncover knowledge hidden in large medical databases in order to support clinical decisions or innovative medical services. With new healthcare services and applications (such as electronic health records and mobile health) emerging, and with medical institutions required by government policy to preserve large volumes of patient data over long periods, the field faces the challenge of processing huge amounts of data effectively.
However, traditional association rule algorithms face considerable performance limitations. Many studies therefore implement association rule mining in a distributed environment, using the Hadoop MapReduce framework to parallelize the processing of huge data volumes; this is indeed much faster than single-node computation. In practice, however, MapReduce is ill-suited to association rule algorithms that require intensive iterative computation.
This research uses the Spark in-memory computing framework to parallelize multidimensional, multi-granularity association rule mining on a distributed cluster. The experimental results can be summarized in three points. First, when the data set is small, parallelization brings little benefit, because splitting the processing into Map and Reduce stages adds overhead. Second, when the data set is large, the parallel strategy differs sharply from the single-machine version, with overall running times differing by a factor of up to 100; moreover, when the number of items exceeds 10,000, the single-machine version cannot run because of insufficient memory, while the parallel strategy still can. Third, although Spark is somewhat slower than the single-machine version for small-scale processing, its running time is still less than four times that of Hadoop, and for large-scale processing Spark still outperforms the Hadoop version. Therefore, for large data sets, Spark is the best solution in terms of both computational performance and scalability.
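As a minimal illustration of in-memory, parallel association rule mining on Spark (using Spark's built-in FP-Growth rather than the thesis's own multidimensional, multi-granularity algorithm, and with invented healthcare transactions and thresholds), consider the sketch below.

```python
# Minimal sketch of in-memory, parallel association rule mining with Spark's
# FP-Growth. Item names and thresholds are illustrative, not those of the thesis.
from pyspark.sql import SparkSession
from pyspark.ml.fpm import FPGrowth

spark = SparkSession.builder.appName("assoc-rules-sketch").getOrCreate()

# Each row is one "transaction", e.g. the diagnoses/services recorded for a visit.
data = spark.createDataFrame(
    [
        (0, ["diabetes", "eye_exam", "hba1c_test"]),
        (1, ["diabetes", "hba1c_test"]),
        (2, ["hypertension", "ecg"]),
        (3, ["diabetes", "eye_exam"]),
    ],
    ["id", "items"],
)

fp = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
model = fp.fit(data)           # the fit runs in parallel across the cluster, in memory

model.freqItemsets.show()      # frequent itemsets with their counts
model.associationRules.show()  # rules with antecedent, consequent, confidence, lift

spark.stop()
```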
|