411

Cross-region cloud redundancy: A comparison of a single-region and a multi-region approach

Lindén, Oskar January 2023 (has links)
In order to increase the resiliency and redundancy of a distributed system, it is common to keep standby systems and backups of data in different locations than the primary site, separated by a meaningful distance in order to tolerate local outages. Nasdaq has accomplished this by maintaining primary-standby pairs or primary-standby-disaster triplets with at least one system residing in a different site. The team at Nasdaq is experimenting with a redundant deployment scheme in Kubernetes with three availability zones, located within a single geographical region, in Amazon Web Services. They want to move the disaster zone to another geographical region in order to improve the redundancy and resiliency of the system. The aim of this thesis is to investigate how this could be done and to compare the different approaches. To compare the approaches, a simple observable model of the chain replication strategy is implemented. The model is deployed in an Elastic Kubernetes Service cluster on Amazon Web Services, using Helm. The supporting infrastructure is defined and created using Terraform. The model is evaluated through HTTP requests with different configurations and scenarios, to measure latency and throughput. The first scenario is a single user making HTTP requests to the system, and the second is multiple users making requests to the system. The results show that throughput is lower and latency is higher with the multi-region approach: in the single-producer case, the relative difference in median throughput is -54.41% and the relative difference in median latency is 119.20%. In the multi-producer case, the relative differences in median throughput and latency both shrink as the number of partitions in the system increases.
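As an illustration of the kind of measurement described above, the sketch below issues sequential HTTP requests against two endpoints and computes the relative difference in median latency and throughput. The URLs, request count, and endpoint names are hypothetical placeholders, not taken from the thesis.

```python
# Minimal latency/throughput probe in the spirit of the single- vs
# multi-region comparison; endpoint URLs are hypothetical placeholders.
import time
import statistics
import urllib.request

def probe(url: str, n: int = 100) -> tuple[float, float]:
    """Issue n sequential HTTP GETs; return (median latency in s, throughput in req/s)."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n):
        t0 = time.perf_counter()
        urllib.request.urlopen(url).read()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return statistics.median(latencies), n / elapsed

# Relative difference in medians, as reported in the abstract:
# rel_diff = (multi - single) / single * 100
single_lat, single_tp = probe("http://single-region.example/ingest")
multi_lat, multi_tp = probe("http://multi-region.example/ingest")
print(f"latency:    {100 * (multi_lat - single_lat) / single_lat:+.2f}%")
print(f"throughput: {100 * (multi_tp - single_tp) / single_tp:+.2f}%")
```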
412

RESTful API vs. GraphQL: a CRUD performance comparison

Niklasson, Alexander, Werèlius, Vincent January 2023 (has links)
The utilization of Application Programming Interfaces (APIs) has experienced significant growth due to the increasing number of applications being developed. APIs serve as a means to transfer data between different applications. While RESTful has been the standard API style since its emergence around 2000, it is now being challenged by Facebook's GraphQL, which was introduced in 2015. This study aims to fill a knowledge gap in the existing literature on API performance evaluation by extending the focus beyond read operations to include CREATE, UPDATE, and DELETE operations in both RESTful APIs and GraphQL. Previous studies have predominantly examined the performance of read operations, but there is a need to comprehensively understand the behavior and effectiveness of the remaining CRUD operations. To address this gap, we conducted a series of controlled experiments and analyses to evaluate the response time and RAM utilization of RESTful APIs and GraphQL when executing CREATE, UPDATE, and DELETE operations. We tested various scenarios and performance metrics to gain insights into the strengths and weaknesses of each approach. Our findings indicate that, contrary to our initial beliefs, there are no significant differences between the two API technologies in terms of CREATE, UPDATE, and DELETE operations. However, RESTful did slightly outperform GraphQL in the majority of tests. We also observed that GraphQL's inherent batching functionality resulted in faster response times and lower RAM usage throughout the tests. On the other hand, RESTful, despite its simpler queries, exhibited faster response times in GET operations, consistent with related work. Lastly, our findings suggest that RESTful uses slightly less RAM compared to GraphQL in the context of CREATE, UPDATE, and DELETE operations.
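A minimal sketch of one such timing run, assuming a REST endpoint and a GraphQL endpoint exposing the same "user" resource; the URLs, ports, and schema are illustrative stand-ins, not the thesis's actual test setup.

```python
# Sketch of timing CREATE operations against a REST endpoint and a GraphQL
# endpoint; both services and their schemas are hypothetical.
import time
import requests

def time_create_rest(n: int = 1000) -> float:
    t0 = time.perf_counter()
    for i in range(n):
        requests.post("http://localhost:8080/users",
                      json={"name": f"user{i}"})
    return time.perf_counter() - t0

def time_create_graphql(n: int = 1000) -> float:
    mutation = 'mutation($n: String!) { createUser(name: $n) { id } }'
    t0 = time.perf_counter()
    for i in range(n):
        requests.post("http://localhost:8081/graphql",
                      json={"query": mutation, "variables": {"n": f"user{i}"}})
    return time.perf_counter() - t0

print(f"REST CREATE:    {time_create_rest():.2f}s")
print(f"GraphQL CREATE: {time_create_graphql():.2f}s")
```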
413

Evaluating the Robustness of Resource Allocations Obtained through Performance Modeling with Stochastic Process Algebra

Srivastava, Srishti 09 May 2015 (has links)
Recent developments in the field of parallel and distributed computing have led to a proliferation of solutions to large and computationally intensive mathematical, scientific, or engineering problems that consist of several parallelizable parts and several non-parallelizable (sequential) parts. In a parallel and distributed computing environment, the performance goal is to optimize the execution of the parallelizable parts of an application on concurrent processors. This requires efficient application scheduling and resource allocation for mapping applications to a set of suitable parallel processors such that the overall performance goal is achieved. However, such computational environments are often prone to unpredictable variations in application (problem and algorithm) and system characteristics. Therefore, a robustness study is required to guarantee a desired level of performance. Given an initial workload, a mapping of applications to resources is considered robust if that mapping optimizes execution performance and guarantees a desired level of performance in the presence of unpredictable perturbations at runtime. In this research, a stochastic process algebra, Performance Evaluation Process Algebra (PEPA), is used for obtaining resource allocations via a numerical analysis of performance models of the parallel execution of applications on parallel computing resources. The PEPA performance model is translated into an underlying Markov chain model from which performance measures are obtained. Further, a robustness analysis of the allocation techniques is performed to find a robust mapping from a set of initial mapping schemes. The numerical analysis of the performance models has confirmed similarity with the simulation results of earlier research available in the existing literature. When compared to direct experiments and simulations, numerical models and the corresponding analyses are easier to reproduce, do not incur any setup or installation costs, do not impose any prerequisites for learning a simulation framework, and are not limited by the complexity of the underlying infrastructure or simulation libraries.
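For intuition about the numerical analysis step, the sketch below solves a toy continuous-time Markov chain of the kind a PEPA model reduces to, computing the steady-state distribution from a generator matrix. The three-state model and its rates are invented for illustration and are not taken from the dissertation.

```python
# Toy continuous-time Markov chain analysis: solve pi @ Q = 0 with
# sum(pi) = 1 for the steady-state distribution. The 3-state generator
# below (rows sum to zero) is illustrative only.
import numpy as np

Q = np.array([[-0.5,  0.5,  0.0],   # e.g. idle -> computing
              [ 0.0, -1.0,  1.0],   # computing -> communicating
              [ 2.0,  0.0, -2.0]])  # communicating -> idle

# Replace one (redundant) balance equation with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(len(Q))])
b = np.zeros(len(Q))
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print("steady state:", pi)  # long-run fraction of time in each state
```

From such a steady-state vector, throughput- and utilization-style performance measures follow by weighting the state probabilities with the activity rates of interest.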
414

Relationships Between Information Technology Skills and Performance Evaluation Scores of Mississippi State University Extension Service Agents

Loper, James R 09 December 2016 (has links)
A study was conducted to see whether the level of use, expertise, and problem-solving ability with information technology among Mississippi State University Extension agents was positively correlated with the performance quality of the agent as measured in the Mississippi State University Extension Service agent evaluation system. A second purpose was to examine how well agents self-assess their technology skills. Lastly, the study attempted to determine whether there was a set of factors (including information technology skills) that explained a substantial portion of the variation in performance evaluation scores. The results showed that the Mississippi State University Extension agent evaluation system does not consider agents' information technology skills and usage. It was also found that agents are fairly adept at self-assessing their technology skills. Lastly, no set of factors was found that would substantially explain performance evaluation ratings.
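For readers unfamiliar with the two analyses involved, the sketch below runs them on synthetic data: a Pearson correlation between a skill score and an evaluation score, and an R² from a least-squares fit over a set of candidate factors. All variables and effect sizes are made up for illustration.

```python
# Correlation and variance-explained sketch on synthetic data; the
# variables and the weak true effect are invented, not study data.
import numpy as np

rng = np.random.default_rng(0)
it_skill = rng.normal(50, 10, 200)
evaluation = 0.1 * it_skill + rng.normal(70, 5, 200)  # weak true effect

r = np.corrcoef(it_skill, evaluation)[0, 1]
print(f"Pearson r = {r:.3f}")

# Least-squares fit of evaluation on several factors; R^2 is the
# proportion of variance in evaluation scores the factors explain.
years = rng.integers(1, 30, 200)
X = np.column_stack([np.ones(200), it_skill, years])
beta, *_ = np.linalg.lstsq(X, evaluation, rcond=None)
resid = evaluation - X @ beta
r2 = 1 - resid.var() / evaluation.var()
print(f"R^2 = {r2:.3f}")
```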
415

Přinášejí podílové fondy nabízené v České republice hodnotu svým investorům? / Do mutual funds offered in Czech Republic add value to investors?

Nosek, Jiří January 2022 (has links)
We estimate the proportions of skilled, unskilled, and zero-alpha funds prevalent in the mutual fund population easily accessible to Czech investors. We estimate alphas from a regression against a concise set of Exchange Traded Funds and control for luck using the False Discovery Rate. We design a straightforward ETF selection algorithm and find that if investors adhere to simple diversification rules, they can outperform a large proportion of mutual funds. We further document a negative relationship between the performance of mutual funds and their total expense ratio, suggesting that portfolio managers are on average unable to compensate for their costs with better performance. JEL Classification: C12, C20, G12, G23. Keywords: Mutual Funds, Exchange Traded Funds, Performance evaluation.
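A minimal sketch of False Discovery Rate control in this setting, using the standard Benjamini-Hochberg step-up procedure on per-fund alpha p-values; the p-values and the FDR level are illustrative, and the thesis's exact luck-adjustment procedure may differ.

```python
# Benjamini-Hochberg step-up procedure: controls the expected fraction
# of false discoveries among funds classified as skilled/unskilled.
import numpy as np

def benjamini_hochberg(pvals: np.ndarray, q: float = 0.10) -> np.ndarray:
    """Return a boolean mask of rejected null hypotheses at FDR level q."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = q * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()        # largest i with p_(i) <= q*i/m
        rejected[order[: k + 1]] = True
    return rejected

# p-values from t-tests of each fund's alpha against zero (illustrative):
p = np.array([0.001, 0.008, 0.039, 0.041, 0.20, 0.57, 0.86])
print(benjamini_hochberg(p))  # funds whose alpha survives the FDR control
```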
416

Domain Expertise–Agnostic Feature Selection for the Analysis of Breast Cancer Data

Pozzoli, Susanna January 2019 (has links)
At present, high-dimensional data sets are becoming more and more frequent. The problem of feature selection has already become widespread, owing to the curse of dimensionality. Unfortunately, feature selection is largely based on ground truth and domain expertise. It is possible that ground truth and/or domain expertise will be unavailable; therefore, there is a growing need for unsupervised feature selection in multiple fields, such as marketing and proteomics. Now, unlike in the past, it is possible for biologists to measure the amount of protein in a cancer cell. No wonder the data is high-dimensional: the human body is composed of thousands and thousands of proteins. Intuitively, only a handful of proteins cause the onset of the disease. It might be desirable to cluster the cancer sufferers, but at the same time we want to find the proteins that produce good partitions. We hereby propose a methodology designed to find the features able to maximize the clustering performance. After we divided the proteins into different groups, we clustered the patients. Next, we evaluated the clustering performance. We developed a couple of pipelines. Whilst the first focuses its attention on the data provided by the laboratory, the second takes advantage both of external data on protein complexes and of the internal data. We set the threshold of clustering performance thanks to the biologists at Karolinska Institutet who contributed to the project. In the thesis we show how to make a good selection of features without domain expertise in the case of breast cancer data. This experiment illustrates how we can reach a clustering performance up to eight times better than the baseline with the aid of feature selection.
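A minimal sketch of the core selection loop, assuming candidate protein groups, KMeans clustering of patients, and the silhouette score as the clustering-performance measure; the groups, the score, and the data are placeholders rather than the thesis's actual pipeline.

```python
# Cluster patients on each candidate protein group and keep the group
# that maximizes a clustering-quality score; all data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
data = rng.normal(size=(120, 300))          # 120 patients x 300 proteins
groups = {f"group{i}": rng.choice(300, 15, replace=False) for i in range(10)}

best_name, best_score = None, -1.0
for name, cols in groups.items():
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data[:, cols])
    score = silhouette_score(data[:, cols], labels)
    if score > best_score:
        best_name, best_score = name, score

print(best_name, f"silhouette={best_score:.3f}")
```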
417

Investigating differences in response time and error rate between a monolithic and a microservice based architecture

Johansson, Gustav January 2019 (has links)
With great advancements in cloud computing, the microservice architecture has become a promising architectural style for enterprise software. It has been proposed to cope with problems of the traditional monolithic architecture, which include slow release cycles, limited scalability and low developer productivity. Therefore, this thesis aims to investigate the affordances and challenges of adopting microservices, as well as the difference in performance compared to the monolithic approach, at one of Sweden's largest banks, SEB (Skandinaviska Enskilda Banken). The investigation consisted of a literature study of research papers and official documentation on microservices. Moreover, two applications were developed and deployed using two different system architectures - a monolithic architecture and a microservice architecture. Performance tests were executed on both systems to gather quantitative data for analysis. The two metrics investigated in this study were response time and error rate. The results indicate that the microservice architecture has a significantly higher error rate and a slower response time than the monolithic approach, further strengthening the results of Ueda et al. [47] and Villamizar et al. [48]. The findings are then discussed with regard to the challenges and complexity involved in implementing distributed systems. From this study, it becomes clear that with a microservice architecture the complexity shifts from inside the application out towards the infrastructure. Therefore, microservices should not be seen as a silver bullet; rather, the choice of architecture is highly dependent on the scope of the project and the size of the organization.
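To make the comparison step concrete, the sketch below tests whether two samples of response times differ significantly and computes an error rate; the samples and counts are synthetic, and the abstract does not specify which statistical test the thesis used, so the Mann-Whitney U test here is an assumption.

```python
# Compare per-request response times from two deployments with a
# non-parametric test; both samples below are synthetic stand-ins.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
monolith_ms = rng.lognormal(mean=3.0, sigma=0.3, size=500)
microservice_ms = rng.lognormal(mean=3.2, sigma=0.3, size=500)

stat, p = mannwhitneyu(monolith_ms, microservice_ms, alternative="two-sided")
print(f"U={stat:.0f}, p={p:.2e}")

# Error rate is simply failed requests over total requests per system:
errors, total = 37, 5000   # hypothetical counts
print(f"error rate: {errors / total:.2%}")
```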
418

Deep Learning Optimization and Acceleration

Jiang, Beilei 08 1900 (has links)
The novelty of this dissertation is the optimization and acceleration of deep neural networks aimed at real-time predictions with minimal energy consumption. It consists of cross-layer optimization, output-directed dynamic quantization, and opportunistic near-data computation for deep neural network acceleration. On two datasets (CIFAR-10 and CIFAR-100), the proposed optimization and acceleration frameworks are tested using a variety of convolutional neural networks (e.g., LeNet-5, VGG-16, GoogLeNet, DenseNet, ResNet). Experimental results are promising when compared to other state-of-the-art deep neural network acceleration efforts in the literature.
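As a generic stand-in for the quantization theme, the sketch below applies textbook symmetric post-training int8 quantization to a weight tensor; the dissertation's output-directed dynamic quantization is more elaborate, so this is illustration only.

```python
# Symmetric per-tensor int8 quantization: w ~ scale * q, q in [-127, 127].
# A textbook illustration, not the dissertation's technique.
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize a float tensor to int8 plus a single float scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.default_rng(3).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale          # dequantized weights
print("max abs error:", np.abs(w - w_hat).max())
```

Storing `q` and `scale` instead of `w` cuts the memory footprint roughly fourfold, which is one source of the energy and latency savings such acceleration work targets.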
419

Work-Family Conflict and Performance Evaluations: Who Gets a Break?

Hickson, Kara 01 January 2008 (has links)
Forty percent of employed parents report that they experience work-family conflict (Galinsky, Bond, & Friedman, 1993). Work-family conflict (WFC) exists when role pressures from the work and family domains are mutually incompatible. WFC is associated with decreases in family, job, and life satisfaction and physical health; intention to quit one's job; and increases in workplace absenteeism. Women may be more impacted by WFC than men, as women report completing 65-80% of the child care (Sayer, 2001) and spend 80 hours per week fulfilling work and home responsibilities (Cowan, 1983). Research suggests that WFC can be reduced with social support, such as co-workers providing assistance when family interferes with work (Carlson & Perrewe, 1999). It is unclear whether parents 'get a break' or are penalized by co-workers. The purpose of the present study was to examine co-workers' reactions to individuals who experience WFC. Based on sex role theory and attribution theory, it was predicted that women, people who experience family interference with work, and those who have more control over the work interference would be helped less and evaluated more poorly on a team task than men, people who experience non-family related work interference, and those who have less control over the work interference. A laboratory experiment was conducted in which participants signed up for a team-based study. The teammate was a confederate who was late for the study. Teammate control over the tardiness (unexpected physician's visit versus forgotten physician's appointment), type of work conflict (self- versus family-related), and gender of the teammate were manipulated. After learning about the reasons for the tardiness of their teammate, the 218 participants (63% female; 59% Caucasian) decided whether to help the late teammate by completing a word sort task for them or letting the late teammate make up the work after the experiment. When the teammate arrived, the participants completed a team task and then evaluated the task performance of their teammate. None of the hypotheses were confirmed in this study. However, exploratory analyses showed that people who had more control over the tardiness were rated lower than people who had less control over the tardiness. Contrary to expectations, exploratory analyses also showed that men rated women who were late to the study for a family-related reason higher than women who were late due to a self-related reason. These findings suggest that male co-workers may give women a break when they experience family interference with work. Implications for future research and practice are discussed.
420

Enabling Peer-to-Peer Swarming for Multi-Commodity Dissemination

Menasche, Daniel Sadoc 13 May 2011 (has links)
Peer-to-peer swarming, as used by BitTorrent, is one of the de facto solutions for content dissemination in today's Internet. By leveraging resources provided by users, peer-to-peer swarming is a simple, scalable and efficient mechanism for content distribution. Although peer-to-peer swarming has been widely studied for a decade, prior work has focused on the dissemination of one commodity (a single file). This thesis focuses on the multi-commodity case. We have discovered through measurements that a vast number of publishers currently disseminate multiple files in a single swarm (a bundle). The first contribution of this thesis is a model for content availability. We use the model to show that, when publishers are intermittent, bundling K files increases content availability exponentially as a function of K. When there is a stable publisher, we consider content availability among peers (excluding the publisher). Our second contribution is an estimate of the dependency of peers on the stable publisher, which is useful for provisioning purposes as well as for deciding how to bundle. To this end, we propose a new metric, swarm self-sustainability, and present a model that yields swarm self-sustainability as a function of the file size, popularity, and service capacity of peers. Then, we investigate reciprocity and the use of barter among peers. As our third contribution, we prove that the loss of efficiency due to the download of unrequested content to enforce direct reciprocity, as opposed to indirect reciprocity, is at most two in a class of networks without relays. Finally, we study algorithmic and economic problems faced by enterprises who leverage swarming systems and who control prices and bundling strategies. As our fourth contribution, we present two formulations of the optimal bundling problem, and prove that one is NP-hard whereas the other is solvable by a greedy strategy.
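A toy calculation behind the exponential-availability claim, under the simplifying assumption that each file's swarm, taken alone, is empty (content unavailable while the publisher is offline) with independent probability u; a bundle of K files pools the requesters of all K files, so the bundled swarm is empty only when all K populations are absent. This is a simplification for intuition, not the thesis's model.

```python
# Unavailability of a K-file bundle under independent per-file swarms:
# it decays as u**K, i.e. exponentially in K. Numbers are illustrative.
u = 0.3   # per-file probability of an empty swarm (hypothetical)
for K in (1, 2, 4, 8):
    print(f"K={K}: bundle unavailability ~ {u**K:.6f}")
```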
