  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world.
1

Performance Comparison Study of Clusters on Public Clouds / Prestandajämförelse av cluster på offentliga molnleverantörer

Wahlberg, Martin January 2019 (has links)
As cloud computing has become the more popular choice for hosting clusters in recent years, multiple providers offer their services to the public, such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure. Choosing a cluster provider is not only a choice of provider; it is also an indirect choice of cluster infrastructure. This indirect choice makes it important to consider potential differences in cluster performance caused by the infrastructure in combination with the workload type, as well as the cost of the infrastructure on the available public cloud providers. To evaluate whether there are significant differences in either cluster cost or performance between the available public cloud providers, a performance comparison study was conducted. The study consisted of multiple clusters hosted on Amazon Web Services and Google Cloud Platform. The clusters had access to five different instance types, each corresponding to a specific number of available cores and amount of memory and storage. All clusters executed a CPU-intensive, an I/O-intensive, and a MapReduce workload while their performance was monitored with regard to CPU, memory, and disk usage. The study revealed significant performance differences between clusters hosted on Amazon Web Services and Google Cloud Platform for the chosen workload types. Given these differences, the choice of provider is crucial, as it affects cluster performance. Comparing the selected instance types against each other with regard to performance and cost reveals that a subset of them have both better performance and lower cost. The instance types that are not part of this subset have either better performance or lower cost than their counterparts on the other provider.
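The kind of measurement described above can be illustrated with a small timing harness (this is an illustrative sketch, not code from the thesis; `cpu_workload` is a hypothetical stand-in for the CPU-intensive benchmark):

```python
import time

def time_workload(fn, *args, repeats=3):
    """Run a workload several times and report min and mean wall-clock time."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return {"min": min(times), "mean": sum(times) / len(times)}

def cpu_workload(n=100_000):
    """Hypothetical stand-in for a CPU-intensive benchmark: sum of squares."""
    return sum(i * i for i in range(n))

result = time_workload(cpu_workload)
```

Repeating each run and reporting the minimum as well as the mean helps separate the workload's cost from transient noise, which matters when comparing instance types across providers.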
2

Performance Comparison of Multi Agent Platforms in Wireless Sensor Networks.

Bösch, Bernhard January 2012 (has links)
The technology for realizing wireless sensors has been available for a long time, but thanks to progress in electrical engineering such sensors can nowadays be manufactured cost-effectively and in large numbers. This availability, and the possibility of creating cooperating wireless networks consisting of such sensor nodes, has led to the rapidly growing popularity of a technology named Wireless Sensor Networks (WSN). Its disadvantage is the high complexity of programming WSN-based applications, a result of their distributed and embedded character. To overcome this shortcoming, software agents have been identified as a suitable programming paradigm. The agent-based approach commonly uses a middleware for executing the software agents. This thesis compares such agent middleware with respect to their performance in the WSN domain. To this end, two prototype applications based on different agent models are implemented for a given set of middleware. After the implementation, measurements are extracted in various experiments, which provide information about the runtime performance of every middleware in the test set. In the following analysis, it is examined whether each middleware under test is suited for the implemented WSN applications. Thereupon, the results are discussed and compared with the author's expectations. Finally, a short outlook on further possible developments and improvements is presented.
3

A MapReduce Performance Study of XML Shredding

Lam, Wilma Samhita Samuel 20 October 2016 (has links)
No description available.
4

Software Agents for Dlnet Content Review: Study and Experimentation

Mitra, Seema 06 April 2007 (has links)
This research is an effort to test our hypothesis that a software-agent-based architecture will provide a better response time and will be more maintainable and reusable than the present J2EE-based architecture of DLNET (Digital Library Network for Engineering and Technology). We have taken a portion of the complete DLNET application, namely the Content Review Process, as our test bed. In this work, we have explored the use of software agents in the current setup of DLNET for the first time, specifically for the Content Review part of the application, and evaluated the performance of the resulting application. Our work is a novel approach to doing content review using a software agent architecture. The proposed system is an automated process that asynchronously looks for suitable reviewers based on content (the input) and creates logs for the administrator to view and analyze. In the first part of the thesis we develop a new system that is parallel to the existing DLNET Content Review Process. In the second part, we compare the newly developed Content Review Process with the baseline (the old Content Review Process) by designing comparison tests and measuring instruments. This part of the thesis includes the selection of dependent variables, the design of various measurement instruments, the execution of the quasi-experiments, and the analysis of the empirical results of the comparison tests. The quasi-experiments measure the response time, maintainability, scalability, correctness, reliability, and reusability of the two systems. The results show that the proposed software-agent-based system gives better response time (an improvement ranging from 57% to 82%) and is more maintainable (an improvement ranging from 16% to 67%) and more reusable (an improvement ranging from 1% to 26%).
The improvement in response time may be attributed to the fact that agent-based systems are inherently multithreaded, while the existing content review system is a serial application. Both systems, however, give comparable results for the other dependent variables. / Master of Science
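The multithreading argument in the last paragraph can be sketched in a few lines (not DLNET code; `review_task` is a hypothetical stand-in for one I/O-bound review step, such as a reviewer lookup):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def review_task(_):
    """Hypothetical stand-in for one I/O-bound content-review step."""
    time.sleep(0.05)  # e.g., waiting on a reviewer-database lookup

def serial(n=8):
    """Run the tasks one after another, as a serial application would."""
    start = time.perf_counter()
    for i in range(n):
        review_task(i)
    return time.perf_counter() - start

def threaded(n=8):
    """Run the tasks concurrently; the I/O waits overlap."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n) as pool:
        list(pool.map(review_task, range(n)))
    return time.perf_counter() - start
```

For I/O-bound work the serial version pays every wait in sequence, while the threaded version overlaps them, which is the same mechanism the abstract credits for the agent-based system's response-time gain.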
5

Uma análise comparativa de ambientes para Big Data: Apache Spark e HPAT / A comparative analysis for Big Data environments: Apache Spark and HPAT

Carvalho, Rafael Aquino de 16 April 2018 (has links)
This work compares the performance and stability of two Big Data processing frameworks: Apache Spark and the High Performance Analytics Toolkit (HPAT). The comparison was performed using two applications: a one-dimensional vector sum and the K-means clustering algorithm. The experiments were performed in distributed and shared-memory environments with different numbers and configurations of virtual machines. By analyzing the results, we conclude that HPAT outperforms Apache Spark in our case studies. We also provide an analysis of both frameworks in the presence of failures.
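The two benchmark applications are simple enough to sketch in pure Python, independent of either Spark or HPAT (an illustrative sketch, not the authors' code; the 1-D K-means here is a naive Lloyd's-algorithm variant):

```python
import random

def vector_sum(vec):
    """First benchmark: sum of the elements of a one-dimensional vector."""
    return sum(vec)

def kmeans_1d(points, k, iterations=10, seed=0):
    """Second benchmark: naive 1-D K-means (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: recompute each centroid as its cluster mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]
centers = kmeans_1d(data, k=2)
```

Both kernels stress different things when distributed: the vector sum is a pure reduction, while K-means alternates a data-parallel assignment step with a synchronizing centroid update, which is why the pair makes a reasonable minimal benchmark suite.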
6

LOGGNING AV INTERAKTION MED DATAINSAMLINGSMETODER FÖR WEBBEVENTLOGGNINGSVERKTYG : Experiment om påverkan i svarstider vid loggning av interaktionsdata / LOGGING OF INTERACTION WITH DATA COLLECTION METHODS FOR WEB EVENT LOGGING TOOLS : An experiment on the effect on response times when logging interaction data

Henriksson, William January 2018 (has links)
This study investigates the possible impact of web event logging tools for automated usability testing of user interaction. In an experiment, response times are measured while recorded interaction from the test subjects is replayed on the web application under test by web event logging tools with different data collection methods. The experiment comprises four groups consisting of three logging tools, implemented according to the sub-goals that were set. The implementation of the web event logging tools is informed by the study's pilot study; in order of their numbering, the tools log increasingly more user interaction, leading to an increasing amount of logged data in bytes. The results supported the hypothesis: the response time of the web application when a user interacts with the page did not increase noticeably, and there was no statistically significant difference when logging was performed compared with the current website.
7

Tvorba metodiky pro výkonové srovnání databázových systémů datových skladů / Development of a methodology for performance comparison of data warehouse systems

Ronovský, Jan January 2017 (has links)
This thesis focuses on developing a methodology for performance comparison of data warehouse systems. First, the thesis defines data warehouses at the various stages of development of an organization's BI and summarizes knowledge about data warehouses. The methodology developed in the thesis describes the architecture of a standardized data warehouse, the data flow between the areas of a data warehouse, and the processes within it; understanding these concepts is crucial for ensuring the methodology's applicability. The methodology offers a logical progression of steps that frames and includes the testing of data warehouse systems. The contribution of the thesis is a guide to what an organization must do when testing various data warehouse systems. It also describes how this testing should be done at a middle level of detail, the minimum level of abstraction possible given the methodology's wide applicability. The methodology offers a solution to the performance-comparison problem an organization faces when answering the question: which data warehouse system should we use? It does, however, assume some existing knowledge of the data warehouse's content.
8

Možnosti porovnávání výkonnosti databázového systému Oracle / The ways of comparing performance of Oracle databases

Mareček, Aleš January 2012 (has links)
This thesis examines ways of comparing the performance of Oracle databases. The need to compare performance arises from making changes in database systems, for example changes in database structures or database management systems. One goal of this thesis is a description of options and ways of comparing the performance of Oracle databases that do not involve any additional licensing costs. Because the obtained performance indicators need to be evaluated, part of this thesis deals with the design and implementation of a tool that allows analysis of the data and their evaluation through defined reports. The functionality of the tool is verified and demonstrated on data obtained from real databases. The main contribution of this thesis is the implementation of the tool, which significantly facilitates evaluating the performance impact of changes to be made in a production database.
10

A performance comparison on REST-APIs in Express.js, Flask and ASP.NET Core

Qvarnström, Eric, Jonsson, Max January 2022 (has links)
APIs can follow different architectures and standards, one of which is REST. REST stands for representational state transfer and is a commonly used architecture when implementing and creating APIs for the web. Choosing a web framework for a REST API implementation is not as trivial as one might think; there are many metrics to consider, one of which is performance. In this study, we compared the most used back-end web frameworks of 2021 (ASP.NET Core, Express.js, and Flask) to see which performs best in throughput, response time, and computer resource usage. Finding the best-performing framework will help future developers choose a framework in terms of performance. Selecting a good framework from the beginning is essential to avoid having to change frameworks later. To benchmark the different APIs, we conducted an experiment using JMeter, an open-source tool for testing the performance of websites and APIs. By varying the number of virtual users and the throughput, we were able to find the limit of each framework and its resource usage under different loads. We concluded that ASP.NET Core had the best performance in response time and throughput. Furthermore, ASP.NET Core had the most efficient memory utilization throughout the entire experiment, and at loads above 4,500 requests per second it was also the most CPU-efficient. Below 4,500 requests per second, Express.js was the most CPU-efficient framework but still used more memory than ASP.NET Core. According to our metrics, the performance of Flask was far behind Express.js and ASP.NET Core, and Flask should therefore not be considered a high-performance framework.
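The two core metrics the study reports, mean response time and throughput, can be sketched with a stdlib-only harness against a toy endpoint (not from the thesis, which used JMeter; the handler below is a hypothetical stand-in for a REST endpoint):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Stand-in REST endpoint returning a small JSON body."""
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def measure(url, requests=50):
    """Issue sequential GETs and report mean response time and throughput."""
    start = time.perf_counter()
    latencies = []
    for _ in range(requests):
        t0 = time.perf_counter()
        urllib.request.urlopen(url).read()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {"mean_latency_s": sum(latencies) / len(latencies),
            "throughput_rps": requests / elapsed}

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
stats = measure(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
```

A real load test would add concurrent virtual users and percentile latencies, which is exactly what JMeter's thread groups and aggregate report provide; this sketch only shows where the two headline numbers come from.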
