441

Performance Evaluation of Stereo Reconstruction Algorithms on NIR Images

Vidas, Dario January 2016 (has links)
Stereo vision is one of the most active research areas in computer vision. While hundreds of stereo reconstruction algorithms have been developed, little work has been done on evaluating them, and almost none on evaluation with Near-Infrared (NIR) images. From almost a hundred algorithms examined, we selected a set of 15, mostly with real-time performance, which were then categorized and evaluated on several NIR image datasets, including single-stereo-pair and stream datasets. The accuracy and run time of each algorithm were measured and compared, giving insight into which categories of algorithms perform best on NIR images and which algorithms may be candidates for real-time applications. Our comparison indicates that adaptive support-weight and belief propagation algorithms have the highest accuracy of all fast methods, but also longer run times (2-3 seconds). On the other hand, faster algorithms (those achieving 30 or more fps on a single thread) usually perform an order of magnitude worse when measured by the percentage of incorrectly computed pixels.
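A common accuracy metric in such evaluations, and the one cited above, is the percentage of pixels whose estimated disparity deviates from ground truth by more than a threshold. A minimal Python sketch of that metric; the 1 px threshold and the toy arrays are illustrative assumptions, not values from the thesis:

import numpy as np

def bad_pixel_rate(disparity, ground_truth, threshold=1.0):
    # Percentage of pixels whose disparity error exceeds `threshold`.
    # Pixels without ground truth (NaN) are excluded from the count.
    valid = ~np.isnan(ground_truth)
    errors = np.abs(disparity[valid] - ground_truth[valid])
    return 100.0 * np.mean(errors > threshold)

# Toy example: a 2x3 disparity map compared against ground truth.
est = np.array([[10.0, 12.5, 7.0], [3.0, 4.0, 9.0]])
gt = np.array([[10.2, 14.0, 7.1], [3.1, np.nan, 9.0]])
print(f"bad pixels: {bad_pixel_rate(est, gt):.1f}%")  # -> 20.0%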
442

Performance of a Micro-CT System : Characterisation of Hamamatsu X-ray source L10951-04 and flat panel C7942CA-22

Baumann, Michael January 2014 (has links)
This master's thesis evaluated the performance of a micro-CT system consisting of the Hamamatsu microfocus X-ray source L10951-04 and the CMOS flat panel C7942CA-22. The X-ray source and flat panel were characterised in terms of dark current, image noise and beam profile. Additionally, the micro-CT system's spatial resolution, detector lag and detector X-ray response were measured. Guidance for full image correction and methods for characterisation and performance testing of the X-ray source and detector are presented. A spatial resolution of 7 lp/mm at 10 % MTF was measured. A detector lag of 0.3 % was observed after ten minutes of radiation exposure. The performance of the micro-CT system was found to be sufficient for high-resolution X-ray imaging. However, the detector lag effect is strong enough to reduce image quality during subsequent image acquisition and must either be avoided or corrected for.
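Detector lag of the kind reported above is commonly quantified as the residual signal in the first frame after the beam is switched off, expressed as a percentage of the steady-state signal. A hedged Python sketch of that computation; the counts and dark level are invented for illustration:

import numpy as np

def detector_lag_percent(post_off_frames, steady_signal, dark_level=0.0):
    # First-frame residual after beam-off as a percentage of the
    # steady-state signal, with the dark-current offset subtracted.
    residual = post_off_frames[0] - dark_level
    return 100.0 * residual / (steady_signal - dark_level)

# Toy numbers: 4000 counts at steady state, 112 counts in the first
# post-exposure frame, 100 counts of dark offset.
post = np.array([112.0, 104.0, 101.0])
print(f"lag: {detector_lag_percent(post, 4000.0, dark_level=100.0):.2f}%")
# -> lag: 0.31%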
443

Service Management for P2P Energy Sharing Scenarios Using Blockchain: Identification of Performance of Computational Efforts

Patha, Ragadeep January 2022 (has links)
Peer-to-peer (P2P) energy trading enables prosumers and consumers to trade their energy as a simple service, giving energy users access to a surplus share of energy without interruptions [1]. For larger-scale deployment of P2P energy services, however, allocating resources for energy trading transactions remains challenging to model. Blockchain technology, a distributed ledger system that provides a secure way of sharing information between the peers of a network, is therefore suitable for the proposed P2P energy trading model and its larger-scale deployments. This thesis provides an initial implementation of a P2P energy trading model using blockchain and measures the computational performance of the implemented model. A literature review is conducted to collect previous studies on P2P energy trading with blockchain and its performance evaluation. The technologies relevant to the thesis are then described, and from the literature the required models are identified and used to propose the system model of the thesis. The implemented system model is analyzed under different computational loads for the service management functions. For generating transactions, a Fabric client SDK is created, which ensures that each transaction communicates with the blockchain's smart contract for secure processing. Finally, after measuring the computational effort, the performance outcomes for the measured computational parameters are observed so that the system's behavior can be analyzed while transactions take place between peers using the specific blockchain technology.
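The measurement loop described above can be sketched generically in Python: submit transactions through a client, record per-transaction latency, and derive throughput. The submit_transaction function below is a hypothetical placeholder, not an actual Hyperledger Fabric SDK call:

import time
import statistics

def submit_transaction(payload):
    # Hypothetical stand-in for a Fabric client SDK invocation of the
    # chaincode (smart contract); replace with the real SDK call.
    time.sleep(0.02)  # simulate endorsement and ordering delay

def measure(n_transactions=100):
    latencies = []
    start = time.perf_counter()
    for i in range(n_transactions):
        t0 = time.perf_counter()
        submit_transaction({"trade_id": i, "kwh": 1.5})
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    print(f"throughput: {n_transactions / elapsed:.1f} tx/s")
    print(f"median latency: {1000 * statistics.median(latencies):.1f} ms")

measure()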
444

Computer systems in airborne radar : Virtualization and load balancing of nodes

Isenstierna, Tobias, Popovic, Stefan January 2019 (has links)
Introduction. For hardware used in today's radar systems, technology is evolving at an increasing rate. For existing radar-system software that relies on specific drivers or hardware, this quickly becomes a problem: when the required hardware is no longer produced or becomes outdated, compatibility problems emerge between the new hardware and the existing software. This research explores whether virtualization technology can help solve this problem. Would it be possible to address the compatibility problem with hypervisor solutions while also maintaining high performance? Objectives. The aim of this research is to explore virtualization technology, with a focus on hypervisors, to improve the way hardware and software cooperate within a radar system. The research investigates whether compatibility problems between new hardware and already existing software can be solved, while also analysing the performance of virtual solutions compared to non-virtualized ones. Methods. The proposed method is an experiment in which the two hypervisors Xen and KVM are analysed. The hypervisors run on two different systems. A native environment with similarities to a radar system is built and then compared with the same system with hypervisor solutions applied. Research in the area of virtualization is conducted with a focus on security, hypervisor features and compatibility. Results. The results present a proposed virtual environment setup with the hypervisors installed. To address the compatibility issue, an old operating system is used to prove that the implemented virtualization works. Finally, performance results are presented for the native environment compared against a virtual environment. Conclusions. From the benchmark results, we can see that individual performance may vary, which is to be expected on different hardware. A virtual setup has been built, including the Xen and KVM hypervisors, together with NAS communication. By running an old operating system as a virtual guest, compatibility has been shown to exist between software and hardware using KVM as the virtual solution. From the results gathered, KVM appears to be a good solution for further investigation.
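The native-versus-hypervisor comparison described above boils down to computing relative overhead from paired benchmark scores. A minimal Python sketch; the scores are invented placeholders, not results from the thesis:

import statistics

# Hypothetical benchmark scores (higher is better), e.g. ops/sec from
# repeated runs on native hardware and inside a KVM guest.
native = [1020, 1005, 998, 1011, 1016]
kvm = [951, 948, 960, 944, 955]

def overhead_percent(baseline, virtualized):
    # Relative performance loss of the virtualized runs vs. native.
    b, v = statistics.median(baseline), statistics.median(virtualized)
    return 100.0 * (b - v) / b

print(f"KVM overhead: {overhead_percent(native, kvm):.1f}%")  # -> 5.9%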
445

A Data Driven Approach for Cache Performance Modeling

Olmos Marchant, Luis Felipe 30 May 2016 (has links)
The need to distribute massive quantities of multimedia content to multiple users has increased tremendously in the last decade. The current solution to this ever-growing demand is Content Delivery Networks (CDNs), an application-layer architecture that handles the majority of today's multimedia traffic. This distribution problem has also motivated the study of new solutions such as the Information Centric Networking paradigm, whose aim is to add content delivery capabilities to the network layer by decoupling data from its location. In both architectures, cache servers play a key role, allowing efficient use of network resources for content delivery. As a consequence, the study of cache performance evaluation techniques has found new momentum in recent years. In this dissertation, we propose a framework for the performance modeling of a cache ruled by the Least Recently Used (LRU) discipline. Our framework is data-driven since, in addition to the usual mathematical analysis, we address two additional data-related problems: the first is to propose a model that is a priori both simple and representative of the essential features of the measured traffic; the second is the estimation of the model parameters starting from traffic traces. The contributions of this thesis concern each of the above tasks. In particular, for our first contribution, we propose a parsimonious traffic model featuring a document catalog that evolves in time. We achieve this by allowing each document to be available for a limited (random) period of time. To make a sensible proposal, we apply the "semi-experimental" method to real data. These semi-experiments consist of two phases: first, we randomize the traffic trace to break specific dependence structures in the request sequence; second, we simulate an LRU cache with the randomized request sequence as input. For a candidate model, we refute an independence hypothesis if the resulting hit-probability curve differs significantly from the one obtained from the original trace. With the insights obtained, we propose a traffic model based on the so-called Poisson cluster point processes. Our second contribution is a theoretical estimation of the cache hit probability for a generalization of the latter model. For this objective, we use the Palm distribution of the model to set up a probability space in which a document can be singled out for analysis. In this setting, we obtain an integral formula for the average number of misses. Finally, by means of a scaling of system parameters, we obtain for the latter expression an asymptotic expansion for large cache sizes. This expansion quantifies the error of a heuristic widely used in the literature, known as the "Che approximation", thus justifying and extending it in the process. Our last contribution concerns the estimation of the model parameters. We tackle this problem for the simpler and widely used Independent Reference Model (IRM). By considering its parameter (a popularity distribution) to be a random sample, we implement a Maximum Likelihood method to estimate it. This method allows us to seamlessly handle the censoring phenomena occurring in traces. By measuring the cache performance obtained with the resulting model, we show that this method provides a more representative model of the data than typical ad-hoc methodologies.
446

Cross-region cloud redundancy : A comparison of a single-region and a multi-region approach

Lindén, Oskar January 2023 (has links)
To increase the resiliency and redundancy of a distributed system, it is common to keep standby systems and backups of data in locations other than the primary site, separated by a meaningful distance in order to tolerate local outages. Nasdaq has accomplished this by maintaining primary-standby pairs or primary-standby-disaster triplets with at least one system residing in a different site. The team at Nasdaq is experimenting with a redundant deployment scheme in Kubernetes with three availability zones, located within a single geographical region, in Amazon Web Services. They want to move the disaster zone to another geographical region to improve the redundancy and resiliency of the system. The aim of this thesis is to investigate how this could be done and to compare the different approaches. For the comparison, a simple observable model of the chain replication strategy is implemented. The model is deployed in an Elastic Kubernetes cluster on Amazon Web Services using Helm, and the supporting infrastructure is defined and created using Terraform. This model is evaluated through HTTP requests with different configurations and scenarios to measure latency and throughput: in the first scenario a single user makes HTTP requests to the system, and in the second multiple users make requests to the system. The results show that throughput is lower and latency is higher with the multi-region approach. In the single-producer case, the relative difference in median throughput is -54.41% and the relative difference in median latency is 119.20%. In the multi-producer case, the relative differences in both median throughput and median latency are reduced as the number of partitions in the system increases.
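The relative differences quoted above follow directly from median measurements. A short Python sketch; the sample latencies are placeholders, not the thesis data:

import statistics

def relative_difference(multi_region, single_region):
    # Relative change (%) of the multi-region median vs. single-region.
    m = statistics.median(multi_region)
    s = statistics.median(single_region)
    return 100.0 * (m - s) / s

# Hypothetical per-request latencies in milliseconds per deployment.
single = [41, 39, 44, 40, 42]
multi = [88, 91, 86, 90, 87]
print(f"latency change: {relative_difference(multi, single):+.2f}%")  # +114.63%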
447

RESTful API vs. GraphQL: a CRUD performance comparison

Niklasson, Alexander, Werèlius, Vincent January 2023 (has links)
The utilization of Application Programming Interfaces (APIs) has experienced significant growth due to the increasing number of applications being developed. APIs serve as a means to transfer data between different applications. While RESTful has been the standard API style since its emergence around 2000, it is now being challenged by Facebook's GraphQL, which was introduced in 2015. This study aims to fill a knowledge gap in the existing literature on API performance evaluation by extending the focus beyond read operations to include CREATE, UPDATE, and DELETE operations in both RESTful APIs and GraphQL. Previous studies have predominantly examined the performance of read operations, but there is a need to comprehensively understand the behavior and effectiveness of the remaining CRUD operations. To address this gap, we conducted a series of controlled experiments and analyses to evaluate the response time and RAM utilization of RESTful APIs and GraphQL when executing CREATE, UPDATE, and DELETE operations. We tested various scenarios and performance metrics to gain insight into the strengths and weaknesses of each approach. Our findings indicate that, contrary to our initial beliefs, there are no significant differences between the two API technologies in terms of CREATE, UPDATE, and DELETE operations. However, RESTful did slightly outperform GraphQL in the majority of tests. We also observed that GraphQL's inherent batching functionality resulted in faster response times and lower RAM usage throughout the tests. On the other hand, RESTful, despite its simpler queries, exhibited faster response times in GET operations, consistent with related work. Lastly, our findings suggest that RESTful uses slightly less RAM compared to GraphQL in the context of CREATE, UPDATE, and DELETE operations.
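A measurement harness for such a comparison can be small: issue the same logical CREATE through both interfaces and time the round trip. A Python sketch using the requests library; the endpoint URLs, payload shape, and mutation are illustrative assumptions, not the setup used in the thesis:

import time
import statistics
import requests

REST_URL = "http://localhost:8080/api/users"  # hypothetical endpoints
GQL_URL = "http://localhost:8080/graphql"

def timed_create_rest():
    t0 = time.perf_counter()
    requests.post(REST_URL, json={"name": "Ada", "email": "ada@example.com"})
    return time.perf_counter() - t0

def timed_create_graphql():
    mutation = 'mutation { createUser(name: "Ada", email: "ada@example.com") { id } }'
    t0 = time.perf_counter()
    requests.post(GQL_URL, json={"query": mutation})
    return time.perf_counter() - t0

rest_ms = [1000 * timed_create_rest() for _ in range(50)]
gql_ms = [1000 * timed_create_graphql() for _ in range(50)]
print(f"REST median:    {statistics.median(rest_ms):.1f} ms")
print(f"GraphQL median: {statistics.median(gql_ms):.1f} ms")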
448

Evaluating the Robustness of Resource Allocations Obtained through Performance Modeling with Stochastic Process Algebra

Srivastava, Srishti 09 May 2015 (has links)
Recent developments in the field of parallel and distributed computing have led to a proliferation of efforts to solve large and computationally intensive mathematical, science, or engineering problems that consist of several parallelizable parts and several non-parallelizable (sequential) parts. In a parallel and distributed computing environment, the performance goal is to optimize the execution of the parallelizable parts of an application on concurrent processors. This requires efficient application scheduling and resource allocation for mapping applications to a set of suitable parallel processors such that the overall performance goal is achieved. However, such computational environments are often prone to unpredictable variations in application (problem and algorithm) and system characteristics, so a robustness study is required to guarantee a desired level of performance. Given an initial workload, a mapping of applications to resources is considered robust if it optimizes execution performance and guarantees a desired level of performance in the presence of unpredictable perturbations at runtime. In this research, a stochastic process algebra, Performance Evaluation Process Algebra (PEPA), is used to obtain resource allocations via numerical analysis of performance models of the parallel execution of applications on parallel computing resources. The PEPA performance model is translated into an underlying Markov chain model from which performance measures are obtained. Further, a robustness analysis of the allocation techniques is performed to find a robust mapping from a set of initial mapping schemes. The numerical analysis of the performance models has confirmed similarity with the simulation results of earlier research in the existing literature. Compared to direct experiments and simulations, numerical models and the corresponding analyses are easier to reproduce, do not incur setup or installation costs, do not impose any prerequisites for learning a simulation framework, and are not limited by the complexity of the underlying infrastructure or simulation libraries.
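The core computation here, extracting performance measures from the Markov chain underlying a PEPA model, amounts to solving the global balance equations pi * Q = 0 with sum(pi) = 1. A small Python sketch for an illustrative three-state generator matrix; the states and rates are invented, not taken from the cited models:

import numpy as np

# Hypothetical CTMC generator (rows sum to zero); the states might
# represent a processor that is idle, computing, or communicating.
Q = np.array([
    [-0.5,  0.4,  0.1],
    [ 0.3, -0.7,  0.4],
    [ 0.2,  0.5, -0.7],
])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance
# equation with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print("steady-state distribution:", pi.round(4))
# Measures such as utilization follow, e.g. P(computing) = pi[1].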
449

Relationships Between Information Technology Skills and Performance Evaluation Scores of Mississippi State University Extension Service Agents

Loper, James R 09 December 2016 (has links)
A study was conducted to determine whether the level of use, expertise, and problem-solving ability with information technology among Mississippi State University Extension agents was positively correlated with agent performance quality as measured by the Mississippi State University Extension Service agent evaluation system. A second purpose was to examine how well agents self-assess their technology skills. Lastly, the study attempted to determine whether there was a set of factors (including information technology skills) that explained a substantial portion of the variation in performance evaluation scores. The results showed that the Mississippi State University Extension agent evaluation system does not consider agents' information technology skills and usage. It was also found that agents are fairly adept at self-assessing their technology skills. Lastly, no set of factors was found that would substantially explain performance evaluation ratings.
450

Do mutual funds offered in Czech Republic add value to investors?

Nosek, Jiří January 2022 (has links)
We estimate the proportions of skilled, unskilled, and zero-alpha funds prevalent in the mutual fund population easily accessible to Czech investors. We estimate alphas from a regression against a concise set of Exchange Traded Funds and control for luck using the False Discovery Rate. We design a straightforward ETF selection algorithm and find that if investors adhere to simple diversification rules, they can outperform a large proportion of mutual funds. We further document a negative relationship between the performance of mutual funds and their Total Expense Ratio, suggesting that portfolio managers are on average unable to compensate for their costs with better performance.
JEL Classification: C12, C20, G12, G23
Keywords: Mutual Funds, Exchange Traded Funds, Performance evaluation
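The two steps named above, estimating each fund's alpha by regressing its returns on a set of ETF returns and then controlling for luck with a False Discovery Rate procedure, can be sketched in Python. The return data below are simulated, and Benjamini-Hochberg stands in for the FDR control used in the thesis:

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
T, n_funds = 120, 200                  # months of data, number of funds
etf = rng.normal(0.005, 0.04, (T, 3))  # simulated ETF factor returns

pvals = []
for _ in range(n_funds):
    beta = rng.normal(0.3, 0.2, 3)
    true_alpha = rng.choice([0.0, 0.002], p=[0.9, 0.1])  # 10% skilled funds
    fund = true_alpha + etf @ beta + rng.normal(0, 0.02, T)
    fit = sm.OLS(fund, sm.add_constant(etf)).fit()
    pvals.append(fit.pvalues[0])       # p-value of the intercept (alpha)

# Benjamini-Hochberg: which funds show skill at a 5% false discovery rate?
reject, *_ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"funds with significant alpha: {reject.sum()} of {n_funds}")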
