391

Web Server Performance Evaluation in Cloud Computing and Local Environment

Khan, Majid, Amin, Muhammad Faisal January 2012 (has links)
Context: Cloud computing is a concept in which users obtain services such as SaaS, PaaS and IaaS by deploying their data and applications on remote servers. Users pay only for the time the resources are acquired, and they do not need to install or upgrade software and hardware. Because of these benefits, organizations are willing to move their data into the cloud and minimize their overhead. Organizations need to confirm that the cloud can replace traditional platforms, software and hardware in an efficient way and provide robust performance. Web servers play a vital role in providing services and deploying applications, so one might be interested in how a web server performs in the cloud. With this aim, we have compared the performance of a cloud web server with that of a local web server. Objectives: The objective of this study is to investigate cloud performance. For this purpose, we first identify the parameters and factors that affect web server performance. Finding these parameters helped us measure the actual performance of a cloud server on specific tasks; they will also help users, developers and IT specialists measure cloud performance based on their requirements and needs. Methods: To fulfill the objective of this study, we performed a systematic literature review and an experiment. The systematic literature review was performed by studying articles from electronic sources including the ACM Digital Library, IEEE and Ei Village (Compendex, Inspec). The snowball method was used to minimize the chance of missing articles and to increase the validity of our findings. In the experiment, two performance parameters (throughput and execution time) were used to measure the performance of the Apache web server in the local and cloud environments. Results: In the systematic literature review, we found many factors that affect the performance of a web server in cloud computing; the most common are throughput, response time, execution time, CPU utilization and other resource utilization. The experimental results revealed that the web server performed better in the local environment than in the cloud environment. However, there are other factors, such as cost overhead, software/hardware configuration, software/hardware upgrades and time consumption, because of which cloud computing cannot be neglected. Conclusions: The parameters that affect cloud performance are throughput, response time, execution time, CPU utilization and memory utilization. Increases and decreases in the values of these parameters can affect cloud performance to a great extent. The overall performance of the cloud is not as effective, but there are other reasons for using cloud computing.
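As a rough illustration of the two experiment parameters, throughput and execution time, the sketch below issues concurrent HTTP requests against a web server and reports both metrics. The URL, request count, and concurrency level are placeholder assumptions, not the thesis's actual Apache benchmark setup.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical target; the thesis benchmarked Apache locally and in the cloud.
URL = "http://localhost/"
REQUESTS = 200
CONCURRENCY = 10

def fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start          # per-request execution time

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(fetch, range(REQUESTS)))
wall_time = time.perf_counter() - wall_start

print(f"throughput: {REQUESTS / wall_time:.1f} req/s")
print(f"mean execution time: {sum(latencies) / len(latencies) * 1000:.1f} ms")
```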
392

Software Architecture Simulation : Performance evaluation during the design phase

Borowski, Jimmy January 2004 (has links)
Due to the increasing size and complexity of software systems, software architectures have become a crucial part of development projects. A lot of effort has been put into defining formal ways of describing architecture specifications using Architecture Description Languages (ADLs). Since no common ADL today offers tools for evaluating performance, an attempt has been made to develop such a tool based on an event-based simulation engine. Common ADLs were investigated, and the work was based on the fundamentals of the field of software architectures. The tool was evaluated both in terms of the correctness of its predictions and in terms of usability, to show that it actually is possible to evaluate performance using high-level architectures as models.
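To make the idea of an event-based simulation engine concrete, the following is a minimal sketch of a discrete-event model of a two-stage architecture. The component names, service times, and arrival rate are invented for illustration; they are not taken from the thesis's ADL-based models.

```python
import heapq
import random

# Illustrative two-stage pipeline ("frontend" -> "database"); the service
# times and arrival rate are assumptions, not values from the thesis model.
SERVICE = {"frontend": 0.005, "database": 0.020}
random.seed(1)

arrival, events, arrivals = 0.0, [], {}
for job in range(1000):
    arrival += random.expovariate(30.0)        # Poisson arrivals, ~30 req/s
    arrivals[job] = arrival
    heapq.heappush(events, (arrival, job, "frontend"))

busy_until = {"frontend": 0.0, "database": 0.0}
response = []

while events:
    t, job, comp = heapq.heappop(events)       # process events in time order
    start = max(t, busy_until[comp])           # queue if the component is busy
    finish = start + SERVICE[comp]
    busy_until[comp] = finish
    if comp == "frontend":                     # hand the request downstream
        heapq.heappush(events, (finish, job, "database"))
    else:
        response.append(finish - arrivals[job])

print(f"mean simulated response time: {sum(response)/len(response)*1000:.2f} ms")
```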
393

CLIENT-SIDE EVALUATION OF QUALITY OF SERVICE IN CLOUD APPLICATIONS

Larsson, Jonathan January 2017 (has links)
Cloud computing is a constantly developing topic that reaches most of the people in the world on a daily basis. Almost every website and mobile application is hosted through a cloud provider. Two of the most important metrics for customers are performance and availability. Current tools that measure availability use the Internet Control Message Protocol (ICMP) to monitor availability, which has been shown to be unreliable. This thesis suggests a new way of monitoring both availability and response time by using the Hypertext Transfer Protocol (HTTP). Through HTTP, we are able to reach not only the front-end of the cloud service (just as ICMP does) but also deeper, to find failures in the back-end that ICMP would miss. With our monitoring tool, we have monitored five different cloud data centers during one week. We found that cloud providers do not always keep their promised SLA, and it might be up to the cloud customers to reach higher availability. We also perform load tests to analyze how vertical and horizontal scaling perform with regard to response time. Our analysis concludes that, at this time, vertical scaling outperforms horizontal scaling when it comes to response time. Even so, we suggest that developers should build applications that are horizontally scalable. With a horizontally scalable application and our monitoring tool combined, we can reach higher availability than is currently possible.
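A minimal sketch of the HTTP-based probing idea: each probe records whether the request succeeded and how long it took, so availability and response time come from the same measurement, and back-end failures surface as failed responses rather than being masked the way an ICMP reply would allow. The endpoint, timeout, and sample count are assumptions, not the thesis's monitoring configuration.

```python
import time
import urllib.request

# Hypothetical endpoint; the thesis probed five cloud data centers for a week.
URL = "https://example.com/health"

def probe(url, timeout=5.0):
    """One HTTP probe: returns (available, response_time_seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # Unlike an ICMP ping, a non-2xx status from the back-end
            # counts as a failure here (urlopen raises HTTPError for it).
            ok = 200 <= resp.status < 300
    except OSError:          # URLError, HTTPError and timeouts derive from OSError
        ok = False
    return ok, time.perf_counter() - start

samples = [probe(URL) for _ in range(10)]
up = sum(1 for ok, _ in samples if ok)
print(f"availability: {up / len(samples):.1%}")
print(f"mean response time: {sum(t for _, t in samples) / len(samples) * 1000:.0f} ms")
```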
394

Performance evaluation based on data from code reviews

Andrej, Sekáč January 2016 (has links)
Context. Modern code review tools such as Gerrit have made great amounts of code review data available from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of the produced source code under control, but the stored data could also be used to evaluate the software development process. Objectives. This thesis uses machine learning methods to approximate a review expert's performance evaluation function. Due to the limited size of the labelled data sample, this work uses semi-supervised machine learning methods and measures their influence on performance. In this research we propose features and also analyse their relevance to development performance evaluation. Methods. This thesis uses Radial Basis Function networks as the regression algorithm for the performance evaluation approximation and Metric Based Regularisation as the semi-supervised learning method. For the analysis of the feature set and the goodness of fit, we use statistical tools together with manual analysis. Results. The semi-supervised learning method achieved an accuracy similar to the supervised versions of the algorithm. The feature analysis showed that there is a significant negative correlation between the performance evaluation and three of the features. A manual verification of the learned models on unlabelled data achieved 73.68% accuracy. Conclusions. We have not managed to prove that the semi-supervised learning method used would perform better than supervised learning methods. The analysis of the feature set suggests that the number of reviewers, the ratio of comments to the change size, and the amount of code lines modified in later parts of development are, with high probability, relevant to the performance evaluation task. The achieved model accuracy of close to 75% leads us to believe that, considering the limited size of the labelled data set, our work provides a solid base for further improvements in the performance evaluation approximation.
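For readers unfamiliar with Radial Basis Function networks, the sketch below shows the basic regression setup: Gaussian basis functions centered on a subset of the data and output weights fitted by least squares. The synthetic features and labels stand in for the Gerrit review data; the semi-supervised Metric Based Regularisation step is not reproduced here.

```python
import numpy as np

# Minimal RBF-network regressor sketch; the review features and labels here
# are synthetic stand-ins, not the Gerrit data used in the thesis.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))             # e.g. reviewers, comment ratio, churn
y = 2.0 - 0.8 * X[:, 1] + 0.1 * rng.normal(size=60)   # toy evaluation score

def rbf_design(X, centers, width):
    """Gaussian basis functions evaluated for every (sample, center) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

centers = X[rng.choice(len(X), size=10, replace=False)]   # centers from data
Phi = rbf_design(X, centers, width=1.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)               # linear output weights

pred = rbf_design(X, centers, width=1.0) @ w
print("training RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))
```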
395

On the Performance of the Solaris Operating System under the Xen Security-enabled Hypervisor

Bavelski, Alexei January 2007 (has links)
This thesis presents an evaluation of the Solaris version of the Xen virtual machine monitor and a comparison of its performance to the performance of Solaris Containers under similar conditions. Xen is a virtual machine monitor, based on the paravirtualization approach, which provides an instruction set different from the native machine environment and therefore requires modifications to the guest operating systems. Solaris Zones is an operating system-level virtualization technology that is part of the Solaris OS. Furthermore, we provide a basic performance evaluation of the security modules for Xen and Zones, known as sHype and Solaris Trusted Extensions, respectively. We evaluate the control domain (known as Domain-0) and the user domain performance as the number of user domains increases. Testing Domain-0 with an increasing number of user domains allows us to evaluate how much overhead virtual operating systems impose in the idle state and how their number influences the overall system performance. Testing one user domain while increasing the number of idle domains allows us to evaluate how the number of domains influences operating system performance. Testing increasing numbers of concurrently loaded user domains, we investigate total system efficiency and load balancing as a function of the number of running systems. System performance was limited by CPU, memory, and hard drive characteristics. In the case of CPU-bound tests, Xen exhibited performance close to that of Zones and of native Solaris, losing 2-3% due to the virtualization overhead. In the case of memory-bound and hard-drive-bound tests, Xen showed 5 to 10 times worse performance.
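A toy example of the kind of CPU-bound timing comparison behind the reported 2-3% figure; both runs here execute on the same host, so the resulting overhead is purely illustrative.

```python
import time

# Sketch of a CPU-bound micro-benchmark of the sort run inside each
# environment (native Solaris, Xen guest, Solaris Zone); the workload and
# iteration count are assumptions, not the thesis's actual test suite.
def cpu_workload(n=500_000):
    acc = 0
    for i in range(2, n):
        acc += (i * i) % 97
    return acc

def time_run():
    start = time.perf_counter()
    cpu_workload()
    return time.perf_counter() - start

native, virtualized = time_run(), time_run()   # would run in different environments
overhead = (virtualized - native) / native * 100
print(f"native: {native:.3f} s, virtualized: {virtualized:.3f} s, "
      f"overhead: {overhead:+.1f}%")
```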
396

On Switchover Performance in Multihomed SCTP

Eklund, Johan January 2010 (has links)
The emergence of real-time applications, like Voice over IP and video conferencing, in IP networks implies a challenge to the underlying infrastructure. Several real-time applications have requirements on timeliness as well as on reliability, and they are accompanied by signaling applications to set up, tear down and control the media sessions. Since neither of the traditional transport protocols responsible for end-to-end transfer of messages was found suitable for signaling traffic, the Stream Control Transmission Protocol (SCTP) was standardized. The focus of the protocol was initially on telephony signaling applications, but it was later widened to serve as a general-purpose transport protocol. One major new feature to enhance robustness in SCTP is multihoming, which enables more than one path within the same association. In this thesis we evaluate some of the mechanisms affecting transmission performance in the case of a switchover between paths in a multihomed SCTP session. The major part of the evaluation concerns a failure situation, where the current path is broken. In case of failure, the endpoint does not get an explicit notification but has to react to missing acknowledgements. The challenge is to distinguish path failure from temporary congestion in order to decide when to switch to an alternate path. A switchover that is too fast may be spurious, which could reduce transmission performance, while a switchover that is too late also results in reduced transmission performance. This implies a trade-off involving several protocol as well as network parameters, and we elaborate on these to give a coherent view of the parameters and their interaction. Further, we present a recommendation on how to tune the parameters to meet telephony signaling requirements, still without violating fairness to other traffic. We also consider another angle of switchover performance: the startup on the alternate path. Since the available capacity is usually unknown to the sender, transmission on a new path is started at a low rate and then increased as acknowledgements of successful transmissions return. In the case of a switchover in the middle of a media session, the startup phase after the switchover could cause problems for the application. In multihomed SCTP, the availability of the alternate path makes it feasible for the end-host to estimate the available capacity on the alternate path prior to the switchover. Thus, it would be possible to implement a more efficient startup scheme. In this thesis we combine different switchover scenarios with relevant traffic. For these combinations, we analytically evaluate and quantify the potential performance gain from utilizing an ideal startup mechanism as compared to the traditional startup procedure.
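The failure-detection trade-off can be made concrete with a back-of-the-envelope calculation: each missed acknowledgement expires a retransmission timer whose value doubles (up to RTO.Max), and the switchover happens once Path.Max.Retransmissions consecutive timeouts have occurred. The sketch below sums those timeouts; the parameter values are common defaults plus an illustrative "tuned" set, not the specific recommendation given in the thesis.

```python
# Back-of-the-envelope failover-detection time for multihomed SCTP: after a
# path breaks, each retransmission timeout doubles the RTO (capped at
# RTO.Max) until Path.Max.Retransmissions is exceeded and traffic switches
# to the alternate path.
def failover_time(rto_initial=1.0, rto_max=60.0, path_max_retrans=5):
    total, rto = 0.0, rto_initial
    for _ in range(path_max_retrans + 1):   # PMR + 1 expired timers
        total += rto
        rto = min(rto * 2, rto_max)         # exponential back-off
    return total

print(f"default parameters : ~{failover_time():.0f} s to switch over")
print(f"illustrative tuning: ~{failover_time(0.2, 1.0, 2):.1f} s to switch over")
```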
397

Performance Evaluation of Embedded Microcomputers for Avionics Applications

Bilen, Celal Can, Alcalde, John January 2010 (has links)
Embedded microcomputers are used in a wide range of applications nowadays. Avionics is one of these areas and requires extra attention regarding reliability and determinism. Thus, these issues should also be borne in mind, in addition to performance, when evaluating embedded microcomputers. This master thesis suggests a framework for performance evaluation of two members of the PowerPC microprocessor family, namely the MPC5554 from Freescale and the PPC440EPx from AMCC, and analyzes the results within and between these processors. The framework can be generalized to any microprocessor family, if required. Apart from performance evaluation, this thesis also suggests a new terminology by introducing the concept of determinism levels, making it possible to estimate determinism issues in avionics applications more clearly, which is crucial given the requirements and working conditions of such applications. This estimation does not include any practical results, as the performance evaluation does, but rather remains theoretical. Similar to Automark™, used by AutoBench™ in the EEMBC Benchmark Suite, we introduce a new performance metric score that we call "Aviomark", and we carry out a detailed comparison of Aviomark with the traditional Automark™ score in order to see how Aviomark differs from Automark™ in behavior. Finally, we have developed a graphical user interface (GUI) which works in parallel with the Green Hills MULTI Integrated Development Environment (IDE) in order to simplify and automate the evaluation process. With the help of the GUI, users will be able to easily evaluate their specific PowerPC processors by starting the debugging from the MULTI IDE.
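As background, EEMBC-style composite scores such as Automark™ are derived from per-benchmark results normalized to a reference and combined into a single figure, commonly via a geometric mean. The sketch below shows a score of that general form with invented numbers; it does not reproduce the actual Aviomark definition introduced in the thesis.

```python
from math import prod

# Composite benchmark score in the general style of an EEMBC mark: each
# benchmark result is normalized to a reference platform and the normalized
# ratios are combined with a geometric mean. Benchmark names, reference
# values, and measured values are invented for illustration.
reference = {"fft": 120.0, "matrix": 95.0, "crc": 300.0}      # iterations/s
measured  = {"fft": 150.0, "matrix": 88.0, "crc": 410.0}

ratios = [measured[name] / reference[name] for name in reference]
score = prod(ratios) ** (1.0 / len(ratios))                    # geometric mean
print(f"composite score: {score:.3f}")
```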
398

Implementation and Experimental Evaluation of a Partially Reliable Transport Protocol

Asplund, Katarina January 2004 (has links)
In the last decade, we have seen an explosive growth in the deployment of multimedia applications on the Internet. However, the transport service provided over the Internet is not always feasible for these applications, since the network was originally designed for other types of applications. One way to better accommodate the service requirements of some of these applications is to provide a partially reliable transport service. A partially reliable transport service does not insist on recovering all, but just some of the packet losses, thus providing a lower transport delay than a reliable transport service. The work in this thesis focuses on the design, implementation, and evaluation of a partially reliable transport protocol called PRTP. PRTP has been designed as an extension to TCP in order to show that such a service could be effectively integrated with current protocol standards. An important feature of PRTP is that all modifications for PRTP are restricted to the receiver side, which means that it could be very easily deployed. The thesis presents performance results from various experiments on a Linux implementation of PRTP. The results suggest that transfer times can be decreased significantly when using PRTP as opposed to TCP in networks in which packet loss occurs. Furthermore, the thesis includes a study that investigates how users perceive an application that is based on a partially reliable service. Specifically, how users select the trade-off between image quality and latency when they download Web pages is explored. The results indicate that many of the users in the study could accept less than perfect image quality if the latency could be shortened.
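The receiver-side nature of the partial-reliability idea can be sketched as a simple acceptance rule: when a sequence gap appears, acknowledge past it as long as cumulative loss stays within a tolerance, otherwise fall back to normal retransmission. The rule, the 20% threshold, and the simplified handling of reordering are illustrative assumptions, not the exact criterion implemented in PRTP.

```python
# Receiver-side sketch of a partially reliable service: the receiver may
# acknowledge past a gap (accepting the loss) while cumulative loss stays
# within a configured tolerance; otherwise it requests retransmission.
class PartiallyReliableReceiver:
    def __init__(self, loss_tolerance=0.20):
        self.loss_tolerance = loss_tolerance
        self.expected = 0      # next in-order sequence number
        self.delivered = 0     # packets delivered to the application
        self.skipped = 0       # packets given up on

    def on_packet(self, seq):
        gap = seq - self.expected
        if gap > 0:
            total = self.delivered + self.skipped + gap + 1
            if (self.skipped + gap) / total > self.loss_tolerance:
                return f"gap before {seq}: request retransmission"
            self.skipped += gap            # accept the loss and move on
        self.delivered += 1
        self.expected = seq + 1
        return f"deliver and ack {seq}"

rx = PartiallyReliableReceiver()
for seq in [0, 1, 2, 4, 5, 7]:             # packets 3 and 6 were lost
    print(rx.on_packet(seq))
```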
399

An Implementation and Performance Evaluation of a Peer-to-Peer Chat System

Edänge, Simon January 2015 (has links)
Context: Chat applications have been around since the beginning of the modern internet. Today there are many different chat systems with various communication solutions, but only a few utilize the fully decentralized Peer-to-Peer concept. Objectives: In this report, we investigate whether a fully decentralized P2P concept is a suitable choice for chat applications. To this end, a P2P architecture was selected and a simulation was implemented in Java. The simulation was used to perform a performance evaluation in order to see whether the P2P concept could meet the requirements of a chat application, and to identify problems and difficulties. Methods: Two main methods were used in this thesis. First, a qualitative design method was used to identify and discuss different possibilities for designing a distributed chat application. Second, a performance evaluation was conducted to verify that the selected and implemented mechanisms achieve their general performance capabilities and to tune them towards the anticipated performance. Results: The simulation showed that a decentralized P2P system can scale and find resources in a network quite efficiently without the need for any centralized service. The P2P concept also proved simpler for the user, as no special configuration is needed. However, the selected protocol (Chord) had problems with high rates of churn, which could cause problems in large chat environments. The P2P concept was also shown to be highly complex to implement. Conclusion: P2P is a more complex technology, but it gives the host a lower cost in terms of hardware and maintenance. It also makes the system more robust and fault-tolerant. As we have seen in this report, P2P can scale and find resources efficiently without the need for a centralized service. However, it consumes more power for each user, which makes mobile devices poor peers.
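For context on the selected protocol, the sketch below shows the core Chord idea of consistent hashing: node and key identifiers share a circular identifier space, and each key is owned by its successor node. Real Chord adds finger tables for O(log n) routing and stabilization to cope with churn (the aspect the thesis found problematic); the peer names and key scheme here are invented.

```python
import hashlib
from bisect import bisect_right

# Minimal Chord-style lookup sketch: identifiers live on a ring of 2**M
# positions and a key is stored at its successor node.
M = 16                                     # identifier space: 2**16 positions

def chord_id(name, m=M):
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

nodes = sorted(chord_id(f"peer-{i}") for i in range(8))

def successor(key_id):
    """The first node clockwise from the key owns it (wrapping around)."""
    idx = bisect_right(nodes, key_id)
    return nodes[idx % len(nodes)]

for user in ["alice", "bob", "carol"]:
    key = chord_id(f"presence:{user}")
    print(f"{user!r:8} key {key:5d} -> stored at node {successor(key)}")
```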
400

Google Glass : A backend support for Google Glass

Hoorn, Richard January 2015 (has links)
This dissertation describes a project to create a prototype application for Google Glass, where the purpose is to help assembly line industries by allowing workers to see instructions visually while working with both hands free. This solves the problem of requiring an instruction manual, since instead all instructions are stored in a database. Google Glass retrieves and displays the information for the user after scanning a QR code for the product which is going to be assembled. An important aspect is to see whether such a system is powerful enough for industries to start working with Google Glass. This concept was developed into a working prototype system, where Google Glass can retrieve data by scanning a QR code that contains information about a specific product. This information gives step-by-step instructions on which components the product contains and how to assemble them. The results presented in this dissertation show that Google Glass is not suited for the industry in its current state.
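A rough sketch of the described flow on the client side: the QR payload identifies a product, and the backend returns the step-by-step instructions to display. The endpoint URL, payload format, and field names are assumptions for illustration, not the project's actual API.

```python
import json
import urllib.request

# Hypothetical backend endpoint; the real project stored instructions in a
# database behind its own backend service.
BACKEND = "http://backend.example.com/api/products"

def instructions_for(qr_payload: str):
    product_id = qr_payload.strip()                       # e.g. "PROD-1234"
    with urllib.request.urlopen(f"{BACKEND}/{product_id}/steps") as resp:
        return json.loads(resp.read())

def show_steps(steps):
    for i, step in enumerate(steps, start=1):             # one card per step
        print(f"Step {i}: {step['component']} - {step['instruction']}")

# show_steps(instructions_for("PROD-1234"))   # requires the assumed backend
```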
