401

Study of Open Mobile Alliance Device Management sessions for most effective device management

Smolarek, Tomasz January 2011 (has links)
Effective device management is not trivial due to the variety of devices and software. To keep costs to a minimum, companies must effectively utilize a unified device management solution. This research investigated Funambol's implementation of Open Mobile Alliance Device Management (OMA DM), the most popular device management protocol. Interviews were used to set up experiments and create realistic test cases. A set of devices and a collection of Funambol software were used to create device management sessions. All sessions were recorded, analysed, manipulated and resent to identify efficient ways of managing devices. Additionally, the influence of compression and buffer-like mechanisms was examined. Methods and guidelines are provided for efficient use of OMA DM, as well as a reliable analysis of OMA DM sessions under various conditions. It was found that for most data it is best to use a built-in transport protocol compressor: Hypertext Transfer Protocol (HTTP) deflate, combined with a client-side buffering-like mechanism, performed best in most cases. Funambol's implementation of WAP Binary XML (WBXML) performed very poorly in most cases, even though it was designed specifically to compress OMA DM session messages. It was found that for efficient use of OMA DM, a proper set of software options (e.g. forced use of compression) may be sufficient.
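The HTTP deflate content coding the study found effective can be illustrated with Python's `zlib` module. The SyncML-style message below is an illustrative stand-in, not an actual Funambol session payload:

```python
import zlib

# A small OMA DM-style SyncML message (illustrative only, not an actual
# Funambol session payload).
message = b"""<SyncML><SyncHdr><VerDTD>1.2</VerDTD><VerProto>DM/1.2</VerProto>
<SessionID>1</SessionID><MsgID>1</MsgID></SyncHdr>
<SyncBody><Status><CmdID>1</CmdID><Cmd>SyncHdr</Cmd><Data>200</Data></Status>
</SyncBody></SyncML>"""

def deflate(data: bytes, level: int = 9) -> bytes:
    """Compress with raw DEFLATE, as used by the HTTP 'deflate' content coding."""
    # wbits=-15 produces a raw deflate stream without the zlib header.
    compressor = zlib.compressobj(level=level, wbits=-15)
    return compressor.compress(data) + compressor.flush()

compressed = deflate(message)
ratio = len(compressed) / len(message)
print(f"original: {len(message)} B, deflated: {len(compressed)} B, ratio: {ratio:.2f}")
```

XML's repetitive tag structure is exactly what DEFLATE's dictionary coding exploits, which is consistent with the finding that a transport-level compressor can outperform a format-specific binary encoding.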
402

Evaluation of VoIP Codecs over 802.11 Wireless Networks : A Measurement Study

Nazar, Arbab January 2009 (has links)
Voice over Internet Protocol (VoIP) has become very popular in recent years and has become the first choice of small to medium companies for voice and data integration, in order to cut down costs and use IT resources more efficiently. Another popular technology, dominant since 2000, is 802.11 wireless networking. Organizations want to implement VoIP on wireless networks. The wireless medium has a different nature and different requirements than 802.3 (Ethernet), and special considerations must be taken into account when implementing VoIP over a wireless network. One of the major differences between 802.11 and 802.3 is bandwidth availability. When implementing VoIP over 802.11, the available bandwidth must be used efficiently, so that the VoIP application uses as little bandwidth as possible while retaining good voice quality. In our project, we evaluated different compression and decompression (CODEC) schemes for VoIP over the wireless network. To conduct this test we used two computers to compare and evaluate the performance of different CODECs. One dedicated system was used as an Asterisk server, the open-source PBX software ready for mainstream VoIP deployment. Our main focus was on end-to-end delay, jitter and packet loss of VoIP transmission for different CODECs under different circumstances in the wireless network. The study also analysed VoIP codec selection based on the Mean Opinion Score (MOS) delivered by the softphone. In the end, we compared all the proposed CODECs based on these results and suggested the codec that performs best in a wireless network.
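The delay, jitter and loss metrics compared in the study can be computed from packet traces. A minimal sketch of the RFC 3550 interarrival jitter estimator and a packet-loss percentage, assuming per-packet transit times have already been extracted from the trace:

```python
def rfc3550_jitter(transit_times):
    """Interarrival jitter estimate from RFC 3550: a smoothed mean deviation
    of consecutive transit-time differences, with gain 1/16. Result is in
    the same unit as the inputs (e.g. milliseconds)."""
    jitter = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0
    return jitter

def packet_loss_pct(sent, received):
    """Percentage of packets lost between sender and receiver."""
    return 100.0 * (sent - received) / sent
```

Perfectly regular transit times yield zero jitter; any variation is folded in gradually, which is why RTP receivers can report jitter continuously without storing the whole trace.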
403

Test Data Post-Processing and Analysis of Link Adaptation

Nedstrand, Paul, Lindgren, Razmus January 2015 (has links)
Analysing the performance of cell phones and other wireless devices connected to mobile networks is key when validating whether the system meets its standard. This justifies having testing tools that can produce a good overview of the data exchanged between base stations and cell phones to assess cell phone performance. This master thesis involves developing a tool that produces graphs with statistics from the traffic data in the communication link between a connected mobile device and a base station. The statistics are the correlation between two parameters in the channel's traffic data (e.g. throughput over channel condition). The tool is oriented towards analysis of link adaptation, and from the produced graphs the testing personnel at Ericsson will be able to analyse the performance of one or several pieces of mobile equipment. We performed our own analysis of link adaptation using the tool to show that this type of analysis is possible with it. To show that the tool is useful for Ericsson, we let test personnel answer a survey on its usability and user friendliness.
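The correlation statistic such a tool produces (e.g. throughput over channel condition) is, in its simplest form, a Pearson coefficient. A minimal sketch with hypothetical sample data; the thesis's actual statistics may differ:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical link-adaptation samples: channel quality indicator vs. throughput.
cqi = [3, 5, 7, 9, 11, 13]
throughput_mbps = [1.1, 2.3, 4.0, 6.2, 8.5, 10.9]
print(f"correlation: {pearson(cqi, throughput_mbps):.3f}")
```

A coefficient close to 1 indicates the link adaptation is tracking channel quality as intended; weak or negative correlation in such a graph would flag a device worth deeper analysis.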
404

A Spectrum Decision Support System for Cognitive Radio Networks

Yao, Yong January 2012 (has links)
Cognitive Radio Networks (CRNs) offer a promising capability for alleviating the problem of spectrum insufficiency. In CRNs, the licensed spectrum channels are either exclusively reserved for licensed users or temporarily used by unlicensed users. The requirement on unlicensed users is to not harmfully impair the licensed users' transmissions. Because of this, the unlicensed users must decide which of the available channels should be selected. The selection process is often referred to as spectrum decision, with the aim of optimizing the transmission performance of unlicensed users. A support system for CRNs is introduced, called the Spectrum Decision Support System (SDSS). SDSS provides an intelligent spectrum decision strategy that integrates different decision-making algorithms and takes into account various channel characterization parameters. The objective is to develop a scientific framework for decision making in CRNs, involving theoretical analysis, simulation evaluation and practical implementation. Three important components of SDSS are discussed: 1) setting up an overlay decision maker, 2) a prediction-based spectrum decision strategy and 3) queueing modeling of CRNs. The reported results indicate the feasibility of the suggested algorithms.
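A spectrum decision step of the kind SDSS integrates can be sketched as scoring each channel by its characterization parameters and selecting the best. The scoring rule (expected usable capacity) and the channel data below are illustrative assumptions, not the decision strategy developed in the thesis:

```python
def select_channel(channels):
    """Pick the channel maximizing expected usable capacity for the
    unlicensed user: predicted idle probability times channel capacity.

    channels: dict mapping channel name -> (idle_probability, capacity_mbps)
    """
    return max(channels, key=lambda c: channels[c][0] * channels[c][1])

# Hypothetical channel characterization, e.g. from spectrum sensing history.
channels = {
    "ch1": (0.9, 10.0),   # usually idle, but narrow
    "ch2": (0.5, 30.0),   # moderately busy, wider
    "ch3": (0.2, 100.0),  # mostly occupied by the licensed user, very wide
}
print(select_channel(channels))
```

Even this toy rule shows why spectrum decision is non-trivial: the widest channel wins here despite being busiest, and a real strategy must also weigh the cost of being forced to vacate when the licensed user returns.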
405

Non-Intrusive Network-Based Estimation of Web Quality of Experience Indicators

Shaikh, Junaid January 2012 (has links)
Quality of Experience (QoE) deals with the acceptance of a service quality by the users and has evolved significantly as an important concept over the past 10 years. Network operators and service providers have gained interest in QoE-aware management of networks, in order to better fulfill end-user demands and gain a competitive edge in the market. While this growth promises new business opportunities, it also presents several challenges to networking researchers, mainly related to the assessment of user experience. Several QoE assessment models have been proposed to estimate user satisfaction for a given service quality. Most of them are intrusive and require knowledge of the content reference. In contrast, network operators require non-intrusive methods, which allow models to be implemented on the network level without much knowledge about that reference. The methods should be able to monitor QoE passively in real time, based on the information readily available on the network level. This thesis investigates indicators intended for use in the development of non-intrusive network-based methods for real-time QoE assessment and monitoring. First, a bridge is made between the user and the network perspectives by correlating user traffic characteristics measured on an operational network with user subjective experience tested on an experimental platform. It is shown that the user session volume appears to be an indicator of users' interest in the service. Second, TCP connection interruptions are investigated as an indicator to infer the user experience. It is found that the request-level performance metrics show stronger correlations between interruption rates and network Quality of Service (QoS). Third, a wavelet-based criterion is devised to assist in the identification of those traffic gaps which may result in the degradation of QoE. It can be implemented on the network level in quasi-real-time to quickly identify user-perceived performance issues.
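The identification of traffic gaps can be sketched, in a far simpler form than the wavelet-based criterion devised in the thesis, as thresholding inter-packet times in a passively captured flow:

```python
def find_gaps(timestamps, threshold):
    """Return (start, end) pairs of traffic gaps: consecutive packet
    timestamps whose spacing exceeds the threshold (same time unit).

    A naive stand-in for a wavelet-based criterion: a fixed threshold
    cannot adapt to the flow's natural burstiness across time scales.
    """
    return [(a, b) for a, b in zip(timestamps, timestamps[1:]) if b - a > threshold]

# Hypothetical packet arrival times (seconds) with one stall mid-flow.
arrivals = [0.00, 0.05, 0.11, 0.16, 1.40, 1.46, 1.51]
print(find_gaps(arrivals, threshold=0.5))
```

The limitation noted in the comment is precisely what motivates a multi-scale (wavelet) criterion: it can separate gaps that degrade QoE from ordinary inter-burst idle time.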
406

Virtualization of Data Centers : Case Study on Server Virtualization

Kappagantula, Sri Kasyap January 2018 (has links)
Nowadays, data centers use virtualization as a technique to carve independent virtual resources out of the available physical hardware. Virtualization is implemented in data centers to maximize the utilization of physical hardware (which significantly reduces energy consumption and operating costs) without affecting the Quality of Service (QoS). The main objective of this thesis is to study the different network topologies used in data center architecture, to compare the QoS parameters of virtual servers with those of physical servers, and to identify the better-suited virtualization technology. The research methodology used in this thesis is qualitative. To measure QoS, we take the latency, packet loss and throughput of virtual servers under different virtualization technologies (KVM, ESXi, Hyper-V, Fusion, and VirtualBox) and compare their performance with the physical server. The work also investigates CPU and RAM utilization and compares the physical and virtual servers' behavior under different load conditions. The results show that the virtual servers performed better in terms of resource utilization, latency and response times when compared to the physical servers. But there are factors, such as backup and recovery, VM sprawl, capacity planning, and building a private cloud, which should be addressed for the growth of virtual data centers. Parameters that affect the performance of the virtual servers are addressed, and the trade-off between virtual and physical servers is established in terms of QoS aspects. The overall performance of virtual servers is effective when compared to that of physical servers.
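The latency side of such a QoS comparison reduces to summarizing round-trip-time samples collected against each server. A minimal sketch; the metric names and percentile choice are assumptions, not the thesis's exact methodology:

```python
import statistics

def qos_summary(rtts_ms):
    """Summarize a list of round-trip-time samples (milliseconds) into the
    latency-related QoS figures one might tabulate per server/hypervisor."""
    ordered = sorted(rtts_ms)
    return {
        "mean_ms": statistics.mean(rtts_ms),
        "jitter_ms": statistics.pstdev(rtts_ms),      # spread as a jitter proxy
        "p95_ms": ordered[int(0.95 * (len(ordered) - 1))],  # nearest-rank p95
    }

# Hypothetical samples for one virtual server under load.
samples = [1.2, 1.3, 1.1, 1.4, 1.2, 2.8, 1.3, 1.2, 1.5, 1.3]
print(qos_summary(samples))
```

Reporting a tail percentile alongside the mean matters in this kind of study: hypervisor scheduling noise tends to show up in the tail long before it moves the average.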
407

Virtualization of Data Centers : study on Server Energy Consumption Performance

Padala, Praneel Reddy January 2018 (has links)
For various reasons, data centers have become ubiquitous in our society. Energy costs are a significant portion of a data center's total lifetime costs, which also matters financially to operators. This raises serious concern about the energy costs and environmental impact of data centers. Power costs and energy efficiency are major challenges facing us. Of the overall energy used by computing, 15% is used by the networking portion of a data center. It is estimated that the energy used by network infrastructure in data centers worldwide is 15.6 billion kWh, and this share is expected to increase to around 50%. Power costs and energy consumption play a major role throughout the lifetime of a data center, leading to increased financial costs for data center operators and increased usage of power resources. Thus, resource utilization has become a major issue in data centers. The main aim of this thesis study is to find an efficient way of utilizing resources and decreasing operators' energy costs in data centers using virtualization. Virtualization technology is used to deploy virtual servers on physical servers that share the same resources, helping to decrease the energy consumption of a data center.
408

An investigation of lightweight cryptography and using the key derivation function for a hybrid scheme for security in IoT

Khomlyak, Olha January 2017 (has links)
Data security plays a central role in the design of the Internet of Things (IoT). Since most of the "things" in IoT are embedded computing devices, it is appropriate to talk about cryptography in embedded systems. These devices are based on microcontrollers, which have limited resources (processing power, memory, storage, and energy). Therefore, we can apply only lightweight cryptography. The goal of this work is to find the optimal cryptographic solution for IoT devices. It is expected that this solution would be useful for implementation on such constrained devices. In this study, we investigate which lightweight algorithm is better to implement, as well as how two different algorithms can be combined in a hybrid scheme and how that scheme can be modified according to the data-sending scenario. The Compendex, Inspec, IEEE Xplore, ACM Digital Library, and SpringerLink databases are used to conduct a comprehensive literature review. The experimental work adopted in this study involves implementations, measurements, and observations of the results. The experimental research covers implementations of the different algorithms and an experimental hybrid scheme, which includes an additional function. Results show the performance of the considered algorithms and the proposed hybrid scheme. According to our results, security solutions for IoT have to utilize algorithms with good performance. The combination of symmetric and asymmetric algorithms in a hybrid scheme can be a solution that provides the main security requirements: confidentiality, integrity, and authenticity. Adapting this scheme to the possible IoT scenarios yields results acceptable for implementation given the limited hardware resources.
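The key derivation function referred to in the title can be sketched with Python's standard-library PBKDF2: in a hybrid scheme, the asymmetric exchange yields one shared secret, from which separate symmetric keys are derived for encryption and authentication. The parameters below are illustrative, not those evaluated in the thesis:

```python
import hashlib

def derive_keys(shared_secret: bytes, salt: bytes):
    """Derive separate encryption and MAC keys from one shared secret using
    PBKDF2-HMAC-SHA256. The iteration count is kept low here as a nod to
    constrained IoT hardware; tune it for your actual threat model."""
    okm = hashlib.pbkdf2_hmac("sha256", shared_secret, salt, 1000, dklen=64)
    return okm[:32], okm[32:]  # (enc_key, mac_key), 256 bits each

# Hypothetical shared secret, e.g. the output of an ECDH exchange.
enc_key, mac_key = derive_keys(b"example-ecdh-shared-secret", b"session-salt")
print(len(enc_key), len(mac_key))
```

Splitting one secret into independent keys this way is what lets a hybrid scheme cover confidentiality (encryption key) and integrity/authenticity (MAC key) without a second asymmetric operation.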
409

Analysis of Total Cost of Ownership for Medium Scale Cloud Service Provider with emphasis on Technology and Security

Dagala, Wadzani Jabani January 2017 (has links)
Total cost of ownership (TCO) is a major factor to consider when deciding to deploy cloud computing. The cost of owning or running a data centre weighs heavily on the IT manager or owner of a business organisation. This research work is concerned with specifying the factors that make up the TCO for medium-scale service providers, with respect to technology and security. An analysis was made of cloud service providers' expenses and of how to reduce the cost of ownership. In this research work, related articles from a wide range of sources were reviewed, reading through the abstract and overview of each article to establish its relevance to the subject. Interviews were then conducted with two medium-scale cloud service providers and one cloud user. In this study, an average calculation of the TCO was made and a proposed cost reduction method was implemented. We made a proposal on how users should decide which cloud services to deploy in terms of cost and security. We conclude that many articles have focused their TCO calculation on the building without placing emphasis on security. Security accumulates a huge amount under hidden costs; this research work identified the hidden costs, made an average calculation and proffered a method of reducing the TCO.
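An annualized TCO calculation of the kind described, with security broken out as its own line item rather than left hidden, can be sketched as follows. The cost categories and figures are illustrative assumptions, not the thesis's data:

```python
def annual_tco(capex, lifetime_years, energy_kwh, price_per_kwh, staff, security):
    """Annualized total cost of ownership: amortized capital expenditure
    plus yearly operating costs, with security as an explicit line item."""
    return (capex / lifetime_years          # hardware/building, straight-line
            + energy_kwh * price_per_kwh    # power and cooling
            + staff                         # operations personnel
            + security)                     # often buried in 'hidden costs'

# Hypothetical medium-scale provider figures.
tco = annual_tco(capex=500_000, lifetime_years=5,
                 energy_kwh=200_000, price_per_kwh=0.10,
                 staff=80_000, security=30_000)
print(f"annual TCO: {tco:,.0f}")
```

Making security an explicit term is the point: once it appears as a line item, its share of the total can be compared across providers and targeted by a cost-reduction method instead of silently inflating the rest.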
410

Performance, Isolation and Service Guarantees in Virtualized Network Functions

Rathore, Muhammad Siraj January 2017 (has links)
A network is generally a collection of different hardware-based network devices carrying out various network functions (NFs). These NF implementations are special-purpose and expensive. Network function virtualization (NFV) is an alternative which uses software-based implementations of NFs on inexpensive commodity servers. However, it is challenging to achieve high networking performance due to bottlenecks in software, particularly in a virtualized environment where NFs are implemented inside virtual machines (VMs). Performance isolation is yet another challenge: the load on one VM should not affect the performance of other VMs. However, it is difficult to provide performance isolation due to resource contention in a commodity server. Furthermore, different NFs may require different service guarantees, which are difficult to ensure due to the non-deterministic performance behavior of a commodity server. In this thesis we investigate how the challenges of performance, isolation and service guarantees can be addressed for virtual routers (VRs), as an example of a virtualized NF. It is argued that the forwarding path of a VR can be modified in an efficient manner in order to improve forwarding performance. When it comes to performance isolation, poor isolation is observed due to shared network queues and CPU sharing among VRs. We propose a design with SR-IOV, which allows reserving a network queue and a CPU core for each VR. As a result, resource contention is reduced and strong performance isolation is achieved. Finally, it is investigated how average throughput and bounded packet delay can be guaranteed to VRs. We argue that a classic rate-controlled service discipline can be adapted in a virtual environment to achieve service guarantees. We demonstrate that firm service guarantees can be achieved with the little overhead of adding a token bucket regulator to the forwarding path of a VR.
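The token bucket regulator added to the forwarding path can be sketched as follows. This is the generic textbook mechanism, not the thesis's actual implementation:

```python
class TokenBucket:
    """Token-bucket rate regulator: admits traffic at an average `rate`
    (tokens per second) while allowing bursts up to `capacity` tokens."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start with a full bucket
        self.last = 0.0

    def allow(self, now: float, cost: float = 1.0) -> bool:
        """Admit an item of `cost` tokens at time `now`, or reject it."""
        # Refill proportionally to elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

In a per-VR forwarding path, `cost` would typically be the packet length in bytes; the bucket then bounds each VR's average rate while the capacity parameter bounds how far it can burst, which is what makes the delay guarantee computable.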
