  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

Quality-Impact Assessment of Software Products and Services in a Future Internet Platform

Fotrousi, Farnaz January 2015 (has links)
The idea of a Future Internet platform is to deliver reusable, common functionalities that facilitate building a wide range of software products and services. The Future Internet platform introduced by the Future Internet Public Private Partnership (FI-PPP) project makes these common functionalities available through so-called Enablers, which can be integrated into software products and services with less cost and complexity than development from scratch. Assessing the quality of software products and services, and gaining insight into whether that quality fulfills users' expectations within the platform, is challenging. The challenges stem from the propagation of quality through heterogeneous composite software that uses Enablers and infrastructure developed by third parties. The practical problem is how to assess the quality of such composite software, as well as the impact of that quality on users' Quality of Experience (QoE). The research objective is to study an analytics-driven Quality-Impact approach identifying how software quality analytics, together with their impact on users' QoE, can be used to assess software products and services in a Future Internet platform. The research comprises one systematic mapping study, two solution proposals, and one empirical study. The systematic mapping study produced a map of the analytics that are important for managing a software ecosystem. The thesis also proposes a holistic software-human analytics approach for a Future Internet platform and, as its core, a Quality-Impact inquiry approach exemplified with a real practice. As an early validation of the proposals, a mixed qualitative-quantitative empirical study was conducted with the aim of designing a tool for eliciting user feedback; this study examines the effect of the instrumented feedback tool on the QoE of a software product.
The findings of the licentiate thesis show that satisfaction, performance, and freedom-from-risk analytics are important groups of analytics for assessing software products and services. The proposed holistic solution builds on these results by describing how to measure the analytics and how to assess them in practice using a composition model during the lifecycle of products and services in a Future Internet platform. At the core of the holistic approach, the Quality-Impact assessment approach elicits relationships between software quality and its impact on stakeholders. Moreover, the early validation of the Quality-Impact approach parameterized suitable characteristics of a feedback tool: we found that even a disturbing feedback tool has a negligible impact on the perceived QoE of a software product. The Quality-Impact approach helps acquire insight into the success of software products and services, contributing to the health and sustainability of the platform. The approach was adopted as part of the validation of the FI-PPP project. Future work will address validating the Quality-Impact approach in the FI-PPP or other real practices.
322

Estratégia para otimização de offloading entre as redes móveis VLC e LTE baseada em q-learning / Strategy for offloading optimization between mobile networks VLC and LTE based q-learning

SOUTO, Anderson Vinicius de Freitas 31 August 2018 (has links)
Funding: CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior). / The increase in data traffic consumption is driven by the growing number of devices such as smartphones and tablets, since there is a need to be connected with everything and everyone. Applications such as video streaming and online gaming demand higher data transmission rates; this high demand contributes to the overload of radio-frequency-based mobile networks and may culminate in a shortage of the RF spectrum. This work therefore seeks to optimize offloading between LTE and VLC, using a reinforcement-learning methodology known as Q-learning. The algorithm takes as input environment variables related to signal quality, user density, and user speed in order to learn and select the best connection. The simulation results show the efficiency of the proposed methodology compared with the RSS scheme predominant in the literature: QoS metrics show that it supports higher data transmission rates and achieves an 18% improvement in service interruptions as the number of users in the system increases.
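The Q-learning loop the abstract refers to can be illustrated with a small tabular sketch. The state encoding (signal-quality and user-density buckets), the actions (connect via VLC or LTE), and the reward values below are illustrative assumptions made for the example, not the dissertation's actual model.

```python
import random

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update: move Q(s,a) toward
    reward + gamma * max_a' Q(s',a')."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Hypothetical states: (signal, density) buckets; actions: network choice.
states = [("good", "low"), ("good", "high"), ("poor", "low"), ("poor", "high")]
actions = ["VLC", "LTE"]
Q = {s: {a: 0.0 for a in actions} for s in states}

def reward(state, action):
    # Illustrative reward: VLC pays off when the optical signal is good,
    # LTE gives a steady moderate reward.
    signal, _density = state
    if action == "VLC":
        return 1.0 if signal == "good" else -1.0
    return 0.5

random.seed(42)
for _ in range(2000):
    s = random.choice(states)
    a = random.choice(actions)       # pure exploration, for the sketch
    s_next = random.choice(states)   # simplistic transition model
    q_update(Q, s, a, reward(s, a), s_next)

# The learned policy should prefer VLC under good signal, LTE otherwise.
policy = {s: max(Q[s], key=Q[s].get) for s in states}
```

The learned table drives handover decisions: look up the current state bucket and pick the action with the largest Q-value.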
323

Traffic Sensitive Quality of Service Controller

Kumar, Abhishek Anand 14 January 2004 (has links)
Internet applications have varied Quality of Service (QoS) requirements. Traditional applications such as FTP and email are throughput-sensitive, since their quality is primarily affected by the throughput they receive. Delay-sensitive applications such as streaming audio/video and IP telephony are more affected by delay. The current Internet, however, does not provide QoS support and treats packets from all applications as primarily throughput-sensitive. Delay-sensitive applications can, however, sacrifice throughput for delay to obtain better quality. We present a Traffic Sensitive QoS controller (TSQ) that can be used in conjunction with many existing Active Queue Management (AQM) techniques at the router. Applications inform a TSQ-enabled router of their delay sensitivity by embedding a delay hint in the packet header; the delay hint is a measure of an application's delay sensitivity. On receiving packets, the TSQ router provides a lower queueing delay to packets from delay-sensitive applications based on the delay hint. It also increases the drop probability of such applications, thus decreasing their throughput and preventing any unfair advantage over throughput-sensitive applications. We also present quality metrics of some typical Internet applications in terms of delay and throughput; applications are free to choose their delay hints based on the quality they receive. We evaluated TSQ in conjunction with the PI-controller AQM in the Network Simulator (NS-2) and present results showing the improvement in application QoS due to TSQ.
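A toy model of the delay-hint trade-off might look as follows; the hint range (0 = throughput-sensitive, 1 = highly delay-sensitive), the drop-probability curve, and the packet names are assumptions made for illustration, since the abstract does not specify them.

```python
import heapq

class TSQQueue:
    """Sketch of the TSQ idea: packets carrying a larger delay hint are
    dequeued earlier but face a proportionally higher drop probability,
    so they cannot also hog throughput."""

    def __init__(self, base_drop=0.05):
        self.heap = []
        self.counter = 0       # FIFO tie-breaker for equal hints
        self.base_drop = base_drop

    def drop_probability(self, delay_hint):
        # Higher delay sensitivity -> higher drop probability (assumed linear).
        return min(1.0, self.base_drop * (1 + 2 * delay_hint))

    def enqueue(self, packet, delay_hint):
        # Smaller key = served sooner; delay-sensitive packets jump ahead.
        self.counter += 1
        heapq.heappush(self.heap, (-delay_hint, self.counter, packet))

    def dequeue(self):
        if not self.heap:
            return None
        _, _, packet = heapq.heappop(self.heap)
        return packet

q = TSQQueue()
q.enqueue("ftp-chunk", delay_hint=0.0)
q.enqueue("voip-frame", delay_hint=0.9)
q.enqueue("web-page", delay_hint=0.3)
order = [q.dequeue() for _ in range(3)]
```

The VoIP frame is served first despite arriving after the FTP chunk, while its higher drop probability compensates on the throughput side.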
324

Selective Flooding for Better QoS Routing

Kannan, Gangadharan 10 May 2000 (has links)
Quality-of-service (QoS) requirements for the timely delivery of real-time multimedia raise new challenges for the networking world. A key component of QoS is QoS routing which allows the selection of network routes with sufficient resources for requested QoS parameters. Several techniques have been proposed in the literature to compute QoS routes, most of which require dynamic update of link-state information across the Internet. Given the growing size of the Internet, it is becoming increasingly difficult to gather up-to-date state information in a dynamic environment. We propose a new technique to compute QoS routes on the Internet in a fast and efficient manner without any need for dynamic updates. Our method, known as Selective Flooding, checks the state of the links on a set of pre-computed routes from the source to the destination in parallel and based on this information computes the best route and then reserves resources. We implemented Selective Flooding on a QoS routing simulator and evaluated the performance of Selective Flooding compared to source routing for a variety of network parameters. We find Selective Flooding consistently outperforms source routing in terms of call-blocking rate and outperforms source routing in terms of network overhead for some network conditions. The contributions of this thesis include the design of a new QoS routing algorithm, Selective Flooding, extensive evaluation of Selective Flooding under a variety of network conditions and a working simulation model for future research.
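The route-selection step of Selective Flooding can be sketched as follows; the data structures, the bandwidth-only QoS requirement, and the "widest bottleneck wins" rule are illustrative assumptions, not the thesis's actual implementation.

```python
def selective_flood(routes, link_bandwidth, required_bw):
    """Sketch of the Selective Flooding idea: probe the links of a few
    pre-computed routes (here, just iterate), keep the routes whose
    every link satisfies the QoS requirement, and pick the one with the
    largest bottleneck bandwidth."""
    feasible = []
    for route in routes:
        links = list(zip(route, route[1:]))
        bottleneck = min(link_bandwidth[frozenset(l)] for l in links)
        if bottleneck >= required_bw:
            feasible.append((bottleneck, route))
    if not feasible:
        return None  # call would be blocked
    return max(feasible)[0:2][1]  # route with the widest bottleneck

# Hypothetical 4-node topology with two pre-computed A->D routes.
bw = {
    frozenset({"A", "B"}): 10, frozenset({"B", "D"}): 4,
    frozenset({"A", "C"}): 8,  frozenset({"C", "D"}): 6,
}
routes = [["A", "B", "D"], ["A", "C", "D"]]
best = selective_flood(routes, bw, required_bw=5)
```

Route A-B-D is rejected (bottleneck 4 < 5), so A-C-D is chosen and resources would then be reserved along it.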
325

Power control and resource allocation for QoS-constrained wireless networks

Feng, Ziqiang January 2017 (has links)
Developments such as machine-to-machine communications and multimedia services are placing growing demands on high-speed, reliable transmission and on limited wireless spectrum resources. Although multiple-input multiple-output (MIMO) systems have shown the ability to provide reliable transmission in fading channels, it is not practical for single-antenna devices to support MIMO systems due to cost and hardware limitations. Cooperative communication allows single-antenna devices to share their spectrum resources and form a virtual MIMO system, where their quality of service (QoS) may be improved via cooperation. Most cooperative communication solutions are based on fixed spectrum access schemes and thus cannot further improve spectrum efficiency. In order to support more users in the existing spectrum, we consider dynamic spectrum access schemes and cognitive radio techniques in this dissertation. Our work includes the modelling, characterization and optimization of QoS-constrained cooperative networks and cognitive radio networks. QoS constraints such as delay and data rate are modelled. To solve power control and channel resource allocation problems, dynamic power control, matching theory and multi-armed bandit algorithms are employed in our investigations. In this dissertation, we first consider a cluster-based cooperative wireless network utilizing a centralized cooperation model. The dynamic power control and optimization problem is analyzed in this scenario. We then consider a cooperative cognitive radio network utilizing an opportunistic spectrum access model. Distributed spectrum access algorithms are proposed to help secondary users utilize vacant channels of primary users in order to optimize the total utility of the network. Finally, a noncooperative cognitive radio network utilizing the opportunistic spectrum access model is analyzed. In this model, primary users do not communicate with secondary users.
Therefore, secondary users are required to find vacant channels on which to transmit. Multi-armed bandit algorithms are proposed to help secondary users predict the availability of licensed channels. In summary, in this dissertation we consider both cooperative communication networks and cognitive radio networks with QoS constraints. Efficient power control and channel resource allocation schemes have been proposed for optimization problems in different scenarios.
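The multi-armed bandit view of opportunistic spectrum access can be sketched with the classic UCB1 index; the channel count, vacancy probabilities, and binary reward definition below are illustrative assumptions rather than the dissertation's actual algorithms.

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """UCB1 arm selection: balance the empirical availability of each
    channel against an exploration bonus that shrinks as the channel
    is sampled more often."""
    for ch in range(len(counts)):
        if counts[ch] == 0:
            return ch  # sample every channel at least once
    return max(range(len(counts)),
               key=lambda ch: rewards[ch] / counts[ch]
               + math.sqrt(2 * math.log(t) / counts[ch]))

# Hypothetical scenario: 3 licensed channels, vacant with different
# probabilities; the secondary user learns which is most often free.
random.seed(7)
p_vacant = [0.2, 0.8, 0.5]
counts = [0, 0, 0]
rewards = [0.0, 0.0, 0.0]
for t in range(1, 3001):
    ch = ucb1_select(counts, rewards, t)
    vacant = random.random() < p_vacant[ch]   # sense the channel
    counts[ch] += 1
    rewards[ch] += 1.0 if vacant else 0.0

best_channel = max(range(3), key=lambda ch: rewards[ch] / counts[ch])
```

After a few thousand sensing rounds, most transmission attempts concentrate on the channel that is most frequently vacant, which is the behaviour such algorithms exploit to predict licensed-channel availability.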
326

Mécanismes de collaboration entre réseaux et services applicatifs pour l'optimisation des ressources et des services / Collaboration mechanisms between overlays and networks for the optimization of ressources and services

Ellouze, Selim 02 July 2013 (has links)
Dans cette nouvelle ère du numérique, l'accès à l'information est entré désormais dans une autre dimension. Nous assistons à la dominance d'un modèle fondé sur les opportunités offertes par un accès mondialisé à l'Internet et à son application phare : le "World Wide Web". Les services se sont multipliés. Les terminaux se sont diversifiés. Les technologies de transport se sont améliorées. Les attentes se sont élevées. Dans cette spirale que nous nous abstenons de qualifier, les opérateurs se trouvent désormais confrontés à une croissance soutenue du trafic dans leurs réseaux, en grande partie due au transport de flux vidéo. Les fournisseurs de services sur Internet se trouvent aussi concernés par la problématique de la qualité de service dont dépend la satisfaction de leurs utilisateurs. Pour l'ensemble des acteurs, ces nouvelles tendances présentent à la fois des défis et des opportunités. Les défis se concentrent dans la problématique de gestion de la demande croissante du trafic tout en maintenant une qualité d'expérience appropriée pour les utilisateurs. Les opportunités proviendront de l'adéquation entre une demande croissante des services Web en termes de qualité de services et des ressources qui devront supporter la distribution de ces services. Il est crucial pour chaque acteur de bien se positionner dans la chaîne de valeur pour gérer cette adéquation. Le rôle que prendra le réseau support, simple ensemble de tuyaux surdimensionnés, ou bien réseau intelligent offrant des fonctions avancées de contrôle illustre parfaitement cet enjeu. Ces deux alternatives sont respectivement connues sous les termes "dumb-pipe" ou "smart network". Dans cette thèse, nous considérons une nouvelle approche, qui se veut simple, efficace et adaptée pour faire face à ces défis. Les opérateurs réseaux et les fournisseurs de services sont mutuellement gagnants dans l'amélioration du transport de données dans les réseaux tout en continuant à opérer leur propre infrastructure. 
Cette démarche coopérative est le point de départ de nos travaux qui visent à définir un cadre, une architecture et des techniques appropriées qui amèneront ces acteurs à collaborer en vue de gérer conjointement cette problématique. Cette collaboration est nécessaire car chaque acteur quoique prisonnier de ses contraintes peut les transformer en relations contractuelles dans un processus client fournisseur pour l'optimisation de la gestion du trafic. / In this new digital world, driven by the dominance of a model based on the opportunities offered by global access to the Internet and its killer app: the World Wide Web, access to information is becoming a matter of a good experience and responsiveness. We are witnessing the Web services are of increasing popularity. New types of terminals are proposed. Communications technologies are improved. Users expectations are becoming higher. In such a context, network operators are facing serious challenges arising from the management of a massive traffic growth, largely driven by the increasing amount of video streams while internet services providers are also concerned by the issue of providing an adequate quality of experience to their end-users. For both actors, these dual trends present both challenges and opportunities. The challenges arise from the issues of managing the growing demand for traffic while maintaining appropriate quality of experience for users. Opportunities come from a smart management of the increasing demands of Web services in terms of quality of services and of the resources that will support the delivery of these services. It is then crucial for each actor to be well-positioned in the value chain to take part in this process. The role that will be played by the networks, as a basic set of oversized pipes, or as an intelligent network providing advanced management facilities, perfectly illustrates this issue. These two alternatives are respectively known as the "dumb-pipe" or "smart networks". 
In this thesis, we consider a new approach, which is simple, effective and adapted to meet these challenges. Network operators and service providers can mutually benefit from improving the data delivery in the networks while continuing to fully control their infrastructures. This collaborative approach is the starting bloc of our work aiming at defining a framework, an architecture and appropriate procedures to bring these actors to work together to manage this problem. This collaboration is particularly necessary because each actor, though prisoner of its constraints and capacities, can turn them into a contractual relation with the other in a client-supplier process for the optimization of traffic management.
327

Telecom Networks Virtualization : Overcoming the Latency Challenge

Oljira, Dejene Boru January 2018 (has links)
Telecom service providers are adopting a Network Functions Virtualization (NFV) based service delivery model in response to unprecedented traffic growth and increasing customer demand for new high-quality network services. In NFV, telecom network functions are virtualized and run on top of commodity servers. Ensuring network performance equivalent to the legacy non-virtualized system is a determining factor for the success of telecom network virtualization. In virtualized systems, however, achieving carrier-grade network performance such as low latency, high throughput, and high availability to guarantee customers' quality of experience (QoE) is challenging. In this thesis, we focus on addressing the latency challenge. We investigate the delay overhead of virtualization through comprehensive network performance measurements and analysis in a controlled virtualized environment. This provides a breakdown of the latency incurred by virtualization and of the impact of co-locating virtual machines (VMs) with different workloads on the end-to-end latency. We exploit this result to develop an optimization model for the placement and provisioning of virtualized telecom network functions that ensures both latency and cost-efficiency requirements. To further alleviate the latency challenge, we propose a multipath transport protocol, MDTCP, which leverages Explicit Congestion Notification (ECN) to quickly detect and react to incipient congestion, minimizing queuing delays and achieving high network utilization in telecom datacenters. / HITS, 4707
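The abstract does not detail how MDTCP reacts to ECN marks; as a hedged sketch of the general idea, a DCTCP-style proportional reaction, which the same ECN signal commonly drives in datacenter transports, shows how marks can cut the window just enough to keep queues (and thus queueing delay) short. All constants are the usual DCTCP defaults, assumed here for illustration.

```python
def dctcp_alpha_update(alpha, marked_frac, g=1 / 16):
    """EWMA of the fraction of ECN-marked packets per RTT."""
    return (1 - g) * alpha + g * marked_frac

def cwnd_on_ecn(cwnd, alpha):
    """Cut the congestion window in proportion to the estimated extent
    of congestion instead of unconditionally halving it."""
    return max(1.0, cwnd * (1 - alpha / 2))

# Persistent heavy marking drives alpha toward 1, i.e. classic halving;
# light marking produces only a gentle reduction.
alpha = 0.0
for _ in range(200):
    alpha = dctcp_alpha_update(alpha, marked_frac=1.0)

heavy_cut = cwnd_on_ecn(100.0, alpha)   # approaches the classic halving
mild_cut = cwnd_on_ecn(100.0, 0.1)      # gentle reduction under light marking
```

The proportional cut is what lets an ECN-driven transport react early to incipient congestion without sacrificing utilization, which matches the goals the abstract states for MDTCP.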
328

Provider recommendation based on client-perceived performance

Thio, Niko January 2009 (has links)
In recent years the service-oriented design paradigm has enabled applications to be built by incorporating third-party services. With the increasing popularity of this paradigm, many companies and organizations have adopted the technology, which has increased the number and variety of third-party providers. With the vast improvement of global networking infrastructure, a large number of providers offer their services to clients worldwide. As a result, clients are often presented with a number of providers that offer services with the same or similar functionality but differ in non-functional attributes (Quality of Service, or QoS) such as performance. In this environment, provider recommendation has become more important in assisting clients to choose the provider that meets their QoS requirements.

In this thesis we focus on provider recommendation based on one of the most important QoS attributes: performance. Specifically, we investigate client-perceived performance, the application-level performance measured at the client side every time the client invokes the service. This metric has the advantage of accurately representing client experience, compared with the server-side metrics widely used in current frameworks (e.g. Service Level Agreements, or SLAs, in the Web Services context). As a result, provider recommendation based on this metric is favourable from the client's point of view.

We address two key research challenges related to provider recommendation based on client-perceived performance: performance assessment and performance prediction. We begin by identifying the heterogeneity factors that affect client-perceived performance among clients in a global Internet environment. We then perform extensive real-world experiments to evaluate the significance of each factor.

From our findings on heterogeneity factors, we develop a performance estimation technique to address performance assessment in cases where direct measurements are unavailable. The technique is based on generalization, i.e. estimating performance from measurements gathered by similar clients. A two-stage grouping scheme based on the heterogeneity factors identified earlier is proposed to determine client similarity. We then develop an estimation algorithm and validate it using synthetic data as well as real-world datasets.

With regard to performance prediction, we focus on medium-term prediction to address the needs of emerging applications: distinguishing providers based on medium-term (e.g. one to seven days) performance. Such applications arise when providers require a subscription from their clients to access the service. Medium-term prediction is also important in temporal-aware selection, where providers need to be differentiated based on the expected performance in a particular time interval (e.g. during business hours). We investigate the applicability of classical time-series prediction methods, ARIMA and exponential smoothing, as well as their seasonal counterparts, seasonal ARIMA and Holt-Winters. Our results show that these models fail to capture important characteristics of client-perceived performance and thus produce poor medium-term predictions. We then develop a medium-term prediction method specifically designed to account for the key characteristics of a client-perceived performance series, and show that it produces higher medium-term accuracy than the existing methods.

To demonstrate the applicability of our solution in practice, we developed a provider recommendation framework based on client-perceived performance (named PROPPER), which utilizes our findings on performance assessment and prediction. We formulated the recommendation algorithm and evaluated it through a mirror-selection case study. Our framework produces better outcomes in most cases than country-based or geographic-distance-based selection schemes, the current approaches to mirror selection.
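One of the seasonal baselines the thesis evaluates, additive Holt-Winters, can be sketched compactly. The first-season initialization and the smoothing constants below are simplifying assumptions for the example, not values from the thesis.

```python
def holt_winters_additive(series, season_len, alpha=0.3, beta=0.05,
                          gamma=0.2, horizon=None):
    """Minimal additive Holt-Winters smoother: maintain level, trend,
    and seasonal components, then produce a multi-step forecast."""
    horizon = horizon or season_len
    # Simple initialization from the first two seasons.
    level = sum(series[:season_len]) / season_len
    trend = (sum(series[season_len:2 * season_len])
             - sum(series[:season_len])) / season_len ** 2
    seasonal = [series[i] - level for i in range(season_len)]

    for i in range(season_len, len(series)):
        s = seasonal[i % season_len]
        last_level = level
        level = alpha * (series[i] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[i % season_len] = gamma * (series[i] - level) + (1 - gamma) * s

    n = len(series)
    return [level + (h + 1) * trend + seasonal[(n + h) % season_len]
            for h in range(horizon)]

# Synthetic "daily latency" series with a perfectly repeating 7-point
# weekly pattern (the easy case such smoothers handle well).
pattern = [100, 105, 110, 120, 115, 90, 85]
series = [v for _ in range(8) for v in pattern]   # 8 identical weeks
forecast = holt_winters_additive(series, season_len=7)
```

On this idealized series the one-week forecast reproduces the pattern almost exactly; the thesis's point is that real client-perceived performance series violate such clean seasonality, which is why a purpose-built method was needed.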
329

Qos, Classification et Contrôle d'admission des flux TCP / QoS, classification and admission control of TCP flows

Khanafer, Rana 03 1900 (has links) (PDF)
Many developments are currently under way in the Internet, particularly concerning QoS management and the integration of different services. Aspects concerning the improvement of the performance of elastic flows have been somewhat neglected by the scientific community. This work addresses the evaluation and improvement of the performance of elastic flows. More precisely, our studies highlight the importance of ensuring good quality of service for this type of traffic. Two QoS architectures are proposed. The first is based on classification and admission control applied to the two types of TCP flows: short and long. Classifying the flows yields a system that is more predictable and easier to dimension, since the flows of an aggregate manage to share fairly the bandwidth allocated to them within a class. The admission control takes into account the characterization into long and short flows, as well as the QoS constraints specific to each flow type. An analytical model and simulations were developed to evaluate the advantages of the proposed architecture and to analyze the impact of the admission thresholds. Notably, beyond the performance improvement in the cases studied, the proposed approach provides a network dimensioning tool that makes it possible to reach, for a given traffic structure, the expected performance measures. The second architecture is based on preferential treatment, applied to the first packets of each connection and thus favoring short connections. We rely on the DiffServ architecture to classify flows at the edges of a network. More specifically, we maintain the length (in packets) of each active flow at the edge routers and use it to classify incoming packets.
This architecture has the particularity of not requiring per-flow state to be kept in the core of the network. In the core, we use the RED queue management policy with different thresholds for the two classes. This allows us to reduce the loss rate experienced by the packets of short flows. We show, through analysis and simulations, that our model can achieve better fairness and a smaller response time for short flows than models without preferential treatment.
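The edge-classification-plus-RED mechanism described in this abstract can be sketched as follows; the threshold values, the drop-probability curve, and the short-flow packet limit are illustrative assumptions, not the thesis's actual parameters.

```python
def red_drop_prob(avg_queue, min_th, max_th, max_p=0.1):
    """Classic RED drop curve: no drops below min_th, a linear ramp up
    to max_p at max_th, and certain drop above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Hypothetical per-class thresholds: short flows (the first packets of
# each connection) get a more lenient curve than long flows.
THRESHOLDS = {"short": (30, 90), "long": (10, 50)}
SHORT_FLOW_LIMIT = 12  # packets counted at the edge before reclassifying

def classify(packets_seen):
    """Edge-router classification by flow length, DiffServ-style."""
    return "short" if packets_seen <= SHORT_FLOW_LIMIT else "long"

def drop_prob(packets_seen, avg_queue):
    cls = classify(packets_seen)
    return red_drop_prob(avg_queue, *THRESHOLDS[cls])

p_short = drop_prob(packets_seen=5, avg_queue=40)    # lenient curve
p_long = drop_prob(packets_seen=500, avg_queue=40)   # stricter curve
```

At the same average queue length, packets of short flows see a lower drop probability, which is exactly the loss-rate reduction for short flows the abstract claims; the core keeps no per-flow state, only the two threshold pairs.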
330

User-centric session et QoS dynamique pour une approche intégrée du NGN / User-centric session and dynamic QoS for an integrated NGN approach

Wu, Yijun 17 June 2010 (has links) (PDF)
The ability to ensure seamless mobility with end-to-end (E2E) QoS will be crucial to the success of the NGN (Next Generation Network). The obstacles we address in this thesis lie at the intersection of three domains: mobility, heterogeneity, and user preferences. Our first proposal is organizational and functional: we advocate the convergence of the three planes (user, control, and management) and their associated functionalities, thereby obtaining dynamic QoS that satisfies the user-centric approach. To implement E2E QoS, including personalization, within the user-centric session, we propose a protocol-level "dynamic E2E QoS signalling" at the service level, in order to deliver the services requested by the user and to comply with the SLA. To cover any impact of mobility, we then propose a "cross-layer E2E session binding" within our architecture with four levels of visibility (Equipment, Network, Service, and User). Through this binding we ensure the consistency of information across the four visibility levels. Beyond the binding, our contribution on the informational dimension concerns the profiles involved in each stage of the service life cycle, including the QoS criteria, which provide a generic image of the components of the user's system and of all ambient resources. Finally, we demonstrate the feasibility of our contributions through experiments on our platform.
