151

Provider recommendation based on client-perceived performance

Thio, Niko January 2009 (has links)
In recent years the service-oriented design paradigm has enabled applications to be built by incorporating third-party services. With the increasing popularity of this paradigm, many companies and organizations have started to adopt the technology, which has resulted in an increase in the number and variety of third-party providers. With the vast improvement of the global networking infrastructure, a large number of providers offer their services to worldwide clients. As a result, clients are often presented with a number of providers that offer services with the same or similar functionality but differ in non-functional attributes (or Quality of Service, QoS), such as performance. In this environment, provider recommendation has become more important in assisting clients to choose the provider that meets their QoS requirements.

In this thesis we focus on provider recommendation based on one of the most important QoS attributes: performance. Specifically, we investigate client-perceived performance, the application-level performance measured at the client side every time the client invokes the service. Compared to the server-side metrics widely used in current frameworks (e.g. the Service Level Agreement, or SLA, in the Web Services context), this metric has the advantage of accurately representing client experience. As a result, provider recommendation based on this metric is favourable from the client's point of view.

We address two key research challenges related to provider recommendation based on client-perceived performance: performance assessment and performance prediction. We begin by identifying heterogeneity factors that affect client-perceived performance among clients in a global Internet environment. We then perform extensive real-world experiments to evaluate the significance of each factor for client-perceived performance.

From our findings on heterogeneity factors, we develop a performance estimation technique to address performance assessment for cases where direct measurements are unavailable. The technique is based on generalization, i.e. estimating performance from measurements gathered by similar clients. A two-stage grouping scheme based on the heterogeneity factors identified earlier is proposed to determine client similarity. We then develop an estimation algorithm and validate it using synthetic data as well as real-world datasets.

With regard to performance prediction, we focus on medium-term prediction to address an emerging requirement: distinguishing providers based on medium-term (e.g. one- to seven-day) performance. Such applications arise when providers require a subscription from their clients to access the service. Medium-term prediction is also important in temporal-aware selection, where providers need to be differentiated based on the expected performance over a particular time interval (e.g. during business hours). We investigate the applicability of classical time series prediction methods, ARIMA and exponential smoothing, as well as their seasonal counterparts, seasonal ARIMA and Holt-Winters. Our results show that these existing models lack the ability to capture the important characteristics of client-perceived performance, and thus produce poor medium-term predictions.

We then develop a medium-term prediction method specifically designed to account for the key characteristics of a client-perceived performance series, and show that it produces higher medium-term prediction accuracy than the existing methods.

To demonstrate the applicability of our solution in practice, we developed a provider recommendation framework based on client-perceived performance, named PROPPER, which utilizes our findings on performance assessment and prediction. We formulated the recommendation algorithm and evaluated it through a mirror selection case study. Our framework produces better outcomes in most cases than country-based or geographic-distance-based selection schemes, which are the approaches currently used for mirror selection.
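The classical baselines the abstract names (ARIMA and Holt-Winters) are available in standard statistical libraries, so the kind of comparison described can be reproduced in a few lines. The sketch below is only an illustration of those baselines under assumed data, not the thesis's own prediction method or the PROPPER framework; the synthetic hourly `series` stands in for real client-side response-time measurements.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical hourly client-perceived response times (ms) with a
# daily cycle; in the thesis these come from client-side measurements.
rng = np.random.default_rng(0)
hours = pd.date_range("2009-01-01", periods=21 * 24, freq="h")
daily = 50 + 20 * np.sin(2 * np.pi * np.arange(len(hours)) / 24)
series = pd.Series(daily + rng.normal(0, 5, len(hours)), index=hours)

train, test = series[:-7 * 24], series[-7 * 24:]   # 7-day horizon

# Non-seasonal baseline: ARIMA(1,1,1).
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))

# Seasonal baseline: additive Holt-Winters with a 24-hour period.
hw_fc = ExponentialSmoothing(
    train, trend="add", seasonal="add", seasonal_periods=24,
).fit().forecast(len(test))

for name, fc in [("ARIMA", arima_fc), ("Holt-Winters", hw_fc)]:
    mape = float(np.mean(np.abs((test.values - fc.values) / test.values)))
    print(f"{name}: MAPE over 7 days = {mape:.1%}")
```

On data like this the seasonal model tracks the daily cycle far better, which mirrors why the thesis examines seasonal counterparts before proposing its own method.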
152

Quality of Service in Ad Hoc Networks by Priority Queuing / Tjänstekvalitet i ad hoc nät med köprioritering

Tronarp, Otto January 2003 (has links)
The increasing use of information technology in military affairs raises the need for robust, high-capacity radio networks. The network will be used to provide several different types of services, for example group calls and situation awareness services. All services have specific demands on packet delays and packet losses in order to be fully functional, and therefore there is a need for a Quality of Service (QoS) mechanism in the network.

In this master thesis we examine the possibility of providing a QoS mechanism in ad hoc networks by using priority queues. The study covers two different queuing schemes, fixed priority queuing and weighted fair queuing. The performance of the two schemes is evaluated and compared with respect to their ability to provide differentiation in network delay, i.e., to provide high-priority traffic with lower delays than low-priority traffic. The study is mainly done by simulation, but for fixed priority queuing we also derive an analytical approximation of the network delay.

Our simulations show that fixed priority queuing provides a sharp delay differentiation between service classes, while weighted fair queuing gives the ability to control the delay differentiation. Either queuing scheme alone might not be the best solution for providing QoS; instead we suggest that a combination of the two be used.
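Both disciplines are easy to prototype in a slot-based simulation. The sketch below uses assumed arrival rates and weights (not the thesis's simulator or parameters) to contrast the sharp differentiation of fixed priority queuing with the tunable differentiation of a coarse weighted-fair-queuing approximation.

```python
import random
import statistics
from collections import deque

def simulate(scheduler, arrival_rate=0.9, n_packets=50000, weights=(3, 1)):
    """Slot-based single-server queue with two traffic classes.
    'priority' always serves class 0 first; 'wfq' serves the backlogged
    class with the most accumulated weight credit (a coarse
    weighted-fair-queuing approximation, one packet per slot)."""
    random.seed(1)
    queues = (deque(), deque())          # arrival timestamps per class
    delays = ([], [])
    credits = [0.0, 0.0]
    t, generated = 0, 0
    while generated < n_packets or queues[0] or queues[1]:
        if generated < n_packets and random.random() < arrival_rate:
            cls = random.randrange(2)    # both classes equally loaded
            queues[cls].append(t)
            generated += 1
        backlogged = [c for c in (0, 1) if queues[c]]
        if backlogged:
            if scheduler == "priority":
                served = backlogged[0]   # class 0 strictly first
            else:
                for c in backlogged:     # earn credit proportional to weight
                    credits[c] += weights[c]
                served = max(backlogged, key=lambda c: credits[c])
                credits[served] -= sum(weights[c] for c in backlogged)
            delays[served].append(t - queues[served].popleft())
        t += 1
    return [statistics.mean(d) for d in delays]

for sched in ("priority", "wfq"):
    high, low = simulate(sched)
    print(f"{sched}: mean delay high = {high:.1f} slots, low = {low:.1f} slots")
```

Changing the `weights` tuple moves the delay split for the WFQ variant, whereas the fixed-priority split stays extreme regardless, which is the qualitative result the abstract reports.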
153

Real-time Transmission Over Internet

Gao, Qi January 2004 (has links)
With the expansion of the Internet, real-time transmission over the Internet is becoming a promising new application. Successful real-time communication over IP networks requires reasonably reliable, low-delay, low-loss data transport. Since the Internet is an asynchronous packet-switching network, high load and the lack of guarantees on data delivery make real-time communication such as voice and video over IP a challenging application to realize on the Internet.

This thesis work is composed of two parts within real-time voice and video communication: network simulation and measurement on the real Internet. In the network simulation, I investigate the network "overprovisioning" required to reach a certain quality of service. In the experiments on the real Internet, I simulate real-time transmission with UDP packets along two different traffic routes and analyze the quality of service obtained in each case.

The overall contribution of this work is twofold: creating scenarios to understand the concept of overprovisioning and how it affects quality of service, and developing a mechanism to measure the quality of service for real-time traffic provided by the current best-effort network.
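A minimal version of such a measurement can be built directly on UDP sockets. The sketch below assumes a hypothetical echo server address and measures round-trip time and loss with sequence-numbered, timestamped probes; the thesis measures one-way routes, which would additionally require synchronized clocks at both endpoints.

```python
import socket
import struct
import time

def probe(host="192.0.2.1", port=9999, count=100, timeout=0.5):
    """Send sequence-numbered, timestamped UDP probes to an assumed
    echo server and report loss and round-trip delay statistics."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts, lost = [], 0
    for seq in range(count):
        # 4-byte sequence number + 8-byte send timestamp
        sock.sendto(struct.pack("!Id", seq, time.time()), (host, port))
        try:
            data, _ = sock.recvfrom(64)
            _, sent_at = struct.unpack("!Id", data[:12])
            rtts.append((time.time() - sent_at) * 1000.0)  # ms
        except socket.timeout:
            lost += 1
        time.sleep(0.02)            # ~50 probes/s, a gentle probing rate
    if rtts:
        print(f"loss: {lost / count:.1%}, "
              f"mean RTT: {sum(rtts) / len(rtts):.2f} ms, "
              f"max RTT: {max(rtts):.2f} ms")

probe()
```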
154

QoS Routing With Multiple Constraints

Jishnu, A 03 1900 (has links) (PDF)
No description available.
155

A Middleware for Self-Managing Large-Scale Systems

Adam, Constantin January 2006 (has links)
This thesis investigates designs that enable individual components of a distributed system to work together and coordinate their actions towards a common goal. While the basic motivation for our research is to develop engineering principles for large-scale autonomous systems, we address the problem in the context of resource management in server clusters that provide web services. To this end, we have developed, implemented and evaluated a decentralized design for resource management that follows four principles. First, in order to facilitate scalability, each node has only partial knowledge of the system. Second, each node can adapt and change its role at runtime. Third, each node runs a number of local control mechanisms independently and asynchronously from its peers. Fourth, each node dynamically adapts its local configuration in order to optimize a global utility function.

The design includes three fundamental building blocks: overlay construction, request routing and application placement. Overlay construction organizes the cluster nodes into a single dynamic overlay. Request routing directs service requests towards nodes with available resources. Application placement partitions the cluster resources between applications and dynamically adjusts the allocation in response to changes in external load, node failures, etc.

We have evaluated the design using complexity analysis, simulation and prototype implementation. Using complexity analysis and simulation, we have shown that the system is scalable, operates efficiently in steady state, quickly adapts to external events and allows for effective service differentiation by a system administrator. A prototype has been built using accepted technologies (Java, Tomcat) and evaluated using standard benchmarks (TPC-W and RUBiS). The evaluation results show that the behavior of the prototype closely matches that of the simulated design for key metrics related to adaptability and robustness, thereby validating our design and proving its feasibility.
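The fourth principle — optimizing a global utility function through local decisions — can be illustrated with a toy placement loop. The sketch below is not the thesis's middleware; it assumes a concave per-application utility of the form w·log(1+x) (my choice, the thesis leaves the utility to the administrator) and greedily assigns capacity units to whichever application has the highest marginal utility.

```python
def place(capacity, demands, weights):
    """Greedy utility-maximizing placement: hand out capacity units one
    at a time to the application with the highest marginal utility.
    For U_a(x) = w_a * log(1 + x) the marginal gain at allocation x is
    proportional to w_a / (1 + x), so greedy allocation is optimal for
    this concave choice."""
    alloc = {a: 0 for a in demands}
    for _ in range(capacity):
        candidates = [a for a in demands if alloc[a] < demands[a]]
        if not candidates:
            break
        best = max(candidates, key=lambda a: weights[a] / (1 + alloc[a]))
        alloc[best] += 1
    return alloc

# Two web applications competing for 10 server slots; 'gold' is
# weighted 3x, giving the administrator service differentiation.
print(place(10, {"gold": 8, "bronze": 8}, {"gold": 3.0, "bronze": 1.0}))
```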
156

A model to measure and evaluate the quality of service of healthcare systems in a gynecological clinic : A Case Study

Mete, Çiğdem, Dönmez, Selin January 2013 (has links)
Companies today face strong competition, and quality is widely accepted as one of the most important differentiators: a company that delivers high-quality products or services can survive in this environment. Whether quality is judged high or low, however, depends on tolerance limits set by customers, and these vary from person to person. Although the first thing that comes to mind when quality is mentioned is usually product quality, quality matters not only for manufacturing companies but also for service companies. To understand whether the products or services a company provides satisfy customers' needs, quality must be measured. Yet while quality measurement in manufacturing relies on experiments, observations and tangible data, service companies lack such tangible data, which is why many models and qualitative approaches have been developed to measure service quality.

Within the scope of this thesis, a model has been developed to measure the current quality and evaluate the perceived quality of a healthcare company. The model consists of two main parts: functional quality and technical quality. In addition, socio-demographic attributes are considered when determining perceived quality. To implement the functional quality part, a survey about the clinic was created and sent to patients, from whom the data on perceived quality was gathered. To implement the technical quality part, interviews with clinic staff were conducted, and the clinic's quality improvement efforts, permissions and certificates were identified. The model was tested in the case company and found acceptable. It measures the clinic's current service quality and reveals the areas responsible for low quality, so the spots that need improvement can be identified.
157

A study on the quality of service in a non-profit organization : a case study of the Macau Federation of Trade Unions (MFTU)

Lo, Pui Hong January 2007 (has links)
University of Macau / Faculty of Business Administration / Department of Management and Marketing
158

Mechanisms to Reduce Routing Information Inaccuracy Effects: Application to MPLS and WDM Networks

Masip Bruin, Xavier 07 October 2003 (has links)
Traditional IP networks are based on the best-effort model to transport traffic flows between network clients. Since this model cannot properly support the requirements demanded by several emerging real-time applications (such as video on demand, multimedia conferences or virtual reality), some modifications in the network structure, mainly oriented to optimising network performance, are required in order to provide Quality of Service (QoS) guarantees.

Traffic Engineering is an excellent framework to achieve these network enhancements. There are two main aspects in this context that strongly interact with network performance: switching mechanisms and routing mechanisms. On one hand, a quick switching mechanism is required to reduce the processing time in the intermediate nodes. In IP networks this behaviour is obtained by introducing Multiprotocol Label Switching (MPLS). On the other hand, a powerful routing mechanism that includes QoS attributes when selecting routes (QoS routing) is also required.

Focusing on the latter aspect, most QoS routing algorithms select paths based on the information contained in the network state databases stored in the network nodes. Because of this, routing mechanisms must include an updating mechanism to guarantee that the network state information represents the current network state. Since network state (topology) changes are not produced very often in conventional IP networks without QoS capabilities, most updating mechanisms are based on a periodic refresh.

In contrast, highly dynamic, large IP/MPLS networks with QoS capabilities need a finer updating mechanism. Such a mechanism generates significant, undesirable signalling overhead if accurate network state information is to be maintained. To reduce the signalling overhead, triggering policies are used. The main function of a triggering policy is to determine when a network node must advertise changes in its directly connected links to other network nodes. As a consequence of reduced signalling, the information in the network state databases might not represent an accurate picture of the actual network state. Hence, path selection may be done according to inaccurate routing information, which could cause both non-optimal path selection and an increase in connection blocking frequency.

This thesis deals with this routing inaccuracy problem, introducing new mechanisms to reduce its effects on global network performance when selecting explicit paths under inaccurate routing information. Two network scenarios are considered, namely current IP/MPLS networks and future WDM networks, and one routing mechanism is suggested per scenario: BYPASS Based Routing (BBR) for IP/MPLS and BYPASS Based Optical Routing (BBOR) for WDM networks. Both mechanisms are based on a common concept, defined as dynamic bypass.

According to the dynamic bypass concept, whenever an intermediate node along the selected path (unexpectedly) does not have enough resources to cope with the incoming MPLS/optical-path demand requirements, it has the capability to reroute the set-up message through alternative pre-computed paths (bypass-paths). Therefore, in IP/MPLS networks the BBR mechanism applies the dynamic bypass concept to the incoming LSP demands under bandwidth constraints, and in WDM networks the BBOR mechanism applies the dynamic bypass concept when selecting light-paths (i.e., selecting the proper wavelength in both wavelength-selective and wavelength-interchangeable networks). The applicability of the proposed BBR and BBOR mechanisms is validated by simulation and compared with existing methods in their respective network scenarios. These network scenarios have been selected so that the obtained results may be extrapolated to a realistic network.
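The dynamic bypass idea can be sketched as a small path-walking routine. In the code below the topology, bandwidth map and pre-computed bypass-paths are hypothetical stand-ins; the real BBR operates on distributed set-up message signalling rather than a centralized function.

```python
def setup_with_bypass(path, bypasses, available_bw, demand):
    """Walk an explicit path hop by hop.  If the outgoing link
    unexpectedly lacks bandwidth, splice in the bypass-path the source
    pre-computed for that link; block the request if no feasible
    bypass exists."""
    def feasible(p):
        return all(available_bw.get((p[i], p[i + 1]), 0) >= demand
                   for i in range(len(p) - 1))
    route, i = [path[0]], 0
    while i < len(path) - 1:
        link = (path[i], path[i + 1])
        if available_bw.get(link, 0) >= demand:
            route.append(path[i + 1])
            i += 1
        else:
            alt = bypasses.get(link)
            if alt is None or not feasible(alt) or alt[-1] not in path:
                return None                   # connection blocked
            route.extend(alt[1:])             # detour around the link
            i = path.index(alt[-1])           # rejoin the explicit path
    return route

# Toy example: A-B-C-D with a congested B-C link and a bypass B-E-C.
bw = {("A", "B"): 10, ("B", "C"): 1, ("C", "D"): 10,
      ("B", "E"): 10, ("E", "C"): 10}
print(setup_with_bypass(["A", "B", "C", "D"],
                        {("B", "C"): ["B", "E", "C"]}, bw, demand=5))
# -> ['A', 'B', 'E', 'C', 'D']
```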
159

Offset time-emulated architecture for optical burst switching-modelling and performance evaluation

Klinkowski, Miroslaw 14 February 2008 (has links)
The fact that the Internet is a packet-based, connection-less network is the main driver to develop a data-centric transport network. In this context, optical burst switching (OBS) technology is considered a promising solution for reducing the gap between transmission and switching speeds in future networks.

This thesis presents the analysis, modelling and evaluation of the OBS network with emulated offset-time provisioning (E-OBS). E-OBS defines an OBS network architecture to transport and switch optical data bursts in a core network. Contrary to a conventional offset-time provisioning (C-OBS) architecture, where the transmission offset time is introduced at the edge node, in an E-OBS network the offset time is provided at each core node by means of an additional fibre delay element. The architecture is motivated by several drawbacks inherent to C-OBS architectures. It should be pointed out that E-OBS has not been studied intensively in the literature; the concept has been considered only occasionally.

Due to the limitations of optical processing and queuing, OBS networks need special treatment to solve problems typical of data-centric networks. Contention resolution in the optical domain, together with quality of service (QoS) provisioning for quality-demanding services, are the main design issues when developing OBS networks. Another important aspect is the routing problem, which concerns effective balancing of traffic load so as to reduce burst congestion at overloaded links. Accounting for these requirements, the design objectives for the E-OBS architecture are (i) feasibility of offset-time provisioning, (ii) an overall high quality of service, and (iii) reduction of network congestion. These objectives are achieved by combining selected concepts and strategies with appropriate system design and network traffic engineering.

The contributions of this thesis can be summarized as follows.

- At the beginning, we introduce the principles of E-OBS operation and demonstrate that C-OBS possesses many drawbacks that can easily be avoided in E-OBS. Some of the discussed issues are the problem of unfairness in resource reservation, difficulty with alternative routing, complexity of resource reservation algorithms, efficiency of burst scheduling, and complexity in QoS provisioning. The feasibility of E-OBS operation is investigated as well; in this context, the impact of congestion in the control plane on OBS operation is studied. As a result, we confirm the feasibility of E-OBS operation with commercially available fibre delay elements.

- Then, we provide both a qualitative and a quantitative comparison of the QoS mechanisms most frequently addressed in the literature. As an outcome, a burst preemption mechanism, which is characterized by the highest overall performance, is qualified for operating in E-OBS. Since the preemptive mechanism may produce overbooking of resources in an OBS network, we address this issue as well and propose the preemption window mechanism to solve the problem. An analytical model of the mechanism confirms the correctness of our solution.

- Finally, we address the routing problem; our routing objective is to help the contention resolution algorithms reduce burst losses. We propose and evaluate two isolated alternative routing algorithms designed for labelled E-OBS networks. Then we study multi-path source routing and use network optimization theory to improve it. The presented formulae for partial derivatives, to be used in a non-linear optimization problem, are straightforward and very fast to compute. This makes the proposed non-linear optimization method a viable alternative to linear programming formulations based on piecewise linear approximations.

Concluding, E-OBS is shown to be a feasible OBS network architecture with profitable functionality, to support QoS provisioning efficiently, and to be able to operate with different routing strategies and effectively reduce network congestion.
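The difference between the two offset-provisioning schemes reduces to where the delay is inserted. A minimal numeric sketch, with hypothetical per-hop header-processing delays and a hypothetical guard factor, shows that C-OBS must dimension the offset at the edge for the whole path, while E-OBS re-creates a one-hop offset locally at each core node with a fibre delay element.

```python
def cobs_edge_offset(num_hops, proc_delay_us):
    """Conventional OBS: the edge node delays the burst behind its
    control packet long enough to cover header processing at every
    downstream hop, so the offset depends on path length and shrinks
    hop by hop."""
    return num_hops * proc_delay_us

def eobs_fdl_delay(proc_delay_us, guard=1.2):
    """Emulated-offset OBS: each core node passes the burst through a
    fibre delay line sized for one hop's processing time (guard is a
    hypothetical safety margin), so the offset is path-independent."""
    return proc_delay_us * guard

# Assumed numbers: 10 us header processing, 8-hop path.
print("C-OBS edge offset:", cobs_edge_offset(8, 10.0), "us")
print("E-OBS per-node FDL:", eobs_fdl_delay(10.0), "us")
```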
160

Improving Cache Behavior in CMP Architectures through Cache Partitioning Techniques

Moretó Planas, Miquel 19 March 2010 (has links)
Microprocessor design has changed significantly in the last few decades, moving from simple in-order single-core architectures to superscalar and vector architectures that extract the maximum available instruction-level parallelism. Executing several instructions from the same thread in parallel significantly improves application performance. However, only a limited amount of parallelism is available in each thread because of data and control dependences. Furthermore, designing a high-performance, single, monolithic processor has become very complex due to power and chip latency constraints. These limitations have motivated the use of thread-level parallelism (TLP) as a common strategy for improving processor performance. Multithreaded processors execute different threads at the same time, sharing some hardware resources. Several flavors of multithreaded processors exploit TLP, such as chip multiprocessors (CMP), coarse-grain multithreading, fine-grain multithreading, simultaneous multithreading (SMT), and combinations of them.

To improve cost and power efficiency, the computer industry has adopted multicore chips. In particular, CMP architectures have become the most common design choice (sometimes combined with multithreaded cores). First, CMPs reduce design costs and average power consumption by promoting design re-use and simpler processor cores; for example, it is less complex to design a chip with many small, simple cores than a chip with fewer, larger, monolithic cores. Moreover, simpler cores have fewer power-hungry centralized hardware structures. Second, CMPs reduce costs by improving hardware resource utilization: on a multicore chip, co-scheduled threads can share costly microarchitecture resources that would otherwise be underutilized. Higher resource utilization improves aggregate performance and enables lower-cost design alternatives.

One of the resources with the greatest impact on the final performance of an application is the cache hierarchy. Caches store data recently used by applications in order to take advantage of temporal and spatial locality, providing fast access to data and improving application performance. Caches with low latencies have to be small, which prompts a cache hierarchy organized into several levels. In CMPs, the cache hierarchy is normally organized into a first level (L1) of instruction and data caches private to each core, and a last-level cache (LLC) shared among the cores (L2, L3 or both). Shared caches increase resource utilization and system performance. Large caches improve performance and efficiency by increasing the probability that each application can access data from a closer level of the cache hierarchy; they also allow an application to make use of the entire cache if needed.

A second advantage of a shared cache in a CMP design has to do with cache coherency. In parallel applications, different threads share the same data and keep a local copy in their caches. With multiple processors, one processor may change the data, leaving another processor's cache with outdated data. The cache coherency protocol monitors changes to data and ensures that all processor caches have the most recent version. When the parallel application executes on the same physical chip, the cache coherency circuitry can operate at the speed of on-chip communication, rather than the much slower chip-to-chip communication required with discrete processors on separate chips. These coherence protocols are simpler to design with a unified, shared level of cache on chip.

Due to these advantages, chip vendors use CMP architectures in current high-performance, network, real-time and embedded systems, and several commercial processors share a level of the cache hierarchy among cores. For example, the Sun UltraSPARC T2 has a 16-way 4MB L2 cache shared by 8 cores, each up to 8-way SMT. Processors in the Intel Core 2 family share up to a 12MB 24-way L2 cache. In contrast, the AMD K10 family has a private L2 cache per core and a shared L3 cache of up to 6MB and 64 ways.

As the long-term trend of increasing integration continues, the number of cores per chip is projected to increase with each successive technology generation, and significant studies project processors with hundreds of cores per chip in the coming years. The manycore era has already begun. Although this era provides many opportunities, it also presents many challenges. In particular, higher hardware resource sharing among concurrently executing threads can make an individual thread's performance unpredictable and may lead to violations of individual applications' performance requirements. Current resource management mechanisms and policies are no longer adequate for future multicore systems.

Some applications present low re-use of their data and pollute caches with data streams (for example multimedia, communications or streaming applications), or have many compulsory misses that cannot be solved by assigning more cache space to the application. Traditional eviction policies such as Least Recently Used (LRU), pseudo-LRU or random are demand-driven; that is, they tend to give more space to the application with more accesses to the cache hierarchy. When no direct control over shared resources is exercised (the last-level cache in this case), a particular thread may allocate most of the shared resources, degrading the other threads' performance. As a consequence, high resource sharing and utilization can cause systems to become unstable and violate individual applications' requirements. If we want to provide Quality of Service (QoS) to applications, we need to enhance the control over shared resources and enrich the collaboration between the OS and the architecture.

In this thesis, we propose software and hardware mechanisms to improve cache sharing in CMP architectures. We take a holistic approach, coordinating software and hardware targets to improve aggregate system performance and provide QoS to applications. We use explicit resource allocation techniques to control the shared cache in a CMP architecture, with allocation targets driven by hardware and software mechanisms.

The main contributions of this thesis are the following:

- We have characterized different single- and multithreaded applications and classified workloads with a systematic method to better understand and explain cache sharing effects on a CMP architecture. We have made a special effort to study previous cache partitioning techniques for CMP architectures, in order to acquire the insight to propose improved mechanisms.

- In CMP architectures with out-of-order processors, cache misses can be served in parallel and share the miss penalty of accessing main memory. We take this fact into account to propose new cache partitioning algorithms guided by the memory-level parallelism (MLP) of each application. With these algorithms, system performance is improved (in terms of throughput and fairness) without significantly increasing the hardware required by previous proposals.

- Driving cache partition decisions with indirect indicators of performance such as misses, MLP or data re-use may lead to suboptimal cache partitions. Ideally, the metric driving cache partitions should be the target metric to optimize, which is normally related to IPC. Thus, we have developed a hardware mechanism, OPACU, which obtains accurate run-time predictions of the performance of an application when running with different cache assignments.

- Using performance predictions, we have introduced a new framework to manage shared caches in CMP architectures, FlexDCP, which allows the OS to optimize different IPC-related target metrics such as throughput or fairness and to provide QoS to applications. FlexDCP enables enhanced coordination between the hardware and software layers, leading to improved system performance and flexibility.

- Next, we have used performance estimations to reduce the load imbalance problem in parallel applications. We have built a run-time mechanism that detects parallel applications sensitive to cache allocation and, in those situations, reduces load imbalance by assigning more cache space to the slowest threads. This mechanism helps reduce the long optimization time, in terms of man-years of effort, devoted to large-scale parallel applications.

- Finally, we have stated the main characteristics that future multicore processors with thousands of cores should have, and proposed enhanced coordination between the software and hardware layers to better manage the shared resources in these architectures.
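The utility-based partitioning schemes the thesis builds on can be summarized in a few lines. The sketch below implements the classic greedy way-partitioning loop over profiled miss curves; it is a baseline illustration only, not the thesis's MLP-aware algorithms, OPACU or FlexDCP, and the miss curves are made-up numbers.

```python
def partition_ways(total_ways, miss_curves, min_ways=1):
    """Greedy utility-based cache partitioning: after a minimum
    allocation per application, assign the remaining ways one at a
    time to the application whose (profiled) miss count drops the
    most.  miss_curves[a][w] = misses of app a when given w ways;
    each curve is assumed defined up to total_ways."""
    apps = list(miss_curves)
    alloc = {a: min_ways for a in apps}
    for _ in range(total_ways - min_ways * len(apps)):
        best = max(apps, key=lambda a: miss_curves[a][alloc[a]]
                                       - miss_curves[a][alloc[a] + 1])
        alloc[best] += 1
    return alloc

# A cache-hungry app vs. a streaming app with no re-use (flat curve):
# demand-driven LRU would let the stream pollute the shared cache,
# whereas utility-based partitioning gives the ways to 'hungry'.
curves = {
    "hungry":    [900, 700, 520, 380, 270, 190, 140, 110, 100],
    "streaming": [500, 495, 490, 488, 487, 486, 486, 486, 486],
}
print(partition_ways(8, curves))   # -> {'hungry': 7, 'streaming': 1}
```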
