441 |
Towards time domain invariant QoS measures for queues with correlated traffic
Li, W., Kouvatsos, Demetres D., Fretwell, Rod J. 25 June 2014 (has links)
An investigation is carried out on the nature of QoS measures for queues with correlated traffic in both discrete and continuous time domains. The study focuses on the single server GI(G)/M-[x]/1/N and GI(G)/Geo([x])/1/N queues with finite capacity, N, a general batch renewal arrival process (BRAP), GI(G), and either batch Poisson, M-[x], or batch geometric, Geo([x]), service times with general batch sizes, X. Closed form expressions for QoS measures, such as queue length and waiting time distributions and blocking probabilities, are stochastically derived and shown to be, essentially, time domain invariant. Moreover, the sGGeo/Geo/1/N queue with a shifted generalised geometric (sGGeo) distribution is employed to assess the adverse impact of varying degrees of traffic correlation upon basic QoS measures and, consequently, illustrative numerical results are presented. Finally, the global balance queue length distribution of the M-Geo/M-Geo/1/N queue is devised and reinterpreted in terms of the information theoretic principle of entropy maximisation. (C) 2014 Elsevier Inc. All rights reserved.
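For readers who want to experiment with the kind of finite-capacity model this abstract analyses, the sketch below solves a plain discrete-time Geo/Geo/1/N queue numerically from its global balance equations and reads off two of the QoS measures discussed (the queue length distribution and a blocking estimate). It is an illustrative baseline with assumed Bernoulli arrival and service probabilities, not the paper's closed-form batch-renewal derivation.

    import numpy as np

    def geo_geo_1_n(p, s, N):
        # Stationary queue-length distribution of a discrete-time Geo/Geo/1/N
        # queue: Bernoulli arrivals (prob p), Bernoulli service completions
        # (prob s), buffer capacity N, solved from the global balance equations.
        P = np.zeros((N + 1, N + 1))
        for n in range(N + 1):
            arr = p if n < N else 0.0        # arrivals are blocked when full
            dep = s if n > 0 else 0.0        # a departure needs a customer present
            P[n, min(n + 1, N)] += arr * (1 - dep)        # arrival only
            P[n, max(n - 1, 0)] += dep * (1 - arr)        # departure only
            P[n, n] += arr * dep + (1 - arr) * (1 - dep)  # both or neither
        # solve pi = pi P together with the normalisation sum(pi) = 1
        A = np.vstack([P.T - np.eye(N + 1), np.ones(N + 1)])
        b = np.zeros(N + 2)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

    pi = geo_geo_1_n(p=0.3, s=0.4, N=10)
    print("P(queue full), a proxy for blocking:", pi[-1])
    print("mean queue length:", sum(n * q for n, q in enumerate(pi)))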
|
442 |
Arquitecturas para la computación de altas prestaciones en la nube. Aplicación a procesos de geometría computacional [Architectures for high-performance computing in the cloud: application to computational geometry processes]
Sánchez-Ribes, Víctor 03 March 2024 (has links)
Cloud computing is one of the technologies shaping today's world, and companies must make use of it to remain competitive in a globalised market. The traditional manufacturing sectors (footwear, furniture and toys, among others) are characterised mainly by intensive design and manufacturing work in the production of new seasonal products. This work is carried out with 3D modelling and manufacturing software, commonly known as "CAD/CAM" software, which is based mainly on the application of modelling primitives and geometric computation. Computation offloading is the method used to move the processing load to the cloud. This technique brings many advantages to design and manufacturing processes: a reduced up-front cost for small and medium-sized enterprises that need large computing capacity, a highly flexible infrastructure that provides adjustable computing power, the provision of "CAD/CAM" computing services to designers worldwide, etc. However, offloading geometric computation to the cloud involves several challenges that must be overcome for the proposal to be viable. The aim of this work is to explore new ways of exploiting specialised devices and improving the capabilities of "GPUs" by reviewing and comparing the available parallel programming techniques, and to propose the optimal configuration of the "Cloud" architecture and the development of applications that improve the degree of parallelisation of specialised processing devices, serving as a basis for their greater exploitation in the cloud by small and medium-sized enterprises. Finally, this work presents the experiments used to validate the proposal, both at the level of the communication architecture and of "GPU" programming, and draws conclusions from this experimentation.
|
443 |
Public-private partnership in the provision of secondary education in the Gaborone City area of Botswana
Sedisa, Kitso Nkaiwa 30 June 2008 (has links)
Public sector organisations are established in order to promote the quality of citizens' lives through the provision of public services. However, the demands for public services often outstrip the limited resources at the disposal of the public sector for the delivery of such services. Public-private partnerships (PPPs) are emerging as an important tool of public policy to deliver public infrastructure and the attendant services.
The main aim of this study is to establish the extent to which PPPs can be used to improve the quality of the delivery of secondary education in the Gaborone City area in Botswana. The study includes a conceptual analysis of the nature of the public services in general, and in particular, the nature and the provision of secondary education in Botswana with specific reference to the Gaborone City area. The study also includes a conceptual analysis of PPPs as gleaned from published literature. Various dimensions of PPPs are analysed and these include but are not limited to definitions, benefits, models and the antecedents for the successful implementation of PPPs. Among the various models that are analysed in the study, the design, build, operate and finance (DBOF) model is preferred for improving the quality of the delivery of secondary education in the Gaborone City area in Botswana.
In addition to the conceptual analysis, an empirical research study is undertaken in which the secondary school heads are the respondents to a structured questionnaire. The results of the empirical research support the conceptual analysis to the extent that in both cases, it is possible to improve the quality of the delivery of secondary education through PPPs. More secondary schools can be built and more facilities made available to schools. Through the use of PPPs, most if not all learners can receive the entire secondary education programme, from junior to senior secondary education. Existing secondary schools can be modernised through PPPs. Ancillary services can be delivered by organisations that have the necessary expertise. Certain antecedents are necessary for the successful implementation of PPPs. Through PPPs, secondary schools can be made attractive and intellectually stimulating. / Public Administration / (D.Litt. et Phil. (Public Administration))
|
444 |
Some new localized quality of service models and algorithms for communication networks: the development and evaluation of new localized quality of service routing algorithms and path selection methods for both flat and hierarchical communication networks
Mustafa, Elmabrook B. M. January 2009 (has links)
The Quality of Service (QoS) routing approach is gaining increasing interest in the Internet community due to new emerging Internet applications such as real-time multimedia applications. These applications require better levels of quality of service than those supported by best-effort networks, so providing such services is crucial to the many real-time and multimedia applications that have strict QoS requirements regarding bandwidth and timeliness of delivery. QoS routing is a major component in any QoS architecture and has therefore been studied extensively in the literature. Scalability is considered one of the major issues in designing efficient QoS routing algorithms due to the high cost of QoS routing, both in terms of computational effort and communication overhead. Localized QoS routing is a promising approach to overcome the scalability problem of the conventional QoS routing approach: it eliminates the communication overhead because it does not need global network state information. The main aim of this thesis is to contribute to the localized routing area by proposing and developing some new models and algorithms. Toward this goal we make the following major contributions. First, a scalable and efficient QoS routing algorithm based on a localized approach to QoS routing has been developed and evaluated. Second, we have developed a path selection technique that can be used with existing localized QoS routing algorithms to enhance their scalability and performance. Third, a scalable and efficient hierarchical QoS routing algorithm based on a localized approach to QoS routing has been developed and evaluated.
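The thesis's own algorithms are not reproduced in this record, but the flavour of localized QoS routing is easy to convey: each source keeps purely local statistics about a few candidate paths and biases flow placement toward paths that have recently admitted flows, with no link-state flooding. The sketch below is a generic, hedged illustration of that idea, in the spirit of credit-based and proportional-sticky schemes from the localized routing literature; the path names, optimistic initial counts and exploration floor are all assumptions.

    import random

    class LocalizedRouter:
        # Toy localized QoS path selection: keep per-path acceptance statistics
        # locally (no global network state) and favour paths that have been
        # admitting flows successfully.

        def __init__(self, candidate_paths, min_share=0.05):
            self.paths = list(candidate_paths)
            self.accepted = {p: 1 for p in self.paths}   # optimistic start
            self.attempted = {p: 1 for p in self.paths}
            self.min_share = min_share                   # keep exploring cold paths

        def select_path(self):
            # weight each path by its locally observed acceptance ratio
            weights = [max(self.accepted[p] / self.attempted[p], self.min_share)
                       for p in self.paths]
            return random.choices(self.paths, weights=weights, k=1)[0]

        def report(self, path, admitted):
            # update local statistics after flow setup succeeds or is blocked
            self.attempted[path] += 1
            if admitted:
                self.accepted[path] += 1

    router = LocalizedRouter(["P1", "P2", "P3"])
    path = router.select_path()
    router.report(path, admitted=True)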
|
445 |
Performance modelling and analysis of congestion control mechanisms for communication networks with quality of service constraints: an investigation into new methods of controlling congestion and mean delay in communication networks with both short range dependent and long range dependent traffic
Fares, Rasha Hamed Abdel Moaty January 2010 (has links)
Active Queue Management (AQM) schemes are used for ensuring Quality of Service (QoS) in telecommunication networks. However, they are sensitive to parameter settings and have weaknesses in detecting and controlling congestion under dynamically changing network situations. Another drawback of existing AQM algorithms is that they have been applied only to Markovian models, which are Short Range Dependent (SRD) traffic models. However, traffic measurements from communication networks have shown that network traffic can exhibit self-similar as well as Long Range Dependent (LRD) properties. Therefore, it is important to design new algorithms not only to control congestion but also to have the ability to predict the onset of congestion within a network. An aim of this research is to devise new congestion control methods for communication networks that make use of traffic characteristics, such as LRD, that have not previously been employed in the congestion control methods currently used in the Internet. A queueing model with a number of ON/OFF sources has been used, and this incorporates a novel congestion prediction algorithm for AQM. The simulation results have shown that applying the algorithm can provide better performance than an equivalent system without the prediction. Modifying the algorithm by the inclusion of a sliding window mechanism has been shown to further improve the performance in terms of controlling the total number of packets within the system and improving the throughput. Also considered is the important problem of maintaining QoS constraints, such as mean delay, which is crucially important in providing satisfactory transmission of real-time services over multi-service networks like the Internet, which were not originally designed for this purpose. An algorithm has been developed to provide a control strategy that operates on a buffer which incorporates a moveable threshold. The algorithm controls the mean delay by dynamically adjusting the threshold, which, in turn, controls the effective arrival rate by randomly dropping packets. This work has been carried out using a mixture of computer simulation and analytical modelling, and the performance of the new methods is evaluated using both approaches.
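As a rough illustration of the moveable-threshold strategy described above, the following sketch shows a buffer that drops arrivals at random once occupancy exceeds a threshold, and periodically nudges that threshold so that the measured mean delay tracks a target. The parameter names, drop law and gain are illustrative assumptions, not the thesis's calibrated values.

    import random

    class ThresholdAQM:
        # Buffer with a moveable threshold: packets above the threshold are
        # dropped at random, and the threshold is adjusted so that measured
        # mean delay tracks a target.

        def __init__(self, capacity=100, target_delay=20.0, gain=0.5):
            self.capacity = capacity
            self.threshold = capacity // 2
            self.target_delay = target_delay
            self.gain = gain
            self.queue = 0

        def on_arrival(self):
            if self.queue >= self.capacity:
                return False                 # hard loss: buffer full
            if self.queue > self.threshold:
                # random early drop, more aggressive further past the threshold
                excess = (self.queue - self.threshold) / (self.capacity - self.threshold)
                if random.random() < excess:
                    return False
            self.queue += 1
            return True

        def on_departure(self):
            self.queue = max(0, self.queue - 1)

        def update_threshold(self, measured_mean_delay):
            # move the threshold down when delay is too high (drop more),
            # up when there is slack (drop less)
            error = measured_mean_delay - self.target_delay
            self.threshold = int(min(self.capacity - 1,
                                     max(1, self.threshold - self.gain * error)))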
|
446 |
Exploring tradeoffs in wireless networks under flow-level traffic: energy, capacity and QoS
Kim, Hongseok 21 June 2010 (has links)
Wireless resources are scarce, shared and time-varying, making resource allocation mechanisms, e.g., scheduling, a key and challenging element of wireless system design. In designing good schedulers, we consider three types of performance metrics: system capacity, quality of service (QoS) seen by users, and the energy expenditures (battery lifetimes) incurred by mobile terminals. In this dissertation we investigate the impact of scheduling policies on these performance metrics, their interactions, and/or tradeoffs, and we specifically focus on flow-level performance under stochastic traffic loads. In the first part of the dissertation we evaluate interactions among flow-level performance metrics when integrating QoS and best effort flows in a wireless system using opportunistic scheduling. We introduce a simple flow-level model capturing the salient features of bandwidth sharing for an opportunistic scheduler that ensures a mean throughput to each QoS stream in every time slot. We show that the integration of QoS and best effort flows results in a loss of opportunism, which in turn results in a reduction of the stability region, degradation in system capacity, and increased file transfer delay. In the second part of the dissertation we study several ways in which mobile terminals can backoff on their uplink transmit power (thus slow down their transmissions) in order to extend battery lifetimes. This is particularly effective when a wireless system is underloaded, so the degradation in the users' perceived performance can be negligible. The challenge, however, is developing a mechanism that achieves a good tradeoff among
transmit power, idling/circuit power, and the performance customers will see. We consider systems with flow-level dynamics supporting either real-time or best effort (e.g., file transfers) sessions. We show that significant energy savings can be achieved by leveraging dynamic spare capacity. We then extend our study to the case where mobile terminals have multiple transmit antennas. In the third part of the dissertation we develop a framework for user association in infrastructure-based wireless networks, specifically focused on adaptively balancing flow loads given spatially inhomogeneous traffic distributions. Our work encompasses several possible user association objective functions resulting in rate-optimal, throughput-optimal, delay-optimal, and load-equalizing policies, which we collectively denote α-optimal user association. We prove that the optimal load vector that minimizes this objective function is the fixed point of a certain mapping. Based on this mapping we propose an iterative distributed user association policy and prove that it converges to the globally optimal decision in steady state. In addition we address admission control policies for the case where the system cannot be stabilized.
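The fixed-point iteration behind α-optimal user association can be sketched compactly. In the version below, each user re-associates with the base station maximising c(u,b)·(1 - ρ_b)^α given the current load vector, loads are recomputed, and the update is damped until it settles. The association rule follows the published formulation of this line of work; the damping constant, rates and traffic intensities are illustrative assumptions.

    import numpy as np

    def alpha_optimal_association(c, gamma, alpha=1.0, iters=200, damping=0.3):
        # c[u, b]  : achievable rate of user u at base station b
        # gamma[u] : mean traffic (bits/s) offered by user u
        # Users re-associate with the BS maximising c[u,b] * (1 - rho[b])**alpha,
        # and BS loads rho are updated (with damping) until they stop moving.
        n_users, n_bs = c.shape
        rho = np.zeros(n_bs)
        choice = np.zeros(n_users, dtype=int)
        for _ in range(iters):
            # each user's preferred BS given the current load vector
            score = c * (1.0 - rho[None, :]) ** alpha
            choice = np.argmax(score, axis=1)
            # resulting offered load at each BS: sum of gamma_u / c_ub
            new_rho = np.zeros(n_bs)
            for u in range(n_users):
                new_rho[choice[u]] += gamma[u] / c[u, choice[u]]
            new_rho = np.clip(new_rho, 0.0, 0.99)
            rho = (1 - damping) * rho + damping * new_rho   # damped update
        return choice, rho

    rng = np.random.default_rng(0)
    c = rng.uniform(1e6, 1e7, size=(50, 4))      # 50 users, 4 base stations
    gamma = rng.uniform(1e4, 1e5, size=50)
    assoc, loads = alpha_optimal_association(c, gamma, alpha=2.0)
    print("BS loads:", loads.round(2))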
|
447 |
Adaptive Middleware for Self-Configurable Embedded Real-Time Systems: Experiences from the DySCAS Project and Remaining Challenges
Persson, Magnus January 2009 (has links)
Development of software for embedded real-time systems poses several challenges. Hard and soft constraints on timing, and usually considerable resource limitations, put important constraints on the development. The traditional way of coping with these issues is to produce a fully static design, i.e. one that is fully fixed already during design time.

Current trends in the area of embedded systems, including the emerging openness in these types of systems, are providing new challenges for their designers, e.g. integration of new software during runtime, software upgrade or run-time adaptation of application behavior to facilitate better performance combined with more efficient resource usage. One way to reach these goals is to build self-configurable systems, i.e. systems that can resolve such issues without human intervention. Such mechanisms may be used to promote increased system openness.

This thesis covers some of the challenges involved in that development. An overview of the current situation is given, with an extensive review of different concepts that are applicable to the problem, including adaptivity mechanisms (including QoS and load balancing), middleware and relevant design approaches (component-based, model-based and architectural design).

A middleware is a software layer that can be used in distributed systems, with the purpose of abstracting away distribution, and possibly other aspects, for the application developers. The DySCAS project had as a major goal the development of middleware for self-configurable systems in the automotive sector. Such development is complicated by the special requirements that apply to these platforms.

Work on the implementation of an adaptive middleware, DyLite, providing self-configurability to small-scale microcontrollers, is described and covered in detail. DyLite is a partial implementation of the concepts developed in DySCAS.

Another area given significant focus is formal modeling of QoS and resource management. Currently, applications in these types of systems are not given a fully formal definition, at least not one also covering real-time aspects. Using formal modeling would extend the possibilities for verification of not only system functionality, but also of resource usage, timing and other extra-functional requirements. This thesis includes a proposal of a formalism to be used for these purposes.

Several challenges in providing methodology and tools that are usable in a production development still remain. Several key issues in this area are described, e.g. version/configuration management, access control, and integration between different tools, together with proposals for future work in the other areas covered by the thesis. / DySCAS
|
448 |
下一代網路資訊服務與系統供應商之成功經營模式研究 [A study of successful business models for next-generation network information service and system providers]
陳建宏, Chen, Chien Hung Unknown Date (has links)
Next Generation Network (NGN) is a new network concept proposed in response to the growing bandwidth and network-capability demands of future triple-play information services combining voice, data and video. With the IP Multimedia Subsystem (IMS) at its core, NGN can also lower system providers' deployment and operating costs. NGN's emphasis on Fixed Mobile Convergence (FMC), an All-IP network and enhanced Quality of Service (QoS) obliges information service and system providers to devise a new business model that allows them to profit on NGN. The main purpose of this study is to start from the architecture and characteristics of NGN and analyse the trend towards digital convergence and the business models that can succeed on such networks.

The study adopts the case study method: secondary data collection and a review of the relevant literature are used to outline the shape of future NGN business models, and interviews with the businesses concerned are used to corroborate the soundness of the business model proposed in this thesis.

Because the information services industry is very broad, VoIP and IPTV were selected as the main subjects of study, in the hope of identifying a model that can be profitable on NGN. For enterprise users, NGN can also help firms increase their competitiveness, contributing to the acquisition of business intelligence, the reduction of communication costs, and information management systems.
|
449 |
Converged IP-over-standard Ethernet process control networks for hydrocarbon process automation applications controllers
Almadi, Soloman Moses January 2011 (has links)
The maturity level of Internet Protocol (IP) and the emergence of standard Ethernet interfaces for Hydrocarbon Process Automation Application (HPAA) systems present a real opportunity to combine independent industrial applications onto an integrated IP-based network platform. Quality of Service (QoS) for IP over Ethernet can regulate the traffic mix and support timely delivery. In combination, these technologies provide a platform to support HPAA applications across Local Area Network (LAN) and Wide Area Network (WAN) environments. HPAA systems are composed of sensors, actuators and logic solvers networked together to form independent control system network platforms. They support hydrocarbon plants operating under critical conditions that, if not controlled, could become dangerous to people, assets and the environment. This demands high speed networking, triggered by the need to capture data at a higher frequency and a finer granularity. Nevertheless, existing HPAA network infrastructure is based on unique autonomous systems, which has resulted in multiple, parallel and separate networks with limited interconnectivity supporting different functions. This has increased the complexity of integrating various applications and resulted in higher total cost of ownership over the technology life cycle. To date, the concept of consolidating HPAA into a converged IP network over standard Ethernet has not yet been explored. This research aims to explore and develop HPAA Process Control Systems (PCS) in a Converged Internet Protocol (CIP) network using experimental and simulated network case studies. Results from the experimental and simulation work showed encouraging outcomes and provided a good argument for supporting the co-existence of HPAA and non-HPAA applications, taking into consideration timeliness and reliability requirements. This was achieved by invoking priority-based scheduling, with the highest priority awarded to PCS among other supported services such as voice, multimedia streams and other applications. HPAA can benefit from utilising CIP over Ethernet by reducing the number of interdependent HPAA PCS networks to a single uniform and standard network. In addition, this integrated infrastructure offers a platform for additional support services such as multimedia streaming, voice and data. This network-based model lends itself to integration with remote control system platform capabilities at the end user's desktop, independent of space and time, resulting in the concept of plant virtualization.
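Priority-based scheduling of the kind invoked here is simple to illustrate. The sketch below is a minimal strict-priority queue in which a frame is dequeued only when no higher-priority frame is waiting, so process-control traffic always goes out ahead of voice, multimedia and best-effort data. The class names and numeric priority values are illustrative assumptions rather than the thesis's configuration.

    import heapq
    import itertools

    # Priority classes for the converged network, highest first. The ordering
    # mirrors the abstract (PCS above voice, multimedia and data); the numeric
    # values themselves are assumptions.
    PRIORITY = {"pcs": 0, "voice": 1, "multimedia": 2, "data": 3}

    class StrictPriorityScheduler:
        # A frame is only dequeued when no higher-priority frame is waiting.

        def __init__(self):
            self._heap = []
            self._seq = itertools.count()   # FIFO tie-break within a class

        def enqueue(self, traffic_class, frame):
            heapq.heappush(self._heap,
                           (PRIORITY[traffic_class], next(self._seq), frame))

        def dequeue(self):
            if not self._heap:
                return None
            _, _, frame = heapq.heappop(self._heap)
            return frame

    sched = StrictPriorityScheduler()
    sched.enqueue("data", "file-chunk-1")
    sched.enqueue("pcs", "sensor-update-7")
    print(sched.dequeue())   # -> sensor-update-7, PCS is served first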
|
450 |
Estimation of LRD present in H.264 video traces using wavelet analysis and proving the paramount of H.264 using OPF technique in wi-fi environment
Jayaseelan, John January 2012 (has links)
While there has always been a tremendous demand for streaming video over wireless networks, the nature of the application still presents some challenging issues. For applications that transmit coded video sequence data over best-effort networks like the Internet, the application must cope with changing network behaviour; in particular, the source encoder rate should be controlled based on feedback from a channel estimator that probes the network intermittently. The arrival of powerful video compression techniques such as H.264, together with advances in networking and telecommunications, has opened up a whole new frontier for multimedia communications. The aim of this research is to transmit H.264 coded video frames over a wireless network with maximum reliability and in a very efficient manner. When H.264 encoded video sequences are transmitted through a wireless network, they face major difficulties in reaching the destination. The characteristics of H.264 coded video sequences are studied in full, their suitability for transmission over wireless networks is examined, a new approach called Optimal Packet Fragmentation (OPF) is framed, and the H.264 coded sequences are tested in a simulated wireless environment. This research involves three major studies. The first part studies Long Range Dependence (LRD) and the ways in which self-similarity can be estimated. Several estimators are examined and the wavelet-based estimator is selected, because wavelets capture both time and frequency features of the data and typically provide a richer picture than classical Fourier analysis. The wavelet estimator quantifies self-similarity through the Hurst parameter, which indicates how the traffic will behave inside the network and must be calculated for more reliable transmission over the wireless network. The second part compares the MPEG-4 and H.264 encoders to establish which provides better Quality of Service (QoS) and reliability; with the help of the Hurst parameter it shows that H.264 is superior to MPEG-4. The third part, the core of this research, deals with segmenting H.264 coded video frames into an optimal packet size at the MAC layer for efficient and more reliable transfer over the wireless network. Finally, the H.264 encoded video frames, combined with Optimal Packet Fragmentation, are tested in an NS-2 simulated wireless network. The research demonstrates the superiority of the H.264 video encoder and the effectiveness of OPF.
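A wavelet Hurst estimator of the kind described can be sketched in a few lines: decompose the trace, regress the log2 energy of the detail coefficients against scale, and map the slope to H. The sketch below follows the standard Abry-Veitch recipe; the choice of the 'db3' wavelet, the scale cut-off and the use of the PyWavelets library are assumptions for illustration, not necessarily the thesis's exact setup.

    import numpy as np
    import pywt

    def hurst_wavelet(trace, wavelet="db3", max_level=None):
        # Wavelet-based (Abry-Veitch style) Hurst estimator for an LRD trace,
        # e.g. per-frame sizes of an H.264 video: regress log2 energy of the
        # detail coefficients against scale; for LRD, energy ~ 2^(j(2H-1)).
        x = np.asarray(trace, dtype=float)
        x = x - x.mean()
        coeffs = pywt.wavedec(x, wavelet, level=max_level)
        details = coeffs[1:][::-1]           # reorder: index 0 = finest scale j=1
        scales, log_energy = [], []
        for j, d in enumerate(details, start=1):
            if len(d) < 8:                   # too few coefficients to trust
                break
            scales.append(j)
            log_energy.append(np.log2(np.mean(d ** 2)))
        slope, _ = np.polyfit(scales, log_energy, 1)
        return (slope + 1.0) / 2.0           # slope = 2H - 1

    # sanity check: i.i.d. noise has no long memory, so H should be near 0.5
    print(hurst_wavelet(np.random.randn(4096)))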
|