1.
An Innovative Synchronization Technique for OpMiGua-based Mobile Backhauls: The IEEE 1588v2 HPTS Scheme. Puleio, Francesco, January 2010.
Legacy mobile backhauls are based on Time Division Multiplexing (TDM). Given the current evolution of mobile traffic, a major change to a packet-based network is seen as inevitable. This means that the TDM signal can no longer be used as a source of synchronization. The packet-layer approach of the IEEE 1588v2 protocol has seen successful diffusion, while the physical-layer solution of Synchronous Ethernet is still under development. A new packet-based synchronization technique is presented in this thesis. It is based on applying the OpMiGua HPTS scheme over a mobile backhaul structured into a cluster-type topology. The proposed technique has been named the IEEE 1588v2 HPTS scheme. It consists of exchanging timestamps according to the synchronization algorithm standardized in the IEEE 1588v2 protocol, while exploiting the switching capabilities of the OpMiGua HPTS node. Unlike the IEEE 1588v2 protocol, our scheme prepends the timestamps to the train of time-slots traveling through each ring (i.e. the fundamental element of the cluster-type network topology). The presence of time-slots is due to the adoption of the OpMiGua HPTS scheme. Thanks to the hybrid switching capabilities of the OpMiGua HPTS node, a fixed end-to-end delay is assured for the timestamps. The work done consists of proposing a structure for the OpMiGua HPTS node, which allows timestamps to be forwarded and simultaneously duplicated for processing, and the format of all the messages foreseen in the new scheme. A header format is presented, as well as the framing for the messages containing the timestamps. This is applied over the cluster-type topology, since the latter is identified as a more suitable physical-layer configuration than the tree structure of legacy backhauls. The proposed technique improves the achievable accuracy, since the timestamps are made independent of the traffic load in the nodes. A saving in bandwidth consumption is also provided.
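The IEEE 1588v2 algorithm referred to above derives the slave's clock offset from four timestamps exchanged between master and slave, assuming a symmetric path delay — the property the fixed end-to-end delay of the HPTS node is designed to provide. The following is a minimal sketch of that standard computation (not of the HPTS-specific framing); the timestamp values are hypothetical.

```java
/**
 * Minimal sketch of the IEEE 1588v2 (PTP) offset/delay computation that the
 * proposed HPTS scheme reuses. Timestamps are in nanoseconds; t1..t4 are the
 * four standard PTP timestamps. Assumes a symmetric path delay, which is what
 * the fixed end-to-end delay of the OpMiGua HPTS node is meant to ensure.
 */
public final class PtpOffsetCalculator {

    /** t1: Sync sent by master, t2: Sync received by slave,
     *  t3: Delay_Req sent by slave, t4: Delay_Req received by master. */
    public static long meanPathDelayNs(long t1, long t2, long t3, long t4) {
        return ((t2 - t1) + (t4 - t3)) / 2;
    }

    public static long offsetFromMasterNs(long t1, long t2, long t3, long t4) {
        // Offset = slave time minus master time, under the symmetry assumption.
        return ((t2 - t1) - (t4 - t3)) / 2;
    }

    public static void main(String[] args) {
        // Hypothetical timestamps: 500 ns one-way delay, slave 200 ns ahead.
        long t1 = 1_000_000, t2 = 1_000_700;  // delay 500 + offset 200
        long t3 = 2_000_000, t4 = 2_000_300;  // delay 500 - offset 200
        System.out.println("delay  = " + meanPathDelayNs(t1, t2, t3, t4) + " ns");
        System.out.println("offset = " + offsetFromMasterNs(t1, t2, t3, t4) + " ns");
    }
}
```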
2.
Economic Profitability in the Norwegian Fibre-to-the-Home Market. Ileby, Karin, January 2010.
In today's information society, the need for FTTH is emerging. Other technologies may still cope with most of the challenges, but only FTTH does so seamlessly. Also, for current bandwidth needs, FTTH has a lot of capacity to spare compared to alternative technologies. Thus, FTTH is predicted to be the next leading solution for access networks. However, FTTH deployment is very costly, and actors in the industry are struggling to produce profits. The goal of this thesis has been to shed light on the factors that influence profitability. The thesis presents an overview of available access network technologies, and of fibre in the access network in particular, and reviews the pros and cons of the different technologies. A small survey has been performed identifying the choices made by a few network operators. The drivers for FTTH are also discussed, followed by a review of two investment analyses. A generic macro-level model for the FTTH industry is proposed, showing how actors interact in the market. The proposed model is adapted to model the most common operation schemes seen in the Norwegian market: Franchise and Open Access. A business model ontology is presented and used to analyse a generic actor, the Network Operator, on a micro level in the Norwegian market. It is discussed how the findings throughout the thesis affect profitability. The discussion covers two main areas: the influence of the choice of technology and the influence of the business model. The future outlook in terms of technology and profitability in the Norwegian market is also discussed. This thesis concludes that both the business model and the choice of technology influence profitability in the Norwegian market. We identify a few factors that are likely to shift profitability in a more positive direction over the next few years.
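As an illustration of the arithmetic such investment analyses rest on, the sketch below computes the net present value of connecting a single home. All figures (capex per home, yearly net revenue, discount rate, horizon) are hypothetical and not taken from the thesis.

```java
/**
 * Generic net-present-value sketch of the kind underlying FTTH investment
 * analyses: a year-0 deployment cost followed by discounted yearly net
 * revenues. All numbers are hypothetical.
 */
public final class FtthNpv {

    static double npv(double capex, double yearlyNetRevenue,
                      double discountRate, int years) {
        double npv = -capex;  // year-0 deployment cost per home
        for (int t = 1; t <= years; t++) {
            npv += yearlyNetRevenue / Math.pow(1.0 + discountRate, t);
        }
        return npv;
    }

    public static void main(String[] args) {
        // Hypothetical: 20 000 NOK to connect a home, 3 000 NOK/year net
        // revenue, 8 % discount rate, 15-year horizon.
        System.out.printf("NPV per home: %.0f NOK%n",
                npv(20_000, 3_000, 0.08, 15));
    }
}
```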
3.
Virtual (floating) Context Sharing between Vehicles: Generating and Sharing Context Information within an Autonomous Network of Vehicles. Risan, Øyvind, January 2010.
This thesis investigates some of the potential within Intelligent Transportation Systems and Inter-Vehicle Communication. The aim is to find a method for spreading information throughout an ad hoc network of vehicles with as little impact on the available resources as possible. Two problems are identified and investigated.

The first problem is determining the border of a known set of vehicle positions. This is solved by producing a list of edges that encircles the vehicles containing the same information. These edges form a border that can be used to reduce the amount of data needed to represent the status in that particular area, which in turn reduces the load on the limited transmission capacity. Based on mathematical calculations relating to the relative positions of vehicles, an algorithm was devised to detect and order the vehicles that contribute to the border of an area. It is possible to alter the shape of the border by removing selected points on it, which makes it possible to make the border convex instead of concave. The behaviour of this border-finding algorithm is illustrated by a Java program, whose graphical representation also displays some of the mechanisms used to determine the border.

The second problem is how information can be distributed efficiently to all vehicles within a network. As most vehicle-to-vehicle networks have limited capacity, there is a need to reduce the overall load in such networks. The aim is to distribute the available information to every vehicle in the network as efficiently as possible, without sacrificing speed and flexibility. The choice of information exchange method has a serious effect on the transmission load and the amount of interference present in the network. Pure flooding is the most basic and elementary of the available methods; its two main strengths are its reactive behaviour and its ability to adapt to any network configuration. This thesis suggests an alternative method for broadcasting information throughout the network, called ICE (Information Combined and Exchanged). ICE collects the available data and aggregates it into one single new message. To do this, a delay is introduced between a vehicle receiving a message and retransmitting it; all messages received during this time are included in the following transmission. ICE does not limit what kind of information can be exchanged, but examples of useful information might be warnings, points of interest, infotainment and advertisements.

Based upon the behaviour of ICE, a simulator was made to test its performance. The simulator made it possible to compare the performance of ICE and pure flooding by measuring various variables; the main parameters were the size of the introduced delay and the number of vehicles in the simulation area. The combinations of parameters and broadcast methods led to 96 simulations from which a great amount of information could be extracted. The results showed that ICE outperformed pure flooding with regard to transmission load, interference and lost messages. At selected delays, the time each vehicle is involved in the communication is also superior to pure flooding. The most significant findings are:

- ICE generally performs at its best with a delay of about 50 ms.
- Any individual vehicle's involvement time might be reduced by as much as 74%.
- Depending on the number of unique messages, the number of sent messages can be reduced by 77% to 86%.
- The interference can on average be reduced by 66%.

Pure flooding is outperformed by ICE in almost all the situations tested in the simulator. The exception is a situation with very sparsely populated networks and long delays, where pure flooding is faster but might waste more resources. Implementation of dynamic delays would make ICE suitable for this scenario as well.
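The core of ICE is a hold-down timer: instead of relaying each message immediately, a vehicle collects everything that arrives during a short delay and rebroadcasts a single aggregate. The sketch below illustrates this logic under assumptions of our own (a Broadcaster callback and plain string payloads); the thesis does not prescribe a concrete message format.

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.Timer;
import java.util.TimerTask;

/**
 * Minimal sketch of the ICE idea: instead of rebroadcasting each received
 * message at once (pure flooding), hold everything that arrives during a
 * fixed delay and send one aggregated message.
 */
public final class IceAggregator {

    public interface Broadcaster { void broadcast(Set<String> messages); }

    private final long delayMs;                 // ~50 ms performed best in the thesis
    private final Broadcaster radio;
    private final Set<String> pending = new LinkedHashSet<>(); // dedups repeats
    private final Timer timer = new Timer(true);
    private boolean flushScheduled = false;

    public IceAggregator(long delayMs, Broadcaster radio) {
        this.delayMs = delayMs;
        this.radio = radio;
    }

    /** Called for every message received from a neighbouring vehicle. */
    public synchronized void onReceive(String message) {
        pending.add(message);
        if (!flushScheduled) {       // first message starts the hold-down timer
            flushScheduled = true;
            timer.schedule(new TimerTask() {
                @Override public void run() { flush(); }
            }, delayMs);
        }
    }

    /** One retransmission carries everything collected during the delay. */
    private synchronized void flush() {
        radio.broadcast(new LinkedHashSet<>(pending));
        pending.clear();
        flushScheduled = false;
    }

    public static void main(String[] args) throws InterruptedException {
        IceAggregator ice = new IceAggregator(50,
                msgs -> System.out.println("broadcast " + msgs));
        ice.onReceive("icy-road@E6/km14");      // hypothetical warning payloads
        ice.onReceive("accident@E6/km15");
        ice.onReceive("icy-road@E6/km14");      // duplicate, sent only once
        Thread.sleep(100);                      // let the 50 ms timer fire
    }
}
```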
4.
Evaluating QoS and QoE Dimensions in Adaptive Video Streaming. Stensen, Julianne M. G., January 2012.
The focus of this thesis has been on the Quality of Service (QoS) and Quality of Experience (QoE) dimensions of adaptive video streaming. Through a literature study reviewing the state of the art on QoS and QoE, we have proposed several quality metrics applicable to adaptive video streaming, amongst them: initial buffering time, mean duration of a rebuffering event, rebuffering frequency, quality transitions and bitrate. Perhaps counterintuitively, other research has found that a higher bitrate does not always lead to a higher degree of QoE. Looking at bitrate in relation to quality transitions, it has been found that users may prefer a stable video stream, with fewer quality transitions, at the cost of an overall higher bitrate. We have conducted two case studies to see if this is considered by today's adaptive video streaming technologies. The case studies have been performed by means of measurements on the players of TV2 Sumo and Comoyo. We have exposed the players to packet loss and observed their behavior using tools such as Wireshark. Our results indicate that neither player takes the cost of quality transitions into account in its rate adaptation logic; the players rather strive for a higher quality level. In both cases we have observed a relatively large number of quality transitions throughout the various sessions. If we were to give any recommendations to the Over-the-Top (OTT) service providers, we would advise them to investigate the effects of quality transitions and consider including a solution for handling potentially negative effects in the rate adaptation logic of the player.
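The proposed metrics can be derived from a player's event log. The sketch below shows one way to compute three of them; the event model (stall intervals and a sequence of selected bitrates, timestamps in milliseconds) is our own assumption for illustration.

```java
import java.util.List;

/**
 * Sketch of deriving adaptive-streaming quality metrics from a playback
 * event log: mean rebuffering duration, rebuffering frequency, and the
 * number of quality transitions.
 */
public final class StreamingQoeMetrics {

    public record Stall(long startMs, long endMs) {}

    /** Mean duration of a rebuffering event, in milliseconds. */
    static double meanRebufferMs(List<Stall> stalls) {
        return stalls.stream().mapToLong(s -> s.endMs() - s.startMs())
                     .average().orElse(0);
    }

    /** Rebuffering events per minute of session time. */
    static double rebufferFrequencyPerMin(List<Stall> stalls, long sessionMs) {
        return stalls.size() / (sessionMs / 60_000.0);
    }

    /** Number of quality transitions in the sequence of selected bitrates. */
    static int qualityTransitions(int[] bitratesKbps) {
        int transitions = 0;
        for (int i = 1; i < bitratesKbps.length; i++) {
            if (bitratesKbps[i] != bitratesKbps[i - 1]) transitions++;
        }
        return transitions;
    }

    public static void main(String[] args) {
        // Hypothetical 10-minute session: initial buffering plus one stall.
        List<Stall> stalls = List.of(new Stall(0, 2_000),
                                     new Stall(95_000, 98_500));
        int[] bitrates = {1_500, 2_500, 1_500, 1_500, 2_500};
        System.out.println("mean rebuffer : " + meanRebufferMs(stalls) + " ms");
        System.out.println("rebuffers/min : " + rebufferFrequencyPerMin(stalls, 600_000));
        System.out.println("transitions   : " + qualityTransitions(bitrates));
    }
}
```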
5.
Bruk av autostereoskopisk 3D i videosamtaler / Use of autostereoscopic 3D in Video Conversations. Grønningen, Sindre Ruud; Smeplass, Håkon, January 2010.
This thesis assesses whether autostereoscopic 3D is suitable for use in video conversations. The use of 3D techniques to improve the perceived quality of video is more relevant today than ever before, and 3D is constantly finding new areas of application. In telepresence systems, the goal is to make the illusion that the people you are talking to are sitting in the same room as realistic as possible. At the same time, the importance of eye contact makes the use of 3D glasses rather impractical. It has therefore been natural to examine whether autostereoscopic 3D is suitable for improving the perceived quality and realism of a video conversation. To assess this, we have established mathematical relationships and performed practical experiments to determine which factors are important for producing autostereoscopic 3D of good quality. Through our experience of producing 3D, we have identified the possibilities and limitations of this 3D technique, and we have assessed to what degree autostereoscopic 3D is suitable for different scenarios. We have also made a direct comparison of the experience of 2D video and 3D video using qualitative methods. Based on the experience we have gained and the results of our experiments, we have concluded that autostereoscopic 3D can increase the perceived quality of video conversations. The depth effect makes facial expressions and body language clearer, while the user experiences greater immersion; the illusion that the person on the screen is actually sitting there in reality is strengthened. At the same time, autostereoscopic 3D has some clear limitations and bears the marks of an immature technology. A limited viewing angle, poor transitions between viewing windows and a long optimal viewing distance greatly restrict the viewer's freedom. On the camera side, it is challenging to find camera setups suitable at both short and long distances, and it is difficult to synchronize the cameras. We nevertheless believe that as the technology develops, challenges are solved and the limitations of autostereoscopic 3D diminish, it will become a highly relevant technology for telepresence systems.
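As a taste of the kind of mathematical relationships referred to above, the sketch below applies the standard stereoscopic viewing geometry that maps on-screen parallax to perceived depth. The formula is textbook stereo geometry, not necessarily the exact relations derived in the thesis, and the numbers are hypothetical.

```java
/**
 * Standard stereoscopic viewing geometry: an on-screen parallax p, viewed
 * from distance D with interocular separation e, is perceived at distance
 * Z = e*D / (e - p). p = 0 places the object in the screen plane; p -> e
 * pushes it to infinity; negative p brings it in front of the screen.
 */
public final class StereoDepth {

    /**
     * @param eyeSepMm   interocular distance e (typically ~65 mm)
     * @param viewDistMm viewing distance D to the screen
     * @param parallaxMm on-screen parallax p (positive = behind the screen)
     * @return perceived distance from the viewer, in mm
     */
    static double perceivedDistanceMm(double eyeSepMm, double viewDistMm,
                                      double parallaxMm) {
        if (parallaxMm >= eyeSepMm) {
            throw new IllegalArgumentException(
                "parallax >= eye separation: depth diverges and cannot be fused");
        }
        return eyeSepMm * viewDistMm / (eyeSepMm - parallaxMm);
    }

    public static void main(String[] args) {
        double e = 65, D = 2_400;  // 65 mm eyes, 2.4 m viewing distance
        System.out.println("p =   0 mm -> " + perceivedDistanceMm(e, D, 0)   + " mm (screen plane)");
        System.out.println("p = +20 mm -> " + perceivedDistanceMm(e, D, 20)  + " mm (behind)");
        System.out.println("p = -20 mm -> " + perceivedDistanceMm(e, D, -20) + " mm (in front)");
    }
}
```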
6.
Network based QoE Optimization for "Over The Top" Services. Haugene, Kristian; Jacobsen, Alexander, January 2011.
This report focuses on the quality aspects of media delivery over the Internet. We investigate the constructs of Knowledge Plane, Monitor Plane and Action Plane as controlling functions for the Internet. Our goal is to implement functionality for monitoring services in a home network, allowing the router to reason and take actions to obtain an optimal traffic situation based on user preferences. The actions taken to alter ongoing traffic are implemented in a modular router framework called Click. We use this router to steer the TCP connections of the media streams into behaving in accordance with the network's optimal state. New features are implemented to complement the functionality found in Click, giving us the tools needed to obtain the wanted results. Our focus is on adaptive video streaming in general and Silverlight Smooth Streaming in particular. Using custom Silverlight client code, we implemented a solution which allows the applications to report usage statistics to the home gateway. This information is used by the home gateway to obtain an overview of traffic in the network. Presenting this information to the user, we retrieve the user's preferences for the given video streams. The router then dynamically reconfigures itself and starts altering TCP packets to obtain an optimal flow of traffic in the home network. Our system has been implemented on a Linux PC, where it runs in its current form. All the different areas of the solution, ranging from the clients, router and Knowledge Plane to the traffic manipulation elements, are put together. They form a working system for QoE/QoS optimization which we have tested and demonstrated. In addition to testing the concept on our own streaming services, the reporting feature for Silverlight clients has also been implemented in a private build of TV2 Sumo, the Internet service of the largest commercial television station in Norway. Further testing with the TV2 Sumo client has given promising results. The system works as it is, although we would like to see more complex action reasoning to improve the convergence time for achieving the correct bit rate.
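The abstract does not detail how the Click elements alter the TCP packets; one plausible mechanism for slowing a flow from a gateway is clamping the receiver's advertised window in ACKs heading back to the server. The sketch below illustrates that generic technique and should not be read as the authors' implementation. It assumes a raw TCP segment (no IP header) and ignores window scaling.

```java
/**
 * Illustrative sketch of throttling a TCP flow by rewriting the 16-bit
 * advertised window (bytes 14-15 of the TCP header). Because the checksum
 * (bytes 16-17) covers the window field, it is updated incrementally per
 * RFC 1624: HC' = ~(~HC + ~m + m').
 */
public final class TcpWindowClamp {

    static void clampWindow(byte[] tcpSegment, int maxWindow) {
        int window = ((tcpSegment[14] & 0xFF) << 8) | (tcpSegment[15] & 0xFF);
        if (window <= maxWindow) return;

        tcpSegment[14] = (byte) (maxWindow >>> 8);
        tcpSegment[15] = (byte) (maxWindow & 0xFF);

        // Incremental checksum update over the changed 16-bit word.
        int oldCheck = ((tcpSegment[16] & 0xFF) << 8) | (tcpSegment[17] & 0xFF);
        int sum = (~oldCheck & 0xFFFF) + (~window & 0xFFFF) + maxWindow;
        sum = (sum & 0xFFFF) + (sum >>> 16);      // fold carries
        sum = (sum & 0xFFFF) + (sum >>> 16);
        int newCheck = ~sum & 0xFFFF;
        tcpSegment[16] = (byte) (newCheck >>> 8);
        tcpSegment[17] = (byte) (newCheck & 0xFF);
    }

    public static void main(String[] args) {
        byte[] seg = new byte[20];                     // minimal TCP header, zeroed
        seg[14] = (byte) 0xFF; seg[15] = (byte) 0xFF;  // window = 65535
        clampWindow(seg, 8_192);                       // throttle to 8 KiB
        System.out.println("window now "
                + (((seg[14] & 0xFF) << 8) | (seg[15] & 0xFF)));
    }
}
```

Capping the window bounds the sender's in-flight data, and hence the per-flow throughput, without dropping packets — which is why it is a natural fit for in-path elements like Click.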
7.
Dependability Differentiation in Cloud Services. Chilwan, Ameen, January 2011.
As cloud computing becomes more mature and pervasive, almost all types of services are being deployed in clouds. This has also widened the spectrum of cloud users, which ranges from domestic users to large companies. One of the main concerns of large companies outsourcing their IT functions to clouds is the availability of those functions, whereas the availability requirements of domestic users are not very strict. This requires cloud service providers to guarantee different dependability levels for different users and services. This thesis is based upon this requirement for dependability differentiation of cloud services, depending upon the nature of the services and the target users. In this thesis, different types of services are identified and grouped together according to both their deployment nature and their target users. A range of techniques for guaranteeing dependability in the cloud environment is also identified and classified. In order to quantify the dependability provided by the different techniques, a cloud system is modeled. Two levels of dependability differentiation are considered: differentiation depending upon the state of the standby replica, and differentiation depending upon the spatial separation of the active and standby replicas. These two levels are modeled using Markov state diagrams and reliability block diagrams, respectively. Due to the limitations imposed by Markov models, the former differentiation level is also studied using simulation. Finally, numerical analysis is conducted and the different techniques are compared. The best technique for each user and service class is identified based on the results obtained, and the most crucial components for guaranteeing dependability in the cloud environment are identified. This directs future prospects for study and also gives cloud service providers an idea of which cloud components are worth investing in to enhance service availability.
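To illustrate the first differentiation level, the sketch below solves a small birth-death Markov model for one active and one standby replica: a hot standby can fail while idle, a cold one cannot. This is a textbook model, not necessarily the exact one in the thesis; it deliberately ignores the activation time that is the cold standby's drawback in practice, and the failure and repair rates are hypothetical.

```java
/**
 * Steady-state availability for an active/standby pair. States count failed
 * replicas (0, 1, 2); the service is down in state 2. Each running replica
 * fails with rate lambda; one repair facility works with rate mu. A hot
 * standby also runs (and fails) while waiting; a cold one does not.
 */
public final class StandbyAvailability {

    /** Hot standby: transitions 0->1 at 2*lambda, 1->2 at lambda. */
    static double hotStandby(double lambda, double mu) {
        double r = lambda / mu;
        double p0 = 1.0 / (1 + 2 * r + 2 * r * r);  // pi1 = 2r*p0, pi2 = 2r^2*p0
        return 1 - 2 * r * r * p0;
    }

    /** Cold standby: transitions 0->1 at lambda, 1->2 at lambda. */
    static double coldStandby(double lambda, double mu) {
        double r = lambda / mu;
        double p0 = 1.0 / (1 + r + r * r);          // pi1 = r*p0, pi2 = r^2*p0
        return 1 - r * r * p0;
    }

    public static void main(String[] args) {
        double lambda = 1.0 / 1_000, mu = 1.0 / 8;  // MTTF 1000 h, MTTR 8 h
        System.out.printf("hot  standby availability: %.6f%n", hotStandby(lambda, mu));
        System.out.printf("cold standby availability: %.6f%n", coldStandby(lambda, mu));
    }
}
```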
8.
OTN switching. Knudsen-Baas, Per Harald, January 2011.
Increasing traffic volumes on the Internet put strict requirements on the architecture of optical core networks. The exploding number of Internet users and the massive increase in Internet content consumption force carriers to constantly upgrade and transform their core networks in order to cope with the traffic growth. The choice of both physical components and transport protocols in the core network is crucial in order to provide satisfactory performance. Data traffic in the core network consists of a wide variety of protocols. OTN is a digital wrapper technology, responsible for encapsulating existing frames of data, regardless of native protocol, and adding additional overhead for addressing, OAM and error control. The wrapped signal is then transported directly over wavelengths in the optical transport network. The common OTN wrapper overhead makes it possible to monitor and control the signals, regardless of the protocol type being transported. OTN is standardized by the ITU through a series of recommendations, the two most important being ITU-T G.709 "Interfaces for the Optical Transport Network" and ITU-T G.872 "Architecture of the Optical Transport Network". OTN uses a flexible TDM hierarchy in order to provide high wavelength utilization. The TDM hierarchy makes it possible to perform switching at various sub-wavelength bit rates in network nodes. An introduction to OTN and an overview of recent progress in OTN standardization are given in the thesis. An OTN switch which utilizes the flexible multiplexing hierarchy of OTN is proposed, and its characteristics are tested in a network scenario, comparing it to the packet-switched alternative. Simulation results reveal that OTN switching doesn't provide any performance benefits compared to packet switching in the core network. OTN switches do, however, provide bypass of intermediate IP routers, reducing the requirements for router processing power in each network node. This reduces overall cost and improves network scalability. An automatically reconfigurable OTN switch which rearranges link sub-capacities based on differences in output buffer queue lengths is also proposed and simulated in the thesis. Simulation results show that the reconfigurable OTN switch performs better than both pure packet switching and regular OTN switching in the network scenario.
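The sub-wavelength switching enabled by the TDM hierarchy amounts to grooming client signals into ODUk containers and tributary slots. The sketch below illustrates the idea with the approximate G.709 rates; it simplifies by comparing against the ODUk rate rather than the exact OPUk payload rate, and it is not a description of the proposed switch.

```java
/**
 * Sketch of OTN sub-wavelength grooming: pick the smallest ODUk container
 * for a client signal and report how many 1.25 Gbit/s tributary slots it
 * occupies in a higher-order ODU. Rates are approximate G.709 values.
 */
public final class OduMapper {

    private static final String[] NAME  = {"ODU0", "ODU1", "ODU2", "ODU3", "ODU4"};
    private static final double[] RATE  = {1.244, 2.499, 10.037, 40.319, 104.794};
    private static final int[]    SLOTS = {1, 2, 8, 32, 80};   // 1.25G tributary slots

    /** Smallest ODUk whose rate accommodates the client signal. */
    static int smallestOdu(double clientGbps) {
        for (int k = 0; k < RATE.length; k++) {
            if (RATE[k] >= clientGbps) return k;
        }
        throw new IllegalArgumentException("client exceeds ODU4 capacity");
    }

    public static void main(String[] args) {
        // Hypothetical client mix: GbE, STM-16, 10GbE WAN-PHY, STM-256.
        double[] clients = {1.0, 2.488, 9.95, 39.81};
        for (double c : clients) {
            int k = smallestOdu(c);
            System.out.printf("%6.3f Gbit/s -> %s (%d slot(s) of 1.25G)%n",
                    c, NAME[k], SLOTS[k]);
        }
    }
}
```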
9.
Performance study of the 3LIHON output scheduling part. Leli, Gaia, January 2012.
In recent years, hybrid optical networking has become a topic of increasing interest for a graceful migration to future high-capacity integrated service networks. A new hybrid network architecture, the 3-Level Integrated Hybrid Optical Network (3LIHON), has been proposed to harmonize different transport technologies and to support a suitable set of services. The aim of this thesis is to study the performance of 3LIHON, focusing on the Quality of Service (QoS) in the output part of the node, and in particular on the performance of Statistically Multiplexed (SM) traffic. Chapter 1 presents the motivation for our study and the current work, gives the problem definition and defines the goal of the thesis. Chapter 2 presents the concepts and architecture of 3LIHON. We first introduce the reference classes used and the Quality of Service (QoS) requirements, then give a complete description of the 3LIHON architecture, covering its transport services, the architecture in detail, and the input and output parts of the node, and finally describe the advantages of a 3LIHON network. To simulate the 3LIHON architecture we use the Simula programming language and DEMOS, a context class for discrete event simulation. Chapter 3 describes the simulation model implemented, along with a description of the code. We characterize the sources and the packets for all types of traffic that 3LIHON is able to handle: Guaranteed Service Transport (GST) traffic, Statistically Multiplexed (SM) Real Time (RT) traffic and Statistically Multiplexed (SM) Best Effort (BE) traffic. The code used in this work is available in Appendix C. Chapter 4 presents the simulation scenario and the obtained results. To evaluate the level of accuracy of our results we use a 95% confidence interval; more theoretical details are given in Appendix A. We consider three study cases, and for each of them we analyze in detail the Packet Loss Probability (PLP) of SM/RT packets, the PLP of SM/BE packets and the delay of SM/BE packets in the Best Effort queue. Some additional results used to obtain the study case called Series Two are shown in Appendix B. Chapter 5 presents some conclusions of this work, and Chapter 6 gives some hints that may be the spark for further work.
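The 95% confidence intervals mentioned above can be computed from independent simulation replications as mean ± z·s/√n. The sketch below uses the normal approximation with z = 1.96; the sample packet loss probabilities are made up for illustration.

```java
import java.util.Arrays;

/**
 * 95% confidence interval for a simulation output (e.g. a packet loss
 * probability) estimated from n independent replications, using the normal
 * approximation: mean +/- 1.96 * s / sqrt(n).
 */
public final class ConfidenceInterval {

    static double[] ci95(double[] samples) {
        int n = samples.length;
        double mean = Arrays.stream(samples).average().orElseThrow();
        double var = Arrays.stream(samples)
                           .map(x -> (x - mean) * (x - mean))
                           .sum() / (n - 1);              // unbiased sample variance
        double half = 1.96 * Math.sqrt(var / n);          // z * s / sqrt(n)
        return new double[] {mean - half, mean + half};
    }

    public static void main(String[] args) {
        // Hypothetical PLP estimates from 10 independent replications.
        double[] plp = {0.012, 0.014, 0.011, 0.013, 0.012,
                        0.015, 0.013, 0.012, 0.014, 0.011};
        double[] ci = ci95(plp);
        System.out.printf("PLP in [%.5f, %.5f] with 95%% confidence%n", ci[0], ci[1]);
    }
}
```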