As traffic volume on the Internet grows exponentially, so does the demand for fast packet switching between asynchronous high-speed routers. Although optical fiber can provide extremely high capacity, Internet switches remain the main traffic bottleneck. In routers with more than a thousand ports, each operating at 10 Gb/s, the available packet switching time shrinks to nanoseconds, and even modern, extremely fast processing units cannot keep up. It is well known that switching such a high volume of traffic from inputs to outputs requires large buffers and fast processors to perform header processing, complex scheduling and forwarding functions. Although a large number of switching architectures is available on the market, a considerable part of them either does not scale or reaches its limits in power consumption and complexity.
Therefore, it is essential to investigate novel, highly scalable switching systems.
The load-balancing switching approach is simple and may therefore be capable of performing switching and forwarding from all inputs to all outputs simultaneously with low complexity and high scalability. Because this approach has a distributed topology (each component of the switch is controlled by an individual chip) and does not require a fast switch control unit, since each stage is independent and performs its own calculations, it is a strong candidate for future practical deployment.
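To make the two-stage operation concrete, the following minimal Python sketch spreads packets from the inputs over the central stage buffers and serves them toward the outputs. The class name `LoadBalancedSwitch` and the per-packet round-robin spreading and serving are illustrative simplifications, not the exact scheme analyzed in the thesis:

```python
from collections import deque

class LoadBalancedSwitch:
    """Illustrative N x N load-balanced switch skeleton.

    Stage 1 spreads arriving packets round-robin over the central-stage
    buffers (one FIFO per (center, output) pair); stage 2 serves those
    buffers toward the outputs, also round-robin.  Each stage uses only
    local state, mirroring the distributed control described above.
    """
    def __init__(self, n):
        self.n = n
        # central-stage buffers: voq[c][o] holds cells stored at center c
        # and destined to output o
        self.voq = [[deque() for _ in range(n)] for _ in range(n)]
        self.spread = [0] * n   # next center for each input
        self.serve = [0] * n    # next center polled by each output

    def first_stage(self, inp, packet, dest):
        """Input `inp` forwards a packet destined to `dest` to the next
        central-stage buffer in its round-robin sequence."""
        c = self.spread[inp]
        self.voq[c][dest].append(packet)
        self.spread[inp] = (c + 1) % self.n

    def second_stage(self, out):
        """Output `out` fetches one packet from the next center in its
        round-robin sequence, if any is waiting."""
        c = self.serve[out]
        self.serve[out] = (c + 1) % self.n
        return self.voq[c][out].popleft() if self.voq[c][out] else None
```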
The load-balancing switching architecture considered in this thesis has been shown to have high potential to scale up while maintaining good throughput and other performance characteristics. Additionally, it can effectively resolve the important problem of packet mis-ordering that can arise from the distributed structure of the system. Unfortunately, in previous research some of these characteristics were obtained under a set of strong assumptions; in particular, it was assumed that all packets transmitted through the system have equal length, that traffic is admissible, and that the central stage buffers are infinite. Moreover, because of the distributed control, the switch cannot regulate the amount of traffic transmitted from stage to stage inside the switch.
This Ph.D. thesis analyzes the behavior of a load-balancing switch equipped with finite central stage buffers; as a consequence, the LB switch can always drop a packet due to buffer overflow. We first analyze the packet loss probability in the central stage buffers for packets of equal length (data cells). The analysis is performed for both admissible and inadmissible traffic matrices. The results show that packet loss can have a significant influence on overall LB switch performance when the inputs of the switch are overloaded.
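As a rough, simulation-based stand-in for this analysis (not the analytical model developed in the thesis), the sketch below estimates the cell-loss probability of a single finite central-stage buffer fed by independent Bernoulli sources, once with an admissible load and once with overloaded inputs. The function name and all parameter values are illustrative assumptions:

```python
import random

def cell_loss_probability(n_inputs=8, p=0.10, buffer_size=16,
                          slots=1_000_000, seed=0):
    """Estimate the cell-loss ratio of one finite central-stage buffer fed
    by `n_inputs` Bernoulli sources (each sends a cell with probability `p`
    per slot) and serving one cell per slot.  Excess arrivals are dropped."""
    rng = random.Random(seed)
    occupancy, arrived, lost = 0, 0, 0
    for _ in range(slots):
        arrivals = sum(rng.random() < p for _ in range(n_inputs))
        arrived += arrivals
        accepted = min(arrivals, buffer_size - occupancy)
        lost += arrivals - accepted          # overflow -> cells dropped
        occupancy += accepted
        if occupancy > 0:                    # one departure per slot
            occupancy -= 1
    return lost / arrived if arrived else 0.0

# admissible load (8 * 0.10 = 0.8) vs. overloaded inputs (8 * 0.20 = 1.6)
print(cell_loss_probability(p=0.10), cell_loss_probability(p=0.20))
```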
In order to present a more realistic scenario, the packet loss analysis was then performed for a switch with variable-size packets. Most Internet switches operate internally on fixed-size cells (to increase buffer utilization), which means that arriving variable-size packets are segmented at the inputs and reassembled at the outputs. Cell loss, and consequently packet loss, inside the switch can therefore introduce significant problems for the load-balancing switch reassembly unit. To evaluate packet loss, we assumed Markovian behavior so that numerically efficient algorithms could be used to solve the model. The mathematical model of inhomogeneous input traffic presented in the thesis gives the most precise evaluation of the packet loss probability. Unfortunately, the high complexity of this model results in intractable Markov chains even for very small switches. Consequently, as a next step, we performed the analysis with fast solution procedures under the restrictive assumption of identical stochastic processes at all inputs. The final results show that a single cell drop at the central stage buffers causes the removal of the whole packet, so the packet loss probability inside the system can be extremely high compared with the corresponding cell loss. Another important observation from the analysis is that the packet loss probability depends on the path traversed by the traffic, i.e. on the particular input, central stage buffer and output of the switch; this property complicates the evaluation of loss probabilities for large switch sizes. Last but not least, our analysis shows that central stage packet loss leads to instability, congestion and large delays at the output re-sequencing and reassembly unit.
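The amplification of cell loss into packet loss can be illustrated with a back-of-the-envelope calculation. Assuming, purely for illustration, that cell drops are independent (an assumption the thesis's Markov model does not make), a packet survives only if every one of its cells survives:

```python
def packet_loss(cell_loss, cells_per_packet):
    """Under an independence assumption (illustrative only):
    P_packet_loss = 1 - (1 - q)^k for a k-cell packet."""
    return 1.0 - (1.0 - cell_loss) ** cells_per_packet

# even a modest cell-loss probability inflates quickly with packet length
for k in (1, 8, 64):
    print(f"{k:3d}-cell packet: loss = {packet_loss(1e-3, k):.4f}")
```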
To cope with this behavior, we propose novel algorithms that efficiently minimize or avoid packet loss at the central stage buffers of the switch. The loss-minimization protocol introduces an artificial buffering threshold at the central stage buffers, such that packets are dropped at the input stage whenever the actual central stage buffer occupancy is above the threshold. The results show that, thanks to this possible packet removal at the input stage of the switch, the overall packet loss probability is significantly reduced. Similarly to the loss-minimization protocol, the novel NoLoss load-balancing switch uses information from both the inputs and the central stage buffers, and allows a packet to be transmitted through the switch only if the central stage buffers have enough space to accept it during the current and the following time slots. To minimize communication overhead, the algorithm is implemented by means of a centralized controller. This kind of management allowed us to reach the lower bound on the overall packet loss probability and to resolve other important issues of the switch, such as the congestion of the output reassembly unit.
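The two admission rules can be sketched as simple occupancy tests. The function names, parameters and single-buffer view below are illustrative simplifications of the protocols described in the thesis, not their actual implementation:

```python
def admit_with_threshold(occupancy, threshold):
    """Loss-minimization idea: if the targeted central-stage buffer is
    already filled beyond the artificial threshold, the whole packet is
    dropped at the input stage rather than partially lost inside the switch."""
    return occupancy <= threshold

def admit_no_loss(occupancy, packet_cells, buffer_size):
    """NoLoss idea: forward the packet only if the central-stage buffer can
    accommodate all of its cells, so no cell is ever dropped internally."""
    return occupancy + packet_cells <= buffer_size

# Example: 16-cell buffer, threshold at 12 cells, 5-cell packet
print(admit_with_threshold(occupancy=13, threshold=12))             # False
print(admit_no_loss(occupancy=10, packet_cells=5, buffer_size=16))  # True
```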
Identifier | oai:union.ndltd.org:unitn.it/oai:iris.unitn.it:11572/368663
Date | January 2009 |
Creators | Audzevich, Yury |
Contributors | Audzevich, Yury, Ofek, Yoram |
Publisher | Università degli studi di Trento, place:TRENTO |
Source Sets | Università di Trento |
Language | English |
Detected Language | English |
Type | info:eu-repo/semantics/doctoralThesis |
Rights | info:eu-repo/semantics/openAccess |
Relation | firstpage:1, lastpage:166, numberofpages:166 |