11

Suspended gate silicon nanodot memory

Garcia Ramirez, Mario Alberto January 2011 (has links)
The non-volatile memory market has been driven by Flash memory since its invention more than three decades ago. Today, this non-volatile memory is used in a wide variety of devices and systems, from pen drives and mp3 players to cars, planes and satellites. However, the conventional floating gate memory technology used in Flash memory is facing a serious scalability issue: the tunnel oxide thickness cannot be reduced to less than 7 nm, as pointed out in the latest International Technology Roadmap for Semiconductors (ITRS 2010) [1]. The limit imposed on the tunnel oxide layer constrains the programming and erasing times, the scalability and the endurance, among other parameters. To overcome those inherent issues, this research focuses on the co-integration of nano-electromechanical systems (NEMS) with metal-oxide-semiconductor (MOS) technology in order to create a new non-volatile, high-speed memory. The memory device we propose is a high-speed non-volatile memory structure called the Suspended Gate Silicon Nanodot Memory (SGSNM) cell. This non-volatile memory device features a MOSFET as a readout element, a silicon nanodot (SiND) monolayer as the floating gate, and a movable suspended control gate isolated from the floating gate by an oxide layer and an air gap. The fundamental novelty of this device is the introduction of a doubly-clamped beam as a movable control gate, through which the programming and erasing operations take place. To understand the behaviour of the doubly-clamped beam structure, it is analysed using analytical models such as the parallel-plate capacitor model, as well as two- and three-dimensional (2D and 3D) finite element method (FEM) analysis. The programming and erasing operations within the SGSNM occur when the suspended control gate is in contact with the tunnel oxide layer. This is the point at which the quantum-mechanical tunnelling mechanism (Fowler-Nordheim) takes place. Through this mechanism, electrons are allowed to tunnel from the suspended control gate into the memory node and vice versa, as a function of the applied voltage (bias). The tunnelling process is numerically analysed by implementing the Tsu-Esaki equation and the transfer matrix method within a homemade program which calculates the current density as a function of the tunnel oxide material and thickness. Both the suspended control gate and the tunnelling process are implemented as analog behavioural models within the SGSNM cell, which is simulated using a commercial circuit simulator. From a transient analysis, it was found that the suspended control gate takes 0.8 ns to pull in onto the tunnel oxide layer for a 1 μm-long doubly-clamped structure, while the time the memory node takes to charge and discharge is 1.7 ns. Hence, the programming and erasing times are a combination of the mechanical pull-in time and the charging time, giving 2.5 ns, since both operations are symmetrical. Moreover, the suspended control gate was successfully fabricated and suspended. This process was performed by depositing a thin layer of aluminium (500 nm) over the sacrificial layer (poly-Si) using an e-beam evaporator, and patterning it with doubly-clamped beam features through a photolithographic process. By using a combination of wet and dry etching processes, the aluminium and the sacrificial layer were successfully removed without affecting the substrate (Si-based) or the suspended control gate beam.
In addition, capacitance-voltage (C-V) measurements were performed on a set of doubly-clamped beams, from which the pull-in effect was successfully observed. Finally, the footprints for the memory device fabrication process were developed and sketched within the document, together with the design of three photomasks.
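The mechanical side of the programming operation can be illustrated with the textbook parallel-plate pull-in model referred to above. The sketch below is a rough illustration only: the stiffness, gap and electrode area are invented values rather than parameters from the thesis, and the 0.8 ns and 1.7 ns figures are simply combined as the abstract describes.

```python
import math

EPS0 = 8.854e-12        # vacuum permittivity (F/m)

def pull_in_voltage(k, gap, area):
    # Classic parallel-plate pull-in voltage:
    # V_pi = sqrt(8 * k * g0**3 / (27 * eps0 * A))
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

k = 50.0                # effective beam stiffness (N/m) -- assumed value
gap = 50e-9             # initial air gap (m) -- assumed value
area = 1e-6 * 0.2e-6    # electrode area for a 1 um x 0.2 um beam -- assumed

print(f"pull-in voltage ~ {pull_in_voltage(k, gap, area):.1f} V")

# Program/erase time as reported in the abstract: mechanical pull-in time
# plus memory-node charging time, assumed additive and symmetric.
t_pull_in = 0.8e-9      # s
t_charge = 1.7e-9       # s
print(f"program/erase time ~ {(t_pull_in + t_charge) * 1e9:.1f} ns")
```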
12

Energy efficient cache architectures for single, multi and many core processors

Thucanakkenpalayam Sundararajan, Karthik January 2013 (has links)
With each technology generation we get more transistors per chip. Whilst processor frequencies have increased over the past few decades, memory speeds have not kept pace. Therefore, more and more transistors are devoted to on-chip caches in order to reduce latency to data and help achieve high performance. On-chip caches consume a significant fraction of the processor's energy budget, yet need to deliver high performance; cache resources should therefore be optimized to meet the requirements of the running applications. Fixed-configuration caches are designed to deliver low average memory access times across a wide range of potential applications. However, this can lead to excessive energy consumption for applications that do not require the full capacity or associativity of the cache at all times. Furthermore, in systems where the clock period is constrained by the access times of level-1 caches, the clock frequency for all applications is effectively limited by the cache requirements of the most demanding phase within the most demanding application. This motivates the need for dynamic adaptation of cache configurations in order to optimize performance while minimizing energy consumption, on a per-application basis. First, this thesis proposes an energy-efficient cache architecture for a single-core system, along with a run-time support framework for dynamic adaptation of cache size and associativity through the use of machine learning. The machine learning model, trained offline, profiles the application's cache usage and then reconfigures the cache according to the program's requirements. The proposed cache architecture has, on average, an 18% better energy-delay product than prior state-of-the-art cache architectures proposed in the literature. Next, this thesis proposes cooperative partitioning, an energy-efficient cache partitioning scheme for multi-core systems that share the Last Level Cache (LLC), with a core to LLC way ratio of 1:4. The scheme uses small auxiliary tags to capture each core's cache requirements and partitions the LLC according to the individual cores' requirements. The partitioning is way-aligned, which helps reduce both dynamic and static energy. This scheme, on average, offers a 70% reduction in dynamic energy and a 30% reduction in static energy, while maintaining performance on par with state-of-the-art cache partitioning schemes. Finally, when the number of LLC ways is equal to or less than the number of cores, as in many-core systems, cooperative partitioning cannot be used to partition the LLC. This thesis therefore proposes a region-aware cache partitioning scheme as an energy-efficient approach for many-core systems that share the LLC, with core to LLC way ratios of 1:2 and 1:1. The proposed partitioning, on average, offers a 68% reduction in dynamic energy and a 33% reduction in static energy, while again maintaining performance on par with state-of-the-art LLC cache management techniques.
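As an illustration of the kind of way-aligned partitioning described above, here is a hedged sketch in which per-core miss curves (as auxiliary tags might estimate them) drive a greedy allocation of LLC ways. The miss curves and the greedy policy are invented for illustration and are not the thesis's actual cooperative-partitioning algorithm.

```python
def greedy_way_partition(miss_curves, total_ways):
    """Give each core one way, then repeatedly grant a way to whichever
    core gains the largest miss reduction from one more way."""
    cores = len(miss_curves)
    alloc = [1] * cores
    for _ in range(total_ways - cores):
        def gain(c):
            # misses saved by growing core c from alloc[c] to alloc[c]+1 ways
            return miss_curves[c][alloc[c] - 1] - miss_curves[c][alloc[c]]
        best = max(range(cores), key=gain)
        alloc[best] += 1
    return alloc

# Hypothetical per-core miss curves (misses vs. allocated ways) for a
# 4-core system sharing a 16-way LLC, i.e. the 1:4 core-to-way ratio.
miss_curves = [
    [100, 60, 40, 30, 25, 22, 20, 19, 18, 17, 16, 16, 15, 15, 15, 15],
    [80, 50, 35, 28, 24, 21, 19, 18, 17, 16, 16, 15, 15, 15, 14, 14],
    [120, 90, 70, 55, 45, 38, 33, 30, 28, 26, 25, 24, 23, 22, 22, 21],
    [40, 30, 25, 22, 20, 19, 18, 18, 17, 17, 17, 16, 16, 16, 16, 16],
]
print(greedy_way_partition(miss_curves, 16))  # ways granted per core
```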
13

On reducing the decoding complexity of shingled magnetic recording system

Awad, Nadia January 2013 (has links)
Shingled Magnetic Recording (SMR) has been recognised as one of the alternative technologies for achieving an areal density beyond the limit of the perpendicular recording technique, 1 Tb/in2, with the advantage of extending the use of conventional media and read/write heads. This work presents an SMR system subject to both Inter Symbol Interference (ISI) and Inter Track Interference (ITI) and investigates different equalisation/detection techniques in order to reduce the complexity of this system. To investigate ITI in shingled systems, a one-track one-head system model has been extended into a two-track one-head system model with two interfering tracks. Consequently, six novel decoding techniques have been applied to the new system in order to find the Maximum Likelihood (ML) sequence. The decoding complexity of the six techniques has been investigated and measured. The results show that the complexity is reduced by more than a factor of three at the cost of a 0.5 dB loss in performance. To measure this complexity practically, a perpendicular recording system has been implemented in hardware. Hardware architectures were designed for that system and passed the Quartus II fitter: a Perpendicular Magnetic Recording (PMR) channel, a digital filter equaliser with and without Additive White Gaussian Noise (AWGN), and an ideal channel architecture. Two different hardware designs were implemented for the Viterbi Algorithm (VA); however, the Quartus II fitter was unsuccessful for both. It was found that Simulink/Digital Signal Processing (DSP) Builder based designs are not efficient for complex algorithms, and that a more suitable approach for such designs is to write Hardware Description Language (HDL) code for those algorithms.
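To make the ML sequence detection concrete, the sketch below runs a minimal Viterbi detector over a toy two-tap ISI channel, y_k = x_k + a*x_{k-1}. The channel model, trellis and parameters are simplifications for illustration, not the SMR/PMR channel used in the thesis.

```python
def viterbi_detect(received, a=0.5):
    """ML detection of a +/-1 symbol sequence over a 1 + a*D channel."""
    states = [-1, 1]                        # previous transmitted symbol
    cost = {s: 0.0 for s in states}         # path metric per state
    paths = {s: [] for s in states}
    for y in received:
        new_cost, new_paths = {}, {}
        for s_new in states:
            # pick the best predecessor state for this transition
            cands = [(cost[s_old] + (y - (s_new + a * s_old)) ** 2, s_old)
                     for s_old in states]
            c, s_best = min(cands)
            new_cost[s_new] = c
            new_paths[s_new] = paths[s_best] + [s_new]
        cost, paths = new_cost, new_paths
    return paths[min(states, key=lambda s: cost[s])]

tx = [1, -1, -1, 1, 1, -1]
# Noiseless channel output, assuming the symbol before tx was -1.
rx = [tx[k] + 0.5 * (tx[k - 1] if k else -1) for k in range(len(tx))]
print(viterbi_detect(rx))                   # recovers tx exactly
```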
14

Approximation algorithms for packing and buffering problems

Matsakis, Nicolaos January 2015 (has links)
This thesis studies online and offline approximation algorithms for packing and buffering problems. In the second chapter, we study the problem of packing linear programs online. In this problem, the online algorithm may only increase the values of the variables of the linear program, and its goal is to maximize the value of the objective function. The online algorithm initially has full knowledge of all parameters of the linear program except for the right-hand sides of the constraints, which are gradually revealed to it by the adversary. This online problem was introduced by Ochel et al. [2012]. Our contribution (Englert et al. [2014]) is to provide improved upper bounds on the competitiveness of both deterministic and randomized online algorithms for this problem, as well as an optimal deterministic online algorithm for the special case of linear programs involving two variables. In the third chapter we study the offline COLORFUL BIN PACKING problem. This problem is a variant of BIN PACKING in which each item is associated with a color, with the additional restriction that two items packed consecutively into the same bin cannot share the same color. The COLORFUL BIN PACKING problem has been studied mainly from an online perspective and was introduced as a generalization of the BLACK AND WHITE BIN PACKING problem (Balogh et al. [2012]), i.e., the special case of this problem for two colors. We provide (joint work with Matthias Englert) a 2-approximate algorithm for the COLORFUL BIN PACKING problem. In the fourth chapter we study the Longest Queue Drop (LQD) online algorithm for shared-memory switches with three and two output ports. The Longest Queue Drop algorithm is a well-known online algorithm used to direct the packet flow of shared-memory switches. Under LQD, when the buffer of the switch becomes full, a packet is preempted from the longest queue in the buffer to free buffer space for the newly arriving packet, which is accepted. We show (Matsakis [2016], to appear) that the Longest Queue Drop algorithm is (3/2)-competitive for three-port switches, improving the previously best upper bound of 5/3 (Kobayashi et al. [2007]). Additionally, we show that this algorithm is exactly (4/3)-competitive for two-port switches, correcting a previously published result claiming a tight upper bound of (4M-4)/(3M-2) < 4/3, where M ∈ Z+ denotes the buffer size.
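A toy simulation of the Longest Queue Drop rule analysed in the fourth chapter is sketched below; the port count, buffer size and arrival pattern are arbitrary, and tie-breaking details may differ from the model used in the analysis.

```python
def lqd_accept(queues, buffer_size, dest):
    """Try to accept one unit-size packet destined to output port `dest`."""
    if sum(queues) < buffer_size:
        queues[dest] += 1                   # room left: accept directly
        return True
    longest = max(range(len(queues)), key=lambda p: queues[p])
    if queues[longest] > queues[dest]:
        queues[longest] -= 1                # preempt from the longest queue
        queues[dest] += 1                   # accept the new arrival
        return True
    return False                            # arriving packet is dropped

queues = [0, 0]                             # a two-port shared-memory switch
for dest in [0, 0, 0, 1, 0, 1, 1, 1]:
    lqd_accept(queues, 4, dest)
print(queues)                               # LQD balances the queues: [2, 2]
```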
15

Advanced management techniques for many-core communication systems

Al Khanjari, Sharifa January 2017 (has links)
The way computer processors are built is changing. Nowadays, computer processor performance is increased by adding more processing cores to a single chip instead of making processors larger and faster; the traditional approach is no longer viable due to limits in transistor scaling. Both industry and academia agree that scaling the number of processing cores to hundreds or thousands on a single chip is the only way to scale computer processor performance from now on. Consequently, the performance of these future many-core systems with thousands of cores will heavily depend on the Network-on-Chip (NoC) architecture to provide scalable communication, and as the number of cores increases, locality will only become more important. Communication locality is essential to reduce latency and increase performance: many-core systems should be designed such that cores communicate mainly with neighbouring cores, in order to minimise the communication cost. We investigate the network performance of different topologies using the ITRS physical data for the year 2023. For this purpose, we propose abstract synthetic traffic generation models to explore locality behaviour in many-core NoC systems. Using the synthetic traffic models - a group clustering model and a ring clustering model - traffic distance metrics may be adjusted with locality parameters. We choose two many-core NoC architectures - a distributed memory architecture and a shared memory architecture - to examine whether enforcing locality on different architectures has a different effect on the network performance of different topologies. The distributed memory architecture uses message passing to communicate between cores. Our results show that the degree of locality and the clustering model strongly affect the performance of the network: scale-invariant topologies, such as the fat quadtree, perform worse than flat ones because the reduced hop count is outweighed by the longer wire delays. In the shared memory architecture, threads communicate with each other by storing data in shared cache lines. We design a hierarchical cache model that benefits from communication locality, because a many-core cache hierarchy that fails to exploit locality may end up having more cores delayed, thereby decreasing network performance. Our results show that the locality model of thread placement, and the distance over which threads are placed, significantly affect NoC performance; furthermore, they show that in this setting scale-invariant topologies perform better than flat topologies. Then, we demonstrate that implementing directory-based cache coherency has only a small overhead on the cache size. Using a cache coherency protocol in our proposed hierarchical cache model, we show that network performance decreases only slightly. Hence, cache coherency scales, and it is possible to have a shared memory architecture with thousands of cores.
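The idea of a locality-parameterised synthetic traffic model can be sketched as follows; the distance-based weighting below is a hypothetical stand-in for the thesis's group and ring clustering models, meant only to show how a locality parameter tunes traffic distance.

```python
import random

def pick_destination(src, side, locality):
    """Pick a destination on a side x side mesh, with probability
    proportional to (1 + hop distance)^(-locality). Higher `locality`
    concentrates traffic on neighbouring cores."""
    sx, sy = src
    nodes = [(x, y) for x in range(side) for y in range(side)
             if (x, y) != src]
    weights = [(1 + abs(x - sx) + abs(y - sy)) ** (-locality)
               for x, y in nodes]
    return random.choices(nodes, weights=weights)[0]

random.seed(0)
dests = [pick_destination((4, 4), 8, locality=2.0) for _ in range(5)]
print(dests)   # mostly neighbours of (4, 4) for a strong locality exponent
```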
16

Structuring multidimensional data : exploring medical data with an instance-based approach

Falip, Joris 22 November 2019 (has links)
A posteriori use of the medical data accumulated by practitioners represents a major challenge for clinical research as well as for personalized patient follow-up. However, health professionals lack appropriate tools to easily explore, understand and manipulate their data. To address this, we propose an algorithm that structures elements by similarity and representativeness. This method groups the individuals of a dataset around representative, generic members that are able to subsume the elements and summarize the data. The approach processes each dimension individually before aggregating the results, which makes it suitable for high-dimensional data and yields transparent, interpretable and explainable results. The resulting structure supports exploratory analysis and reasoning by analogy through step-by-step navigation: it is similar to the organization of knowledge and the decision-making process used by experts. We then propose an anomaly detection algorithm that detects complex, high-dimensional anomalies by analyzing two-dimensional projections; this approach also produces interpretable results. We evaluate both algorithms on real and simulated data whose elements are described by many variables, from a few dozen to several thousand, and analyze in particular the properties of the graph resulting from the structuring of the elements. Finally, we describe a medical data pre-processing tool and a web platform intended for physicians. Through this intuitive tool, we propose a visual structuring of the elements to ease their exploration. This prototype provides decision and diagnostic support by allowing the physician to navigate within the data and explore similar patients. It can also be used to test clinical hypotheses on a cohort of patients.
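One possible reading of the dimension-by-dimension structuring algorithm is sketched below; the "distance to the median" notion of representativeness and the vote-based aggregation are assumptions made for illustration, not the thesis's exact definitions.

```python
import statistics

def structure(data):
    """Link each element to a nearby, more representative element on each
    dimension, then aggregate the per-dimension links into one graph."""
    n, dims = len(data), len(data[0])
    votes = [[0] * n for _ in range(n)]     # votes[i][j]: dimensions linking i -> j
    for d in range(dims):
        med = statistics.median(row[d] for row in data)
        for i in range(n):
            # candidate parents: elements more representative on dimension d
            cands = [j for j in range(n) if j != i
                     and abs(data[j][d] - med) < abs(data[i][d] - med)]
            if cands:
                parent = min(cands, key=lambda j: abs(data[j][d] - data[i][d]))
                votes[i][parent] += 1
    # attach each element to the parent that most dimensions agree on
    return {i: max(range(n), key=lambda j: votes[i][j])
            for i in range(n) if any(votes[i])}

data = [[1.0, 9.0], [2.0, 8.0], [5.0, 5.0], [8.0, 2.0], [9.0, 1.0]]
print(structure(data))   # elements point towards the central, generic member
```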
17

Processor cache management with prediction

Σπηλιωτακάρας, Αθανάσιος 11 May 2010 (has links)
In the continuously changing field of computer architecture, changes have occurred at an exponential rate for at least the last 30 years. Cache memories have become the centre of interest, as processors grow ever faster and more efficient while memory circuits fail to keep pace. The field is now turning to clever solutions that aim to reduce the communication cost between the two subsystems. Cache management is an expression of this reality, and one of its most fundamental parts is the replacement algorithm. This thesis focuses on the relation between two recent, already implemented replacement policies, and on the degree to which they can be merged into a new one. The algorithms we study are the IbRdPrediction (Instruction-based Reuse-Distance Prediction) replacement algorithm and the MLP-Aware (Memory Level Parallelism aware) replacement algorithm. We examine whether it is possible to create a new instruction-based prediction mechanism that takes the characteristics of memory-level parallelism (MLP) into account, and to what extent it improves on existing techniques in terms of system performance.
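A compact sketch of instruction-based reuse-distance prediction, the first of the two policies studied, is given below; the table organisation, training rule and victim selection are simplified guesses rather than the published IbRdPrediction design.

```python
from collections import defaultdict

class IbRdPredictor:
    """The PC of the accessing instruction indexes a table of predicted
    reuse distances; the victim is the line predicted to be reused last."""
    def __init__(self):
        self.table = defaultdict(lambda: 1)    # PC -> predicted reuse distance
        self.last_use = {}                     # address -> (time, PC)
        self.time = 0

    def access(self, pc, addr):
        self.time += 1
        if addr in self.last_use:              # train on the observed distance
            t0, pc0 = self.last_use[addr]
            observed = self.time - t0
            self.table[pc0] = (self.table[pc0] + observed) // 2  # running average
        self.last_use[addr] = (self.time, pc)

    def predicted_next_use(self, addr):
        t0, pc0 = self.last_use[addr]
        return t0 + self.table[pc0]

pred = IbRdPredictor()
for pc, addr in [(1, 'A'), (2, 'B'), (1, 'A'), (2, 'B'), (1, 'A')]:
    pred.access(pc, addr)
# Victim selection: evict the line whose predicted reuse is furthest away.
print(max(['A', 'B'], key=pred.predicted_next_use))
```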
18

Fabrication and characterisation of L10 ordered FePt thin films and bit patterned media

Zygridou, Smaragda January 2016 (has links)
Highly ordered magnetic materials with high perpendicular magnetic anisotropy (PMA), such as L10 ordered FePt, and new recording technologies, such as bit patterned media (BPM), have been proposed as solutions to the media trilemma problem and provide promising strategies towards future high-density magnetic data storage media. L10 ordered FePt thin films can provide the necessary high PMA. However, ordering this material perpendicular to the plane of the films remains challenging, since high-temperature and time-consuming processes are required. In this work, a remote plasma sputtering system has been used to investigate FePt thin films, in order to understand whether the greater control of process parameters offered by this system can lead to enhanced ordering in L10 FePt thin films at lower temperatures than conventional dc magnetron approaches. More specifically, the effect of different substrate temperatures and target bias voltages on the ordering, microstructure and magnetic properties of FePt thin films was investigated. Highly ordered FePt thin films were successfully fabricated after post-annealing processes and were patterned into arrays of FePt islands. This patterning was carried out with e-beam lithography and ion milling. Initial MFM measurements of these islands showed a single-domain structure for all island sizes, indicating the high PMA of the FePt. Magnetometry measurements were also carried out with a novel polar magneto-optical Kerr effect (MOKE) system which was designed and built during this project. This system has three unique capabilities: a) the application of a uniform magnetic field of up to 2 T, b) the rotation of the field to an arbitrary angle, and c) the use of lasers of four different wavelengths. The combination of these capabilities enabled measurements on ordered FePt thin films and patterned media, and can pave the way for further highly sensitive measurements on magnetic thin films and nanostructures.
19

System modeling of a reconfigurable RF interconnect architecture for many-core processors

Brière, Alexandre 08 December 2017 (has links)
The growing number of cores on a single chip goes along with an increase in communications. Moreover, the variety of applications running on the chip causes spatial and temporal heterogeneity in communications. To address these issues, we present in this thesis a dynamically reconfigurable interconnect based on Radio Frequency (RF) for intra-chip communications. The use of RF increases the available bandwidth while minimizing latency, and dynamic reconfiguration of the interconnect makes it possible to adapt the many-core chip to the variability of applications and communications. We present the rationale for choosing RF over the other emerging technologies in the field, namely optics and 3D integration, the detailed architecture of the network and of a chip implementing it, and the evaluation of its feasibility and performance. During the evaluation phase we were able to show that, for a Chip Multiprocessor (CMP) of 1,024 tiles, our solution yields a performance gain of 13%. One advantage of this RF interconnect is the ability to broadcast at no additional cost compared to point-to-point communications, opening new perspectives in terms of cache coherence management in particular.
20

Network architectures and energy efficiency for high performance data centers

Baccour, Emna 30 June 2017 (has links)
The increasing trend of migrating applications, computation and storage into more robust systems has led to the emergence of mega data centers hosting tens of thousands of servers. As a result, designing a data center network that interconnects this massive number of servers and provides an efficient, fault-tolerant routing service has become an urgent need, and it is the challenge addressed in this thesis. Since this is a hot research topic, many solutions have been proposed, such as adopting new interconnection technologies and new algorithms for data centers. However, many of these solutions suffer from performance problems or can be quite costly, and previous efforts have paid little attention to quality of service and power efficiency in data center networks. In order to provide a novel solution that avoids the drawbacks of earlier work while retaining its advantages, we propose new data center interconnection networks that aim to build a scalable, cost-effective, high-performance and QoS-capable networking infrastructure, together with power-aware algorithms to make the network energy efficient.
Hence, we particularly investigate the following issues: 1) fixing the architectural and topological properties of the newly proposed data centers and evaluating their performance and their capacity to provide robust systems in a faulty environment; 2) proposing routing, load-balancing, fault-tolerance and power-efficient algorithms for our architectures and examining their complexity and how well they satisfy the system requirements; 3) integrating quality of service; 4) comparing our proposed data centers and algorithms to existing solutions in a realistic environment. Throughout the thesis we study existing models, propose improvements, and suggest new methodologies and algorithms.
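One generic example of the power-aware algorithms mentioned in point 2 is flow consolidation: route traffic onto as few links as possible so that idle links and switches can be powered down. The sketch below is a greedy heuristic over an invented topology, included only to illustrate the idea; it is not the specific algorithm developed in the thesis.

```python
def consolidate(flows, paths, capacity):
    """Greedily route each flow on the candidate path whose links are
    already the most used, so the remaining links can be powered down."""
    load = {}                                   # link -> allocated load
    for flow, demand in flows:
        def fits(path):
            return all(load.get(l, 0) + demand <= capacity for l in path)
        # fall back to the first candidate if nothing fits (sketch only)
        cands = [p for p in paths[flow] if fits(p)] or paths[flow][:1]
        best = max(cands, key=lambda p: sum(l in load for l in p))
        for l in best:
            load[l] = load.get(l, 0) + demand
    return load

paths = {                                       # two candidate paths per flow
    'f1': [('a-c', 'c-d'), ('a-e', 'e-d')],
    'f2': [('b-c', 'c-d'), ('b-e', 'e-d')],
}
load = consolidate([('f1', 3), ('f2', 4)], paths, capacity=10)
print(sorted(load))    # both flows share c-d; a-e, b-e and e-d stay powered off
```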
