231

Cooperative Communication In Store And Forward Wireless Networks Using Rateless Codes

Bansal, Gaurav 05 1900 (has links) (PDF)
In this thesis, we consider a cooperative relay-assisted communication system that uses rateless codes. When multiple relays are present, the relay with the highest channel gain to the source is the first to successfully decode a message from the source and forward it to the destination. Thus, the unique properties of rateless codes ensure that both rate adaptation and relay selection occur without the transmitting source or relays acquiring instantaneous channel knowledge. We show that in such cooperative systems, buffering messages at relays significantly increases throughput. We develop a novel analysis of these systems that combines the communication-theoretic aspects of cooperation over fading channels with the queuing-theoretic aspects associated with buffering. Closed-form expressions are derived for the throughput and end-to-end delay for the general case in which the channels between various nodes are not statistically identical. Results are also shown for the benchmark system that does not buffer messages.

Though relay selection combined with buffering of messages at the relays substantially increases the throughput of a cooperative network, it also increases the end-to-end delays due to the additional queuing delays at the relay nodes. To overcome this, we propose a novel method that exploits a unique property of rateless codes: a receiver can decode a message from non-contiguous and unordered portions of the received signal. In this method, each relay, depending on its queue length, ignores its received coded bits with a given probability. We show that this substantially reduces the end-to-end delays while retaining almost all of the throughput gain achieved by buffering. In effect, the method increases the odds that the message is first decoded by a relay with a smaller queue; the queuing load is thus balanced across the relays and traded off against transmission times. We derive conditions for the stability of this system when the various channels undergo fading. Although the system leads to analytically intractable G/GI/1 queues, we also gain insights into the method by analyzing a similar system with a simpler model for the relay-to-destination transmission times.

Next, we combine the single-relay selection scheme at the source with physical-layer power control at the relays (owing to the diversity provided by the rateless codes, power control at the source is not needed). We derive an optimal power control policy that minimizes the relay-to-destination transmission time. Because of its computational and implementation complexity, we also develop a heuristic, easily implementable, near-optimal policy, in which the allocated power turns out to be inversely proportional to the square root of the channel gain. This policy also performs better than the channel-inversion policy. Our power control solution substantially decreases the mean end-to-end delays while also marginally increasing throughput. Finally, we combine bit dropping with power control at the relays, which further improves system performance.
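To make the two mechanisms concrete, here is a minimal Python sketch (not taken from the thesis: the linear drop-probability mapping and parameter names such as `max_queue` and `p_ref` are assumptions) of a queue-length-dependent rule for ignoring received coded bits and of the heuristic power allocation that is inversely proportional to the square root of the channel gain.

```python
import math
import random

def drop_probability(queue_len, max_queue=20, alpha=1.0):
    """Hypothetical mapping from a relay's queue length to the probability
    of ignoring incoming coded bits; longer queues drop more aggressively."""
    return min(1.0, alpha * queue_len / max_queue)

def relay_ignores_bits(queue_len):
    """Bernoulli decision: does this relay skip the current coded bits?"""
    return random.random() < drop_probability(queue_len)

def heuristic_power(channel_gain, p_ref=1.0, g_ref=1.0):
    """Near-optimal heuristic from the abstract: transmit power taken
    inversely proportional to the square root of the channel gain."""
    return p_ref * math.sqrt(g_ref / channel_gain)

# Toy usage: a relay with a long queue is likely to ignore new coded bits,
# while a relay with a strong channel transmits at reduced power.
print(relay_ignores_bits(queue_len=15))
print(heuristic_power(channel_gain=4.0))  # half the reference power
```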
232

A simulation workflow to evaluate the performance of dynamic load balancing with over decomposition for iterative parallel applications

Tesser, Rafael Keller January 2018 (has links)
In this thesis we present a novel simulation workflow to evaluate the performance of dynamic load balancing with over-decomposition applied to iterative parallel applications. Its goals are to perform such an evaluation with minimal application modification and at a low cost in terms of time and resource requirements. Many parallel applications suffer from dynamic (temporal) load imbalance that cannot be treated at the application level. It may be caused by intrinsic characteristics of the application or by external software and hardware factors. As demonstrated in this thesis, such dynamic imbalance can be found even in applications whose code does not hint at any dynamism. Therefore, we need to rely on runtime dynamic load balancing mechanisms, such as dynamic load balancing based on over-decomposition. The problem is that evaluating and tuning the performance of such a technique can be costly. This usually entails modifications to the application and a large number of executions to get statistically sound performance measurements with different load balancing parameter combinations. Moreover, useful and accurate measurements often require big resource allocations on a production cluster.

Our simulation workflow, dubbed Simulated Adaptive MPI (SAMPI), employs a combined sequential-emulation and trace-replay simulation approach to reduce the cost of such an evaluation. Both sequential emulation and trace replay require a single compute node, and the trace-replay simulation takes only a small fraction of the real-life parallel execution time of the application. Besides the basic SAMPI simulation, we developed spatial aggregation and application-level rescaling techniques to speed up the emulation process.

To demonstrate the real-life performance benefits of dynamic load balancing with over-decomposition, we evaluated the performance gains obtained by employing this technique on an iterative parallel geophysics application called Ondes3D. Dynamic load balancing support was provided by Adaptive MPI (AMPI), resulting in up to 36.58% performance improvement on 288 cores of a cluster. This real-life evaluation also illustrates the difficulties found in this process, thus justifying the use of simulation.

To implement the SAMPI workflow, we relied on SimGrid's Simulated MPI (SMPI) interface in both emulation and trace-replay modes. To validate our simulator, we compared simulated (SAMPI) and real-life (AMPI) executions of Ondes3D. The simulations presented a load balance evolution very similar to that of the real executions and were also successful in choosing the best load balancing heuristic for each scenario. Besides the validation, we demonstrate the use of SAMPI for load balancing parameter exploration and for computational capacity planning. As for the performance of the simulation itself, we roughly estimate that our full workflow can simulate the execution of Ondes3D with 24 different load balancing parameter combinations in 5 hours for our heavier earthquake scenario and in 3 hours for the lighter one.
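As a rough illustration of why over-decomposition helps a runtime-level load balancer (this toy sketch is not SAMPI or AMPI; the chunk loads, core count, and greedy mapping heuristic are assumptions), splitting each core's work into many small chunks lets a simple greedy remapping flatten the load:

```python
import random

def imbalance(per_core_load):
    """Load imbalance metric: max load over mean load (1.0 is perfect)."""
    mean = sum(per_core_load) / len(per_core_load)
    return max(per_core_load) / mean

def greedy_map(chunk_loads, n_cores):
    """Greedy 'largest chunk first' mapping, a stand-in for a runtime-level
    load balancer that is free to migrate chunks between cores."""
    cores = [0.0] * n_cores
    for load in sorted(chunk_loads, reverse=True):
        cores[cores.index(min(cores))] += load
    return cores

random.seed(0)
n_cores = 8
# One chunk per core (no over-decomposition): nothing can be migrated.
coarse = [random.uniform(0.5, 1.5) for _ in range(n_cores)]
# 8x over-decomposition: many small chunks carrying the same total load.
fine = [load / 8 for load in coarse for _ in range(8)]
print("no over-decomposition:", round(imbalance(coarse), 3))
print("8x over-decomposition:", round(imbalance(greedy_map(fine, n_cores)), 3))
```

With finer chunks the greedy remapping gets the per-core totals much closer to the mean, which is the effect the runtime-level balancer exploits.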
233

Modelování protokolů pro redundanci brány / Modelling Gateway Redundancy Protocols

Vítek, Petr January 2013 (has links)
This master's thesis deals with the theoretical analysis of First Hop Redundancy Protocols (FHRP), network protocols designed to protect the default gateway and to ensure high availability in the network through redundancy. The reader becomes familiar with the VRRP, HSRP and GLBP protocols and learns how to configure them on real Cisco devices. The thesis also describes how to implement VRRP in the simulated environment of OMNeT++. The result of the implementation is verified on test topologies.
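As a rough illustration of the gateway-redundancy idea behind these protocols, the following Python sketch (a simplification, not the thesis's OMNeT++ implementation) mimics VRRP-style master election, where the router with the highest priority becomes master and the highest IP address breaks ties:

```python
from ipaddress import IPv4Address

def elect_vrrp_master(routers):
    """Pick the VRRP master: highest priority wins, highest IP breaks ties.
    `routers` is a list of (name, priority, ip_string) tuples."""
    return max(routers, key=lambda r: (r[1], IPv4Address(r[2])))

group = [
    ("R1", 110, "192.168.1.2"),
    ("R2", 100, "192.168.1.3"),
    ("R3", 110, "192.168.1.4"),  # same priority as R1, higher IP
]
master = elect_vrrp_master(group)
print(f"{master[0]} owns the virtual gateway; the others stay in backup state")
```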
234

Resource management in computer clusters : algorithm design and performance analysis / Gestion des ressources dans les grappes d’ordinateurs : conception d'algorithmes et analyse de performance

Comte, Céline 24 September 2019 (has links)
The growing demand for cloud-based services encourages operators to maximize resource efficiency within computer clusters. This motivates the development of new technologies that make resource management more flexible. However, exploiting this flexibility to reduce the number of computers also requires efficient resource-management algorithms that have a predictable performance under stochastic demand. In this thesis, we design and analyze such algorithms using the framework of queueing theory.

Our abstraction of the problem is a multi-server queue with several customer classes. Servers have heterogeneous capacities and the customers of each class enter the queue according to an independent Poisson process. Each customer can be processed in parallel by several servers, depending on compatibility constraints described by a bipartite graph between classes and servers, and each server applies a first-come-first-served policy to its compatible customers. We first prove that, if the service requirements are independent and exponentially distributed with unit mean, this simple policy yields the same average performance as balanced fairness, an extension of processor-sharing known to be insensitive to the distribution of the service requirements. A more general form of this result, relating order-independent queues to Whittle networks, is also proved. Lastly, we derive new formulas to compute performance metrics.

These theoretical results are then put into practice. We first propose a scheduling algorithm that extends the principle of round-robin to a cluster where each incoming job is assigned to a pool of computers by which it can subsequently be processed in parallel. Our second proposal is a load-balancing algorithm based on tokens for clusters where jobs have assignment constraints. Both algorithms are approximately insensitive to the job size distribution and adapt dynamically to demand. Their performance can be predicted by applying the formulas derived for the multi-server queue.
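As a loose illustration of the second proposal, the sketch below implements a token-based dispatcher under assumed rules (the fixed token count per server and the "oldest compatible token" rule are illustrative choices, not the policy analyzed in the thesis): each server advertises tokens, and an arriving job with assignment constraints takes the oldest token of a compatible server.

```python
from collections import deque

class TokenBalancer:
    """Minimal token-based dispatcher: each server keeps a fixed number of
    tokens; an arriving job takes the oldest available token among its
    compatible servers."""

    def __init__(self, tokens_per_server, servers):
        self.free = deque()  # tokens ordered by release time (oldest first)
        for server in servers:
            for _ in range(tokens_per_server):
                self.free.append(server)

    def assign(self, compatible):
        """Give the job the oldest token of a compatible server, or None."""
        for i, server in enumerate(self.free):
            if server in compatible:
                del self.free[i]
                return server
        return None  # the job waits until a compatible token is released

    def release(self, server):
        """A job finished on `server`: its token goes to the back of the queue."""
        self.free.append(server)

balancer = TokenBalancer(tokens_per_server=2, servers=["s1", "s2", "s3"])
print(balancer.assign({"s2", "s3"}))  # -> "s2" (its token is the oldest)
print(balancer.assign({"s1"}))        # -> "s1"
```

Handing out the oldest token first loosely mirrors the first-come-first-served behaviour assumed in the queueing model above.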
237

A scalable database for a remote patient monitoring system

Mukhammadov, Ruslan January 2013 (has links)
Today, one of the fastest-growing social services is the ability for doctors to monitor patients in their residences. The proposed highly scalable database system is designed to support a Remote Patient Monitoring System (RPMS). In an RPMS, a wide range of applications are enabled by collecting health-related measurement results from a number of medical devices in the patient’s home, parsing and formatting these results, and transmitting them from the patient’s home to specific data stores. Subsequently, another set of applications communicates with these data stores to provide clinicians with the ability to observe, examine, and analyze these health-related measurements in (near) real time.

Because of the rapid expansion in the number of patients utilizing RPMSs, it is becoming a challenge to store, manage, and process the very large number of health-related measurements that are being collected. The primary reason for this problem is that most RPMSs are built on top of traditional relational databases, which are inefficient when dealing with this very large amount of data (often called “big data”).

This thesis project analyzes scalable data management to support RPMSs; introduces a set of open-source technologies, built around HBase, that efficiently store and manage any amount of data for such a scalable RPMS; implements these technologies; and, as a proof of concept, compares the performance of the prototype data management system with that of a traditional relational database (specifically MySQL). The comparison considers both a single node and a multi-node cluster and evaluates several critical parameters, including performance, scalability, and load balancing (in the case of multiple nodes). The data volumes used for testing input/output (read/write) and data statistics performance are 1, 10, 50, 100, and 250 GB.

The thesis presents several ways of dealing with large amounts of data and develops and evaluates a highly scalable database that could be used with an RPMS. Several software suites were used to compare relational and non-relational systems, and these results are used to evaluate the performance of the prototype of the proposed RPMS. The benchmarking results show that MySQL is better than HBase in terms of read performance, while HBase is better in terms of write performance. Which of these database types should be used to implement an RPMS is therefore a function of the expected ratio of reads to writes. Learning this ratio should be the subject of a future thesis project.
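The following sketch is not one of the benchmarking suites used in the thesis; it is a minimal, backend-agnostic harness showing how a read/write comparison of the kind discussed above can be set up, with an in-memory dict standing in for MySQL or HBase client calls.

```python
import time

def benchmark(name, write_fn, read_fn, n_ops=10_000):
    """Time n_ops writes followed by n_ops reads against a storage backend
    exposed through two callables, and report operations per second."""
    start = time.perf_counter()
    for i in range(n_ops):
        write_fn(f"patient:{i}", f"measurement-{i}")
    write_rate = n_ops / (time.perf_counter() - start)

    start = time.perf_counter()
    for i in range(n_ops):
        read_fn(f"patient:{i}")
    read_rate = n_ops / (time.perf_counter() - start)
    print(f"{name}: {write_rate:,.0f} writes/s, {read_rate:,.0f} reads/s")

# In-memory dict as a stand-in backend; a real run would plug MySQL and
# HBase clients in behind the same two callables and compare the ratios.
store = {}
benchmark("dict-backend", store.__setitem__, store.__getitem__)
```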
238

Cooperative user- and system-level scheduling of task-centric parallel programs

Varisteas, Georgios January 2013 (has links)
Emerging architecture designs include tens of processing cores on a single chip die, and the number of cores is expected to reach the hundreds within a few years. However, most common workloads expose only fluctuating parallelism, insufficient to utilize such systems. The combination of these issues suggests that large-scale systems will be either multiprogrammed or have their unneeded resources powered off. To achieve these features, workloads must be able to provide a metric of their parallelism which the system can use to dynamically adapt per-application resource allotments.

Adaptive resource management requires scheduling abstractions to be split into two cooperating layers: the system layer, which is aware of the availability of resources, and the application layer, which can accurately and iteratively estimate the workload's true resource requirements.

This thesis addresses these issues and provides a self-adapting work-stealing scheduling method that achieves the expected performance while conserving resources. The method is based on deterministic victim selection (DVS), which controls how the load concentrates among the worker threads. It allows the number of spawned but not yet processed tasks to be used as a metric for the resource requirements. Because this metric measures work to be executed in the future rather than past behavior, DVS is versatile enough to handle very irregular workloads.
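The sketch below is a simplified, single-threaded simulation rather than the thesis's runtime: only the deterministic (non-random) victim choice and the spawned-but-unprocessed-task metric come from the abstract, while the concrete "steal from the most loaded worker" policy is an assumption.

```python
from collections import deque

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.tasks = deque()  # spawned but not yet processed tasks

    def pending(self):
        """Parallelism metric from the abstract: spawned-but-unprocessed tasks."""
        return len(self.tasks)

def deterministic_victim(thief, workers):
    """Deterministic victim selection (a stand-in policy): always steal from
    the worker with the most pending tasks, ties broken by worker id, so the
    load concentration is controlled predictably instead of at random."""
    candidates = [w for w in workers if w is not thief and w.pending() > 0]
    if not candidates:
        return None
    return max(candidates, key=lambda w: (w.pending(), -w.wid))

def steal(thief, workers):
    victim = deterministic_victim(thief, workers)
    if victim is not None:
        thief.tasks.append(victim.tasks.popleft())  # steal the oldest task
    return victim

workers = [Worker(i) for i in range(4)]
workers[0].tasks.extend(range(6))      # only worker 0 has work initially
for w in workers[1:]:
    steal(w, workers)
print([w.pending() for w in workers])  # load spreads: [3, 1, 1, 1]
```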
239

Adaptive Middleware for Self-Configurable Embedded Real-Time Systems : Experiences from the DySCAS Project and Remaining Challenges

Persson, Magnus January 2009 (has links)
Development of software for embedded real-time systems poses several challenges. Hard and soft constraints on timing, and usually considerable resource limitations, put important constraints on the development. The traditional way of coping with these issues is to produce a fully static design, i.e. one that is fully fixed already during design time.

Current trends in the area of embedded systems, including the emerging openness in these types of systems, are providing new challenges for their designers – e.g. integration of new software during runtime, software upgrade or run-time adaptation of application behavior to facilitate better performance combined with more efficient resource usage. One way to reach these goals is to build self-configurable systems, i.e. systems that can resolve such issues without human intervention. Such mechanisms may be used to promote increased system openness.

This thesis covers some of the challenges involved in that development. An overview of the current situation is given, with an extensive review of different concepts that are applicable to the problem, including adaptivity mechanisms (including QoS and load balancing), middleware and relevant design approaches (component-based, model-based and architectural design). A middleware is a software layer that can be used in distributed systems, with the purpose of abstracting away distribution, and possibly other aspects, for the application developers. The DySCAS project had as a major goal the development of middleware for self-configurable systems in the automotive sector. Such development is complicated by the special requirements that apply to these platforms.

Work on the implementation of an adaptive middleware, DyLite, providing self-configurability to small-scale microcontrollers, is described and covered in detail. DyLite is a partial implementation of the concepts developed in DySCAS.

Another area given significant focus is formal modeling of QoS and resource management. Currently, applications in these types of systems are not given a fully formal definition, at least not one that also covers real-time aspects. Using formal modeling would extend the possibilities for verification of not only system functionality, but also of resource usage, timing and other extra-functional requirements. This thesis includes a proposal of a formalism to be used for these purposes.

Several challenges remain in providing methodology and tools that are usable in production development. Several key issues in this area are described, e.g. version/configuration management, access control, and integration between different tools, together with proposals for future work in the other areas covered by the thesis.
240

Radio Resource Management in LTE Networks : Load Balancing in Heterogeneous Cellular Networks / Gestion des ressources radio dans les réseaux LTE

Jouini, Hana 20 December 2017 (has links)
High demands on mobile networks provide a fresh opportunity to migrate towards multi-tier deployments, denoted as heterogeneous networks (HetNets), involving a mix of cell types and radio access technologies working together seamlessly. In this context, network optimisation functionalities such as load balancing have to be properly engineered so that the HetNet benefits are fully exploited. This dissertation aims to develop tractable frameworks to model and analyze load balancing dynamics while incorporating the heterogeneous nature of cellular networks. In this context we investigate and analyze a class of load balancing strategies, namely adaptive-handover-based load balancing strategies. These strategies were first studied under the general heading of stochastic networks, using a network model based on independent, homogeneous Poisson point processes. We propose a baseline model to characterize rate coverage and handover signalling in a K-tier HetNet with general maximum-power-based cell association and adaptive handover strategies. Tiers differ in terms of deployment density and cell characteristics (i.e. transmit power, bandwidth, and path loss exponent). One of the main outcomes is demonstrating the impact of offloading traffic from the macro tier to the small-cell tier, studied in terms of rate coverage and handover signalling. The results show that the enhancement in rate coverage is penalized by handover signalling overhead.

Appropriate load balancing algorithms based on adaptive handover are then designed, and their performance is evaluated by means of extensive system-level simulations. These simulations are conducted in 3GPP-defined scenarios, including a representation of mobility procedures in both idle and connected states. Simulation results show that the proposed load balancing algorithms ensure performance enhancement in terms of network throughput, packet loss ratio, fairness and handover signalling.
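As a rough sketch of load balancing by adapting handover parameters (the thresholds, step size, offset cap, and hysteresis value below are assumptions, not values from the dissertation), an overloaded cell can raise a bias that makes its edge users hand over to neighbours earlier:

```python
def adapt_handover_offsets(cells, high=0.8, low=0.5, step=1.0, max_offset=6.0):
    """One adaptation round: raise the offset of overloaded cells (so their
    edge users hand over to neighbours sooner) and lower it again when the
    load drops. `cells` maps cell id -> dict(load, offset_db)."""
    for cell in cells.values():
        if cell["load"] > high:
            cell["offset_db"] = min(cell["offset_db"] + step, max_offset)
        elif cell["load"] < low:
            cell["offset_db"] = max(cell["offset_db"] - step, 0.0)
    return cells

def handover_triggered(serving_rsrp, target_rsrp, serving_offset_db,
                       hysteresis_db=2.0):
    """A3-style condition: the target must beat the serving cell by the
    hysteresis minus the serving cell's bias, so a positive offset offloads
    the serving cell earlier."""
    return target_rsrp > serving_rsrp + hysteresis_db - serving_offset_db

cells = {"macro": {"load": 0.9, "offset_db": 0.0},
         "pico":  {"load": 0.3, "offset_db": 0.0}}
adapt_handover_offsets(cells)
print(handover_triggered(-95.0, -96.0, cells["macro"]["offset_db"]))  # False: bias still too small
```

Repeating the adaptation round while the macro cell stays overloaded keeps growing its offset until edge users satisfy the triggering condition and are pushed to the pico cell.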
