401

The Thermal-Constrained Real-Time Systems Design on Multi-Core Platforms -- An Analytical Approach

SHA, SHI 21 March 2018 (has links)
Over the past decades, shrinking transistor sizes, benefiting from advances in IC technology, have enabled more and more transistors to be integrated into an IC chip to achieve ever higher computing performance. However, the semiconductor industry is now reaching a saturation point of Moore's Law, largely due to soaring power consumption and heat dissipation, among other factors. High chip temperature not only significantly increases packaging/cooling costs and degrades system performance and reliability, but also increases energy consumption and can even damage the chip permanently. Although designing 2D and even 3D multi-core processors helps to lower the power/thermal barrier of single-core architectures by exploiting thread/process-level parallelism, the higher power density and longer heat-removal path have made the thermal problem substantially more challenging, surpassing the heat dissipation capability of traditional cooling mechanisms such as cooling fans, heat sinks, and heat spreaders in the design of new generations of computing systems. As a result, dynamic thermal management (DTM), i.e., controlling the thermal behavior of an IC chip by dynamically varying its computing performance and workload allocation, has been well recognized as an effective strategy to deal with these thermal challenges. Different from many existing DTM heuristics that are based on simple intuitions, we seek to address the thermal problems through a rigorous analytical approach, to achieve the high predictability required in real-time system design. In this regard, we have made a number of important contributions. First, we develop a series of lemmas and theorems that are general enough to uncover the fundamental principles and characteristics of the thermal model, peak temperature identification, and peak temperature reduction, which are key to thermal-constrained real-time computer system design. Second, we develop a design-time frequency- and voltage-oscillating approach for multi-core platforms, which can greatly enhance system throughput and service capacity.
Third, different from the traditional workload-balancing approach, we develop a thermal-balancing approach that can substantially improve energy efficiency and task-partitioning feasibility, especially when system utilization is high or the temperature constraint is tight. The significance of our research is twofold: not only do our proposed algorithms for throughput maximization and energy conservation significantly outperform existing work, as demonstrated by extensive experimental results, but the theoretical results are general enough to benefit other thermal-related research.
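To make the oscillating idea concrete, the sketch below simulates a core's temperature under a periodic high/low speed schedule using a standard lumped-RC thermal model. It is a minimal illustration with assumed constants, not the dissertation's model or parameters.

```python
# A minimal sketch (not the dissertation's model): peak temperature of a core
# under a periodic high/low DVFS schedule, using a standard lumped-RC thermal
# model C*dT/dt = P(s) - (T - T_amb)/R. All constants are illustrative.

T_AMB = 45.0      # ambient/heatsink temperature (C), assumed
R, C = 0.8, 10.0  # thermal resistance (C/W) and capacitance (J/C), assumed

def power(s):
    """Illustrative power model: static part plus dynamic part ~ s^3."""
    return 5.0 + 20.0 * s ** 3

def peak_temperature(period, duty, s_hi, s_lo, t_end=2000.0, dt=0.01):
    """Forward-Euler simulation; returns the peak steady-state temperature."""
    t, temp, peak = 0.0, T_AMB, T_AMB
    while t < t_end:
        s = s_hi if (t % period) < duty * period else s_lo
        temp += dt * (power(s) - (temp - T_AMB) / R) / C
        if t > t_end / 2:          # ignore the initial transient
            peak = max(peak, temp)
        t += dt
    return peak

# Oscillating quickly between speeds smooths the thermal profile: a period
# much shorter than the thermal time constant R*C keeps the peak close to the
# temperature of the *average* power, permitting bursts at s_hi that a slow
# (or constant) schedule could not sustain under the same temperature cap.
print(peak_temperature(period=1.0,   duty=0.5, s_hi=1.0, s_lo=0.5))
print(peak_temperature(period=200.0, duty=0.5, s_hi=1.0, s_lo=0.5))
```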
402

Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code

Baier, Christel, Daum, Marcus, Engel, Benjamin, Härtig, Hermann, Klein, Joachim, Klüppelholz, Sascha, Märcker, Steffen, Tews, Hendrik, Völp, Marcus 10 September 2013 (has links) (PDF)
Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement of low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits as the number of processes increases. This paper reports on our work in progress to tackle the state-explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example of a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred or more processes can be constructed and analysed.
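The payoff of symmetry reduction can be seen on a toy version of the paper's lock case study. The sketch below (illustrative only, not the paper's PRISM models) counts states for N identical processes in a test-and-test-and-set protocol: the full model tracks which process is in which local state and grows exponentially, while the symmetry-reduced counter abstraction only tracks how many processes are in each state.

```python
# Toy state counts for N symmetric processes in a test-and-test-and-set lock
# protocol (illustrative; not the paper's PRISM models). Each process is
# idle (I), spinning (S), or in the critical section (C), with at most one
# process in C at a time.

from itertools import product

def full_states(n):
    """Full model: local-state vectors, i.e. *which* process is where."""
    return sum(1 for s in product("ISC", repeat=n) if s.count("C") <= 1)

def reduced_states(n):
    """Counter abstraction: only (#idle, #spinning, #critical) matters,
    because the processes are interchangeable under symmetry."""
    return sum(1 for i in range(n + 1) for s in range(n + 1 - i)
               if (n - i - s) <= 1)

for n in (2, 4, 6, 8, 10):
    print(f"N={n:2d}: full={full_states(n):6d}  reduced={reduced_states(n):3d}")
# Full counts grow like 3^N; reduced counts grow linearly (2N + 1), which is
# why models with a hundred or more symmetric processes become tractable.
```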
403

System Support for End-to-End Performance Management

Agarwala, Sandip 09 July 2007 (has links)
This dissertation introduces, implements, and evaluates the novel concept of "Service Paths", which are system-level abstractions that capture and describe the dynamic dependencies between the different components of a distributed enterprise application. Service paths are dynamic because they capture the natural interactions between application services dynamically composed to offer some desired end-user functionality. Service paths are distributed because such sets of services run on networked machines in distributed enterprise data centers. Service paths cross multiple levels of abstraction because they link end-user application components, like web browsers, with system services, like HTTP, providing communications, and with embedded services, like hardware-supported data encryption. Service paths are system-level abstractions that are created without end-user, application, or middleware input, but despite this, they are able to capture application-relevant performance metrics, including end-to-end latencies for client requests and the contributions to these latencies from application-level processes and from software/hardware resources like protocol stacks or network devices. Beyond conceiving of service paths and demonstrating their utility, this thesis makes three concrete technical contributions. First, we propose a set of signal analysis techniques called "E2Eprof" that identify the service paths taken by different request classes across a distributed IT infrastructure and the time spent in each such path. E2Eprof uses a novel algorithm called "pathmap" that computes the correlation between the message arrival and departure timestamps at each participating node and detects dependencies among them. A second contribution is a system-level monitoring toolkit called "SysProf", which captures monitoring information at different levels of granularity, ranging from tracking the system-level activities triggered by a single system call, to capturing the client-server interactions associated with a service path, to characterizing the server resources consumed by sets of clients or client behaviors. The third contribution of the thesis is a publish-subscribe based monitoring data delivery framework called "QMON". QMON offers high levels of predictability for service delivery and supports utility-aware monitoring, while also being able to differentiate between different levels of service for monitoring, corresponding to the different classes of SLAs maintained for applications.
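The abstract only names the correlation idea behind pathmap; the sketch below is a minimal reconstruction of that idea under stated assumptions, not the dissertation's algorithm. It assumes the arrival and departure timestamps at two nodes have already been binned into per-interval message counts, and searches for the lag that maximizes their Pearson correlation; a strong peak suggests a dependency and estimates that hop's latency contribution.

```python
# Minimal sketch of a pathmap-style dependency test (illustrative, not the
# dissertation's algorithm): find the lag at which one node's departure-rate
# series best correlates with another node's arrival-rate series.

def correlation(xs, ys):
    """Pearson correlation of two equal-length count series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def best_lag(arrivals, departures, max_lag):
    """Score each candidate lag; the winner estimates this hop's share of
    the end-to-end latency (in bins)."""
    return max((correlation(arrivals[:len(arrivals) - k], departures[k:]), k)
               for k in range(max_lag + 1))

# Toy data: node B's departures echo node A's arrivals three bins later.
arr = [0, 5, 1, 0, 6, 2, 0, 0, 4, 1, 0, 5, 0, 0, 3]
dep = [0, 0, 0, 0, 5, 1, 0, 6, 2, 0, 0, 4, 1, 0, 5]
score, lag = best_lag(arr, dep, max_lag=5)
print(f"dependency score {score:.2f} at lag {lag}")   # high score at lag 3
```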
404

Time Management In Partitioned Systems

Kodancha, A Hariprasad 10 1900 (has links)
Time management is one of the critical modules of safety-critical systems. Applications need strong assurance from the operating system that their hard real-time requirements are met. Partitioned systems have recently evolved as a means to protect safety-critical applications running on an avionics computer resource. Each partition has an application running strictly for a specified duration, and these applications use the CPU on a cyclic basis. Applications running on a real-time system request the services of time management in one way or another: an application may request a time-out while waiting for a resource, may voluntarily relinquish the CPU for some delay time, or may have a deadline before which it is expected to complete its tasks. These requests must be handled deterministically and accurately, with low overhead. Time management within an operating system uses hardware timers to service time-out requests. The three well-known approaches to handling timer requests are tick-based, one-shot, and firm timers. Traditionally, the tick-based approach, which relies on a periodic timer interrupt, has been the most popular, although it has poor accuracy. The one-shot timer approach provides better accuracy, as the timer interrupt can be generated exactly when required. Firm timers use soft timers in combination with a one-shot timer, wherein expired timers are checked at strategic points in the kernel. The thesis compares the performance of these three approaches for partitioned systems and provides insight into their suitability. It presents tick-based and one-shot timer algorithms that handle the time-out requests of real-time applications running on a partitioned system while adhering to time-partitioning rules, and compares the performance of these algorithms. It presents a one-shot timer algorithm named hierarchical multiple linked lists, and the experimental results prove that this algorithm performs better than other conventional linked-list-based one-shot timer algorithms. The thesis also analyzes the timing behavior of real-time applications on partitioned systems. The hard real-time system under consideration is an avionics system, and an indigenously developed ARINC-653-compliant real-time operating system has been used to measure the performance.
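To illustrate the tick-based/one-shot distinction, here is a minimal one-shot timer queue. The heap is a stand-in, since the abstract does not detail the thesis's hierarchical multiple-linked-lists structure, and the hardware hook is hypothetical; the point is that the timer is reprogrammed to fire exactly at the earliest pending expiry rather than on every tick.

```python
# A minimal one-shot timer queue (illustrative; the thesis's "hierarchical
# multiple linked lists" structure is not detailed in the abstract, and
# program_hw_timer is a hypothetical platform hook). Unlike a tick-based
# design, which polls for expiries on every periodic interrupt, the hardware
# timer here is armed for exactly the earliest pending expiry.

import heapq

class OneShotTimerQueue:
    def __init__(self, program_hw_timer):
        self._heap = []                   # entries: (expiry, seq, callback)
        self._seq = 0                     # tie-breaker for equal expiries
        self._program = program_hw_timer

    def add(self, expiry, callback):
        need_rearm = not self._heap or expiry < self._heap[0][0]
        heapq.heappush(self._heap, (expiry, self._seq, callback))
        self._seq += 1
        if need_rearm:                    # reprogram only if deadline moved up
            self._program(expiry)

    def on_hw_interrupt(self, now):
        """Fire all due timers, then arm the timer for the next expiry."""
        while self._heap and self._heap[0][0] <= now:
            _, _, cb = heapq.heappop(self._heap)
            cb()
        if self._heap:
            self._program(self._heap[0][0])

q = OneShotTimerQueue(program_hw_timer=lambda t: print(f"arm hw timer @ {t}"))
q.add(5.0, lambda: print("partition A time-out"))   # arms @ 5.0
q.add(2.0, lambda: print("task delay elapsed"))     # re-arms @ 2.0
q.on_hw_interrupt(now=2.0)                          # fires, then arms @ 5.0
```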
405

Efficient end-to-end monitoring for fault management in distributed systems

Feng, Dawei 27 March 2014 (has links) (PDF)
In this dissertation, we present our work on fault management in distributed systems, with motivating application roots in monitoring faults and abrupt changes in large computing systems like the grid and the cloud. Instead of building complete a priori knowledge of the software and hardware infrastructures, as in conventional detection or diagnosis methods, we propose to use appropriate techniques to perform end-to-end monitoring for such large-scale systems, leaving the inaccessible details of the involved components in a black box. For the fault monitoring of a distributed system, we first model this probe-based application as a static collaborative prediction (CP) task, and experimentally demonstrate the effectiveness of CP methods by using the max-margin matrix factorization method. We further introduce active learning to the CP framework and exhibit its critical advantage in dealing with highly imbalanced data, which is especially useful for identifying the minority fault class. We then extend static fault monitoring to the sequential case by proposing the sequential matrix factorization (SMF) method. SMF takes a sequence of partially observed matrices as input and produces predictions with information from both the current and historical time windows. Active learning is also employed in SMF, so that highly imbalanced data can be handled properly. In addition to the sequential methods, smoothing the estimation sequence has been shown to be a practically useful trick for enhancing sequential prediction performance. Since the stationarity assumption employed in static and sequential fault monitoring becomes unrealistic in the presence of abrupt changes, we propose a semi-supervised online change detection (SSOCD) framework to detect intended changes in time-series data. In this way, the static model of the system can be recomputed once an abrupt change is detected. In SSOCD, an unsupervised offline method is proposed to analyze a sample data series. The change points thus detected are used to train a supervised online model, which gives an online decision about whether a change is present in the arriving data sequence. State-of-the-art change detection methods are employed to demonstrate the usefulness of the framework. All presented work is verified on real-world datasets. Specifically, the fault monitoring experiments are conducted on a dataset collected from the Biomed grid infrastructure within the European Grid Initiative, and the abrupt change detection framework is verified on a dataset concerning the performance change of an online site with a large amount of traffic.
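The collaborative-prediction idea can be sketched in a few lines. The code below uses plain squared-loss matrix factorization rather than the dissertation's max-margin method, with made-up toy data: probes and nodes get low-rank latent factors fitted only on observed outcomes, and the model then predicts the outcome of a probe that was never sent.

```python
# Collaborative prediction for probe-based monitoring, sketched with plain
# squared-loss matrix factorization (the dissertation uses a max-margin
# method; data and parameters here are illustrative). Outcome matrix entries:
# +1 = probe succeeded, -1 = probe observed a fault, missing = probe not sent.

import random

def factorize(observed, n_rows, n_cols, rank=2, lr=0.05, reg=0.02, epochs=500):
    """SGD on observed entries only; returns latent factors U (probes) and
    V (nodes) such that U[i] . V[j] approximates outcome (i, j)."""
    rnd = random.Random(0)
    U = [[rnd.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_rows)]
    V = [[rnd.gauss(0, 0.1) for _ in range(rank)] for _ in range(n_cols)]
    for _ in range(epochs):
        for (i, j), y in observed.items():
            err = y - sum(U[i][k] * V[j][k] for k in range(rank))
            for k in range(rank):
                u, v = U[i][k], V[j][k]
                U[i][k] += lr * (err * v - reg * u)
                V[j][k] += lr * (err * u - reg * v)
    return U, V

# Toy 3x4 matrix: probes 0 and 1 behave alike; probe 0 saw node 2 fail, and
# probe 1 was never sent to node 2 -- the factorization fills that gap.
obs = {(0, 0): 1, (0, 1): 1, (0, 2): -1, (0, 3): 1,
       (1, 0): 1, (1, 1): 1,             (1, 3): 1,
       (2, 0): 1, (2, 1): 1,             (2, 3): 1}
U, V = factorize(obs, n_rows=3, n_cols=4)
pred = sum(U[1][k] * V[2][k] for k in range(2))
print(f"predicted outcome for probe 1 on node 2: {pred:+.2f}")  # negative
```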
406

Extensible Networked-storage Virtualization with Metadata Management at the Block Level

Flouris, Michail D. 24 September 2009 (has links)
Increased scaling costs and a lack of desired features are driving the evolution of high-performance storage systems from centralized architectures and specialized hardware to decentralized, commodity storage clusters. Existing systems try to address storage cost and management issues at the filesystem level. Besides dictating the use of a specific filesystem, however, this approach leads to increased complexity and load imbalance towards the file-server side, which in turn increase the cost to scale. In this thesis, we examine these problems at the block level. This approach has several advantages, such as transparency, cost-efficiency, better resource utilization, simplicity, and easier management. First, we explore the mechanisms, the merits, and the overheads associated with advanced metadata-intensive functionality at the block level by providing versioning at the block level. We find that block-level versioning has low overhead and offers transparency and simplicity advantages over filesystem-based approaches. Second, we study the problem of providing the extensibility required by the diverse and changing needs of applications that may share a single storage system. We provide support for (i) adding desired functions as block-level extensions, and (ii) flexibly combining them to create modular I/O hierarchies. In this direction, we design, implement, and evaluate an extensible block-level storage virtualization framework, Violin, with support for metadata-intensive functions. Extending Violin, we build Orchestra, an extensible framework for cluster storage virtualization and scalable storage sharing at the block level. We show that Orchestra's enhanced block interface can substantially simplify the design of higher-level storage services, such as cluster filesystems, while remaining scalable. Finally, we consider the problem of consistency and availability in decentralized commodity clusters. We propose RIBD, a novel storage system that provides support for handling both data and metadata consistency issues at the block layer. RIBD uses the notion of consistency intervals (CIs) to provide fine-grain consistency semantics on sequences of block-level operations by means of a lightweight transactional mechanism. RIBD relies on Orchestra's virtualization mechanisms and uses a roll-back recovery mechanism based on low-overhead block-level versioning. We evaluate RIBD on a cluster of 24 nodes and find that it performs comparably to two popular cluster filesystems, PVFS and GFS, while offering stronger consistency guarantees.
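A copy-on-write design is one way to get the low-overhead block-level versioning described above; the sketch below is a minimal illustration of that idea under assumed semantics, not code from Violin, Orchestra, or RIBD. A snapshot is O(1) because a new version initially shares every block with its parent, and a block is copied only on the first write after the snapshot.

```python
# Minimal copy-on-write block-level versioning (illustrative; not code from
# Violin/Orchestra/RIBD). Each version stores only the blocks written since
# the previous snapshot; reads walk back through the version chain.

class VersionedBlockDevice:
    BLOCK_SIZE = 512

    def __init__(self, n_blocks):
        self.n_blocks = n_blocks
        self.versions = [{}]              # version -> {block_no: bytes}
        self.live = 0                     # index of the writable version

    def snapshot(self):
        """O(1): freeze the live version and start a new, empty overlay."""
        frozen = self.live
        self.versions.append({})
        self.live = len(self.versions) - 1
        return frozen

    def write(self, block_no, data):
        assert 0 <= block_no < self.n_blocks
        self.versions[self.live][block_no] = data   # copy-on-write

    def read(self, block_no, version=None):
        v = self.live if version is None else version
        while v >= 0:                     # fall back to older versions
            if block_no in self.versions[v]:
                return self.versions[v][block_no]
            v -= 1
        return b"\x00" * self.BLOCK_SIZE  # never-written blocks read as zeros

dev = VersionedBlockDevice(n_blocks=1024)
dev.write(7, b"v1 data")
snap = dev.snapshot()
dev.write(7, b"v2 data")                  # lands in the new overlay only
print(dev.read(7))                        # b'v2 data' (live view)
print(dev.read(7, version=snap))          # b'v1 data' (frozen snapshot)
```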
407

Improving the Quality of Error-Handling Code in Systems Software Using Function-Local Information

Saha, Suman 25 March 2013 (has links) (PDF)
In C, a classic strategy for implementing error-handling code is to follow each operation that may produce an error with a conditional that tests whether the operation returned an error. This basic strategy, however, is error-prone: it is common to forget required cleanup operations, and to forget to update existing error-handling code when the function is extended with new operations. Moreover, a significant portion of the code often has to be duplicated. A programming style, the goto strategy, can reduce some of these difficulties. To improve the structure of error-handling code in systems software, we define an algorithm that transforms error-handling code implemented with the basic strategy into error-handling code that uses the goto strategy. Even when error-handling code is well structured, managing and releasing allocated resources remains a problem for the robustness of systems code. In this thesis, we propose a microscopic resource-release omission detection algorithm, based on a mostly intra-procedural, flow- and path-sensitive analysis that targets and exploits the properties of error-handling code. Our algorithm is resistant to false positives in the sets of resource-acquisition and release operations, which yields a low false-positive rate in the reports produced by the tool while remaining scalable.
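The basic-versus-goto contrast is easy to see in code. The thesis targets C, where the goto strategy jumps to cleanup labels at the end of a function; the sketch below shows the same contrast as a Python analogy (repeated per-path cleanup versus cleanup registered once) and is not the thesis's transformation algorithm.

```python
# The basic strategy versus centralised cleanup, as a Python analogy to C's
# "goto strategy" (illustrative; not the thesis's transformation algorithm).
from contextlib import ExitStack

def open_resource(name, fail=False):
    if fail:
        raise IOError(name)
    print(f"acquired {name}")
    return name

def release(res):
    print(f"released {res}")

def transfer_basic():
    # Basic strategy: every error path repeats all prior cleanups -- the
    # duplication that makes omissions likely as the function grows.
    a = open_resource("lock")
    try:
        b = open_resource("buffer")
    except IOError:
        release(a)                        # duplicated cleanup
        return None
    try:
        c = open_resource("device", fail=True)
    except IOError:
        release(b)
        release(a)                        # duplicated again, easy to forget
        return None
    release(c); release(b); release(a)
    return "ok"

def transfer_goto_style():
    # Centralised cleanup: each resource is registered once and released in
    # reverse order on *any* exit, like C's cleanup labels after a goto.
    with ExitStack() as stack:
        a = open_resource("lock");   stack.callback(release, a)
        b = open_resource("buffer"); stack.callback(release, b)
        c = open_resource("device", fail=True)   # raises; a and b released
        return "ok"

transfer_basic()
try:
    transfer_goto_style()
except IOError:
    pass                                  # cleanup already ran
```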
408

Optimisation of Performance Metrics of Embedded Hard Real-Time Systems using Software/Hardware Parallelism

Paolillo, Antonio 17 October 2018 (has links)
Nowadays, embedded systems are part of our daily lives. Some of these systems are called safety-critical and have strong requirements in terms of safety and reliability. Additionally, these systems must have long autonomy, good performance, and minimal costs. Finally, these systems must exhibit predictable behaviour and provide their results within firm deadlines. When these different constraints are combined in the requirement specifications of a modern product, classic design techniques making use of single-core platforms are not sufficient. Academic research in the field of real-time embedded systems has produced numerous techniques to exploit the capabilities of modern hardware platforms. These techniques are often based on using the parallelism inherently present in modern hardware to improve system performance while reducing the platform's power dissipation. However, very few systems on the market use these state-of-the-art techniques, and few of these techniques have been validated in practical experiments. In this thesis, we study operating-system-level techniques for exploiting hardware parallelism through the implementation of parallel software, in order to boost the performance of target applications and to reduce overall system energy consumption while satisfying strict application timing requirements. We detail the theoretical foundations of the ideas applied in the dissertation and validate these ideas through experimental work. To this end, we use a new real-time operating system kernel written in the context of the creation of a spin-off of the Université libre de Bruxelles. Our experiments are based on executing applications on this operating system, which runs on a real-world embedded platform. Our results show that, compared to traditional design techniques, using parallel and power-aware scheduling techniques to exploit hardware and software parallelism allows embedded applications to execute with substantial savings in energy consumption. We also present ongoing and future research that exploits the capabilities of recent embedded platforms combining multi-core processors and reconfigurable hardware logic, allowing further improvements in performance and energy consumption. / Doctorat en Sciences
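The energy argument can be made concrete with a standard back-of-the-envelope CMOS model (an assumption of this sketch, not figures from the thesis): if voltage scales with frequency, dynamic power grows roughly as f³, so spreading perfectly parallel work over m cores at frequency f/m meets the same deadline at a fraction of the dynamic energy.

```python
# A back-of-the-envelope sketch of the energy argument (standard simplified
# CMOS model, not figures from the thesis): with voltage scaled along with
# frequency, dynamic power grows roughly with f^3, so spreading W cycles of
# perfectly parallel work over m cores at frequency f/m meets the same
# deadline while spending far less dynamic energy.

def dynamic_energy(work_cycles, m_cores, freq, k=1.0):
    """Energy = m * P(f) * t, with P(f) = k * f**3 and t = W / (m * f)."""
    t = work_cycles / (m_cores * freq)
    return m_cores * k * freq ** 3 * t

W = 1e9                     # cycles of (assumed fully parallel) work
f_max = 1.0                 # normalised maximum frequency
baseline = dynamic_energy(W, 1, f_max)
for m in (1, 2, 4):
    e = dynamic_energy(W, m, f_max / m)   # same completion time W / f_max
    print(f"{m} core(s) @ f/{m}: energy = {e / baseline:.4f}x")
# -> 1x, 0.25x, 0.0625x: energy falls as 1/m^2 under this idealised model;
#    static power, memory contention and imperfect parallelism erode this.
```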
409

Towards a model for teaching distributed computing in a distance-based educational environment

Le Roux, Petra 02 1900 (has links)
Several technologies and languages exist for the development and implementation of distributed systems. Furthermore, several models for teaching computer programming and teaching programming in a distance-based educational environment exist. Limited literature, however, is available on models for teaching distributed computing in a distance-based educational environment. The focus of this study is to examine how distributed computing should be taught in a distance-based educational environment so as to ensure effective and quality learning for students. The required effectiveness and quality should be comparable to those for students exposed to laboratories, as commonly found in residential universities. This leads to an investigation of the factors that contribute to the success of teaching distributed computing and how these factors can be integrated into a distance-based teaching model. The study consisted of a literature study, followed by a comparative study of available tools to aid in the learning and teaching of distributed computing in a distance-based educational environment. A model to accomplish this teaching and learning is then proposed and implemented. The findings of the study highlight the requirements and challenges that a student of distributed computing in a distance-based educational environment faces and emphasises how the proposed model can address these challenges. This study employed qualitative research, as opposed to quantitative research, as qualitative research methods are designed to help researchers to understand people and the social and cultural contexts within which they live. The research methods employed are design research, since an artefact is created, and a case study, since “how” and “why” questions need to be answered. Data collection was done through a survey. Each method was evaluated via its own well-established evaluation methods, since evaluation is a crucial component of the research process. / Computing / M. Sc. (Computer Science)
410

Bridging the gap between Privacy by Design and mobile systems by patterns

Sokolova, Karina 27 April 2016 (has links)
Nowadays, smartphones and smart tablets generate, receive, store, and transfer substantial quantities of data, providing services for all possible user needs through easily installable programs, also known as mobile applications. The many sensors integrated into a smartphone allow the device to collect very precise information about the owner and his environment at any time. This substantial flow of personal and business data becomes hard to manage. The "Privacy by Design" approach, with its seven privacy principles, holds that privacy can be integrated into any system from the software design stage. In Europe, the Data Protection Directive (Directive 95/46/EC) includes "Privacy by Design" principles. The new General Data Protection Regulation enforces privacy protection in the European Union, taking into account modern technologies such as mobile systems and making "Privacy by Design" not only a benefit for users but also a legal obligation for system designers and developers. The goal of this thesis is to propose pattern-oriented solutions to cope with mobile privacy problems, such as lack of transparency, lack of consent, poor security, and disregard for purpose limitation, thus giving mobile systems more Privacy by (re)Design.
