391 |
A New System Architecture for Heterogeneous Compute Units. Asmussen, Nils, 09 August 2019
The ongoing trend to more heterogeneous systems forces us to rethink the design of systems. In this work, I study a new system design that considers heterogeneous compute units (general-purpose cores with different instruction sets, DSPs, FPGAs, fixed-function accelerators, etc.) from the beginning instead of as an afterthought. The goal is to treat all compute units (CUs) as first-class citizens, enabling (1) isolation and secure communication between all types of CUs, (2) a direct interaction of all CUs, removing the conventional CPU from the critical path, and (3) access to operating system (OS) services such as file systems and network stacks for all CUs.
To study this system design, I use a hardware/software co-design based on two key ideas: 1) introduce a new hardware component next to each CU that the OS uses as the CUs' common interface, and 2) let the OS kernel control applications remotely from a different CU. The hardware component is called the data transfer unit (DTU) and offers the minimal set of features needed to reach the stated goals: secure message passing and memory access. The OS is called M³; it runs its kernel on a dedicated CU and the OS services and applications on the remaining CUs. The kernel is responsible for establishing DTU-based communication channels between services and applications. Once a channel has been set up, services and applications communicate directly without involving
the kernel. This approach makes it possible to support arbitrary CUs as the aforementioned first-class citizens, ranging from fixed-function accelerators to complex general-purpose cores.
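To make the channel-based communication concrete, the following C sketch models the pattern under stated assumptions: the endpoint structure, the credit scheme, and all function names are invented for illustration and are not M³'s actual DTU interface; only the idea of kernel-configured endpoints and direct, kernel-free sends reflects the abstract.

/* Hypothetical sketch of DTU-style message passing (not the real M3 API).
 * The kernel configures a send endpoint once; afterwards the application
 * talks to a service on another CU directly through its DTU. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MSG_SIZE 64

typedef struct {            /* one DTU send endpoint, as set up by the kernel */
    int      target_cu;     /* CU that owns the paired receive endpoint       */
    uint64_t credits;       /* messages we may still send before replenishing */
} send_ep_t;

typedef struct { char payload[MSG_SIZE]; } msg_t;

/* Pretend hardware operation: push a message over the interconnect. */
static int dtu_send(send_ep_t *ep, const msg_t *m) {
    if (ep->credits == 0) return -1;   /* flow control enforced in hardware */
    ep->credits--;
    printf("to CU %d: %s\n", ep->target_cu, m->payload);
    return 0;
}

int main(void) {
    /* In M3 the kernel would establish this channel; here it is hard-coded. */
    send_ep_t to_fs = { .target_cu = 3, .credits = 4 };
    msg_t req;
    memset(&req, 0, sizeof req);
    strncpy(req.payload, "open /data/log.txt", MSG_SIZE - 1);
    return dtu_send(&to_fs, &req) == 0 ? 0 : 1;   /* no kernel involved here */
}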
|
392 |
System-wide Performance Analysis for Virtualization. Jensen, Deron Eugene, 13 June 2014
With the current trend in cloud computing and virtualization, more organizations are moving their systems from a physical host to a virtual server.
Although this can significantly reduce hardware, power, and administration costs, it can increase the cost of analyzing performance problems. Virtualization imposes an initial performance overhead, and as more virtual machines are added to a physical host, the interference between guest machines increases. When this interference occurs, a virtualized guest application may not perform as expected. The guest OS receives little or no information about the interference, and current performance tools inside the guest are unable to expose it.
We examine the interference demonstrated in previous research and relate it to existing tools and research in root cause analysis. We show that virtualization introduces additional layers which need to be analyzed, and design a framework to determine whether degradation originates in an external virtualization layer. Additionally, we build a virtualization test suite with Xen and PostgreSQL and run multiple tests to create I/O interference. We show that our method can distinguish between a problem caused by interference from external systems and a problem arising within the virtual guest.
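One guest-visible signal that hints at interference from outside the virtual machine is the CPU "steal" counter Linux exposes in /proc/stat. The thesis's framework is not reproduced here; the following C sketch is only a minimal, assumed example of sampling such a signal from inside a guest.

/* Minimal guest-side probe: read the cumulative CPU "steal" time from
 * /proc/stat (Linux). Rising steal time means the hypervisor is running
 * other work on our physical CPU -- one coarse hint of external
 * interference, not the detection framework described in the thesis. */
#include <stdio.h>

static long long read_steal(void) {
    FILE *f = fopen("/proc/stat", "r");
    if (!f) return -1;
    long long v[10] = {0};
    /* first line: cpu user nice system idle iowait irq softirq steal ... */
    int n = fscanf(f, "cpu %lld %lld %lld %lld %lld %lld %lld %lld %lld %lld",
                   &v[0], &v[1], &v[2], &v[3], &v[4],
                   &v[5], &v[6], &v[7], &v[8], &v[9]);
    fclose(f);
    return (n >= 8) ? v[7] : -1;               /* v[7] is the steal counter */
}

int main(void) {
    long long steal = read_steal();
    if (steal < 0) { fprintf(stderr, "cannot read /proc/stat\n"); return 1; }
    printf("cumulative steal ticks: %lld\n", steal);
    return 0;
}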
|
393 |
Towards Manifesting Reliability Issues In Modern Computer Systems. Zheng, Mai, 02 September 2015
No description available.
|
394 |
Memory Turbo Boost: Architectural Support for Using Unused Memory for Memory Replication to Boost Server Memory Performance. Zhang, Da, 28 June 2023
A significant portion of the memory in servers today is often unused. Our large-scale study of HPC systems finds that more than half of the total memory in active nodes running user jobs is unused 88% of the time. Google and Azure Cloud studies also report that unused memory accounts for 40% of the total memory in their servers, on average.
Leaving so much memory unused is wasteful. To address this problem, we note that in the context of CPUs, Turbo Boost can turn off the unused cores to boost the performance of in-use cores. However, there is no equivalent technology in the context of memory; no matter how much memory is unused, the performance of in-use memory remains the same.
This dissertation explores architectural techniques that utilize the unused memory to boost the performance of in-use memory, and refers to them collectively as Memory Turbo Boost. It explores how to turbo boost memory performance through memory replication: specifically, how to store the replicas efficiently in the unused memory, and multiple architectural techniques that use the replicas to enhance memory system performance.
Performance simulations show that Memory Turbo Boost can improve node-level performance by 18% on average across a wide spectrum of workloads. Our system-wide simulations show that applying Memory Turbo Boost to an HPC system provides a 1.4x average speedup in job turnaround time. / Doctor of Philosophy / Today's servers often have a significant portion of their memory unused. Our large-scale study of HPC systems finds that more than half of the total memory of an HPC server is unused most of the time; Google and Azure Cloud studies find that 40% of the total memory in their servers is often unused. Today's servers usually have hundreds of gigabytes to terabytes of memory, so 40% unused memory amounts to tens to hundreds of gigabytes per server.
Leaving so much memory unused is wasteful. To address this problem, I note that other types of hardware already have techniques that leverage unused resources to improve the performance of in-use resources. For example, CPU Turbo Boost can turn off unused cores to boost the performance of in-use cores, and modern SSDs can use unused space to switch Multi-Level Cell blocks to Single-Level Cell blocks to boost performance. However, there is no equivalent technology for memory; no matter how much memory is unused, the performance of in-use memory remains the same.
This dissertation explores techniques that utilize the unused memory to boost the performance of in-use memory and refers to them collectively as Memory Turbo Boost. Performance evaluations show that Memory Turbo Boost can provide up to an 18% average performance improvement.
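As a rough illustration of the replication idea, the toy C model below is entirely an assumption on my part (the dissertation's techniques are architectural and evaluated in simulation): a hot region is replicated onto a second, otherwise-unused memory channel, and each read is served by whichever channel has the shorter queue.

/* Toy queueing model of memory replication: with a replica on a second,
 * otherwise-unused channel, reads can be served by the less-loaded channel,
 * which cuts queuing delay. One read arrives per step; each channel drains a
 * queued request per step with ~75% probability. Illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define READS 100000

static double avg_queue_seen(int replicated, unsigned seed) {
    srand(seed);
    long q[2] = {0, 0}, waited = 0;
    for (int i = 0; i < READS; i++) {
        int pick = 0;
        if (replicated && q[1] < q[0]) pick = 1;  /* use the lighter replica  */
        waited += q[pick];                        /* queue seen on arrival    */
        q[pick]++;
        for (int c = 0; c < 2; c++)               /* channels drain over time */
            if (q[c] > 0 && rand() % 4 != 0) q[c]--;
    }
    return (double)waited / READS;
}

int main(void) {
    printf("avg queue seen without replica: %.2f\n", avg_queue_seen(0, 42));
    printf("avg queue seen with replica   : %.2f\n", avg_queue_seen(1, 42));
    return 0;
}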
|
395 |
Communication and Control in Power Electronics Systems. Mitrovic, Vladimir, 17 December 2021
The demands of modern life have changed the way power electronics systems work. For instance, the grid must provide not only the service of delivering electrical energy but also the communication that lets customers interact with it and become producers of electrical energy themselves. Thus, the smart grid has come into existence. A consequence of the smart grid is that consumers can be "smart" as well. The most obvious consumers are households, so houses also have to be smart and must be equipped with various power electronics devices for producing and managing electrical energy. Again, all those devices have to communicate somehow and provide data for managing electrical energy in the house. Zooming in further, novel, state-of-the-art measurement equipment can be built from different power electronics devices, and communication among them is necessary for proper operation. Zooming in even further, communication among individual power electronics devices (such as converters) can offer benefits such as flexibility, abstraction, and modularity.
This thesis provides insight into different communication techniques and protocols used in power electronics systems. A top-down approach presents three different levels of communication used in real-life projects with all the challenges they bring, starting with the smart house, followed by the state-of-the-art impedance measurement unit, and finalizing with internal power electronics building block (PEBB) communication.
In the case of a smart house, where the house is equipped with solar panels, charge controllers, batteries, and inverters, communication allows interoperation between different
elements of the power electronics system, enabling energy management. Results demonstrate the operation of the system and of the energy management algorithm. A house of this type won first prize at an international competition in which energy management was one of the disciplines.
The impedance measurement unit consists of different power electronics devices. In this case, too, communication between the devices enables the operation of the unit. The communication techniques used are presented together with measurement results.
Finally, intra-PEBB communication is presented as an approach for interaction among the different elements inside the PEBB, such as the controller, GDs, sensors, and actuators. A real-time communication protocol is described and developed, together with the challenges it raises. This approach is shown to enable communication and synchronization among the different nodes inside the PEBB. Communication makes all internal elements of the PEBB transparent outside the PEBB, in the sense that data gathered from them can be reused anywhere else in the system. It also enables the development of distributed event- (time-) driven control, hardware and software abstraction, high modularity, and flexibility. A very important aspect of intra-PEBB communication is synchronization; a simple technique of sharing a clock among the parts of a 6 kV PEBB is shown. / M.S. / This thesis provides insight into different communication techniques and protocols used in power electronics systems. A top-down approach presents three levels of communication used in real-life projects, together with the challenges they bring. It starts with the smart house and a custom device designed and developed as a communication interface among power electronics devices from different vendors, such as charge controllers and inverters; the device not only communicates but also provides a platform for developing the energy management algorithms used to make houses grid zero, if not grid positive.
Aside from the smart house, this thesis describes communication protocols and techniques used in the impedance measurement unit (IMU). This complex measurement device provides valuable and accurate impedance measurements and consists of different power electronics devices that need to communicate.
Finally, at the power electronics building block (PEBB) level, a real-time communication protocol is described, along with its challenges. The developed protocol provides communication and synchronization among the different nodes inside the PEBB, such as GDs, sensors, and actuators. This intra-PEBB communication and synchronization, combined with inter-PEBB communication and synchronization, provides the foundation for developing truly distributed event- (time-) driven control as well as hardware and software abstraction.
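As an illustration of what time-synchronized intra-PEBB messaging might look like, the C sketch below defines a possible frame layout in which each node stamps its data against the shared PEBB clock. The field names, sizes, and CRC choice are assumptions made for the example, not the protocol developed in the thesis.

/* Illustrative frame for time-synchronized intra-PEBB messaging: every node
 * (gate driver, sensor, actuator) stamps its sample against the shared PEBB
 * clock so the controller can align data from all nodes. Assumed layout. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint8_t  node_id;       /* which node sent this frame                   */
    uint8_t  msg_type;      /* e.g. 0 = measurement, 1 = command, 2 = fault */
    uint32_t timestamp_ns;  /* capture time on the shared PEBB clock        */
    int32_t  value;         /* raw measurement or command argument          */
    uint16_t crc;           /* integrity check over the preceding fields    */
} pebb_frame_t;

static uint16_t crc16(const uint8_t *d, size_t len) {  /* CRC-16/IBM style  */
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= d[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (uint16_t)((crc >> 1) ^ 0xA001)
                            : (uint16_t)(crc >> 1);
    }
    return crc;
}

int main(void) {
    pebb_frame_t f;
    memset(&f, 0, sizeof f);                 /* zero padding bytes as well   */
    f.node_id = 2;                           /* e.g. a gate-driver node      */
    f.msg_type = 0;
    f.timestamp_ns = 125000;                 /* sampled on the shared clock  */
    f.value = 398;                           /* e.g. a dc-link voltage count */
    f.crc = crc16((const uint8_t *)&f, offsetof(pebb_frame_t, crc));
    printf("node %u sample %d at t=%u ns (crc=0x%04x)\n",
           (unsigned)f.node_id, (int)f.value,
           (unsigned)f.timestamp_ns, (unsigned)f.crc);
    return 0;
}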
|
396 |
Исследование возможностей разработки мобильного приложения под российскую операционную систему : магистерская диссертация / Research into the possibilities of developing a mobile application for a Russian operating system. Kazakov, V. V. (Казаков, В. В.), January 2024
The subject of the research is the development of a mobile application for interaction with Directum Rx for a Russian operating system. The purpose of the work is to analyze the possibilities of implementing a mobile application for interaction with Directum Rx on a Russian OS. Research methods: a comparative analysis to choose a Russian mobile operating system; a comparative analysis to choose a set of tools for developing a mobile application for that operating system; and a literature analysis to study the required functionality of the application, to design the architecture of the mobile application, and to study the documentation on developing mobile applications for the Russian mobile operating system. The results of the work are a study of Russian mobile operating systems, an analysis of the tools for developing application software for the chosen Russian mobile OS, a designed architecture of a mobile application for that operating system, and automation of the development process. The results of the work were put into use at the company OOO Starkov Group.
|
397 |
Towards a model for teaching distributed computing in a distance-based educational environment. Le Roux, Petra, 02 1900
Several technologies and languages exist for the development and implementation of distributed systems. Furthermore, several models for teaching computer programming and for teaching programming in a distance-based educational environment exist. Limited literature, however, is available on models for teaching distributed computing in a distance-based educational environment. The focus of this study is to examine how distributed computing should be taught in a distance-based educational environment so as to ensure effective and quality learning for students. The required effectiveness and quality should be comparable to those achieved by students with access to laboratories, as commonly found in residential universities. This leads to an investigation of the factors that contribute to the success of teaching distributed computing and of how these factors can be integrated into a distance-based teaching model. The study consisted of a literature study, followed by a comparative study of available tools to aid in the learning and teaching of distributed computing in a distance-based educational environment. A model to accomplish this teaching and learning was then proposed and implemented. The findings of the study highlight the requirements and challenges that a student of distributed computing in a distance-based educational environment faces and emphasise how the proposed model can address these challenges. This study employed qualitative research, as opposed to quantitative research, as qualitative research methods are designed to help researchers understand people and the social and cultural contexts within which they live. The research methods employed are design research, since an artefact is created, and a case study, since "how" and "why" questions need to be answered. Data collection was done through a survey. Each method was evaluated via its own well-established evaluation methods, since evaluation is a crucial component of the research process. / Computing / M. Sc. (Computer Science)
|
398 |
Extensible Networked-storage Virtualization with Metadata Management at the Block Level. Flouris, Michail D., 24 September 2009
Increased scaling costs and a lack of desired features are leading to the evolution of high-performance storage systems from centralized architectures and specialized hardware to decentralized, commodity storage clusters. Existing systems try to address storage cost and management issues at the filesystem level. Besides dictating the use of a specific filesystem, however, this approach leads to increased complexity and load imbalance towards the file-server side, which in turn increases the cost of scaling.
In this thesis, we examine these problems at the block level. This approach has several advantages, such as transparency, cost-efficiency, better resource utilization, simplicity, and easier management.
First of all, we explore the mechanisms, the merits, and the overheads associated with advanced metadata-intensive functionality at the block level by providing block-level versioning. We find that block-level versioning has low overhead and offers transparency and simplicity advantages over filesystem-based approaches.
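The copy-on-write mechanics behind block-level versioning can be sketched in a few lines of C. The sketch below makes simplifying assumptions (an in-memory block map and a bump allocator) and is not the implementation described in the thesis.

/* Minimal copy-on-write sketch of block-level versioning. Each logical block
 * records the physical block and the version in which it was last written;
 * writing after a snapshot allocates a fresh physical block instead of
 * overwriting data that a snapshot still references. Illustrative only. */
#include <stdio.h>

#define NBLOCKS 8

typedef struct { int phys; int version; } map_entry_t;

static map_entry_t map[NBLOCKS];
static int cur_version = 0;       /* bumped by every snapshot               */
static int next_phys   = NBLOCKS; /* trivial bump allocator for new blocks  */

static void snapshot(void) { cur_version++; }

static int write_block(int lblock) {
    if (map[lblock].version < cur_version) {   /* block shared with snapshot */
        map[lblock].phys    = next_phys++;     /* copy-on-write: remap       */
        map[lblock].version = cur_version;
    }
    return map[lblock].phys;                   /* where the new data lands   */
}

int main(void) {
    for (int i = 0; i < NBLOCKS; i++) map[i] = (map_entry_t){ i, 0 };
    printf("write block 3 -> phys %d\n", write_block(3)); /* in place        */
    snapshot();                                            /* version 1      */
    printf("write block 3 -> phys %d\n", write_block(3)); /* CoW, new block  */
    printf("write block 3 -> phys %d\n", write_block(3)); /* same version    */
    return 0;
}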
Secondly, we study the problem of providing the extensibility required by the diverse and changing needs of applications that may share a single storage system. We provide support for (i) adding desired functions as block-level extensions and (ii) flexibly combining them to create modular I/O hierarchies. In this direction, we design, implement, and evaluate an extensible block-level storage virtualization framework, Violin, with support for metadata-intensive functions. Extending Violin, we build Orchestra, an extensible framework for cluster storage virtualization and scalable storage sharing at the block level. We show that Orchestra's enhanced block interface can substantially simplify the design of higher-level storage services, such as cluster filesystems, while remaining scalable.
Finally, we consider the problem of consistency and availability in decentralized commodity clusters. We propose RIBD, a novel storage system that provides support for handling both data and metadata consistency issues at the block layer. RIBD uses the notion of consistency intervals (CIs) to provide fine-grained consistency semantics over sequences of block-level operations by means of a lightweight transactional mechanism. RIBD relies on Orchestra's virtualization mechanisms and uses a rollback recovery mechanism based on low-overhead block-level versioning. We evaluate RIBD on a cluster of 24 nodes and find that it performs comparably to two popular cluster filesystems, PVFS and GFS, while offering stronger consistency guarantees.
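A consistency interval can be pictured as a small transaction over block writes. The C sketch below uses invented function names (ci_begin, ci_commit, ci_abort) and an in-memory rollback copy to show the behaviour; RIBD's actual interface and its versioning-based recovery are not reproduced here.

/* Sketch of a consistency interval (CI): the block writes issued between
 * ci_begin() and ci_commit() take effect together, and ci_abort() rolls the
 * blocks back to the state captured when the interval began. Names invented. */
#include <stdio.h>
#include <string.h>

#define NBLOCKS 8

static int blocks[NBLOCKS];   /* current contents (one int per block)       */
static int saved[NBLOCKS];    /* snapshot taken when the interval started   */

static void ci_begin(void)  { memcpy(saved, blocks, sizeof blocks); }
static void ci_commit(void) { /* nothing to undo; the writes remain */ }
static void ci_abort(void)  { memcpy(blocks, saved, sizeof blocks); }

static void write_block(int b, int val) { blocks[b] = val; }

int main(void) {
    ci_begin();                 /* e.g. update a file's data and its metadata */
    write_block(2, 42);
    write_block(5, 7);
    ci_abort();                 /* simulate a failure before the commit       */
    printf("after abort : block 2 = %d, block 5 = %d\n", blocks[2], blocks[5]);

    ci_begin();
    write_block(2, 42);
    write_block(5, 7);
    ci_commit();
    printf("after commit: block 2 = %d, block 5 = %d\n", blocks[2], blocks[5]);
    return 0;
}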
|
399 |
Autonomic management in a distributed storage system. Tauber, Markus, January 2010
This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments, which were carried out in a local area test-bed. The experiments were conducted with components of one specific distributed storage system, but seek to be applicable to a wide range of such systems, in particular those exposed to varying conditions.

The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one specific configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration that was more suitable than any that could be set a priori. Furthermore, the system could react to a change in conditions by adopting a more appropriate configuration.

Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The motivation for both groups of experiments was to test management policies with the objective of avoiding unsatisfactory situations with respect to resource consumption and performance. Such unsatisfactory situations occur when either the P2P layer or the data retrieval mechanism is configured statically.

In a statically configured P2P system, two unsatisfactory situations can be identified. The first arises when the frequency with which P2P node states are verified is low and membership churn is high. The P2P node state becomes inaccurate due to the high membership churn, leading to errors during the routing process and a reduction in performance; in this situation it is desirable to increase the frequency to improve P2P state accuracy. The converse situation arises when the frequency is high and churn is low. In this situation network resources are used unnecessarily, which may also reduce performance, making it desirable to decrease the frequency.

In ASA's data retrieval mechanism, similar unsatisfactory situations can be identified with respect to the degree of concurrency (DOC). The DOC controls the eagerness with which multiple redundant replicas are retrieved. An unsatisfactory situation arises when the DOC is low and there is a large variation in the times taken to retrieve replicas. In this situation it is desirable to increase the DOC, because by retrieving more replicas in parallel a result can be returned to the user sooner. The converse situation arises when the DOC is high, there is little variation in retrieval time, and there is a network bottleneck close to the requesting client. In this situation it is desirable to decrease the DOC, since the low variation removes any benefit of parallel retrieval, and the bottleneck means that decreasing parallelism reduces both bandwidth consumption and elapsed time for the user.

The experimental evaluations of autonomic management show promising results and suggest several future research topics. These include optimisations of the managed mechanisms, alternative management policies, different evaluation methods, and the application of the developed management mechanisms to other facets of a distributed storage system. The findings of this thesis could be exploited in building other distributed storage systems that focus on harnessing storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions.
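The DOC adjustment described above lends itself to a simple threshold policy. The C sketch below is only an illustration under assumed metrics and thresholds; it is not the generic autonomic management framework or the actual policy developed in the thesis.

/* Illustrative DOC policy: raise the degree of concurrency when replica
 * retrieval times vary widely, lower it when they are uniform and the
 * bottleneck sits near the requesting client. Thresholds are assumptions. */
#include <stdio.h>

typedef struct {
    double mean_ms;      /* mean replica retrieval time                       */
    double stddev_ms;    /* spread of retrieval times                         */
    int    bottleneck;   /* 1 if congestion is close to the requesting client */
} metrics_t;

static int adjust_doc(int doc, const metrics_t *m) {
    double cov = m->mean_ms > 0.0 ? m->stddev_ms / m->mean_ms : 0.0;
    if (cov > 0.5 && doc < 8)                   /* high variation: fetching   */
        return doc + 1;                         /* more replicas helps        */
    if (cov < 0.1 && m->bottleneck && doc > 1)  /* uniform times + local      */
        return doc - 1;                         /* bottleneck: back off       */
    return doc;
}

int main(void) {
    metrics_t high_variation = { 80.0, 60.0, 0 };
    metrics_t uniform_times  = { 80.0,  4.0, 1 };
    printf("DOC 2 -> %d under high variation\n", adjust_doc(2, &high_variation));
    printf("DOC 4 -> %d under uniform times and a client-side bottleneck\n",
           adjust_doc(4, &uniform_times));
    return 0;
}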
|
400 |
Rétro-ingénierie des plateformes pour le déploiement des applications temps-réel / Reverse-engineering of platforms for the deployment of real-time applications. Mzid, Rania, 12 May 2014
The main purpose of this PhD is to contribute to the software development of real-time embedded systems. We define in this work a methodology named DRIM: Design Refinement toward Implementation Methodology. This methodology aims to guide the deployment of a real-time application onto different RTOSs while respecting MDE principles and ensuring that the timing properties are still met after deployment. The automation of DRIM shows its ability to detect non-implementable design models describing the real-time application for a particular RTOS, which makes it possible to reduce the time-to-market on the one hand and to guide the user towards the selection of an appropriate target RTOS on the other.
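The non-implementability check that DRIM automates can be pictured as matching a design model's required features against a description of the target RTOS. The C sketch below is a toy illustration with invented feature sets and names; it is not DRIM's model-driven tooling.

/* Toy feasibility check: flag a design model as non-implementable on a given
 * RTOS when a required feature is missing. Structures and sample data are
 * assumptions made for the example. */
#include <stdio.h>

typedef struct {
    int priority_levels;      /* distinct task priorities needed / offered   */
    int periodic_tasks;       /* native periodic task or timer support       */
    int priority_ceiling;     /* mutexes with the priority-ceiling protocol  */
} features_t;

static int implementable(const features_t *design, const features_t *rtos,
                         const char *name) {
    int ok = 1;
    if (design->priority_levels > rtos->priority_levels) {
        printf("%s: only %d priority levels, %d required\n",
               name, rtos->priority_levels, design->priority_levels);
        ok = 0;
    }
    if (design->periodic_tasks && !rtos->periodic_tasks) {
        printf("%s: no native periodic task support\n", name);
        ok = 0;
    }
    if (design->priority_ceiling && !rtos->priority_ceiling) {
        printf("%s: no priority-ceiling mutexes\n", name);
        ok = 0;
    }
    return ok;
}

int main(void) {
    features_t design = { 12, 1, 1 };     /* what the design model requires  */
    features_t rtos_a = { 32, 1, 1 };     /* hypothetical RTOS A             */
    features_t rtos_b = {  8, 1, 0 };     /* hypothetical RTOS B             */
    printf("deployable on RTOS A: %s\n",
           implementable(&design, &rtos_a, "RTOS A") ? "yes" : "no");
    printf("deployable on RTOS B: %s\n",
           implementable(&design, &rtos_b, "RTOS B") ? "yes" : "no");
    return 0;
}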
|