  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Specializing a general-purpose operating system

Raza, Ali 10 September 2024 (has links)
This thesis aims to address the growing disconnect between the goals general-purpose operating systems were designed to achieve and the requirements of some of today's new workloads and use cases. General-purpose operating systems multiplex system resources between multiple non-trusting workloads and users. They have generalized code paths, designed to support diverse applications, potentially running concurrently. This generality comes at a performance cost. In contrast, many modern data center workloads are often deployed separately in single-user, and often single-workload, virtual machines and require specialized behavior from the operating system for high-speed I/O. Unikernels, library operating systems, and systems that exploit kernel bypass mechanisms have been developed to provide high-speed I/O by being specialized to meet the needs of performance-critical workloads. These systems have demonstrated immense performance advantages over general-purpose operating systems but have yet to see widespread adoption. This is because, compared to general-purpose operating systems, they lack a battle-tested code base, a large developer community, wide application and hardware support, and a vast ecosystem of tools, utilities, etc.

This thesis explores a novel view of the design space: a generality-specialization spectrum. General-purpose operating systems like Linux lie at one end of this spectrum; they are willing to sacrifice performance to support a wide range of applications and a broad set of use cases. As we move towards the specialization end, different specializable systems like unikernels, library operating systems, and those that exploit kernel bypass mechanisms appear at different points, based on how much specialization a system enables and how much application and hardware compatibility it gives up compared to general-purpose operating systems.

Is it possible, at compile/configure time, to enable a system to move to different points on the generality-specialization spectrum depending on the needs of the workload? Any application would just work at the generality end, where the application and hardware compatibility and the ecosystem of the general-purpose operating system are preserved. Developers can then focus on optimizing only the performance-critical code paths, based on application requirements, to improve performance. With each new optimization added, the set of target applications would shrink. In other words, the system would be specialized for a class of applications, offering high performance for a potentially narrow set of use cases. If such a system could be designed, it would have the application and hardware compatibility and the ecosystem of general-purpose operating systems as a starting point. Based on the target application, select code paths of this system could then be incrementally optimized to improve performance, moving the system toward the specializable end of the spectrum. This would be different from previous specializable systems, which were designed to demonstrate huge performance advantages over general-purpose operating systems and then tried to retrofit application and hardware compatibility.

To explore the above question, this thesis proposes Unikernel Linux (UKL), which integrates optimizations explored by specializable systems into Linux. It starts at the general-purpose end of the spectrum and, by linking an application with the kernel, executing it in kernel mode, and replacing system calls with function calls, offers a modest performance advantage over Linux.
This base model of UKL supports most Linux applications (after recompiling and relinking) and most hardware. Further, this thesis explores optimizations common to specializable systems, e.g., faster transitions between application and kernel code, avoiding stack switches, run-to-completion modes, and bypassing the kernel TCP state machine to access low-level functions directly. These optimizations yield larger performance advantages over unmodified Linux but apply to a narrower set of workloads. The contributions of this thesis include: a novel approach to specialization, i.e., adding optimizations to a general-purpose operating system to move it along the generality-specialization spectrum; an existence proof that optimizations explored by specializable systems can be integrated into a general-purpose operating system without major changes to that system's invariants, assumptions, and code; a demonstration that the resulting system can be moved along the generality-specialization spectrum; and a demonstration that performance gains are possible.
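
The abstract's core mechanism, resolving system calls to ordinary function calls once the application is linked into the kernel image, can be illustrated with a small, hedged sketch. Nothing below is UKL code: uk_kernel_write() and CONFIG_UKL_BYPASS_SYSCALL are hypothetical names standing in for whatever in-kernel entry point and build option the real system uses; only the general link-time-redirection idea is taken from the abstract.

    /* Sketch only: when the application and kernel share one address space
     * and privilege level, a libc-level wrapper such as write() can resolve
     * at link time to a direct call instead of a trap instruction. */
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <sys/types.h>

    /* Hypothetical in-kernel entry point (not a real Linux symbol here). */
    ssize_t uk_kernel_write(unsigned int fd, const char *buf, size_t count);

    ssize_t write(int fd, const void *buf, size_t count)
    {
    #ifdef CONFIG_UKL_BYPASS_SYSCALL           /* hypothetical build option */
        /* Same address space and privilege level: a plain function call. */
        return uk_kernel_write((unsigned int)fd, buf, count);
    #else
        /* Fallback: the regular trap-based system call path. */
        return syscall(SYS_write, fd, buf, count);
    #endif
    }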
132

Secure Communication in a Multi-OS-Environment

Bathe, Shivraj Gajanan 02 February 2016 (has links) (PDF)
The current trend in the automotive industry is toward adopting multicore microcontrollers in Electronic Control Units (ECUs). Multicore microcontrollers make it possible to run a number of separate, dedicated operating systems on a single ECU. When two heterogeneous operating systems run in parallel in a multicore environment, the inter-OS communication between them becomes a key factor in overall performance. This thesis studies inter-OS communication based on shared memory, in a setup with two operating systems: EB Autocore OS, which is based on the AUTomotive Open System ARchitecture (AUTOSAR) standard, and Android. Because Android is the gateway to the internet, and because of its open nature and the increased connectivity features of a connected car, many attack surfaces are introduced to the system. As safety and security go hand in hand, the security aspects of the communication channel are taken into account. A portable prototype for multi-OS communication based on shared memory, with security considerations, is developed as a plugin for EB tresos Studio.
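
The thesis's actual plugin is not shown in the abstract; as a rough illustration of shared-memory inter-OS communication with integrity protection, the sketch below (an assumption, not the EB tresos design) pairs each message with a sequence counter and a keyed MAC so the receiving OS can detect tampering and replay. compute_mac() is a hypothetical stand-in for a real CMAC/HMAC routine.

    #include <stdint.h>
    #include <string.h>

    #define MAILBOX_PAYLOAD_MAX 256
    #define MAC_LEN             16

    struct shm_mailbox {
        volatile uint32_t sequence;          /* incremented per message    */
        uint32_t          length;            /* valid payload bytes        */
        uint8_t           payload[MAILBOX_PAYLOAD_MAX];
        uint8_t           mac[MAC_LEN];      /* MAC over sequence+payload  */
    };

    /* Hypothetical keyed MAC; a real system would use e.g. AES-CMAC. */
    void compute_mac(const uint8_t *key, const uint8_t *data, size_t len,
                     uint8_t out[MAC_LEN]);

    void mailbox_send(struct shm_mailbox *box, const uint8_t *key,
                      const uint8_t *msg, uint32_t len)
    {
        uint8_t buf[4 + MAILBOX_PAYLOAD_MAX];
        uint32_t next = box->sequence + 1;

        box->length = len;
        memcpy(box->payload, msg, len);

        memcpy(buf, &next, 4);               /* bind the MAC to the counter */
        memcpy(buf + 4, msg, len);
        compute_mac(key, buf, 4 + len, box->mac);

        box->sequence = next;                /* publish the message last    */
    }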
133

Coexistence of AUTOSAR Software Components and Linux Programs for Future High-Performance Automotive Control Units

Jann, Christian 04 May 2016 (has links) (PDF)
Modern driver assistance systems and the path toward autonomous driving place ever greater demands on the control unit hardware and software in the vehicle. To meet these demands, high-performance control units with a heterogeneous processor architecture are increasingly being used. A safety processor running a standard AUTOSAR operating system handles the real-time-critical and safety-relevant tasks, whereas the compute-intensive and dynamic tasks are executed on a much more powerful performance processor under a POSIX operating system such as Linux. The goal is to also run AUTOSAR software components and modules on the Linux system, for example to implement the communication protocols used in the vehicle or to handle less safety-critical tasks, thereby offloading other control units in the vehicle. To this end, a software architecture was developed in this thesis that makes it possible to run AUTOSAR components directly in a Linux environment. Furthermore, a simple and efficient communication mechanism between AUTOSAR components and Linux applications was devised.
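
The thesis's architecture is not detailed in the abstract; as a generic illustration of hosting a periodic AUTOSAR-style runnable under Linux, the sketch below drives a hypothetical entry point Runnable_Cycle10ms() from a POSIX thread using an absolute monotonic timer. All names are assumptions, not the thesis's actual design.

    #include <pthread.h>
    #include <time.h>

    static void Runnable_Cycle10ms(void)      /* hypothetical runnable */
    {
        /* component behaviour would go here */
    }

    static void *rte_task_10ms(void *arg)
    {
        (void)arg;
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
            Runnable_Cycle10ms();

            /* Advance the activation time by exactly 10 ms. */
            next.tv_nsec += 10 * 1000 * 1000;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, rte_task_10ms, NULL);
        pthread_join(t, NULL);
        return 0;
    }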
134

Chemnitzer Linux-Tage 2012

Schöner, Axel, Meier, Wilhelm, Kubieziel, Jens, Berger, Uwe, Götz, Sebastian, Leuthäuser, Max, Piechnick, Christian, Reimann, Jan, Richly, Sebastian, Schroeter, Julia, Wilke, Claas, Aßmann, Uwe, Schütz, Georg, Kastrup, David, Lang, Jens, Luithardt, Wolfram, Gachet, Daniel, Nasrallah, Olivier, Kölbel, Cornelius, König, Harald, Wachtler, Axel, Wunsch, Jörg, Vorwerk, Matthias, Knopper, Klaus, Meier, Wilhelm, Kramer, Frederik, Jamous, Naoum 20 April 2012 (has links) (PDF)
The Chemnitzer Linux-Tage (Chemnitz Linux Days) is a conference on Linux and open source software. In 2012, 104 talks and workshops were given. This volume contains full papers for 14 main lectures as well as summaries of 90 further talks.
135

Chemnitzer Linux-Tage 2014

Courtenay, Mark, Kölbel, Cornelius, Lang, Jens, Luithardt, Wolfram, Zscheile, Falk, Kramer, Frederik, Schneider, Markus, Pfeifle, Kurt, Berger, Uwe, Wachtler, Axel, Findeisen, Ralf, Schöner, Axel, Lohr, Christina, Herms, Robert, Schütz, Georg, Luther, Tobias 23 April 2014 (has links) (PDF)
This proceedings volume contains 13 contributions by speakers of the Chemnitzer Linux-Tage 2014, as well as summaries of a further 78 talks and 14 workshops. The contributions cover the broad spectrum of the event, including problems of embedded systems and confidential communication.
136

Scalable Tools for Non-Intrusive Performance Debugging of Parallel Linux Workloads

Schöne, Robert, Schuchart, Joseph, Ilsche, Thomas, Hackenberg, Daniel 26 January 2015 (has links) (PDF)
There is a variety of tools to measure the performance of Linux systems and the applications running on them. However, the resulting performance data is often presented in plain text format or only with a very basic user interface. For large systems with many cores and concurrent threads, it is increasingly difficult to present the data in a clear way for analysis. Moreover, certain performance analysis and debugging tasks require the use of a high-resolution, timeline-based approach, again entailing data visualization challenges. Tools in the area of High Performance Computing (HPC) have long been able to scale to hundreds or thousands of parallel threads and help find performance anomalies. We therefore present a solution to gather performance data using Linux performance monitoring interfaces. A combination of sampling and careful instrumentation allows us to obtain detailed performance traces with manageable overhead. We then convert the resulting output to the Open Trace Format (OTF) to bridge the gap between the recording infrastructure and HPC analysis tools. We explore ways to visualize the data by using the graphical tool Vampir. The combination of established Linux and HPC tools allows us to create an interface for easy navigation through time-ordered performance data, grouped by thread or CPU, and to help users find opportunities for performance optimizations.
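
For readers unfamiliar with the Linux performance monitoring interfaces the paper builds on, the following sketch shows the perf_event_open(2) syscall used in its simplest mode, counting retired instructions for a region of code in the current thread. This illustrates the kernel interface only and is not code from the authors' tool.

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.disabled = 1;
        attr.exclude_kernel = 1;
        attr.exclude_hv = 1;

        /* pid=0, cpu=-1: this thread on any CPU; no group, no flags. */
        int fd = (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile uint64_t x = 0;              /* region of interest */
        for (int i = 0; i < 1000000; i++)
            x += i;

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t count = 0;
        read(fd, &count, sizeof(count));
        printf("instructions retired: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }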
137

Evaluation of an Adaptive AUTOSAR System in Context of Functional Safety Environments

Massoud, Mostafa 08 November 2017 (has links) (PDF)
The rapidly evolving technologies in the automotive industry have been defining new challenges, setting new goals, and enabling more complex systems. This steered the AUTOSAR community toward the independent development of the AUTOSAR Adaptive Platform, with the intention of addressing and serving the demands defined by the new technology drivers. Already existing open-source software, specifically GNU/Linux, was recognized as a matching candidate fulfilling the requirements defined by the AUTOSAR Adaptive Platform for its operating system. However, this raises new challenges in addressing the safety aspects and the suitability of its use in safety-critical environments. As safety standards do not explicitly handle the use of open-source software, this thesis proposes a tailoring procedure that aims to match the requirements defined by ISO 26262 for a possible qualification of GNU/Linux. While very little is known about the behavior specification of GNU/Linux that would support its use in safety-critical environments, the outlined methodology seeks to verify the specification requirements of GNU/Linux by leveraging its claimed compliance with the POSIX standard. To further enable the use of GNU/Linux in safety-critical applications with a high degree of confidence, a software partitioning mechanism is implemented to provide control over the resource consumption of the operating system (specifically computation time and memory usage) between applications of different criticality, in order to achieve Freedom from Interference. The implementation demonstrates the ability to avoid interference with the required resources of safety-critical applications.
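
The abstract does not name the partitioning mechanism used. One common Linux facility for bounding a lower-criticality application's CPU time and memory, so it cannot starve co-located safety-critical work, is cgroup v2; the sketch below is an illustration under that assumption, with example paths and limits, not the thesis's implementation.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    static int write_str(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fputs(value, f);
        return fclose(f);
    }

    int main(void)
    {
        const char *cg = "/sys/fs/cgroup/qm_partition";  /* example name */
        char path[256], pid[32];

        mkdir(cg, 0755);                     /* create the partition      */

        /* Allow at most 20 ms of CPU every 100 ms period. */
        snprintf(path, sizeof path, "%s/cpu.max", cg);
        write_str(path, "20000 100000");

        /* Cap memory at 256 MiB. */
        snprintf(path, sizeof path, "%s/memory.max", cg);
        write_str(path, "268435456");

        /* Move the current process into the partition. */
        snprintf(path, sizeof path, "%s/cgroup.procs", cg);
        snprintf(pid, sizeof pid, "%d", getpid());
        write_str(path, pid);

        /* ... exec the non-critical workload here ... */
        return 0;
    }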
139

DevAlert in Linux-Based Embedded Systems

Warnerman, Thimmy, Nilsson, Ewelin January 2024 (has links)
The Linux operating system is used more frequently and has become more common in connection with embedded systems, which has led companies such as Percepio to ask how data collection in case of process crashes is carried out in Linux-based embedded systems. The motivation is to increase observability in systems via external means and to allow remote troubleshooting via a cloud-based dashboard. The purpose of this work is to investigate the existing approaches to data collection in case of signal interruptions. Our work aims to implement a prototype of Percepio's monitoring tool DevAlert on a Linux-based embedded system. In this report, we examine the current practices for troubleshooting and error handling in this type of system. To fulfill the purpose and goals of our work, we gathered information in a literature study about what is relevant for increasing observability in similar systems with failing processes. This was followed by iterative experiments in which the information collected from the literature study was confirmed and implemented in our prototype. The final iteration was performed on a virtual machine, resulting in a successful prototype implementation of DevAlert in Linux. We believe that the results we present should be applicable to an embedded Linux system, because the frameworks from which we gather information use a Linux kernel.
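
DevAlert's actual data-collection path is not described in the abstract. As a generic Linux illustration of collecting data when a process crashes, the sketch below catches fatal signals with a SA_SIGINFO handler, appends a small crash record, and then re-raises the signal so the default core-dump behavior is preserved; the file path is an example value.

    #include <signal.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    static void crash_handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)ctx;
        char buf[128];
        int fd = open("/var/log/crash_record.txt",
                      O_WRONLY | O_CREAT | O_APPEND, 0600);
        if (fd >= 0) {
            /* Note: snprintf is not formally async-signal-safe; a production
             * handler would pre-format or use a safer formatting routine. */
            int n = snprintf(buf, sizeof buf, "signal=%d fault_addr=%p\n",
                             sig, info->si_addr);
            write(fd, buf, (size_t)n);       /* write() is async-signal-safe */
            close(fd);
        }
        /* Re-raise with the default action so a core dump is still produced. */
        signal(sig, SIG_DFL);
        raise(sig);
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sigemptyset(&sa.sa_mask);
        sa.sa_sigaction = crash_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);
        sigaction(SIGABRT, &sa, NULL);

        /* Trigger a crash to demonstrate the handler. */
        volatile int *p = NULL;
        *p = 42;
        return 0;
    }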
140

Linux Powered Telemetry Processing

Ayala, Joseph, Sorton, Eric October 2000 (has links)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / Since its debut, the Linux operating system has garnered much attention in the software development community. This paper discusses the open source operating system, Linux, and its application as the operating system powering a commercial off-the-shelf telemetry processing system. The paper begins by discussing the real-time requirements of the operating system in a telemetry processing system. A discussion of the Linux system is then presented. The soft real-time features of Linux that allow it to meet the telemetry processing requirements are discussed. Linux is compared with more traditional operating system products, and points are made as to why open source software is just as capable of handling mission-critical applications, if not preferable. The paper also presents the authors' view of the future of Linux and open source software in the telemetry marketplace. The paper concludes with a summary of products available for Linux that support telemetry processing and the data acquisition environment.
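
The paper itself predates most of today's real-time tooling; as a generic illustration of the kind of soft real-time facilities Linux offers (not code from the paper), the sketch below gives a telemetry processing thread a fixed real-time priority and locks its memory to reduce paging-induced jitter. The priority value is an example.

    #include <sched.h>
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };  /* example value */

        /* SCHED_FIFO gives fixed-priority, run-until-block scheduling. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler (needs root or CAP_SYS_NICE)");

        /* Keep current and future pages resident to avoid page-fault jitter. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        /* ... acquisition / decommutation loop would run here ... */
        return 0;
    }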
