  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Software Design of A UNIX-like Kernel

Jya, Jean-Ray 15 September 2003 (has links)
Abstract In the age of deep submicron VLSI, we can design complete system applications on a single chip. In such system-on-a-chip designs, hardware and software functions are integrated and managed by application-specific operating system functions. This motivates us to study the design structures of current OS kernels. In this research, we applied an executable specification method to the software design of a UNIX kernel. We first studied the overall software structure of UNIX kernels. Then, we analyzed the detailed designs of process management and memory management, applying object-oriented analysis and design techniques as well as a hierarchical state machine control design method. Finally, we map this design onto an executable specification framework to produce system prototype designs for collecting early experimental results and tuning application-specific kernel functionalities.
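The hierarchical state machine control design mentioned in the abstract can be illustrated with a minimal sketch. The state names, hierarchy, and transitions below are illustrative assumptions, not the thesis's actual kernel design:

```python
# A tiny hierarchical state machine for process management.
# States, superstates, and events here are hypothetical examples.

class State:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # enclosing superstate, or None at the root

    def path(self):
        # Walk from the root superstate down to this state.
        node, chain = self, []
        while node:
            chain.append(node.name)
            node = node.parent
        return list(reversed(chain))

# Toy hierarchy: the "Alive" superstate contains Ready/Running/Sleeping.
alive    = State("Alive")
ready    = State("Ready", alive)
running  = State("Running", alive)
sleeping = State("Sleeping", alive)
zombie   = State("Zombie")

TRANSITIONS = {
    ("Ready", "dispatch"): running,
    ("Running", "preempt"): ready,
    ("Running", "wait"):    sleeping,
    ("Sleeping", "wakeup"): ready,
    ("Running", "exit"):    zombie,
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state.name, event), state)

s = step(ready, "dispatch")
print(s.name)    # Running
print(s.path())  # ['Alive', 'Running']
```

The superstate link is what makes the machine hierarchical: behaviour common to all "Alive" substates can be attached once to the parent rather than duplicated per state.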

Chameleon, a dynamically extensible and configurable object-oriented operating system

Bryce, Robert William 03 May 2017 (has links)
Currently, new algorithms are being incorporated into operating systems to deal with a host of new requirements from multimedia applications. These new algorithms deal with soft real-time scheduling, different memory models, and changes to buffer caching and network protocols. However, old design techniques such as structured programming, global variables, and implied dependencies are impeding this development and proof of correctness. Many current operating system research groups are developing extensible systems, where new code can be placed into the system and even into kernel layers. A primary difficulty in these efforts is how to avoid adversely affecting reliability and traditional measures of performance. Techniques from the object-oriented paradigm are being incorporated to better manage these issues because they have shown promise in improving modularity, information hiding, and reusability. In some cases, these techniques are even being used to build fresh operating systems from the ground up with the goal of easier extensibility and adaptability in the future. The Apertos operating system introduced and implemented many concepts originally alien to operating system research but exhibited unacceptable performance for multimedia applications. This dissertation introduces Chameleon, a new object-oriented operating system that shares the same philosophical approach as Apertos, leveraging meta designs and concepts to deal with the diverse requirements of today's and future multimedia applications. However, Chameleon takes a new and original approach to design and implementation to achieve a high degree of adaptability while retaining the performance of a micro-kernel. In Chameleon, the object-oriented paradigm serves as the basis for newly introduced concepts such as AbstractCPU, brokers, and the broker interface hierarchy.
Together, AbstractCPU, brokers, and related software engineering techniques such as dynamic class binding serve as a basis for all system management and communication, and for an event-driven model where new events can be defined and dynamically introduced to a running system. The meta design clearly defines a hierarchy of "operating environments" that can be optimized for a particular type of application. As such, hierarchical resource management plays an important role in Chameleon. A minimal set of primitives appropriate for hierarchical memory management is defined atop a single-address-space memory model. Similarly, hierarchical CPU scheduling is employed, as different applications will exhibit different scheduling requirements; different schedulers may then co-exist on the same CPU. Communication in a hierarchically structured operating system is also detailed. The implementation of the Chameleon structuring concept is presented and analyzed. Standard performance measures are used to compare Chameleon to related research and commercial operating systems. Costs of individual operations are also presented to outline the overheads and gains associated with the Chameleon model.
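The idea of hierarchical CPU scheduling with co-existing policies can be sketched briefly. The class names and the round-robin delegation policy below are assumptions for illustration; they do not reproduce Chameleon's actual AbstractCPU or broker interfaces:

```python
# Sketch: a root scheduler divides CPU picks among child schedulers,
# each with its own policy, so different schedulers co-exist on one CPU.
from collections import deque

class FifoScheduler:
    def __init__(self):
        self.queue = deque()
    def add(self, task):
        self.queue.append(task)
    def pick(self):
        if not self.queue:
            return None
        task = self.queue.popleft()
        self.queue.append(task)  # rotate: round-robin within this class
        return task

class PriorityScheduler:
    def __init__(self):
        self.tasks = []  # (priority, name); lower number = higher priority
    def add(self, task, priority):
        self.tasks.append((priority, task))
        self.tasks.sort()
    def pick(self):
        return self.tasks[0][1] if self.tasks else None

class RootScheduler:
    """Alternate among children, skipping any that have nothing runnable."""
    def __init__(self, children):
        self.children = children
        self.turn = 0
    def pick(self):
        for _ in range(len(self.children)):
            child = self.children[self.turn]
            self.turn = (self.turn + 1) % len(self.children)
            task = child.pick()
            if task is not None:
                return task
        return None

interactive = FifoScheduler()
interactive.add("shell")
realtime = PriorityScheduler()
realtime.add("audio", priority=0)
realtime.add("logger", priority=5)

root = RootScheduler([realtime, interactive])
print([root.pick() for _ in range(4)])  # ['audio', 'shell', 'audio', 'shell']
```

Each child owns its own queue discipline; the root only decides which child runs next, which is the essence of hierarchical scheduling.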

Software Architecture Design of a Configurable Object-Oriented Operating System

Lin, Yu-chung 11 September 2008 (has links)
With the emergence of embedded systems, operating systems are now widely used in applications beyond desktops and workstations, such as household electrical appliances and mobile devices. Diverse applications place different requirements on the software architecture of an operating system; these can be satisfied by adopting a configurable operating system. In this research, using modularization and inter-module communication channels, we developed an operating system with a configurable software architecture. By configuring the internal channels with interfacing and protection components, we can realize the operating system in various software architectures.
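The channel-configuration idea can be sketched as follows. The component and function names are hypothetical, invented for illustration rather than taken from the thesis:

```python
# Sketch: an inter-module communication channel whose behaviour changes
# when interfacing/protection components are stacked onto it, while the
# modules on either end stay unchanged.

class Channel:
    def __init__(self, handler):
        self.handler = handler  # entry point of the receiving module
    def send(self, msg):
        return self.handler(msg)

def protection(allowed, next_handler):
    # A protection component: drop operations not permitted
    # in this particular system configuration.
    def handler(msg):
        if msg.get("op") not in allowed:
            return {"error": "denied"}
        return next_handler(msg)
    return handler

def filesystem_module(msg):
    # A stand-in receiving module that just acknowledges the operation.
    return {"ok": msg["op"]}

# Two configurations of the same modules, differing only in the channel.
open_channel      = Channel(filesystem_module)
protected_channel = Channel(protection({"read"}, filesystem_module))

print(open_channel.send({"op": "write"}))       # {'ok': 'write'}
print(protected_channel.send({"op": "write"}))  # {'error': 'denied'}
print(protected_channel.send({"op": "read"}))   # {'ok': 'read'}
```

Because the modules never see the channel's internals, swapping or stacking channel components reconfigures the architecture without touching module code.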

MinixARM: A port of Minix 3 to an ARM-based embedded system

Chiu, Sheng-yu 27 June 2007 (has links)
Operating system theory is relatively mature, but implementation remains hard compared to many other areas of computer science. For example, virtual memory has been around for more than 20 years since its introduction, yet understanding how an operating system supports virtual memory is not a trivial task, let alone implementing it. Minix is an operating system designed for educational purposes, and a good starting point for a novice who wants to learn operating systems. The third version of Minix has moved towards a true microkernel design and is targeted at small computers and embedded systems. The advantages of the microkernel architecture are its high fault tolerance and highly modular design, which make it much more flexible for versatile applications on embedded systems. However, to the best of our knowledge, Minix 3 so far runs only on Intel-based machines. The objective of this thesis is thus to port Minix 3 to an ARM-based embedded system, making it an experimental microkernel for embedded systems. Also, due to the incompatibility between the segmented memory model used by Minix 3 on IA-32 and the unsegmented memory model supported by ARM, we provide an API to simplify the porting effort.
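The segmented-versus-flat incompatibility the abstract mentions can be illustrated with a minimal sketch of the kind of translation such an API might perform. The class, field, and function names are assumptions; the thesis's actual API is not reproduced here:

```python
# Sketch: emulating IA-32-style segment-relative addressing on a flat
# (unsegmented) address space by keeping per-segment base/limit records
# in software, so segment-aware code can be ported with fewer changes.

class SegmentTable:
    def __init__(self):
        self.segments = {}  # selector -> (base, limit)

    def define(self, selector, base, limit):
        self.segments[selector] = (base, limit)

    def to_flat(self, selector, offset):
        """Translate selector:offset into a flat address, checking the limit."""
        base, limit = self.segments[selector]
        if offset >= limit:
            raise ValueError("segment limit exceeded")
        return base + offset

segs = SegmentTable()
segs.define("CS", base=0x10000, limit=0x4000)
segs.define("DS", base=0x20000, limit=0x8000)

print(hex(segs.to_flat("DS", 0x100)))  # 0x20100
```

On IA-32 the base/limit check is done by the MMU's segmentation hardware; on ARM there is no such hardware, so a porting layer like this has to perform the arithmetic and the bounds check itself.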

Product Bundling in Software Industry: The Case of Operating System and Browser Market

Hsu, Tuang-Chou 20 June 2000 (has links)
Product bundling is a common tool for firms to increase sales and profits. In traditional markets, because each consumer's reservation price for each product differs, adopting a product bundling strategy can achieve this goal. Now, owing to rapidly changing technology and the prevalence of computers and the Internet, product bundling takes many forms and varieties, and studying it in the computer, software, and other high-tech information industries is more difficult. Given the characteristics of these industries, the product bundling strategy must be modified to meet their demands. This study focuses on product bundling strategy in the software industry. Recently, the Microsoft antitrust case has drawn public attention back to product bundling in the software industry. The software industry, however, has two very important characteristics: first, there are compatibility problems between different products; second, it is difficult to define the boundary of a product. This study therefore builds a model to explain the application of product bundling strategy in the software industry and uses it to examine the Microsoft case. The study has three objectives. First, to analyze the advantages and disadvantages of product bundling strategies under the various conditions studied in recent years. Second, to use the model to show that if the only monopoly firm in the main product market bundles its main and downstream products, the downstream competitor and consumers will be harmed. Third, to use the model's inferences to comment on the case in which the U.S. Department of Justice accused Microsoft of bundling its personal computer operating system and browser.
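The core intuition behind bundling with heterogeneous reservation prices can be shown with a toy calculation. The numbers and the brute-force pricing search below are illustrative assumptions, not the thesis's actual model:

```python
# Toy illustration: when consumers' reservation prices for two products
# are negatively correlated, a single seller can earn more by bundling
# than by pricing the products separately.
from itertools import product

# Each consumer's reservation prices for (operating system, browser).
consumers = [(100, 20), (60, 60), (20, 100)]

def best_separate_revenue():
    # Try every candidate price pair drawn from observed reservation
    # prices; a consumer buys a product iff price <= reservation price.
    best = 0
    prices = {r for pair in consumers for r in pair}
    for p_os, p_br in product(prices, repeat=2):
        rev = sum((p_os if os >= p_os else 0) + (p_br if br >= p_br else 0)
                  for os, br in consumers)
        best = max(best, rev)
    return best

def best_bundle_revenue():
    # A consumer buys the bundle iff its price <= os + browser valuation.
    totals = [os + br for os, br in consumers]
    return max(p * sum(1 for t in totals if t >= p) for p in totals)

print(best_separate_revenue())  # 240
print(best_bundle_revenue())    # 360
```

Here every consumer values the bundle at 120, so a single bundle price extracts the full surplus, while no pair of separate prices can; this dispersion-reducing effect is the classic mechanism that makes bundling attractive to a monopolist.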

Remote interprocess communication and its performance in Team Shoshin

Acton, Donald William January 1985 (has links)
Team Shoshin is an extension of Shoshin, a testbed for distributed software developed at the University of Waterloo. Part of the functionality of Shoshin can be attributed to its transparent treatment of remote interprocess communication. This is accomplished by having a special system process, the communications manager, handle the exchange of messages between machines. Shoshin's new hardware environment is significantly different from what it was originally designed on. This thesis describes the problems the new hardware presented and how those problems were overcome. Performance measurements of the time required for both local and remote message exchanges are made and compared. Using this empirical data, a simple model of the remote message exchange protocol is developed to try to determine how to improve performance. The software and hardware enhancements made to Shoshin have resulted in an improvement in system interprocess communication performance by a factor of four. Finally, as a demonstration of Shoshin's interprocess communication facilities, a simple UNIX-based file server is implemented.
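The transparency property described above, where the sender uses the same primitive whether the destination is local or remote, can be sketched as follows. The class and method names are illustrative assumptions, not Shoshin's actual interfaces:

```python
# Sketch: send() looks identical to the caller for local and remote
# destinations; a communications-manager role relays off-machine
# messages to the peer machine.

class Machine:
    def __init__(self, name):
        self.name = name
        self.local = {}  # pid -> mailbox (list of delivered messages)
        self.peers = {}  # machine name -> Machine (network stand-in)

    def register(self, pid):
        self.local[pid] = []

    def send(self, dest, msg):
        machine_name, pid = dest
        if machine_name == self.name:
            self.local[pid].append(msg)  # fast local delivery
        else:
            # Communications-manager path: forward to the remote machine.
            self.peers[machine_name].deliver(pid, msg)

    def deliver(self, pid, msg):
        self.local[pid].append(msg)

a, b = Machine("A"), Machine("B")
a.peers["B"] = b
a.register(1)
b.register(7)

a.send(("A", 1), "local hello")   # stays on machine A
a.send(("B", 7), "remote hello")  # relayed to machine B

print(a.local[1])  # ['local hello']
print(b.local[7])  # ['remote hello']
```

The caller's code is identical in both sends; only the dispatch inside `send` differs, which is why the remote path's extra hops dominate the local/remote performance gap the thesis measures.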


Smith, Dan, Steele, Doug 10 1900 (has links)
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Near real-time telemetry acquisition, processing, and analysis on a desktop PC have always been difficult. Many factors complicate working with real-time data, including operating system latencies, design inefficiencies, and hardware limitations. These problems are further compounded when data from multiple sources must be integrated, increasing design complexity. Current design solutions for analyzing data in near real time take advantage of the latest hardware implementations, software designs, and language features. This paper discusses several issues found with PC-based telemetry systems and how new designs are addressing them.

A separation logic framework for HOL

Tuerk, Thomas January 2011 (has links)
No description available.

Optimization of Component Connections for an Embedded Component System

Azumi, Takuya, Takada, Hiroaki, Oyama, Hiroshi 29 August 2009 (has links)
No description available.

Designing a company-specific Production System: Developing an appropriate operating approach

Meinhardt, Johan, Kallin, Dennis January 2013 (has links)
To boost operational performance and ultimately competitiveness, firms choose to develop company-specific Production Systems (XPS). The management literature suggests that an XPS must be tailored to the firm's operating context to yield full effect. This explorative case study examines how to design an XPS that provides an appropriate operating approach. Clarifying terminological confusion, the study proposes an XPS framework derived from the literature that encompasses three levels of operating elements: philosophical, principle, and practice. Investigating how to prioritize among these elements, the study empirically validates the importance of tailoring firm operating approaches. In particular, categorizing practices as technical or socio-technical, and internal or external, the study contradicts existing research and posits that (1) socio-technical practices are a prerequisite for the adoption of technical practices and (2) practices classified as internal also have an external dimension. In addition, the results indicate that an XPS must evolve as contextual requirements and prerequisites change, thus making the design of an XPS dynamic. Finally, the study proposes a case-specific production system tailored to the requirements of the research object's market, organizational, and process context.
