  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

An Operating System Architecture and Hybrid Scheduling Methodology for Real-Time Systems with Uncertainty

Apte, Manoj Shriganesh 11 December 2004 (has links)
Personal computer desktops and other standardized computer architectures are optimized to provide the best performance under frequently occurring conditions. Real-time systems designed for such architectures using worst-case analysis under-utilize the hardware. This shortcoming motivates scheduling algorithms that improve overall utilization by accounting for the inherent uncertainty in task execution duration. A real-time task dispatcher must perform its function with constant scheduling overhead; given the NP-hard nature of scheduling non-preemptible tasks, dispatch decisions for such systems cannot be computed in real time. This argues for a hybrid architecture that includes an offline policy generator and an online dispatcher. This dissertation proposes and demonstrates a hybrid operating system architecture that enables cost-optimal task dispatch on Commercial-Off-The-Shelf (COTS) systems. This is achieved by explicitly accounting for the stochastic nature of each task's execution time and dynamically learning the system's behavior. Decision Theoretic Scheduling (DTS) provides the framework for scheduling under uncertainty: the real-time scheduling problem is cast as a Markov Decision Process (MDP), and an offline policy generator discovers an epsilon-optimal policy using value iteration with model learning. For the selected representation of states, actions, model, and rewards, the policy discovered using value iteration is proved to have a probability of failure less than any arbitrarily small user-specified value. The PromisQoS operating system architecture demonstrates a practical implementation of the proposed approach. PromisQoS is a Linux-based platform that supports concurrent execution of time-based (preemptible and non-preemptible) real-time tasks and best-effort processes on an interactive workstation. Several examples demonstrate that model learning and scheduling under uncertainty enable PromisQoS to achieve better CPU utilization than other scheduling methods. Real-time task sets that solve practical problems, such as a Laplace solver, matrix multiplication, and matrix transpose, demonstrate the robustness and correctness of the PromisQoS design and implementation. This pioneering application demonstrates the feasibility of MDP-based scheduling for real-time tasks in practical systems and opens avenues for further research into the use of DTS techniques in real-time system design.
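The hybrid approach described above, computing an epsilon-optimal dispatch policy offline with value iteration and then looking it up in constant time online, can be illustrated with a small sketch. The state space, actions, rewards, and transition model below are hypothetical placeholders for illustration, not the PromisQoS formulation.

```cpp
// Minimal value-iteration sketch for an MDP-style scheduling policy.
// States, actions, rewards, and transitions are illustrative placeholders.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>

constexpr int kStates  = 4;      // e.g., coarse "slack remaining" buckets
constexpr int kActions = 2;      // e.g., 0 = dispatch task now, 1 = idle
constexpr double kGamma   = 0.95;
constexpr double kEpsilon = 1e-6;

// P[s][a][s'] = transition probability, R[s][a] = expected reward.
using Transition = std::array<std::array<std::array<double, kStates>, kActions>, kStates>;
using Reward     = std::array<std::array<double, kActions>, kStates>;

std::array<int, kStates> valueIteration(const Transition& P, const Reward& R) {
  std::array<double, kStates> V{};    // value estimates
  std::array<int, kStates> policy{};  // greedy action per state
  double delta = kEpsilon + 1.0;
  while (delta > kEpsilon) {          // sweep until epsilon-convergence
    delta = 0.0;
    for (int s = 0; s < kStates; ++s) {
      double best = -1e300;
      int bestA = 0;
      for (int a = 0; a < kActions; ++a) {
        double q = R[s][a];
        for (int sp = 0; sp < kStates; ++sp) q += kGamma * P[s][a][sp] * V[sp];
        if (q > best) { best = q; bestA = a; }
      }
      delta = std::max(delta, std::fabs(best - V[s]));
      V[s] = best;
      policy[s] = bestA;
    }
  }
  return policy;  // offline policy, looked up in O(1) by an online dispatcher
}

int main() {
  Transition P{};
  Reward R{};
  // Toy model: dispatching earns reward but consumes slack; dispatching
  // with no slack fails; idling is safe and recovers slack.
  for (int s = 0; s < kStates; ++s) {
    R[s][0] = (s > 0) ? 1.0 : -10.0;
    R[s][1] = 0.0;
    P[s][0][s > 0 ? s - 1 : 0] = 1.0;
    P[s][1][std::min(s + 1, kStates - 1)] = 1.0;
  }
  auto policy = valueIteration(P, R);
  for (int s = 0; s < kStates; ++s)
    std::printf("state %d -> action %d\n", s, policy[s]);
}
```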
112

A Context-Aware Approach to Android Memory Management

Muthu, Srinivas 14 November 2016 (has links)
No description available.
113

ADVANCEMENT OF OPERATING SYSTEM TO MANAGE CRITICAL RESOURCES IN INCREASINGLY COMPLEX COMPUTER ARCHITECTURE

Ding, Xiaoning 28 September 2010 (has links)
No description available.
114

Real-Time Computational Scheduling with Path Planning for Autonomous Mobile Robots

Chen, David Xitai 05 June 2024 (has links)
With advances in technology, modern autonomous vehicles are required to perform more complex tasks and navigate challenging terrain, and the computational resources needed to accomplish those tasks accurately have grown exponentially in the last decade. With growing computational intensity and limited computational resources on embedded devices, schedulers are necessary to manage and optimize computational loads between the GPU and CPU, as well as to reduce power consumption and maximize time in the field. Thus far, the effectiveness of schedulers and path planners in managing computational load on embedded devices has been demonstrated through extensive bench testing and simulated environments; however, there has been no significant real-world data collection with all hardware and software combined. This thesis focuses on the implementation of various computational loads (i.e., scheduler, path planner, RGB-D camera, object detection, depth estimation, etc.) on the NVIDIA Jetson AGX Xavier and on real-world experimentation on the Clearpath Robotics Jackal. We compare the computation response time and effectiveness of all systems tested in the real world against the same software and hardware architecture on the bench. / Master of Science / Modern autonomous vehicles are required to perform more complex tasks with limited computational resources, power, and operating frequency. In the recent past, research on autonomous vehicles has focused on proving the effectiveness of software-based approaches on embedded devices with an integrated GPU to improve overall performance by speeding up task completion. Our goal is to perform real-world data collection and experimentation with both hardware and software frameworks onboard the Clearpath Robotics Jackal, validating the efficiency and computational load of the software framework across multiple varying environments.
115

Exploring the Boundaries of Operating System in the Era of Ultra-fast Storage Technologies

Ramanathan, Madhava Krishnan 24 May 2023 (has links)
Storage hardware is evolving at a rapid pace to keep up with the exponential rise in data consumption. Recently, ultra-fast storage technologies such as nanosecond-scale byte-addressable Non-Volatile Memory (NVM) and microsecond-scale SSDs have been commercialized. However, the OS storage stack has not evolved fast enough to keep up with this new ultra-fast storage hardware. Hence, the latency of user-kernel context switches caused by system calls and hardware interrupts is no longer negligible, as it was presumed to be in the era of slower, high-latency hard disks. Further, the OS storage stack was not designed with multi-core scalability in mind; with CPU core counts continuously increasing, the OS storage stack, particularly the Virtual Filesystem (VFS) and filesystem layers, is increasingly becoming a scalability bottleneck. Applications bypass the kernel completely (a kernel-bypass storage stack) to keep the storage stack from becoming a performance and scalability bottleneck, but this comes at the cost of programmability, isolation, safety, and reliability. Moreover, scalability bottlenecks in the filesystem cannot be addressed by simply moving the filesystem to userspace. Overall, while designing a kernel-bypass storage stack looks obvious and promising, there are several critical challenges in programmability, performance, scalability, safety, and reliability that need to be addressed to bypass the traditional OS storage stack. This thesis proposes a series of kernel-bypass storage techniques designed particularly for fast memory-centric storage. First, it proposes a scalable persistent transactional memory (PTM) programming model to address the programmability and multi-core scalability challenges. Next, it proposes techniques to make the PTM memory-safe and fault-tolerant. Further, it proposes a kernel-bypass programming framework to port legacy DRAM-based in-memory database applications to run on persistent memory-centric storage. Finally, it explores an application-driven approach to address the CPU-side and storage-side bottlenecks in deep learning model training by proposing a kernel-bypass programming framework that moves compute closer to the storage. Overall, the techniques proposed in this thesis provide a strong foundation for applications to adopt and exploit emerging ultra-fast storage technologies without being bottlenecked by the traditional OS storage stack. / Doctor of Philosophy / Storage hardware is evolving at a rapid pace to keep up with the exponential rise in data consumption. Recently, ultra-fast storage technologies such as nanosecond-scale byte-addressable Non-Volatile Memory (NVM) and microsecond-scale SSDs have been commercialized. The Operating System (OS) has been the gateway for applications to access and manage storage hardware. Unfortunately, the OS storage stack, designed around slower storage technologies (e.g., hard disk drives), becomes a performance, scalability, and programmability bottleneck for emerging ultra-fast storage technologies. This has created a large gap between storage hardware advancements and the system software support for such emerging storage technologies. Consequently, applications are constrained by the limitations of the OS storage stack when they intend to exploit these emerging storage technologies.
In this thesis, we propose a series of novel kernel-bypass storage stack designs to address the performance, scalability, and programmability limitations of the conventional OS storage stack. The kernel-bypass storage stack proposed in this thesis is carefully designed with ultra-fast modern storage hardware in mind. Application developers can leverage the kernel-bypass techniques proposed in this thesis to develop new applications, or to port legacy applications to the emerging ultra-fast storage technologies, without being constrained by the limitations of the conventional OS storage stack.
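As a rough illustration of the persistent-transactional-memory style of programming the thesis argues for, the sketch below shows a failure-atomic update built from an undo log and explicit cache-line flushes. The log layout and the flush/fence choices (x86 clflush/sfence on ordinary memory) are simplified assumptions for illustration; they are not the thesis's actual PTM design or any specific library's API, and a real system would operate on a mapped NVM region.

```cpp
// Sketch of a failure-atomic "transactional" store in the PTM style:
// log the old value, persist the log, apply the update, persist it,
// then retire the log entry. Simplified illustration only.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <emmintrin.h>   // _mm_clflush, _mm_sfence

struct UndoLogEntry {
  void*    addr;       // location being modified
  uint64_t old_value;  // value to restore on recovery
  bool     valid;      // set only after addr/old_value are durable
};

static void persist(const void* p, size_t len) {
  // Flush every cache line covering [p, p + len) and order the flushes.
  auto start = reinterpret_cast<uintptr_t>(p) & ~uintptr_t{63};
  auto end   = reinterpret_cast<uintptr_t>(p) + len;
  for (uintptr_t line = start; line < end; line += 64)
    _mm_clflush(reinterpret_cast<const void*>(line));
  _mm_sfence();
}

static UndoLogEntry g_log;  // in a real PTM this would itself live in NVM

void transactional_store(uint64_t* field, uint64_t new_value) {
  g_log.addr = field;                              // 1. record undo info
  g_log.old_value = *field;
  persist(&g_log, sizeof(g_log.addr) + sizeof(g_log.old_value));
  g_log.valid = true;                              // 2. commit the log entry
  persist(&g_log.valid, sizeof(g_log.valid));

  *field = new_value;                              // 3. apply the update
  persist(field, sizeof(*field));

  g_log.valid = false;                             // 4. retire the log entry
  persist(&g_log.valid, sizeof(g_log.valid));
}

int main() {
  uint64_t balance = 100;  // stand-in for a field of a persistent object
  transactional_store(&balance, 250);
  std::printf("balance = %llu\n", static_cast<unsigned long long>(balance));
}
```

On recovery, a valid log entry would be replayed (restore old_value to addr) before the data structure is used again, which is what makes the update failure-atomic.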
116

Specializing a general-purpose operating system

Raza, Ali 10 September 2024 (has links)
This thesis aims to address the growing disconnect between the goals general-purpose operating systems were designed to achieve and the requirements of some of today's new workloads and use cases. General-purpose operating systems multiplex system resources between multiple non-trusting workloads and users. They have generalized code paths designed to support diverse applications, potentially running concurrently, and this generality comes at a performance cost. In contrast, many modern data center workloads are deployed separately in single-user, and often single-workload, virtual machines and require specialized behavior from the operating system for high-speed I/O. Unikernels, library operating systems, and systems that exploit kernel-bypass mechanisms have been developed to provide high-speed I/O by being specialized to meet the needs of performance-critical workloads. These systems have demonstrated immense performance advantages over general-purpose operating systems but have yet to see widespread adoption. This is because, compared to general-purpose operating systems, they lack a battle-tested code base, a large developer community, wide application and hardware support, and a vast ecosystem of tools and utilities. This thesis explores a novel view of the design space: a generality-specialization spectrum. General-purpose operating systems like Linux lie at one end of this spectrum; they are willing to sacrifice performance to support a wide range of applications and a broad set of use cases. Moving towards the specialization end, specializable systems such as unikernels, library operating systems, and those that exploit kernel-bypass mechanisms appear at different points based on how much specialization a system enables and how much application and hardware compatibility it gives up compared to general-purpose operating systems. Is it possible, at compile or configure time, to enable a system to move to different points on the generality-specialization spectrum depending on the needs of the workload? Any application would just work at the generality end, where the application and hardware compatibility and the ecosystem of the general-purpose operating system are preserved. Developers could then focus on optimizing only the performance-critical code paths, based on application requirements, to improve performance. With each new optimization added, the set of target applications would shrink; in other words, the system would be specialized for a class of applications, offering high performance for a potentially narrow set of use cases. If such a system could be designed, it would have the application and hardware compatibility and the ecosystem of general-purpose operating systems as a starting point. Based on the target application, selected code paths could then be incrementally optimized to improve performance, moving the system toward the specializable end of the spectrum. This differs from previous specializable systems, which were designed to demonstrate large performance advantages over general-purpose operating systems and then tried to retrofit application and hardware compatibility. To explore the above question, this thesis proposes Unikernel Linux (UKL), which integrates optimizations explored by specializable systems into Linux. UKL starts at the general-purpose end of the spectrum and, by linking an application with the kernel, executing it in kernel mode, and replacing system calls with function calls, offers a modest performance advantage over Linux.
This base model of UKL supports most Linux applications (after recompiling and relinking) and most hardware. Further, this thesis explores common optimizations used by specializable systems, e.g., faster transitions between application and kernel code, avoiding stack switches, run-to-completion modes, and bypassing the kernel TCP state machine to access low-level functions directly. These optimizations allow larger performance advantages over unmodified Linux but apply to a narrower set of workloads. The contributions of this thesis include: a novel approach to specialization, i.e., adding optimizations to a general-purpose operating system to move it along the generality-specialization spectrum; an existence proof that optimizations explored by specializable systems can be integrated into a general-purpose operating system without major changes to the invariants, assumptions, and code of that general-purpose operating system; a demonstration that the resulting system can be moved along the generality-specialization spectrum; and a demonstration that performance gains are possible.
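The transition cost that UKL targets by turning system calls into function calls can be motivated with a small userspace microbenchmark that contrasts a real system call against an ordinary call. This is only an illustration of the user/kernel crossing overhead, not UKL code; the exact numbers depend heavily on the machine and kernel mitigations in effect.

```cpp
// Contrast the cost of a system call (user/kernel transition) with a plain
// function call. Illustrative microbenchmark only, not part of UKL.
#include <chrono>
#include <cstdio>
#include <sys/syscall.h>
#include <unistd.h>

__attribute__((noinline)) long plain_function() { return 42; }

int main() {
  constexpr int kIters = 1'000'000;
  using clock = std::chrono::steady_clock;
  volatile long sink = 0;

  auto t0 = clock::now();
  for (int i = 0; i < kIters; ++i) sink = sink + syscall(SYS_getpid);  // enters the kernel
  auto t1 = clock::now();
  for (int i = 0; i < kIters; ++i) sink = sink + plain_function();     // stays in userspace
  auto t2 = clock::now();

  auto ns = [](auto d) {
    return std::chrono::duration_cast<std::chrono::nanoseconds>(d).count();
  };
  std::printf("syscall:       %ld ns/iter\n", static_cast<long>(ns(t1 - t0) / kIters));
  std::printf("function call: %ld ns/iter\n", static_cast<long>(ns(t2 - t1) / kIters));
  return static_cast<int>(sink & 1);  // keep the loops from being optimized away
}
```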
117

An open-source digital twin of the wire arc directed energy deposition process for interpass temperature regulation

Stokes, Ryan Mitchell 10 May 2024 (has links) (PDF)
The overall goal of this work is to create an open-source digital twin of the wire arc directed energy deposition process, using Robot Operating System 2 (ROS 2), for interpass temperature regulation of a maraging steel alloy. The framework takes a novel approach to regulating interpass temperatures by using in-situ infrared camera data and closed-loop feedback control enabled by ROS 2. This is the first implementation of ROS 2 for wire arc directed energy deposition, and the framework outlines a sensor- and machine-agnostic approach for creating a digital twin of this additive manufacturing process. In-situ control of the welding process is conducted on a maraging steel alloy, demonstrating that interpass temperature regulation leads to improved as-built surface roughness and more consistent as-built hardness. An evaluation of three distinct weld modes (Pulsed MIG, CMT MIX, and CMT Universal) and two primary process parameters (travel speed and wire feed speed) was conducted to identify suitable process windows for welding the maraging alloy. Single-track welds for each parameter and weld mode combination were produced and evaluated against current weld bead metrics in the literature. Non-destructive profilometry and destructive characterization were performed on the single-track welds to evaluate geometric features such as wetting angle, dilution percentage, and cross-sectional area. In addition, the role of material feed rate on heat input and cross-sectional area was examined in relation to the as-built hardness. The ROS 2 digital twin provides a visualization environment to monitor and record real-time data from a variety of sensors, including robot position, weld data, and thermal camera images. Point cloud data is visualized in real time to provide insight into the captured weld metadata. Capturing in-situ data from the wire arc directed energy deposition process is critical to establishing an improved understanding of the process for parameter optimization and tool path planning, both of which are required to build repeatable, quality components. This work presents an open-source method to capture multi-modal data in a shared environment for improved data capture, sharing, synchronization, and visualization. The digital twin provides users with enhanced process control capabilities and greater flexibility by using ROS 2 as a middleware to provide interoperability between sensors and machines.
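A minimal sketch of the kind of closed-loop interpass-temperature node that ROS 2 enables is shown below: it subscribes to a temperature estimate derived from the infrared camera and publishes a hold/resume flag for the weld controller. The topic names, message types, and threshold value are hypothetical placeholders, not the interfaces of this framework.

```cpp
// Minimal ROS 2 (rclcpp) sketch of closed-loop interpass temperature
// regulation: hold the next weld pass until the measured interpass
// temperature drops below a limit. Topics, types, and the limit are
// illustrative assumptions only.
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/bool.hpp"
#include "std_msgs/msg/float64.hpp"

class InterpassRegulator : public rclcpp::Node {
public:
  InterpassRegulator() : Node("interpass_regulator") {
    // Interpass temperature limit in degrees Celsius (placeholder value).
    limit_c_ = declare_parameter<double>("interpass_limit_c", 150.0);
    hold_pub_ = create_publisher<std_msgs::msg::Bool>("weld/hold", 10);
    temp_sub_ = create_subscription<std_msgs::msg::Float64>(
        "ir_camera/interpass_temperature_c", 10,
        [this](const std_msgs::msg::Float64& msg) {
          std_msgs::msg::Bool hold;
          hold.data = msg.data > limit_c_;  // true = wait before the next pass
          hold_pub_->publish(hold);
          RCLCPP_INFO(get_logger(), "interpass %.1f C -> %s", msg.data,
                      hold.data ? "HOLD" : "WELD");
        });
  }

private:
  double limit_c_;
  rclcpp::Publisher<std_msgs::msg::Bool>::SharedPtr hold_pub_;
  rclcpp::Subscription<std_msgs::msg::Float64>::SharedPtr temp_sub_;
};

int main(int argc, char** argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<InterpassRegulator>());
  rclcpp::shutdown();
  return 0;
}
```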
118

Exploração robótica ativa usando câmera de profundidade / Active robotic exploration using depth camera

Viecili, Eduardo Brendler 17 March 2014 (has links)
Mobile robots should be able to search autonomously and efficiently, exploring the environment and recognizing objects. This work developed a mobile robot capable of searching for a 3D object in an unknown environment, using only a depth (RGB-D) camera as its sensor and executing an active-vision strategy. The Microsoft Kinect was adopted as the sensor, and a mobile robot (XKBO) was built using the Robot Operating System (ROS), with its architecture adapted from the STANAG 4586 standard. Existing algorithms could be reused to recognize 3D objects with the Kinect thanks to the tools available in ROS, and the Kinect also simplified the generation of maps of the environment. A new active exploration strategy was developed that considers both the effort required for the robot to move to frontier regions (occluded areas) and the presence of traces of the target object. The metrics used demonstrate that depth cameras have potential for visual search tasks because they associate visual and depth information, allowing the robot to better understand the environment and the target object of the search. Keywords: Mobile Robot. Exploration. Visual Search. RGB-D Camera. / Supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico.
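The exploration strategy summarized above, trading off the effort of reaching a frontier against evidence that the target object lies near it, can be sketched as a simple frontier-scoring rule. The scoring formula and weights below are illustrative assumptions, not the thesis's exact strategy.

```cpp
// Sketch of an active-exploration frontier selection rule: prefer frontiers
// with strong evidence of the target and low travel effort. Weights and the
// scoring formula are illustrative assumptions only.
#include <cstdio>
#include <vector>

struct Frontier {
  double path_cost;        // e.g., planned travel distance to the frontier
  double object_evidence;  // e.g., detector score for traces of the target
};

// Higher score = more attractive frontier to visit next.
double score(const Frontier& f, double w_evidence = 2.0, double w_cost = 1.0) {
  return w_evidence * f.object_evidence - w_cost * f.path_cost;
}

int pickNextFrontier(const std::vector<Frontier>& frontiers) {
  int best = -1;
  double best_score = -1e300;
  for (int i = 0; i < static_cast<int>(frontiers.size()); ++i) {
    double s = score(frontiers[i]);
    if (s > best_score) { best_score = s; best = i; }
  }
  return best;  // index of the frontier to explore next, or -1 if none
}

int main() {
  std::vector<Frontier> frontiers = {{3.0, 0.1}, {5.5, 0.9}, {1.2, 0.0}};
  std::printf("next frontier: %d\n", pickNextFrontier(frontiers));
}
```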
119

Modernisering av ett 3D-scanningssystem : Utmaningar och lärdomar av ett projekt / Modernizing a 3D Scanning System : Challenges and Lessons Learned

Haavisto, Felix, Henriksson, Henrik, Hätty, Niklas, Jansson, Johan, Petersen, Fabian, Pop, David, Ringdahl, Viktor, Svensson, Sara January 2016 (has links)
A control system for 3D scanning has been modernized by a project group of nine people. During development, a work process similar to the waterfall model was followed. The process worked well, in part because the group drew on both previous and new experience to improve its way of working. The developed system replaces an earlier Matlab-based control system but keeps the same basic hardware: a depth camera, a linear unit, and a rotation table. With this hardware, the system enables 3D scanning of smaller objects. The control system is built with Python and ROS, the Robot Operating System. The choice of ROS led to a complex architecture because of differences between the system requirements of ROS and those of the hardware drivers; without those requirements, ROS is believed to have been an excellent choice. The implemented architecture is compared with an alternative hypothetical architecture, which showed lower complexity and greater portability but is not as easy to use together with other ROS systems. Modularity, extensibility, and robustness were the focus during development. Although the complete system is not as robust as desired, the individual modules are considered to show the desired level of robustness, and the system exhibits a high degree of modularity. The thoroughly documented code, together with the well-separated modules, should make the system easy to develop further.
120

  • Bilddiagnostik av barn och ungdomar vid skolios : En jämförande litteraturstudie mellan konventionell röntgen, datortomografi och entrepreneurial operating system / Imaging diagnostics of children and adolescents with scoliosis : A comparative literature study between conventional X-ray, computed tomography and entrepreneurial operating system

Söderberg, Ida, Najm, Van January 2023 (has links)
Background: Conventional radiography, computed tomography (CT), and the entrepreneurial operating system (EOS) are three radiological examination methods that can be used for imaging diagnostics of scoliosis in children and adolescents. The examinations can be carried out in a lying or standing position when assessing the deformation of the spine; however, all three modalities emit ionizing radiation during imaging. Purpose: The purpose of this literature study was to compare conventional radiography, CT, and EOS for imaging diagnostics of scoliosis, with a focus on children and adolescents. Method: To answer the purpose, a literature study with a systematic approach was conducted. Studies were searched for in the PubMed and Cinahl databases. Results: Two out of three studies showed that the Cobb angle tends to decrease from standing conventional and EOS examinations to supine CT examinations. Furthermore, both CT and EOS can generate 3D reconstructions to facilitate the assessment of the deformation. The effective radiation dose can be reduced for all three modalities, but there is a tendency for image quality to deteriorate during optimization. Conclusion: Conventional radiography, CT, and EOS have advantages and disadvantages in terms of imaging diagnostics and radiation dose. All three modalities can be used to assess spinal deformity and for pre- and postoperative evaluation, but there are differences in radiation dose and image quality. Therefore, further research is required to investigate the accessibility of the modalities and their suitability from a patient perspective.
