551

A Qualitative Event-based Approach to Fault Diagnosis of Hybrid Systems

Daigle, Matthew John 21 April 2008 (has links)
Fault diagnosis is crucial for ensuring the safe operation of complex engineering systems. Many present-day systems combine physical and computational processes and are best modeled as hybrid systems, whose dynamic behavior combines continuous evolution interspersed with discrete configuration changes. Due to the complexity of such modern engineering systems, formal methods are required for the reliable and correct design, analysis, and implementation of hybrid system diagnosers.

This dissertation presents a systematic, model-based approach to event-based diagnosis of hybrid systems based on qualitative abstractions of deviations from nominal behavior. The primary contributions of this work center on (i) incorporating relative measurement orderings, which describe predicted temporal orderings of measurement deviations, into fault isolation for continuous and hybrid systems, (ii) providing algorithms for event-based diagnosis of single and multiple faults, (iii) developing an integrated framework for diagnosis of parametric, sensor, and discrete (i.e., switching) faults in hybrid systems, and (iv) developing and implementing an efficient event-based diagnosis framework for continuous and hybrid systems that enables automatic design of event-based diagnosers and establishes notions of diagnosability for continuous and hybrid systems.

The effectiveness of the approach is demonstrated on two practical systems. First, the single-fault diagnosis method for continuous systems is applied in a distributed fashion to formations of mobile robots. The results include a formal diagnosability analysis, scalability results, and experiments performed on a formation of robots. Second, the approach developed for hybrid systems diagnosis is applied to the Advanced Diagnostics and Prognostics Testbed, a complex electrical distribution system for spacecraft and aircraft applications. The results focus on a subset of the testbed, and include a diagnosability analysis, experiments from the actual testbed, and detailed simulation experiments that examine the performance of the diagnosis algorithms for different fault magnitudes and noise levels.
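As a rough illustration of the isolation step, the sketch below (Python, with hypothetical fault signatures and orderings, not taken from the dissertation) shows how qualitative deviation signatures and relative measurement orderings jointly prune the fault hypothesis set as measurement deviations are observed.

```python
# Sketch of qualitative event-based fault isolation (illustrative only; the
# signatures and orderings here are hypothetical, not from the dissertation).
# Each fault maps to: (a) a predicted qualitative deviation (+/-/0) per
# measurement, and (b) relative measurement orderings, i.e., pairs (m1, m2)
# meaning m1 must deviate before m2 if this fault is present.

FAULT_SIGNATURES = {
    "pump_blockage":  {"flow": "-", "pressure": "+", "temp": "0"},
    "tank_leak":      {"flow": "-", "pressure": "-", "temp": "0"},
    "heater_failure": {"flow": "0", "pressure": "0", "temp": "-"},
}

MEASUREMENT_ORDERINGS = {
    "pump_blockage":  [("pressure", "flow")],   # pressure deviates first
    "tank_leak":      [("flow", "pressure")],
    "heater_failure": [],
}

def isolate(observed_events):
    """observed_events: list of (measurement, deviation) in detection order."""
    candidates = set(FAULT_SIGNATURES)
    seen = []
    for meas, dev in observed_events:
        # Signature check: the predicted deviation must match the observation.
        candidates = {f for f in candidates
                      if FAULT_SIGNATURES[f][meas] == dev}
        # Ordering check: any measurement a candidate predicts to deviate
        # before this one must already have been seen.
        candidates = {f for f in candidates
                      if all(first in seen
                             for first, second in MEASUREMENT_ORDERINGS[f]
                             if second == meas)}
        seen.append(meas)
    return candidates

print(isolate([("pressure", "+"), ("flow", "-")]))  # {'pump_blockage'}
```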
552

A THREAD-SAFE IMPLEMENTATION OF A META-PROGRAMMABLE DATA MODEL

Balasubramanian, Daniel Allen 25 April 2008 (has links)
This thesis describes the design and implementation of a thread-safe meta-programmable data model that can be used in a multi-threaded environment without the need for user-defined synchronization. The locking mechanisms used to provide thread safety are described, and a proof of deadlock freedom for the data model is provided. A case study is presented in which the data model is used to implement the data structures for sequential and parallelized versions of an algorithm, and the performance using multiple threads is measured against the ideal possible speedup for the parallelized algorithm.
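A standard way to obtain the kind of deadlock freedom the thesis proves is a global lock-ordering discipline; the sketch below (Python) illustrates that generic technique, not the thesis's actual locking mechanism.

```python
import threading

# Sketch of a lock-ordering discipline for deadlock freedom (a generic
# technique, assumed for illustration; not the thesis's actual scheme).
# Every model object carries a unique id; operations that touch several
# objects always acquire their locks in ascending id order, so a circular
# wait -- the necessary condition for deadlock -- can never arise.

class ModelObject:
    _next_id = 0
    _id_lock = threading.Lock()

    def __init__(self, name):
        with ModelObject._id_lock:
            self.id = ModelObject._next_id
            ModelObject._next_id += 1
        self.name = name
        self.lock = threading.Lock()
        self.attrs = {}

def locked_update(objects, update_fn):
    """Atomically apply update_fn to several objects without deadlock."""
    ordered = sorted(objects, key=lambda o: o.id)  # global acquisition order
    for obj in ordered:
        obj.lock.acquire()
    try:
        update_fn(objects)
    finally:
        for obj in reversed(ordered):
            obj.lock.release()

a, b = ModelObject("a"), ModelObject("b")
# Two threads updating (a, b) and (b, a) cannot deadlock: both lock a first.
locked_update([b, a], lambda objs: objs[0].attrs.update(ref=objs[1].name))
```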
553

DEPLOYMENT AND CONFIGURATION OF COMPONENT-BASED DISTRIBUTED, REAL-TIME AND EMBEDDED SYSTEMS

Deng, Gan 27 February 2008 (has links)
Component-based software engineering (CBSE) is increasingly being adopted for large-scale software systems, particularly for large-scale distributed real-time and embedded (DRE) systems. One of the most challenging -- and often most neglected -- problems in CBSE for large-scale DRE systems is the system deployment and configuration (D&C) process, where the increasing heterogeneity and versatility of application domains requires support for an unprecedented level of configurability and adaptability. Existing D&C middleware technologies suffer from three major challenges: (1) insufficient ability to evolve in the face of quality of service (QoS) diversification due to the interaction with systemic concerns imposed by a wide range of application requirements, (2) the complexity of integrating, configuring, and deploying different real-time publish/subscribe services in QoS-enabled component middleware, and (3) ensuring the predictability of D&C in the face of complex dependency relationships among components in large-scale DRE systems. This dissertation provides three contributions to D&C middleware research for large-scale component-based DRE systems. First, we describe the design and implementation of the Deployment And Configuration Engine (DAnCE), a QoS-enabled component D&C framework that alleviates key inherent and accidental complexities in the automated D&C of component-based DRE systems. Our results show that DAnCE provides an effective platform for deploying DRE system components using a standard runtime environment and metadata. Second, we evaluate different architectural design choices for integrating, configuring, and deploying publish/subscribe services in component middleware for DRE systems, and develop a combined solution approach based on a pattern language and model-driven engineering (MDE) techniques. Our results reveal that both the performance and the scalability of our design and implementation are comparable to their object-oriented counterpart, which provides key guidance on the suitability of component technologies for DRE systems. Third, we describe how we identify key sources of deployment-time priority inversion and how we apply a multigraph-based algorithm called PARIGE to avoid it. Our results show that PARIGE incurs negligible (~1%) D&C performance overhead and avoids unbounded deployment-time priority inversion when operational strings with different priorities have dependencies on each other, thereby significantly improving the responsiveness of high-priority mission-critical tasks. The results presented in this dissertation show that the capabilities provided by the DAnCE framework -- combined with its associated component middleware infrastructure and MDE tools -- significantly improve the reusability of components and the productivity of the D&C process for component-based DRE systems compared to conventional D&C approaches. Moreover, the PARIGE algorithm in DAnCE significantly improves the predictability of D&C for large-scale component-based DRE systems compared to conventional D&C QoS assurance techniques.
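The dependency-ordering aspect can be illustrated with a small sketch (Python). This is a generic priority-aware topological scheduler, not the PARIGE algorithm itself; the components, priorities, and dependencies below are hypothetical.

```python
import heapq

# Generic sketch of priority-aware deployment ordering (illustrative; this
# is NOT the PARIGE algorithm from the dissertation). Components deploy in
# dependency order, and among ready components the highest-priority one goes
# first, limiting how long high-priority work waits behind low-priority work.

def deployment_order(components, deps):
    """components: {name: priority} (lower number = higher priority).
    deps: list of (before, after) pairs; 'before' must deploy first."""
    pending = {c: 0 for c in components}          # unmet-dependency counts
    dependents = {c: [] for c in components}
    for before, after in deps:
        pending[after] += 1
        dependents[before].append(after)
    ready = [(components[c], c) for c, n in pending.items() if n == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, comp = heapq.heappop(ready)            # highest-priority ready
        order.append(comp)
        for d in dependents[comp]:
            pending[d] -= 1
            if pending[d] == 0:
                heapq.heappush(ready, (components[d], d))
    return order

# Hypothetical operational strings: 'sensor' feeds both a high-priority
# tracker and a low-priority logger.
print(deployment_order(
    {"sensor": 1, "tracker": 0, "logger": 5},
    [("sensor", "tracker"), ("sensor", "logger")]))
# ['sensor', 'tracker', 'logger'] -- the tracker is not delayed by the logger
```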
554

Verification of Model Transformations

Narayanan, Anantha 14 May 2008 (has links)
Model-based software development projects are typically built over tool chains that tie modeling, verification, and code generation together. Models must usually be transformed to the appropriate notation at each step. These transformations are often automated, and their correctness is crucial to the success of the development effort. Model-based development of high-consequence, critical systems must address the need for verifiable model transformations and code generators that give assurances for the logical consistency of the toolchain. Verifying the correctness of model transformations is, in general, as difficult as verifying compilers for high-level languages. However, the limitations enforced by the syntactic and semantic rules of domain-specific languages allow us to reason about the properties of models within a restricted framework. In transformations between domain-specific models, the preservation of certain model properties can also be reasoned about within the same restricted framework. This research presents initial ideas for a certification framework for model transformations that can guarantee the correctness of a particular execution of the transformation.
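The certification idea (checking each execution rather than proving the transformation correct for all inputs) can be sketched as follows. The flattening transformation and reachability property in this Python sketch are hypothetical examples, not the thesis's case study.

```python
# Sketch of instance-level transformation certification (a general pattern
# consistent with the abstract, not the thesis's actual framework): each
# execution is certified by evaluating a preserved property on the source
# and target models of that particular run.

def certify(transform, property_src, property_tgt, source_model):
    """Run the transformation, then certify this execution by confirming
    the property holds in the target whenever it holds in the source.
    Returns (target_model, certified?)."""
    target_model = transform(source_model)
    certified = (not property_src(source_model)) or property_tgt(target_model)
    return target_model, certified

# Hypothetical example: a statechart-flattening transformation must preserve
# reachability of an 'error' state.
def flatten(statechart):
    return {"states": set(statechart["states"]), "flat": True}  # toy transform

src = {"states": {"init", "run", "error"}}
tgt, ok = certify(flatten,
                  property_src=lambda m: "error" in m["states"],
                  property_tgt=lambda m: "error" in m["states"],
                  source_model=src)
print(ok)  # True: this execution is certified for the reachability property
```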
555

An Analysis of Software Quality and Maintainability Metrics with an Application to a Longitudinal Study of the Linux Kernel.

Thomas, Lawrence Gray 15 July 2008 (has links)
Coupling between software modules has been shown to be deleterious to both software quality and maintainability. Common coupling (two or more modules sharing a global variable, or GV) is a particularly strong, and therefore undesirable, form of coupling. We examined 83 production (stable) versions of the Linux kernel. Previous studies have determined that the size of the Linux kernel (as measured in lines of code) grows linearly and the number of instances of common coupling grows exponentially with respect to version number. We found that, because Linux kernels are released at irregular intervals, both the size of the Linux kernel and the number of instances of common coupling grow linearly with respect to release date. We also measured correlations between common coupling and Oman's maintainability index as well as two of Halstead's Software Science metrics (volume and effort). The general validity of Oman's and Halstead's metrics has long been questioned. We performed a longitudinal study on three distinct series of the Linux kernel, in an attempt to see whether these metrics were valid within multiple consecutive versions of the same program. All of the metrics we examined were valid in one series, but not in the other two. Of all the metrics we examined, only lines of code maintained a significant linear correlation with the number of instances of common coupling in all cases. Rather than analyzing the code in only the /kernel/ subdirectory, we built complete, executable kernels and analyzed all of the source code used to build each kernel. Doing so required us to develop an algorithm for selecting consistent kernel configurations across all versions of the kernel we analyzed. We used a variety of open-source software tools (and one proprietary CASE tool), and built several specialized tools to link the existing tools, resulting in an end-to-end solution for the analysis of the various software metrics we gathered. Newer tools cannot be used to build the older versions of the kernel, and some of the code had to be modified in order to build each kernel successfully. We carefully documented these changes to assist future researchers.
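For reference, the metrics named in the abstract have standard textbook formulations; the sketch below (Python) computes them from hypothetical operator/operand counts. The thesis may use per-module averages or variant coefficients.

```python
import math

# Classic Halstead and maintainability-index formulas referenced in the
# abstract (standard textbook forms; the thesis may use variants).

def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total operators/operands."""
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)      # Halstead volume V
    difficulty = (n1 / 2.0) * (N2 / n2)          # Halstead difficulty D
    effort = difficulty * volume                 # Halstead effort E = D * V
    return volume, difficulty, effort

def maintainability_index(avg_volume, avg_cyclomatic, avg_loc):
    """Oman's maintainability index (original 171-point formulation)."""
    return (171.0
            - 5.2 * math.log(avg_volume)
            - 0.23 * avg_cyclomatic
            - 16.2 * math.log(avg_loc))

V, D, E = halstead(n1=20, n2=35, N1=120, N2=90)  # hypothetical counts
print(round(V), round(E))                        # toy module's volume, effort
print(round(maintainability_index(V, 6.0, 150), 1))
```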
556

Design and Evaluation of Methods for Motor Exploration in Large Virtual Environments with Head-Mounted Display Technology

Williams, Betsy 06 August 2008 (has links)
Virtual environments provide people with the opportunity to experience environments remote from their actual physical surroundings. Virtual environment systems could have a huge impact in education, entertainment, medicine, architecture, and training, but they are not widely used because of their expense and delicacy. Current interfaces to virtual environments, such as treadmills, tracked head-mounted displays (HMDs), and joysticks, suffer from being expensive, limited, and/or disorienting. Since HMD systems hold the promise of being readily available to the public within the next several years, the constraints of these systems need to be identified and addressed. In particular, a major drawback of HMD-based systems is the likely limited amount of space available for exploration. Thus, this thesis presents the development and evaluation of a system that allows people to explore a large virtual environment with an HMD when the size of the surrounding physical space is small. More specifically, this thesis focuses on exploring an HMD-based virtual environment by physically walking, i.e., bipedal locomotion. Bipedal locomotion is a highly effective method for learning the locations of things when exploring virtual environments, and seems to result in better spatial orientation than other locomotor interfaces such as joysticks. Bipedal locomotion within a virtual environment is easily accomplished as long as the physical space housing the tracking system and HMD is roughly the same size as the virtual environment. The issue becomes how to fit physical bipedal locomotion through a large virtual environment into a much smaller physical space while preserving the user's spatial orientation. This thesis thus develops engineering solutions that allow people to explore virtual environments much larger than the physical tracked space using their own locomotion. We explore several engineering solutions to the design of this human-computer interface while using psychological experimentation to evaluate these techniques. The techniques presented in this thesis can be implemented on, and scale easily to, any tracked HMD system.
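One widely used family of techniques for this problem scales physical motion before applying it to the virtual viewpoint; the sketch below (Python) shows translation gain in its simplest form. This is a generic illustration, not necessarily one of the specific methods the thesis evaluates.

```python
# Sketch of translation gain, a generic technique for fitting a large
# virtual environment into a small tracked space (illustrative; the thesis
# develops and evaluates its own methods). Physical head displacement is
# scaled by a gain factor before being applied to the virtual viewpoint.

def apply_translation_gain(prev_phys, curr_phys, virt_pos, gain=2.0):
    """Map a physical head movement to a scaled virtual movement.
    Positions are (x, z) tuples in meters; with gain=2, a 5 m x 5 m
    tracked space covers a 10 m x 10 m virtual room."""
    dx = curr_phys[0] - prev_phys[0]
    dz = curr_phys[1] - prev_phys[1]
    return (virt_pos[0] + gain * dx, virt_pos[1] + gain * dz)

# One simulated tracker update: the user walks 0.5 m forward physically
# and travels 1.0 m in the virtual environment.
virt = apply_translation_gain((0.0, 0.0), (0.0, 0.5), (3.0, 3.0))
print(virt)  # (3.0, 4.0)
```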
557

Adaptive Resource Management Algorithms, Architectures, and Frameworks for Distributed Real-time Embedded Systems

Shankaran, Nishanth 27 October 2008 (has links)
There is an increasing demand for adaptive capabilities in distributed real-time and embedded (DRE) systems that execute in open environments where system operational conditions, input workload, and resource availability cannot be characterized accurately a priori. A challenging problem faced by researchers and developers of such systems is devising effective adaptive resource management strategies that can meet the end-to-end quality of service (QoS) requirements of applications. To address this challenge, this dissertation presents three contributions to research on adaptive resource management for DRE systems. First, it presents the Hierarchical Distributed Resource-management Architecture (HiDRA), which provides adaptive resource management using control techniques that enable the system to adapt to workload fluctuations and resource availability for both bandwidth and processor utilization simultaneously. Second, it describes the structure and functionality of the Resource Allocation and Control Engine (RACE), an open-source adaptive resource management framework built atop standards-based QoS-enabled component middleware. Third, it presents three representative DRE system case studies where RACE has been successfully applied. These case studies demonstrate and evaluate the effectiveness of RACE in the context of representative DRE systems.
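The control-technique idea can be illustrated with a minimal proportional controller (Python). This is a generic sketch under assumed parameters, not HiDRA's or RACE's actual control design.

```python
# Minimal sketch of control-based adaptive resource management (a generic
# proportional controller, assumed for illustration; not HiDRA or RACE).
# The controller scales an application's invocation rate so measured
# processor utilization tracks a setpoint despite workload fluctuations.

def control_step(measured_util, rate, setpoint=0.7, k_p=0.5,
                 rate_bounds=(1.0, 100.0)):
    """One sampling period: adjust the task rate (Hz) proportionally to the
    utilization error, clamped to the application's feasible rate range."""
    error = setpoint - measured_util
    new_rate = rate * (1.0 + k_p * error)
    return max(rate_bounds[0], min(rate_bounds[1], new_rate))

# Simulated run: utilization is (unknown to the controller) proportional
# to the rate; the loop converges toward the 70% setpoint.
rate, util_per_hz = 40.0, 0.02        # 40 Hz -> 80% utilization initially
for period in range(5):
    util = util_per_hz * rate
    rate = control_step(util, rate)
    print(f"period {period}: util={util:.2f} rate={rate:.1f} Hz")
```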
558

Optimizing the Configuration of Software Product Line Variants

White, Christopher Jules 17 November 2008 (has links)
Software Product Lines (SPLs) are software architectures that provide reusable components that can be configured into variants to meet different requirement sets. A key part of an SPL is a specification of the rules governing how the reusable components can be configured into variants. One of the most widely used modeling techniques for capturing these configuration rules is feature modeling. This dissertation describes a research approach to addressing the challenges of configuring and optimizing SPL variants. We show that constraint programming techniques can be used to select optimal or good feature configurations from feature models. Furthermore, we show how these constraint-based automation techniques can be used to provide modeling guidance that improves manual modeling steps. Finally, we show that a key missing component of SPL automation is the ability to automatically diagnose SPL configuration errors and offer good remedies, and we provide a constraint-based diagnosis method for identifying SPL configuration errors.
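A toy version of constraint-based variant optimization is sketched below (Python, exhaustive search over a hypothetical four-feature model; real approaches, as the dissertation notes, use constraint solvers rather than enumeration).

```python
from itertools import product

# Sketch of feature selection as constraint-based optimization over a
# hypothetical feature model (illustrative only). Constraints are
# 'requires' and 'excludes' pairs; the objective maximizes total value
# subject to a cost budget. Each feature maps to (cost, value).

FEATURES = {"gps": (4, 10), "wifi": (3, 7), "maps": (5, 6), "cheap_cpu": (1, 2)}
REQUIRES = [("maps", "gps")]          # selecting maps requires gps
EXCLUDES = [("gps", "cheap_cpu")]     # gps and cheap_cpu are incompatible
BUDGET = 12

def valid(sel):
    return (all(not sel[a] or sel[b] for a, b in REQUIRES)
            and all(not (sel[a] and sel[b]) for a, b in EXCLUDES)
            and sum(FEATURES[f][0] for f in sel if sel[f]) <= BUDGET)

def best_variant():
    names = list(FEATURES)
    best, best_value = None, -1
    for bits in product([False, True], repeat=len(names)):
        sel = dict(zip(names, bits))
        if valid(sel):
            value = sum(FEATURES[f][1] for f in sel if sel[f])
            if value > best_value:
                best, best_value = sel, value
    return {f for f in best if best[f]}, best_value

print(best_variant())  # ({'gps', 'wifi', 'maps'}, 23) at cost 12
```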
559

Reusable Model Transformation Techniques for Automating Middleware QoS Configuration in Distributed Real-time and Embedded Systems

Kavimandan, Amogh 11 December 2008 (has links)
Contemporary component middleware platforms provide a high degree of flexibility and configurability to support the development and operational life-cycles of distributed real-time and embedded (DRE) systems. This flexibility of component middleware, however, can also complicate DRE system configuration, since assuring a system's quality of service (QoS) properties requires mapping its QoS requirements onto the right set of configuration options of the underlying middleware platform. This dissertation provides the following contributions to the development of component-based DRE systems. First, it describes the design and implementation of the Quality of service pICKER (QUICKER) model-driven toolchain, which combines domain-specific modeling, used to capture system QoS requirements at domain-level abstractions and thereby simplify the QoS requirements specification process, with model transformations that automate the mapping of these requirements to middleware-specific QoS options. Second, it evaluates the domain-specific requirements modeling and the generated QoS configurations in the context of representative DRE systems. Finally, it describes a templatized model transformation technique for developing general-purpose, platform-independent QoS mappings that can operate across multiple platforms.
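The mapping step can be sketched as a per-platform template table (Python). The platform names and configuration options below are hypothetical placeholders, not QUICKER's actual mappings.

```python
# Sketch of a templatized QoS mapping (illustrative; option names and
# platforms here are hypothetical, not QUICKER's actual mappings). One
# platform-independent requirement maps to middleware-specific configuration
# options through a per-platform template table.

QOS_TEMPLATES = {
    "middleware_a": {"latency_ms": "banded_connection(max_latency={v})",
                     "importance": "transport_priority({v})"},
    "middleware_b": {"latency_ms": "deadline_qos(period_ms={v})",
                     "importance": "thread_priority({v})"},
}

def map_qos(requirements, platform):
    """Transform domain-level requirements {name: value} into a list of
    middleware-specific configuration directives for the chosen platform."""
    templates = QOS_TEMPLATES[platform]
    return [templates[name].format(v=value)
            for name, value in requirements.items()
            if name in templates]

reqs = {"latency_ms": 20, "importance": 5}
print(map_qos(reqs, "middleware_a"))  # platform-specific options
print(map_qos(reqs, "middleware_b"))  # same requirements, different platform
```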
560

Car Detection by Classification of Image Segments

Halpenny, Robert Morgan 30 December 2008 (has links)
Thesis under the direction of Professor Xenofon D. Koutsoukos. Object detection is one of the most important unsolved problems in computer vision. Even recognizing a single class of object (such as a car) is complicated by object variation, changes in lighting and perspective, and object occlusion. In this approach, we attempt to detect cars in images of city streets by classifying image segments based on SIFT keypoints, which provide image features that are strongly resistant to changes in perspective and lighting. We learn the most predictive keypoints by applying Bayesian inductive learning via the HITON-PC algorithm. Each segment is classified by a Support Vector Machine (SVM) using the keypoints in and adjacent to the segment. Classifying segments allows us to greatly reduce the scope of the classification effort while retaining high cohesion among intra-segment keypoints, yielding a fast and highly predictive detection algorithm.
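A generic version of this pipeline can be sketched with OpenCV and scikit-learn (Python). The k-means bag-of-keypoints vocabulary below is a stand-in; the thesis instead selects predictive keypoints with HITON-PC, and all data names here are hypothetical.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Sketch of segment classification from SIFT keypoints (a generic
# bag-of-keypoints pipeline, assumed for illustration; the thesis selects
# predictive keypoints with HITON-PC rather than a k-means vocabulary).

sift = cv2.SIFT_create()

def image_keypoints(image):
    """SIFT keypoints and 128-d descriptors for one BGR image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return sift.detectAndCompute(gray, None)

def segment_histogram(keypoints, words, mask, k):
    """Histogram of quantized keypoints falling inside one segment mask."""
    hist = np.zeros(k)
    for kp, w in zip(keypoints, words):
        x, y = int(kp.pt[0]), int(kp.pt[1])
        if mask[y, x]:                      # keypoint lies in this segment
            hist[w] += 1
    return hist / max(hist.sum(), 1.0)      # normalize

def train_detector(labeled_segments, k=200):
    """labeled_segments: hypothetical list of (image, segment_mask, is_car)
    triples from annotated street scenes. Returns (vocabulary, classifier)."""
    all_desc = np.vstack([image_keypoints(img)[1]
                          for img, _, _ in labeled_segments])
    vocab = KMeans(n_clusters=k, n_init=10).fit(all_desc)
    X, y = [], []
    for img, mask, is_car in labeled_segments:
        kps, desc = image_keypoints(img)
        words = vocab.predict(desc)
        X.append(segment_histogram(kps, words, mask, k))
        y.append(int(is_car))
    clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
    return vocab, clf
```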
