  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Research On Transfer Alignment For Increased Speed And Accuracy

Kayasal, Ugur 01 September 2012 (has links) (PDF)
In this thesis, a rapid transfer alignment algorithm for a helicopter-launched guided munition is studied. Transfer alignment is the process of initializing a guided munition's inertial navigation system with the aid of the carrier platform's navigation system, generally done by comparing the missile's navigation data with the carrier's. In the literature, there are various studies of transfer alignment, especially for aircraft-launched munitions. One important problem in transfer alignment is the attitude uncertainty of the lever arm between the munition's and the carrier's navigation systems. To overcome this problem, most studies in the literature do not use the carrier's attitude data in the transfer alignment; only velocity data are used. To estimate the attitude and the related inertial sensor errors, specific maneuvers of the carrier platform are required, which can take 1-5 minutes. The purpose of this thesis is to compensate for the errors arising from the helicopter's dynamics, the lever arm, mechanical vibration effects, and inertial sensor error amplification, and thus to design a transfer alignment algorithm for real environment conditions. The algorithm design begins with an observability analysis, which has not previously been done for helicopter transfer alignment in the literature. To make proper compensations, the helicopter's vibration and lever-arm environment is characterized and modeled. Vibration-induced errors of MEMS-based inertial sensors are also demonstrated experimentally. The developed transfer alignment algorithm is tested with simulated and experimental data.
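The velocity-matching idea described above, where a Kalman filter fuses the velocity difference between munition and carrier, can be sketched in miniature. This is a hypothetical one-axis toy: the function name `velocity_matching_kf`, the random-walk angle model, and all numerical values are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def velocity_matching_kf(vel_diff_meas, dt=0.01, accel=9.81,
                         q=1e-8, r=0.01):
    """Toy scalar Kalman filter estimating a single misalignment angle
    (rad) from velocity-difference measurements.

    Simplified measurement model (an assumption for illustration):
    each velocity-difference sample z is approximately accel*dt*angle
    plus noise, i.e. misalignment couples specific force into a
    velocity error over one sample interval.
    """
    x, p = 0.0, 1.0              # initial angle estimate and variance
    for z in vel_diff_meas:
        p = p + q                # predict: angle modeled as a random walk
        h = accel * dt           # sensitivity of z to the angle
        k = p * h / (h * h * p + r)   # Kalman gain
        x = x + k * (z - h * x)       # measurement update
        p = (1.0 - k * h) * p
    return x, p
```

With noiseless synthetic data the estimate converges to the true angle, which is the basic mechanism velocity matching relies on; the real algorithm uses full 3-D states and lever-arm compensation.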
22

Observability and Economic aspects of Fault Detection and Diagnosis Using CUSUM based Multivariate Statistics

Bin Shams, Mohamed January 2010 (has links)
This project focuses on the fault observability problem and its impact on plant performance and profitability. The study has been conducted along two main directions. First, a technique has been developed to detect and diagnose faulty situations that could not be observed by previously reported methods. The technique is demonstrated on a subset of faults typically considered for the Tennessee Eastman Process (TEP), which have been found unobservable in all previous studies. The proposed strategy combines the cumulative sum (CUSUM) of the process measurements with Principal Component Analysis (PCA). The CUSUM is used to accentuate faults under small fault-to-noise ratios, while PCA facilitates the filtering of noise in the presence of highly correlated data. Multivariate indices, namely T2 and Q statistics based on the cumulative sums of all available measurements, were used for observing these faults. The ARLo.c was proposed as a statistical metric to quantify fault observability. Following fault detection, the problem of fault isolation is treated. It is shown that for the particular faults considered in the TEP problem, contribution plots are not able to properly isolate the faults under consideration. This motivates the use of the CUSUM-based PCA technique, previously used for detection, to diagnose the faults unambiguously. The diagnosis scheme constructs a family of CUSUM-based PCA models, one per fault, and then tests whether the statistical thresholds related to a particular faulty model are exceeded, thus indicating the occurrence or absence of the corresponding fault. Although the CUSUM-based techniques were successful in detecting abnormal situations as well as isolating the faults, long time intervals were required for both detection and diagnosis. The potential economic impact of the resulting delays motivates the second main objective of this project.
More specifically, a methodology is developed to quantify the potential economic loss due to unobserved faults when standard statistical monitoring charts are used. Since most chemical and petrochemical plants are operated under a closed-loop scheme, the interaction with the control system is also explicitly considered. An optimization problem is formulated to search for the optimal tradeoff between fault observability and closed-loop performance. This problem is solved in the frequency domain by using approximate closed-loop transfer function models, and in the time domain using a simulation-based approach. The time-domain optimization is applied to the TEP to solve for the optimal tuning parameters of the controllers that minimize an economic cost of the process.
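The two building blocks of the detection stage described above, a CUSUM of the raw measurements followed by a PCA-based Hotelling T2 statistic, can be sketched generically. This is an illustration of the standard techniques, not the author's exact formulation; `cusum`, `t2_statistic`, and the tuning constants are assumptions.

```python
import numpy as np

def cusum(x, target=0.0, k=0.5):
    """One-sided upper CUSUM of a 1-D signal: accumulates deviations
    above target + k, resetting at zero."""
    s = np.zeros(len(x), dtype=float)
    for i in range(1, len(x)):
        s[i] = max(0.0, s[i - 1] + x[i] - target - k)
    return s

def t2_statistic(X, n_comp=2):
    """Hotelling T^2 from a PCA model fit on X (rows = samples),
    using the first n_comp principal components."""
    Xc = X - X.mean(axis=0)
    u, sing, vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
    scores = u[:, :n_comp] * sing[:n_comp]                # component scores
    var = (sing[:n_comp] ** 2) / (len(X) - 1)             # per-component variance
    return np.sum(scores ** 2 / var, axis=1)
```

In the spirit of the abstract, one would run `cusum` column-wise on the measurements first and fit the PCA model (and its T2/Q thresholds) on the cumulative sums, trading detection delay for sensitivity to small faults.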
23

Improved measurement placement and topology processing in power system state estimation

Wu, Yang 02 June 2009 (has links)
State estimation plays an important role in modern power system energy management systems. Network observability is a prerequisite for the state estimation solution, and topological errors in the network model may cause the state estimation results to be seriously biased. This dissertation studies new schemes to improve conventional state estimation in both respects. A new algorithm for cost minimization in measurement placement design is proposed. The new algorithm reduces the cost of measurement installation while retaining network observability. Two levels of measurement placement design are obtained: the basic level guarantees that the whole network is observable using only the voltage magnitude measurement and the branch power flow measurements, while the advanced level keeps the network observable under certain contingencies. To preserve as many substation measurements as possible and maintain network observability, an advanced network topology processor is introduced. A new method, the dynamic utilization of substation measurements (DUSM), is presented. Instead of seeking the installation of new measurements in the system, this method dynamically calculates state estimation measurement values by applying the current law that implicitly relates different measurement values. Its processing is at the substation level and can therefore be implemented independently in substations. This dissertation also presents a new way to verify circuit breaker status and identify topological errors. The new method improves topological error detection using DUSM: without modifying the state estimator, the status of a circuit breaker may still be verified even without direct power flow measurements. Inferred measurements, calculated by DUSM, are used to help decide the CB status.
To reduce future software code maintenance and to provide standard data exchanges, the newly developed functions were implemented in Java, with XML-format input/output support. The effectiveness and applicability of these functions are verified by various test cases.
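A minimal sketch of the observability check that underlies measurement placement is a rank test on the measurement Jacobian. The DC-model simplification below (unit branch reactances, slack bus 0, hypothetical helper `is_observable`) is an assumption for illustration, not the dissertation's algorithm.

```python
import numpy as np

def is_observable(n_bus, flow_meas, injection_meas=()):
    """DC-model observability check: build the measurement Jacobian H
    over bus angles (slack column removed) and test whether it has
    full rank n_bus - 1.

    flow_meas:      iterable of (from_bus, to_bus) branch-flow measurements
    injection_meas: iterable of (bus, neighbor_list) injection measurements
    Branch reactances are folded into the row scaling (taken as 1 here).
    """
    rows = []
    for f, t in flow_meas:
        h = np.zeros(n_bus)
        h[f], h[t] = 1.0, -1.0        # P_ft ~ theta_f - theta_t
        rows.append(h)
    for b, nbrs in injection_meas:
        h = np.zeros(n_bus)
        h[b] = len(nbrs)              # P_b ~ sum over incident branches
        for nb in nbrs:
            h[nb] = -1.0
        rows.append(h)
    H = np.array(rows)[:, 1:]         # drop slack bus (bus 0) column
    return bool(np.linalg.matrix_rank(H) == n_bus - 1)
```

A placement algorithm of the kind described in the abstract would search over candidate meter sets, keeping cost minimal subject to this rank condition holding (and, at the advanced level, holding under contingencies).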
24

Quantitative Measures Of Observability For Stochastic Systems

Subasi, Yuksel 01 February 2012 (has links) (PDF)
The observability measure based on the mutual information between the last state and the measurement sequence, originally proposed by Mohler and Hwang (1988), is analyzed in detail and improved further for linear time-invariant discrete-time Gaussian stochastic systems by extending the definition to the observability measure of a state sequence. Using the new observability measure, it is shown that the unobservable states of the deterministic system have no effect on this measure, and that any observable part with no measurement uncertainty makes it infinite. Other distance measures, i.e., the Bhattacharyya and Hellinger distances, are also investigated as candidate observability measures. The relationships between the observability measures and the covariance matrices of the Kalman filter and of the state sequence conditioned on the measurement sequence are derived. Steady-state characteristics of the observability measure based on the last state are examined. The observability measures of a subspace of the state space, of an individual state, and of the modes of the system are investigated. One of the results obtained in this part is that deterministically unobservable states may have nonzero observability measures. The observability measures based on mutual information are represented recursively and calculated for nonlinear stochastic systems. The measures are then applied to a nonlinear stochastic system using particle filter methods. The arguments given for the LTI case are also observed for nonlinear stochastic systems. The second-moment approximation deviates from the actual values when the nonlinearity in the system increases.
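For linear Gaussian systems, the mutual-information measure discussed above has a convenient closed form: I(x_N; z_1..z_N) = (1/2) ln(det P_unconditional / det P_posterior), where the posterior covariance comes from the Kalman filter. The sketch below assumes this standard formulation; the function name and test system are illustrative, not taken from the thesis.

```python
import numpy as np

def mi_observability(F, H, Q, R, P0, steps=50):
    """Mutual information I(x_N; z_1..z_N) for an LTI Gaussian system,
    computed as 0.5 * (ln det P_unconditional - ln det P_kalman)."""
    P_u = P0.copy()          # unconditional covariance of x_k (no measurements)
    P = P0.copy()            # Kalman filter (posterior) covariance
    n = len(P0)
    for _ in range(steps):
        P_u = F @ P_u @ F.T + Q
        P = F @ P @ F.T + Q                       # predict
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        P = (np.eye(n) - K @ H) @ P               # update
    _, logdet_u = np.linalg.slogdet(P_u)
    _, logdet_p = np.linalg.slogdet(P)
    return 0.5 * (logdet_u - logdet_p)
```

This exhibits the property stressed in the abstract: if the measurements carry no information about the state (here, a zero measurement matrix), the posterior equals the unconditional covariance and the measure is exactly zero.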
25

Analysis and synthesis of collaborative opportunistic navigation systems

Kassas, Zaher 09 July 2014 (has links)
Navigation is an invisible utility that is often taken for granted with considerable societal and economic impacts. Not only is navigation essential to our modern life, but the more it advances, the more possibilities are created. Navigation is at the heart of three emerging fields: autonomous vehicles, location-based services, and intelligent transportation systems. Global navigation satellite systems (GNSS) are insufficient for reliable anytime, anywhere navigation, particularly indoors, in deep urban canyons, and in environments under malicious attacks (e.g., jamming and spoofing). The conventional approach to overcome the limitations of GNSS-based navigation is to couple GNSS receivers with dead reckoning sensors. A new paradigm, termed opportunistic navigation (OpNav), is emerging. OpNav is analogous to how living creatures naturally navigate: by learning their environment. OpNav aims to exploit the plenitude of ambient radio frequency signals of opportunity (SOPs) in the environment. OpNav radio receivers, which may be handheld or vehicle-mounted, continuously search for opportune signals from which to draw position and timing information, employing on-the-fly signal characterization as necessary. In collaborative opportunistic navigation (COpNav), multiple receivers share information to construct and continuously refine a global signal landscape. For the sake of motivation, consider the following problem. A number of receivers with no a priori knowledge about their own states are dropped in an environment comprising multiple unknown terrestrial SOPs. The receivers draw pseudorange observations from the SOPs. The receivers' objective is to build a high-fidelity signal landscape map of the environment within which they localize themselves in space and time. We then ask: (i) Under what conditions is the environment fully observable? (ii) In cases where the environment is not fully observable, what are the observable states? 
(iii) How would receiver-controlled maneuvers affect observability? (iv) What is the degree of observability of the various states in the environment? (v) What motion planning strategy should the receivers employ for optimal information gathering? (vi) How effective are receding horizon strategies over greedy strategies for receiver trajectory optimization, and what are their limitations? (vii) What level of collaboration between the receivers achieves a minimal price of anarchy? This dissertation addresses these fundamental questions and validates the theoretical conclusions numerically and experimentally.
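Question (i), full observability of the signal landscape, is often probed numerically through the rank of the measurement Jacobian. The 2-D pseudorange sketch below (receiver position plus clock bias, hypothetical helper `pseudorange_jacobian`, illustrative coordinates) is a simplified stand-in for the dissertation's model.

```python
import numpy as np

def pseudorange_jacobian(rx, sops):
    """Jacobian of pseudoranges with respect to the 2-D receiver
    position and clock bias; one row per SOP: [unit LOS vector, 1]."""
    rows = []
    for s in sops:
        d = rx - s
        r = np.linalg.norm(d)
        rows.append(np.append(d / r, 1.0))   # line-of-sight + clock term
    return np.array(rows)

# Three non-collinear SOPs around a receiver at the origin (toy geometry)
rx = np.array([0.0, 0.0])
sops = [np.array([10.0, 0.0]), np.array([0.0, 10.0]),
        np.array([-7.0, -7.0])]
H = pseudorange_jacobian(rx, sops)
full_rank = np.linalg.matrix_rank(H) == 3    # 2 position + 1 clock state
```

With fewer SOPs, or with collinear geometry, the rank drops and only certain state combinations remain observable, which is the flavor of question (ii); the dissertation's full analysis also carries unknown SOP states and receiver dynamics.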
26

System-Level Observation Framework for Non-Intrusive Runtime Monitoring of Embedded Systems

Lee, Jong Chul January 2014 (has links)
As system complexity continues to increase, the integration of software and hardware subsystems within system-on-a-chip (SOC) presents significant challenges in post-silicon validation, testing, and in-situ debugging across hardware and software layers. The deep integration of software and hardware components within SOCs often prevents the use of traditional analysis methods to observe and monitor the internal state of these components. This situation is further exacerbated for in-situ debugging and testing in which physical access to traditional debug and trace interfaces is unavailable, infeasible, or cost prohibitive. In this dissertation, we present a system-level observation framework (SOF) that provides minimally intrusive methods for dynamically monitoring and analyzing deeply integrated hardware and software components within embedded systems. The SOF monitors hardware and software events by inserting additional logic within hardware cores and by listening to processor trace ports. The SOF provides visibility for monitoring complex execution behavior of software applications without affecting the system execution. The SOF utilizes a dedicated event-streaming interface that allows efficient observation and analysis of rapidly occurring events at runtime. The event-streaming interface supports three alternatives: (1) an in-order priority-based event stream controller, (2) a round-robin priority-based event stream controller, and (3) a priority-level based event stream controller. The in-order priority-based event stream controller, which uses efficient pipelined hardware architecture, ensures that events are reported in-order based on the time of the event occurrence. While the in-order priority-based event stream controller provides high throughput for reporting events, significant area requirement can be incurred. 
The round-robin priority-based event stream controller is an area-efficient event stream ordering technique with acceptable tradeoffs in event stream throughput. To further reduce area requirement, the SOF supports a priority-level based event stream controller that provides an in-ordering method with smaller area requirements than the round-robin priority-based event stream controller. Comprehensive experimental results using a complete prototype system implementation are presented to quantify the tradeoffs in area, throughput, and latency for the various event streaming interfaces considering several execution scenarios.
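A software model of the in-order event stream controller's externally visible behavior, merging per-source event streams so that events are reported by occurrence time with source index breaking ties, might look as follows. This is an illustrative heap-based sketch of the ordering semantics, not the pipelined hardware architecture described above.

```python
import heapq

def inorder_stream(sources):
    """Merge per-source event timestamp lists (each already
    time-ordered) into one stream sorted by occurrence time;
    ties are broken by source priority (list index).
    Returns a list of (timestamp, source) pairs."""
    heap = []
    for prio, events in enumerate(sources):
        it = iter(events)
        first = next(it, None)
        if first is not None:
            heapq.heappush(heap, (first, prio, it))
    out = []
    while heap:
        t, prio, it = heapq.heappop(heap)
        out.append((t, prio))
        nxt = next(it, None)              # fetch next event from same source
        if nxt is not None:
            heapq.heappush(heap, (nxt, prio, it))
    return out
```

The hardware tradeoff in the abstract maps onto this model: a full in-order merge needs comparison logic across all pending events (area-hungry), whereas round-robin or priority-level schemes approximate the order with cheaper selection logic.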
28

FAULT LOCATION ALGORITHMS, OBSERVABILITY AND OPTIMALITY FOR POWER DISTRIBUTION SYSTEMS

Xiu, Wanjing 01 January 2014 (has links)
Power outages usually lead to customer complaints and revenue losses. Consequently, fast and accurate fault location on electric lines is needed so that repair work can be carried out as quickly as possible. Chapter 2 describes novel fault location algorithms for radial and non-radial ungrounded power distribution systems. For both types of systems, fault location approaches using line-to-neutral or line-to-line measurements are presented. It is assumed that the network structure and parameters are known, so that the during-fault bus impedance matrix of the system can be derived. Functions of the bus impedance matrix and the available measurements at the substation are formulated, from which the unknown fault location can be estimated. Evaluation studies on fault location accuracy and on the robustness of the fault location methods to load variations and measurement errors have been performed. Most existing fault location methods rely on measurements obtained from meters installed in power systems. To get the most from the limited number of meters available, optimal meter placement methods are needed. Chapter 3 presents a novel optimal meter placement algorithm that keeps the system observable in terms of fault location determination. The observability of a fault location in power systems is defined first. Then, fault location observability analysis of the whole system is performed to determine the least number of meters needed, and their best locations, to achieve fault location observability. Case studies on fault location observability with limited meters are presented. Optimal meter deployment results for the studied system, with equal and varying monitoring costs for meters, are displayed. To enhance fault location accuracy, an optimal fault location estimator for power distribution systems with distributed generation (DG) is described in Chapter 4.
Voltages and currents at locations with power generation are adopted to give the best estimate of variables including measurements, fault location, and fault resistances. A chi-square test is employed to detect and identify bad measurements. Evaluation studies are carried out to validate the effectiveness of the optimal fault location estimator. A set of measurements containing one bad measurement is used to test whether the bad data can be identified successfully by the presented method.
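The chi-square bad-data test mentioned above is conventionally applied to the weighted residual sum of a least-squares estimate. A generic sketch follows; the caller supplies the chi-square critical value for m − n degrees of freedom, and `chi2_bad_data` is a hypothetical name, not the dissertation's estimator.

```python
import numpy as np

def chi2_bad_data(H, z, sigma, chi2_crit):
    """Weighted least-squares estimate plus chi-square bad-data test.

    H: m x n measurement matrix, z: m measurements,
    sigma: m measurement standard deviations,
    chi2_crit: critical value for m - n degrees of freedom.
    Returns (J, flag) where J is the weighted residual sum and
    flag is True when J exceeds chi2_crit (bad data suspected)."""
    W = np.diag(1.0 / sigma ** 2)
    sqrtW = np.sqrt(W)
    x_hat, *_ = np.linalg.lstsq(sqrtW @ H, sqrtW @ z, rcond=None)
    r = z - H @ x_hat                 # measurement residuals
    J = float(r @ W @ r)              # chi-square distributed under H0
    return J, J > chi2_crit
```

For example, with three unit-variance measurements of a single quantity, the 95% critical value for 2 degrees of freedom is about 5.99; one grossly wrong measurement drives J far past it.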
29

Uniform controllability of discrete partial differential equations

Nguyen, Thi Nhu Thuy 26 October 2012 (has links) (PDF)
In this thesis, we study uniform controllability properties of semi-discrete approximations for parabolic systems. In a first part, we address the minimization of the Lq-norm (q > 2) of semi-discrete controls for the parabolic equation. Our goal is to overcome the limitation of [LT06] concerning the order 1/2 of unboundedness of the control operator. Namely, we show that the uniform observability property also holds in Lq (q > 2), even for a degree of unboundedness greater than 1/2. Moreover, a minimization procedure to compute the approximate controls is provided. The study of Lq optimality in the first part is set in a general context; however, the discrete observability inequalities obtained there are not as precise as the ones later derived with Carleman estimates. In a second part, in the discrete setting of one-dimensional finite differences, we prove a Carleman estimate for a semi-discrete version of the parabolic operator ∂t − ∂x(c ∂x), which allows one to derive far more precise observability inequalities. Here we consider the case where the diffusion coefficient has a jump, which yields a transmission problem formulation. As a consequence of this Carleman estimate, we deduce consistent null-controllability results for classes of linear and semi-linear parabolic equations.
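The semi-discrete operator studied in the second part can be written out in standard finite-difference notation. The following is a reconstruction under the usual midpoint-sampling convention (mesh size h, diffusion coefficient c sampled at cell midpoints), not a formula copied from the thesis:

```latex
% Continuous operator: \partial_t - \partial_x ( c\,\partial_x )
% Semi-discrete (space-discretized) version on a uniform mesh of size h,
% with c_{i\pm 1/2} the diffusion coefficient at the mesh midpoints:
\[
  (\mathcal{P}^h y)_i(t)
  = \frac{\mathrm{d} y_i}{\mathrm{d} t}(t)
  - \frac{1}{h}\left(
      c_{i+1/2}\,\frac{y_{i+1}(t)-y_i(t)}{h}
      - c_{i-1/2}\,\frac{y_i(t)-y_{i-1}(t)}{h}
    \right)
\]
```

When c has a jump at an interface, the midpoint values on either side differ, which is what produces the transmission-problem structure mentioned in the abstract.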
30

Development, Implementation, And Testing Of A Tightly Coupled Integrated Ins/gps System

Ozturk, Alper 01 January 2003 (has links) (PDF)
This thesis describes the theoretical and practical stages, from development through testing, of an integrated navigation system composed of an Inertial Navigation System (INS) and the Global Positioning System (GPS). Integrated navigation systems combine the best features of the independent systems to provide increased performance, improved reliability, and system integrity. In an integrated INS/GPS system, the INS output is used to calculate the current navigation states; the GPS output supplies external measurements; and a Kalman filter provides the most probable corrections to the state estimate using both sets of data. Among the various INS/GPS integration strategies, our aim is to construct a tightly coupled integrated INS/GPS system. For this purpose, mathematical models of the INS and GPS are derived and linearized to form the system dynamics and measurement models, respectively. A Kalman filter is designed and implemented based on these models. In addition, based on the given aided navigation system representation, a quantitative measure of observability is defined using Gramians. Finally, the performance of the developed system is evaluated with real data recorded by the sensors. A comparison with a reference system, and also with a loosely coupled system, is performed to show the superiority of the tightly coupled structure. Scenarios simulating various GPS data outages show that the tightly coupled system outperforms the loosely coupled system in terms of accuracy, reliability, and level of observability.
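A Gramian-based quantitative observability measure of the kind mentioned above can be sketched as follows: accumulate the discrete-time observability Gramian over a finite horizon and report its smallest eigenvalue, which is zero exactly when an unobservable direction exists. This is a generic illustration (function name and horizon are assumptions), not the thesis's specific definition.

```python
import numpy as np

def observability_measure(F, H, steps=20):
    """Smallest eigenvalue of the finite-horizon discrete-time
    observability Gramian  W_o = sum_k (F^T)^k H^T H F^k.

    A value of 0 means some state direction is unobservable; larger
    values indicate better-conditioned observability."""
    n = F.shape[0]
    W = np.zeros((n, n))
    Fk = np.eye(n)
    for _ in range(steps):
        W += Fk.T @ H.T @ H @ Fk      # accumulate one horizon step
        Fk = F @ Fk
    return float(np.min(np.linalg.eigvalsh(W)))
```

For a constant-velocity pair of states, measuring the first state makes both observable (positive measure), while measuring only the second leaves the first unobservable (zero measure), matching the intuition behind "level of observability" in the abstract.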
