11

Optimal sensing matrices

Achanta, Hema Kumari 01 December 2014 (has links)
Location information is of extreme importance in every walk of life, ranging from commercial applications, such as location-based advertising and location-aware next-generation communication networks like 5G, to security-based applications like threat localization and E-911 calling. In indoor and dense urban environments plagued by multipath effects, there is usually a Non Line of Sight (NLOS) scenario preventing GPS-based localization. Wireless localization using sensor networks provides a cost-effective and accurate solution to the wireless source localization problem. Certain sensor geometries show significantly poor performance even in low-noise scenarios when triangulation-based localization methods are used. This brings the need for the design of an optimum sensor placement scheme for better performance in the source localization process. The optimum sensor placement is the one that optimizes the underlying Fisher Information Matrix (FIM). This thesis will present a class of canonical optimum sensor placements that produce the optimum FIM for N-dimensional source localization (N greater than or equal to 2) for a case where the source location has a radially symmetric probability density function within an N-dimensional sphere and the sensors are all on or outside the surface of a concentric outer N-dimensional sphere. While the canonical solution that we designed for the 2D problem represents optimum spherical codes, the study of the 3- or higher-dimensional design provides great insight into the design of measurement matrices with equal-norm columns that have the smallest possible condition number. Such matrices are of importance in compressed-sensing-based applications. This thesis also presents an optimum sensing matrix design for energy-efficient source localization in 2D. Specifically, the results relate to the worst-case scenario when the minimum number of sensors is active in the sensor network.
We also propose a distributed control law that guides the motion of the sensors on the circumference of the outer circle so that they achieve the optimum sensor placement with minimum communication overhead. The design of equal-norm-column sensing matrices has a variety of other applications apart from optimum sensor placement for N-dimensional source localization. One such application is Fourier analysis in Magnetic Resonance Imaging (MRI). Depending on the method used to acquire the MR image, one can choose an appropriate transform domain that transforms the MR image into a sparse image that is compressible. Such transform domains include the Wavelet Transform and the Fourier Transform. The inherent sparsity of MR images in an appropriately chosen transform domain motivates one of the objectives of this thesis, which is to provide a method for designing a compressive sensing measurement matrix by choosing a subset of rows from the Discrete Fourier Transform (DFT) matrix. This thesis uses the spark of the matrix as the design criterion. The spark of a matrix is defined as the smallest number of linearly dependent columns of the matrix. The objective is to select a subset of rows from the DFT matrix in order to achieve maximum spark. The design procedure leads to an interesting study of coprime conditions between the chosen row indices and the size of the DFT matrix.
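The spark criterion above can be checked directly on small examples. The sketch below is illustrative only (brute-force spark computation is exponential and feasible only for tiny matrices, and the row subsets here are chosen for demonstration, not taken from the thesis): it builds two 4x8 row subsets of the DFT matrix and shows how the choice of row indices, relative to the matrix size, changes the spark.

```python
import itertools
import numpy as np

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (brute force).

    Returns A.shape[1] + 1 if all columns are linearly independent.
    """
    n = A.shape[1]
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            # A subset of k columns is dependent iff its rank is < k.
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1

# Sensing matrices: subsets of rows of the 8 x 8 DFT matrix.
N = 8
F = np.fft.fft(np.eye(N)) / np.sqrt(N)

A_consecutive = F[[0, 1, 2, 3], :]  # consecutive rows (Vandermonde structure)
A_even = F[[0, 2, 4, 6], :]         # row indices sharing a factor with N

print(spark(A_consecutive))  # 5 (maximal: any 4 columns are independent)
print(spark(A_even))         # 2 (columns c and c+4 coincide)
```

The consecutive-row choice achieves the maximum possible spark, while rows whose indices share a common factor with N produce duplicated columns and hence minimal spark, which is the kind of coprimality effect the design study examines.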
12

Dynamic Modeling, Sensor Placement Design, and Fault Diagnosis of Nuclear Desalination Systems

Li, Fan 01 May 2011 (has links)
Fault diagnosis of sensors, devices, and equipment is an important topic in the nuclear industry for effective and continuous operation of nuclear power plants. All fault diagnostic approaches depend critically on the sensors that measure important process variables. Whenever a process encounters a fault, the effect of the fault is propagated to some or all of the process variables. The ability of the sensor network to detect and isolate failure modes and anomalous conditions is crucial for the effectiveness of a fault detection and isolation (FDI) system. However, the emphasis of most fault diagnostic approaches found in the literature is primarily on the procedures for performing FDI using a given set of sensors. Little attention has been given to actual sensor allocation for achieving efficient FDI performance. This dissertation presents a graph-based approach that serves as a solution for the optimization of sensor placement to ensure the observability of faults, as well as fault resolution to the maximum possible extent. This would potentially facilitate an automated sensor allocation procedure. Principal component analysis (PCA), a multivariate data-driven technique, is used to capture the relationships in the data and to fit a hyperplane to the data. The fault directions for different fault scenarios are obtained from the prediction errors, and fault isolation is then accomplished using new projections on these fault directions. The effectiveness of the use of an optimal sensor set versus a reduced set for fault detection and isolation is demonstrated using this technique. Among a variety of desalination technologies, multi-stage flash (MSF) processes contribute substantially to the desalination capacity worldwide. In this dissertation, both steady-state and dynamic simulation models of an MSF desalination plant are developed.
The dynamic MSF model is coupled with a previously developed International Reactor Innovative and Secure (IRIS) model in the SIMULINK environment. The developed sensor placement design and fault diagnostic methods are illustrated with application to the coupled nuclear desalination system. The results demonstrate the effectiveness of the newly developed integrated approach to performance monitoring and fault diagnosis with optimized sensor placement for large industrial systems.
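The PCA-based detection idea described above can be shown in a minimal sketch (this is not the dissertation's implementation; the data dimensions, noise level, and quantile threshold are illustrative assumptions): normal operating data define a hyperplane, and a sample is flagged when its squared prediction error (SPE), the distance off that hyperplane, exceeds a threshold learned from fault-free data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal operating data: 3 latent variables driving 6 correlated sensors.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 6))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 6))

# Fit the PCA hyperplane on mean-centred normal data.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:3].T                      # loadings spanning the model subspace

def spe(x):
    """Squared prediction error: squared distance from x to the hyperplane."""
    r = (x - mu) - P @ (P.T @ (x - mu))
    return float(r @ r)

# Detection threshold from the empirical distribution of normal SPEs.
threshold = np.quantile([spe(x) for x in X], 0.99)

# A bias fault on sensor 2 pushes the sample off the hyperplane.
x_fault = X[0].copy()
x_fault[2] += 5.0
print(spe(x_fault) > threshold)  # True
```

The residual vector `r` for a detected sample also gives the fault direction used for isolation: faults on different sensors leave distinct signatures in the residual space.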
13

A Distributed Approach to Dynamic Autonomous Agent Placement for Tracking Moving Targets with Application to Monitoring Urban Environments

Hegazy, Tamir A. 22 November 2004 (has links)
The problem of dynamic autonomous agent placement for tracking moving targets arises in many real-life applications, such as rescue operations, security, surveillance, and reconnaissance. The objective of this thesis is to develop a distributed hierarchical approach to address this problem. After the approach is developed, it is tested on a number of urban surveillance scenarios. The proposed approach views the placement problem as a multi-tiered architecture entailing modules for low-level sensor data preprocessing and fusion, decentralized decision support, knowledge building, and centralized decision support. This thesis focuses on the modules of decentralized decision support and knowledge building. The decentralized decision support module requires a great deal of coordination among agents to achieve the mission objectives. The module entails two classes of distributed algorithms: non-model-based algorithms and model-based algorithms. The first class is used as a placeholder while a model is built to describe the agents' knowledge about target behaviors. After the model is built and evaluated, agents switch to the model-based algorithms. To apply the approach to urban environments, urban terrain zones are classified, and the problem is mathematically formulated for two different types of urban terrain, namely low-rise, widely spaced and high-rise, closely spaced zones. An instance of each class of algorithms is developed for each of the two types of urban terrain. The algorithms are designed to run in a distributed fashion to address scalability and fault tolerance issues. The class of model-based algorithms includes a distributed model-based algorithm for dealing with evasive targets. The algorithm is designed to improve its performance over time as it learns from past experience how to deal with evasive targets. Apart from the algorithms, a model estimation module is developed to build motion models online from sensor observations.
The approach is evaluated through a set of simulation experiments inspired by real-life scenarios. Experimental results reveal the superiority of the developed algorithms over existing ones and the applicability of the online model-building method. Therefore, it is concluded that the overall distributed approach is capable of handling agent placement for surveillance in urban environments, among other applications.
14

Sensor Placement and Graphical User Interface for Photovoltaic Array Monitoring System

January 2012 (has links)
With increased usage of green energy, the number of photovoltaic arrays used in power generation is increasing rapidly. Many of the arrays are located at remote locations where faults that occur within the array often go unnoticed and unattended for long periods of time. Technicians sent to rectify the faults have to spend a large amount of time determining the location of the fault manually. Automated monitoring systems are needed to obtain information about the performance of the array and detect faults. Such systems must monitor the DC side of the array in addition to the AC side to identify non-catastrophic faults. This thesis focuses on two of the requirements for DC-side monitoring in an automated PV array monitoring system. The first part of the thesis quantifies the advantages of obtaining higher-resolution data from a PV array for the detection of faults. Data for the monitoring system can be gathered for the array as a whole or from additional places within the array, such as individual modules and ends of strings. The fault detection rate and the false positive rate are compared for array-level, string-level, and module-level PV data. Monte Carlo simulations are performed using PV array models developed in Simulink and MATLAB for fault and no-fault cases. The second part describes a graphical user interface (GUI) that can be used to visualize the PV array for a module-level monitoring system. A demonstration GUI is built in MATLAB using data obtained from a PV array test facility in Tempe, AZ. Visualizations are implemented to display information about the array as a whole or individual modules and to locate faults in the array. / Dissertation/Thesis / M.S. Electrical Engineering 2012
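The value of finer-grained monitoring that this thesis quantifies can be illustrated with a toy Monte Carlo sketch (the module count, noise level, fault magnitude, and 3-sigma thresholds here are assumptions for illustration, not the thesis's Simulink models): a 30% power drop in one module is nearly lost in the summed array output but is obvious at module level.

```python
import numpy as np

rng = np.random.default_rng(2)
n_mod, n_trials = 20, 2000
noise = 0.05        # relative measurement noise per module
drop = 0.3          # a fault cuts one module's output by 30%

def detect():
    """One trial with a single-module fault; returns (array, module) alarms."""
    p = 1.0 + noise * rng.normal(size=n_mod)   # per-module power (normalized)
    p[0] *= 1.0 - drop                         # inject the fault
    # Array-level: alarm only if the total drops 3 sigma below nominal.
    array_alarm = p.sum() < n_mod - 3 * noise * np.sqrt(n_mod)
    # Module-level: alarm if any single module drops 3 sigma below nominal.
    module_alarm = p.min() < 1 - 3 * noise
    return array_alarm, module_alarm

hits = np.array([detect() for _ in range(n_trials)]).mean(axis=0)
print(hits)  # module-level detection rate far exceeds array-level
```

The array-level alarm misses most of these faults because the 30% single-module drop is comparable to the noise of the summed signal, which is exactly why module-level data improves the detection rate at a fixed false-positive rate.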
15

Sewer systems management : illicit intrusion identification and optimal sensor placement

Banik, Bijit Kumar 17 December 2015 (has links)
Improper wastewater management could result in significant damage to the treatment plants and the final recipient aquatic ecosystem. In the past, wastewater management did not get much attention from different stakeholders. However, recently a paradigm shift of wastewater and storm water management is evolving from simple sanitary and flood control, respectively, to a whole environmental protection function. A very important aspect of the sewer systems management policy is to detect and eliminate an illicit intrusion. This PhD research consists of two main pillars. In the first pillar, the issues regarding the identification of an illicit intrusion in a sewer system have been addressed, proposing a source identification (SI) methodology. In the second pillar, different innovative methodologies have been proposed to find the optimal placement of a limited number of sensors in the sewer system. In the thesis, the SI is solved through a simulation-optimization model, combining the hydraulic and quality simulation tool Storm Water Management Model (SWMM) with a genetic algorithm code (GALib) as an optimizer. It requires online measurements from some sensors placed on the network. Since SWMM does not provide a programmer's toolkit, an ad-hoc toolkit has been developed to integrate the SWMM simulator with the proposed automated SI methodology. A pre-screening procedure, based on the pollution matrix concept and considering the topology of sewers, has been implemented to reduce the computational effort. The SI methodology has been tested on two different networks. One is a literature network taken from the SWMM example manual, while the other is one sub-catchment of the real sewer network of Massa Lubrense, a town located near Naples, Italy.
The results show that the pre-screening procedure reduces the computational effort significantly, and it has a crucial role in large systems. In investigating the performance of the SI methodology, its sensitivity with respect to the genetic algorithm parameters has been verified. Moreover, the influence of the uncertainty of the inflow values and of the measurement errors on the results has been investigated. Another core problem associated with the water quality monitoring of sewers is the optimal placement of a limited number of sensors for the early detection of an illicit source. In the thesis, the sensor location is expressed as a single- or multi-objective optimization problem, and the SWMM is used to extract the water quality data. Different formulations have been proposed and tested. First, an Information Theory (IT) based multi-objective optimization methodology is presented. The IT approach considers two objectives: the joint entropy, the information content of a set of sensors, which is kept as high as possible; and the total correlation, a measure of redundancy, which is kept as low as possible. In the second multi-objective approach, detection time, to be minimized, and reliability, to be maximized, are considered. In both cases, the multi-objective problems are solved using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). As a third alternative, a single-objective greedy optimization tool has been tested. The previously considered objectives are also used in different combinations. The Massa Lubrense sewer network is used to test the performance of the various proposed procedures. A normalized comparison among all approaches shows that the greedy approach could be a handy alternative for optimizing the sensor locations in sewer systems.
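A greedy placement strategy of the kind compared against NSGA-II above can be sketched as follows (illustrative only; the detection-time matrix here is a made-up toy, not SWMM output, and minimizing mean detection time is just one of the objective combinations the thesis considers). The greedy step exploits the fact that the detection time of a sensor set is the elementwise minimum over its members:

```python
import numpy as np

def greedy_placement(det_time, n_sensors):
    """Greedily pick sensor locations minimizing mean detection time.

    det_time[s, j]: time at which a sensor at node j would first detect
    intrusion scenario s.
    """
    n_scen, n_nodes = det_time.shape
    chosen, best = [], np.full(n_scen, np.inf)
    for _ in range(n_sensors):
        # Detection time of a set = elementwise min over its sensors.
        scores = [np.minimum(best, det_time[:, j]).mean()
                  for j in range(n_nodes)]
        j_star = int(np.argmin(scores))
        chosen.append(j_star)
        best = np.minimum(best, det_time[:, j_star])
    return chosen, best.mean()

det_time = np.array([[2., 9., 5.],
                     [8., 3., 4.],
                     [7., 6., 1.]])   # 3 intrusion scenarios x 3 candidate nodes
chosen, mean_time = greedy_placement(det_time, 2)
print(chosen, round(mean_time, 3))  # [2, 0] 2.333
```

Greedy selection of this form is fast and, because the min-over-sensors objective is submodular, typically lands close to the exact optimum, which is why it can be a handy alternative to population-based search.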
16

Sparse Signal Recovery Based on Compressive Sensing and Exploration Using Multiple Mobile Sensors

Shekaramiz, Mohammad 01 December 2018 (has links)
The work in this dissertation is focused on two areas within the general discipline of statistical signal processing. First, several new algorithms are developed and exhaustively tested for solving the inverse problem of compressive sensing (CS). CS is a recently developed sub-sampling technique for signal acquisition and reconstruction which is more efficient than the traditional Nyquist sampling method. It provides the possibility of compressed data acquisition approaches to directly acquire just the important information of the signal of interest. Many natural signals are sparse or compressible in some domain, such as the pixel domain of images, time, frequency, and so forth. The notion of compressibility or sparsity here means that many coefficients of the signal of interest are either zero or of low amplitude in some domain, whereas a few are dominant coefficients. Therefore, we may not need to take many direct or indirect samples from the signal or phenomenon to be able to capture its important information. As a simple example, one can think of a system of linear equations with N unknowns. Traditional methods suggest solving N linearly independent equations to solve for the unknowns. However, if many of the variables are known to be zero or of low amplitude, then intuitively speaking, there will be no need to have N equations. Unfortunately, in many real-world problems, the number of non-zero (effective) variables is unknown. In these cases, CS is capable of solving for the unknowns in an efficient way. In other words, it enables us to collect the important information of the sparse signal with a low number of measurements. Then, considering the fact that the signal is sparse, extracting the important information of the signal is the challenge that needs to be addressed.
Since most of the existing recovery algorithms in this area need some prior knowledge or parameter tuning, their application to real-world problems with good performance is difficult. In this dissertation, several new CS algorithms are proposed for the recovery of sparse signals. The proposed algorithms mostly do not require any prior knowledge of the signal or its structure. In fact, these algorithms can learn the underlying structure of the signal based on the collected measurements and successfully reconstruct the signal with high probability. The other merit of the proposed algorithms is that they are generally flexible in incorporating any prior knowledge of the noise, the sparsity level, and so on. The second part of this study is devoted to the deployment of mobile sensors in circumstances where the number of sensors is inadequate to sample the entire region. Therefore, deciding where to deploy the sensors, so as to both explore new regions and refine knowledge in already visited areas, is of high importance. Here, a new framework is proposed to decide on the trajectories of sensors as they collect the measurements. The proposed framework has two main stages. The first stage performs interpolation/extrapolation to estimate the phenomenon of interest at unseen locations, and the second stage decides on the informative trajectory based on the collected and estimated data. This framework can be applied to various problems, such as tuning the constellation of sensor-bearing satellites, robotics, or any type of adaptive sensor placement/configuration problem. Depending on the problem, some modifications of the constraints in the framework may be needed. As an application of this work, the proposed framework is applied to a surrogate problem related to the constellation adjustment of sensor-bearing satellites.
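The flavor of sparse recovery described above can be illustrated with a classical baseline, Orthogonal Matching Pursuit (a standard greedy CS recovery method, not one of the dissertation's proposed algorithms; the problem sizes and random sensing matrix below are illustrative assumptions). It recovers a k-sparse vector from far fewer measurements than unknowns:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit on the enlarged support and update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
m, n, k = 60, 256, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[[5, 40, 177]] = [1.5, -2.0, 0.8]    # 3 non-zeros out of 256 unknowns
x_hat = omp(A, A @ x_true, k)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

Note that OMP needs the sparsity level k as an input; removing exactly this kind of prior knowledge is what motivates the learning-based recovery algorithms the dissertation proposes.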
17

Model-Based Fault Diagnosis of Automatic Transmissions

Deosthale, Eeshan Vijay January 2018 (has links)
No description available.
18

A Multi Sensor System for a Human Activities Space : Aspects of Planning and Quality Measurement

Chen, Jiandan January 2008 (has links)
In our aging society, the design and implementation of a high-performance autonomous distributed vision information system for autonomous physical services becomes ever more important. In line with this development, the proposed Intelligent Vision Agent System, IVAS, is able to automatically detect and identify a target for a specific task by surveying a human activities space. The main subject of this thesis is the optimal configuration of a sensor system meant to capture the target objects and their environment within certain required specifications. The thesis thus discusses how a discrete sensor causes a depth spatial quantisation uncertainty, which significantly affects the 3D depth reconstruction accuracy. For a sensor stereo pair, the quantisation uncertainty is represented by the intervals between the iso-disparity surfaces. A mathematical geometry model is then proposed to analyse the iso-disparity surfaces and optimise the sensors’ configurations according to the required constraints. The thesis also introduces the dithering algorithm which significantly reduces the depth reconstruction uncertainty. This algorithm assures high depth reconstruction accuracy from a few images captured by low-resolution sensors. To ensure the visibility needed for surveillance, tracking, and 3D reconstruction, the thesis introduces constraints on the target space, the stereo pair characteristics, and the depth reconstruction accuracy. The target space, the space in which human activity takes place, is modelled as a tetrahedron, and a field of view in spherical coordinates is proposed. The minimum number of stereo pairs necessary to cover the entire target space and the arrangement of the stereo pairs’ movement is optimised through integer linear programming. In order to better understand human behaviour and perception, the proposed adaptive measurement method makes use of a fuzzily defined variable, FDV.
The FDV approach enables an estimation of a quality index based on qualitative and quantitative factors. The suggested method uses a neural network as a tool that contains a learning function that allows the integration of the human factor into a quantitative quality index. The thesis consists of two parts, where Part I gives a brief overview of the applied theory and research methods used, and Part II contains the five papers included in the thesis.
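The iso-disparity quantisation effect discussed above follows from the standard stereo relation z = f·b/d: all depths between two integer disparities collapse to one measurement, so the gap between adjacent iso-disparity surfaces is the depth uncertainty, and it grows roughly as z²/(f·b). A small sketch (the focal length and baseline values are assumptions for illustration):

```python
# Stereo depth quantisation: gaps between iso-disparity surfaces.
f_px = 800.0   # focal length in pixels (assumed value)
b = 0.10       # baseline in metres (assumed value)

def depth(d):
    """Depth of the iso-disparity surface for integer disparity d."""
    return f_px * b / d

def depth_interval(d):
    """Gap between the iso-disparity surfaces at disparities d and d + 1."""
    return depth(d) - depth(d + 1)

for d in (10, 20, 40):
    print(d, round(depth(d), 3), round(depth_interval(d), 4))
# 10 8.0   0.7273
# 20 4.0   0.1905
# 40 2.0   0.0488
```

Quadrupling the disparity (moving the target four times closer, or enlarging f·b by reconfiguring the pair) shrinks the quantisation interval by roughly a factor of sixteen, which is the lever the thesis's sensor-configuration optimisation pulls.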
19

An Architecture for Global Ubiquitous Sensing

Perez, Alfredo Jose 01 January 2011 (has links)
A new class of wireless sensor networks has recently appeared due to the pervasiveness of cellular phones with embedded sensors, mobile Internet connectivity, and location technologies. This mobile wireless sensor network has the potential to address large-scale societal problems and improve people's quality of life in a better, faster, and less expensive fashion than current solutions based on static wireless sensor networks. Ubiquitous Sensing is the umbrella term used in this dissertation that encompasses location-based services, human-centric sensing, and participatory sensing applications. At the same time, ubiquitous sensing applications are bringing a new series of challenging problems. This dissertation proposes and evaluates G-Sense, for Global-Sense, an architecture that integrates mobile and static wireless sensor networks and addresses several new problems related to location-based services, participatory sensing, and human-centric sensing applications. G-Sense features the critical point algorithms, which are specific mechanisms to reduce the power consumption of continuous sensing applications on cellular phones and the amount of data generated by these applications. As ubiquitous sensing applications have the potential to gather data from many users around the globe, G-Sense introduces a peer-to-peer system to interconnect sensing servers based on the locality of the data. Finally, this dissertation proposes and evaluates a multiobjective model and a hybrid evolutionary algorithm to address the efficient deployment of static wireless sensor nodes when monitoring critical areas of interest.
20

WATER QUALITY SENSOR PLACEMENT GUIDANCE FOR SMALL WATER DISTRIBUTION SYSTEMS

Schal, Stacey L 01 January 2013 (has links)
Water distribution systems are vulnerable to intentional as well as accidental contamination of the water supply. Contamination warning systems (CWS) are strategies to lessen the effects of contamination by delivering early indication of an event. Online quality monitoring, a network of sensors that can assess water quality and alert an operator of contamination, is a critical component of CWS, but utilities are faced with the decision of which locations are optimal for deployment of sensors. A sensor placement algorithm was developed and implemented in a commercial network distribution model (i.e., KYPIPE) to aid small utilities in sensor placement. The developed sensor placement tool was then validated using 12 small distribution system models and multiple contamination scenarios for the placement of one and two sensors. This thesis also addresses the issue that many sensor placement algorithms require calibrated hydraulic/water quality models, but small utilities do not always possess the financial resources or expertise to build calibrated models. Because of such limitations, a simple procedure is proposed to recommend optimal placement of a sensor without the need for a model or complicated algorithm. The procedure uses simple information about the geometry of the system and does not require explicit information about flow dynamics.
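A model-free, geometry-only placement rule of the kind described above can be sketched as a graph-center heuristic (the toy pipe network and the worst-case hop-distance criterion here are illustrative assumptions, not the thesis's actual procedure): with only the pipe connectivity known, place the sensor at the junction that minimizes the worst-case distance to any node.

```python
from collections import deque

# Pipe network as an adjacency list (node: connected nodes); toy system.
pipes = {
    "tank": ["j1"],
    "j1": ["tank", "j2", "j3"],
    "j2": ["j1", "j4"],
    "j3": ["j1", "j4"],
    "j4": ["j2", "j3", "j5"],
    "j5": ["j4"],
}

def hops_from(src):
    """BFS hop count from src to every node in the network."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in pipes[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Place the sensor at the node minimizing the worst-case hop distance.
sensor = min(pipes, key=lambda n: max(hops_from(n).values()))
print(sensor)  # j2 (tied with j3; first in iteration order wins)
```

Such a rule needs only the network map, no calibrated hydraulic model, which is what makes this style of guidance practical for small utilities.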
