51

Compressive Sensing Approaches for Sensor based Predictive Analytics in Manufacturing and Service Systems

Bastani, Kaveh 14 March 2016 (has links)
Recent advancements in sensing technologies offer new opportunities for quality improvement and assurance in manufacturing and service systems. These sensor advances provide vast amounts of data, supporting quality improvement decisions such as fault diagnosis (root cause analysis) and real-time process monitoring. These decisions are typically made based on predictive analysis of the sensor data, so-called sensor-based predictive analytics. Sensor-based predictive analytics encompasses a variety of statistical, machine learning, and data mining techniques to identify patterns between the sensor data and historical facts. Given these patterns, predictions are made about the quality state of the process, and corrective actions are taken accordingly. Although recent advances in sensing technologies have facilitated quality improvement decisions, they typically produce high-dimensional sensor data, making sensor-based predictive analytics challenging due to its inherently intensive computation. This research begins in Chapter 1 by raising a question: are all these sensor data required for making effective quality improvement decisions, and if not, is there a way to systematically reduce the number of sensors without affecting the performance of the predictive analytics? Chapter 2 addresses this question by reviewing the related research in signal processing, namely compressive sensing (CS), a novel sampling paradigm as opposed to the traditional sampling strategy following the Shannon-Nyquist rate. According to CS theory, a signal can be reconstructed from a reduced number of samples; this motivates developing CS-based approaches to facilitate predictive analytics using a reduced number of sensors.
The proposed research methodology in this dissertation encompasses CS approaches developed to deliver two major contributions: (1) CS sensing to reduce the number of sensors while capturing the most relevant information, and (2) CS predictive analytics to conduct predictive analysis on the reduced sensor data. The proposed methodology has a generic framework that can be utilized for numerous real-world applications. For the sake of brevity, however, its validity has been verified with real sensor data from multi-station assembly processes (Chapters 3 and 4), additive manufacturing (Chapter 5), and wearable sensing systems (Chapter 6). Chapter 7 summarizes the contributions of the research and outlines potential future research directions with applications to big data analytics. / Ph. D.
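The core CS claim motivating this line of work — that a sparse signal can be recovered from far fewer samples than the Shannon-Nyquist rate suggests — can be illustrated with a minimal sketch. The example below recovers a synthetic sparse signal from random Gaussian measurements via iterative soft-thresholding (ISTA); the dimensions, regularization weight, and iteration count are illustrative choices, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5                  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                                  # m << n compressed measurements

# ISTA: minimize 0.5*||Ax - y||^2 + lam*||x||_1 by gradient step + soft threshold
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L               # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With only half as many measurements as signal entries, the l1 penalty drives the reconstruction onto the correct sparse support; the same principle underlies replacing a dense sensor array with a reduced one.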
52

Application of the Spectral Projected Gradient Method to the Compressive Sensing Problem

Chullo Llave, Boris 19 September 2012 (has links)
The theory of compressive sensing provides a new data acquisition and recovery strategy with good results in image processing. The theory guarantees recovery of a signal with high probability from a reduced sampling rate below the Nyquist-Shannon limit. Recovering the original signal from the samples amounts to solving an optimization problem. The Spectral Projected Gradient (SPG) method minimizes smooth functions over convex sets and has often been applied to the problem of recovering the original signal from sampled data. This work is dedicated to the study and application of the SPG method to compressive sensing problems.
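As a rough illustration of the method studied in this thesis, the sketch below applies a projected-gradient iteration with the Barzilai-Borwein ("spectral") step length to the constrained formulation min 0.5*||Ax - y||^2 subject to ||x||_1 <= tau. This is a simplified sketch under assumed problem sizes: it omits the nonmonotone line search of the full SPG method, and the synthetic data are not from the thesis.

```python
import numpy as np

def project_l1(v, tau):
    """Euclidean projection onto the l1-ball {x : ||x||_1 <= tau} (sort-based)."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def spg_l1(A, y, tau, iters=1500):
    """Projected gradient with Barzilai-Borwein (spectral) steps for
    min 0.5*||Ax - y||^2  s.t.  ||x||_1 <= tau  (no nonmonotone line search)."""
    x = project_l1(A.T @ y, tau)
    g = A.T @ (A @ x - y)
    alpha = 1.0
    for _ in range(iters):
        x_new = project_l1(x - alpha * g, tau)
        g_new = A.T @ (A @ x_new - y)
        s, t = x_new - x, g_new - g
        sty = s @ t
        alpha = (s @ s) / sty if sty > 1e-12 else 1.0   # BB step length
        alpha = min(max(alpha, 1e-4), 1e4)              # safeguard the step
        x, g = x_new, g_new
    return x

# Demo on a synthetic sparse-recovery problem
rng = np.random.default_rng(0)
n, m, k = 128, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = spg_l1(A, A @ x_true, tau=np.abs(x_true).sum())
```

When tau equals the l1 norm of the true signal, the constrained least-squares minimizer coincides with the sparse signal, so the spectral projected gradient iterates converge to it.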
54

Remote-Sensed LIDAR Using Random Sampling and Sparse Reconstruction

Martinez, Juan Enrique Castorera 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / In this paper, we propose a new, low complexity approach for the design of laser radar (LIDAR) systems for use in applications in which the system is wirelessly transmitting its data from a remote location back to a command center for reconstruction and viewing. Specifically, the proposed system collects random samples in different portions of the scene, and the density of sampling is controlled by the local scene complexity. The range samples are transmitted as they are acquired through a wireless communications link to a command center and a constrained absolute-error optimization procedure of the type commonly used for compressive sensing/sampling is applied. The key difficulty in the proposed approach is estimating the local scene complexity without densely sampling the scene and thus increasing the complexity of the LIDAR front end. We show here using simulated data that the complexity of the scene can be accurately estimated from the return pulse shape using a finite moments approach. Furthermore, we find that such complexity estimates correspond strongly to the surface reconstruction error that is achieved using the constrained optimization algorithm with a given number of samples.
55

Coding Strategies and Implementations of Compressive Sensing

Tsai, Tsung-Han January 2016 (has links)
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others.

This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.

Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra-dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds.

The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task through engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate. / Dissertation
56

Practical Considerations In Experimental Computational Sensing

Poon, Phillip K. January 2017 (has links)
Computational sensing has demonstrated the ability to ameliorate or eliminate many trade-offs in traditional sensors. Rather than attempting to form a perfect image, sampling at the Nyquist rate, and reconstructing the signal of interest prior to post-processing, the computational sensor utilizes a priori knowledge and active or passive coding of the signal of interest, combined with a variety of algorithms, to overcome these trade-offs or to improve various task-specific metrics. While this is a powerful approach to radically new sensor architectures, published research tends to focus on architecture concepts and positive results; little attention is given to the practical issues faced when implementing computational sensing prototypes. I discuss the practical challenges I encountered while developing three separate applications of computational sensors. The first is a compressive-sensing-based object-tracking camera, the SCOUT, which exploits the sparsity of motion between consecutive frames while using no moving parts to create a pseudo-random shift-variant point-spread function. The second is a spectral imaging camera, the AFSSI-C, which uses a modified version of Principal Component Analysis with a Bayesian strategy to adaptively design spectral filters for direct spectral classification using a digital micro-mirror device (DMD) based architecture. The third demonstrates two separate architectures for spectral unmixing, using an adaptive algorithm or a hybrid technique of Maximum Noise Fraction and random filter selection, on a liquid-crystal-on-silicon-based computational spectral imager, the LCSI. All of these applications illustrate challenges that have been addressed by, or continue to challenge, the computational sensing community.
One issue is calibration, since many computational sensors require an inversion step and, in the case of compressive sensing, the measurement data lack redundancy. Another issue is over-multiplexing: as more light is collected per sample, finite dynamic range and quantization resolution begin to degrade the recovery of the relevant information. A priori knowledge of the sparsity and/or other statistics of the signal or noise is often used by computational sensors to outperform their isomorphic counterparts, as demonstrated in all three of the sensors I developed. These and other challenges are discussed using a case-study approach through the three applications.
57

Compressive sensing for microwave and millimeter-wave array imaging

Cheng, Qiao January 2018 (has links)
Compressive Sensing (CS) is a recently proposed signal processing technique that has already found many applications in microwave and millimeter-wave imaging. CS theory guarantees that sparse or compressible signals can be recovered from far fewer measurements than were traditionally thought necessary. This property coincides with the goal of personnel surveillance imaging, whose priority is to reduce scanning time as much as possible. This thesis therefore investigates the implementation of CS techniques in personnel surveillance imaging systems with different array configurations. The first key contribution is a comparative study of CS methods in a switched-array imaging system. Specific attention is paid to situations where the array element spacing does not satisfy the Nyquist criterion due to physical limitations. CS methods are divided into the Fourier-transform-based CS (FT-CS) method, which relies on the conventional FT, and the direct CS (D-CS) method, which directly utilizes classic CS formulations. The performance of the two CS methods is compared with the conventional FT method in terms of resolution, computational complexity, robustness to noise, and under-sampling. In particular, the resolving power of the two CS methods is studied under various circumstances. Both numerical and experimental results demonstrate the superiority of the CS methods. The FT-CS and D-CS methods are complementary techniques that can be used together for optimized efficiency and image reconstruction. The second contribution is a novel 3-D compressive phased-array imaging algorithm based on a more general forward model that takes antenna factors into consideration. Imaging results in both range and cross-range dimensions show better performance than the conventional FT method. Furthermore, suggestions on how to design the sensing configurations for better CS reconstruction are provided based on coherence analysis.
This work further considers near-field imaging, with a near-field focusing technique integrated into the CS framework. Simulation results show better robustness against noise and interfering targets from the background. The third contribution presents the effects of array configurations on the performance of the D-CS method. Compressive MIMO array imaging is first derived and demonstrated with a cross-shaped MIMO array. The switched array, MIMO array, and phased array are then investigated together under the compressive imaging framework. All three methods have similar resolution due to the same effective aperture. As an alternative scheme to the switched array, the MIMO array is able to achieve comparable performance with far fewer antenna elements. While all three array configurations are capable of imaging with sub-Nyquist element spacing, the phased array is more sensitive to this element-spacing factor. Nevertheless, the phased-array configuration achieves the best robustness against noise at the cost of higher computational complexity. The final contribution is the design of a novel low-cost beam-steering imaging system using a flat Luneburg lens. The idea is to use a switched array at the focal plane of the Luneburg lens to control the beam steering. By sequentially exciting each element, the lens forms directive beams to scan the region of interest. The adoption of CS for image reconstruction enables high resolution as well as data under-sampling. Numerical simulations based on mechanically scanned data are conducted to verify the proposed imaging system.
58

Cluster Expansion Models Via Bayesian Compressive Sensing

Nelson, Lance Jacob 09 May 2013 (has links)
The steady march of new technology depends crucially on our ability to discover and design new, advanced materials. Partially due to increases in computing power, computational methods now play an increased role in this discovery process, speeding the discovery and development of advanced materials by guiding experimental work down fruitful paths. Density functional theory (DFT) has proven to be a highly accurate tool for computing material properties. However, due to its computational cost and complexity, DFT is unsuited to performing exhaustive searches over many candidate materials or to extracting thermodynamic information. Performing these types of searches requires constructing a fast yet accurate model. One model commonly used in materials science is the cluster expansion, which can compute the energy, or another relevant physical property, of millions of derivative superstructures quickly and accurately. This model has been used in materials research for many years with great success. Currently, the construction of a cluster expansion model presents several noteworthy challenges. While these challenges have obviously not prevented the method from being useful, addressing them will yield a large payoff in speed and accuracy. Two of the most glaring challenges encountered when constructing a cluster expansion model are: (i) determining which of the infinite number of clusters to include in the expansion, and (ii) deciding which atomic configurations to use as training data. Compressive sensing (CS), a recently developed technique from the signal processing community, is uniquely suited to address both of these challenges: it allows essentially all possible basis (cluster) functions to be included in the analysis and offers a specific recipe for choosing atomic configurations for the training data.
We show that cluster expansion models constructed using CS predict more accurately than current state-of-the-art methods, require little user intervention during the construction process, and are orders of magnitude faster to build than current methods. A Bayesian implementation of CS is found to be even faster than the typical constrained-optimization approach, is free of user-optimized parameters, and naturally produces error bars on its predictions. The speed and hands-off nature of Bayesian compressive sensing (BCS) makes it a valuable tool for automatically constructing models for many different materials. Combining BCS with high-throughput binary alloy data sets, we automatically construct CE models for all binary alloy systems. This work represents a major stride in materials science and advanced-materials development.
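The sparse-selection step that CS performs for cluster expansions can be sketched with a toy model: treat the energy of each training structure as a linear combination of a few nonzero effective cluster interactions (ECIs) and recover them from limited data. The sketch below uses orthogonal matching pursuit rather than the Bayesian CS solver of the dissertation, and the correlation matrix, problem sizes, and noise-free energies are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_structs, n_active = 60, 40, 4

# Synthetic "true" effective cluster interactions: only a few are nonzero
eci_true = np.zeros(n_clusters)
idx = rng.choice(n_clusters, n_active, replace=False)
eci_true[idx] = rng.uniform(0.5, 1.5, n_active) * rng.choice([-1.0, 1.0], n_active)

Pi = rng.standard_normal((n_structs, n_clusters))   # stand-in correlation matrix
E = Pi @ eci_true                                    # noise-free training energies

# Orthogonal matching pursuit: greedily add the cluster function most
# correlated with the residual, then refit all selected ECIs by least squares.
support, resid = [], E.copy()
for _ in range(n_active):
    j = int(np.argmax(np.abs(Pi.T @ resid)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(Pi[:, support], E, rcond=None)
    resid = E - Pi[:, support] @ coef

eci = np.zeros(n_clusters)
eci[support] = coef
```

Because the expansion is sparse, far fewer training structures than candidate clusters suffice; the Bayesian solver used in the dissertation additionally supplies error bars, which this greedy sketch does not.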
59

Approximate maximum-likelihood methods for blind classification and identification in digital communications

Barembruch, Steffen 22 September 2010 (has links) (PDF)
This thesis considers the blind classification of linear modulations in digital communications over frequency- (and time-) selective channels. We use the maximum-likelihood approach and develop several model estimators.
60

Computational Optical Imaging Systems for Spectroscopy and Wide Field-of-View Gigapixel Photography

Kittle, David S. January 2013 (has links)
This dissertation explores computational optical imaging methods that circumvent the physical limitations of classical sensing. An ideal imaging system would maximize resolution in time, spectral bandwidth, three-dimensional object space, and polarization. In practice, increasing any one parameter correspondingly decreases the others.

Spectrometers strive to measure the power spectral density of the object scene. Traditional pushbroom spectral imagers acquire high spectral and spatial resolution at the expense of acquisition time. Multiplexed spectral imagers acquire spectral and spatial information at each instant of time. Using a coded aperture and a dispersive element, the coded aperture snapshot spectral imagers (CASSI) described here leverage correlations between voxels in the spatial-spectral data cube to compressively sample the power spectral density with minimal loss in spatial-spectral resolution while maintaining high temporal resolution.

Photography is limited by similar physical constraints. Low f/# systems are required for high spatial resolution, to circumvent diffraction limits and allow more photon transfer to the film plane, but they require larger optical volumes and more optical elements. Wide-field systems similarly suffer from increasing complexity and optical volume. By incorporating a multi-scale optical system, the f/#, resolving power, optical volume, and field of view become much less coupled. This system uses a single objective lens that images onto a curved spherical focal plane, which is relayed by small micro-optics to discrete focal planes. This design methodology allows for gigapixel designs at low f/# that weigh only a few pounds and are smaller than a one-foot hemisphere.

Computational imaging systems add the necessary steps of forward modeling and calibration. Since the mapping from object space to image space is no longer directly readable, post-processing is required to display the desired data. The CASSI system uses an undersampled measurement matrix that requires inversion, while the multi-scale camera requires image stitching and compositing methods for the billions of pixels in the image. Calibration methods and a testbed developed specifically for these computational imaging systems are demonstrated. / Dissertation
