21

Statistical physics for compressed sensing and information hiding / Física Estatística para Compressão e Ocultação de Dados

Manoel, Antonio André Monteiro 22 September 2015 (has links)
This thesis is divided into two parts. In the first part, we show how problems of statistical inference and combinatorial optimization may be approached within a unified framework that employs tools from fields as diverse as machine learning, statistical physics and information theory, allowing us to i) design algorithms to solve the problems, ii) analyze the performance of these algorithms both empirically and analytically, and iii) compare the results obtained with the optimal achievable ones. In the second part, we use this framework to study two specific problems, one of inference (compressed sensing) and the other of optimization (information hiding). In both cases, we review current approaches, identify their flaws, and propose new schemes to address these flaws, building on the use of message-passing algorithms, variational inference techniques, and spin glass models from statistical physics.
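
For readers unfamiliar with the message-passing algorithms this abstract refers to, the following is a minimal sketch of approximate message passing (AMP) for compressed sensing with a soft-thresholding denoiser. The synthetic data, threshold rule, and iteration count are illustrative assumptions, not the estimators developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 1000, 400, 50                       # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # i.i.d. Gaussian sensing matrix
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    def soft(u, t):
        """Soft-thresholding denoiser eta(u; t)."""
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

    x, z = np.zeros(n), y.copy()
    for _ in range(30):
        tau = np.sqrt(np.mean(z ** 2))            # estimate of the effective noise level
        x_new = soft(x + A.T @ z, tau)            # denoise the pseudo-data
        z = y - A @ x_new + z * np.count_nonzero(x_new) / m  # residual plus Onsager term
        x = x_new
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
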
22

Compressed Sensing for 3D Laser Radar / Compressed Sensing för 3D Laserradar

Fall, Erik January 2014 (has links)
High-resolution 3D images are of great interest in military operations, where the data can be used to classify and identify targets. The Swedish Defence Research Agency (FOI) is interested in the latest research and technologies in this area. A drawback with conventional 3D laser systems is their lack of high resolution at long measurement ranges. One technique for high-resolution long-range laser radar is based on time-correlated single photon counting (TCSPC). By repeatedly sending out short laser pulses and measuring the time of flight (TOF) of single reflected photons, extremely accurate range measurements can be made. A drawback of this method is that it is hard to build single-photon detectors with many pixels and high temporal resolution, hence a single detector is used. Scanning an entire scene with one detector is very time-consuming; instead, as this thesis investigates, the entire scene can be captured with fewer measurements than the number of pixels. To do this a technique called compressed sensing (CS) is introduced. CS exploits the fact that signals are normally compressible and can be represented sparsely in some basis. CS imposes different sampling requirements than the classical Shannon-Nyquist sampling theorem. With a digital micromirror device (DMD), linear combinations of the scene can be reflected onto the single-photon detector, producing scalar intensity values as measurements. This means that fewer DMD patterns than the number of pixels suffice to reconstruct the entire 3D scene. In this thesis, a computer model of the laser system is used to evaluate different CS reconstruction methods under different scenarios for the laser system and the scene. The results show how the choice of basis representation determines how many measurements are required to reconstruct scenes properly and how the design of the DMD patterns affects the results. CS enables a large reduction, 85-95%, in the required number of measurements compared to a pixel-by-pixel scanning system. Total variation minimization proves to be the best choice of reconstruction method.
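
As a rough illustration of the single-detector measurement model described above, the sketch below simulates pseudo-random patterns reflected onto one detector and recovers the scene with plain ISTA. The +/-1 patterns (standing in for differences of complementary 0/1 DMD exposures), the pixel-sparse scene, and the regularization settings are illustrative assumptions; the thesis itself relies on a full laser-system model and total variation minimization rather than this solver.

    import numpy as np

    rng = np.random.default_rng(1)
    side = 16                                    # 16 x 16 pixel scene
    n = side * side
    scene = np.zeros(n)
    scene[rng.choice(n, 8, replace=False)] = rng.uniform(0.5, 1.0, 8)  # a few bright reflectors

    m = n // 4                                   # 4x fewer measurements than pixels
    # +/-1 patterns stand in for differences of complementary DMD exposures;
    # normalization by sqrt(m) keeps the operator well conditioned.
    patterns = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
    y = patterns @ scene                         # scalar intensities on the single detector

    # ISTA: iterative soft thresholding for 0.5*||y - P x||^2 + lam*||x||_1
    lam = 0.05
    step = 1.0 / np.linalg.norm(patterns, 2) ** 2
    x = np.zeros(n)
    for _ in range(1000):
        x = x - step * (patterns.T @ (patterns @ x - y))            # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)    # shrinkage step
    print("reconstruction SNR (dB):",
          10 * np.log10(np.sum(scene ** 2) / np.sum((x - scene) ** 2)))
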
24

Block Compressed Sensing of Images and Video

Mun, Sungkwang 15 December 2012 (has links)
Compressed sensing is an emerging approach for signal acquisition wherein theory has shown that a small number of linear, random projections of a signal contain enough information for reconstruction of the signal. Despite its potential to enable lightweight and inexpensive sensing hardware that simultaneously combines signal acquisition and dimensionality reduction, the compressed sensing of images and video still entails several challenges, in particular, a sensing-measurement operator which is difficult to apply in practice due to the heavy memory and computational burdens. Block-based random image sampling coupled with a projection-driven compressed-sensing recovery is proposed to address this challenge. For images, the block-based image acquisition is coupled with reconstruction driven by a directional transform that encourages spatial sparsity. Specifically, both contourlets as well as complex-valued dual-tree wavelets are considered for their highly directional representation, while bivariate shrinkage is adapted to their multiscale decomposition structure to provide the requisite sparsity constraint. Smoothing is achieved via a Wiener filter incorporated into iterative projected Landweber compressed-sensing recovery, yielding fast reconstruction. Also considered is an extension of the basic reconstruction algorithm that incorporates block-based measurements in the domain of a wavelet transform. The proposed image recovery algorithm and its extension yield images with quality that matches or exceeds that produced by a popular, yet computationally expensive, technique which minimizes total variation. Additionally, reconstruction quality is substantially superior to that from several prominent pursuit-based algorithms that do not include any smoothing. For video, motion estimation and compensation are utilized to promote temporal sparsity. Because video sequences have temporal redundancy in regions where objects move against a still background, a residual between the current frame and the previous frame compensated by object motion is shown to be more sparse than the original frame itself. By using residual reconstruction, information contained in the previous frame contributes to the reconstruction of the current frame. The proposed block-based compressed-sensing reconstruction for video outperforms a simple frame-by-frame reconstruction as well as a 3D volumetric reconstruction in terms of visual quality. Finally, quantization of block-based compressed-sensing measurements is considered in order to generate a true bitstream from a compressed-sensing image acquisition. Specifically, a straightforward process of quantization via simple uniform scalar quantization applied in conjunction with differential pulse code modulation of the block-based compressed-sensing measurements is proposed. Experimental results demonstrate significant improvement in rate-distortion performance as compared to scalar quantization used alone in several block-based compressed-sensing reconstruction algorithms. Additionally, rate-distortion performance superior to that of alternative quantized-compressed-sensing techniques relying on optimized quantization or reconstruction is observed.
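
A minimal sketch of the block-based acquisition step described above, assuming an illustrative block size, subrate, and random stand-in image; the reconstruction stage (smoothed projected Landweber with a directional transform) is intentionally omitted.

    import numpy as np

    rng = np.random.default_rng(2)
    B, H, W = 8, 64, 64                  # block size and image dimensions
    subrate = 0.25                       # measurements per pixel
    m_b = int(subrate * B * B)           # measurements per block
    Phi_B = rng.standard_normal((m_b, B * B)) / np.sqrt(m_b)

    image = rng.random((H, W))           # stand-in for a natural image

    def block_measure(img, Phi, B):
        """Apply the same small operator Phi to every BxB block of img."""
        rows, cols = img.shape
        y = []
        for i in range(0, rows, B):
            for j in range(0, cols, B):
                y.append(Phi @ img[i:i + B, j:j + B].reshape(-1))
        return np.stack(y)               # shape: (number of blocks, m_b)

    y = block_measure(image, Phi_B, B)
    print("pixels:", H * W, "-> measurements:", y.size)
    # Storing Phi_B takes m_b * B^2 values, versus a full-frame operator of size
    # (subrate*H*W) x (H*W) -- the memory saving that motivates block-based sampling.
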
25

Compressed sensing with approximate message passing : measurement matrix and algorithm design

Guo, Chunli January 2015 (has links)
Compressed sensing (CS) is an emerging technique that exploits the properties of a sparse or compressible signal to efficiently and faithfully capture it with a sampling rate far below the Nyquist rate. The primary goal of compressed sensing is to achieve the best signal recovery with the least number of samples. To this end, two research directions have been receiving increasing attention: customizing the measurement matrix to the signal of interest and optimizing the reconstruction algorithm. In this thesis, contributions in both directions are made in the Bayesian setting for compressed sensing. The work presented in this thesis focuses on approximate message passing (AMP) schemes, a new class of recovery algorithms that take advantage of the statistical properties of the CS problem. First, a complete sample distortion (SD) framework is presented to fundamentally quantify the reconstruction performance for a given pair of measurement matrix and recovery scheme. In the SD setting, the non-optimality region of the homogeneous Gaussian matrix is identified and a novel zeroing matrix is proposed with improved performance. Within the SD framework, the optimal sample allocation strategy for the block diagonal measurement matrix is derived for the wavelet representation of natural images. Extensive simulations validate the optimality of the proposed measurement matrix design. Motivated by the zeroing matrix, we extend the seeded matrix design in the CS literature to a novel modulated matrix structure. The major advantage of the modulated matrix over the seeded matrix lies in the simplicity of its state evolution dynamics. Together with an AMP-based algorithm, the modulated matrix possesses a 1-D performance prediction system with which the matrix configuration can be optimized. We then focus on a special modulated matrix form, designated the two block matrix, which can also be seen as a generalization of the zeroing matrix. The effectiveness of the two block matrix is demonstrated on both sparse and compressible signals. The underlying reason for the improved performance is presented through an analysis of the state evolution dynamics. The final contribution of the thesis explores improving the reconstruction algorithm. By taking the signal prior into account, the Bayesian optimal AMP (BAMP) algorithm is shown to dramatically improve reconstruction quality. The key insight behind its success is that it uses the minimum mean square error (MMSE) estimator for CS denoising. However, the need for prior information often makes it impractical. A novel SURE-AMP algorithm is proposed to address this dilemma. The critical feature of SURE-AMP is that a Stein’s unbiased risk estimate (SURE) based parametric least squares estimator replaces the MMSE estimator. Since the optimization of the SURE estimator involves only the noisy data, it eliminates the need for the signal prior and can thus accommodate more general sparse models.
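
As an illustration of the state evolution dynamics mentioned above, the sketch below runs the scalar state-evolution recursion that predicts AMP's per-iteration MSE for an i.i.d. Gaussian matrix, evaluated by Monte Carlo for a Bernoulli-Gaussian prior and a soft-thresholding denoiser. The parameters are illustrative assumptions; the modulated and two block matrices studied in the thesis have a richer, multi-component state evolution than this homogeneous case.

    import numpy as np

    rng = np.random.default_rng(3)
    delta, rho, sigma_w = 0.4, 0.05, 0.01    # m/n, sparsity k/n, channel noise std
    samples = 200_000

    # Bernoulli-Gaussian signal prior, sampled once for the Monte Carlo averages.
    X = rng.standard_normal(samples) * (rng.random(samples) < rho)

    def soft(u, t):
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

    tau2 = sigma_w ** 2 + np.mean(X ** 2) / delta      # effective noise at iteration 0
    for t in range(15):
        Z = rng.standard_normal(samples)
        mse = np.mean((soft(X + np.sqrt(tau2) * Z, np.sqrt(tau2)) - X) ** 2)
        tau2 = sigma_w ** 2 + mse / delta              # state-evolution update
        print(f"iteration {t:2d}: predicted MSE = {mse:.6f}")
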
26

Source-Channel Mappings with Applications to Compressed Sensing

ABOU SALEH, AHMAD 29 July 2011 (has links)
Shannon showed that tandem source-channel coding is optimal given unlimited delay and complexity in the coders. Under low-delay and low-complexity constraints, joint source-channel coding may achieve better performance. Although digital joint source-channel coding has shown a noticeable gain in terms of reconstructed signal quality, coding delay, and complexity, it suffers from the leveling-off effect; analog systems do not. In this thesis, we investigate the advantage of analog systems based on the Shannon-Kotel’nikov approach and of hybrid digital-analog coding systems, which combine digital and analog schemes to achieve graceful degradation/improvement over a wide range of channel conditions. First, we propose a low-delay, low-complexity hybrid digital-analog coding scheme that achieves high (integer) expansion ratios (>3). This is achieved by combining the spiral mapping with multiple-stage quantizers. The system is simulated for a 1:3 bandwidth expansion, and its behavior for a 1:M system (with M an integer >3) is studied in the low-noise regime. Next, we propose an analog joint source-channel coding system that achieves a low (fractional) expansion ratio between 1 and 2. More precisely, this is an N:M bandwidth expansion system based on combining uncoded transmission with a 1:2 bandwidth expansion system (with N < M < 2N). Finally, a 1:2 analog bandwidth expansion system using the (Shannon-Kotel’nikov) Archimedes’ spiral mapping is used in the compressed sensing context, which is inherently analog, to increase the system’s immunity against channel noise. The proposed system is compared to a conventional compressed sensing system that assumes noiseless transmission and to a compressed sensing based system that accounts for noise during signal reconstruction. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-07-29 02:30:11.978
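
As a rough illustration of the 1:2 bandwidth-expansion idea, the sketch below maps each source sample onto a double Archimedean spiral, transmits the two coordinates over an AWGN channel, and decodes by nearest-point search. The spiral pitch, amplitude scaling, and search grid are illustrative choices rather than the optimized parameters of the thesis.

    import numpy as np

    rng = np.random.default_rng(4)
    delta = 1.5                        # spiral pitch (distance between arms)
    scale = 4.0                        # maps source amplitude to spiral angle

    def encode(x):
        """Map a scalar source sample to a point on the double Archimedean spiral."""
        theta = scale * abs(x)
        arm = 1.0 if x >= 0 else -1.0
        return arm * (delta / np.pi) * theta * np.array([np.cos(theta), np.sin(theta)])

    grid = np.linspace(-3.0, 3.0, 4001)            # decoder's search grid
    codebook = np.array([encode(g) for g in grid])

    def decode(r):
        """Nearest-point (ML-style) decoding over the precomputed codebook."""
        return grid[np.argmin(np.sum((codebook - r) ** 2, axis=1))]

    x = rng.standard_normal(2000)                  # unit-variance Gaussian source
    noise_std = 0.1
    x_hat = np.array([decode(encode(xi) + noise_std * rng.standard_normal(2))
                      for xi in x])
    print("output SDR (dB):",
          10 * np.log10(np.mean(x ** 2) / np.mean((x - x_hat) ** 2)))
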
27

The effects of hyperbaric environments on exercise metabolism

Hanson, R.de G. January 1979 (has links)
No description available.
28

Development of a liner-less composite CNG cylinder and improved mechanical properties of cylinder materials

Iqbal, Kosar. January 2008 (has links)
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2008. / Includes bibliographical references (leaves 82-90). Also available in electronic version.
29

Influence of material type, aggregate size, and unconfined compressive strength on water jetting of CIDH pile anomalies

Heavin, Joseph Carl. Fiegel, Gregg L. January 1900 (has links)
Thesis (M.S.)--California Polytechnic State University, 2010. / Mode of access: Internet. Title from PDF title page; viewed on April 6, 2010. Major professor: Gregg L. Fiegel, PhD, PE, GE. "Presented to the faculty of California Polytechnic State University, San Luis Obispo." "In partial fulfillment of the requirements for the degree of Master of Science in Civil and Environmental Engineering." "March 2010." Includes bibliographical references (p. 116-117).
30

Efficient speech storage via compression of silence periods

Gan, Cheong Kuoon January 1984 (has links)
An adaptive optimal silence detector is designed and implemented in four speech coding schemes: N-bit PCM (N = 5 to 12), N-bit A-law PCM (N = 4 to 8), N-bit ADPCM (N = 3 to 8) and ADM (adaptive delta modulation) at bit rates of 16 kbps, 24 kbps and 32 kbps. The amount of compression is approximately 35% for voice recordings such as radio newscasts, highly active conversations and readings from prepared texts. Subjective evaluation shows that the silence-edited versions (silence played back as absolute silence) have acceptability scores 1.07 lower than the unedited versions for a given coding scheme, on a score range of 1 to 5. With noise-edited versions (silence replaced by random noise during playback) the score degradation is 0.5. / Faculty of Applied Science / Department of Electrical and Computer Engineering / Graduate
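
As a simplified illustration of silence detection, the sketch below uses a fixed frame-energy threshold with a short hangover. The frame length, threshold, and hangover values are illustrative assumptions and much simpler than the adaptive optimal detector designed in the thesis.

    import numpy as np

    def silence_mask(speech, rate=8000, frame_ms=20, thresh_db=-35.0, hangover=5):
        """Return a per-frame boolean mask that is True for droppable (silent) frames."""
        frame = int(rate * frame_ms / 1000)
        n_frames = len(speech) // frame
        frames = speech[:n_frames * frame].reshape(n_frames, frame)
        energy_db = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
        active = energy_db >= energy_db.max() + thresh_db   # threshold relative to peak
        keep = np.zeros(n_frames, dtype=bool)
        count = 0
        for i in range(n_frames):
            if active[i]:
                count = hangover        # restart hangover on every active frame
            keep[i] = active[i] or count > 0
            if count > 0:
                count -= 1              # hangover avoids clipping word endings
        return ~keep

    # Example: 1 s of low-level noise with a 0.3 s tone burst in the middle.
    rng = np.random.default_rng(5)
    sig = 0.01 * rng.standard_normal(8000)
    sig[3000:5400] += np.sin(2 * np.pi * 300 * np.arange(2400) / 8000)
    print("fraction of frames marked silent:", silence_mask(sig).mean())
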
