  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

A compressive sensing approach to solving nonograms

Lopez, Oscar Fabian 12 December 2013 (has links)
A nonogram is a logic puzzle in which one shades certain cells of a 2D grid to reveal a hidden image. The sequences of numbers to the left of and above the grid indicate how many cells to shade, and which. We propose a new technique for solving a nonogram using compressive sensing. Our method avoids (1) partial fill-ins, (2) heuristics, and (3) over-complication, and only requires solving a binary integer programming problem.
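The abstract's compressive-sensing formulation is not reproduced here, but the row/column run constraints it encodes can be illustrated with a brute-force sketch (hypothetical helper names; practical only for tiny grids, since the search space is 2^(h·w)):

```python
from itertools import product

def runs(cells):
    """Lengths of maximal runs of shaded (1) cells in a line."""
    out, n = [], 0
    for c in cells:
        if c:
            n += 1
        elif n:
            out.append(n)
            n = 0
    if n:
        out.append(n)
    return out

def solve(row_clues, col_clues):
    """Brute-force search over all 0/1 grids satisfying both clue sets."""
    h, w = len(row_clues), len(col_clues)
    for bits in product([0, 1], repeat=h * w):
        g = [bits[r * w:(r + 1) * w] for r in range(h)]
        if all(runs(g[r]) == row_clues[r] for r in range(h)) and \
           all(runs([g[r][c] for r in range(h)]) == col_clues[c] for c in range(w)):
            return g
    return None

# A 3x3 "plus" shape: row clues [1],[3],[1]; column clues [1],[3],[1]
print(solve([[1], [3], [1]], [[1], [3], [1]]))
# -> [(0, 1, 0), (1, 1, 1), (0, 1, 0)]
```

The thesis replaces this exponential search with a single binary integer program; the sketch only shows what a valid solution must satisfy.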
22

Adaptive Feature-Specific Spectral Imaging Classifier (AFSSI-C)

Dunlop, Matthew, Poon, Phillip 10 1900 (has links)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / The AFSSI-C is a spectral imager that generates spectral classifications directly, in fewer measurements than are required by traditional systems, which measure the full spectral datacube and interpret it afterwards to make a material classification. By using adaptive features to continually update the conditional probabilities of the competing hypotheses, the AFSSI-C avoids the overhead of directly measuring every element of the spectral datacube. The system architecture, feature design methodology, simulation results, and preliminary experimental results are given.
23

On Invertibility of the Radon Transform and Compressive Sensing

Andersson, Joel January 2014 (has links)
This thesis contains three articles. The first two concern inversion and local injectivity of the weighted Radon transform in the plane. The third paper concerns two of the key results from compressive sensing. In Paper A we prove an identity involving three singular double integrals. This is then used to prove an inversion formula for the weighted Radon transform, allowing all weight functions that have been considered previously. Paper B is devoted to stability estimates of the standard and weighted local Radon transform. The estimates hold for functions that satisfy an a priori bound. When weights are involved, they must solve a certain differential equation and fulfill some regularity assumptions. In Paper C we present some new constant bounds. Firstly, we present a version of the theorem of uniform recovery of random sampling matrices, where explicit constants have not been presented before. Secondly, we improve the condition under which the so-called restricted isometry property implies the null space property.
24

Magnetic resonance image reconstruction based on compressive sensing using structural a priori information in a stochastic approach

Almeida, Daniel Lucas Ferreira e 15 February 2017 (has links)
Master's thesis (Dissertação de mestrado) — Universidade de Brasília, Faculdade Gama, Programa de Pós-Graduação em Engenharia Biomédica, 2017. / The use of images obtained through magnetic resonance imaging (MRI) helps in the diagnosis and follow-up of the most diverse pathologies that affect the human body. However, MRI costs more than other imaging techniques that cannot produce images of the same objective quality, and this may hinder the work of health professionals. This higher cost is due to the price of the equipment and its maintenance, as well as to the low number of exams that can be carried out per day compared with techniques such as computed tomography. The acquisition process is also inherently slower, since its duration depends on the large number of measurements extracted by the scanner. To reduce the number of measurements required, alternatives to traditional reconstruction techniques have been studied, such as those based on compressive sensing (CS). CS makes it possible to reconstruct a signal from far fewer measurements than the Nyquist criterion requires. In addition, magnetic resonance imaging meets the minimum requirements for applying the technique: the signal has a sparse representation in a transformed domain, and the measurements acquired by the scanner are already naturally encoded. These techniques have been effective in decreasing the number of measurements while ensuring good objective quality of the reconstructed image, but there is still room for further reduction. One alternative is to find a sparsifying transform that makes the signal as sparse as possible, such as the two-dimensional Fourier transform, wavelet-based transforms, and pre-filtering. The use of a priori information together with CS-based reconstruction algorithms can also decrease the number of measurements. This information may be characterized by prior statistical data about the image or by deterministic information about it. In this work, we propose a stochastic model of the a priori information to be used in CS-based reconstruction algorithms for magnetic resonance images with pre-filtering. Our approach generates a probabilistic spread around a point that is likely to belong to the support of the sparse version of the image to be reconstructed. This spread is intended to ensure that the image can still be reconstructed when the support point changes position, for example when a patient moves inside the scanner. To validate the technique, we apply it to one-dimensional and two-dimensional signals, Shepp-Logan phantom images, and real MR images. Systematic tests on one-dimensional signals show that the stochastic approach yields better reconstructions than the approach without a priori information, with an SER about 100 dB higher for some numbers of measurements ℓ. For two-dimensional signals and phantoms, we present a case study involving the reconstruction of one signal of each type. The results corroborate the one-dimensional findings, with the SER of the stochastic approach about 10 dB higher than that of the deterministic approach, despite the low statistical significance. The MR imaging tests included reconstructing shifted images to simulate patient movement during the exam. We also analyzed the influence of the number of radial lines on the reconstruction, as well as of the covariance matrix used to generate the spreading function. The results show that the stochastic approach always performs well compared with the alternatives, and its performance is often superior. At some critical points, for instance, the SER of the stochastic approach is 6 dB higher than the ideal-prior case and more than 10 dB higher than the non-ideal case. Importantly, the stochastic approach performs consistently well (a mean SER of about 21 dB and a mean SSIM index of about 0.7), even in cases where the other approaches fail. These results open the way for investigating other uses of this a priori information to further reduce the number of measurements required for reconstruction. It is also important to carry out a theoretical study quantifying the reconstruction probability as a function of the stochastic representation of the a priori information and of the number of available measurements.
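As a generic illustration of the sparse recovery that CS-based reconstruction relies on (not the thesis's stochastic prior or pre-filtering method), the sketch below recovers a sparse vector from fewer measurements than unknowns using orthogonal matching pursuit with a low-coherence measurement matrix built from Hadamard rows; all names and dimensions are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A @ x."""
    support, residual = [], y.astype(float)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# 7 rows of an 8x8 Hadamard matrix give unit-norm columns with mutual
# coherence 1/7, which provably allows exact OMP recovery of 3-sparse signals.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H8 = np.kron(np.kron(H2, H2), H2)
A = H8[:7] / np.sqrt(7)

x_true = np.zeros(8)
x_true[[1, 4, 6]] = [2.0, -1.0, 0.5]
x_hat = omp(A, A @ x_true, 3)
print(np.allclose(x_hat, x_true))  # True: exact recovery from 7 measurements of 8 unknowns
```

Real MRI reconstructions use far larger systems and l1-based solvers, but the principle — fewer measurements than unknowns, exact recovery under sparsity — is the same.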
25

Objective comparison of magnetic resonance images using Compressive Sensing in different multilevel decomposition structures

Paiva, Gian Lucas de Oliveira 19 July 2017 (has links)
Master's thesis (Dissertação de mestrado) — Universidade de Brasília, Faculdade UnB Gama, Programa de Pós-Graduação em Engenharia Biomédica, 2017. / Magnetic resonance imaging (MRI) is one of several medical imaging modalities used for diagnosis, disease monitoring, and treatment planning. It produces images with better contrast and does not emit ionizing radiation, which makes it an attractive choice for exams. However, its higher cost and longer examination time make widespread use more difficult. Reducing exam time has become an important research topic in signal processing in recent years. Compressive sensing is a technique that has been applied in several magnetic resonance studies. Its use with sparsifying transforms opens a wide range of possibilities involving filters applied to the image information. Wavelets are common transforms used in MRI and compressive sensing. One type of wavelet little explored in MRI is the dual-tree wavelet transform, which has some advantages over the ordinary wavelet transform. In this work, we hypothesize that the dual-tree wavelet transform is superior to ordinary wavelet transforms because it can sparsify better, thanks to its greater directional selectivity for images. Filter banks were used to implement the transforms and also as a method of reconstructing the pre-filtered images recovered by compressive sensing. The filter-bank method was compared with the spectral recomposition method using objective quality metrics (SNR and SSIM). The methods were also compared with respect to reconstruction time. Filters of different types and families were compared with each other, using the filter bank as the reconstruction method. A set of 73 head images was used to evaluate the results statistically and verify whether the difference in the quality of images recovered using different filters is statistically significant. The results indicated that, for the Haar filters, the spectral recomposition method was superior to the filter-bank method, with differences reaching 14 dB in SNR and 0.1 in SSIM for the same image at one level of decomposition. The dtf4 filter with two decomposition levels obtained similar quality for both the filter-bank and spectral recomposition methods. The filter bank improved in quality as the number of decomposition levels increased, while spectral recomposition was almost insensitive to additional levels, showing only a slight improvement from the first level to the second. The objective comparison of different filters using the filter bank as the reconstruction method showed that the four dual-tree filters obtained significantly better results than the other wavelet filters in all cases, with mean SNR values up to 3 dB higher. The coiflet family presented, on average, results close to those of the dual-tree filters. The reverse biorthogonal filters 3.1 and 3.3 had the worst results, followed by the Haar filter and the reverse biorthogonal filters 1.3 and 1.5. The recomposition times for the filter bank were up to 30 times shorter than those of spectral recomposition, although this time is negligible compared with the total image reconstruction time. We conclude that the filter-bank method, using the dual-tree filters, reconstructs images with quality similar to that of spectral recomposition, with shorter recomposition times and the same amount of information. The dual-tree filters also proved superior to the ordinary wavelet filters when used with the filter bank.
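The Haar filters compared above are the simplest wavelet filter bank; a minimal sketch of a one-level Haar analysis/synthesis pair (illustrative only, not the dual-tree or dtf4 filters from the study) shows the perfect-reconstruction property that the filter-bank method relies on:

```python
import numpy as np

def haar_analysis(x):
    """One-level Haar filter bank: split x (even length) into approximation and detail."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass branch, downsampled by 2
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass branch, downsampled by 2
    return a, d

def haar_synthesis(a, d):
    """Inverse filter bank: interleave the two upsampled branches."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 2.0, 0.0])
a, d = haar_analysis(x)
print(np.allclose(haar_synthesis(a, d), x))  # True: perfect reconstruction
```

Multilevel decomposition, as in the thesis, repeats the analysis step on the approximation branch; the dual-tree transform runs two such trees in parallel to gain directional selectivity.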
26

Processing of surface EMG (S-EMG) signals using Compressive Sensing

Moura, Igor Luiz Bernardes de 08 June 2015 (has links)
Master's thesis (Dissertação de mestrado) — Universidade de Brasília, Faculdade Gama, Programa de Pós-Graduação em Engenharia Biomédica, 2015. / Compressive Sensing (CS) is a recent technique that exploits the sparsity of a signal to sample it at a rate below the Nyquist rate. Research linking CS to the reconstruction of surface electromyography (S-EMG) signals is still incipient, but it indicates that the technique can be used for processing and data recovery. This work carries out a computational test to evaluate multiple combinations of parameters: the maximum loss interval for reconstruction, the lp metric to be minimized, the minimum percentage of samples for using CS, and the type of acquisition matrix (binary or random). The goal is to determine the best values for recovering and reconstructing S-EMG signals with CS, so that damaged sections of the signal can be recovered in the post-processing step. A simulated signal was used for the tests and served as a reference for comparison with the experimentally reconstructed signals. Because S-EMG signals are naturally non-sparse, they were sparsified using a 32-channel filter bank, with CS applied to each component. A loop was implemented to determine the best combination of the maximum interval size, the lp metric to be minimized, the percentage of samples, and the type of acquisition matrix. Once determined, these parameters were applied to multiple intervals distributed over the signal to assess the technique's ability to recover a highly compromised signal. The results indicate a ratio between the reconstructed and original signals of approximately 29.59 dB when a single 40-point interval was recovered, and 25.01 dB for five intervals. Given that the test signal had 1025 samples, it was possible to reconstruct about 20% of it with CS. Since micro-oscillations in the signal curve are not compromising in S-EMG analysis, and other parameters and features are more relevant (e.g. RMS, ARV, MNF, MDF, CV), the objective was achieved and the use of CS proved very promising.
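The reconstructed-to-original ratios quoted above (29.59 dB and 25.01 dB) are signal-to-error ratios; a minimal sketch of how such a figure can be computed, using a synthetic stand-in rather than a real S-EMG signal:

```python
import numpy as np

def reconstruction_snr_db(original, reconstructed):
    """Ratio of signal power to reconstruction-error power, in dB."""
    original = np.asarray(original, dtype=float)
    err = original - np.asarray(reconstructed, dtype=float)
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(err ** 2))

# Synthetic 1025-sample stand-in for an S-EMG test signal, plus small
# reconstruction error to mimic an imperfect CS recovery.
x = np.sin(np.linspace(0, 4 * np.pi, 1025))
x_rec = x + 0.001 * np.random.default_rng(1).standard_normal(1025)
print(round(reconstruction_snr_db(x, x_rec), 1))
```

Higher values mean the reconstruction is closer to the original; the thesis reports this kind of figure for the recovered intervals.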
27

Measurement Quantization in Compressive Imaging

Lin, Yuzhang January 2016 (has links)
In compressive imaging, measurement quantization and its impact on overall system performance is an important problem. This work considers several challenges that derive from quantization of compressive measurements. We investigate the design of scalar quantizers (SQ), vector quantizers (VQ), and tree-structured vector quantizers (TSVQ) for information-optimal compressive imaging. The performance of these quantizer designs is quantified for a variety of compression rates and measurement signal-to-noise ratios (SNR) using simulation studies. Our simulation results show that in the low-SNR regime a low bit depth (3 bits per measurement) SQ is sufficient to minimize the degradation due to measurement quantization. However, in the mid-to-high-SNR regime, quantizer design requires a higher bit depth to preserve the information in the measurements. Simulation results also confirm the superior performance of VQ over SQ. As expected, TSVQ provides a good tradeoff between complexity and performance, bounded by the VQ and SQ designs on either side of the performance/complexity limits. In compressive imaging, the size of the final measurement data (in bits) is also an important system design metric. We therefore also optimize the compressive imaging system using this metric and investigate how to optimally allocate the number of measurements and the bits per measurement, i.e. the rate allocation problem. This problem is solved using both an empirical data-driven approach and a model-based approach. As a function of compression rate (bits per pixel), our simulation results show that compressive imaging can outperform traditional (non-compressive) imaging followed by image compression (JPEG 2000) in the low-to-mid-SNR regime. However, in the high-SNR regime, traditional imaging with image compression offers higher image fidelity than compressive imaging at a given data rate. Compressive imaging using blockwise measurements is partly limited by its inability to perform global rate allocation. We also develop an optimal minimum mean-square error (MMSE) reconstruction algorithm for quantized compressed measurements. The algorithm employs a Markov chain Monte Carlo (MCMC) sampling technique to estimate the posterior mean. Simulation results show significant improvement over approximate MMSE algorithms.
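A minimal sketch of the low-bit-depth uniform scalar quantization discussed above (an illustrative midpoint quantizer, not the information-optimal designs from the thesis):

```python
import numpy as np

def uniform_sq(y, bits, lo, hi):
    """Uniform scalar quantizer: map each measurement to the midpoint of its cell."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((y - lo) / step), 0, levels - 1)  # clip out-of-range values
    return lo + (idx + 0.5) * step

# Quantize noisy-measurement stand-ins at several bit depths and compare distortion.
rng = np.random.default_rng(2)
y = rng.standard_normal(10_000)
for b in (3, 5, 8):
    yq = uniform_sq(y, b, -4.0, 4.0)
    print(b, round(float(np.mean((y - yq) ** 2)), 6))
```

The distortion drops roughly as step²/12 with each added bit, which is the tradeoff the thesis's rate-allocation problem balances against the number of measurements.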
28

Material and structural properties of a novel Aer-Tech material

Dan-Jumbo, F. G. January 2015 (has links)
This study critically investigates the material and structural behaviour of Aer-Tech material, which is composed of 10% by volume of foam mechanically entrapped in a plastic mortar. The study showed that the density of the mix controls all other properties, such as fresh-state, mechanical, functional, and acoustic properties. Notably, the research confirmed that Aer-Tech material, despite being classified as a lightweight material, achieved a high compressive strength of about 33.91 N/mm². This compressive strength makes Aer-Tech a potentially cost-effective construction material, comparable to conventional concrete. The study also showed it to be structurally effective, with a singly reinforced beam giving an ultimate moment of about 38.7 kN·m. In addition, Aer-Tech is a very ductile material, since the singly reinforced beam in tension showed visible diagonal vertical cracks long before impending rupture. Finally, the SEM tests and the neural network model predictions showed how billions of tightly packed air cells are evenly distributed within the Aer-Tech void system, and the NN model's predictions of compressive strength and density closely match the experimental results. This shows that the Aer-Tech NN model can simulate input data and predict the corresponding outputs.
29

COMPRESSIVE STRENGTH TO WEIGHT RATIO OPTIMIZATION OF COMPOSITE HONEYCOMB THROUGH ADDITION OF INTERNAL REINFORCEMENTS

Rudd, Jeffrey Roy 18 May 2006 (has links)
No description available.
30

Correlations Between Geometric and Material Properties of Vertebral Bodies and Their Compressive Strength

Stenekes, Jennifer 09 1900 (has links)
Osteoporosis is a disease characterized by reduced bone strength leading to an increased fracture risk. Current diagnostic best practice involves measuring the bone mineral density (BMD) of a patient using absorptiometric imaging tools. This measurement is compared to a known reference value in order to compute fracture risk. This assessment of bone quality is based solely on the BMD, which has been shown to explain only a portion of bone strength; the extent of BMD's contribution is also extensively debated and varies widely in the scientific literature. This thesis work encompasses a preliminary investigation into factors beyond density that contribute to bone strength. The geometric and material properties of 21 vertebral functional unit specimens were measured using dual-energy X-ray absorptiometry (DXA), pQCT (peripheral quantitative computed tomography), and HCT (helical computed tomography) techniques. The strength of the functional units was assessed through mechanical testing under compressive loading conditions. These measurements were amalgamated into multiple linear regression models to characterize vertebral strength in terms of a few key variables. The model developed for failure load had a coefficient of determination of 0.725 and indicated that the volume of the vertebral body as well as the cross-sectional area of the cortical region were significant in explaining failure load. A model was also developed for stress at failure, which indicated that the vertebral body height and cortex concavity were important parameters; the coefficient of determination for this model was 0.871. The goal of this study was to provide a foundation on which further investigation into the explanation of bone strength could be built. Ultimately, a better understanding of the parameters that affect bone strength will provide a basis for more accurate clinical tools for the diagnosis of osteoporosis.
Thesis / Master of Applied Science (MASc)
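The multiple linear regression models described above can be sketched in a few lines; the data here are synthetic stand-ins (hypothetical predictor values), not the thesis's 21 vertebral specimens:

```python
import numpy as np

def fit_linear_model(X, y):
    """Least-squares fit of y ~ intercept + X @ beta; returns coefficients and R^2."""
    Xd = np.column_stack([np.ones(len(y)), X])     # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return beta, r2

# Hypothetical stand-ins for two predictors (e.g. body volume, cortical area)
rng = np.random.default_rng(3)
X = rng.uniform(1.0, 2.0, size=(21, 2))            # 21 specimens, 2 predictors
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, 21)
beta, r2 = fit_linear_model(X, y)
print(r2 > 0.7)  # True: a strong fit, in the spirit of the thesis's R^2 = 0.725 model
```

The coefficient of determination r2 plays the same role as the 0.725 and 0.871 values reported in the abstract: the fraction of variance in strength explained by the chosen predictors.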
