81

Avaliação de representações transformadas para compressão de sinais de eletroencefalografia, com base em análise de componentes principais, decomposições wavelet, transformada discreta de cossenos e compressive sensing / Evaluation of transformed representations for electroencephalography signal compression, based on principal component analysis, wavelet decompositions, the discrete cosine transform and compressive sensing

Tôrres, Filipe Emídio 19 March 2018 (has links)
Dissertação (mestrado)—Universidade de Brasília, Faculdade UnB Gama, Programa de Pós-Graduação em Engenharia Biomédica, 2018. / Made available in DSpace on 2018-09-10 (1 bitstream: 2018_FilipeEmídioTôrres.pdf, 3263020 bytes, MD5 checksum 67052b5b208c8be101de72f84c20c0f9). / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). / Electroencephalography (EEG) signals are used in clinical applications such as sleep-stage analysis, diagnosis and follow-up of epilepsy, monitoring and rehabilitation. This type of signal is also used in Brain-Computer Interface (BCI) systems, and its use is growing in many applications of this kind, such as control of wheelchairs, computers and automobiles. There are, however, common difficulties in acquiring this signal: tens to hundreds of electrodes are often required, and contact failures may occur, demanding periodic electrode changes or renewal of conductive gel. Further difficulties concern storing and transmitting these data on mobile, energy-constrained devices. Several signal processing techniques can therefore reduce the number of sensors required and lower storage and transmission costs. The purpose of this research is to implement and evaluate Compressive Sensing (CS) and four other techniques applied to EEG compression, comparing them in terms of sparsification level and the quality of signals reconstructed from the same number of coefficients. The techniques used are CS, Principal Component Analysis (PCA), Independent Component Analysis (ICA), 30 wavelet families implemented with decomposition filter banks, and the Discrete Cosine Transform (DCT). CS is the most recently developed of these techniques and offers possible advantages over the others in the acquisition phase; this work assesses its viability. Two publicly available databases of real signals are considered for the evaluation: a polysomnography database, the Sleep Heart Health Study, and a study of children from the Massachusetts Institute of Technology (MIT). The study is based on transformation, quantization and coding, and on their inverse processes for signal reconstruction. The reconstructed signals obtained with the different representations are then compared using quantitative metrics: signal-to-noise ratio (SNR), compression factor (CF), a type of residual percentage difference (PRD1) and timing measurements. It was observed that the algorithms can reconstruct the signals from less than 1/3 of the original coefficients, depending on the technique used. In general, DCT and PCA outperform the other techniques on the metrics used. It is worth noting, however, that CS allows a lower acquisition cost, possibly requiring simpler hardware. In fact, all CS-based acquisition could be performed with measurements obtained using only sums of the electrode signals, with no loss relative to measurement matrices that also involve multiplications. Assuming, for example, reconstruction from 50% of the number of signal coefficients in the MIT database, the DCT achieved an SNR of 27.8 dB between the original signal and the reconstruction; PCA reached 24.0 dB, the best wavelets were in the 19 dB range, CS reached 8.3 dB and ICA only 1.1 dB. For the same database, at a CF of 50%, PRD1 was 27.8% for DCT, 24.0% for PCA, 17.2% for the biorthogonal 2.2 wavelet, 8.3% for CS–10 and 1.1% for ICA. The study and use of CS is therefore justified by the lower complexity of its acquisition phase compared to the other techniques, and it even outperforms some of them. The next stage of the research will evaluate multichannel compression, to verify the performance of each technique when exploiting inter-channel redundancy, as well as tools that may improve CS performance, such as a priori information and pre-filtering of the signals.
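The transform-then-threshold comparison described in the abstract can be illustrated with a short sketch. The snippet below is only a minimal Python illustration, not the author's code: it applies a DCT to a synthetic stand-in for one EEG channel, keeps 1/3 of the coefficients (the budget the abstract mentions), reconstructs, and reports SNR and a PRD-style metric. The sampling rate, test signal and the omission of quantization and entropy coding are assumptions.

```python
# Minimal transform / threshold / reconstruct sketch with the DCT.
# The EEG data, quantizer and coder of the thesis are not reproduced here.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(10 * fs) / fs
x = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 22 * t)
     + 0.1 * rng.standard_normal(t.size))  # stand-in for one EEG channel

c = dct(x, norm="ortho")                   # transform step
k = x.size // 3                            # keep 1/3 of the coefficients
small = np.argsort(np.abs(c))[:-k]         # indices of the smallest coefficients
c_comp = c.copy()
c_comp[small] = 0.0                        # discard them (no quantization here)
x_rec = idct(c_comp, norm="ortho")         # inverse transform / reconstruction

err = x - x_rec
snr_db = 10 * np.log10(np.sum(x**2) / np.sum(err**2))
prd = 100 * np.sqrt(np.sum(err**2) / np.sum(x**2))
print(f"SNR = {snr_db:.1f} dB, PRD = {prd:.1f} %")
```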
82

Reconstructing and Controlling Nonlinear Complex Systems

January 2015 (has links)
abstract: The power of science lies in its ability to infer and predict the existence of objects from which no direct information can be obtained experimentally or observationally. A well-known example is to ascertain the existence of black holes of various masses in different parts of the universe from indirect evidence, such as X-ray emissions. In the field of complex networks, the problem of detecting hidden nodes can be stated as follows. Consider a network whose topology is completely unknown but whose nodes consist of two types: one accessible and another inaccessible from the outside world. The accessible nodes can be observed or monitored, and it is assumed that time series are available from each node in this group. The inaccessible nodes are shielded from the outside and are essentially "hidden." The question is: based solely on the available time series from the accessible nodes, can the existence and locations of the hidden nodes be inferred? A completely data-driven, compressive-sensing-based method is developed to address this issue, using complex weighted networks of nonlinear oscillators, evolutionary-game networks and geospatial networks. Both microbes and multicellular organisms actively regulate their cell fate determination to cope with changing environments or to ensure proper development. Here, synthetic biology approaches are used to engineer bistable gene networks to demonstrate that stochastic and permanent cell fate determination can be achieved by initializing gene regulatory networks (GRNs) at the boundary between dynamic attractors. This is experimentally realized by linking a synthetic GRN to a natural output of galactose metabolism regulation in yeast. Combining mathematical modeling and flow cytometry, the engineered systems are shown to be bistable, and inherent gene expression stochasticity is shown not to induce spontaneous state transitioning at steady state. By interfacing rationally designed synthetic GRNs with background gene regulation mechanisms, this work investigates intricate properties of networks that illuminate possible regulatory mechanisms for cell differentiation and development that can be initiated from points of instability. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2015
83

Computer Vision from Spatial-Multiplexing Cameras at Low Measurement Rates

January 2017 (has links)
abstract: In settings such as UAVs and parking lots, it is typical to first collect an enormous number of pixels using conventional imagers, then employ expensive methods to compress the data by throwing away redundancy, and finally transmit the compressed data to a ground station. The past decade has seen the emergence of novel imagers called spatial-multiplexing cameras, which offer compression at the sensing level itself by providing arbitrary linear measurements of the scene instead of pixel-based sampling. In this dissertation, I discuss various approaches for effective information extraction from spatial-multiplexing measurements and present the trade-offs between performance reliability and the computational/storage load of the system. In the first part, I present a reconstruction-free approach to high-level inference in computer vision, wherein I consider the specific case of activity analysis and show that, using correlation filters, one can perform effective action recognition and localization directly from a class of spatial-multiplexing cameras called compressive cameras, even at very low measurement rates of 1%. In the second part, I outline a deep-learning-based, non-iterative, real-time algorithm to reconstruct images from compressively sensed (CS) measurements, which can outperform traditional iterative CS reconstruction algorithms in terms of reconstruction quality and time complexity, especially at low measurement rates. To overcome the limitations of compressive cameras, which are operated with random measurements and are not particularly tuned to any task, in the third part of the dissertation I propose a method to design spatial-multiplexing measurements that are tuned to facilitate easy extraction of features useful in computer vision tasks such as object tracking. The work presented in the dissertation provides sufficient evidence for high-level inference in computer vision at extremely low measurement rates, and hence allows us to consider the possibility of revamping current-day computer systems. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2017
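The measurement model behind spatial-multiplexing cameras referred to in the abstract can be sketched in a few lines. The snippet below is an illustrative Python example under assumed values, not the dissertation's pipeline: it replaces pixel sampling with a small number of random linear projections at the 1% measurement rate the abstract quotes for action recognition.

```python
# Compressive / spatial-multiplexing measurement model: y = Phi @ x.
# The scene, its size and the measurement rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 32 * 32                          # vectorized 32x32 scene
x = rng.random(n)                    # stand-in for the scene
rate = 0.01                          # 1% measurement rate, as in the abstract
m = max(1, int(rate * n))

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement operator
y = Phi @ x                                      # compressive measurements

print(f"{n} pixels summarized by {m} measurements ({100 * rate:.0f}% rate)")
```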
84

Análise do Uso de Compressive Sensing para Canal de Feedback Limitado Diante do Erro de Quantização e Ruído em Sistemas SM-MIMO / Quantization and Noise Impact Over Feedback Reduction of MIMO Systems Using Compressive Sensing

Raymundo Nogueira de Sá Netto 18 January 2013 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / In mobile communications, the exchange of channel-state information between the receiving and transmitting antennas is an important tool for improving system performance. In this work, spatially multiplexed MIMO (SM-MIMO) systems with limited Channel State Information (CSI) at the transmitter were analyzed, considering two techniques for linear signal detection and pre-equalization: Zero Forcing (ZF) and Minimum Mean Square Error (MMSE). To handle this limitation, two schemes were considered: Quantization Codebook (QC) and Compressive Sensing (CS). Compressive Sensing is used to generate a compressed CSI report, sent by the receiving antennas over a feedback channel, in order to reduce the feedback load on the system. The performance of the two techniques was compared through computational simulations of Bit Error Rate (BER) curves as a function of the Signal-to-Noise Ratio (SNR) for the two approaches, QC and CS. Furthermore, the impact of quantization error and noise in the feedback link was also evaluated for the CS scheme.
85

Greedy algorithms for multi-channel sparse recovery

Determe, Jean-François 16 January 2018 (has links)
During the last decade, research has shown compressive sensing (CS) to be a promising theoretical framework for reconstructing high-dimensional sparse signals. Leveraging a sparsity hypothesis, algorithms based on CS reconstruct signals on the basis of a limited set of (often random) measurements. Such algorithms require fewer measurements than conventional techniques to fully reconstruct a sparse signal, thereby saving time and hardware resources. This thesis addresses several challenges. The first is to theoretically understand how some parameters, such as noise variance, affect the performance of simultaneous orthogonal matching pursuit (SOMP), a greedy support recovery algorithm tailored to multiple measurement vector signal models. Chapters 4 and 5 detail novel improvements in understanding the performance of SOMP. Chapter 4 presents analyses of SOMP for noiseless measurements; using those analyses, Chapter 5 extensively studies the performance of SOMP in the noisy case. A second challenge consists in optimally weighting the impact of each measurement vector on the decisions of SOMP. If measurement vectors feature unequal signal-to-noise ratios, properly weighting their impact improves the performance of SOMP. Chapter 6 introduces a novel weighting strategy from which SOMP benefits: the chapter describes the strategy, derives theoretically optimal weights for it, and presents both theoretical and numerical evidence that it improves the performance of SOMP. Finally, Chapter 7 deals with the tendency of support recovery algorithms to pick support indices solely to fit a particular noise realization. To ensure that such algorithms pick all the correct support indices, researchers often make them pick more support indices than strictly required. Chapter 7 presents a support reduction technique, that is, a technique that removes from a support the supernumerary indices that only fit noise. The advantage of the technique, which relies on cross-validation, is that it is universal, in that it makes no assumption about the support recovery algorithm generating the support. Theoretical results demonstrate that the technique is reliable, and numerical evidence shows that it performs similarly to orthogonal matching pursuit with cross-validation (OMP-CV), a state-of-the-art algorithm for support reduction. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
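For readers unfamiliar with SOMP, a minimal sketch of the greedy loop is given below. It is an illustrative Python implementation of the standard multiple-measurement-vector algorithm, not the thesis code, and it omits the weighting strategy of Chapter 6 and the cross-validation support reduction of Chapter 7; the dictionary, sparsity level and noise level in the usage example are assumptions.

```python
# Simultaneous OMP under the model Y = A X + N: at each step the atom best
# correlated with the residual, aggregated over all measurement vectors,
# joins the support, followed by a least-squares update of the residual.
import numpy as np

def somp(A, Y, k):
    """Return the recovered support (size k) and the coefficient estimate."""
    support = []
    R = Y.copy()                                 # residual, shape (m, L)
    for _ in range(k):
        corr = np.linalg.norm(A.T @ R, axis=1)   # aggregate correlation per atom
        corr[support] = 0.0                      # never pick an atom twice
        support.append(int(np.argmax(corr)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s              # update residual
    return sorted(support), X_s

# Small usage example with a known 3-sparse row support.
rng = np.random.default_rng(2)
m, n, L, k = 40, 100, 5, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
true_support = [7, 42, 90]
X = np.zeros((n, L))
X[true_support, :] = rng.standard_normal((k, L))
Y = A @ X + 0.01 * rng.standard_normal((m, L))
print(somp(A, Y, k)[0], "vs true", true_support)
```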
86

Low power real-time data acquisition using compressive sensing

Powers, Linda S., Zhang, Yiming, Chen, Kemeng, Pan, Huiqing, Wu, Wo-Tak, Hall, Peter W., Fairbanks, Jerrie V., Nasibulin, Radik, Roveda, Janet M. 18 May 2017 (has links)
New possibilities exist for the development of novel hardware/software platforms having fast data acquisition capability with low power requirements. One application is a high-speed Adaptive Design for Information (ADI) system that combines the advantages of feature-based data compression, low-power nanometer CMOS technology, and stream computing [1]. We have developed a compressive sensing (CS) algorithm which linearly reduces the data at the analog front end, an approach which uses analog designs and computations instead of smaller-feature-size transistors for higher speed and lower power. A level-crossing sampling approach replaces Nyquist sampling. With an in-memory design, the new compressive-sensing-based instrumentation performs digitization only when there is enough variation in the input and when the random selection matrix chooses this input.
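A toy version of the level-crossing sampling idea mentioned above is sketched below in Python. It is not the ADI hardware design: it simply emits a sample whenever the input has varied by more than a fixed level spacing since the last emitted sample, which is the "digitize only when there is enough variation" behavior the abstract describes; the threshold and test signal are assumptions.

```python
# Level-crossing sampling: keep a sample only when the signal has moved by
# at least `delta` since the last kept sample, instead of at a Nyquist clock.
import numpy as np

def level_crossing_sample(x, delta):
    """Return the indices of samples kept by the level-crossing rule."""
    kept = [0]
    last = x[0]
    for i in range(1, len(x)):
        if abs(x[i] - last) >= delta:   # enough variation -> digitize
            kept.append(i)
            last = x[i]
    return np.array(kept)

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)   # assumed test signal
idx = level_crossing_sample(x, delta=0.05)
print(f"kept {idx.size} of {x.size} samples")
```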
87

Building Constraints, Geometric Invariants and Interpretability in Deep Learning: Applications in Computational Imaging and Vision

January 2019 (has links)
abstract: Over the last decade, deep neural networks, also known as deep learning, combined with large databases and specialized hardware for computation, have made major strides in important areas such as computer vision, computational imaging and natural language processing. However, such frameworks currently suffer from some drawbacks. For example, it is generally not clear how the architectures are to be designed for different applications or how the neural networks behave under different input perturbations, and it is not easy to make the internal representations and parameters more interpretable. In this dissertation, I propose building constraints into feature maps, parameters and the design of algorithms involving neural networks, for applications in low-level vision problems such as compressive imaging and multi-spectral image fusion, and in high-level inference problems including activity and face recognition. Depending on the application, such constraints can be used to design architectures which are invariant/robust to certain nuisance factors, more efficient and, in some cases, more interpretable. Through extensive experiments on real-world datasets, I demonstrate these advantages of the proposed methods over conventional frameworks. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
88

Compressive Radar Cross Section Computation

Li, Xiang 15 January 2020 (has links)
Compressive Sensing (CS) is a novel signal-processing paradigm that allows sampling of sparse or compressible signals at lower than the Nyquist rate. The past decade has seen substantial research on imaging applications using compressive sensing. In this thesis, CS is combined with the commercial electromagnetic (EM) simulation software newFASANT to improve its efficiency in solving EM scattering problems such as the Radar Cross Section (RCS) of complex targets at GHz frequencies. This thesis proposes a CS-RCS approach that allows efficient and accurate recovery of under-sampled RCS values measured over a random set of incident angles, using an accelerated iterative soft-thresholding reconstruction algorithm. The RCS results for a generic missile and a Canadian KingAir aircraft model, simulated using Physical Optics (PO) as the EM solver at various frequencies and angular resolutions, demonstrate the good efficiency and accuracy of the proposed method.
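Iterative soft-thresholding, the family of reconstruction algorithms named in the abstract, can be sketched briefly. The Python snippet below is a plain (non-accelerated) ISTA example under assumed problem sizes; it is not the thesis implementation and is not coupled to any EM solver.

```python
# Plain ISTA: minimize 0.5*||A x - y||^2 + lam*||x||_1 by gradient steps
# followed by soft thresholding (shrinkage).
import numpy as np

def ista(A, y, lam=0.05, n_iter=300):
    """Recover a sparse x from y = A x + noise via soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage
    return x

# Usage example with an assumed 8-sparse vector and random measurements.
rng = np.random.default_rng(3)
m, n = 60, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, y)
print(f"relative error = {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```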
89

Combating Impairments in Multi-carrier Systems: A Compressed Sensing Approach

Al-Shuhail, Shamael 05 1900 (has links)
Multi-carrier systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and keep up with capacity/rate demands. Compressed sensing (CS) is one such tool: it allows recovering any sparse signal from only a few measurements taken in a domain that is incoherent with the domain of sparsity. Almost all signals of interest have some degree of sparsity, and in this work we utilize the sparsity of impairments in orthogonal frequency division multiplexing (OFDM) and its variants (i.e., orthogonal frequency division multiple access (OFDMA) and single-carrier frequency-division multiple access (SC-FDMA)) to combat them using CS. We start with the problem of peak-to-average power ratio (PAPR) reduction in OFDM. OFDM signals suffer from high PAPR, and clipping is the simplest PAPR reduction scheme. However, clipping introduces in-band distortions that compromise performance and hence need to be mitigated at the receiver. Owing to the high-PAPR nature of the OFDM signal, only a few samples are clipped, and these clipping distortions can be recovered at the receiver by employing CS. We then extend the proposed clipping recovery scheme to an interleaved OFDMA system. Interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions. In this work, we prove that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a CS system that recovers the clipping distortions for each user. Finally, we address the problem of narrowband interference (NBI) in SC-FDMA. Unlike OFDM and OFDMA systems, SC-FDMA does not suffer from high PAPR but, as the data is encoded in the time domain, is seriously vulnerable to information loss owing to NBI. Utilizing the sparse nature of NBI in the frequency domain, we combat its effect on the SC-FDMA system by CS recovery.
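The clipping step and the sparsity it creates, which the CS recovery described above relies on, can be illustrated with a short sketch. The Python snippet below is a toy example under assumed values (subcarrier count and clipping ratio chosen arbitrarily), not the thesis system: it builds one OFDM symbol, clips its envelope, and counts how few time-domain samples the clipping distortion actually touches.

```python
# One QPSK-modulated OFDM symbol, envelope clipping, and the resulting
# time-domain clipping distortion, which is sparse (few nonzero samples).
import numpy as np

rng = np.random.default_rng(4)
N = 256                                          # assumed number of subcarriers
data = (rng.integers(0, 2, N) * 2 - 1) + 1j * (rng.integers(0, 2, N) * 2 - 1)
x = np.fft.ifft(data) * np.sqrt(N)               # time-domain OFDM symbol

papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

cr = 1.6                                         # assumed clipping ratio (amplitude)
A = cr * np.sqrt(np.mean(np.abs(x)**2))          # clipping level
clip_mask = np.abs(x) > A
x_clip = x.copy()
x_clip[clip_mask] = A * x[clip_mask] / np.abs(x[clip_mask])   # keep phase, cap magnitude

d = x_clip - x                                   # clipping distortion (sparse in time)
print(f"PAPR = {papr_db:.1f} dB, clipped samples = {np.count_nonzero(d)} of {N}")
```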
90

Passive Radar Imaging with Multiple Transmitters

Brandewie, Aaron January 2021 (has links)
No description available.
