601

Monitoring the Recovery from a Temporary Threshold Shift Using an Adaptive Procedure and Measurements of Spontaneous and Distortion Product Otoacoustic Emissions

Smurzynski, Jacek 01 January 2014 (has links)
No description available.
602

Analysis of algorithms for filter bank design optimization

ElGarewi, Ahmed 06 September 2019 (has links)
This thesis deals with design algorithms for filter banks based on optimization. The design specifications consist of perfect reconstruction and frequency response specifications for finite impulse response (FIR) analysis and synthesis filters. The perfect reconstruction conditions are formulated as a set of linear equations with respect to the coefficients of the analysis and synthesis filters. Five design algorithms are presented. The first three are based on unconstrained optimization of performance indices that include the perfect reconstruction error and the error in the frequency specifications. The last two algorithms are formulated as constrained optimization problems with the perfect reconstruction error as the performance index and the frequency specifications as constraints. The performance of the five algorithms is evaluated and compared using six examples, covering uniform, compatible non-uniform, and incompatible non-uniform filter bank designs. The evaluation criteria are based on distortion and aliasing errors, the magnitude response characteristics of the analysis and synthesis filters, the computation time required for the optimization, and the convergence of the performance index with respect to the number of iterations. The results show that the five algorithms can achieve almost perfect reconstruction and can meet the frequency response specifications at an acceptable level. In the case of incompatible non-uniform filter banks, the algorithms struggle to achieve almost perfect reconstruction. / Graduate
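To illustrate the kind of linear formulation mentioned in this abstract, here is a hedged two-channel sketch (not the thesis code): the distortion and aliasing conditions are stacked into one linear system in the synthesis filter coefficients and solved by least squares. The Haar analysis pair, the synthesis length, and the reconstruction delay below are assumptions chosen for illustration.

```python
import numpy as np

def conv_matrix(h, n_coeff):
    """Convolution matrix C with C @ f == np.convolve(h, f) for a length-n_coeff filter f."""
    C = np.zeros((len(h) + n_coeff - 1, n_coeff))
    for k in range(n_coeff):
        C[k:k + len(h), k] = h
    return C

# Example analysis pair (Haar); synthesis length Lf and overall delay d are assumptions.
h0 = np.array([1.0, 1.0]) / np.sqrt(2)
h1 = np.array([1.0, -1.0]) / np.sqrt(2)
Lf, d = 2, 1

alt = lambda h: h * (-1.0) ** np.arange(len(h))      # H(-z) in the time domain

# Stack the distortion condition H0 F0 + H1 F1 = 2 z^-d and the aliasing condition
# H0(-z) F0 + H1(-z) F1 = 0 into one linear system A x = b, with x = [f0; f1].
A = np.vstack([
    np.hstack([conv_matrix(h0, Lf),      conv_matrix(h1, Lf)]),
    np.hstack([conv_matrix(alt(h0), Lf), conv_matrix(alt(h1), Lf)]),
])
b = np.zeros(A.shape[0])
b[d] = 2.0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
f0, f1 = x[:Lf], x[Lf:]
print("synthesis filters:", f0, f1)
print("PR residual:", np.linalg.norm(A @ x - b))     # ~0 means (almost) perfect reconstruction
```

The least-squares residual plays the role of the perfect reconstruction error that the thesis' unconstrained algorithms minimize.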
603

Detecção de distorção arquitetural mamária em mamografia digital utilizando rede neural convolucional profunda / Detection of architectural distortion in digital mammography using deep convolutional neural network

Costa, Arthur Chaves 08 March 2019 (has links)
A proposta deste trabalho foi analisar diferentes metodologias de treinamento de uma rede neural convolucional profunda (CNN) para a detecção de distorção arquitetural mamária (DA) em imagens de mamografia digital. A DA é uma contração sutil do tecido mamário que pode representar o sinal mais precoce de um câncer de mama em formação. Os sistemas computacionais de auxílio ao diagnóstico (CAD) existentes ainda apresentam desempenho insatisfatório para a detecção da DA. Sistemas baseados em CNN têm atraído a atenção da comunidade científica, inclusive na área médica para a otimização dos sistemas CAD. No entanto, as CNNs necessitam de um grande volume de dados para serem treinadas adequadamente, o que é particularmente difícil na área médica. Dessa forma, foi realizada neste trabalho, uma comparação de diferentes abordagens de treinamento para uma arquitetura CNN avaliando-se o efeito de técnicas de geração de novas amostras (data augmentation) sobre o desempenho da rede. Para isso, foram utilizadas 240 mamografias digitais clínicas. Uma das redes (CNN-SW) foi treinada com recortes extraídos por varredura em janela sobre a área interna da mama (aprox. 21600 em média) e a outra rede (CNN-SW+) contou com o mesmo conjunto ampliado por data augmentation (aprox. 345000 em média). Para avaliar o método, foi utilizada validação cruzada por k-fold, gerando-se em rodízio, 10 modelos de cada rede. Os testes analisaram todas as ROIs extraídas da mama, sendo testados 14 mamogramas por fold, e obtendo-se uma diferença estatisticamente significativa entre os resultados (AUC de 0,81 para a CNN-SW e 0,83 para a CNN-SW+). Mapas de calor ilustraram as predições da rede, permitindo uma análise visual e quantitativa do comportamento de ambos os modelos. / The purpose of this work was to analyze different training methodologies of a deep convolutional neural network (CNN) to detect breast architectural distortion (AD) in digital mammography images. AD is a subtle contraction of the breast tissue that may represent the earliest sign of a breast cancer in formation. Current Computer-Aided Detection (CAD) systems still have an unsatisfactory performance on AD detection. CNN-based systems have attracted the attention of the scientific community, including in the medical field for CAD optimization. However, CNNs require a large amount of data to be properly trained, which is particularly difficult in the medical field. Thus, in this work, different training approaches for a CNN architecture were compared, evaluating the effect of data augmentation techniques on the data set. For this, 240 clinical digital mammograms were used. One of the networks (CNN-SW) was trained with regions of interest (ROI) extracted by a sliding window over the inner breast area (approx. 21,600 on average) and the other network (CNN-SW+) had the same set enlarged by data augmentation (approx. 345,000 on average). To evaluate the method, k-fold cross-validation was used, generating 10 instances of each model. The tests looked at all the ROIs extracted from the breast (14 mammograms per fold), and the results showed a statistically significant difference between the two networks (AUC of 0.81 for CNN-SW and 0.83 for CNN-SW+). Heat maps illustrated the predictions of the networks, allowing a visual and quantitative analysis of the behavior of both models.
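A small illustrative sketch (names, patch sizes, and stride are assumptions, not the thesis code) of the two training-set variants compared above: sliding-window ROI extraction over the breast area (CNN-SW) and the same set enlarged by simple flip/rotation data augmentation (CNN-SW+).

```python
import numpy as np

def sliding_window_rois(image, size=128, stride=64):
    """Yield square ROIs scanned over the image with the given stride."""
    rows, cols = image.shape
    for r in range(0, rows - size + 1, stride):
        for c in range(0, cols - size + 1, stride):
            yield image[r:r + size, c:c + size]

def augment(roi):
    """Return flipped/rotated copies of a ROI (a basic data-augmentation set)."""
    return [roi, np.fliplr(roi), np.flipud(roi),
            np.rot90(roi, 1), np.rot90(roi, 2), np.rot90(roi, 3)]

mammogram = np.random.rand(1024, 768)                  # stand-in for a digital mammogram
rois = list(sliding_window_rois(mammogram))            # CNN-SW style training samples
rois_aug = [a for roi in rois for a in augment(roi)]   # CNN-SW+ style enlarged set
print(len(rois), "ROIs ->", len(rois_aug), "after augmentation")
```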
604

Optimal source coding with signal transfer function constraints

Derpich, Milan January 2009 (has links)
Research Doctorate - Doctor of Philosophy (PhD) / This thesis presents results on optimal coding and decoding of discrete-time stochastic signals, in the sense of minimizing a distortion metric subject to a constraint on the bit-rate and on the signal transfer function from source to reconstruction. The first (preliminary) contribution of this thesis is the introduction of a new distortion metric that extends the mean squared error (MSE) criterion. We give this extension the name Weighted-Correlation MSE (WCMSE), and use it as the distortion metric throughout the thesis. The WCMSE is a weighted sum of two components of the MSE: the variance of the error component uncorrelated to the source, on the one hand, and the remainder of the MSE, on the other. The WCMSE can take account of signal transfer function constraints by assigning a larger weight to deviations from a target signal transfer function than to source-uncorrelated distortion. Within this framework, the second contribution is the solution of a family of feedback quantizer design problems for wide-sense stationary sources using an additive noise model for quantization errors. These problems consist of finding the frequency response of the filters deployed around a scalar quantizer that minimize the WCMSE for a fixed quantizer signal-to-(granular)-noise ratio (SNR). This general structure, which incorporates pre-, post-, and feedback filters, includes as special cases well-known source coding schemes such as pulse code modulation (PCM), differential pulse code modulation (DPCM), Sigma-Delta converters, and noise-shaping coders. The optimal frequency response of each of the filters in this architecture is found for each possible subset of the remaining filters being given and fixed. These results are then applied to oversampled feedback quantization. In particular, it is shown that, within the linear model used, and for a fixed quantizer SNR, the MSE decays exponentially with the oversampling ratio, provided optimal filters are used at each oversampling ratio. If a subtractively dithered quantizer is utilized, then the noise model is exact, and the SNR constraint can be directly related to the bit-rate if entropy coding is used, regardless of the number of quantization levels. On the other hand, in the case of fixed-rate quantization, the SNR is related to the number of quantization levels, and hence to the bit-rate, when overload errors are negligible. It is shown that, for sources with unbounded support, the latter condition is violated for sufficiently large oversampling ratios. By deriving an upper bound on the contribution of overload errors to the total WCMSE, a lower bound for the decay rate of the WCMSE as a function of the oversampling ratio is found for fixed-rate quantization of sources with finite or infinite support. The third main contribution of the thesis is the introduction of the rate-distortion function (RDF) when the WCMSE is the distortion metric, denoted WCMSE-RDF. We provide a complete characterization for Gaussian sources. The resulting WCMSE-RDF yields, as special cases, Shannon's RDF, as well as the recently introduced RDF for source-uncorrelated distortions (RDF-SUD). For cases where only source-uncorrelated distortion is allowed, the RDF-SUD is extended to include the possibility of linear time-invariant feedback between reconstructed signal and coder input. It is also shown that feedback quantization schemes can achieve a bit-rate only 0.254 bits/sample above this RDF by using the same filters that minimize the reconstruction MSE for a quantizer-SNR constraint. The fourth main contribution of this thesis is to provide a set of conditions under which knowledge of a realization of the RDF can be used directly to solve encoder-decoder design optimization problems. This result has direct implications in the design of subband coders with feedback, as well as in the design of encoder-decoder pairs for applications such as networked control. As the fifth main contribution of this thesis, the RDF-SUD is utilized to show that, for stationary Gaussian sources with memory and the MSE distortion criterion, an upper bound on the information-theoretic causal RDF can be obtained by means of an iterative numerical procedure, at all rates. This bound is tighter than 0.5 bits/sample. Moreover, if there exists a realization of the causal RDF in which the reconstruction error is jointly stationary with the source, then the bound obtained coincides with the causal RDF. The iterative procedure proposed here to obtain the information-theoretic causal RDF also yields a characterization of the filters in a scalar feedback quantizer having an operational rate that exceeds the bound by less than 0.254 bits/sample. This constitutes an upper bound on the optimal performance theoretically attainable by any causal source coder for stationary Gaussian sources under the MSE distortion criterion.
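As a hedged numerical sketch of the Gaussian special case mentioned above (not from the thesis): Shannon's MSE rate-distortion function for a stationary Gaussian source can be evaluated by reverse water-filling on the source power spectral density. The AR(1) spectrum, grid size, and water levels below are illustrative assumptions.

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
a, sigma2 = 0.9, 1.0
S = sigma2 / np.abs(1 - a * np.exp(-1j * w)) ** 2      # AR(1) power spectral density

def rate_and_distortion(theta):
    """Reverse water-filling at water level theta: returns (rate in bits/sample, MSE)."""
    D = np.mean(np.minimum(theta, S))                   # (1/2pi) * integral of min(theta, S(w))
    R = np.mean(np.maximum(0.0, 0.5 * np.log2(S / theta)))
    return R, D

for theta in [0.05, 0.2, 1.0]:                          # the water level parameterizes the R(D) curve
    R, D = rate_and_distortion(theta)
    print(f"theta={theta:.2f}: R(D)={R:.3f} bits/sample at D={D:.3f}")
```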
605

Transform Coefficient Thresholding and Lagrangian Optimization for H.264 Video Coding / Transformkoefficient-tröskling och Lagrangeoptimering för H.264 Videokodning

Carlsson, Pontus January 2004 (has links)
H.264, also known as MPEG-4 Part 10: Advanced Video Coding, is the latest MPEG standard for video coding. It provides approximately 50% bit rate savings for equivalent perceptual quality compared to any previous standard. As in previous MPEG standards, only the bitstream syntax and the decoder are specified. Hence, coding performance is determined not only by the standard itself but also by the implementation of the encoder. In this report we propose two methods for improving the coding performance while remaining fully compliant with the standard.

After transformation and quantization, the transform coefficients are usually entropy coded and embedded in the bitstream. However, it may be beneficial to discard some of them if the number of bits saved is sufficiently large. This is usually referred to as coefficient thresholding and is investigated in the scope of H.264 in this report.

Lagrangian optimization for video compression has proven to yield substantial improvements in perceived quality, and the H.264 Reference Software has been designed around this concept. When performing Lagrangian optimization, lambda is a crucial parameter that determines the tradeoff between rate and distortion. We propose a new method to select lambda and the quantization parameter for non-reference frames in H.264.

The two methods are shown to achieve significant improvements. When combined, they reduce the bitrate by around 12% while preserving the video quality in terms of average PSNR.

To aid development of H.264, a software tool has been created to visualize the coding process and present statistics. This tool is capable of displaying information such as bit distribution, motion vectors, predicted pictures and motion-compensated block sizes.
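A simplified sketch of the Lagrangian thresholding decision described above (not the H.264 reference encoder; the squared-error distortion model and the bit-cost function are simplifying assumptions): a quantized coefficient is dropped whenever doing so lowers the Lagrangian cost J = D + lambda * R.

```python
import numpy as np

def threshold_coefficients(coeffs, step, lam, bit_cost):
    """Zero out quantized coefficient levels whose rate cost outweighs their distortion benefit."""
    kept = coeffs.copy()
    for i, level in enumerate(coeffs):
        if level == 0:
            continue
        d_keep = 0.0                        # assume the kept level reconstructs exactly (simplification)
        d_drop = (level * step) ** 2        # squared error introduced by zeroing the coefficient
        j_keep = d_keep + lam * bit_cost(level)
        j_drop = d_drop                     # a zero coefficient is assumed to cost no bits
        if j_drop < j_keep:
            kept[i] = 0
    return kept

# Toy usage with a crude bit-cost model (assumption): larger levels cost more bits.
levels = np.array([7, -1, 1, 0, 2, 0, -1, 0])
cost = lambda l: 2 * int(np.log2(abs(l)) + 1) + 1
print(threshold_coefficients(levels, step=4, lam=40.0, bit_cost=cost))
```

With a large lambda the small, expensive-to-code levels are zeroed while the dominant coefficient survives, which is the rate-distortion trade-off the abstract refers to.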
606

Dynamic nonlinear pre-distortion of signal generators for improved dynamic range

Jawdat, Suzan January 2009 (has links)
In this thesis, a parsimoniously parameterized digital predistorter is derived for linearization of the IQ modulation mismatch and amplifier imperfections in the signal generator [1]. It is shown that the resulting predistorter is linear in its parameters, and thus they may be estimated by the method of least squares. Spectrally pure signals are an indispensable requirement when the signal generator is to be used as part of a test bed. Due to the non-linear characteristics of the IQ modulator and power amplifier, distortion will be present at the output of the signal generator. The device under test was a signal generator exhibiting IQ modulation mismatch and power amplifier deficiencies.

In [2], the dynamic range of low-cost signal generators is improved by employing model-based digital pre-distortion, and the designed predistorter appears to give some improvement in the dynamic range of the signal generator.

The goal of this project is to implement and verify the theory of [1] in Matlab in order to improve the dynamic range of the signal generator. The digital pre-distortion is designed and implemented in software so that the dynamic range of the signal generator output after predistortion is superior to that of the output prior to it. In this project, we observed numerical problems in the proposed theory and found other methods to solve the problem.

The polynomial model is commonly used in power amplifier modeling and predistorter design. However, the conventional polynomial model exhibits numerical instabilities when higher-order terms are included, so both the conventional and orthogonal polynomial models were used. The results show that the orthogonal polynomial model generally yields better power amplifier modeling accuracy as well as predistortion linearization performance than the conventional polynomial model.
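A minimal sketch of the "linear in its parameters, estimated by least squares" idea above, assuming a memoryless odd-order conventional polynomial and a toy amplifier model (both assumptions, not the thesis design); the indirect-learning structure fits a post-inverse and reuses it as the predistorter.

```python
import numpy as np

def poly_basis(x, order=5):
    """Odd-order polynomial basis x*|x|^(k-1), k = 1, 3, 5, ... (linear in the coefficients)."""
    return np.column_stack([x * np.abs(x) ** (k - 1) for k in range(1, order + 1, 2)])

# Simulated mildly nonlinear "amplifier" acting on a complex baseband signal (assumption).
rng = np.random.default_rng(0)
x = (rng.standard_normal(5000) + 1j * rng.standard_normal(5000)) / np.sqrt(2)
pa = lambda u: u - 0.05 * u * np.abs(u) ** 2            # toy AM/AM compression
y = pa(x)

# Indirect learning: least-squares fit of a post-inverse from the output back to the input,
# then use the fitted polynomial as the predistorter.
coeffs, *_ = np.linalg.lstsq(poly_basis(y), x, rcond=None)
predistort = lambda u: poly_basis(u) @ coeffs

nmse = lambda v: 10 * np.log10(np.mean(np.abs(v - x) ** 2) / np.mean(np.abs(x) ** 2))
print("NMSE before predistortion: %.1f dB" % nmse(y))
print("NMSE after  predistortion: %.1f dB" % nmse(pa(predistort(x))))
```

The same least-squares machinery applies to an orthogonal polynomial basis, which is what improves the numerical conditioning discussed above.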
607

Multidimensional Measurements : on RF Power Amplifiers

Al-Tahir, Hibah January 2008 (has links)
In this thesis, a measurement system was set up to perform comprehensive measurements on RF power amplifiers. Data obtained from the measurements is then processed mathematically to obtain three-dimensional graphs of the basic parameters affected or generated by nonlinearities of the amplifier, i.e. gain, efficiency and distortion. Using a class AB amplifier as the DUT, two sets of signals, both swept in power level and frequency, were generated to validate the method: a two-tone signal and a WCDMA signal. The three-dimensional plot gives a thorough representation of the behavior of the amplifier in any arbitrary range of spectrum and input level. Sweet spots are consequently easy to detect and analyze. The measurement setup can also yield other three-dimensional plots of variations of gain, efficiency or distortion versus frequency and input level. Moreover, the measurement tool can be used to produce traditional two-dimensional plots such as input versus gain, frequency versus efficiency, etc., making the setup a practical tool for RF amplifier designers.

The test signals were generated by computer and then sent to a vector signal generator that produces the actual signals fed to the amplifier. The output of the amplifier was fed to a vector signal analyzer and then collected by computer for processing. MATLAB® was used throughout the entire process.

The distortion considered in the case of the two-tone signal is third-order intermodulation distortion (IM3), whereas the Adjacent Channel Power Ratio (ACPR) was considered in the case of WCDMA.
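An illustrative sketch of the two-tone IM3 extraction behind the plots described above (the amplifier model, tone frequencies and swept levels are assumptions, not the measured DUT): gain and IM3 are read off FFT bins of the output for each input level in the sweep.

```python
import numpy as np

fs, n = 4096.0, 4096                     # 1 Hz bin spacing so the tones fall on exact FFT bins
t = np.arange(n) / fs
f1, f2 = 100.0, 110.0                    # two-tone frequencies; IM3 products at 90 Hz and 120 Hz
amp = lambda x: x - 0.1 * x ** 3         # toy compressive amplifier characteristic (assumption)

def tone_power(spec, f):
    """Power of the FFT bin at frequency f (valid because f is a multiple of fs/n)."""
    return np.abs(spec[int(round(f * n / fs))]) ** 2

for a in [0.1, 0.5, 1.0]:                # swept input level; a frequency sweep would add the second axis
    x = a * (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t))
    y = amp(x)
    X, Y = np.fft.rfft(x) / n, np.fft.rfft(y) / n
    gain = 10 * np.log10(tone_power(Y, f1) / tone_power(X, f1))
    im3 = 10 * np.log10(tone_power(Y, 2 * f1 - f2) / tone_power(Y, f1))
    print(f"A={a:.1f}: gain={gain:+.2f} dB, IM3={im3:.1f} dBc")
```

Sweeping both input level and frequency and collecting these values yields the grid of points from which the three-dimensional surfaces can be plotted.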
608

Quantization of Random Processes and Related Statistical Problems

Shykula, Mykola January 2006 (has links)
In this thesis we study scalar uniform and non-uniform quantization of random processes (or signals) in an average-case setting. Quantization (or discretization) of a signal is a standard task in all analog/digital devices (e.g., digital recorders, remote sensors, etc.). We evaluate the memory capacity (or quantization rate) needed for quantized process realizations by exploiting the correlation structure of the model random process. The thesis consists of an introductory survey of the subject and related theory, followed by four included papers (A-D).

In Paper A we develop a quantization coding method in which crossings of the quantization levels by a process realization are used for its coding. The asymptotic behavior of the mean quantization rate is investigated in terms of the correlation structure of the original process. For uniform and non-uniform quantization, we assume that the quantization cell width tends to zero and the number of quantization levels tends to infinity, respectively.

In Papers B and C we focus on an additive noise model for a quantized random process. Stochastic structures of asymptotic quantization errors are derived for some bounded and unbounded non-uniform quantizers when the number of quantization levels tends to infinity. The obtained results can be applied, for instance, to optimization problems in the design of quantization levels.

Random signals are quantized at sampling points and then further compressed. In Paper D the concern is statistical inference for the run-length encoding (RLE) method, one of the compression techniques, applied to quantized stationary Gaussian sequences. This compression method is widely used, for instance, in digital signal and image processing. First, we deal with mean RLE quantization rates for various probabilistic models. For a time series with unknown stochastic structure, we investigate asymptotic properties (e.g., asymptotic normality) of two estimates of the mean RLE quantization rate based on an observed sample as the sample size tends to infinity.

These results can be used in communication theory, signal processing, coding, and compression applications. Some examples and numerical experiments demonstrating applications of the obtained results to synthetic and real data are presented.
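A small sketch of the setting in Paper D (the AR(1) model, cell width and rate definition below are illustrative assumptions, not the papers' exact setup): a stationary Gaussian sequence is uniformly quantized, run-length encoded, and the empirical mean RLE rate is estimated from the run lengths.

```python
import numpy as np

rng = np.random.default_rng(1)
a, n = 0.95, 100_000
x = np.zeros(n)
for i in range(1, n):                     # stationary Gaussian AR(1) sequence with unit variance
    x[i] = a * x[i - 1] + np.sqrt(1 - a ** 2) * rng.standard_normal()

cell = 0.25
q = np.floor(x / cell).astype(int)        # uniform scalar quantizer indices

# Run-length encode: (value, run length) pairs over the quantized sequence.
change = np.flatnonzero(np.diff(q)) + 1
starts = np.concatenate(([0], change))
runs = np.diff(np.concatenate((starts, [n])))
values = q[starts]

mean_run = runs.mean()
print(f"{len(runs)} runs, mean run length {mean_run:.2f}")
print(f"empirical mean RLE rate ~ {1.0 / mean_run:.3f} (value, length) pairs per sample")
```

The stronger the correlation (a closer to 1) or the coarser the cell width, the longer the runs and the lower the empirical rate, which is the dependence on the correlation structure the abstract emphasizes.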
609

A Broad View on the Interpretation of Electromagnetic Data (VLF, RMT, MT, CSTMT) / En bred syn på Tolkning av Elektromagnetiska Data (VLF, RMT, MT, CSTMT)

Oskooi, Behrooz January 2004 (has links)
The resolution power of single-frequency Very Low Frequency (VLF) data and multi-frequency Radiomagnetotelluric (RMT) data in delineating conductive structures typical of the sedimentary cover and crystalline basement in Scandinavia is studied, with a view to future development of the technique by extending the frequency range into the LW radio band. Airborne and ground VLF data are interpreted and correlated with RMT measurements made on the ground to better understand the resolution power of VLF data. To aid in this understanding, single- and multi-frequency VLF and RMT responses for some typical resistivity structures are analyzed. An analytic model is presented for obtaining unique transfer functions from measurements of the electromagnetic components on board an airplane or on the ground. Examples of 2D inversion of ground and airborne VLF profiles in Sweden are shown to demonstrate the quantitative interpretation of VLF data in terms of both lateral and depth changes of the resistivity in the uppermost crust.

Geothermal resources are ideal targets for electromagnetic (EM) methods since they produce strong variations in underground electrical resistivity. Modelling of Magnetotelluric (MT) data in SW Iceland indicates an alteration zone beneath the surface, where there are no obvious geothermal manifestations, between the Hengill and Brennisteinsfjoll geothermal systems. This suggests that hydrothermal fluid circulation exists at depth. It also demonstrates that the MT method, with its ability to map deep conductive features, can play a valuable role in the reconnaissance of deep geothermal systems in active rift regimes such as Iceland.

A damped nonlinear least-squares inversion approach is employed to invert Controlled Source Tensor MT (CSTMT) data for azimuthal anisotropy in a 1D layered earth. Impedance and tipper data are inverted jointly. The effects of near-surface inhomogeneities are parameterized in addition to the parameters of each layer. Application of the inversion algorithm to both synthetic and field data shows that the CSTMT method can be used to detect azimuthal anisotropy under realistic conditions with near-surface lateral heterogeneities.
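For readers unfamiliar with the damped nonlinear least-squares inversion mentioned above, here is a generic Levenberg-Marquardt style sketch; the two-parameter exponential forward model is a stand-in assumption, not an EM/CSTMT forward solver.

```python
import numpy as np

def damped_least_squares(forward, jacobian, d_obs, m0, lam=1.0, n_iter=20):
    """Iteratively minimize ||d_obs - forward(m)||^2 with an adaptive damping term lam."""
    m = m0.copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)
        J = jacobian(m)
        # Damped normal equations: (J^T J + lam I) dm = J^T r
        dm = np.linalg.solve(J.T @ J + lam * np.eye(len(m)), J.T @ r)
        if np.sum((d_obs - forward(m + dm)) ** 2) < np.sum(r ** 2):
            m, lam = m + dm, lam * 0.5      # accept the step, relax the damping
        else:
            lam *= 2.0                      # reject the step, increase the damping
    return m

# Toy two-parameter model standing in for a layered-earth response (assumption).
x = np.linspace(0, 1, 50)
true_m = np.array([2.0, 3.0])
forward = lambda m: m[0] * np.exp(-m[1] * x)
jacobian = lambda m: np.column_stack([np.exp(-m[1] * x), -m[0] * x * np.exp(-m[1] * x)])
d_obs = forward(true_m) + 0.01 * np.random.default_rng(0).standard_normal(x.size)
print(damped_least_squares(forward, jacobian, d_obs, m0=np.array([1.0, 1.0])))
```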
610

Selected problems in turbulence theory and modeling

Jeong, Eun-Hwan 30 September 2004 (has links)
Three different topics of turbulence research, covering the modeling, theory and model computation categories, are selected and studied in depth. In the first topic, "velocity gradient dynamics in turbulence" (modeling), a Lagrangian linear diffusion model that accounts for the viscous effect is proposed to make the existing restricted-Euler velocity gradient dynamics model quantitatively useful. Results show good agreement with DNS data. In the second topic, "pressure-strain correlation in homogeneous anisotropic turbulence subject to rapid strain-dominated distortion" (theory), extensive rapid distortion calculations are performed for various anisotropic initial turbulence conditions in strain-dominated mean flows. The behavior of the rapid pressure-strain correlation is investigated and constraining criteria for rapid pressure-strain correlation models are developed. In the last topic, "unsteady computation of turbulent flow past a square cylinder using the partially-averaged Navier-Stokes method" (model computation), the basic philosophy of the PANS method is reviewed and the practical problem of flow past a square cylinder is computed for various levels of physical resolution. It is shown that the PANS method can capture many important unsteady flow features at affordable computational cost.
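For reference, a minimal sketch of the baseline restricted-Euler velocity gradient equation mentioned in the first topic, dA/dt = -(A^2 - tr(A^2)/3 I), integrated with an explicit Euler step; the initial condition and step size are arbitrary assumptions, and the thesis' Lagrangian linear diffusion closure (the viscous term) is not included.

```python
import numpy as np

def restricted_euler_rhs(A):
    """Right-hand side of the restricted-Euler velocity gradient equation."""
    A2 = A @ A
    return -(A2 - np.trace(A2) / 3.0 * np.eye(3))

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) * 0.1
A -= np.trace(A) / 3.0 * np.eye(3)        # enforce incompressibility, tr(A) = 0

dt = 0.01
for _ in range(200):                      # simple explicit Euler time stepping
    A = A + dt * restricted_euler_rhs(A)

print("trace stays ~0:", np.trace(A))
```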
