41

Topological Conjugacy Relation on the Space of Toeplitz Subshifts

Yu, Ping 08 1900
We proved that the topological conjugacy relation on $T_1$, a subclass of Toeplitz subshifts, is hyperfinite, extending Kaya's result that the topological conjugacy relation on Toeplitz subshifts with growing blocks is hyperfinite. A concept closely related to topological conjugacy is flip conjugacy, which has been studied extensively in connection with topological full groups. In particular, we provided an equivalent characterization of the Toeplitz subshifts with the single hole structure that are flip invariant.
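For context, the two relations involved can be stated in a standard form (this formulation is supplied for reference, not quoted from the thesis): subshifts $(X, \sigma)$ and $(Y, \sigma)$ are topologically conjugate if there is a homeomorphism $\phi : X \to Y$ satisfying

    $\phi \circ \sigma = \sigma \circ \phi$,

while flip conjugacy only requires the weaker intertwining

    $\phi \circ \sigma = \sigma^{-1} \circ \phi$,

and a subshift is flip invariant when it is flip conjugate to itself.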
42

Interpolated Perturbation-Based Decomposition as a Method for EEG Source Localization

Lipof, Gabriel Zelik 01 June 2019
In this thesis, the perturbation-based decomposition technique developed by Szlavik [1] was applied in an attempt to solve the inverse problem of EEG source localization. A set of dipole locations was forward modeled using a 4-layer sphere model of the head at uniformly distributed lead locations to form the vector basis required by the method. Both a two-dimensional and a pseudo-three-dimensional version of the model were assessed: the two-dimensional model yielded decompositions with minimal error, while the pseudo-three-dimensional version had unacceptable levels of error. The utility of interpolation as a way of reducing the number of measured data points needed for the system to become overdetermined was also assessed. The approach was effective as long as the number of component functions did not exceed the number of data points and stayed relatively small (fewer than 77 component functions). This application of the method to a spatially varying system indicates its potential for other systems; with some tweaking of the least-squares algorithm used, it could also be applied to multivariate systems.
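Stripped of the EEG specifics, the decomposition step described here reduces to an overdetermined least-squares fit of the measured lead potentials onto forward-modeled basis functions. A minimal sketch with synthetic stand-ins (the random basis matrix and the lead/basis counts are illustrative assumptions, not the thesis's 4-layer sphere model; 77 echoes the component-function threshold mentioned above):

    import numpy as np

    rng = np.random.default_rng(0)

    n_leads, n_basis = 200, 77                 # more leads than basis functions: overdetermined
    A = rng.normal(size=(n_leads, n_basis))    # stand-in for forward-modeled dipole basis
    x_true = np.zeros(n_basis)
    x_true[[3, 10, 40]] = [1.0, -0.5, 0.8]     # a few active components
    y = A @ x_true + 0.01 * rng.normal(size=n_leads)   # "measured" lead potentials

    # Least-squares decomposition of the measurement onto the basis
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("max coefficient error:", np.abs(x_hat - x_true).max())

As the abstract notes, the fit degrades once the number of component functions approaches or exceeds the number of data points, since the system is then no longer overdetermined.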
43

Analytical Study and Numerical Solution of the Inverse Source Problem Arising in Thermoacoustic Tomography

Holman, Benjamin Robert January 2016
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photo-acoustic tomography require reconstructing the initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free-space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity, and in this case known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition, and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a non-constant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists of solving the initial/boundary value problem for the wave equation back in time on the interval [0, T], with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large one obtains a good approximation to the initial pressure; in the limit of large T such an approximation converges (under certain conditions) to the exact solution.
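The gradual time reversal of chapter 3 can be illustrated in one dimension. The sketch below (rigid-walled unit cavity, constant sound speed, grid sizes and initial pressure chosen purely for demonstration) records the pressure at the walls during a forward solve, then solves back in time from zero data at t = T with the recorded wall pressure, multiplied by a smooth cutoff, imposed as Dirichlet data:

    import numpy as np

    nx, c, L = 201, 1.0, 1.0
    dx = L / (nx - 1)
    dt = 0.9 * dx / c                      # CFL-stable time step
    r2 = (c * dt / dx) ** 2
    T = 8.0                                # larger T improves the approximation
    nt = int(T / dt)

    x = np.linspace(0.0, L, nx)
    f = np.exp(-200 * (x - 0.4) ** 2)      # initial pressure to be reconstructed

    def step(u_now, u_prev):
        """One leapfrog step with rigid (Neumann) walls via ghost points."""
        u_next = np.empty_like(u_now)
        lap = u_now[:-2] - 2 * u_now[1:-1] + u_now[2:]
        u_next[1:-1] = 2 * u_now[1:-1] - u_prev[1:-1] + r2 * lap
        u_next[0] = 2 * u_now[0] - u_prev[0] + 2 * r2 * (u_now[1] - u_now[0])
        u_next[-1] = 2 * u_now[-1] - u_prev[-1] + 2 * r2 * (u_now[-2] - u_now[-1])
        return u_next

    # Forward solve with zero initial velocity: record wall pressure ("measurements").
    u_prev, u_now = f.copy(), 0.5 * (step(f, f) + f)   # second-order first step
    g = np.zeros((nt + 1, 2))
    g[0] = f[0], f[-1]
    g[1] = u_now[0], u_now[-1]
    for n in range(2, nt + 1):
        u_prev, u_now = u_now, step(u_now, u_prev)
        g[n] = u_now[0], u_now[-1]

    # Gradual time reversal: march back from zero Cauchy data at t = T, imposing
    # the cutoff-weighted measurements as Dirichlet boundary values.
    t = np.arange(nt + 1) * dt
    chi = 0.5 * (1 + np.cos(np.pi * np.clip(t / T, 0, 1)))   # chi(0)=1, chi(T)=0
    w_prev, w_now = np.zeros(nx), np.zeros(nx)
    for m in range(nt - 1, -1, -1):
        w_prev, w_now = w_now, step(w_now, w_prev)
        w_now[0], w_now[-1] = chi[m] * g[m, 0], chi[m] * g[m, 1]

    print("relative L2 error:", np.linalg.norm(w_now - f) / np.linalg.norm(f))

The printed error should shrink as T grows, in line with the convergence behavior described in the abstract.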
44

Optimal shape design based on body-fitted grid generation.

Mohebbi, Farzad January 2014
Shape optimization is an important step in many design processes. With the growing use of Computer Aided Engineering in the design chain, it has become very important to develop robust and efficient shape optimization algorithms. The field of Computer Aided Optimal Shape Design has grown substantially over the recent past. In the early days of its development, a method based on small shape perturbations to probe the parameter space and identify an optimal shape was routinely used; this method is nothing but educated trial and error. A key development in the pursuit of good shape optimization algorithms has been the advent of the adjoint method, which computes shape sensitivities more formally and efficiently. While undoubtedly very attractive, this method relies on sophisticated and advanced mathematical tools which are an impediment to its wider use in the engineering community. In that spirit, the purpose of this thesis is to propose a new shape optimization algorithm based on more intuitive engineering principles and numerical procedures. The new shape optimization procedure proposed in this thesis is based on the generation of a body-fitted mesh, a process which maps the physical domain into a regular computational domain. Based on simple arguments relating to the use of the chain rule in the mapped domain, it is shown that an explicit expression for the shape sensitivity can be derived. This enables the computation of the shape sensitivity in a single solve, a performance analogous to the adjoint method, the current state of the art. The discretization is based on the Finite Difference method, chosen for its simplicity and ease of implementation. The algorithm is applied to the Laplace equation in the context of heat transfer problems and potential flows. Its applicability is demonstrated on a number of benchmark problems which clearly confirm the validity of the sensitivity analysis, the most important aspect of any shape optimization problem. The thesis also explores the relative merits of different minimization algorithms and proposes a technique to “fix” meshes when inverted elements arise during the optimization process. While the problems treated are still elementary compared to complex multiphysics engineering problems, the new methodology presented in this thesis could in principle apply to arbitrary Partial Differential Equations.
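For contrast with that one-solve sensitivity, the "educated trial and error" baseline mentioned above probes each shape parameter with a small perturbation, costing extra solves per parameter. A generic sketch (the toy cost function stands in for a full PDE solve; the parameters and step size are illustrative assumptions, not the thesis's formulation):

    import numpy as np

    def cost(p):
        """Toy stand-in for an objective that would require a PDE solve."""
        return np.sum((p - np.array([1.0, -2.0, 0.5])) ** 2)

    def fd_gradient(p, h=1e-6):
        """Finite-difference shape sensitivities: two extra solves per parameter."""
        g = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p); dp[i] = h
            g[i] = (cost(p + dp) - cost(p - dp)) / (2 * h)
        return g

    p = np.zeros(3)                  # initial shape parameters
    for _ in range(100):             # steepest-descent loop
        p -= 0.1 * fd_gradient(p)    # fixed step; a line search would be safer
    print("optimized parameters:", p)

The cost of this baseline grows linearly with the number of shape parameters, which is precisely what the chain-rule sensitivity of the thesis (like the adjoint method) avoids.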
45

Investigation into a prominent 38 kHz scattering layer in the North Sea

Mair, Angus MacDonald January 2008
The aim of this study was to investigate the composition of an acoustic scattering layer in the North Sea that is particularly strong at 38 kHz. A full definition of the biological composition of the layer, along with its acoustic properties, would allow it to be confidently removed from data collected during acoustic fish surveys, where it presents a potential source of bias. The layer, traditionally and informally referred to as consisting of zooplankton, appears similar to others observed internationally. The methodology utilised in this study consisted of biological and acoustic sampling, followed by application of forward and inverse acoustic modelling techniques. Acoustic data were collected at 38, 120 and 200 kHz in July 2003, with the addition of 18 kHz in July 2004. Net samples were collected in layers of relatively strong 38 kHz acoustic scattering using a U-tow vehicle (2003) and a MIKT net (2004). Acoustic data were scrutinised to determine actual backscattering, expressed as mean volume backscattering strength (MVBS, dB). This observed MVBS (MVBSobs) was compared with the backscattering predicted by applying the forward problem solution (MVBSpred) to sampled animal densities, in order to determine whether those animals were responsible for the enhanced 38 kHz scattering. In most instances MVBSobs exceeded MVBSpred, with the discrepancy most pronounced at 38 kHz. MVBSpred approached MVBSobs more closely with MIKT than with U-tow samples, but the 38 kHz mismatch was present in both. Inversion of candidate acoustic models identified gas-bearing scatterers, which scatter strongly at 38 kHz, as the most likely cause. Potential sources of inconsistency between MVBSpred and MVBSobs were identified. The forward and inverse solutions presented indicate that although the layer often contains large numbers of common zooplankton types, such as copepods and euphausiids, these are not the dominant acoustic scatterers at 38 kHz. Rather, there remains an unidentified, probably gas-bearing scatterer that contributes significantly to observed scattering levels at this frequency. This study identifies and considerably narrows the list of candidates most likely to be responsible for the enhanced 38 kHz scattering in the North Sea layer, and recommendations are made for potential future studies.
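The forward problem solution referred to above is, at its core, the standard fisheries-acoustics relation: the volume backscattering coefficient is the density-weighted sum of individual backscattering cross-sections, $s_v = \sum_i n_i \sigma_{bs,i}$, with MVBS = 10 log10(s_v). A minimal sketch with invented densities and target strengths (illustrative values only, not the survey's data):

    import numpy as np

    # Hypothetical scatterer classes: number density (animals / m^3) and target
    # strength TS (dB re 1 m^2) at 38 kHz. All values are illustrative only.
    density = np.array([1200.0, 50.0, 0.1])    # copepods, euphausiids, gas-bearing
    ts_38 = np.array([-120.0, -85.0, -55.0])

    sigma_bs = 10.0 ** (ts_38 / 10.0)          # backscattering cross-section (m^2)
    sv = np.sum(density * sigma_bs)            # volume backscattering coefficient (1/m)
    print(f"MVBS_pred = {10.0 * np.log10(sv):.1f} dB")

Even at a density of 0.1 animals per cubic metre, the strong gas-bearing class contributes the largest share of the sum here, which mirrors the study's inference that an unsampled gas-bearing scatterer can control the 38 kHz signal.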
46

Coding Strategies and Implementations of Compressive Sensing

Tsai, Tsung-Han January 2016
This dissertation studies coding strategies in computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others.

This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract more bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system.

Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire the extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging instead multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds.

The human auditory system has an astonishing ability to localize, track, and filter selected sound sources from a noisy environment. Accomplishing the same task by engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.
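The reconstruction problem shared by these systems, recovering a sparse signal from far fewer multiplexed measurements, can be illustrated with a minimal iterative soft-thresholding (ISTA) example. This is generic compressive sensing with synthetic data, not the dissertation's specific hardware model or algorithm:

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 400, 120, 8                      # signal length, measurements, sparsity
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ x_true                             # compressed measurements (m << n)

    # ISTA: gradient step on ||y - Ax||^2, then soft thresholding
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    lam = 0.01
    x = np.zeros(n)
    for _ in range(500):
        z = x + A.T @ (y - A @ x) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

Sparsity is what makes the multiplexed measurement invertible: with 120 generic linear measurements of a 400-sample signal, recovery is possible only because just a handful of entries are non-zero.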
47

Study of the inverse problem in population balances applied to polymer degradation

Uliana, Murilo 13 December 2011
Computational algorithms and mathematical analysis have become great allies in extracting quantitative information from experimental observations. The aim of the present work was to apply the inverse-problem methodology to the population balance that describes how the distribution of polymer molecule sizes evolves during different polymer degradation processes. The time evolution of the chain length distribution during polymer breakage can be described mathematically by a population balance equation. In the so-called inverse problem, experimentally measured distributions are used to estimate the parameters of the population balance, which describe, for example, how breakage rates vary along the chain and with chain length. This inverse problem is known for its intrinsic numerical ill-conditioning. An algorithm previously developed in the literature for droplet breakage in liquid emulsions, based on the concept of self-similarity of the distributions, was adapted and applied here to the problem of polymer chain scission during degradation. Experimental data for several degradation processes were taken from the literature and used to test the procedure: free-radical degradation of polypropylene initiated by peroxides, acid hydrolysis of dextran, ultrasonic degradation of dextran, shear-induced mechanical degradation of polystyrene, enzymatic degradation of guar, and ultrasonic degradation of guar. The breakage rate distributions obtained for the different systems were analyzed and interpreted in terms of the particularities and mechanism of each degradation process, aiming at a better fundamental understanding of these processes.
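The forward model underlying the inverse problem is the breakage population balance, which for discrete chain lengths reads dN_i/dt = -S_i N_i + Σ_{j>i} b(i|j) S_j N_j. A minimal forward simulation follows (binary scission with a uniform daughter distribution and a linear breakage rate are illustrative assumptions, not parameters fitted to any of the polymers above):

    import numpy as np
    from scipy.integrate import solve_ivp

    n = 100                               # chain lengths 1..n
    sizes = np.arange(1, n + 1)
    S = 1e-3 * sizes.astype(float)        # breakage rate grows with chain length

    def rhs(t, N):
        dN = -S * N
        for j in range(1, n):             # a chain of length j+1 breaks...
            dN[:j] += (2.0 / j) * S[j] * N[j]   # ...uniformly into two fragments
        return dN

    N0 = np.zeros(n); N0[-1] = 1.0        # monodisperse start at length n
    sol = solve_ivp(rhs, (0.0, 2000.0), N0, t_eval=[0.0, 500.0, 2000.0])
    print("number-average length:",
          [round(float(sizes @ y / y.sum()), 1) for y in sol.y.T])

The inverse problem runs this logic in reverse: measured distributions N_i(t) are used to estimate S and b(i|j), and the self-similarity of the distributions is what tames the ill-conditioning noted above.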
48

Spectral Reconstruction of Diagnostic X-Ray Beams

Souza, Daiane Miron de 13 December 2017
The complete characterization of a diagnostic X-ray beam relies on measuring its fluence spectrum. The fluence spectrum can be measured directly using spectroscopic methods; however, this requires specialized equipment and is not easily done in a clinical environment. In this work a methodology was implemented that obtains the X-ray spectrum indirectly. It is based on a mathematical model that applies the inverse Laplace transform to the beam's attenuation curve, yielding data on its spectral distribution. The attenuation curves were measured with an ionization chamber and high-purity aluminum filters. The reconstructed spectra were validated by comparing the energy fluence calculated from them with the energy fluence calculated from the experimentally obtained attenuation curves. As an application, characteristic beam quantities were also calculated, such as the first and second half-value layers, mean energy, 10th percentile, and kerma factor. The energy fluence calculated from the reconstructed spectra agreed well with that calculated from the measured attenuation curves, validating the reconstructed spectra and demonstrating the value of attenuation-curve analysis relative to spectroscopic methods, since attenuation data can be obtained with comparative ease. The characteristic beam quantities calculated from the spectra obtained in this work gave satisfactory results, as expected, since integration is a regularizing process for the spectral distribution.
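In discrete form, the model being inverted is a linear system: the transmission through aluminum thickness x_j is T(x_j) = Σ_k φ_k exp(-μ_k x_j). The sketch below solves that system with non-negative least squares rather than the inverse Laplace transform used in the thesis, and all numbers (energy grid, attenuation coefficients, spectrum) are synthetic stand-ins:

    import numpy as np
    from scipy.optimize import nnls

    E = np.linspace(20.0, 100.0, 17)                     # energy bins (keV)
    mu = 30.0 * (E / 20.0) ** -2.5                       # toy Al attenuation (1/cm)
    phi_true = np.exp(-0.5 * ((E - 50.0) / 15.0) ** 2)   # synthetic fluence spectrum
    phi_true /= phi_true.sum()

    x = np.linspace(0.0, 1.2, 40)                        # filter thicknesses (cm)
    M = np.exp(-np.outer(x, mu))                         # system matrix
    T = M @ phi_true + 1e-4 * np.random.default_rng(2).normal(size=x.size)

    phi_hat, _ = nnls(M, T)                # non-negativity stabilizes the inversion
    print("max spectrum error:", np.abs(phi_hat - phi_true).max())

Because exponentials at neighboring energies are nearly collinear, the unconstrained problem is badly ill-conditioned; some form of regularization (here the non-negativity constraint, in the thesis the Laplace-transform model itself) is essential.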
49

Reconstruction of the photon spectrum of clinical linear accelerators based on the transmission curve and the generalized simulated annealing algorithm

Manrique, John Peter Oyardo 11 December 2015
The spectral distribution of the megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. Direct measurement is difficult to achieve clinically, and transmission analysis is a clinically viable indirect method for determining the photon spectra of clinical linear accelerators. In this method, transmission signals are acquired after the beam passes through different thicknesses of attenuators. The objective of this work was the establishment and application of an indirect method that uses a spectral model based on the generalized simulated annealing algorithm to determine the photon spectrum of clinical linear accelerators from the transmission curve. The spectra obtained were analyzed through analytical determination of dosimetric quantities and related parameters.
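A generic sketch of the fitting step follows, using SciPy's dual_annealing (an implementation of generalized simulated annealing) to recover spectral bin weights from a transmission curve. The energy bins, attenuation coefficients, and "measured" data are synthetic placeholders, not the clinical beams studied in the thesis:

    import numpy as np
    from scipy.optimize import dual_annealing

    E = np.array([1.0, 2.0, 4.0, 6.0])            # MeV bins
    mu = np.array([0.070, 0.049, 0.034, 0.028])   # toy attenuation coefficients (1/mm)
    x = np.linspace(0.0, 300.0, 30)               # attenuator thicknesses (mm)
    M = np.exp(-np.outer(x, mu))

    w_true = np.array([0.15, 0.45, 0.30, 0.10])   # "true" spectral weights
    T_meas = M @ w_true                           # synthetic transmission curve

    def misfit(w):
        w = np.asarray(w) / np.sum(w)             # normalize weights to unit area
        return np.sum((M @ w - T_meas) ** 2)

    res = dual_annealing(misfit, bounds=[(1e-6, 1.0)] * 4, seed=3)
    print("fitted weights:", np.round(res.x / res.x.sum(), 3))

A global stochastic search like this avoids the gradient computation that ill-conditioning makes unreliable, at the cost of many forward evaluations of the transmission model.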
50

Structured Sparsity in EEG Source Reconstruction

Francisco, André Biasin Segalla 27 March 2018
Functional neuroimaging is an area of neuroscience which aims at developing techniques to map the activity of the nervous system, and it has been under constant development in recent decades due to its importance for clinical applications and research. Commonly applied techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have excellent spatial resolution (~ mm) but limited temporal resolution (~ s), which poses a great challenge to our understanding of the dynamics of higher cognitive functions, whose oscillations can occur on much finer temporal scales (~ ms). This limitation arises because these techniques rely on measurements of slow biological responses that are only indirectly correlated with the actual electrical activity. The two major candidates to overcome this shortcoming are electro- and magnetoencephalography (EEG/MEG), non-invasive techniques that measure, respectively, the electric and magnetic fields on the scalp generated by the electrical brain sources. Both have millisecond temporal resolution but typically low spatial resolution (~ cm), due to the highly ill-posed nature of the electromagnetic inverse problem.
There has been a huge effort in recent decades to improve their spatial resolution by incorporating relevant information into the problem, from other imaging modalities and/or from biologically inspired constraints, together with the development of sophisticated mathematical methods and algorithms. In this work we focus on EEG, although all techniques presented here apply equally to MEG because of their identical mathematical form. In particular, we explore sparsity as a useful mathematical constraint in a Bayesian framework called Sparse Bayesian Learning (SBL), which enables meaningful unique solutions to the source reconstruction problem. Moreover, we investigate how to incorporate different structures as degrees of freedom in this framework, an application of structured sparsity, and show that it is a promising way to improve the source reconstruction accuracy of electromagnetic imaging methods.
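A minimal SBL sketch for the linear model y = Lx + noise, with prior x ~ N(0, diag(γ)) and EM updates for γ, illustrates how sparsity emerges as most γ_i are driven to zero. The lead field, source count, and noise level are synthetic placeholders, not a real head model:

    import numpy as np

    rng = np.random.default_rng(4)
    n_sensors, n_sources = 32, 200
    Lf = rng.normal(size=(n_sensors, n_sources))   # stand-in lead-field matrix
    x_true = np.zeros(n_sources)
    x_true[[20, 90, 150]] = [2.0, -1.5, 1.0]       # a few active sources
    sigma2 = 1e-2                                  # assumed known noise variance
    y = Lf @ x_true + np.sqrt(sigma2) * rng.normal(size=n_sensors)

    gamma = np.ones(n_sources)                     # prior variances to be learned
    for _ in range(100):
        Sy = sigma2 * np.eye(n_sensors) + (Lf * gamma) @ Lf.T   # sensor covariance
        W = np.linalg.solve(Sy, Lf)                             # Sy^{-1} L
        mu = gamma * (W.T @ y)                                  # posterior mean
        Sigma_diag = gamma - gamma ** 2 * np.einsum('ij,ij->j', Lf, W)
        gamma = mu ** 2 + Sigma_diag               # EM update: most gamma -> 0

    print("largest reconstructed sources at indices:",
          np.sort(np.argsort(np.abs(mu))[-3:]))

Structured sparsity, as studied in this work, replaces the independent diag(γ) prior with covariance components that couple sources, so that entire groups switch on or off together.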
