21 |
Mathematical modeling with applications in high-performance coding. Su, Yong 10 October 2005 (has links)
No description available.
|
22 |
On singular estimation problems in sensor localization systems. Ash, Joshua N. 10 December 2007 (has links)
No description available.
|
23 |
Sensitivity Analysis and Material Parameter Estimation using Electromagnetic Modelling / Känslighetsanalys och estimering av materialparametrar med elektromagnetisk modellering. Sjödén, Therese January 2012 (has links)
Estimating parameters is the problem of finding their values from measurements and modelling. Parameters describe properties of a system; materials, for instance, are defined by mechanical, electrical, and chemical parameters. Fisher information is an information measure describing how changes in the parameter affect the estimation. The Fisher information includes the physical model of the problem and the statistical model of noise. The Cramér-Rao bound is the inverse of the Fisher information and gives the best possible variance for any unbiased estimator. This thesis considers aspects of sensitivity analysis in two applied material parameter estimation problems. Sensitivity analysis with the Fisher information and the Cramér-Rao bound is used as a tool for evaluation of measurement feasibility, comparison of measurement set-ups, and as a quantitative measure of the trade-off between accuracy and resolution in inverse imaging. The first application is estimation of the wood grain angle parameter in trees and logs. The grain angle is the angle between the direction of the wood fibres and the direction of growth; a large grain angle strongly correlates to twist in sawn timber. In the thesis, measurements with microwaves are argued to be a fast and robust measurement technique, and electromagnetic modelling is applied, exploiting the anisotropic properties of wood. Both two-dimensional and three-dimensional modelling are considered. Mathematical modelling is essential, lowering the complexity and speeding up the computations. According to a sensitivity analysis with the Cramér-Rao bound, estimation of the wood grain angle with microwaves is feasible. The second application is electrical impedance tomography, where the conductivity of an object is estimated from surface measurements. Electrical impedance tomography has applications in, for example, medical imaging, geological surveillance, and wood evaluation.
Different configurations and noise models are evaluated with sensitivity analysis for a two-dimensional electrical impedance tomography problem. The relation between accuracy and resolution is also analysed using the Fisher information. To conclude, sensitivity analysis is employed in this thesis as a method to enhance material parameter estimation. The methods are general and also applicable to other parameter estimation problems.
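The Fisher-information and Cramér-Rao machinery described above can be sketched in a few lines. This is a generic illustration, not a model from the thesis: for a scalar parameter observed through an assumed model y = f(t; θ) + e with Gaussian noise, the information is the squared model sensitivity divided by the noise variance, information from independent measurements adds, and the Cramér-Rao bound is the inverse.

```python
import numpy as np

# Toy measurement model (an assumption for illustration): f(t; theta) = exp(-theta * t),
# observed at times t with additive Gaussian noise of variance sigma2.
theta = 0.5
sigma2 = 0.01
t = np.array([0.5, 1.0, 2.0])

sens = -t * np.exp(-theta * t)    # model sensitivity df/dtheta at each time
I = np.sum(sens**2) / sigma2      # Fisher information; independent measurements add
crb = 1.0 / I                     # lowest variance of any unbiased estimator
```

A design with larger I (for instance, better-placed measurement times) tightens the bound, which is how a Cramér-Rao analysis can compare measurement set-ups.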
|
24 |
The optimal control of a Lévy process. DiTanna, Anthony Santino 23 October 2009 (has links)
In this thesis we study the optimal stochastic control problem of the drift of a Lévy process. We show that, for a broad class of Lévy processes, the partial integro-differential Hamilton-Jacobi-Bellman equation for the value function admits classical solutions and that control policies exist in feedback form. We then explore the class of Lévy processes that satisfy the requirements of the theorem, and find connections between the uniform integrability requirement and the notions of the score function and Fisher information from information theory. Finally we present three different numerical implementations of the control problem: a traditional dynamic programming approach, and two iterative approaches, one based on a finite difference scheme and the other on the Fourier transform.
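The connection mentioned above between the score function and Fisher information can be checked numerically in the simplest case, a Gaussian density (chosen purely for illustration; it is not one of the Lévy processes treated in the thesis): the score is u(x) = p'(x)/p(x) = -x/σ², and the Fisher information E[u(X)²] equals 1/σ².

```python
import numpy as np

# Score of N(0, sigma^2): u(x) = d/dx log p(x) = -x / sigma^2.
# Fisher information of the density: I(p) = E[u(X)^2] = 1 / sigma^2.
sigma = 2.0
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma, size=1_000_000)
score = -x / sigma**2           # exact score of the sampled density
I_mc = np.mean(score**2)        # Monte Carlo estimate; should approach 1/sigma^2 = 0.25
```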
|
25 |
Applied Adaptive Optimal Design and Novel Optimization Algorithms for Practical Use. Strömberg, Eric January 2016 (has links)
The costs of developing new pharmaceuticals have increased dramatically during the past decades. Contributing to these increased expenses are the increasingly extensive and more complex clinical trials required to generate sufficient evidence regarding the safety and efficacy of the drugs. It is therefore of great importance to improve the effectiveness of the clinical phases by increasing the information gained throughout the process, so that the correct decision may be made as early as possible. Optimal Design (OD) methodology using the Fisher Information Matrix (FIM) based on Nonlinear Mixed Effect Models (NLMEM) has proven to be a useful tool for making more informed decisions throughout the clinical investigation. The calculation of the FIM for NLMEM does, however, lack an analytic solution and is commonly approximated by linearization of the NLMEM. Furthermore, two structural assumptions of the FIM are available: a full FIM, and a block-diagonal FIM which assumes that the fixed effects are independent of the random effects in the NLMEM. Once the FIM has been derived, it can be transformed into a scalar optimality criterion for comparing designs. The optimality criterion may be considered local, if it is based on single point values of the parameters, or global (robust), if it is formed over a prior distribution of the parameters. Regardless of design criterion, FIM approximation or structural assumption, the design will be based on the prior information regarding the model and parameters, and is thus sensitive to misspecification in the design stage. Model-based adaptive optimal design (MBAOD) has however been shown to be less sensitive to misspecification in the design stage. The aim of this thesis is to further the understanding and practicality of performing standard OD and MBAOD.
This is to be achieved by: (i) investigating how two common FIM approximations and the structural assumptions may affect the optimized design, (ii) reducing the runtime of complex design optimization by implementing a low-level parallelization of the FIM calculation, (iii) further developing and demonstrating a framework for performing MBAOD, and (iv) investigating the potential advantages of using a global optimality criterion in the already robust MBAOD.
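As a concrete, heavily simplified illustration of FIM-based design comparison, the sketch below evaluates a local D-optimality criterion, the log-determinant of the FIM, for a toy fixed-effects model A·exp(-k·t) with i.i.d. Gaussian noise. The model, parameter values, and candidate designs are assumptions for illustration, far simpler than the NLMEM designs treated in the thesis:

```python
import numpy as np

def fim(times, A, k, sigma2=0.01):
    # Jacobian of f(t) = A * exp(-k * t) with respect to (A, k); one row per sample time
    J = np.column_stack([np.exp(-k * times),
                         -A * times * np.exp(-k * times)])
    return J.T @ J / sigma2            # local FIM for i.i.d. Gaussian noise

def d_criterion(times, A=1.0, k=0.5):
    sign, logdet = np.linalg.slogdet(fim(np.asarray(times, dtype=float), A, k))
    return logdet                      # D-optimality: larger is better

spread = d_criterion([0.5, 2.0, 6.0])      # samples across the decay
clustered = d_criterion([0.4, 0.5, 0.6])   # samples bunched together early
```

For this toy model the spread design gives the larger log-determinant, i.e. jointly more information about (A, k) than the clustered one, which is the kind of comparison a scalar optimality criterion makes possible.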
|
26 |
Retrieving Information from Scattered Photons in Medical Imaging. Jha, Abhinav K. January 2013 (has links)
In many medical imaging modalities, as photons travel from the emission source to the detector, they are scattered by the biological tissue. Often this scatter is viewed as a phenomenon that degrades image quality, and most research is focused on designing methods for either discarding the scattered photons or correcting for scatter. However, the scattered photons also carry information about the tissue that they pass through, which can perhaps be extracted. In this research, we investigate methods to retrieve information from the scattered photons in two specific medical imaging modalities: diffuse optical tomography (DOT) and single photon emission computed tomography (SPECT). To model the scattering of photons in biological tissue, we investigate using the Neumann-series form of the radiative transport equation (RTE). Since the scattering phenomena are different in DOT and SPECT, the models are individually designed for each modality. In the DOT study, we use the developed photon-propagation model to investigate signal detectability in tissue. To study this detectability, we demonstrate the application of a surrogate figure of merit, based on Fisher information, which approximates the Bayesian ideal observer performance. In the SPECT study, our aim is to determine whether the SPECT emission data alone, acquired in list-mode (LM) format and including the scattered-photon data, can be used to compute the tissue-attenuation map. We first propose a path-based formalism to process scattered-photon data, and then derive expressions for the Fisher information that help determine the information content of LM data. We then derive a maximum-likelihood expectation-maximization algorithm that can jointly reconstruct the activity and attenuation maps using LM SPECT emission data.
While the DOT study can help speed the transition of DOT to clinical imaging, the SPECT study will provide insight into whether it is worth exposing the patient to an extra X-ray radiation dose in order to obtain an attenuation map. Finally, although the RTE can be used to model light propagation in tissues, it is computationally intensive and therefore time-consuming. To increase the speed of computation in the DOT study, we develop software to implement the RTE on parallel computing architectures, specifically NVIDIA graphics processing units (GPUs).
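The maximum-likelihood expectation-maximization reconstruction mentioned above can be illustrated with the classic binned MLEM update for emission data; this is the standard textbook form, not the list-mode joint activity-attenuation algorithm derived in the thesis, and the tiny system matrix and activities are assumptions for illustration:

```python
import numpy as np

def mlem(H, y, n_iter=500):
    """Binned MLEM: lambda <- lambda * H^T(y / (H lambda)) / (H^T 1)."""
    lam = np.ones(H.shape[1])          # flat initial activity estimate
    sens = H.sum(axis=0)               # sensitivity image H^T 1
    for _ in range(n_iter):
        proj = H @ lam                 # forward projection
        lam *= (H.T @ (y / np.maximum(proj, 1e-12))) / sens
    return lam

# Toy system (illustrative): 3 detector bins, 2 voxels
H = np.array([[0.8, 0.1],
              [0.1, 0.8],
              [0.1, 0.1]])
true_activity = np.array([4.0, 1.0])
y = H @ true_activity                  # noise-free data for the sketch
est = mlem(H, y)
```

With noise-free data and a full-column-rank system matrix, the iterates converge to the true activity; with noisy data one would stop early or regularize.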
|
27 |
Optimal concentration for SU(1,1) coherent state transforms and an analogue of the Lieb-Wehrl conjecture for SU(1,1). Bandyopadhyay, Jogia 30 June 2008 (has links)
We derive a lower bound for the Wehrl entropy in the setting of SU(1,1). For asymptotically high values of the quantum number k, this bound coincides with the analogue of the Lieb-Wehrl conjecture for SU(1,1) coherent states. The bound on the entropy is proved via a sharp norm bound. The norm bound is deduced by using an
interesting identity for Fisher information of SU(1,1) coherent state transforms on the hyperbolic plane and a new family of sharp Sobolev inequalities on the hyperbolic plane. To prove
the sharpness of our Sobolev inequality, we need to first prove a uniqueness theorem for solutions of a semi-linear Poisson equation
(which is actually the Euler-Lagrange equation for the variational problem associated with our sharp Sobolev inequality) on the hyperbolic plane. Uniqueness theorems proved for similar semi-linear
equations in the past do not apply here and the new features of our proof are of independent interest, as are some of the consequences
we derive from the new family of Sobolev inequalities. We also prove Fisher information identities for the groups SU(n,1) and
SU(n,n).
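For orientation, the standard Wehrl entropy (for Glauber coherent states on the plane) is the entropy of the Husimi function; the SU(1,1) setting studied here replaces the plane by the hyperbolic plane with its invariant measure. The display below is the standard planar definition, given for context rather than taken from the thesis:

```latex
% Wehrl entropy of a state \rho via its Husimi function Q_\rho:
S_W(\rho) = -\int_{\mathbb{C}} Q_\rho(z)\,\ln Q_\rho(z)\,\frac{d^2 z}{\pi},
\qquad Q_\rho(z) = \langle z \vert \rho \vert z \rangle .
```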
|
28 |
Entropia e informação de sistemas quânticos amortecidos / Entropy and information of quantum damped systems. Lima Júnior, Vanderley Aguiar de January 2014 (has links)
LIMA JÚNIOR, Vanderley Aguiar de. Entropia e informação de sistemas quânticos amortecidos. 2014. 65 f. Dissertação (Mestrado em Física) - Programa de Pós-Graduação em Física, Departamento de Física, Centro de Ciências, Universidade Federal do Ceará, Fortaleza, 2014.
|
29 |
Algoritmo genético aplicado à determinação da melhor configuração e do menor tamanho amostral na análise da variabilidade espacial de atributos químicos do solo / Genetic algorithm applied to determine the best configuration and the lowest sample size in the analysis of space variability of chemical attributes of soil. Maltauro, Tamara Cantú 21 February 2018 (links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / It is essential to determine a sampling design with a size that minimizes operating costs and
maximizes the quality of the results when setting up a trial that involves the study of the spatial variability of soil chemical attributes. Thus, this work aimed at resizing a sample configuration to the least possible number of points for a commercial area composed of 102 points, taking the information on the spatial variability of soil chemical attributes into account in the optimization process. Initially, Monte Carlo simulations were carried out, assuming stationary, isotropic Gaussian variables, an exponential model for the semivariance function, and three initial sampling configurations: systematic, simple random, and lattice plus close pairs. A genetic algorithm (GA) was applied to the simulated data and to the soil chemical attributes in order to resize the optimized sample, considering two objective functions. These are based on the efficiency of spatial prediction and of geostatistical model estimation, respectively: maximization of global accuracy and minimization of functions based on the Fisher information matrix. For the simulated data and both objective functions, when the nugget effect and the range varied, the samplings generally showed the lowest objective-function values for a nugget effect of 0 and a practical range of 0.9, and increasing the practical range slightly reduced the number of optimized sampling points in most cases. For the soil chemical attributes, the GA was efficient in reducing the sample size with both objective functions. When maximizing global accuracy, the sample size varied from 30 to 35 points, corresponding to 29.41% to 34.31% of the initial mesh, with a minimum spatial-prediction similarity to the original configuration of 85% or more. This is reflected in the optimization process: the maps constructed with the original and the optimized sample configurations are similar. When minimizing the function based on the Fisher information matrix, however, the optimized sample size varied from 30 to 40 points, corresponding to 29.41% and 39.22% of the original mesh, and there was no similarity between the maps constructed with the initial and the optimized sample configurations.
For both objective functions, the soil chemical attributes showed moderate spatial dependence for the original sample configuration, and most of the attributes showed moderate or strong spatial dependence for the optimized configuration. Thus, the optimization process was efficient when applied both to the simulated data and to the soil chemical attributes.
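A GA-based sample-size reduction of the kind described above can be sketched as subset selection over the candidate points. The sketch below uses a simple coverage objective as a stand-in for the thesis's global-accuracy and Fisher-information criteria, and a mutation-only evolutionary loop (a full GA would add crossover); the point cloud, GA settings, and objective are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
pts = rng.uniform(0, 100, size=(102, 2))     # candidate sampling locations

def objective(mask):
    # Surrogate criterion: mean distance from every candidate point to its
    # nearest selected point (lower = better spatial coverage).
    sel = pts[mask]
    d = np.linalg.norm(pts[:, None, :] - sel[None, :, :], axis=2)
    return d.min(axis=1).mean()

def ga_subset(n_keep=30, pop=40, gens=60, mut=0.1):
    n = len(pts)
    def random_mask():
        m = np.zeros(n, dtype=bool)
        m[rng.choice(n, n_keep, replace=False)] = True
        return m
    population = [random_mask() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=objective)        # best (lowest objective) first
        survivors = population[: pop // 2]
        children = []
        for m in survivors:
            child = m.copy()
            for _ in range(max(1, int(mut * n_keep))):
                # swap mutation keeps the sample size fixed at n_keep
                child[rng.choice(np.flatnonzero(child))] = False
                child[rng.choice(np.flatnonzero(~child))] = True
            children.append(child)
        population = survivors + children
    return min(population, key=objective)

best = ga_subset()                            # boolean mask of 30 retained points
```

Swapping in an accuracy- or FIM-based objective changes only `objective`; the selection loop stays the same.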
|
30 |
Robust Networks: Neural Networks Robust to Quantization Noise and Analog Computation Noise Based on Natural Gradient. January 2019 (has links)
abstract: Deep neural networks (DNNs) have had tremendous success in a variety of
statistical learning applications due to their vast expressive power. Most
applications run DNNs in the cloud on parallelized architectures. There is a need
for efficient DNN inference on edge devices with low-precision hardware and analog
accelerators. To make trained models more robust for this setting, quantization and
analog compute noise are modeled as weight space perturbations to DNNs and an
information theoretic regularization scheme is used to penalize the KL-divergence
between perturbed and unperturbed models. This regularizer has similarities to
both natural gradient descent and knowledge distillation, but has the advantage of
explicitly promoting the network to find a broader minimum that is robust to
weight space perturbations. In addition to the proposed regularization,
KL-divergence is directly minimized using knowledge distillation. Initial validation
on FashionMNIST and CIFAR10 shows that the information theoretic regularizer
and knowledge distillation outperform existing quantization schemes based on the
straight-through estimator or L2-constrained quantization. / Dissertation/Thesis / Masters Thesis Computer Engineering 2019
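The regularization idea above, penalizing the divergence between perturbed and unperturbed models, can be sketched without any deep-learning framework: perturb the weights, compare the two output distributions with KL divergence, and add that penalty to the task loss. The toy linear classifier, noise scale, and sizes below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q), averaged over the batch
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1))

# Tiny linear classifier (illustrative): logits = x @ W
W = rng.normal(size=(8, 3))
x = rng.normal(size=(32, 8))

noise = 0.05 * rng.normal(size=W.shape)    # stands in for quantization/analog noise
p_clean = softmax(x @ W)
p_pert = softmax(x @ (W + noise))

reg = kl(p_clean, p_pert)   # penalty added to the task loss during training
```

Minimizing `reg` during training pushes the model toward weights whose outputs barely move under such perturbations, i.e. toward the broader minima the abstract describes.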
|