121 |
Decomposição aleatória de matrizes aplicada ao reconhecimento de faces / Stochastic decomposition of matrices applied to face recognition
Mauro de Amorim 22 March 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Stochastic methods offer a powerful tool for performing data compression and matrix decompositions. The stochastic method for matrix decomposition studied here uses random sampling to identify a subspace that approximately captures the range of a matrix, preserving part of its essential information. These approximations compress the information, allowing practical problems to be solved efficiently. In this dissertation a singular value decomposition (SVD) is computed using stochastic techniques. This randomized SVD is employed in the task of face recognition. Face recognition works by projecting face images onto a feature space that best describes the variation among known face images. These significant features are known as eigenfaces, since they are the eigenvectors of a matrix associated with a set of faces. The projection characterizes an individual's face approximately as a weighted sum of the characteristic eigenfaces. Recognizing a new face thus amounts to comparing the weights of its projection with the weights of the projections of known individuals. Principal component analysis (PCA) is a widely used method for determining the characteristic eigenfaces; it provides the eigenfaces that represent the greatest variability of information in a set of faces. In this dissertation we assess the quality of the eigenfaces obtained by the randomized SVD (which are the left singular vectors of a matrix containing the images) by comparing their similarity with the eigenfaces obtained by PCA. For this purpose, two image databases of different sizes were used, and several random samplings were applied to the matrix containing the images.
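As a rough illustration of the randomized SVD idea described in this abstract (a sketch of the standard random range-sampling scheme, not the author's exact implementation; function and parameter names are ours):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate rank-k SVD of A via random range sampling."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # basis of the sampled range
    B = Q.T @ A                                       # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                                        # lift back to the original space
    return U[:, :k], s[:k], Vt[:k, :]

# If the columns of A are (mean-centered) vectorized face images, the columns
# of U are approximate eigenfaces; a new face is compared to known individuals
# through its projection coefficients U.T @ face.
```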
|
122 |
Image Structures For Steganalysis And Encryption
Suresh, V 04 1900 (has links) (PDF)
In this work we study two aspects of image security: improper usage and illegal access of images. In the first part we present our results on steganalysis – protection against improper usage of images. In the second part we present our results on image encryption – protection against illegal access of images.
Steganography is the collective name for methodologies that allow the creation of invisible –hence secret– channels for information transfer. Steganalysis, the counter to steganography, is a collection of approaches that attempt to detect and quantify the presence of hidden messages in cover media.
First we present our studies on stego-images using features developed for data stream classification, aiming to make some qualitative assessments about the effect of steganography on the lower-order bit planes (LSB) of images. These features are effective in classifying different data streams. Using them, we study the randomness properties of image and stego-image LSB streams and observe that data stream analysis techniques are inadequate for steganalysis purposes. This motivates steganalytic techniques that go beyond LSB properties. We then present our steganalytic approach, which takes such properties into account.
In one such approach, we perform steganalysis by quantifying the effect of perturbations caused by mild image processing operations (zoom-in/out, rotation, distortions) on stego-images. We show that this approach works both for detecting and for estimating the presence of stego-content under a particularly difficult steganographic technique known as LSB matching steganography.
Next, we present our results on image encryption techniques. Encryption approaches used for text data are usually unsuited to encrypting images (and multimedia objects in general). The reasons are: unlike text, the volume to be encrypted can be huge for images, which increases the computational requirements; and encryption designed for text renders images incompressible, resulting in poor use of bandwidth. These issues are overcome by designing image encryption approaches that obfuscate the image by intelligently re-ordering the pixels, or that encrypt only parts of a given image so as to render it imperceptible. The obfuscated image or the partially encrypted image is still amenable to compression. Efficient image encryption schemes ensure that the obfuscation is not compromised by the inherent correlations present in the image. They also ensure that the unencrypted portions of the image do not provide information about the encrypted parts. In this work we present two approaches for efficient image encryption.
First, we utilize the correlation-preserving properties of the Hilbert space-filling curve to reorder images in such a way that the image is obfuscated perceptually. This process does not compromise the compressibility of the output image. We show experimentally that our approach leads to both perceptual security and perceptual encryption. We then show that the space-filling-curve-based approach also leads to more efficient partial encryption of images, wherein only the salient parts of the image are encrypted, thereby reducing the encryption load.
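One simple way to realize the pixel re-ordering described above is to read the pixels of a 2^k x 2^k image along a Hilbert curve; the sketch below is our own illustration of that idea (not the thesis's actual scheme) using the standard index-to-coordinate conversion:

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Convert index d along a Hilbert curve covering a 2**order grid to (x, y)."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                          # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scramble(img):
    """Re-order the pixels of a square 2**k-sized image along the Hilbert curve."""
    n = img.shape[0]
    order = int(np.log2(n))
    curve = [hilbert_d2xy(order, d) for d in range(n * n)]
    return np.array([img[y, x] for x, y in curve]).reshape(n, n)
```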
In our second approach, we show that the Singular Value Decomposition (SVD) of images is useful for image encryption by way of mismatching the unitary matrices resulting from the decomposition of the images. The images that result from the mismatching operations are seen to be perceptually secure.
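A minimal sketch of the unitary-matrix mismatching idea, assuming two grayscale images of equal size (our simplified reading of the approach, not the exact scheme of the thesis):

```python
import numpy as np

def svd_mismatch(img_a, img_b):
    """Scramble two equal-sized images by swapping the unitary factors of their SVDs."""
    Ua, sa, Vta = np.linalg.svd(img_a.astype(float), full_matrices=False)
    Ub, sb, Vtb = np.linalg.svd(img_b.astype(float), full_matrices=False)
    # Each image's singular values are recombined with the *other* image's
    # unitary matrices, which renders both results perceptually scrambled.
    return Ub @ np.diag(sa) @ Vtb, Ua @ np.diag(sb) @ Vta
```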
|
123 |
Comparison of the 1st and 2nd order Lee–Carter methods with the robust Hyndman–Ullah method for fitting and forecasting mortality rates
Willersjö Nyfelt, Emil January 2020 (has links)
The 1st and 2nd order Lee–Carter methods were compared with the Hyndman–Ullah method with regard to goodness of fit and forecasting ability for mortality rates. Swedish population data from the Human Mortality Database was used. The robust estimation property of the Hyndman–Ullah method was also tested by including the Spanish flu and a hypothetical scenario for the COVID-19 pandemic. After presenting the three methods and comparing them in several respects, it is concluded that the Hyndman–Ullah method is overall superior among the three on the chosen dataset. Its robust estimation of mortality shocks could also be confirmed.
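For reference, the standard 1st order Lee–Carter fit reduces to an SVD of the centered log-mortality surface; a minimal sketch (not the thesis code, and the robust functional Hyndman–Ullah approach is not shown) might look like this:

```python
import numpy as np

def lee_carter_fit(log_mx):
    """First-order Lee-Carter fit: log m(x,t) = a_x + b_x * k_t + error.
    log_mx: ages x years matrix of log central death rates."""
    a_x = log_mx.mean(axis=1)                    # average age profile
    U, s, Vt = np.linalg.svd(log_mx - a_x[:, None], full_matrices=False)
    b_x, k_t = U[:, 0], s[0] * Vt[0, :]          # rank-1 (1st order) term
    # identification constraints: sum(b_x) = 1, mean(k_t) = 0
    b_sum = b_x.sum()
    b_x, k_t = b_x / b_sum, k_t * b_sum
    a_x = a_x + b_x * k_t.mean()
    k_t = k_t - k_t.mean()
    return a_x, b_x, k_t

# A 2nd order fit would also keep U[:, 1] and s[1] * Vt[1, :];
# k_t is then typically forecast with a random walk with drift.
```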
|
124 |
Décomposition de petit rang, problèmes de complétion et applications : décomposition de matrices de Hankel et des tenseurs de rang faible / Low rank decomposition, completion problems and applications : low rank decomposition of Hankel matrices and tensors
Harmouch, Jouhayna 19 December 2018 (has links)
We study the decomposition of a multivariate Hankel matrix as a sum of Hankel matrices of small rank, in correlation with the decomposition of its symbol σ as a sum of polynomial-exponential series. We present a new algorithm to compute the low-rank decomposition of the Hankel operator and the decomposition of its symbol, exploiting the properties of the associated Artinian Gorenstein quotient algebra. A basis of this algebra is computed from the Singular Value Decomposition of a sub-matrix of the Hankel matrix. The frequencies and the weights are deduced from the generalized eigenvectors of pencils of shifted Hankel sub-matrices. An explicit formula for the weights in terms of the eigenvectors spares us from solving a Vandermonde system. This new method is a multivariate generalization of the so-called Pencil method for solving Prony-type decomposition problems. We analyse its numerical behaviour in the presence of noisy input moments, and describe a rescaling technique which improves the numerical quality of the reconstruction for frequencies of high amplitude. We also present a new Newton iteration, which converges locally to the closest multivariate Hankel matrix of low rank, and show its impact for correcting errors on the input moments. We then study the decomposition of a multi-symmetric tensor T as a sum of powers of products of linear forms, in correlation with the decomposition of its dual as a weighted sum of evaluations. We use the properties of the associated Artinian Gorenstein algebra to compute the decomposition of its dual, which is defined via a formal power series τ. We use the low-rank decomposition of the Hankel operator associated with the symbol τ into a sum of indecomposable operators of low rank. A basis of the algebra is chosen such that multiplication by some variables is possible. We compute the sub-coordinates of the evaluation points and their weights using the eigen-structure of the multiplication matrices. The new algorithm that we propose works well for small rank. We give a theoretical generalization of the method in n-dimensional space, and give numerical examples of the decomposition of a multi-linear tensor of rank 3 in dimension 3 and of a multi-symmetric tensor of rank 3 in dimension 3. Finally, we study the completion problem for low-rank Hankel matrices as a minimization problem, using its relaxation as a minimization of the nuclear norm of the Hankel matrix. We adapt the SVT algorithm to the case of Hankel matrices and compute the linear operator which describes the constraints of the problem and its adjoint. We show the utility of the decomposition algorithm in applications such as the LDA model and the ODF model.
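To make the Hankel/pencil mechanism concrete, here is a one-dimensional sketch of a Prony-type decomposition via the SVD of a Hankel matrix of moments. It is our illustration only: the thesis works in the multivariate setting and recovers the weights from eigenvectors rather than the small Vandermonde least-squares solve used here.

```python
import numpy as np

def prony_pencil(moments, rank):
    """Recover nodes z_j and weights w_j from moments m_k = sum_j w_j * z_j**k
    using the SVD of a Hankel matrix and a shifted-submatrix (pencil) step."""
    m = np.asarray(moments, dtype=complex)
    n = (len(m) + 1) // 2
    H = np.array([m[i:i + n] for i in range(len(m) - n + 1)])   # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    U_r = U[:, :rank]                                           # signal subspace
    # shift-invariance of the subspace: the eigenvalues of M are the nodes z_j
    M = np.linalg.pinv(U_r[:-1, :]) @ U_r[1:, :]
    z = np.linalg.eigvals(M)
    # weights from a small least-squares (Vandermonde) solve
    V = np.vander(z, N=len(m), increasing=True).T
    w, *_ = np.linalg.lstsq(V, m, rcond=None)
    return z, w
```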
|
125 |
Methoden zur Ermittlung des Betriebsleermassenanteils im Flugzeugentwurf / Methods for Estimating the Operating Empty Mass Fraction in Aircraft Design
Lehnert, Jan January 2018 (has links) (PDF)
This project report deals with the calculation of the operating empty mass fraction in aircraft design. Well-known estimation methods by Torenbeek, Raymer, Marckwardt, and Loftin are examined for quality and currency and compared with one another. The central question is whether a more accurate method for estimating the operating empty mass fraction can be found on the basis of new statistics. Besides the development of a new estimation method, the use of the singular value decomposition in aircraft design is also discussed, together with its advantages and disadvantages regarding handling and accuracy. This work builds on an up-to-date compilation of aircraft parameters for a wide range of passenger aircraft, accounting for 65 % of the entire flying world fleet in 2016.
The authors named above provide equations for estimating the ratio of operating empty mass to maximum take-off mass. Relative to the underlying statistics, these equations show deviations of up to 10 %. This is due to the simplicity of the methods, since the number of parameters used is limited. Within this project, an analytical equation for estimating the operating empty mass fraction was derived that incorporates the following design parameters: thrust-to-weight ratio, wing loading, design range, payload, and number of engines. In a direct comparison with Loftin's equation, the relative error of the estimate is reduced by 43 %. This was achieved by including further design parameters and combining them in an optimal way. Only those aircraft parameters were included that are already known in the preliminary sizing phase of development and that have a causal relationship to the operating empty mass fraction. The new method exceeds the accuracy of the classical estimation methods and thereby reduces the risk of an erroneous mass estimate at an early design stage.
In the further course of the project, the use and scope of application of the singular value decomposition (SVD) in aircraft design is considered. The SVD is a mathematical procedure used to infer all parameters of a model from a few known input parameters. This makes it possible to obtain a quick estimate of a complex design from a limited set of known input quantities. It turned out that the relative error of the operating empty mass fraction obtained with the SVD is on the same level as that of the previously known estimation methods and thus brings no advantage with respect to the accuracy of the result.
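The SVD-based inference of a full set of design parameters from a few known inputs can be sketched as follows (a hypothetical illustration; the function name, the normalization of the snapshot matrix, and the least-squares coefficient fit are our assumptions, not the report's exact procedure):

```python
import numpy as np

def svd_complete(snapshots, known_idx, known_vals, rank):
    """Infer a full parameter vector from a few known entries, using the
    leading SVD modes of a snapshot matrix (columns = existing aircraft,
    rows = normalized design parameters)."""
    mean = snapshots.mean(axis=1)
    U, s, Vt = np.linalg.svd(snapshots - mean[:, None], full_matrices=False)
    modes = U[:, :rank]                                   # dominant directions
    # least-squares fit of the modal coefficients to the known entries
    coeffs, *_ = np.linalg.lstsq(modes[known_idx, :],
                                 known_vals - mean[known_idx], rcond=None)
    return mean + modes @ coeffs                          # full parameter estimate
```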
|
126 |
Sensory Integration under Natural Conditions: a Theoretical, Physiological and Behavioral Approach
Onat, Selim 02 September 2011
We can claim to apprehend a system in its totality only when we know how it behaves under its natural operating conditions. However, in the face of the complexity of the world, science can only advance by simplifications, which paradoxically hide a good deal of the very mechanisms we are interested in. On the other hand, the scientific enterprise is tightly coupled to advances in technology, and the latter inevitably influences the manner in which scientific experiments are conducted. For this reason, experimental conditions that would have been impossible to bring into the laboratory no more than 20 years ago are today within our reach.
This thesis investigates neuronal integrative processes by using a variety of theoretical and experimental techniques wherein the approximation of ecologically relevant conditions within the laboratory is the common denominator. The working hypothesis of this thesis is that neurons and neuronal systems, in the sensory and higher cortices, are specifically adapted, as a result of evolutionary processes, to the sensory signals most likely to be received under ecologically relevant conditions. In order to conduct the present study along this line, we first recorded movies with the help of two microcameras carried by cats exploring a natural environment. This resulted in a database of binocular natural movies that was used in our theoretical and experimental studies.
In a theoretical study, we aimed to understand the principles of binocular disparity encoding in terms of spatio-temporal statistical properties of natural movies in conjunction with simple mathematical expressions governing the activity levels of simulated neurons. In an unsupervised learning scheme, we used the binocular movies as input to a neuronal network and obtained receptive fields that represent these movies optimally with respect to the temporal stability criterion. Many distinctive aspects of the binocular coding in complex cells, such as the phase and position encoding of disparity and the existence of unbalanced ocular contributions, were seen to emerge as the result of this optimization process. Therefore we conclude that the encoding of binocular disparity by complex cells can be understood in terms of an optimization process that regulates activities of neurons receiving ecologically relevant information.
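A toy version of the temporal stability criterion mentioned above, written for a linear model (the study's actual networks, nonlinearities, and constraints are more elaborate; this sketch only illustrates the kind of objective being optimized):

```python
import numpy as np

def temporal_stability_loss(W, X):
    """Slowness objective: responses y_t = W @ x_t to consecutive natural-movie
    frames should vary slowly, while unit variance avoids the trivial constant
    solution. X: (frames, pixels), W: (units, pixels)."""
    Y = X @ W.T                               # responses, one row per frame
    Y = (Y - Y.mean(0)) / (Y.std(0) + 1e-8)   # soft variance constraint
    dY = np.diff(Y, axis=0)                   # temporal derivative
    return np.mean(dY ** 2)                   # small value = temporally stable
```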
Next we aimed to physiologically characterize the responses of the visual cortex to ecologically relevant stimuli in its full complexity and compare these to the responses evoked by artificial, conventional laboratory stimuli. To achieve this, a state-of-the-art recording method, voltage-sensitive dye imaging was used. This method captures the spatio-temporal activity patterns within the millisecond range across large cortical portions spanning over many pinwheels and orientation columns. It is therefore very well suited to provide a faithful picture of the cortical state in its full complexity. Drifting bar stimuli evoked two major sets of components, one coding for the position and the other for the orientation of the grating. Responses to natural stimuli involved more complex dynamics, which were locked to the motion present in the natural movies. In response to drifting gratings, the cortical state was initially dominated by a strong excitatory wave. This initial spatially widespread hyper-excitatory state had a detrimental effect on feature selectivity. In contrast, natural movies only rarely induced such high activity levels and the onset of inhibition cut short a further increase in activation level. An increase of 30% of the movie contrast was estimated to be necessary in order to produce activity levels comparable to gratings. These results show that the operating regime within which the natural movies are processed differs remarkably. Moreover, it remains to be established to what extent the cortical state under artificial conditions represents a valid state to make inferences concerning operationally more relevant input.
The primary visual cortex contains a dense web of neuronal connections linking distant neurons. However the flow of information within this local network is to a large extent unknown under natural stimulation conditions. To functionally characterize these long-range intra-areal interactions, we presented natural movies also locally through either one or two apertures and analyzed the effects of the distant visual stimulation on the local activity levels. The distant patch had a net facilitatory effect on the local activity levels. Furthermore, the degree of the facilitation was dependent on the congruency between the two simultaneously presented movie patches. Taken together, our results indicate that the ecologically relevant stimuli are processed within a distinct operating regime characterized by moderate levels of excitation and/or high levels of inhibition, where facilitatory cooperative interactions form the basis of integrative processes.
To gather better insights into the motion locking phenomenon and test whether the local cooperative processes generalize to larger-scale interactions, we resorted to the unequalled temporal resolution of EEG and conducted a multimodal study. Inspired by the temporal properties of our natural movies, we designed a dynamic multimodal stimulus that was either congruent or incongruent across the visual and auditory modalities. In the visual areas, the dynamic stimulation unfolded neuronal oscillations with frequencies well above the frequency content of the stimuli, and the strength of these oscillations was coupled to the stimuli's motion profile. Furthermore, the coupling was found to be stronger when the auditory and visual streams were congruent. These results show that motion locking, which had so far been observed in cats, is a phenomenon that also exists in humans. Moreover, the presence of long-range multimodal interactions indicates that, in addition to local intra-areal mechanisms ensuring the integration of local information, the central nervous system embodies an architecture that also enables the integration of information on much larger scales, spread across different modalities.
Any characterization of integrative phenomena at the neuronal level needs to be supplemented by its effects at the behavioral level. We therefore tested whether we could find any evidence of integration of different sources of information at the behavioral level using natural stimuli. To this end, we presented to human subjects images of natural scenes and evaluated the effect of simultaneously played localized natural sounds on their eye movements. The behavior during multimodal conditions was well approximated by a linear combination of the behavior under unimodal conditions. This is a strong indication that both streams of information are integrated in a joint multimodal saliency map before the final motor command is produced.
The results presented here validate both the feasibility and the utility of using natural stimuli in experimental settings. It is clear that the ecological relevance of the experimental conditions is crucial for elucidating complex neuronal mechanisms that result from evolutionary processes. In the future, deeper insights into the nervous system will only be possible when the complexity of our experiments matches the complexity of the mechanisms we are interested in.
|
127 |
H-matrix based Solver for 3D Elastodynamics Boundary Integral Equations / Solveurs fondés sur la méthode des H-matrices pour les équations intégrales en élastodynamique 3D
Desiderio, Luca 27 January 2017
This thesis focuses on the theoretical and numerical study of fast methods for solving the equations of 3D elastodynamics in the frequency domain, in the context of a collaboration with the company Shell aimed at improving the convergence of seismic inversion problems. The method relies on the Boundary Element Method (BEM) for the discretization and on hierarchical matrix (H-matrix) techniques to accelerate the solution of the resulting linear system. The BEM is based on a boundary integral formulation which requires the discretization of the domain boundaries only, so it is well suited to seismic wave propagation problems. A major drawback of classical BEM is that it results in dense matrices, which leads to high memory requirements (O(N^2), where N is the number of degrees of freedom) and computational costs; the simulation of realistic problems is therefore limited by the number of degrees of freedom, and several fast BEMs have been developed to improve computational efficiency. In this thesis a fast H-matrix based direct BEM solver was developed, using an LU factorization and hierarchical storage. While the concept of H-matrices is easy to grasp, its implementation requires significant algorithmic development, such as handling the multiplication of matrices represented by different structures (compressed or not), which involves no fewer than 27 sub-cases. Another delicate point is the use of compressed (low-rank) matrix approximation methods for vector problems. An algorithmic study was therefore carried out to implement the H-matrix method. We also derived a theoretical estimate of the expected low rank for oscillatory kernels, which is a novelty, and showed that the method is applicable to elastodynamics. In addition, the influence of the various parameters of the method was studied in 3D acoustics and elastodynamics in order to calibrate their optimal numerical values. As part of the collaboration with Shell, a specific test case was studied: the propagation of a seismic wave in an elastic half-space subjected to a point force at the surface. Finally, the direct solver was integrated into the COFFEE code developed at POEMS (about 25,000 lines of Fortran 90).
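The compression step behind H-matrix storage can be sketched with a truncated SVD of an admissible (far-field) block. In practice adaptive cross approximation is often used instead of a full SVD, and the kernel and cluster geometry below are illustrative assumptions only:

```python
import numpy as np

def lowrank_compress(block, tol=1e-6):
    """Compress a far-field (admissible) block to low rank with a truncated SVD:
    keep singular values above tol * largest singular value.
    Returns factors A, B with block ~ A @ B."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k], Vt[:k, :]

# Example: a smooth kernel evaluated between two well-separated point clusters
# compresses to a very small rank.
rng = np.random.default_rng(0)
X = rng.random((200, 3))                  # source cluster near the origin
Y = rng.random((150, 3)) + 10.0           # well-separated target cluster
G = 1.0 / np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)  # 1/r kernel
A, B = lowrank_compress(G, 1e-8)
print(G.shape, "compressed to rank", A.shape[1])
```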
|
128 |
SVD-BAYES: A SINGULAR VALUE DECOMPOSITION-BASED APPROACH UNDER BAYESIAN FRAMEWORK FOR INDIRECT ESTIMATION OF AGE-SPECIFIC FERTILITY AND MORTALITY
Chu, Yue January 2020
No description available.
|
129 |
Comprehensive Characterization of the Transcriptional Signaling of Human Parturition through Integrative Analysis of Myometrial Tissues and Cell Lines
Stanfield, Zachary 28 August 2019
No description available.
|
130 |
Simulation of Complex Sound Radiation Patterns from Truck Components using Monopole Clusters / Simulering av komplexa ljudstrålningsmönster från lastbilskomponenter med hjälp av monopolkluster
Calen, Titus, Wang, Xiaomo January 2023
Pass-by noise testing is an important step in vehicle design and regulation compliance. Finite element simulations have been used to cut the costs of prototyping and testing, but the high computational cost of simulating surface vibrations of complex geometries and the resulting airborne noise propagation makes the switch to digital twin methods impractical. This thesis investigates the use of equivalent source methods as an alternative to the aforementioned simulations. Using a simple 2D model, difficulties such as the ill-conditioning of the transfer matrix are examined, and the required regularisation techniques, such as TSVD and the Tikhonov L-curve method, are tested and then applied to a mesh of a 3D engine model. Source and pressure field errors are measured and their origins are explained. A heavy emphasis is put on the model geometry as a source of error. Finally, rules of thumb based on the regularisation balance and on wavelength-dependent pressure sampling positions are formulated in order to achieve usable results.
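A minimal sketch of the two regularisation techniques named above, applied to the equivalent-source problem p ≈ G q (our notation; the thesis selects the Tikhonov parameter with the L-curve, which is not implemented here):

```python
import numpy as np

def solve_sources(G, p, method="tikhonov", reg=1e-3):
    """Estimate monopole source strengths q from measured pressures p with
    transfer matrix G. G is typically ill-conditioned, so plain least squares
    fails; the SVD filter factors below stabilize the inversion."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    beta = U.conj().T @ p
    if method == "tsvd":
        # truncated SVD: keep only singular values above reg * s_max
        f = (s > reg * s[0]).astype(float)
    else:
        # Tikhonov filter factors s^2 / (s^2 + lambda^2)
        f = s**2 / (s**2 + reg**2)
    return Vt.conj().T @ (f * beta / s)
```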
|