1 |
Anomaly Detection in Aeroacoustic Wind Tunnel Experiments. Defreitas, Aaron Chad, 27 October 2021.
Wind tunnel experiments often employ a large number and wide variety of sensor systems. Anomalous measurements occurring without the knowledge of the researcher can be devastating to the success of costly experiments; therefore, anomaly detection is of great interest to the wind tunnel community. Currently, anomaly detection in wind tunnel data is a manual procedure. A researcher will analyze the quality of measurements, for instance monitoring for pressure measurements outside an expected range or additional variability in a time-averaged quantity. More commonly, the raw data must be fully processed to obtain near-final results during the experiment for an effective review.
Rapid anomaly detection methods are desired to ensure the quality of a measurement and reduce the load on the researcher. While there are many effective methodologies for anomaly detection used throughout the wider engineering research community, they have not been demonstrated in wind tunnel experiments. Wind tunnel experimentation is unusual in that repeat measurements are uncommon; typically, a measurement is repeated only after an anomaly has been identified. Since most anomaly detection methodologies rely on well-resolved knowledge of a measurement to uncover the expected uncertainties, they can be difficult to apply in the wind tunnel setting.
First, the analysis will focus on pressure measurements around an airfoil and its wake. Principal component analysis (PCA) will be used to build a measurement expectation by linear estimation. A covariance matrix constructed from experimental data will be used in the PCA scheme; this covariance matrix represents both the strong deterministic relations dependent on experimental configuration and the random uncertainty. Through principles of ideal flow, a method to normalize geometrical changes and thereby improve measurement expectations will be demonstrated. Measurements from a microphone array, another system commonly employed in aeroacoustic wind tunnels, will be analyzed similarly through evaluation of the cross-spectral matrix of microphone data, with minimal repeat measurements. A spectral projection method will be proposed that identifies unexpected acoustic source distributions. Analysis of good and anomalous measurements shows that this methodology is effective. Finally, a machine learning technique will be investigated for an experimental situation where repeat measurements of a known event are readily available. A convolutional neural network for feature detection will be demonstrated in the context of audio detection.
This dissertation presents techniques for anomaly detection in sensor systems commonly used in wind tunnel experiments. The presented work suggests that these anomaly identification techniques can be easily introduced into aeroacoustic experiment methodology, minimizing tunnel downtime and reducing cost. / Doctor of Philosophy / Efficient detection of anomalies in wind tunnel experiments would reduce the cost of experiments and increase their effectiveness. Currently, manual inspection is used to detect anomalies in wind tunnel measurements. A researcher may analyze measurements during an experiment, for instance monitoring for pressure measurements outside an expected range or additional variability in a time-averaged quantity. More commonly, the raw data must be fully processed to obtain near-final results to determine quality.
In this dissertation, several methods that can assist the wind tunnel researcher in reviewing measurements are developed and tested. First, a method to simultaneously monitor pressure measurements and wind tunnel environment measurements is developed with a popular linear algebra technique called Principal Component Analysis (PCA). The novelty in using PCA here is that measurements in wind tunnels are often not repeated; instead, the proposed method uses a large number of independent measurements acquired under various conditions, together with fundamental aspects of fluid mechanics, to train the detection algorithm.
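A minimal sketch of such a PCA-based expectation and residual check, assuming a matrix of archived runs (rows) by sensor channels (columns); the component count, the synthetic data, and the 99th-percentile threshold are illustrative placeholders rather than values from the dissertation.

```python
import numpy as np

def fit_pca(X, n_components):
    """Fit a PCA model from training runs (rows) x sensor channels (columns)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # The covariance matrix captures both deterministic, configuration-dependent
    # relations between channels and random measurement uncertainty.
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return mean, eigvecs[:, order[:n_components]]

def residual_score(x, mean, components):
    """Squared residual of a new measurement against the PCA subspace."""
    xc = x - mean
    x_hat = components @ (components.T @ xc)   # linear estimate of the measurement
    return float(np.sum((xc - x_hat) ** 2))

# Usage: flag a run whose residual exceeds a threshold set from trusted runs.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 32))           # placeholder for archived runs
mean, comps = fit_pca(X_train, n_components=5)
threshold = np.quantile([residual_score(x, mean, comps) for x in X_train], 0.99)
is_anomalous = residual_score(X_train[0] + 5.0, mean, comps) > threshold
```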
Another wind tunnel system considered is the microphone array, a collection of microphones arranged at known locations. Current methods to assess the quality of the output of this system require extended computation and review time during an experiment. A method parallel to PCA is used to rapidly determine whether an anomaly is present in the measurement. This method does not require the extra computation needed to reconstruct what the microphone array has observed, and it simplifies the quantities assessed for anomalies. While it is not a replacement for complete processing of microphone array measurements, it can move most of the review effort out of experiment time and defer detailed review until after the experiment is complete.
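A hedged sketch of the spectral-projection idea: a new cross-spectral matrix is compared against the dominant eigenspace of a reference built from trusted measurements. The subspace rank, the synthetic spectra, and the leakage threshold are assumptions for illustration only.

```python
import numpy as np

def csm(mic_spectra):
    """Cross-spectral matrix from per-block microphone spectra at one frequency bin.
    mic_spectra: complex array of shape (n_blocks, n_mics)."""
    return (mic_spectra.conj().T @ mic_spectra) / mic_spectra.shape[0]

def subspace_leakage(C_new, C_ref, rank):
    """Fraction of the new CSM's energy lying outside the dominant eigenspace
    of a reference CSM built from trusted measurements."""
    eigvals, eigvecs = np.linalg.eigh(C_ref)            # Hermitian eigendecomposition
    U = eigvecs[:, np.argsort(eigvals)[::-1][:rank]]    # dominant source subspace
    P = U @ U.conj().T                                  # projector onto that subspace
    residual = C_new - P @ C_new @ P
    return np.linalg.norm(residual, 'fro') / np.linalg.norm(C_new, 'fro')

# Usage: large leakage suggests an unexpected acoustic source distribution.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(64, 16)) + 1j * rng.normal(size=(64, 16))
C_ref = csm(spectra)
C_new = csm(spectra + 0.1 * (rng.normal(size=(64, 16)) + 1j * rng.normal(size=(64, 16))))
flag = subspace_leakage(C_new, C_ref, rank=4) > 0.2     # threshold is illustrative
```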
Finally, an application of machine learning is discussed, with an alternate application outside the wind tunnel. This work explores the usefulness of a convolutional neural network (CNN) for cough detection. The same approach can be applied to detect anomalies in audio data when searching for specific anomalies with known characteristics. CNNs generally require considerable effort to train and operate effectively, but they are not dependent on the application or data type, so these methods could also be applied to a wind tunnel experiment.
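A minimal sketch of a CNN audio-event classifier operating on log-mel spectrogram patches; the layer sizes, input shape, and class count are placeholders and are not the network used in the dissertation.

```python
import torch
import torch.nn as nn

class AudioEventCNN(nn.Module):
    """Small CNN that classifies fixed-size log-mel spectrogram patches."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),            # makes the head robust to patch size
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, 1, mel_bins, frames)
        return self.classifier(self.features(x).flatten(1))

# Usage: score a batch of 64x128 spectrogram patches.
model = AudioEventCNN()
logits = model(torch.randn(8, 1, 64, 128))       # shape (8, 2)
```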
Overall, the work in this dissertation shows many techniques which can be implemented into current wind tunnel operations to improve the efficiency and effectiveness of the data review process.
|
2 |
Coupling the planetary boundary layer to the large scale dynamics of the atmosphere: the impact of vertical discretisation. Holdaway, Daniel, January 2010.
Accurate coupling between the resolved scale dynamics and sub-grid scale physics is essential for accurate modelling of the atmosphere. Previous emphasis has been on the temporal aspects of this so-called physics-dynamics coupling problem, with little attention paid to the spatial aspects. When designing a model for numerical weather prediction there is a choice of how to vertically arrange the required variables, namely the Lorenz and Charney-Phillips grids, and there is ongoing debate as to which is optimal. The Charney-Phillips grid is considered good for capturing the large scale dynamics and wave propagation, whereas the Lorenz grid is more suitable for conservation. However, the Lorenz grid supports a computational mode. In the first half of this thesis it is argued that the Lorenz grid is preferred for modelling the stably stratified boundary layer. This raises the question: which grid will produce the most accurate results when coupling the large scale dynamics to the stably stratified planetary boundary layer? The second half of this thesis addresses this question. The normal mode analysis approach, as used in previous work of a similar nature, is employed. This is an attractive methodology since it allows one to pin down exactly why a particular configuration performs well. In order to apply this method a one-dimensional column model is set up, in which horizontally wavelike solutions with a given wavenumber are assumed. Applying this method encounters issues when the problem is non-normal, as it will be when boundary layer terms are included. It is shown that, when addressing the coupled problem, the lack of orthogonality between eigenvectors can cause mode analysis to break down. Dynamical modes could still be interpreted and compared using the eigenvectors, but boundary layer modes could not. It is argued that one can recover some of the usefulness of the methodology by examining singular vectors and singular values; these retain the appropriate physical interpretation and allow for valid comparison owing to the orthogonality between singular vectors. Despite the problems in using this otherwise desirable methodology, some interesting results have been obtained. It is shown that the Lorenz grid is favoured when the boundary layer is considered on its own; it captures the structures of the steady states and transient singular vectors more accurately than the Charney-Phillips grid. For the coupled boundary layer and dynamics, the Charney-Phillips grid is found to be most accurate in terms of capturing the steady state. Dispersion properties of dynamical modes in the coupled problem depend on the choice of horizontal wavenumber. For smaller horizontal wavenumbers there is little to distinguish between the Lorenz and Charney-Phillips grids: both the frequency and structure of dynamical modes are captured accurately. Dynamical mode structures are found to be harder to interpret at larger horizontal wavenumbers; for those that are examined, the Charney-Phillips grid produces the most sensible and accurate results. It is found that boundary layer modes in the coupled problem cannot be concisely compared between the Lorenz and Charney-Phillips grids because of the issues that arise with the methodology. The Lorenz grid computational mode is found to be suppressed by the boundary layer, but only in the boundary layer region.
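A small numerical illustration of this methodological point, with an arbitrary 2x2 non-normal matrix standing in for the discretised coupled operator: the eigenvectors of a non-normal matrix need not be orthogonal, so mode-by-mode comparison degrades, while singular vectors are orthonormal by construction.

```python
import numpy as np

# A non-normal matrix (A @ A.T != A.T @ A); the entries are illustrative only.
A = np.array([[1.0, 10.0],
              [0.0,  1.5]])

eigvals, eigvecs = np.linalg.eig(A)
U, s, Vt = np.linalg.svd(A)

# Eigenvectors are far from orthogonal: their inner product is close to 1.
eig_overlap = abs(eigvecs[:, 0] @ eigvecs[:, 1])
# Left singular vectors are orthonormal by construction.
svd_overlap = abs(U[:, 0] @ U[:, 1])

print(f"non-normality   : {np.linalg.norm(A @ A.T - A.T @ A):.3f}")
print(f"eigvec overlap  : {eig_overlap:.3f}")    # ~0.999 here
print(f"singvec overlap : {svd_overlap:.1e}")    # ~0 (orthogonal)
```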
|
3 |
Numerical Methods in Deep Learning and Computer Vision. Song, Yue, 23 April 2024.
Numerical methods, the collective name for numerical analysis and optimization techniques, are widely used in computer vision and deep learning. In this thesis, we investigate the algorithms of several numerical methods and their applications in deep learning. The studied numerical techniques mainly include differentiable matrix power functions, differentiable eigendecomposition (ED), feasible orthogonal matrix constraints in optimization and latent semantics discovery, and physics-informed techniques for solving partial differential equations in disentangled and equivariant representation learning. We first propose two numerical solvers for faster computation of the matrix square root and its inverse; the proposed algorithms yield considerable speedups in practical computer vision tasks. We then turn to the main issues in integrating differentiable ED into deep learning: backpropagation instability, slow decomposition for batched matrices, and ill-conditioned input throughout training. Approximation techniques are first leveraged to closely approximate the backward gradients while avoiding gradient explosion, which resolves the issue of backpropagation instability. To improve the computational efficiency of ED, we propose an efficient ED solver dedicated to the small and medium batched matrices frequently encountered as input in deep learning. Orthogonality techniques are also proposed to improve input conditioning. Together, these techniques mitigate the difficulty of applying differentiable ED in deep learning. In the last part of the thesis, we rethink some key concepts in disentangled representation learning. We first investigate the relation between disentanglement and orthogonality: generative models are constrained with several proposed forms of orthogonality, and the disentanglement performance is shown to improve. We also challenge the linear assumption on latent traversal paths and propose to model the traversal process as dynamic spatiotemporal flows on potential landscapes. Finally, we build probabilistic generative models of sequences that allow for novel understandings of equivariance and disentanglement. We expect this investigation to pave the way for more in-depth and impactful research at the intersection of numerical methods and deep learning.
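As one concrete instance of the kind of fast matrix square root solver described above, here is a hedged sketch of the coupled Newton-Schulz iteration; the Frobenius-norm pre-scaling, the iteration count, and the test matrix are standard illustrative choices, not the exact algorithm proposed in the thesis.

```python
import torch

def sqrtm_newton_schulz(A, num_iters=10):
    """Coupled Newton-Schulz iteration: returns (A^{1/2}, A^{-1/2}) for an SPD matrix.
    A is pre-scaled by its Frobenius norm so the iteration converges."""
    dim = A.shape[-1]
    norm = A.norm()                        # Frobenius norm
    Y = A / norm
    Z = torch.eye(dim, dtype=A.dtype)
    I = torch.eye(dim, dtype=A.dtype)
    for _ in range(num_iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y = Y @ T                          # converges to (A / norm)^{1/2}
        Z = T @ Z                          # converges to (A / norm)^{-1/2}
    return Y * norm.sqrt(), Z / norm.sqrt()

# Usage on a random SPD matrix; the residual should be tiny in float64.
B = torch.randn(8, 8, dtype=torch.float64)
A = B @ B.T + 8 * torch.eye(8, dtype=torch.float64)
sqrtA, isqrtA = sqrtm_newton_schulz(A)
print(torch.dist(sqrtA @ sqrtA, A))        # near round-off level
```

Only matrix multiplications appear in the loop, which is why such iterations map well onto GPU batching compared with an explicit eigendecomposition or SVD.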
|
4 |
Implementace algoritmu dekompozice matice a pseudoinverze na FPGA / Implementation of matrix decomposition and pseudoinversion on FPGA. Röszler, Pavel, January 2018.
The purpose of this thesis is to implement algorithms for matrix eigendecomposition and pseudoinverse computation on a Field Programmable Gate Array (FPGA) platform. First, the matrix decomposition methods broadly used in these algorithms are described. The next section focuses on the basic theory and methods for computing eigenvalues, eigenvectors, and the matrix pseudoinverse; several example implementations in Matlab are attached. The Vivado High-Level Synthesis tools and libraries were used for the final implementation. After a brief introduction to FPGA fundamentals, the thesis continues with a description of the implemented blocks. The results of each variant were compared in terms of timing and FPGA utilization. The selected block was validated on a development board and its arithmetic precision was analyzed.
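For reference, a minimal sketch of the Moore-Penrose pseudoinverse computed from the SVD, the kind of computation typically prototyped in software before a hardware implementation; the singular-value tolerance below follows common numerical practice and is an assumption, not a detail taken from the thesis.

```python
import numpy as np

def pinv_via_svd(A, rcond=None):
    """Moore-Penrose pseudoinverse via the SVD: A+ = V diag(1/s) U^T,
    treating singular values below a tolerance as zero."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if rcond is None:
        rcond = max(A.shape) * np.finfo(A.dtype).eps   # common default cutoff
    tol = rcond * s.max()
    s_inv = np.where(s > tol, 1.0 / s, 0.0)
    return Vt.T @ (s_inv[:, None] * U.T)

# Usage: agrees with numpy's built-in pinv on a rank-deficient matrix.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])            # rank 1
print(np.allclose(pinv_via_svd(A), np.linalg.pinv(A)))   # True
```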
|
5 |
Convolution and Autoencoders Applied to Nonlinear Differential Equations. Borquaye, Noah, 01 December 2023.
Autoencoders, a type of artificial neural network, have gained recognition among researchers in various fields, especially machine learning, for their wide applicability in learning data representations from inputs. Recently, researchers have explored extending autoencoders to solving nonlinear differential equations. Algorithms and methods employed in an autoencoder framework include sparse identification of nonlinear dynamics (SINDy), dynamic mode decomposition (DMD), Koopman operator theory, and singular value decomposition (SVD). These approaches use matrix multiplication to represent linear transformations, whereas machine learning algorithms often use convolution to represent them. In our work, we modify these approaches to system identification and forecasting of solutions of nonlinear differential equations by replacing matrix multiplication with a convolution transformation. In particular, we develop a convolution-based approach to dynamic mode decomposition and discuss its application to problems not otherwise solvable.
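The interchangeability this modification relies on, that a (circular) convolution is exactly a multiplication by the corresponding circulant matrix, can be checked with a short sketch; the signal and kernel below are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import circulant

# Zero-pad a short kernel to the signal length; both are arbitrary placeholders.
x = np.arange(8, dtype=float)
k = np.zeros_like(x)
k[:3] = [1.0, -2.0, 1.0]

# Circular convolution computed in the Fourier domain...
y_conv = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

# ...equals multiplication by the circulant matrix built from the kernel.
C = circulant(k)
y_matmul = C @ x

print(np.allclose(y_conv, y_matmul))   # True
```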
|
6 |
Apprentissage statistique avec le processus ponctuel déterminantal. Vicente, Sergio, 02 1900.
This thesis addresses the determinantal point process, a probabilistic model that captures repulsion between the points of a given space. This repulsion is determined by a similarity matrix, the kernel matrix of the process, which specifies which points are most similar and therefore less likely to appear in the same subset. Unlike uniform random selection, this point process favours subsets that contain diverse and heterogeneous points. The notion of diversity is gaining importance in fields such as medicine, sociology, forensic science and behavioural science. The determinantal point process thus offers an alternative to traditional sampling methods by taking into account the diversity of the selected elements. It is already widely used in machine learning as a subset selection model. Its application in statistics is illustrated by three papers. The first paper addresses clustering performed by an algorithm run a large number of times on the same data, known as consensus clustering. We show that using the determinantal point process to select the initial points of the algorithm yields a final data partition of higher quality than the one obtained when points are selected uniformly. The second paper extends the methodology of the first paper to data with a large number of observations. This case imposes an additional computational burden, since point selection by the determinantal point process relies on the spectral decomposition of the similarity matrix, which is in this case very large. Two different approaches to this problem are presented, and the results obtained with both are shown to be better than those obtained with clustering based on a uniform selection of points. The third paper presents the problem of variable selection in linear and logistic regression with a large number of covariates, using a Bayesian approach. Variable selection is carried out with Markov chain Monte Carlo methods, using the Metropolis-Hastings algorithm. We show that choosing the determinantal point process as the prior distribution over the model space yields a better final subset of variables than a uniform prior. / This thesis presents the determinantal point process, a probabilistic model that captures repulsion between points of a certain space. This repulsion is encoded by a similarity matrix, the kernel matrix, which determines which points are more similar and therefore less likely to appear in the same subset. This point process gives more weight to subsets characterized by a larger diversity of their elements, which is not the case with traditional uniform random sampling. Diversity has become a key concept in domains such as medicine, sociology, forensic sciences and behavioral sciences. The determinantal point process is considered a promising alternative to traditional sampling methods, since it takes into account the diversity of the selected elements. It is already actively used in machine learning as a subset selection method. Its application in statistics is illustrated with three papers. The first paper presents consensus clustering, which consists in running a clustering algorithm on the same data a large number of times. To sample the initial points of the algorithm, we propose the determinantal point process as a sampling method instead of uniform random sampling and show that the former produces better clustering results. The second paper extends the methodology developed in the first paper to large datasets. Such datasets impose a computational burden, since sampling with the determinantal point process is based on the spectral decomposition of a large kernel matrix. We introduce two methods to deal with this issue; both produce better clustering results than consensus clustering based on a uniform sampling of initial points. The third paper addresses the problem of variable selection for the linear model and logistic regression when the number of predictors is large. A Bayesian approach is adopted, using Markov chain Monte Carlo methods with the Metropolis-Hastings algorithm. We show that setting the determinantal point process as the prior distribution over the model space selects a better final model than a uniform prior on the model space.
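As a hedged illustration of why the spectral decomposition dominates the cost of determinantal point process sampling, here is a minimal sketch of the standard spectral sampling algorithm for an L-ensemble DPP; the kernel construction and sizes are placeholders, not those used in the papers.

```python
import numpy as np

def sample_dpp(L, rng):
    """Sample a subset from an L-ensemble DPP with symmetric PSD kernel L.
    Standard spectral algorithm: eigendecompose once, then draw items one by
    one while projecting the retained eigenvectors away from chosen coordinates."""
    eigvals, eigvecs = np.linalg.eigh(L)
    # Phase 1: keep eigenvector n with probability lambda_n / (lambda_n + 1).
    keep = rng.random(len(eigvals)) < eigvals / (eigvals + 1.0)
    V = eigvecs[:, keep]
    selected = []
    while V.shape[1] > 0:
        # Phase 2: pick item i with probability given by the squared row norms of V.
        probs = np.sum(V ** 2, axis=1) / V.shape[1]
        i = rng.choice(len(probs), p=probs)
        selected.append(int(i))
        # Eliminate the i-th coordinate from the span of the remaining columns.
        j = np.argmax(np.abs(V[i, :]))
        Vj = V[:, j].copy()
        V = np.delete(V, j, axis=1)
        V -= np.outer(Vj / Vj[i], V[i, :])
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)      # re-orthonormalize the remaining columns
    return sorted(selected)

# Usage: an RBF similarity kernel over random 2-D points; the DPP favours
# spread-out (diverse) subsets over clustered ones.
rng = np.random.default_rng(0)
pts = rng.uniform(size=(50, 2))
D2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
L = np.exp(-D2 / 0.05)
print(sample_dpp(L, rng))
```

The eigendecomposition in the first line of the sampler is the step that scales poorly with the number of observations, which is the bottleneck the second paper's approximations are designed to avoid.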
|