1 |
Parametric Estimation of Harmonically Related Sinusoids. Dixit, Richa, 16 December 2013
Mud-pulse telemetry is a method used for measurement-while-drilling (MWD) in the oil industry. The telemetry signals are corrupted by spurious mud-pump noise consisting of a large number of harmonically related sinusoids. In order to denoise the signal, the noise parameters have to be tracked accurately in real time. There are well-established parametric estimation techniques for determining the parameters of independent sinusoids, and iterative methods based on the linear prediction properties of sinusoids provide a computationally efficient way of solving the nonlinear optimization problem that such estimation presents. However, owing to the large number of these sinusoids, incorporating the harmonic relationship into the problem becomes important.
This thesis is aimed at solving the problem of estimating the parameters of harmonically related sinusoids. We examine the efficacy of the IQML (iterative quadratic maximum likelihood) algorithm in estimating the parameters of the telemetry signal for varying SNRs and data lengths. The IQML algorithm proves quite robust and successfully tracks both stationary and slowly varying frequency signals. We then propose an algorithm for fundamental frequency estimation that relies on an initial harmonic frequency estimate. Results of tests performed on synthetic data that imitates real field data are presented. Analysis of the simulation results shows that the proposed method removes the noise-causing sinusoids from the telemetry signal to a great extent. The low computational complexity of the algorithm also makes for an easy implementation in the field, where computational power is limited.
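IQML itself iterates between estimating a linear-prediction polynomial and reweighting a quadratic cost, and is not reproduced here. As a minimal sketch of the harmonic structure the thesis exploits, the Python snippet below (with illustrative signal and grid settings, not the thesis's field data) estimates the fundamental frequency by grid search: for a fixed candidate f0, the harmonic amplitudes form a linear least-squares problem, so only f0 itself needs to be searched.

```python
import numpy as np

def estimate_fundamental(x, fs, f0_grid, n_harmonics):
    """Grid search for the fundamental frequency of harmonically related
    sinusoids: for a fixed candidate f0, the harmonic amplitudes are a
    linear least-squares problem, so only f0 itself is searched."""
    t = np.arange(len(x)) / fs
    best_f0, best_err = f0_grid[0], np.inf
    for f0 in f0_grid:
        # Design matrix with one cos/sin column pair per harmonic k*f0.
        cols = []
        for k in range(1, n_harmonics + 1):
            cols.append(np.cos(2 * np.pi * k * f0 * t))
            cols.append(np.sin(2 * np.pi * k * f0 * t))
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        err = np.sum((x - A @ coef) ** 2)  # residual after removing harmonics
        if err < best_err:
            best_f0, best_err = f0, err
    return best_f0

# Synthetic mud-pump-like noise: three harmonics of 1.5 Hz plus white noise.
fs = 100.0
t = np.arange(0, 4, 1 / fs)
x = sum(np.cos(2 * np.pi * k * 1.5 * t) / k for k in (1, 2, 3))
x = x + 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(estimate_fundamental(x, fs, np.linspace(1.0, 2.0, 201), 3))
```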
|
2 |
Evaluation and improvement of tree stump volume prediction models in the eastern United States. Barker, Ethan Jefferson, 06 June 2017
Forests are considered among the best carbon stocks on the planet. After forest harvest, the residual tree stumps persist on the site for years, continuing to store carbon. Moreover, the component ratio method requires stump volume estimates in order to obtain total tree aboveground biomass, so stump volumes contribute to the National Carbon Inventory. Agencies and organizations concerned with carbon accounting would benefit from an improved method for predicting tree stump volume. In this work, many model forms are evaluated for their accuracy in predicting stump volume. Both stump profile and stump volume were predicted, for both outside- and inside-bark measurements. Fitting previously used models to a larger data set allows for improved regression coefficients and potentially more flexible and accurate models. The data set was compiled from a large selection of legacy data as well as some newly collected field measurements. Analysis was conducted for thirty of the most numerous tree species in the eastern United States, providing an improved method for inside- and outside-bark stump volume estimation. / Master of Science / Forests are considered among the best carbon stocks on the planet, and estimates of total tree aboveground biomass are needed to maintain the National Carbon Inventory. Tree stump volumes contribute to total tree aboveground biomass estimates. Agencies and organizations that are concerned with carbon accounting would benefit from an improved method for predicting tree stump volume. In this work, existing mathematical equations used to estimate tree stump volume are evaluated. A larger and more inclusive data set was utilized to improve the current equations, and to gather more insight into which equations are best for different tree species and different areas of the eastern United States.
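The specific model forms evaluated in the thesis are not listed in this abstract. As a hedged illustration of the fitting step, the sketch below fits one classic combined-variable volume model, V = b0 + b1·D²·h, to hypothetical stump measurements with ordinary least squares; both the data values and the model form are illustrative assumptions, not results from the thesis.

```python
import numpy as np

# Hypothetical sample: stump diameter D (cm), stump height h (m), and
# measured outside-bark volume V (m^3). Illustrative stand-ins only.
D = np.array([25.0, 31.0, 38.0, 44.0, 52.0, 61.0])
h = np.array([0.15, 0.30, 0.15, 0.30, 0.15, 0.30])
V = np.array([0.008, 0.025, 0.019, 0.051, 0.036, 0.098])

# Combined-variable model V = b0 + b1 * D^2 * h is linear in (b0, b1),
# so ordinary least squares applies directly.
X = np.column_stack([np.ones_like(D), D**2 * h])
b, residuals, rank, _ = np.linalg.lstsq(X, V, rcond=None)
print(f"b0 = {b[0]:.5f}, b1 = {b[1]:.7f}")
print("predicted volumes:", X @ b)
```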
|
3 |
Inverse Analysis of Transient Heat Source from Arc Erosion. Li, Yung-Yuan, 02 July 2001
An inverse method is developed to analyze the transient heat source from arc erosion. The temperature at the contour of arc erosion is assumed to be the melting point, and the temperature at grid points at the final time is calculated by interpolation, which introduces measurement errors. The unknown parameters of the transient heat source can then be solved for by the linear least-squares method. These parameters are the plasma radius at the anode surface (which grows with time), the arc power, and the plasma flushing efficiency at the anode. Because the temperatures at the measurement points include measurement errors, the exact solution can be found only when fewer unknowns are considered. The inverse method is sensitive to measurement errors.
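As a sketch of the inverse step described above, assume (purely for illustration) that the grid-point temperatures are linear in the unknown source parameters, T = A·θ. The parameters then follow from a linear least-squares solve, and re-running the solve at increasing noise levels demonstrates the sensitivity to measurement errors the abstract notes. The matrix A and the parameter values are hypothetical, not the thesis's actual sensitivity model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: temperatures at m grid points assumed linear in the
# p unknown source parameters, T = A @ theta.
m, p = 50, 3
A = rng.uniform(0.5, 2.0, size=(m, p))      # illustrative sensitivity matrix
theta_true = np.array([1.2, 0.8, 0.35])     # radius growth, power, efficiency
T_exact = A @ theta_true

for noise in (0.0, 0.01, 0.05):
    T_meas = T_exact + noise * rng.standard_normal(m)  # measurement error
    theta_hat, *_ = np.linalg.lstsq(A, T_meas, rcond=None)
    # The parameter error grows with the noise level, illustrating the
    # method's sensitivity to measurement errors.
    print(noise, theta_hat)
```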
|
4 |
Regularization Techniques for Linear Least-Squares Problems. Suliman, Mohamed Abdalla Elhag, 04 1900
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed, allowing, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS).

In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems: the constrained perturbation regularization algorithm (COPRA) for random matrices, and COPRA for linear discrete ill-posed problems. In both, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to improve the singular-value structure of the matrix, so that the modified model provides a more stable solution when used to estimate the original signal by minimizing the worst-case residual error function.

Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first COPRA method is developed to estimate the regularization parameter when the measurement matrix is complex Gaussian with centered, unit-variance (standard), independent and identically distributed (i.i.d.) entries. The second COPRA method deals with discrete ill-posed problems in which the singular values of the linear transformation matrix decay rapidly to significantly small values. For both algorithms, the regularization parameter is obtained as the solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these equations and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first COPRA method is applied to estimate different signals with various characteristics, while the second is applied to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases. In addition, the algorithms are shown to have the lowest run time.
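COPRA's selection rule for the regularization parameter is derived in the thesis and is not reproduced here. The sketch below shows only the shared building block: the regularized least-squares solution itself, computed from the SVD for a given parameter λ. The test problem is synthetic and purely illustrative.

```python
import numpy as np

def rls_solve(A, b, lam):
    """Tikhonov-regularized least squares via the SVD:
    x(lam) = sum_i s_i / (s_i^2 + lam) * (u_i^T b) * v_i,
    i.e. the minimizer of ||A x - b||^2 + lam * ||x||^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + lam)   # regularized inverse of the singular values
    return Vt.T @ (filt * (U.T @ b))

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.1 * rng.standard_normal(40)

# lam = 0 recovers the plain least-squares solution; larger lam trades
# data fit for stability, which pays off when A is ill-conditioned.
for lam in (0.0, 0.1, 1.0):
    x = rls_solve(A, b, lam)
    print(lam, np.linalg.norm(x - x_true))
```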
|
5 |
Non-invasive estimation of skin chromophores using Hyperspectral Imaging. Karambor Chakravarty, Sriya, 21 August 2023
Melanomas account for more than 1.7% of global cancer diagnoses and about 1% of all skin cancer diagnoses in the United States. This type of cancer occurs in the melanin-producing cells in the epidermis and exhibits distinctive variations in melanin and blood concentration values in the form of skin lesions. The current approach for evaluating skin cancer lesions involves visual inspection with a dermatoscope, typically followed by biopsy and histopathological analysis. However, to decrease the risk of misdiagnosis, this process results in unnecessary biopsies, contributing to the emotional and financial distress of patients. The implementation of a non-invasive imaging technique to aid the analysis of skin lesions in the early stages can potentially mitigate these consequences.
Hyperspectral imaging (HSI) has shown promise as a non-invasive technique for analyzing skin lesions. Images taken of human skin with a hyperspectral camera are the result of numerous elements in the skin. Being a turbid, inhomogeneous material, the skin contains chromophores and scattering agents, which interact with light and produce characteristic back-scattered energy that can be captured and examined with an HSI camera. In this study, a mathematical model of the skin is used to extract meaningful information from the hyperspectral data in the form of melanin concentration, blood volume fraction, and blood oxygen saturation in the skin. The human skin is modelled as a bi-layer planar system whose surface reflectance is theoretically calculated using the Kubelka-Munk theory and the absorption laws of Beer and Lambert. Hyperspectral images of the dorsal portion of three volunteer subjects' hands, in the 400-1000 nm range, were used to estimate the contributing parameters. The mean and standard deviation of these estimates are reported and compared with theoretical values from the literature. The model is also evaluated for its sensitivity with respect to these parameters, and then fitted to measured hyperspectral data of three volunteer subjects in different conditions. The wavelengths and wavelength groups identified as producing the maximum change in the percentage reflectance calculated from the model were 450 and 660 nm for melanin, 500-520 nm and 590-625 nm for blood volume fraction, and 606, 646 and 750 nm for blood oxygen saturation. / Master of Science / Melanoma, the most serious type of skin cancer, develops in the melanin-producing cells in the epidermis. A characteristic marker of skin lesions is the abrupt variation in melanin and blood concentration in areas of the lesion. The present technique for inspecting skin cancer lesions is dermatoscopy, a qualitative visual analysis of the lesion's features using a few standardized techniques such as the 7-point checklist and the ABCDE rule. Typically, dermatoscopy is followed by a biopsy and then a histopathological analysis of the biopsy. To reduce the possibility of misdiagnosing actual melanomas, a considerable number of dermoscopically unclear lesions are biopsied, increasing emotional, financial, and medical consequences. A non-invasive imaging technique to analyze skin lesions during the dermoscopic stage can help alleviate some of these consequences.
Hyperspectral imaging (HSI) is a promising methodology for non-invasively analyzing skin lesions. Images taken of human skin with a hyperspectral camera are the result of numerous elements in the skin. Being a turbid, inhomogeneous material, the skin contains chromophores and scattering agents, which interact with light and produce characteristic back-scattered energy that can be captured and analyzed with an HSI camera. In this study, a mathematical model of the skin is used to extract meaningful information from the hyperspectral data in the form of melanin concentration, blood volume fraction, and blood oxygen saturation. The mean and standard deviation of these estimates are reported and compared with theoretical values from the literature. The model is also evaluated for its sensitivity with respect to these parameters, and then fitted to measured hyperspectral data of six volunteer subjects in different conditions. The wavelengths that capture the most influential changes in the model response are identified as 450 and 660 nm for melanin, 500-520 nm and 590-625 nm for blood volume fraction, and 606, 646 and 750 nm for blood oxygen saturation.
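The thesis's bi-layer model and measured spectra are not reproduced here. As a simplified single-layer sketch, the snippet below combines the Kubelka-Munk reflectance formula for a semi-infinite layer with Beer-Lambert-style chromophore absorption, then recovers chromophore fractions from a noisy spectrum by nonlinear least squares. The absorption spectra are toy stand-ins, not published extinction coefficients.

```python
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(450, 750, 61)  # wavelengths, nm

# Illustrative chromophore absorption spectra (arbitrary units).
# Toy stand-ins, NOT real extinction coefficients.
mu_mel = 1e10 * wl**-3.3                                          # melanin: smooth power-law decay
mu_blood = 1.0 + 3.0 * np.exp(-0.5 * ((wl - 560.0) / 25.0) ** 2)  # crude hemoglobin band
S = 0.5 * np.ones_like(wl)                                        # constant scattering coefficient

def reflectance(params):
    f_mel, f_blood = params
    K = f_mel * mu_mel + f_blood * mu_blood      # total absorption
    r = K / S
    # Kubelka-Munk reflectance of a semi-infinite layer.
    return 1.0 + r - np.sqrt(r**2 + 2.0 * r)

# Synthesize a "measured" spectrum and recover the chromophore fractions.
true = (0.02, 0.05)
meas = reflectance(true) + 0.005 * np.random.default_rng(2).standard_normal(wl.size)
fit = least_squares(lambda p: reflectance(p) - meas, x0=(0.01, 0.01),
                    bounds=([0, 0], [1, 1]))
print(fit.x)  # should be close to (0.02, 0.05)
```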
|
6 |
Non-invasive Estimation of Skin Chromophores Using Hyperspectral Imaging. Karambor Chakravarty, Sriya, 07 March 2024
Melanomas account for more than 1.7% of global cancer diagnoses and about 1% of all skin cancer diagnoses in the United States. This type of cancer occurs in the melanin-producing cells in the epidermis and exhibits distinctive variations in melanin and blood concentration values in the form of skin lesions. The current approach for evaluating skin cancer lesions involves visual inspection with a dermatoscope, typically followed by biopsy and histopathological analysis. However, decreasing the risk of misdiagnosis in this process requires invasive biopsies, contributing to the emotional and financial distress of patients. The implementation of a non-invasive imaging technique to aid the analysis of skin lesions in the early stages can potentially mitigate these consequences.
Hyperspectral imaging (HSI) has shown promise as a non-invasive technique for analyzing skin lesions. Images taken of human skin with a hyperspectral camera are the result of numerous elements in the skin. Being a turbid, inhomogeneous material, the skin contains chromophores and scattering agents, which interact with light and produce characteristic back-scattered energy that can be captured and examined with an HSI camera. In this study, a mathematical model of the skin is used to extract meaningful information from the hyperspectral data in the form of parameters such as melanin concentration, blood volume fraction, and blood oxygen saturation in the skin. The human skin is modelled as a bi-layer planar system whose surface reflectance is theoretically calculated using the Kubelka-Munk theory and the absorption laws of Beer and Lambert. The model is evaluated for its sensitivity to the parameters and then fitted to measured hyperspectral data of four volunteer subjects in different conditions. Mean values of melanin, blood volume fraction, and oxygen saturation obtained for each of the subjects are reported and compared with theoretical values from the literature. Sensitivity analysis revealed that the wavelengths and wavelength groups resulting in the maximum change in the percentage reflectance calculated from the model were 450 and 660 nm for melanin, 500-520 nm and 590-625 nm for blood volume fraction, and 606, 646 and 750 nm for blood oxygen saturation. / Master of Science / Melanoma, the most serious type of skin cancer, develops in the melanin-producing cells in the epidermis. A characteristic marker of skin lesions is the abrupt variation in melanin and blood concentration in areas of the lesion. The present technique for inspecting skin cancer lesions is dermatoscopy, a qualitative visual analysis of the lesion's features using a few standardized techniques such as the 7-point checklist and the ABCDE rule. Typically, dermatoscopy is followed by a biopsy and then a histopathological analysis of the biopsy. To reduce the possibility of misdiagnosing actual melanomas, a considerable number of dermoscopically unclear lesions are biopsied, increasing emotional, financial, and medical consequences. A non-invasive imaging technique to analyze skin lesions during the dermoscopic stage can help alleviate some of these consequences. Hyperspectral imaging (HSI) is a promising methodology for non-invasively analyzing skin lesions. In this study, a mathematical model of the skin is used to extract meaningful information from the hyperspectral data in the form of melanin concentration, blood volume fraction, and blood oxygen saturation. The mean and standard deviation of these estimates are reported and compared with theoretical values from the literature. The model is also evaluated for its sensitivity with respect to these parameters to identify the most relevant wavelengths.
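The sensitivity analysis described above can be sketched with finite differences: perturb one model parameter at a time and record where along the wavelength axis the reflectance changes most. The model and spectra below are the same illustrative stand-ins used in the previous sketch, not the thesis's actual model.

```python
import numpy as np

wl = np.linspace(450, 750, 61)
# Toy absorption spectra (illustrative stand-ins only).
mu = {"melanin": 1e10 * wl**-3.3,
      "blood":   1.0 + 3.0 * np.exp(-0.5 * ((wl - 560.0) / 25.0) ** 2)}
S = 0.5

def reflectance(f_mel, f_blood):
    # Kubelka-Munk reflectance of a semi-infinite layer.
    r = (f_mel * mu["melanin"] + f_blood * mu["blood"]) / S
    return 1.0 + r - np.sqrt(r**2 + 2.0 * r)

# Finite-difference sensitivity of reflectance to each parameter,
# evaluated at a nominal operating point.
f_mel0, f_blood0 = 0.02, 0.05
eps = 1e-4
base = reflectance(f_mel0, f_blood0)
dR_dmel = (reflectance(f_mel0 + eps, f_blood0) - base) / eps
dR_dblood = (reflectance(f_mel0, f_blood0 + eps) - base) / eps

# The wavelengths of largest |dR/dp| are the most informative ones.
print("melanin-sensitive wavelength:", wl[np.argmax(np.abs(dR_dmel))])
print("blood-sensitive wavelength:  ", wl[np.argmax(np.abs(dR_dblood))])
```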
|
7 |
Performance Comparison of Localization Algorithms for UWB Measurements with Closely Spaced Anchors. Nilsson, Max, January 2018
Tracking objects or people in an indoor environment has a wide variety of uses in many different areas, similarly to positioning systems outdoors. Indoor positioning systems, however, operate in a very different environment, having to deal with obstructions while also maintaining high accuracy. A common solution for indoor positioning is to have three or more stationary anchor antennas spread out around the perimeter of the area to be monitored. The position of a tag antenna moving in range of the anchors can then be found using trilateration. One downside of such a setup is that the anchors must be set up in advance, so rapid deployment of such a system to new areas may be impractical. This thesis investigates the possibility of a different setup, where three anchors are placed close together, so as to fit in a small hand-held device. This would allow the system to be used without any prior setup of anchors, making rapid deployment into new areas more feasible.

The measurements the antennas provide for trilateration always contain noise, so algorithms have been developed to approximate the position of a tag in the presence of noise. These algorithms were developed with the setup of three spaced-out anchors in mind, and may not be sufficiently accurate when the anchors are spaced very closely together. To investigate the feasibility of such a setup, this thesis tested four algorithms with the proposed setup to see its impact on their performance: the Weighted Block Newton, Weighted Clipped Block Newton, Linear Least Squares, and Non-Linear Least Squares algorithms. The Linear Least Squares algorithm was also run on measurements that were first passed through a simple Kalman filter. Previous studies have used these algorithms to find an estimated position of the tag and compared their efficiency using the positional error of the estimate. This thesis also uses the positional estimates to determine the angular position of the estimate in relation to the anchors, and uses that to compare the algorithms.

Measurements were made using DWM1001 ultra-wideband (UWB) antennas, and four cases were tested. In case 1 the anchors and tag were 10 meters apart in line of sight; case 2 was the same as case 1 but with a person standing between the tag and the anchors. In case 3 the tag was moved behind a wall with an adjacent open door, and in case 4 the tag was in the same place as in case 3 but the door was closed. The Linear Least Squares algorithm using the filtered measurements was found to be the most effective in all cases, with a maximum angular error of less than 5° in the worst case. The worst case here was case 2, showing that the influence of a human body has a strong effect on the UWB signal, causing large errors in the estimates of the other algorithms. The presence of a wall between the anchors and tag was found to have a minimal impact on the angular error, while having a larger effect on the spatial error. Further studies regarding the effects of the human body on UWB signals may be necessary to determine the feasibility of handheld applications, as well as the effect of the tag and/or the anchors moving on the efficiency of the algorithms.
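Of the tested algorithms, Linear Least Squares has the most compact form, and a minimal sketch (with hypothetical anchor geometry and noise level, not the thesis's measurement data) is shown below. Subtracting the first range equation from the others removes the quadratic term in the unknown position, leaving a linear system; the estimate can then be converted to the angular position used for comparison in the thesis.

```python
import numpy as np

def trilaterate_ls(anchors, ranges):
    """Linear least-squares trilateration: subtracting the first range
    equation ||x - a_i||^2 = r_i^2 from the others cancels ||x||^2 and
    leaves the linear system 2 (a_i - a_0)^T x = r_0^2 - r_i^2 + ||a_i||^2 - ||a_0||^2."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three closely spaced anchors (a ~20 cm triangle) and a tag ~10 m away.
anchors = np.array([[0.0, 0.0], [0.2, 0.0], [0.1, 0.17]])
tag = np.array([7.0, 7.0])
rng = np.random.default_rng(3)
# Even small ranging noise is strongly amplified by the short baseline.
ranges = np.linalg.norm(anchors - tag, axis=1) + 0.01 * rng.standard_normal(3)

est = trilaterate_ls(anchors, ranges)
ang_err = np.degrees(np.abs(np.arctan2(est[1], est[0]) - np.arctan2(tag[1], tag[0])))
print("estimate:", est, "angular error (deg):", ang_err)
```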
|
8 |
Training of Template-Specific Weighted Energy Function for Sequence-to-Structure Alignment. Lee, En-Shiun Annie, January 2008
Threading is a protein structure prediction method that uses a library of template protein structures in two steps: first, the target sequence is matched to the template library and the best template structure is selected; second, the predicted structure of the target sequence is modeled on this selected template structure. The decelerating rate at which new folds are added to the Protein Data Bank suggests that the template structure library is approaching completeness. This thesis uses a new set of template-specific weights to improve the energy function for sequence-to-structure alignment in the template selection step of the threading process. The weights are estimated using least-squares methods, with the quality of the modelling step in the threading process as the label. These new weights show an average 12.74% improvement in estimating the label. Further family analysis shows a correlation between the performance of the new weights and the number of seeds in Pfam.
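The thesis's energy terms and training labels are not given in this abstract; the sketch below illustrates the stated approach in its simplest form, assuming hypothetical per-alignment energy terms: the template-specific weights are the least-squares fit of model-quality labels against those terms.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training set for one template: each row holds the energy
# terms of one sequence-to-structure alignment (e.g. pairwise, solvation,
# gap penalties), and y is the measured quality of the resulting model
# (the label). Values are illustrative, not data from the thesis.
n_alignments, n_terms = 200, 4
E = rng.standard_normal((n_alignments, n_terms))
w_true = np.array([0.6, 1.4, 0.2, 0.9])
y = E @ w_true + 0.1 * rng.standard_normal(n_alignments)

# Template-specific weights: least-squares fit of label vs. energy terms.
w_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
print("estimated weights:", w_hat)

# Scoring a new alignment with the learned weighted energy function.
e_new = rng.standard_normal(n_terms)
print("weighted energy:", e_new @ w_hat)
```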
|
9 |
Use of a Genetic Algorithm for Linear Fitting of Experimental Data. Siqueira Júnior, Erinaldo Leite, 15 May 2015
In this paper we discuss the problem of linear fitting to experimental data using a bio-inspired optimization method, i.e., one that imitates biological concepts in the attempt to find optimal or suboptimal results. The method used is the genetic algorithm (GA); the GA makes use of the theory of Darwinian evolution to find the best route to the desired maximum point. Traditionally, linear fitting is done through the method of least squares. That method is efficient, but it is difficult to justify to pre-calculus classes. The GA alternative is therefore a computationally exhaustive procedure, but one that is easy to justify to these classes. Thus, the purpose of this study is to compare the results of linear fitting for some control scenarios using the two methods, and to certify the quality of the fits obtained by the approximate method. At the end of the work it was found that the results are solid enough to justify the alternative method, and that the proposed use of this optimization process has the potential to spark interest in other areas of mathematics.
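As a hedged illustration of the comparison the thesis performs, the sketch below fits a line to synthetic data twice: once with a minimal genetic algorithm (population, selection, crossover, mutation; all hyperparameters illustrative) and once with the closed-form least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic experimental data around y = 2x + 1.
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + 0.5 * rng.standard_normal(x.size)

def sse(pop):
    """Sum of squared errors of each candidate line (a, b) in the population."""
    pred = pop[:, 0:1] * x + pop[:, 1:2]
    return np.sum((pred - y) ** 2, axis=1)

# Minimal GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(-10, 10, size=(100, 2))
for _ in range(200):
    fit = sse(pop)
    # Tournament selection: keep the better of two random candidates.
    i, j = rng.integers(0, 100, (2, 100))
    parents = pop[np.where(fit[i] < fit[j], i, j)]
    # Blend crossover between shifted pairs of parents, then mutation.
    alpha = rng.random((100, 1))
    pop = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    pop += 0.1 * rng.standard_normal(pop.shape)

a_ga, b_ga = pop[np.argmin(sse(pop))]
a_ls, b_ls = np.polyfit(x, y, 1)   # closed-form least-squares line for comparison
print(f"GA:  a = {a_ga:.3f}, b = {b_ga:.3f}")
print(f"LS:  a = {a_ls:.3f}, b = {b_ls:.3f}")
```

Both fits should agree to within the mutation scale, which is the point the thesis uses to certify the quality of the approximate method.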
|