41 |
Target Tracking and Data Fusion with Cooperative IMM-based Algorithm (Hsieh, Yu-Chen, 26 August 2011)
In solving target tracking problems, the Kalman filter (KF) is a systematic estimation algorithm. Whether the state of a moving target adapts to changes in the observations depends on the model assumptions. The interacting multiple model (IMM) algorithm uses the interaction of a bank of parallel KFs, updating the associated model probabilities. Every parallel KF has its model probability adjusted by the dynamic system. For moving targets with different dynamic linear models, an IMM with two KFs generally performs well. In this thesis, in order to improve the performance of target tracking and state estimation, a multi-sensor data fusion technique is used. IMMs of the same type can be incorporated in the cooperative IMM-based algorithm. The IMM-based estimators exchange with each other their estimates, model probabilities and model transition probabilities. A distributed algorithm for multi-sensor tracking usually needs a fusion center that integrates decisions or estimates, but the proposed cooperative IMM-based algorithm does not use this architecture. The cooperative IMM estimator structures exchange weights and estimates across the platforms to avoid accumulation of errors. The performance of data fusion may degrade due to various undesirable environmental effects. The simulations show that an IMM estimator with a smaller measurement noise level can be used to compensate for the other IMM, which is affected by larger measurement noise. In addition, failure of a sensor will mean that the model probabilities cannot be updated in the corresponding estimator, and its Kalman filters will not be able to perform state correction for the moving target. To tackle this problem, we can use the estimates from the other IMM estimators by adjusting the corresponding weights and model probabilities. The simulations show that the proposed cooperative IMM structure effectively improves the tracking performance.
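As a pointer for the reader, the model-probability update that drives an IMM cycle can be sketched compactly. The fragment below shows only that update (the per-model Kalman filtering and estimate mixing are omitted), and all numbers are illustrative rather than taken from the thesis.

```python
import numpy as np

def imm_update(mu, Pi, likelihoods):
    """One IMM model-probability update.

    mu          -- prior model probabilities, shape (r,)
    Pi          -- model transition probability matrix, shape (r, r)
    likelihoods -- measurement likelihood from each model's KF, shape (r,)
    """
    c = Pi.T @ mu                      # predicted model probabilities
    mu_new = likelihoods * c           # Bayes update with each KF's likelihood
    return mu_new / mu_new.sum()       # normalize

# Example: two models (nearly constant velocity vs. maneuvering)
mu = np.array([0.9, 0.1])
Pi = np.array([[0.95, 0.05],
               [0.05, 0.95]])
likelihoods = np.array([0.02, 0.30])   # maneuver model explains the data better
print(imm_update(mu, Pi, likelihoods)) # probability mass shifts to model 2
```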
|
42 |
Bayesian classification and survival analysis with curve predictors (Wang, Xiaohui, 15 May 2009)
We propose classification models for binary and multicategory data where the predictor is a random function. The functional predictor could be irregularly and sparsely sampled, or characterized by high dimension and sharp localized changes. In the former case, we employ Bayesian modeling utilizing a flexible spline basis, which is widely used for functional regression. In the latter case, we use Bayesian modeling with wavelet basis functions, which have nice approximation properties over a large class of function spaces and can accommodate the variety of functional forms observed in real-life applications. We develop a unified hierarchical model which accommodates both the adaptive spline- or wavelet-based function estimation model and the logistic classification model. These two models are coupled together to borrow strength from each other in this unified hierarchical framework. The use of Gibbs sampling with conjugate priors for posterior inference makes the method computationally feasible. We compare the performance of the proposed models with naive models as well as existing alternatives by analyzing simulated and real data. We also propose a Bayesian unified hierarchical model based on a proportional hazards model and a generalized linear model for survival analysis with irregular longitudinal covariates. This relatively simple joint model has two advantages. One is that using a spline basis simplifies the parameterization while still capturing a flexible non-linear pattern of the function. The other is that the joint modeling framework allows sharing of information between the regression of the functional predictors and the proportional hazards modeling of the survival data, improving the efficiency of estimation. The novel method can be used not only for a single functional predictor but also for multiple functional predictors. Our methods are applied to analyze real data sets and compared with a parameterized regression method.
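To make the basis-expansion idea concrete, here is a minimal frequentist sketch: each curve is projected onto a spline basis and the coefficients feed a logistic classifier. It illustrates only the general mechanism; the thesis's Bayesian hierarchical model with Gibbs sampling is not reproduced, and all data are simulated.

```python
import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)                      # common sampling grid
n = 200

# Simulated functional predictors: class-1 curves carry a localized bump
y = rng.integers(0, 2, n)
curves = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((n, 50))
curves += y[:, None] * np.exp(-((t - 0.7) ** 2) / 0.01)

# Spline basis evaluated on the grid: one column per basis function
basis = SplineTransformer(n_knots=8, degree=3).fit_transform(t[:, None])

# Project each curve onto the basis by least squares; coefficients are features
coefs, *_ = np.linalg.lstsq(basis, curves.T, rcond=None)
clf = LogisticRegression(max_iter=1000).fit(coefs.T, y)
print("training accuracy:", clf.score(coefs.T, y))
```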
|
43 |
DSP Techniques for Performance Enhancement of Digital Hearing Aid (Udayashankara, V, 12 1900)
Hearing impairment is the number one chronic disability affecting people in the world. Many people have great difficulty in understanding speech in background noise. This is especially true for a large number of elderly people and for the sensorineurally impaired. Several investigations on speech intelligibility have demonstrated that subjects with sensorineural loss may need a 5-15 dB higher signal-to-noise ratio than normal hearing subjects. While most defects in the transmission chain up to the cochlea can nowadays be successfully rehabilitated by means of surgery, the great majority of the remaining inoperable cases are the sensorineural hearing impaired. Recent statistics of hearing impaired patients applying for a hearing aid reveal that 20% of the cases are due to conductive losses, more than 50% are due to sensorineural losses, and the remaining 30% are of mixed origin. Presenting speech to the hearing impaired in an intelligible form remains a major challenge in hearing-aid research today. Even though various methods have been suggested in the literature for removing noise from contaminated speech signals, they fail to give good SNR and intelligibility improvement for moderate-to-severe sensorineural loss subjects. So far, the power and capability of Newton's method, nonlinear adaptive filtering methods and feedback-type artificial neural networks have not been exploited for this purpose. Hence we resort to the application of all these methods for improving SNR and intelligibility for sensorineural loss subjects. Digital hearing aids frequently employ the concept of filter banks. One of the major drawbacks of this technique is its computational complexity, which requires a large number of multiplications and thus increases the power consumption. Therefore, this thesis presents a new approach to speech enhancement for the hearing impaired, as well as the construction of a filter bank for a digital hearing aid with a minimum number of multiplications. The following are covered in this thesis.
One of the most important applications of adaptive systems is noise cancellation using adaptive filters. The ANC setup requires two input signals (viz., primary and reference). The primary input consists of the sum of the desired signal and a noise uncorrelated with it. The reference input consists of a noise which is correlated in some unknown way with the noise of the primary input. The primary signal is obtained by placing an omnidirectional microphone just above one ear on the head of a KEMAR manikin, and the reference signal is obtained by placing a hypercardioid microphone at the center of the vertebral column on the back. Conventional speech enhancement techniques use linear schemes for enhancing speech signals; so far, nonlinear adaptive filtering techniques have not been used in hearing aid applications. The motivation behind the use of a nonlinear model is that it gives better noise suppression than a linear model, because the medium through which signals reach the microphone may be highly nonlinear. The use of linear schemes, though motivated by computational simplicity and mathematical tractability, may therefore be suboptimal. Hence, we propose the use of nonlinear models to enhance speech signals for the hearing impaired. We propose both linear LMS and nonlinear second-order Volterra LMS schemes to enhance speech signals. Studies conducted for different environmental noises, including babble, cafeteria and low-frequency noise, show that the second-order Volterra LMS performs better than the linear LMS algorithm. We use measures such as signal-to-noise ratio (SNR), time plots, and intelligibility tests for performance comparison.
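A minimal sketch of the second-order Volterra LMS update on synthetic data follows; the memory length, step sizes and the nonlinearity in the noise path are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np

def volterra2_lms(x, d, N=8, mu1=0.01, mu2=0.005):
    """Second-order Volterra LMS noise canceller (sketch).

    x  -- reference noise input
    d  -- primary input (speech plus correlated noise)
    N  -- filter memory; mu1/mu2 -- linear/quadratic step sizes
    """
    h1 = np.zeros(N)                       # linear kernel
    h2 = np.zeros((N, N))                  # quadratic kernel
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        u = x[n - N + 1:n + 1][::-1]       # most recent N reference samples
        y = h1 @ u + u @ h2 @ u            # filter output: noise estimate
        e[n] = d[n] - y                    # error = enhanced speech estimate
        h1 += mu1 * e[n] * u               # LMS update, linear part
        h2 += mu2 * e[n] * np.outer(u, u)  # LMS update, quadratic part
    return e

# Toy check: noise reaches the primary microphone through a mild nonlinearity
rng = np.random.default_rng(1)
noise = rng.standard_normal(5000)
speech = np.sin(0.03 * np.arange(5000))
d = speech + 0.5 * noise + 0.2 * noise**2      # primary: speech + nonlinear noise
e = volterra2_lms(noise, d)
print("primary power:", np.mean(d[1000:] ** 2),
      "enhanced power:", np.mean(e[1000:] ** 2))
```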
We also propose an ANC scheme which uses Newton's method to enhance speech signals. The main problem associated with LMS-based ANC is that its convergence is slow, and hence its performance becomes poor in hearing aid applications. The reason for choosing Newton's method is that such high-performance adaptive-filtering methods often converge and track faster than the LMS method. We propose two models to enhance speech signals: one is a conventional linear model and the other is a nonlinear model using a second-order Volterra function. Development of a Newton-type algorithm for the linear model results in the familiar recursive least squares (RLS) algorithm. The performance of both the linear and the nonlinear Newton's algorithm is evaluated for babble, cafeteria and low-frequency noise; SNR, time plots and intelligibility tests are used for performance comparison. The results show that Newton's method using the Volterra nonlinearity performs better than the RLS method.
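For comparison with the LMS sketch above, the linear Newton-type recursion, i.e. the conventional RLS canceller, can be sketched as follows (forgetting factor and initialization are illustrative):

```python
import numpy as np

def rls(x, d, N=8, lam=0.99, delta=100.0):
    """Conventional RLS adaptive noise canceller (linear model sketch)."""
    w = np.zeros(N)
    P = delta * np.eye(N)                 # inverse correlation matrix estimate
    e = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        u = x[n - N + 1:n + 1][::-1]      # most recent N reference samples
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e[n] = d[n] - w @ u               # a priori error
        w += k * e[n]                     # coefficient update
        P = (P - np.outer(k, u @ P)) / lam
    return e
```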
In addition to the ANC-based schemes, we also develop speech enhancement for the hearing impaired using a feedback-type neural network (FBNN). The main attraction is that this yields a parallel algorithm which can be implemented directly in hardware. We translate the speech enhancement problem into a neural network (NN) framework by forming an appropriate energy function. We propose both linear and nonlinear FBNNs for enhancing speech signals. Simulated studies on different environmental noises reveal that the FBNN using the Volterra nonlinearity is superior to the linear FBNN in enhancing speech signals. We use SNR, time plots, and intelligibility tests for performance comparison.
The design of an effective hearing aid for sensorineural hearing impaired people is a challenging problem. For persons with sensorineural losses it is necessary that the frequency response be optimally fitted into their residual auditory area. Digital filters provide hearing-aid performance that is either difficult or impossible to realize using analog techniques. The major problem in a digital hearing aid is reducing power consumption. Multiplication is one of the most power-consuming operations in digital filtering. Hence a serious effort has been made to design a filter bank with a minimum number of multiplications, thereby minimizing the power consumption. This is achieved by using interpolated and complementary FIR filters. The method gives significant savings in the number of arithmetic operations.
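A small numeric sketch of the interpolated-FIR (IFIR) idea, with illustrative tap counts and band edges: a stretched shaping filter plus a short image-suppressing interpolator replace one long direct design, and only the nonzero taps cost multiplications. The complementary-filter part of the design is not shown.

```python
import numpy as np
from scipy import signal

L = 4                                     # stretch factor
fp = 0.05                                 # passband edge (fraction of Nyquist)

g = signal.firwin(31, L * fp)             # shaping filter designed at L*fp
g_up = np.zeros(L * (len(g) - 1) + 1)     # insert L-1 zeros between taps:
g_up[::L] = g                             # periodic response with images
i = signal.firwin(23, 0.25)               # interpolator: cutoff sits between
                                          # the passband (0.05) and the first
                                          # image band (around 0.45)

h_ifir = np.convolve(g_up, i)             # equivalent narrowband lowpass
mults_ifir = len(g) + len(i)              # zero taps in g_up cost nothing
mults_direct = len(h_ifir)                # a direct design of the same length
print(mults_ifir, "multiplies vs", mults_direct)
```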
The thesis concludes by summarizing the results of the analysis and suggesting the scope for further investigation.
|
44 |
Maize and sugar prices: the effects on ethanol production / Majs och sockerpriser: etanolproduktionens följder (Porrez Padilla, Federico, January 2009)
The world is experiencing yet another energy and fuel predicament as oil prices escalate to new heights. Alternative fuels are being promoted globally as increasing gasoline prices trigger inflation. Basic food commodities are among the goods hit by this inflation, and the purpose of this thesis is to analyse whether the higher maize and sugar prices are having any effect on the expanding ethanol production. The thesis focuses on the two major crop inputs in ethanol production: maize (in the US) and sugar cane (in Brazil). Econometric tests using cross-sectional data were carried out to find the elasticities of the variables. The crop prices were tested against ethanol output using a log-linear model in several regressions to find a relationship. In addition, the output levels of the crops were tested using the same method. It was found that maize prices and output affect ethanol production. Sugar cane prices do not have any significant impact on ethanol production, while sugar cane output has a small yet significant relationship with ethanol. Consequently, ethanol's rise in the fuel market could be a result of increased maize input, rather than sugar.
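The mechanics of a log-linear elasticity regression can be sketched as follows; the numbers are hypothetical stand-ins for the thesis's cross-sectional data, so the output is for illustration only.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: maize price (USD/bushel) and US ethanol output (bn gal.)
maize_price = np.array([2.0, 2.1, 2.3, 2.5, 3.0, 3.4, 4.2, 4.0])
ethanol_out = np.array([1.6, 1.8, 2.1, 2.8, 3.4, 3.9, 4.9, 6.5])

# Log-linear (log-log) model: the slope estimates the elasticity of
# ethanol output with respect to maize price.
X = sm.add_constant(np.log(maize_price))
fit = sm.OLS(np.log(ethanol_out), X).fit()
print("elasticity estimate:", fit.params[1])
print("p-value:", fit.pvalues[1])
```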
|
45 |
Multiple Imputation on Missing Values in Time Series Data (Oh, Sohae, January 2015)
Financial stock market data, for various reasons, frequently contain missing values. One reason is that, because markets close for holidays, daily stock prices are not always observed. This creates gaps in information, making it difficult to predict the following day's stock prices. In this situation, information during the holiday can be "borrowed" from other countries' stock markets, since global stock prices tend to show similar movements and are in fact highly correlated. The main goal of this study is to combine stock index data from various markets around the world and develop an algorithm to impute the missing values in an individual stock index using "information sharing" between the different time series. To develop an imputation algorithm that accommodates time-series-specific features, we take a multiple imputation approach using a dynamic linear model for time-series and panel data. The algorithm assumes an ignorable missing-data mechanism, such as missingness due to holidays. The posterior distribution of the parameters, including the missing values, is simulated using Markov chain Monte Carlo (MCMC) methods, and estimates from the sets of draws are then combined using Rubin's combination rule, rendering the final inference for the data set. Specifically, we use the Gibbs sampler and Forward Filtering and Backward Sampling (FFBS) to simulate the joint posterior distribution and the posterior predictive distribution of the latent variables and other parameters. A simulation study is conducted to check the validity and performance of the algorithm using two error-based measurements: Root Mean Square Error (RMSE) and Normalized Root Mean Square Error (NRMSE). We compare the overall trend of the imputed time series with the complete data set, and inspect the in-sample predictability of the algorithm using the Last Value Carried Forward (LVCF) method as a benchmark. The algorithm is applied to real stock price index data from the US, Japan, Hong Kong, the UK and Germany. From both the simulation and the application, we conclude that the imputation algorithm performs well enough to achieve our original goal of predicting the opening price after a holiday, outperforming the benchmark method. We believe this multiple imputation algorithm can be used in many applications that deal with time series with missing values, such as financial, economic and biomedical data.
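Rubin's combination rule mentioned above is simple to state in code. The following sketch pools a single scalar estimate across M completed data sets; the imputed values and variances are made up for illustration.

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine point estimates and within-imputation variances from M
    completed data sets using Rubin's rules."""
    estimates = np.asarray(estimates)
    variances = np.asarray(variances)
    m = len(estimates)
    qbar = estimates.mean()                 # pooled point estimate
    ubar = variances.mean()                 # within-imputation variance
    b = estimates.var(ddof=1)               # between-imputation variance
    t = ubar + (1 + 1 / m) * b              # total variance
    return qbar, t

# e.g. five imputed values for one holiday closing price, with their variances
q, t = rubin_combine([101.2, 100.8, 101.5, 100.9, 101.1],
                     [0.20, 0.22, 0.19, 0.21, 0.20])
print(f"pooled estimate {q:.2f}, total variance {t:.3f}")
```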
|
46 |
Προσεγγίσεις για μοντέλα γραμμικού στοχαστικού προγραμματισμού [Approaches for stochastic linear programming models] (Μπασέτα, Κωνσταντίνα, 30 April 2015)
Many of the problems we face in daily life call for treatment via stochastic linear programming, and the basic toolkit for the computational solution of stochastic linear programming (SLP) problems is the methods of linear and nonlinear programming.
In chapter 1, the basic properties of linear and nonlinear problems and their solution methods are reviewed, as they are used in stochastic linear programming.
In chapter 2, a series of single-stage stochastic linear programming models is presented, and their theoretical properties relevant to computational tractability, such as convexity, are discussed.
In chapter 3, the last chapter, an analogous presentation of multi-stage SLP models follows, emphasizing the properties that allow the construction of particular approximation methods for computing solutions.
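As a concrete illustration of the computational side, here is the simplest multi-stage case, a two-stage recourse problem, solved through its deterministic equivalent with scipy; the costs and demand scenarios are invented for the sketch and are not from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Deterministic equivalent of a tiny two-stage SLP (newsvendor-style):
# order x now at unit cost 1; for each demand scenario d_s (probability
# p_s), cover any shortfall y_s at recourse cost 1.5.
demand = np.array([80.0, 100.0, 120.0])
prob = np.array([0.3, 0.4, 0.3])
S = len(demand)

c = np.concatenate(([1.0], 1.5 * prob))        # E[first stage + recourse]
# Shortfall constraints: -x - y_s <= -d_s  (i.e. x + y_s >= d_s)
A = np.zeros((S, 1 + S))
A[:, 0] = -1.0
A[np.arange(S), 1 + np.arange(S)] = -1.0
b = -demand

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (1 + S))
print("first-stage order:", res.x[0])          # hedges across scenarios
```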
|
47 |
Stochastic claims reserving in non-life insurance: Bootstrap and smoothing models (Björkwall, Susanna, January 2011)
In practice there is a long tradition of actuaries calculating reserve estimates according to deterministic methods without explicit reference to a stochastic model. For instance, the chain-ladder was originally a deterministic reserving method. Moreover, actuaries often make ad hoc adjustments to the methods, for example smoothing of the chain-ladder development factors, in order to fit the data set under analysis. However, stochastic models are needed in order to assess the variability of the claims reserve. The standard statistical approach would be to first specify a model, then find an estimate of the outstanding claims under that model, typically by maximum likelihood, and finally use the model to find the precision of the estimate. As a compromise between this approach and the actuary's way of working without reference to a model, the object of research in this area has often been to first construct a model and a method that reproduce the actuary's estimate, and then use this model to assess the uncertainty of the estimate. A drawback of this approach is that the suggested models have been constructed to give a measure of the precision of the reserve estimate without the possibility of changing the estimate itself. The starting point of this thesis is the inconsistency between the deterministic approaches used in practice and the stochastic ones suggested in the literature. On the one hand, the purpose of Paper I is to develop a bootstrap technique which easily enables the actuary to use development factor methods other than the pure chain-ladder, relying on as few model assumptions as possible. This bootstrap technique is then extended and applied to the separation method in Paper II. On the other hand, the purpose of Paper III is to create a stochastic framework which imitates the ad hoc deterministic smoothing of chain-ladder development factors that is frequently used in practice.
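For readers unfamiliar with the deterministic starting point, a minimal chain-ladder computation looks as follows; the triangle is illustrative, and none of the papers' bootstrap or smoothing machinery is shown.

```python
import numpy as np

# Cumulative claims triangle (rows: accident years, columns: development
# years); NaN marks future, not-yet-observed cells. Figures are illustrative.
C = np.array([[357.8, 1124.8, 1735.3, 2218.3],
              [352.1, 1236.1, 2170.0, np.nan],
              [290.5, 1292.3, np.nan, np.nan],
              [310.6, np.nan,  np.nan, np.nan]])
n = C.shape[1]
latest = np.array([C[i, n - 1 - i] for i in range(n)])   # current diagonal

# Chain-ladder development factors: ratios of column sums over the rows
# where both development years are observed.
f = np.array([C[:n - 1 - j, j + 1].sum() / C[:n - 1 - j, j].sum()
              for j in range(n - 1)])

# Complete the lower triangle by successive multiplication with the factors
for i in range(1, n):
    for j in range(n - i, n):
        C[i, j] = C[i, j - 1] * f[j - 1]

print("development factors:", f.round(3))
print("outstanding claims reserve:", (C[:, -1].sum() - latest.sum()).round(1))
```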
|
48 |
Discrimination of High Risk and Low Risk Populations for the Treatment of STDs (Zhao, Hui, 05 August 2011)
An important step in clinical practice is to discriminate truly diseased patients from healthy persons. It would be valuable to base such discrimination on commonly available information such as personal data, lifestyle, and contact with diseased patients. In this study, a score is calculated for each patient from survey responses through a generalized linear model, and the diseased status is then decided according to previous sexually transmitted disease (STD) records. This study will facilitate clinics in grouping patients as truly diseased or healthy, which in turn will affect the screening method the clinic adopts: complete screening for potentially diseased patients and common screening for potentially healthy persons.
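A sketch of the scoring step under an assumed logistic GLM follows; the survey predictors are simulated, and the thesis's actual covariates and model form may differ.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical survey predictors: age, number of partners, prior contact
# with a diseased patient (0/1); outcome: previous STD record (0/1).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.uniform(18, 60, n),
                     rng.poisson(2, n),
                     rng.integers(0, 2, n)])
logit = -4 + 0.02 * X[:, 0] + 0.5 * X[:, 1] + 1.2 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic GLM: the fitted probability serves as each patient's risk score
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
score = model.predict(sm.add_constant(X))       # predicted risk in [0, 1]
high_risk = score > 0.5                         # complete vs. common screening
print("fraction flagged for complete screening:", high_risk.mean())
```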
|
49 |
Inference for Clustered Mixed Outcomes from a Multivariate Generalized Linear Mixed Model (Chen, Hsiang-Chun, 16 December 2013)
Multivariate generalized linear mixed models (MGLMM) are used for jointly modeling the clustered mixed outcomes obtained when two or more responses are repeatedly measured on each individual in scientific studies. The relationship among these responses is often of interest. In clustered mixed data, correlation could be present between repeated measurements either within the same observer or between different observers on the same subjects. This study proposes a series of indices, namely intra-, inter- and total correlation coefficients, to measure the correlation under various circumstances of observations from a multivariate generalized linear model, especially for joint modeling of clustered count and continuous outcomes.
Bayesian methods are widely used techniques for analyzing MGLMMs. The need for noninformative priors arises when there is insufficient prior information on the model parameters. Another aim of this study is to propose an approximate uniform shrinkage prior for the random effect variance components in the Bayesian analysis of the MGLMM; this prior is an extension of the uniform shrinkage prior. It is easy to apply and is shown to possess several nice properties. The methods are illustrated both in a simulation study and with a case example.
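As a point of reference, the simplest relative of the proposed indices, the intra-class correlation of a linear mixed model, is a one-line computation from the variance components; the numbers below are hypothetical, and the thesis's indices generalize this to mixed count and continuous outcomes.

```python
# Illustrative variance components from a fitted mixed model (hypothetical):
# sigma_b2 = between-subject (random effect) variance, sigma_e2 = residual.
sigma_b2, sigma_e2 = 1.3, 0.9

# Intra-correlation between repeated measurements on the same subject
icc = sigma_b2 / (sigma_b2 + sigma_e2)
print(f"intra-class correlation: {icc:.3f}")
```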
|
50 |
REAL-TIME MODEL PREDICTIVE CONTROL OF QUASI-KEYHOLE PIPE WELDING (Qian, Kun, 01 January 2010)
Quasi-keyhole welding, including plasma keyhole and double-sided welding, is a novel approach proposed for operating the keyhole arc welding process. It can produce a high-quality weld, but it also places higher demands on the operator. A computer control system that detects the keyhole and controls the arc current can improve the performance of the welding process. To this end, developing automatic pipe welding to replace manual welding is an active research topic in the welding field.
The objective of this research is to design an automatic quasi-keyhole pipe welding system that can monitor the keyhole and control its establishment time to track a reference trajectory as the dynamic behavior of the welding process changes. To this end, an automatic plasma welding system is proposed in which an additional electrode is added on the back side of the workpiece to detect the keyhole, as well as to provide the double-sided arc in the double-sided arc welding mode. In this automatic pipe welding system, the arc current is controlled by the computer controller.
Based on the designed automatic plasma pipe welding system, two kinds of model predictive controller, linear and bilinear, are developed, and an optimal algorithm is designed to optimize the keyhole weld process. The proposed approach has been verified using both linear and bilinear model structures in quasi-keyhole plasma welding (QKPW) experiments, in both the normal plasma keyhole and the double-sided arc welding modes.
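A toy receding-horizon sketch conveys the control idea; the first-order process model and all coefficients below are hypothetical stand-ins, far simpler than the thesis's linear and bilinear predictive models.

```python
import numpy as np

# Hypothetical first-order relation between arc current u (A) and keyhole
# establishment time t (s):  t[k+1] = a*t[k] + b*u[k] + c  (illustrative).
a, b, c = 0.5, -0.02, 3.0
t_ref, H, lam = 1.8, 5, 0.01            # reference time, horizon, move penalty

def mpc_current(t_now, u_prev):
    """Grid-search the arc current that minimizes predicted tracking error
    plus a move-suppression term, holding u constant over the horizon."""
    best_u, best_cost = u_prev, np.inf
    for u in np.linspace(50.0, 150.0, 201):   # admissible current range
        t, cost = t_now, lam * (u - u_prev) ** 2
        for _ in range(H):                    # roll the model forward
            t = a * t + b * u + c
            cost += (t - t_ref) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

rng = np.random.default_rng(2)
t, u = 4.0, 80.0                               # initial keyhole time, current
for k in range(8):                             # closed loop: apply, re-measure
    u = mpc_current(t, u)
    t = a * t + b * u + c + 0.05 * rng.standard_normal()
    print(f"cycle {k}: current {u:.1f} A, keyhole time {t:.2f} s")
```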
|