741 |
Predicting the Effect of Hydrophobicity Surface on Binding Affinity of PCP-like Compounds Using Machine Learning Methods. Yoldas, Mine, 01 April 2011.
This study aims to predict the binding affinity of PCP-like compounds by means of molecular hydrophobicity. Molecular hydrophobicity is an important property that affects the binding affinity of molecules. The hydrophobicity values of the molecules are obtained on a three-dimensional coordinate system. Our aim is to reduce the number of points on the hydrophobicity surface of the molecules. This reduction is modeled using self-organizing maps (SOM) and k-means clustering. The feature sets obtained from SOM and k-means clustering are each used to predict the binding affinity of the molecules. Support vector regression and partial least squares regression are used for prediction.
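The surface-point-reduction step described above can be sketched with a plain k-means pass over 3-D points; the clustering itself is standard, but the data, cluster count, and feature construction below are invented for illustration only.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: cluster 3-D surface points, return centroids and labels."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# toy "hydrophobicity surface": 500 random 3-D points reduced to 10 representatives
pts = np.random.default_rng(1).normal(size=(500, 3))
centers, labels = kmeans(pts, k=10)
print(centers.shape)  # (10, 3): a compact feature set for a downstream regressor
```

The reduced centroids (or per-cluster statistics) would then serve as inputs to a regressor such as SVR.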
|
742 |
A Bidirectional LMS Algorithm for Estimation of Fast Time-Varying Channels. Yapici, Yavuz, 01 May 2011.
Effort to estimate unknown time-varying channels as a part of high-speed mobile communication systems is of interest especially for next-generation wireless systems. The high computational complexity of the optimal Wiener estimator usually makes its use impractical in fast time-varying channels. As a powerful candidate, the adaptive least mean squares (LMS) algorithm offers a computationally efficient solution with its simple first-order weight-vector update equation. However, the performance of the LMS algorithm deteriorates in time-varying channels as a result of the eigenvalue disparity, i.e., spread, of the input correlation matrix in such channels. In this work, we incorporate the LMS algorithm into the well-known bidirectional processing idea to produce an extension called the bidirectional LMS. This algorithm is shown to be robust to the adverse effects of time-varying channels such as large eigenvalue spread. The associated tracking performance is observed to be very close to that of the optimal Wiener filter in many cases and the bidirectional LMS algorithm is therefore referred to as near-optimal. The computational complexity is observed to increase by the bidirectional employment of the LMS algorithm, but nevertheless is significantly lower than that of the optimal Wiener filter. The tracking behavior of the bidirectional LMS algorithm is also analyzed and eventually a steady-state step-size dependent mean square error (MSE) expression is derived for single antenna flat-fading channels with various correlation properties. The aforementioned analysis is then generalized to include single-antenna frequency-selective channels where the so-called independence assumption is no more applicable due to the channel memory at hand, and then to multi-antenna flat-fading channels. The optimal selection of the step-size values is also presented using the results of the MSE analysis.
The numerical evaluations show a very good match between the theoretical and the experimental results under various scenarios. The tracking analysis of the bidirectional LMS algorithm is believed to be novel in the sense that although there are several works in the literature on the bidirectional estimation, none of them provides a theoretical analysis on the underlying estimators. An iterative channel estimation scheme is also presented as a more realistic application for each of the estimation algorithms and the channel models under consideration. As a result, the bidirectional LMS algorithm is observed to be very successful for this real-life application with its increased but still practical level of complexity, the near-optimal tracking performance and robustness to the imperfect initialization.
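The first-order weight update that gives LMS its low complexity can be sketched as follows. This is the plain (unidirectional) LMS rather than the bidirectional extension analyzed in the thesis, and the single static channel tap, step size, and pilot sequence are toy assumptions.

```python
import numpy as np

# Minimal LMS channel tracker: first-order update w += mu * e * x per sample.
rng = np.random.default_rng(0)
N, mu = 2000, 0.05
h_true = 0.7                      # unknown (here static) channel tap
x = rng.standard_normal(N)        # known pilot symbols
y = h_true * x + 0.01 * rng.standard_normal(N)  # received signal with noise

w = 0.0
for n in range(N):
    e = y[n] - w * x[n]           # a priori estimation error
    w += mu * e * x[n]            # stochastic-gradient weight update
print(round(w, 2))  # converges near the true tap 0.7
```

The bidirectional idea in the abstract runs such a recursion forward and backward over the data block and combines the two estimates.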
|
743 |
Least-Squares Finite Element Solution of Euler Equations with Adaptive Mesh Refinement. Akargun, Yigit Hayri, 01 February 2012.
Least-squares finite element method (LSFEM) is employed to simulate 2-D and axisymmetric flows governed by the compressible Euler equations. The least-squares formulation brings many advantages over classical Galerkin finite element methods. For non-self-adjoint systems, LSFEM results in symmetric positive-definite matrices, which can be solved efficiently by iterative methods. Additionally, with a unified formulation it can work in all flight regimes from subsonic to supersonic. Another advantage is that the method does not require artificial viscosity, since it is naturally diffusive; this diffusivity, however, also makes it difficult to sharply resolve high gradients in the flow field such as shock waves. This problem is dealt with by employing adaptive mesh refinement (AMR) on triangular meshes. LSFEM with the AMR technique is numerically tested on various flow problems, and good agreement with the available data in the literature is observed.
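A one-dimensional sketch of why the least-squares formulation pays off for non-self-adjoint problems: discretizing d/dx gives a non-symmetric matrix A, yet minimizing ||Au - b||^2 leads to symmetric positive-definite normal equations, amenable to CG-type iterative solvers. The toy problem below is an assumption for illustration, not the paper's Euler solver.

```python
import numpy as np

# 1-D advection du/dx = f with u(0) = 0, forward differences on n points.
n, h = 100, 1.0 / 100
A = (np.eye(n) - np.eye(n, k=-1)) / h      # non-symmetric difference operator
b = np.ones(n)                              # f = 1, exact solution u(x) = x

# least-squares normal equations A^T A u = A^T b: symmetric positive-definite
AtA = A.T @ A
assert np.allclose(AtA, AtA.T) and np.all(np.linalg.eigvalsh(AtA) > 0)
u = np.linalg.solve(AtA, A.T @ b)
x = np.linspace(h, 1.0, n)
print(np.max(np.abs(u - x)) < 1e-8)  # True: matches the exact solution u = x
```

In a real LSFEM code the same structure appears at the element level, which is what permits efficient iterative solution.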
|
744 |
Image Processing Using the Least-Squares Approximation for Quality Improvement of Underwater Laser Ranging. Wu, Chen-Mao, 29 June 2003.
This paper attempts to use image processing methods to reduce the influences of ambient
light and scattering effects on the performance of an underwater range finder. The Taguchi method is also employed to increase the repeatability of underwater range finding. In this study, the image processing methods of the least-squares approximation, brightness and contrast adjustment, and primary color processing are presented. The illumination center is also used to estimate the position of the laser spot in the image. In addition, a bandpass optical filter at the receiving end is used to investigate the effects of filters on the quality of range finding. To verify the effectiveness of the proposed image processing methods, a series of DOE process runs are carried out to study the effects of the design parameters on the quality of range finding. For each image processing method, its corresponding control factors and levels are assigned to an inner orthogonal array. To make the proposed image processing methods robust against noise, both environmental illumination and turbidity are forced into the experiments by utilizing an outer orthogonal array. Images for processing are then captured under different noise conditions in accordance with the allocation of the outer noise array. Next, according to the layout of the inner array, the S/N ratio of each treatment combination is calculated. After that, the optimum combination of control factors is predicted through the analysis of variance. Then, confirmation experiments are carried out to verify that the combination of control factors at the perceived best levels is valid. Based on the results of the experiments and analyses, it is found that the least-squares approximation is better than the other proposed image processing methods for increasing the quality of range finding. Moreover, the effect
of improving range-finding quality by the least-squares approximation is superior to that of using a bandpass optical filter. Even when a range-finding system has incorporated a bandpass optical filter to remove unwanted noise, the quality of range finding can still be increased distinctly when the least-squares approximation algorithm is employed. The least-squares approximation can also reduce scattering effects in the laser images, provided the sparse backscattered light spot is smaller than the target light spot.
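As a hedged illustration of least-squares spot localization (the paper's exact illumination-center procedure is not reproduced here), the sub-pixel center of a noisy laser spot can be estimated by a least-squares parabola fit to the log-intensity near the peak; the profile, noise level, and window size below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
px = np.arange(40, dtype=float)                  # pixel coordinates
true_center = 21.7
intensity = np.exp(-0.5 * ((px - true_center) / 4.0) ** 2)
intensity += 0.02 * rng.standard_normal(px.size)  # sensor / ambient noise

# a Gaussian spot's log-intensity is a parabola a*x^2 + b*x + c;
# fitting it by least squares puts the spot center at the vertex -b / (2a)
near = np.abs(px - px[np.argmax(intensity)]) <= 5
a, b, c = np.polyfit(px[near], np.log(np.clip(intensity[near], 1e-6, None)), 2)
center = -b / (2 * a)
print(round(center, 1))  # close to the true center 21.7
```

The same least-squares machinery extends to 2-D image patches with a quadratic surface fit.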
|
745 |
A Bayesian Least-Squares Support Vector Machines Based Framework for Fault Diagnosis and Failure Prognosis. Khawaja, Taimoor Saleem, 21 July 2010.
A high-belief low-overhead Prognostics and Health Management (PHM) system
is desired for online real-time monitoring of complex non-linear systems operating
in a complex (possibly non-Gaussian) noise environment. This thesis presents a
Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault
diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology
assumes the availability of real-time process measurements, definition of a set
of fault indicators, and the existence of empirical knowledge (or historical data) to
characterize both nominal and abnormal operating conditions.
An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm,
set within a Bayesian Inference framework, not only allows for the development of
real-time algorithms for diagnosis and prognosis but also provides a solid theoretical
framework to address key concepts related to classification for diagnosis and regression
modeling for prognosis. SVMs are founded on the principle of Structural
Risk Minimization (SRM), which seeks a good trade-off between low empirical
risk and small capacity. The key features in SVM are the use of non-linear kernels,
the absence of local minima, the sparseness of the solution and the capacity control
obtained by optimizing the margin. The Bayesian Inference framework linked with
LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis.
Additional levels of inference provide the much coveted features of adaptability
and tunability of the modeling parameters.
The two main modules considered in this research are fault diagnosis and failure
prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector based on LS-SVMs is proposed. The proposed
scheme uses only baseline data to construct a 1-class LS-SVM machine which,
when presented with online data, is able to distinguish between normal behavior and
any abnormal or novel data during real-time operation. The results of the scheme
are interpreted as a posterior probability of health (1 - probability of fault). As
shown through two case studies in Chapter 3, the scheme is well suited for diagnosing
imminent faults in dynamical non-linear systems.
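The LS-SVM training step that makes such real-time schemes tractable reduces to a single linear system rather than a quadratic program. The sketch below is a generic LS-SVM regressor on toy data, not the thesis's Bayesian one-class detector; the kernel width and regularization constant are invented assumptions.

```python
import numpy as np

def rbf(X, Z, s=0.1):
    """RBF (Gaussian) kernel matrix between row sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

def lssvm_fit(X, y, gamma=100.0):
    """LS-SVM regression: one linear solve of the KKT system."""
    n = len(X)
    K = rbf(X, X)
    # KKT system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, support values alpha

X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvm_fit(X, y)
pred = rbf(X, X) @ alpha + b        # f(x) = sum_i alpha_i k(x, x_i) + b
print(float(np.max(np.abs(pred - y))))  # small training residual
```

Because training is a linear solve, the model can be refit or updated quickly, which is what the incremental LS-SVR prognosis module in the abstract exploits.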
Finally, the failure prognosis scheme is based on an incremental weighted Bayesian
LS-SVR machine. It is particularly suited for online deployment given the incremental
nature of the algorithm and the quick optimization problem solved in the LS-SVR
algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM)
scheme, the algorithm can estimate (possibly) non-Gaussian posterior distributions
for complex non-linear systems. An efficient regression scheme associated with the
more rigorous core algorithm allows for long-term predictions, fault growth estimation
with confidence bounds and remaining useful life (RUL) estimation after a fault
is detected.
The leading contributions of this thesis are (a) the development of a novel Bayesian
Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI)
based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term failure prognosis using Least Squares Support Vector Machines, (c) uncertainty representation and management using Bayesian inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of the diagnosis and prognosis algorithms in order to assess the efficiency and reliability of the proposed schemes.
|
746 |
Uncalibrated Robotic Visual Servo Tracking for Large Residual Problems. Munnae, Jomkwun, 17 November 2010.
In visually guided control of a robot, a large residual problem occurs when the robot configuration is not in the neighborhood of the target acquisition configuration. Most existing uncalibrated visual servoing algorithms use quasi-Gauss-Newton methods which are effective for small residual problems. The solution used in this study switches between a full quasi-Newton method for large residual case and the quasi-Gauss-Newton methods for the small case. Visual servoing to handle large residual problems for tracking a moving target has not previously appeared in the literature.
For large residual problems, various Hessian approximations are introduced, including an approximation of the entire Hessian matrix, the dynamic BFGS (DBFGS) algorithm, and two distinct approximations of the residual term, the modified BFGS (MBFGS) algorithm and the dynamic full Newton method with BFGS (DFN-BFGS) algorithm. Because the quasi-Gauss-Newton method converges quickly, its step is used once the iterate is sufficiently near the desired solution. A switching algorithm combines a full quasi-Newton method and a quasi-Gauss-Newton method. Switching occurs if the image error norm is less than a heuristically selected switching criterion.
An adaptive forgetting factor called the dynamic adaptive forgetting factor (DAFF) is presented. The DAFF method is a heuristic scheme to determine the forgetting factor value based on the image error norm. Compared to other existing adaptive forgetting factor schemes, the DAFF method yields the best performance for both convergence time and the RMS error.
Simulation results verify the validity of the proposed switching algorithms with the DAFF method for large residual problems. The switching MBFGS algorithm with the DAFF method significantly improves tracking performance in the presence of noise. This work is the first model-independent, vision-guided control scheme for large-residual problems that can stably track a moving target with a robot.
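The BFGS-type variants named above all build on the standard BFGS secant update of the Hessian approximation, sketched here on a toy quadratic; the objective and the unit-step iteration are illustrative assumptions, not the servoing controller itself.

```python
import numpy as np

def bfgs_update(B, s, y_vec):
    """BFGS update: keeps B symmetric positive-definite when s^T y > 0."""
    Bs = B @ s
    return (B
            + np.outer(y_vec, y_vec) / (y_vec @ s)
            - np.outer(Bs, Bs) / (s @ Bs))

# minimize f(x) = 0.5 x^T A x - b^T x with quasi-Newton steps x -= B^{-1} g
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x, B = np.zeros(2), np.eye(2)
for _ in range(20):
    g = A @ x - b                       # gradient of the quadratic
    if np.linalg.norm(g) < 1e-10:
        break
    x_new = x - np.linalg.solve(B, g)   # quasi-Newton step
    g_new = A @ x_new - b
    B = bfgs_update(B, x_new - x, g_new - g)
    x = x_new
print(np.allclose(x, np.linalg.solve(A, b)))  # True: converged to the minimizer
```

The switching schemes in the abstract decide, per iteration, whether such a full quasi-Newton step or a cheaper Gauss-Newton step is taken.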
|
747 |
Pricing Structured Products: The Cases of a USD Dual-Index Interest-Rate-Linked Note and a EUR Inverse Floating Note. 謝明翰 (Hsieh, Ming-Han), Unknown Date.
This thesis applies the BGM model to price two interest-rate-linked notes with different coupon structures. With the BGM model, the term structure of LIBOR rates can be described directly from collected market data, and the forward-rate volatilities and correlations within the model are calibrated so that the valuation is more accurate.
The first product priced is a three-year USD daily-accrual dual-index interest-rate-linked note; the second is a ten-year EUR inverse floating note. Using the BGM model together with least-squares Monte Carlo simulation, and taking the early-redemption (callable) provision and each period's coupon into account, fair prices and hedging parameters are computed for both products. In addition, hedging and investment recommendations are given from the perspectives of the issuer and the investor, respectively.
Keywords: interest-rate-linked notes, daily accrual, inverse floater, BGM model, LIBOR Market Model, Least-Squares Monte Carlo
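The least-squares Monte Carlo step can be sketched with the classic Longstaff-Schwartz regression. A toy Black-Scholes American put stands in for the thesis's BGM-driven callable notes; the regression-on-basis-functions treatment of the early-exercise feature is the shared idea, while all parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths, n_steps = 20000, 50
dt = T / n_steps
disc = np.exp(-r * dt)

# simulate geometric Brownian motion paths
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))

cash = np.maximum(K - S[:, -1], 0.0)          # payoff if held to maturity
for t in range(n_steps - 2, -1, -1):
    cash *= disc                              # discount one step back
    itm = K - S[:, t] > 0                     # regress only on in-the-money paths
    if itm.sum() > 10:
        # continuation value approximated by a quadratic in the state variable
        coef = np.polyfit(S[itm, t], cash[itm], 2)
        cont = np.polyval(coef, S[itm, t])
        exercise = (K - S[itm, t]) > cont     # exercise when intrinsic beats continuation
        cash[itm] = np.where(exercise, K - S[itm, t], cash[itm])
price = disc * cash.mean()
print(5.8 < price < 6.4)  # True: near the ~6.09 benchmark for this American put
```

For a callable note the same backward regression is run from the issuer's side to decide redemption at each call date.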
|
748 |
Improving Process Monitoring and Modeling of Batch-Type Plasma Etching Tools. Lu, Bo, 01 September 2015.
Manufacturing equipment in semiconductor factories (fabs) provides abundant data and opportunities for data-driven process monitoring and modeling. In particular, virtual metrology (VM) is an active area of research. Traditional monitoring techniques using univariate statistical process control charts do not provide immediate feedback on quality excursions, hindering the implementation of fab-wide advanced process control initiatives. VM models, or inferential sensors, aim to bridge this gap by predicting quality measurements instantaneously from tool fault detection and classification (FDC) sensor measurements. The existing research in the field of inferential sensors and VM has focused on comparing regression algorithms to demonstrate their feasibility in various applications. However, two important areas, data pretreatment and post-deployment model maintenance, are usually neglected in these discussions. Since it is well known that industrial data is of poor quality, and that semiconductor processes undergo drifts and periodic disturbances, these two issues are the roadblocks to further adoption of inferential sensors and VM models. In data pretreatment, batch data collected from FDC systems usually contain inconsistent trajectories of various durations. Most analysis techniques require the data from all batches to be of the same duration with similar trajectory patterns. These inconsistencies, if unresolved, will propagate into the developed model, complicate interpretation of the modeling results, and degrade model performance. To address this issue, a Constrained selective Derivative Dynamic Time Warping (CsDTW) method was developed to perform automatic alignment of trajectories. CsDTW is designed to preserve the key features that characterize each batch and can be solved efficiently in polynomial time. Variable selection after trajectory alignment is another topic that requires improvement.
To this end, the proposed Moving Window Variable Importance in Projection (MW-VIP) method yields a more robust set of variables with demonstrably stronger long-term correlation with the predicted output. In model maintenance, model adaptation has been the standard solution for dealing with drifting processes. However, most case studies have preprocessed the model update data offline, which implicitly assumes that the adaptation data is free of faults and outliers; this is often not true in practical implementations. To this end, a moving window scheme using Total Projection to Latent Structure (T-PLS) decomposition screens incoming updates to separate harmless process noise from the outliers that negatively affect the model. The integrated approach was demonstrated to be more robust. In addition, model adaptation is very inefficient when there are multiplicities in the process; multiplicities can occur due to process nonlinearity, switches in product grade, or different operating conditions. A growing-structure multiple-model system using local PLS and PCA models has been proposed to improve model performance around process conditions with multiplicity. The use of local PLS and PCA models allows the method to handle a much larger set of inputs and overcomes several challenges in mixture model systems. In addition, fault detection sensitivity is improved by using the multivariate monitoring statistics of these local PLS/PCA models. The proposed methods are tested on two plasma etch data sets provided by Texas Instruments. In addition, a proof of concept using virtual metrology in a controller performance assessment application was also tested.
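The trajectory-alignment idea underlying CsDTW can be illustrated with the basic dynamic-time-warping recursion, which compares batches of unequal duration; the constraints and derivative features of CsDTW itself are not reproduced in this sketch.

```python
import numpy as np

def dtw_distance(a, b):
    """Basic DTW distance between two 1-D trajectories of any lengths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best way to reach cell (i, j): match, insertion, or deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

slow = np.sin(np.linspace(0, np.pi, 80))      # same shape, longer duration
fast = np.sin(np.linspace(0, np.pi, 50))      # same shape, shorter duration
shifted = np.sin(np.linspace(0, np.pi, 80)) + 1.0
print(dtw_distance(slow, fast) < dtw_distance(slow, shifted))  # True: shape matters, not length
```

Aligning all batches against a reference this way yields trajectories of common length suitable for PLS/PCA modeling.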
|
749 |
Magnetic Field of HD119419 from Four Stokes Parameter Observations. Lundin, Andreas, January 2015.
We have used a series of observations of HD119419, performed in 2012 and 2013 at the European Southern Observatory 3.6-m telescope in La Silla, Chile. These are high-resolution spectropolarimetric observations with coverage in all four Stokes parameters. We performed a chemical abundance analysis of HD119419, none having been published previously for this star. We used the LLmodels stellar atmosphere code with an effective temperature of 11500 K and surface gravity log g = 4.0, together with the spectrum synthesis code synmast. Abundances were adjusted until the synthetic spectra matched the mean observed spectra as closely as possible, and these abundances were assumed to be representative of the photosphere of HD119419. We found good estimates for some Fe-peak elements and rare-earth elements. The abundance estimates were used to compute least-squares deconvolution Stokes spectra, from which we calculated how the longitudinal magnetic field and net linear polarization vary with rotational phase for HD119419. We calculated an improved rotational period for HD119419 using our longitudinal magnetic field measurements together with previous measurements from the literature, determining it to be 2.60059(1) days. We found that the Stokes QUV signatures are unusually strong for the rare-earth elements in HD119419, considering their weaker Stokes I profiles compared to the Fe-peak elements in particular.
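Least-squares deconvolution (LSD) can be sketched as a single linear least-squares solve: the spectrum is modeled as a line mask applied to one common mean profile, and the profile is recovered from the noisy spectrum. The line list, depths, and noise level below are invented toys, and real LSD additionally weights lines by depth and inverse variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_prof = 400, 21
line_centers, line_depths = [60, 150, 230, 310], [0.9, 0.5, 0.7, 0.3]

# mask matrix M: column k places profile sample k, scaled by line depth,
# at every line position in the spectrum
M = np.zeros((n_pix, n_prof))
for c, d in zip(line_centers, line_depths):
    for k in range(n_prof):
        M[c + k - n_prof // 2, k] += d

true_prof = np.exp(-0.5 * ((np.arange(n_prof) - n_prof // 2) / 3.0) ** 2)
Y = M @ true_prof + 0.05 * rng.standard_normal(n_pix)  # noisy "observed" spectrum

# LSD: recover the common mean profile by least squares, Z = argmin ||M Z - Y||
Z, *_ = np.linalg.lstsq(M, Y, rcond=None)
print(np.max(np.abs(Z - true_prof)) < 0.2)  # True: profile recovered despite noise
```

Because many lines contribute to one profile, the effective signal-to-noise of the LSD profile far exceeds that of any individual line, which is what makes weak polarization signatures measurable.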
|
750 |
Macroscopic Modeling of Metabolic Reaction Networks and Dynamic Identification of Elementary Flux Modes by Column Generation. Oddsdóttir, Hildur Æsa, January 2015.
In this work an intersection between optimization methods and animal cell culture modeling is considered. We present optimization-based methods for analyzing and building models of cell culture; models that could be used when designing the environment cells are cultivated in, i.e., the medium. Since both the medium and the cell line considered are complex, designing a good medium is not straightforward. Developing a model of cell metabolism is a step toward facilitating medium design. In order to develop a model of the metabolism, the methods presented in this work make use of an underlying metabolic reaction network and extracellular measurements. External substrates and products are connected via the relevant elementary flux modes (EFMs). Modeling from EFMs is generally limited to small networks, because the number of EFMs explodes as the underlying network grows. The aim of this work is to enable modeling with more complex networks by presenting methods that dynamically identify a subset of the EFMs. In papers A and B we consider a model consisting of the EFMs along with the flux over each mode. In paper A we present how such a model can be determined by an optimization technique named column generation. In paper B the robustness of such a model with respect to measurement errors is considered. We show that a robust version of the underlying optimization problem in paper A can be formed and column generation applied to identify EFMs dynamically. In papers C and D a kinetic macroscopic model is considered. In paper C we show how a kinetic macroscopic model can be constructed from the EFMs. This macroscopic model is created by assuming that the flux along each EFM behaves according to Michaelis-Menten type kinetics. This modeling method has the ability to capture cell behavior in varied types of media; however, the size of the underlying network is a limitation.
In paper D this limitation is countered by developing an approximation algorithm that can dynamically identify EFMs for a kinetic model.
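The paper-C construction, external uptake and production rates expressed as Michaelis-Menten fluxes summed over EFMs, can be sketched as follows; the network, modes, and kinetic constants are invented for illustration only.

```python
import numpy as np

def mm_flux(s, vmax, km):
    """Michaelis-Menten rate for substrate concentration s."""
    return vmax * s / (km + s)

# columns = elementary flux modes, rows = external species
# (glucose, glutamine, lactate); signs give consumption (-) / production (+)
E = np.array([[-1.0, -1.0,  0.0],   # glucose consumed by modes 1 and 2
              [ 0.0,  0.0, -1.0],   # glutamine consumed by mode 3
              [ 2.0,  0.0,  1.0]])  # lactate produced by modes 1 and 3
vmax, km = np.array([1.5, 0.8, 1.0]), np.array([0.5, 2.0, 1.0])

def external_rates(conc):
    glucose, glutamine, _ = conc
    # flux along each EFM follows Michaelis-Menten kinetics in its substrate
    w = np.array([mm_flux(glucose, vmax[0], km[0]),
                  mm_flux(glucose, vmax[1], km[1]),
                  mm_flux(glutamine, vmax[2], km[2])])
    return E @ w                    # net rate for each external species

rates = external_rates(np.array([5.0, 2.0, 0.0]))
print(rates[0] < 0 and rates[2] > 0)  # True: glucose consumed, lactate produced
```

Column generation, as in papers A, B and D, would grow the set of columns of E on demand instead of enumerating every EFM up front.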
|