281 |
A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis
Khawaja, Taimoor Saleem, 21 July 2010 (has links)
A high-belief low-overhead Prognostics and Health Management (PHM) system
is desired for online real-time monitoring of complex non-linear systems operating
in a complex (possibly non-Gaussian) noise environment. This thesis presents a
Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault
diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology
assumes the availability of real-time process measurements, definition of a set
of fault indicators, and the existence of empirical knowledge (or historical data) to
characterize both nominal and abnormal operating conditions.
An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm,
set within a Bayesian Inference framework, not only allows for the development of
real-time algorithms for diagnosis and prognosis but also provides a solid theoretical
framework to address key concepts related to classification for diagnosis and regression
modeling for prognosis. SVMs are founded on the principle of Structural
Risk Minimization (SRM), which seeks a good trade-off between low empirical
risk and small capacity. The key features of SVMs are the use of non-linear kernels,
the absence of local minima, the sparseness of the solution and the capacity control
obtained by optimizing the margin. The Bayesian Inference framework linked with
LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis.
Additional levels of inference provide the much coveted features of adaptability
and tunability of the modeling parameters.
The two main modules considered in this research are fault diagnosis and failure
prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed
scheme uses only baseline data to construct a 1-class LS-SVM machine which,
when presented with online data, is able to distinguish between normal behavior and
any abnormal or novel data during real-time operation. The results of the scheme
are interpreted as a posterior probability of health (1 - probability of fault). As
shown through two case studies in Chapter 3, the scheme is well suited for diagnosing
imminent faults in dynamical non-linear systems.
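The one-class detection step described above can be illustrated with a small numerical sketch. The following is not the thesis's Bayesian LS-SVM formulation; it is a minimal least-squares kernel fit on baseline-only data, with the decision value squashed into (0, 1) so it can be read loosely as a posterior probability of health. The kernel width, regularization constant and squashing slope are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_one_class_lssvm(X_base, gamma=0.5, reg=1e-2):
    # Least-squares fit of K @ alpha = 1: every baseline point is labelled "healthy"
    K = rbf_kernel(X_base, X_base, gamma) + reg * np.eye(len(X_base))
    return np.linalg.solve(K, np.ones(len(X_base)))

def health_probability(x, X_base, alpha, gamma=0.5):
    # Decision value is near 1 for data resembling the baseline; squash into (0, 1)
    f = rbf_kernel(np.atleast_2d(x), X_base, gamma) @ alpha
    return 1.0 / (1.0 + np.exp(-8.0 * (f[0] - 0.5)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.3, size=(60, 2))        # nominal operating data
alpha = fit_one_class_lssvm(baseline)
p_ok = health_probability(np.array([0.0, 0.0]), baseline, alpha)   # near baseline
p_bad = health_probability(np.array([3.0, 3.0]), baseline, alpha)  # novel point
```

A score near 1 marks data resembling the baseline; a score near 0 flags novel, possibly faulty behavior.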
Finally, the failure prognosis scheme is based on an incremental weighted Bayesian
LS-SVR machine. It is particularly suited for online deployment given the incremental
nature of the algorithm and the speed with which the LS-SVR optimization
problem can be solved. By way of kernelization and a Gaussian Mixture Modeling (GMM)
scheme, the algorithm can estimate (possibly) non-Gaussian posterior distributions
for complex non-linear systems. An efficient regression scheme associated with the
more rigorous core algorithm allows for long-term predictions, fault growth estimation
with confidence bounds and remaining useful life (RUL) estimation after a fault
is detected.
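The regression side can likewise be sketched. Below, a plain batch (non-incremental) LS-SVR is assembled from the standard LS-SVM linear system and used as a one-step predictor that is iterated forward for long-term prediction; the thesis's incremental weighting, Bayesian inference and GMM-based confidence bounds are not reproduced. The sine-wave data and hyperparameters are assumptions for illustration.

```python
import numpy as np

def lssvr_fit(X, y, gamma=1.0, C=10.0):
    # LS-SVR dual system: [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
    n = len(X)
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvr_predict(x, X, b, alpha, gamma=1.0):
    k = np.exp(-gamma * ((X - x) ** 2).sum(-1))
    return b + k @ alpha

# one-step model y_{t+1} = f(y_t, y_{t-1}), then iterated for long-term prediction
t = np.arange(200) * 0.1
series = np.sin(t)
X = np.stack([series[1:-1], series[:-2]], axis=1)  # features [y_t, y_{t-1}]
y = series[2:]                                     # target y_{t+1}
b, alpha = lssvr_fit(X, y)

window = list(series[-2:])
for _ in range(10):  # recursive multi-step (long-term) prediction
    window.append(lssvr_predict(np.array(window[-2:][::-1]), X, b, alpha))
```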
The leading contributions of this thesis are (a) the development of a novel Bayesian
Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI)
based on Least Squares Support Vector Machines, (b) the development of a data-driven
real-time architecture for long-term Failure Prognosis using Least Squares Support
Vector Machines, (c) uncertainty representation and management using Bayesian
Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of the diagnosis and prognosis
algorithms in order to assess the efficiency and reliability of the proposed schemes.
|
282 |
Closed-loop control for cardiopulmonary management and intensive care unit sedation using digital imaging
Gholami, Behnood, 29 June 2010 (has links)
This dissertation introduces a new problem in the delivery of healthcare, which could result in
lower cost and a higher quality of medical care as compared to the current healthcare practice. In
particular, a framework is developed for sedation and cardiopulmonary management for patients
in the intensive care unit. A method is introduced to automatically detect pain and agitation
in nonverbal patients, specifically in sedated patients in the intensive care unit, using their facial
expressions. Furthermore, deterministic as well as probabilistic expert systems are developed to
suggest the appropriate drug dose based on patient sedation level. This framework can be used
to automatically control the level of sedation in intensive care unit patients via a closed-loop
control system. Specifically, video and other physiological variables of a patient can be constantly
monitored by a computer and used as a feedback signal in a closed-loop control architecture. In
addition, the expert system selects the appropriate drug dose based on the patient's sedation level.
In clinical intensive care unit practice, sedative/analgesic agents are titrated to achieve a specific
level of sedation. The level of sedation is currently based on clinical scoring systems. In general,
the goal of the clinician is to find the drug dose that maintains the patient at a sedation score
corresponding to a moderately sedated state. This is typically done empirically, administering a
drug dose that usually is in the effective range for most patients, observing the patient's response,
and then adjusting the dose accordingly. However, the response of patients to any drug dose is
a reflection of the pharmacokinetic and pharmacodynamic properties of the drug and the specific
patient. In this research, we use pharmacokinetic and pharmacodynamic modeling to find an
optimal drug dosing control policy to drive the patient to a desired sedation score.
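The dose-selection idea in this paragraph can be sketched with a toy model: one-compartment pharmacokinetics for drug concentration, a Hill-type pharmacodynamic curve for the sedation effect, and a simple proportional dosing policy toward a target sedation level. All parameter values (elimination rate, EC50, Hill exponent, controller gain) are hypothetical and not taken from the dissertation, which derives an optimal control policy rather than this naive one.

```python
def simulate_dosing(target, steps=200, dt=0.1, gain=20.0):
    # One-compartment pharmacokinetics (elimination rate k) driving a
    # Hill-type pharmacodynamic sedation effect, dosed by a proportional policy
    k, ec50, hill = 0.3, 2.0, 2.0                     # hypothetical drug parameters
    c, effects = 0.0, []
    for _ in range(steps):
        effect = c**hill / (ec50**hill + c**hill)     # sedation level in [0, 1)
        dose = max(0.0, gain * (target - effect))     # proportional dosing policy
        c += dt * (dose - k * c)                      # concentration dynamics
        effects.append(effect)
    return effects

effects = simulate_dosing(target=0.6)
```

A purely proportional policy settles slightly below the target (steady-state offset); the pharmacokinetic/pharmacodynamic model is what an optimal policy would exploit to remove this gap.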
|
283 |
Novel Approaches For Demand Forecasting In Semiconductor Manufacturing
Kumar, Chittari Prasanna, 01 1900 (has links)
Accurate demand forecasting is a key capability for a manufacturing organization, all the more so for a semiconductor manufacturer. Many crucial decisions are based on demand forecasts. The semiconductor industry is characterized by very short product lifecycles (10 to 24 months) and extremely uncertain demand. The pace at which both manufacturing technology and product design change induces changes in manufacturing throughput and potential demand. Well known methods like exponential smoothing, moving average, weighted moving average, ARMA, ARIMA, econometric methods and neural networks have been used in industry with varying degrees of success. We propose a novel forecasting technique based on Support Vector Regression (SVR). Specifically, we formulate ν-SVR models for semiconductor product demand data. We propose a 3-phased input vector modeling approach to incorporate demand characteristics learned while building a standard ARIMA model on the data.
Forecasting experiments were conducted on demand data for different semiconductor products, such as 32- and 64-bit CPU products, 32-bit microcontroller units, DSPs for cellular products, and NAND and NOR flash products. Demand data was provided by SRC (Semiconductor Research Consortium) member companies and consisted of actual sales recorded every month. Model performance is judged on different performance metrics used in the extant literature. Results of the experiments show that, compared to other demand forecasting techniques, ν-SVR can significantly reduce both the mean absolute percentage error and the normalized mean-squared error of forecasts. ν-SVR with our 3-phased input vector modeling approach performs better than standard ARIMA and simple ν-SVR models in most cases.
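A minimal sketch of the lag-based input construction is given below. The lag set stands in for demand characteristics identified while fitting an ARIMA model (here, hypothetically, two autoregressive lags plus a seasonal lag); an ordinary least-squares fit is used as a stand-in regressor where the thesis uses ν-SVR (e.g., a library implementation such as scikit-learn's NuSVR). The demand series is synthetic.

```python
import numpy as np

def make_lagged_inputs(y, lags):
    # Build input vectors from lagged demand values; the lag set mirrors
    # structure identified while fitting a standard ARIMA model to the data
    y = np.asarray(y, float)
    p = max(lags)
    X = np.stack([y[p - l:len(y) - l] for l in lags], axis=1)
    return X, y[p:]

rng = np.random.default_rng(1)
months = np.arange(48)  # 4 years of synthetic monthly demand: trend + seasonality
demand = 100 + 2 * months + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, 48)

# hypothetical lag choice: two autoregressive lags plus one seasonal lag
X, y = make_lagged_inputs(demand, lags=[1, 2, 12])
Xb = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(Xb[:-6], y[:-6], rcond=None)  # OLS stand-in for nu-SVR
pred = Xb[-6:] @ w                                    # 6-month holdout forecast
mape = float(np.mean(np.abs((y[-6:] - pred) / y[-6:])))
```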
|
284 |
Use of random time intervals for biometric characteristics verification
Σταμούλη, Αλεξία, 30 April 2014 (has links)
The identification method via keystroke dynamics is a biometric recognition method aimed at minimizing the risk of theft of the personal codes of a system's users. The biometric system presented here is based on the premise that the rhythm with which a person types is distinctive.
The biometric system has two modes of operation: enrollment of users in the system and verification. During enrollment, user templates are extracted and stored in the system database; during verification, the presented template is compared with the stored template of the claimed identity.
In this thesis, template extraction is carried out through a series of algorithmic procedures. First, the one-dimensional characteristic time series of the user is converted, via the Method of Delays, into a multidimensional vector that serves as a feature of the sequence. Two different methods are then used to compute the dissimilarities between the resulting multidimensional vectors: the Wald-Wolfowitz test and the Mutual Nearest Point Distance. These values are placed in a matrix, each element of which represents the dissimilarity between two time series. This matrix can either serve directly as the set of user templates or be used as input to the Multidimensional Scaling method, which converts the dissimilarity matrix into vectors from which new templates are produced. Finally, we propose, as an extension of this work, training the biometric system using Support Vector Machine techniques.
For verification, the user's template is extracted by the same procedure and compared against a threshold. Finally, the reliability of the system is evaluated using three performance indicators: Equal Error Rate, False Rejection Rate and False Acceptance Rate.
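The template-extraction pipeline can be sketched as follows: a Method of Delays embedding turns each one-dimensional keystroke-timing series into a cloud of multidimensional vectors, and a simplified nearest-point dissimilarity (a stand-in for the Mutual Nearest Point Distance; the Wald-Wolfowitz variant is omitted) compares two clouds. The timing values and embedding parameters are invented for illustration.

```python
import numpy as np

def delay_embed(series, dim=3, tau=1):
    # Method of Delays: map a 1-D timing sequence to points in R^dim
    s = np.asarray(series, float)
    n = len(s) - (dim - 1) * tau
    return np.stack([s[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

def mutual_nearest_point_distance(A, B):
    # Symmetric average of nearest-neighbour distances between two point clouds
    # (a simplified stand-in for the MNPD dissimilarity used in the thesis)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

user_a = [0.12, 0.15, 0.11, 0.14, 0.12, 0.13, 0.15, 0.12]   # inter-key intervals (s)
user_a2 = [0.13, 0.14, 0.12, 0.15, 0.11, 0.14, 0.13, 0.12]  # same typist, new session
user_b = [0.30, 0.45, 0.28, 0.50, 0.33, 0.41, 0.29, 0.47]   # different typist

Ea, Ea2, Eb = (delay_embed(u) for u in (user_a, user_a2, user_b))
```

A genuine user should be closer (smaller dissimilarity) to their own stored template than an impostor is, which is what the threshold comparison exploits.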
|
285 |
Robust recognition of facial expressions on noise degraded facial images
Sheikh, Munaf, January 2011 (has links)
We investigate the use of noise degraded facial images in the application of facial expression recognition. In particular, we trained Gabor+SVM classifiers to recognize facial expressions in images with various types of noise. We applied Gaussian noise, Poisson noise, varying levels of salt and pepper noise, and speckle noise to noiseless facial images. Classifiers were trained on images without noise and then tested on the images with noise. Next, the classifiers were trained using images with noise, and then tested on both images that had noise and images that were noiseless. Finally, classifiers were tested on images while increasing the levels of salt and pepper noise in the test set. Our results reflected distinct degradation of recognition accuracy. We also discovered that certain types of noise, particularly Gaussian and Poisson noise, boost recognition rates to levels greater than those achieved on normal, noiseless images. We attribute this effect to the Gaussian envelope component of the Gabor filters being sympathetic to Gaussian-like noise of similar variance. Finally, using linear regression, we fitted a mathematical model to this degradation and used it to suggest how recognition rates would degrade further should more noise be added to the images.
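As an illustration of the noise model used in the salt-and-pepper sweep, the sketch below corrupts a stand-in grayscale image at increasing noise levels; the Gabor filtering and SVM classification stages are omitted.

```python
import numpy as np

def add_salt_pepper(img, level, rng):
    # Corrupt a fraction `level` of pixels: roughly half to white (salt),
    # half to black (pepper)
    noisy = img.copy()
    mask = rng.random(img.shape) < level
    salt = rng.random(img.shape) < 0.5
    noisy[mask & salt] = 1.0
    noisy[mask & ~salt] = 0.0
    return noisy

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for a grayscale face image in [0, 1]
for level in (0.05, 0.10, 0.20):  # increasing noise levels, as in the test-set sweep
    noisy = add_salt_pepper(img, level, rng)
    corrupted = float(np.mean((noisy == 0.0) | (noisy == 1.0)))
```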
|
286 |
Application of artificial intelligence methods in credit risk evaluation
Danėnas, Paulius, 23 June 2014 (has links)
This master's thesis describes the most widely used artificial intelligence methods and the possibilities of applying them in credit risk evaluation, one of the most important fields in banking and finance. The main problem is to evaluate the risk that arises when a creditor grants credit to a particular individual or enterprise, using various mathematical, statistical or other methods and techniques.
This risk arises when the debtor is unable to repay the loan to the creditor in time, which means additional loss. It can appear in many forms depending on the type of debtor (individual, enterprise, or government of a foreign country) and the type of financial instrument or the action performed with it (granting of a loan, transactions in financial derivatives, etc.). For this reason, financial institutions use a wide range of methodologies for its evaluation and management, from credit scoring (evaluation by a particular, usually linear, formula) and the assessment of different factors, such as management and business strategies or policies, to classification by various criteria using modern and sophisticated methods, both algebraic and from artificial intelligence and machine learning. This field is widely researched and many new techniques are continually being found. The research here is concentrated mainly on Support Vector Machines (abbr. SVM), one of the most popular artificial intelligence and machine learning methods, whose effectiveness has been demonstrated in many cases. The aim of this research is to investigate the possibilities of applying the SVM method to the problem described here and to implement a system using... [to full text]
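As a hedged illustration of SVM-based credit classification, the sketch below trains a linear SVM with a Pegasos-style subgradient method on synthetic, hypothetical credit features; the thesis concerns real financial data and more capable SVM implementations, so this only shows the principle of separating good and bad debtors by a maximum-margin hyperplane.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    # Pegasos-style subgradient descent on the hinge loss (labels y in {-1, +1})
    rng = np.random.default_rng(0)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:          # margin violated: hinge gradient
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                   # only regularization shrinkage
                w = (1 - lr * lam) * w
    return w, b

# hypothetical standardized features: debt ratio, payment delays, income
rng = np.random.default_rng(1)
good = rng.normal([-1.0, -1.0, 1.0], 0.5, size=(100, 3))
bad = rng.normal([1.0, 1.0, -1.0], 0.5, size=(100, 3))
X = np.vstack([good, bad])
y = np.array([1] * 100 + [-1] * 100)
w, b = train_linear_svm(X, y)
acc = float(np.mean(np.sign(X @ w + b) == y))
```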
|
287 |
Derivative Free Optimization Methods: Application In Stirrer Configuration And Data Clustering
Akteke, Basak, 01 July 2005 (has links) (PDF)
Recent developments show that derivative free methods are in high demand among researchers for solving optimization problems in various practical contexts.
Although well-known optimization methods that employ derivative information can be very efficient, a derivative free method will be more efficient in cases
where the objective function is nondifferentiable, the derivative information is
not available or is not reliable. Derivative Free Optimization (DFO) was developed
for solving small dimensional problems (fewer than 100 variables) in which
the computation of the objective function is relatively expensive and the derivatives
of the objective function are not available. Problems of this nature arise more
and more in modern physical, chemical and econometric measurements
and in engineering applications, where computer simulation is employed for the
evaluation of the objective functions.
In this thesis, we give an example of the implementation of DFO in an approach
for optimizing stirrer configurations, including a parametrized grid generator,
a flow solver, and DFO. A derivative free method, i.e., DFO, is preferred because
the gradient of the objective function with respect to the stirrer's design variables is not directly available. This nonlinear objective function is obtained
from the flow field by the flow solver. We present and interpret numerical results
of this implementation. Moreover, a contribution is made to a survey and
differentiation of DFO research directions, and to their analysis and discussion.
We also present a derivative free algorithm used within a clustering algorithm in
combination with non-smooth optimization techniques to reveal the effectiveness
of derivative free methods in computations. This algorithm is applied on
some data sets from various sources of public life and medicine. We compare
various methods, their practical backgrounds, and conclude with a summary
and outlook. This work may serve as a preparation of possible future research.
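To make the derivative-free idea concrete, the sketch below implements compass (pattern) search, one of the simplest DFO methods, on a nondifferentiable objective; the model-based DFO algorithm referred to in the thesis is more sophisticated, so this is only an illustration of optimizing without gradient information.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    # Basic pattern search: poll +/- each coordinate direction, move on
    # improvement, otherwise halve the step; no derivatives are used
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# nondifferentiable objective (|.| terms) where gradient-based methods struggle
f = lambda v: abs(v[0] - 3.0) + abs(v[1] + 1.0) + 0.1 * abs(v[0] * v[1])
x_opt, f_opt = compass_search(f, [0.0, 0.0])
```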
|
288 |
Accent Classification from Speech Samples by Use of Machine Learning
Carol Pedersen, Unknown Date (has links)
“Accent” is the pattern of speech pronunciation by which one can identify a person’s linguistic, social or cultural background. It is an important source of inter-speaker variability and a particular problem for automated speech recognition. The aim of the study was to investigate a new computational approach to accent classification which did not require phonemic segmentation or the identification of phonemes as input, and which could therefore be used as a simple, effective accent classifier. Through a series of structured experiments this study investigated the effectiveness of Support Vector Machines (SVMs) for speech accent classification using time-based units rather than linguistically-informed ones, and compared it to the accuracy of other machine learning methods, as well as the ability of humans to classify speech according to accent. A corpus of read-speech was collected in two accents of English (Arabic and “Indian”) and used as the main data source for the experiments. Mel-frequency cepstral coefficients were extracted from the speech samples and combined into larger units of 10 to 150 ms duration, which then formed the input data for the various machine learning systems. Support Vector Machines were found to classify the samples with up to 97.5% accuracy with very high precision and recall, using samples of between 1 and 4 seconds of speech. This compared favourably with a human listener study where subjects were able to distinguish between the two accent groups with an average of 92.5% accuracy in approximately 8 seconds. Repeating the SVM experiments on a different corpus resulted in a best classification accuracy of 84.6%. Experiments using a decision tree learner and a rule-based classifier on the original corpus gave a best accuracy of 95% but results over the range of conditions were much more variable than those using the SVM. Rule extraction was performed in order to help explain the results and better inform the design of the system.
The new approach was therefore shown to be effective for accent classification, and a plan for its role within various other larger speech-related contexts was developed.
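The time-based units described above (frame-level MFCCs pooled into blocks of 10 to 150 ms) can be sketched as follows, with random numbers standing in for real MFCC frames; the SVM classification stage is omitted. Frame counts per unit are assumed values for illustration.

```python
import numpy as np

def frames_to_units(mfcc_frames, frames_per_unit):
    # Combine consecutive MFCC frames into larger fixed-duration units by
    # averaging; the unit vectors then form the classifier's input data
    n = (len(mfcc_frames) // frames_per_unit) * frames_per_unit
    return mfcc_frames[:n].reshape(-1, frames_per_unit, mfcc_frames.shape[1]).mean(axis=1)

rng = np.random.default_rng(0)
mfcc = rng.normal(size=(400, 13))        # 400 frames x 13 coefficients (stand-in)
units_30ms = frames_to_units(mfcc, 3)    # e.g. 3 x 10 ms frames per unit
units_150ms = frames_to_units(mfcc, 15)  # e.g. 15 x 10 ms frames per unit
```

Longer units give fewer but smoother feature vectors per utterance, which is the trade-off the duration sweep in the study explores.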
|
290 |
Approximate dynamic programming with adaptive critics and the algebraic perceptron as a fast neural network related to support vector machines
Hanselmann, Thomas, January 2003 (has links)
[Truncated abstract. Please see the pdf version for the complete text. Also, formulae and special characters can only be approximated here. Please see the pdf version of this abstract for an accurate reproduction.] This thesis treats two aspects of intelligent control: The first part is about long-term optimization by approximating dynamic programming and in the second part a specific class of a fast neural network, related to support vector machines (SVMs), is considered. The first part relates to approximate dynamic programming, especially in the framework of adaptive critic designs (ACDs). Dynamic programming can be used to find an optimal decision or control policy over a long-term period. However, in practice it is difficult, and often impossible, to calculate a dynamic programming solution, due to the 'curse of dimensionality'. The adaptive critic design framework addresses this issue and tries to find a good solution by approximating the dynamic programming process for a stationary environment. In an adaptive critic design there are three modules, the plant or environment to be controlled, a critic to estimate the long-term cost and an action or controller module to produce the decision or control strategy. Even though there have been many publications on the subject over the past two decades, there are some points that have had less attention. While most of the publications address the training of the critic, one of the points that has not received systematic attention is training of the action module.¹ Normally, training starts with an arbitrary, hopefully stable, decision policy and its long-term cost is then estimated by the critic. Often the critic is a neural network that has to be trained, using a temporal difference and Bellman's principle of optimality. Once the critic network has converged, a policy improvement step is carried out by gradient descent to adjust the parameters of the controller network. 
Then the critic is retrained to give the new long-term cost estimate. However, it would be preferable to focus more on extremal policies earlier in the training. Therefore, the Calculus of Variations is investigated, and the idea of using the Euler equations to train the actor is discarded. Instead, an adaptive critic formulation for a continuous plant with a short-term cost as an integral cost density is made, and the chain rule is applied to calculate the total derivative of the short-term cost with respect to the actor weights. This is different from the discrete systems usually used in adaptive critics, which are used in conjunction with total ordered derivatives. This idea is then extended to second order derivatives such that Newton's method can be applied to speed up convergence. Based on this, an almost concurrent actor and critic training is proposed. The equations are developed for any non-linear system and short-term cost density function, and are tested on a linear quadratic regulator (LQR) setup. With this approach the actor and critic weights can be obtained in only a few actor-critic training cycles. Some other, more minor, issues in the adaptive critic framework are investigated, such as the influence of the discounting factor in the Bellman equation on total ordered derivatives, the interpretation of targets in backpropagation through time as moving and fixed targets, and the relation between simultaneous recurrent networks and dynamic programming; a reinterpretation of the recurrent generalized multilayer perceptron (GMLP) as a recurrent generalized finite impulse MLP (GFIR-MLP) is also made. Another subject investigated in this area is that of a hybrid dynamical system, characterized as a continuous plant and a set of basic feedback controllers, which are used to control the plant by finding a switching sequence to select one basic controller at a time.
The special but important case is considered when the plant is linear but with some uncertainty in the state space and in the observation vector, and a quadratic cost function. This is a form of robust control, where a dynamic programming solution has to be calculated. ¹Werbos comments that most treatments of action nets or policies either assume enumerative maximization, which is good only for small problems (except for the games of Backgammon and Go [1]), or gradient-based training. The latter is prone to difficulties with local minima due to the non-convex nature of the cost-to-go function. With incremental methods, such as backpropagation through time, calculus of variations and model-predictive control, the dangers of non-convexity of the cost-to-go function with respect to the control are much less than with respect to the critic parameters, when the sampling times are small. Therefore, getting the critic right has priority. But with larger sampling times, when the control represents a more complex plan, non-convexity becomes more serious.
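For the LQR setup used to test the actor-critic equations, the cost-to-go is exactly quadratic and the critic has a closed-form fixed point. The sketch below computes that reference solution by iterating the discrete Riccati equation, which is what an adaptive critic approximates in this special case; the plant matrices are an assumed double-integrator-like example, not taken from the thesis.

```python
import numpy as np

def lqr_value_iteration(A, B, Q, R, iters=500):
    # Closed-form "critic training": iterate the discrete Riccati equation until
    # the quadratic cost-to-go x'Px converges, then read off the policy u = -Kx
    P = np.zeros_like(Q)
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy actor update
        P = Q + A.T @ P @ (A - B @ K)                      # critic (value) update
    return P, K

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed double-integrator-like plant
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = lqr_value_iteration(A, B, Q, R)
```

An adaptive critic trained on this plant should converge to the same quadratic cost-to-go `P` and feedback gain `K`, which makes the LQR case a useful correctness check.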
|