421 |
A Spatial-Temporal Contextual Kernel Method for Generating High-Quality Land-Cover Time Series. Wehmann, Adam, 25 September 2014
No description available.
|
422 |
Characterization of the Frictional-Shear Damage Properties of Scaffold-Free Engineered Cartilage and Reduction of Damage Susceptibility by Upregulation of Collagen Content. Whitney, G. Adam, 09 February 2015
No description available.
|
423 |
Estimating Per-pixel Classification Confidence of Remote Sensing Images. Jiang, Shiguo, 19 December 2012
No description available.
|
424 |
A Comparative Study of Machine Learning Algorithms. Le Fort, Eric, January 2018
The selection of a machine learning algorithm to solve a problem is an important choice. This paper outlines research measuring three performance metrics for eight different algorithms on a prediction task involving undergraduate admissions data. The algorithms tested are k-nearest neighbours, decision trees, random forests, gradient tree boosting, logistic regression, naive Bayes, support vector machines, and artificial neural networks. These algorithms were compared in terms of accuracy, training time, and execution time. / Thesis / Master of Applied Science (MASc)
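A minimal sketch of such a comparison, assuming scikit-learn and a synthetic dataset in place of the (non-public) admissions data:

```python
# Sketch: compare eight classifiers on accuracy, training time, and
# execution (prediction) time. Dataset and hyperparameters are placeholders.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "k-nearest neighbours": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "gradient tree boosting": GradientBoostingClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": GaussianNB(),
    "support vector machine": SVC(),
    "artificial neural network": MLPClassifier(max_iter=1000, random_state=0),
}

for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)            # training time
    t1 = time.perf_counter()
    y_pred = model.predict(X_te)     # execution (inference) time
    t2 = time.perf_counter()
    print(f"{name:26s} acc={accuracy_score(y_te, y_pred):.3f} "
          f"train={t1 - t0:.3f}s predict={t2 - t1:.3f}s")
```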
|
425 |
Grön AI : En analys av maskininlärningsalgoritmers prestanda och energiförbrukning / Green AI: An analysis of the performance and energy consumption of machine learning algorithms. Berglin, Caroline; Ellström, Julia, January 2024
Despite the advancements made in artificial intelligence (AI) and machine learning (ML), challenges regarding their environmental impact arise. The focus on creating advanced and accurate models often requires extensive computational resources, leading to high energy consumption. The purpose of this work is to explore the topic of green AI and the relationship between performance and energy consumption of two ML algorithms. The algorithms being evaluated are decision trees and support vector machines (SVM), using two datasets: Bank Marketing and MNIST. Performance is measured using the evaluation metrics accuracy, precision, recall, and F1-score, while energy consumption is measured using the Intel VTune Profiler tool. The results show that higher performance resulted in higher energy consumption, with SVM performing the best but also consuming the most energy in all tests. Furthermore, the results show that optimizing the models resulted in both improved performance and increased energy consumption. The same results were observed when a larger dataset was used. This work is not considered to provide results or guidelines that can be generalized to other studies. However, it contributes to an understanding and awareness of the environmental aspects of AI, which can serve as a foundation for further exploration of the topic. Through increased awareness, shared responsibility can be taken to develop AI solutions that are not only powerful and efficient but also sustainable.
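A small illustrative sketch of the evaluation step, assuming scikit-learn; load_digits stands in for MNIST and Bank Marketing, and energy would be measured by an external profiler such as Intel VTune while the script runs (no energy API is called from Python here):

```python
# Sketch: the four performance metrics for a decision tree and an SVM.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

X, y = load_digits(return_X_y=True)  # small stand-in for MNIST
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC())]:
    clf.fit(X_tr, y_tr)
    y_pred = clf.predict(X_te)
    print(name,
          f"accuracy={accuracy_score(y_te, y_pred):.3f}",
          f"precision={precision_score(y_te, y_pred, average='macro'):.3f}",
          f"recall={recall_score(y_te, y_pred, average='macro'):.3f}",
          f"F1={f1_score(y_te, y_pred, average='macro'):.3f}")
```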
|
426 |
Analýza cytologických snímků / Analysis of cytology images. Pavlík, Jan, January 2012
This master’s thesis is focused on automating the process of the differential leukocyte count in peripheral blood using image processing. It deals with the design of the processing of digital images, from scanning and image preprocessing through segmentation of the nucleus and cytoplasm to feature selection and classifier design, including testing on a set of images that were scanned in the context of this work. The work introduces the segmentation methods and classification procedures used to separate the nucleus and the cytoplasm of leukocytes. A statistical analysis is performed on the basis of these structures, and based on adequate statistical parameters a set of features is chosen. These data then pass through a classification process realized by three artificial neural networks. In total, five types of leukocytes were classified: neutrophils, lymphocytes, monocytes, eosinophils, and basophils. The sensitivity and specificity of the classification for 4 out of 5 leukocyte types (neutrophils, lymphocytes, monocytes, eosinophils) are higher than 90 %. The sensitivity of basophil classification was evaluated at 75 % and the specificity at 67 %. The overall classification was tested on 111 leukocytes and was approximately 91 % successful. All algorithms were implemented in MATLAB.
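The thesis works in MATLAB; as a hedged illustration in Python (for consistency with the other sketches here), the following computes the reported per-class sensitivity and specificity from a confusion matrix, with synthetic labels standing in for real classifier output:

```python
# Sketch: per-class sensitivity and specificity for a 5-class problem.
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["neutrophil", "lymphocyte", "monocyte", "eosinophil", "basophil"]
rng = np.random.default_rng(0)
y_true = rng.choice(classes, size=111)            # placeholder ground truth
y_pred = np.where(rng.random(111) < 0.91,         # ~91% agreement, as reported
                  y_true, rng.choice(classes, size=111))

cm = confusion_matrix(y_true, y_pred, labels=classes)
for i, name in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp                      # missed members of the class
    fp = cm[:, i].sum() - tp                      # false alarms for the class
    tn = cm.sum() - tp - fn - fp
    print(f"{name:11s} sensitivity={tp / (tp + fn):.2f} "
          f"specificity={tn / (tn + fp):.2f}")
```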
|
427 |
On Enhancement and Quality Assessment of Audio and Video in Communication Systems. Rossholm, Andreas, January 2014
The use of audio and video communication has increased exponentially over the last decade, going from speech over GSM to HD-resolution video conferencing between continents on mobile devices. As usage becomes more widespread, the interest in delivering high-quality media increases, even on devices with limited resources. This includes development and enhancement of the communication chain but also the topic of objective measurement of perceived quality. The focus of this thesis work has been to perform enhancement within speech encoding and video decoding, to measure influence factors of audio and video performance, and to build methods to predict perceived video quality. The audio enhancement part of this thesis addresses the well-known problem in the GSM system of an interfering signal generated by the switching nature of TDMA cellular telephony. Two different solutions are given to suppress such interference internally in the mobile handset: the first involves subtractive noise cancellation employing correlators, the second a structure of IIR notch filters. Both solutions use control algorithms based on the state of the communication between the mobile handset and the base station. The video enhancement part presents two post-filters, designed to improve the visual quality of highly compressed video streams from standard, block-based video codecs by combating both blocking and ringing artifacts; the second post-filter also performs sharpening. The third part addresses the problem of measuring audio and video delay as well as the skew between them, also known as synchronization. This method is a black-box technique, which enables it to be applied to any audiovisual application, proprietary as well as open standard, and it can be run on any platform and over any network connectivity. The last part addresses no-reference (NR) bitstream video quality prediction using features extracted from the coded video stream. Several methods have been used and evaluated: Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Least Squares Support Vector Machines (LS-SVM), showing high correlation with both MOS and objective video assessment methods such as PSNR and PEVQ. The impact of temporal, spatial and quantization variations on perceptual video quality has also been addressed, together with the trade-off between these; for this purpose a set of locally conducted subjective experiments was performed.
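A hypothetical sketch of the final, no-reference prediction step, with scikit-learn's SVR standing in for LS-SVM and invented placeholder features standing in for the actual bitstream parameters:

```python
# Sketch: regress a quality score on coded-bitstream features and report
# the Pearson correlation of predictions with MOS, as in the thesis.
import numpy as np
from sklearn.svm import SVR  # stand-in for LS-SVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Placeholder columns: e.g. mean QP, bitrate, frame rate, % intra blocks,
# mean motion-vector magnitude (names assumed, not from the thesis).
X = rng.random((200, 5))
mos = 1 + 4 * X @ np.array([0.4, 0.3, 0.1, 0.1, 0.1])  # synthetic MOS in [1, 5]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], mos[:150])
pred = model.predict(X[150:])
r, _ = pearsonr(pred, mos[150:])
print(f"Pearson correlation with MOS: {r:.3f}")
```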
|
428 |
Odor coding and memory traces in the antennal lobe of honeybee. Galan, Roberto Fernandez, 17 December 2003
Two major novel results are reported in this work. The first concerns olfactory coding and the second concerns sensory memory. Both phenomena are investigated in the brain of the honeybee as a model system. Considering olfactory coding, I demonstrate that the neural dynamics in the antennal lobe describe odor-specific trajectories during stimulation that converge to odor-specific attractors. The time interval to reach these attractors is, regardless of odor identity and concentration, approximately 800 ms. I show that support-vector machines, and in particular perceptrons, provide a realistic and biological model of the interaction between the antennal lobe (coding network) and the mushroom body (decoding network). This model can also account for reaction times of about 300 ms and for concentration invariance of odor perception. Regarding sensory memory, I show that a single stimulation without reward induces changes of pairwise correlation between glomeruli in a Hebbian-like manner. I demonstrate that those changes of correlation suffice to retrieve the last stimulus presented in 2/3 of the bees studied. Successful retrieval decays to 1/3 of the bees within the second minute after stimulation. In addition, a principal-component analysis of the spontaneous activity reveals that the dominant pattern of the network during the spontaneous activity after, but not before, stimulation reproduces the odor-induced activity pattern in 2/3 of the bees studied. One can therefore consider the odor-induced (changes of) correlation as traces of a short-term memory or as Hebbian reverberations.
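An illustrative sketch of the principal-component check described above, on synthetic data (real glomerular imaging data would replace the random arrays):

```python
# Sketch: does the dominant PC of post-stimulation spontaneous activity
# resemble the odor-induced glomerular pattern (a Hebbian reverberation)?
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_glomeruli, n_timepoints = 30, 500

odor_pattern = rng.standard_normal(n_glomeruli)   # odor-evoked pattern
noise = rng.standard_normal((n_timepoints, n_glomeruli))
# Simulated post-stimulation activity: noise plus a weak, fluctuating
# reappearance of the odor pattern (the hypothesized memory trace).
post = noise + 0.5 * rng.standard_normal((n_timepoints, 1)) * odor_pattern

pc1 = PCA(n_components=1).fit(post).components_[0]
similarity = abs(np.corrcoef(pc1, odor_pattern)[0, 1])
print(f"|correlation| between PC1 and odor pattern: {similarity:.2f}")
```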
|
429 |
Sensory input encoding and readout methods for in vitro living neuronal networks. Ortman, Robert L., 06 July 2012
Establishing and maintaining successful communication stands as a critical prerequisite for achieving the goals of inducing and studying advanced computation in small-scale living neuronal networks. The following work establishes a novel and effective method for communicating arbitrary "sensory" input information to cultures of living neurons, living neuronal networks (LNNs), consisting of approximately 20 000 rat cortical neurons plated on microelectrode arrays (MEAs) containing 60 electrodes. The sensory coding algorithm determines a set of effective codes (symbols), comprised of different spatio-temporal patterns of electrical stimulation, to each of which the LNN consistently produces a unique response. The algorithm evaluates random sequences of candidate electrical stimulation patterns for evoked-response separability and reliability via a support vector machine (SVM)-based method; employing the separability results as a fitness metric, a genetic algorithm then constructs subsets of highly separable symbols (input patterns). Sustainable input/output (I/O) bit rates of 16-20 bits per second with a 10% symbol error rate were achieved over periods ranging from approximately ten minutes to over ten hours. To further evaluate the resulting code sets' performance, I used the system to encode approximately ten hours of sinusoidal input into stimulation patterns selected by the algorithm, and was able to recover the original signal with a normalized root-mean-square error of 20-30% using only the recorded LNN responses and trained SVM classifiers. Response variations observed over the course of several hours in the sine-wave I/O experiment suggest that the LNNs may retain some short-term memory of the previous input sample and undergo neuroplastic changes in the context of repeated stimulation with the sensory coding patterns identified by the algorithm.
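A minimal sketch of the reconstruction metric, assuming a simulated decoded signal in place of real MEA recordings:

```python
# Sketch: normalized root-mean-square error between the sinusoidal input
# and the signal recovered from LNN responses (simulated here).
import numpy as np

t = np.linspace(0, 10, 1000)
original = np.sin(2 * np.pi * 0.5 * t)
# Stand-in for the SVM-decoded signal: original plus decoding noise.
decoded = original + np.random.default_rng(3).normal(0.0, 0.25, t.size)

rmse = np.sqrt(np.mean((decoded - original) ** 2))
nrmse = rmse / (original.max() - original.min())  # normalize by signal range
print(f"NRMSE: {nrmse:.1%}")  # ~13% here; the thesis reports 20-30%
```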
|
430 |
Geotechnical Site Characterization And Liquefaction Evaluation Using Intelligent Models. Samui, Pijush, 02 1900
Site characterization is an important task in geotechnical engineering. In-situ tests based on the standard penetration test (SPT), cone penetration test (CPT) and shear wave velocity survey are popular among geotechnical engineers. Characterizing a site from a finite number of in-situ test data is a central task in probabilistic site characterization. These methods have been used to design future soil sampling programs for the site and to specify the soil stratification. It is never possible to know the geotechnical properties at every location beneath an actual site because, in order to do so, one would need to sample and/or test the entire subsurface profile. Therefore, the main objective of site characterization models is to predict the subsurface soil properties with minimum in-situ test data. The prediction of soil properties is difficult due to uncertainties: spatial variability, measurement 'noise', measurement and model bias, and statistical error due to limited measurements.
Liquefaction of soil is another major problem in geotechnical earthquake engineering. It is defined as the transformation of a granular material from a solid to a liquefied state as a consequence of increased pore-water pressure and reduced effective stress. The generation of excess pore pressure under undrained loading conditions is a hallmark of all liquefaction phenomena. The phenomenon came to the attention of engineers especially after the Niigata (1964) and Alaska (1964) earthquakes. Liquefaction can cause building settlement or tipping, sand boils, ground cracks, landslides, dam instability, highway embankment failures, and other hazards. Such damage is generally of great concern to public safety and of economic significance. Site-specific evaluation of the liquefaction susceptibility of sandy and silty soils is the first step in liquefaction hazard assessment. Many methods (intelligent models as well as simple methods such as that of Seed and Idriss, 1971) have been suggested to evaluate liquefaction susceptibility based on large datasets from sites where soil has or has not liquefied.
The rapid advance of information processing systems in recent decades has directed engineering research towards the development of intelligent models that can model natural phenomena automatically. In an intelligent model, a process of training is used to build up a model of the particular system, from which it is hoped to deduce responses of the system for situations that have yet to be observed. Intelligent models learn the input-output relationship from the data itself; the quantity and quality of the data govern the performance of the intelligent model. The objective of this study is to develop intelligent models [geostatistical, artificial neural network (ANN) and support vector machine (SVM)] to estimate the corrected standard penetration test (SPT) value, Nc, in the three-dimensional (3D) subsurface of Bangalore. The database consists of 766 boreholes spread over a 220 sq km area, with several SPT N values (uncorrected blow counts) in each of them, for a total of 3015 N values in the 3D subsurface of Bangalore. To get the corrected blow counts, Nc, various corrections, such as for overburden stress, size of borehole, type of sampler, hammer energy and length of connecting rod, have been applied to the raw N values. Using this large database of Nc values, three geostatistical models (simple kriging, ordinary kriging and disjunctive kriging) have been developed. Simple and ordinary kriging produce linear estimators, whereas disjunctive kriging produces a nonlinear estimator. Knowledge of the semivariogram of the Nc data is used in the kriging theory to estimate the values at points in the subsurface of Bangalore where field measurements are not available. The capability of disjunctive kriging to be a nonlinear estimator and an estimator of the conditional probability is explored. A cross-validation (Q1 and Q2) analysis is also done for the developed simple, ordinary and disjunctive kriging models. The results indicate that the disjunctive kriging model performs better than both the simple and ordinary kriging models.
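A hedged sketch of the ordinary kriging step, with an assumed exponential semivariogram and made-up borehole coordinates (the thesis fits the semivariogram to the actual Nc data):

```python
# Sketch: ordinary kriging of Nc at an unsampled point. Weights come from
# the semivariogram system with a unit-sum (Lagrange) constraint.
import numpy as np

def gamma(h, sill=25.0, range_=500.0, nugget=2.0):
    """Assumed exponential semivariogram; gamma(0) = 0 by definition."""
    return np.where(h > 0.0,
                    nugget + sill * (1.0 - np.exp(-3.0 * h / range_)),
                    0.0)

pts = np.array([[0.0, 0.0], [100.0, 50.0], [200.0, 0.0], [50.0, 150.0]])
nc = np.array([12.0, 18.0, 15.0, 22.0])    # Nc at neighbouring boreholes
target = np.array([80.0, 60.0])            # unsampled location

n = len(pts)
A = np.ones((n + 1, n + 1))                # kriging matrix + constraint row
A[:n, :n] = gamma(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
A[n, n] = 0.0
b = np.append(gamma(np.linalg.norm(pts - target, axis=1)), 1.0)

w = np.linalg.solve(A, b)                  # weights (last entry: multiplier)
print(f"kriged Nc at target: {w[:n] @ nc:.1f}")
```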
This study also describes two ANN modelling techniques applied to predict Nc at any point in the 3D subsurface of Bangalore. The first technique uses a four-layered feed-forward backpropagation (BP) model to approximate the function Nc = f(x, y, z), where x, y, z are the coordinates in the 3D subsurface of Bangalore. The second technique uses a generalized regression neural network (GRNN) that is trained with suitable spread(s) to approximate the same function. In the BP model, the transfer functions used in the first and second hidden layers are tansig and logsig, respectively, and logsig is used in the output layer. The maximum number of epochs was set to 30,000, and a Levenberg-Marquardt algorithm was used for training. The performance of the models obtained using both techniques is assessed in terms of prediction accuracy: the BP ANN model outperforms the GRNN model and all kriging models.
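A rough analogue of the BP model using scikit-learn's MLPRegressor; note that this substitutes tanh units and L-BFGS for the thesis's tansig/logsig layers and Levenberg-Marquardt training, and the borehole data are synthetic:

```python
# Sketch: a two-hidden-layer network approximating Nc = f(x, y, z).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
xyz = rng.random((500, 3)) * [220.0, 220.0, 30.0]  # synthetic coordinates
nc = 10 + 0.5 * xyz[:, 2] + rng.normal(0, 2, 500)  # Nc increasing with depth

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                 solver="lbfgs", max_iter=30000, random_state=0),
)
model.fit(xyz, nc)
print(model.predict([[110.0, 95.0, 12.0]]))  # Nc estimate at an unsampled point
```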
An SVM model, which is firmly based on statistical learning theory and uses a regression technique built on the ε-insensitive loss function, has also been adopted to predict Nc at any point in the 3D subsurface of Bangalore. The SVM implements the structural risk minimization principle (SRMP), which has been shown to be superior to the more traditional empirical risk minimization principle (ERMP) employed by many of the other modelling techniques. The present study also highlights the capability of SVM over the developed geostatistical models (simple kriging, ordinary kriging and disjunctive kriging) and ANN models.
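A short sketch of the SVM regression step with the ε-insensitive loss, again on synthetic stand-in data:

```python
# Sketch: epsilon-SVR for Nc = f(x, y, z); C and epsilon are illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
xyz = rng.random((500, 3)) * [220.0, 220.0, 30.0]
nc = 10 + 0.5 * xyz[:, 2] + rng.normal(0, 2, 500)

# epsilon sets the width of the insensitive tube; C trades model flatness
# against tube violations.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
svr.fit(xyz, nc)
print(svr.predict([[110.0, 95.0, 12.0]]))
```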
Further in this thesis, liquefaction susceptibility is evaluated from SPT, CPT and Vs data using BP-ANN and SVM. Intelligent models (based on ANN and SVM) are developed for the prediction of liquefaction susceptibility using SPT data from the 1999 Chi-Chi earthquake, Taiwan. Two models (MODEL I and MODEL II) are developed, using the SPT data from the work of Hwang and Yang (2001). In MODEL I, the cyclic stress ratio (CSR) and corrected SPT values (N1)60 are used to predict liquefaction susceptibility; in MODEL II, only the peak ground acceleration (PGA) and (N1)60 are used. Further, the generalization capability of MODEL II has been examined using different case histories available globally (global SPT data) from the work of Goh (1994).
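A hedged sketch of a MODEL I-style classifier, mapping {CSR, (N1)60} to a liquefaction label; the training data below are synthetic, not the Chi-Chi records of Hwang and Yang (2001):

```python
# Sketch: binary SVM classification of liquefaction susceptibility.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
csr = rng.uniform(0.05, 0.5, 300)             # cyclic stress ratio (demand)
n160 = rng.uniform(2.0, 40.0, 300)            # corrected SPT value (resistance)
# Crude synthetic boundary: high demand with low resistance liquefies.
liquefied = (csr > 0.006 * n160 + 0.1).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(np.column_stack([csr, n160]), liquefied)
print(clf.predict([[0.3, 10.0]]))             # 1 = predicted to liquefy
```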
This study also examines the capabilities of ANN and SVM to predict the liquefaction susceptibility of soils from CPT data obtained from the 1999 Chi-Chi earthquake, Taiwan. For the determination of liquefaction susceptibility, both ANN and SVM use a classification technique. The CPT data have been taken from the work of Ku et al. (2004). In MODEL I, cone tip resistance (qc) and CSR values are used to predict liquefaction susceptibility (using both ANN and SVM); in MODEL II, only PGA and qc are used. The developed MODEL II has also been applied to different case histories available globally (global CPT data) from the work of Goh (1996).
Intelligent models (ANN and SVM) have also been adopted for liquefaction susceptibility prediction based on shear wave velocity (Vs). The Vs data have been collected from the work of Andrus and Stokoe (1997), and the same procedures as for SPT and CPT have been applied.
SVM outperforms the ANN for all three models based on SPT, CPT and Vs data. The CPT method gives better results than SPT and Vs for both the ANN and SVM models. For CPT and SPT, two input parameters {PGA and qc or (N1)60} suffice to determine liquefaction susceptibility using the SVM model.
In this study, an attempt has also been made to evaluate geotechnical site characterization by carrying out in-situ tests using different techniques: CPT, SPT and multichannel analysis of surface waves (MASW). For this purpose, a typical site comprising both a man-made homogeneous embankment and natural ground was selected. In-situ tests (SPT, CPT and MASW) have been carried out in the different ground conditions and the results compared: three continuous CPT profiles, fifty-four SPT tests and nine MASW profiles with depth, covering both the homogeneous embankment and the natural ground. Relationships have been developed between the Vs, (N1)60 and qc values for this specific site; from the limited test results, a good correlation was found between qc and Vs. Liquefaction susceptibility is evaluated from the in-situ (N1)60, qc and Vs data using the ANN and SVM models, and has been shown to compare well with the Idriss and Boulanger (2004) approach based on SPT data.
An SVM model has also been adopted to determine the overconsolidation ratio (OCR) from piezocone data. A sensitivity analysis has been performed to investigate the relative importance of each input parameter. The SVM model outperforms all available methods for OCR prediction.
|