41. Fixed points, fractals, iterated function systems and generalized support vector machines / Qi, Xiaomin, January 2016
In this thesis, fixed point theory is used to construct fractal-type sets and to solve a data classification problem. The fixed point method, a beautiful mixture of analysis, topology and geometry, has proved to be a very powerful and important tool in the study of nonlinear phenomena, and the existence of fixed points is therefore of paramount importance in several areas of mathematics and other sciences. In particular, fixed point techniques have been applied in fields as diverse as biology, chemistry, economics, engineering, game theory and physics. Chapter 2 of this thesis demonstrates how to define and construct fractal-type sets with the help of iterations of a finite family of generalized F-contraction mappings, a class of mappings more general than contraction mappings, defined in the context of b-metric spaces. This leads to a variety of results for iterated function systems satisfying different sets of contractive conditions, which unify, generalize and extend various results in the existing literature. Chapter 3 considers the theory of support vector machines for linear and nonlinear classification of data and the notion of a generalized support vector machine. It is also shown that the generalized support vector machine problem can be treated in the framework of generalized variational inequalities, and results on the existence of solutions are established. / FUSION
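
The construction in Chapter 2 generalizes the classical picture in which the attractor of an iterated function system of contractions is the fixed point of the induced map on compact sets. As a much simpler illustration of that classical case (not the F-contraction/b-metric setting of the thesis), here is a minimal chaos-game sketch for the Sierpinski triangle; all names and values are illustrative:

```python
import random

# Three contractions w_i(x) = (x + v_i) / 2, each with ratio 1/2; the
# attractor of this iterated function system is the Sierpinski triangle.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

def chaos_game(n_points=50_000, seed=0):
    rng = random.Random(seed)
    x, y = 0.5, 0.5                    # any starting point converges
    points = []
    for i in range(n_points):
        vx, vy = rng.choice(VERTICES)  # apply a randomly chosen contraction
        x, y = (x + vx) / 2, (y + vy) / 2
        if i > 20:                     # discard the transient iterates
            points.append((x, y))
    return points

points = chaos_game()
print(len(points), points[:2])
```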

42. Sentiment analysis: text, pre-processing, reader views and cross domains / Haddi, Emma, January 2015
Sentiment analysis has emerged as a field that has attracted significant attention because its results benefit a wide variety of applications, such as news analytics, marketing, question answering and knowledge management. The area is still early in its development, however, and improvements are urgently needed on many issues, particularly the performance of sentiment classification. In this thesis, three key challenges affecting sentiment classification are outlined and innovative ways of addressing them are presented. First, text pre-processing has been found to be crucial to sentiment classification performance; consequently, a combination of several existing pre-processing methods is proposed for the sentiment classification process. Second, text properties of financial news are used to build models that predict sentiment. Two models are proposed: one uses financial events to predict financial-news sentiment, and the other takes a new perspective that considers the opinion reader's view, as opposed to the classic approach that examines the opinion holder's view; a new method to capture reader sentiment is suggested. Third, financial news stretches over a number of domains, and inferring sentiment across different domains is very challenging; various approaches for cross-domain sentiment analysis are proposed and critically evaluated.
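
The thesis's specific pre-processing combination and financial-news models are its own; as a generic sketch of the kind of pipeline involved (pre-processing feeding a sentiment classifier), here is a minimal scikit-learn example in which all documents and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus standing in for labelled financial-news sentences.
docs = ["profits surged past forecasts", "shares plunged on weak results",
        "revenue grew strongly", "the firm warned of mounting losses"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# Pre-processing (lowercasing, stop-word removal, TF-IDF weighting)
# feeding a linear SVM classifier.
model = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    LinearSVC(),
)
model.fit(docs, labels)
print(model.predict(["profits grew strongly"]))  # likely [1]
```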

43. A study of the temporal relationship between eye actions and facial expressions / Rupenga, Moses, January 2017
A dissertation submitted in fulfillment of the requirements for the degree of Master of Science in the School of Computer Science and Applied Mathematics, Faculty of Science, August 15, 2017. / Facial expression recognition is one of the most common means of communication used to complement the spoken word. However, people have grown to master ways of exhibiting deceptive expressions, so it is imperative to understand differences in expressions, mostly for security purposes among others. Traditional methods employ machine learning techniques to differentiate real and fake expressions; this approach does not always work, however, as human subjects can easily mimic real expressions with a bit of practice. This study presents an approach that evaluates the time-related distance that exists between eye actions and an exhibited expression, giving insight into some of the most fundamental characteristics of expressions. The study focuses on finding and understanding the temporal relationship between eye blinks and smiles, and further examines the relationship between eye closure and pain expressions. It incorporates active appearance models (AAM) for feature extraction and support vector machines (SVM) for classification, and also tests extreme learning machines (ELM) in both the smile and pain studies, where they attain better results than predominant algorithms such as the SVM. The study shows that eye blinks are highly correlated with the beginning of a smile in posed smiles, whereas eye blinks are highly correlated with the end of a smile in spontaneous smiles. A high correlation is also observed between eye closure and pain in spontaneous pain expressions. Furthermore, this study brings out ideas that lead to potential applications such as lie detection systems, robust health care monitoring systems and enhanced animation design systems, among others. / MT 2018
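
The temporal analysis hinges on measuring the lag between an eye action and the expression it accompanies. A minimal sketch of that measurement, assuming binary per-frame detector outputs and an illustrative 30 fps frame rate (the dissertation's actual pipeline uses AAM features with SVM/ELM classifiers, not shown here):

```python
import numpy as np

FPS = 30
# Toy per-frame binary detector outputs (1 = action active), standing in
# for what a trained classifier would emit for each video frame.
blink = np.zeros(300, dtype=int)
smile = np.zeros(300, dtype=int)
blink[40:45] = 1    # blink starts at frame 40
smile[50:200] = 1   # smile onset at frame 50

def onset(signal):
    """Frame index where the action first becomes active."""
    return int(np.argmax(signal == 1))

lag_frames = onset(smile) - onset(blink)
print(f"blink precedes smile onset by {lag_frames} frames "
      f"= {lag_frames / FPS:.2f} s at {FPS} fps")
```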

44. Designing energy-efficient computing systems using equalization and machine learning / Takhirov, Zafar, 20 February 2018
As technology scaling slows down in the nanometer CMOS regime and mobile computing becomes more ubiquitous, designing energy-efficient hardware for mobile systems is becoming increasingly critical and challenging. Although various approaches such as near-threshold computing (NTC) and aggressive voltage scaling with shadow latches have been proposed to get the most out of limited battery life, there is still no “silver bullet” for meeting the increasing power-performance demands of mobile systems. Moreover, given that a mobile system may operate in a variety of environmental conditions (different temperatures, varying performance requirements, and so on), there is a growing need for tunable/reconfigurable designs in order to achieve energy-efficient operation. In this work we propose to address the energy-efficiency problem of mobile systems using two different approaches: circuit tunability and distributed adaptive algorithms.
Inspired by communication systems, we developed feedback-equalization-based digital logic that changes the threshold of its gates based on the input pattern. We showed that feedback equalization in static complementary CMOS logic enables up to a 20% reduction in energy dissipation while maintaining the performance metrics, and we achieved a 30% reduction in energy dissipation for pass-transistor logic (PTL) with equalization, again while maintaining performance. In addition, we proposed a mechanism that leverages feedback equalization techniques to achieve near-optimal operation of static complementary CMOS logic blocks over the entire voltage range, from near-threshold to nominal supply voltage. Using the energy-delay product (EDP) as a metric, we analyzed the use of the feedback equalizer as part of various sequential computational blocks. Our analysis shows that for near-threshold operation with equalization, the operating frequency can be improved by up to 30% while the energy increase is less than 15%, for an overall EDP reduction of ≈10%. We also observe an EDP reduction of close to 5% across the entire above-threshold voltage range.
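
The ≈10% EDP figure is consistent with the two headline numbers; a quick arithmetic check, modelling delay simply as the reciprocal of frequency (the input values are taken from the abstract):

```python
# Sanity check of the reported energy-delay-product (EDP) figures.
# EDP = energy * delay, and delay scales as 1/frequency.
freq_gain = 1.30     # operating frequency up to 30% higher
energy_cost = 1.15   # energy up by (less than) 15%

delay_factor = 1 / freq_gain        # ~0.769
edp_factor = energy_cost * delay_factor
print(f"EDP factor: {edp_factor:.3f} "
      f"-> ~{(1 - edp_factor) * 100:.1f}% reduction")  # ~11.5%, i.e. ~10%
```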
On the distributed adaptive algorithm front, we explored energy-efficient hardware implementations of machine learning algorithms. We proposed an adaptive classifier that leverages the wide variability in data complexity to enable energy-efficient data classification for mobile systems. Our approach takes advantage of varying classification hardness across data to dynamically allocate resources and improve energy efficiency. On average, across a wide range of classification data sets, our adaptive classifier is ≈100× more energy efficient with ≈1% higher error rate than a complex radial basis function classifier, and ≈10× less energy efficient with ≈40% lower error rate than a simple linear classifier. We also developed a field of groves (FoG) implementation of random forests (RF) that achieves accuracy comparable to convolutional neural networks (CNN) and support vector machines (SVM) under tight energy budgets. The FoG architecture exploits the fact that in random forests a small portion of the weak classifiers (decision trees) may be sufficient to achieve high statistical performance: by dividing the random forest into smaller forests (groves) and conditionally executing the rest of the forest, FoG achieves much higher energy efficiency at comparable error rates. We also take advantage of the distributed nature of the FoG to achieve a high level of parallelism. Our evaluation shows that at maximum achievable accuracies FoG consumes ≈1.48×, ≈24×, ≈2.5×, and ≈34.7× lower energy per classification than conventional RF, SVM-RBF, multi-layer perceptron (MLP), and CNN, respectively. FoG is 6.5× less energy efficient than SVM-LR but achieves 18% higher accuracy on average across all considered datasets.
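
A minimal sketch of the conditional-execution idea behind FoG, under simple assumptions: a cheap grove answers the confidently classified samples and the full forest is consulted only otherwise. The grove sizes and the 0.8 confidence threshold are illustrative, not the thesis's parameters:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data standing in for a real classification workload.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
Xtr, ytr, Xte = X[:300], y[:300], X[300:]

grove = RandomForestClassifier(n_estimators=8, random_state=0).fit(Xtr, ytr)
full = RandomForestClassifier(n_estimators=128, random_state=0).fit(Xtr, ytr)

# Run the cheap grove on everything; escalate to the full forest only
# when the grove's class-probability estimate is not confident.
proba = grove.predict_proba(Xte)
confident = proba.max(axis=1) >= 0.8
pred = np.where(confident, proba.argmax(axis=1), full.predict(Xte))
print(f"{confident.mean():.0%} of samples decided by the cheap grove")
```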

45. Rating de risco de projetos de inovação tecnológica: uma proposta através da aplicação das Support Vector Machines (Risk rating of technological innovation projects: a proposal applying Support Vector Machines) / Guimarães Júnior, Djalma Silva, 31 January 2010
Previous issue date: 2010 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / A technological innovation project consists of a series of analyses and procedures whose purpose is to estimate the value of a technology, that is, to generate an estimate of the future income that the venture or technology may provide. The traditional investment-analysis approach for this category of project is limited in two respects: (1) estimating the value of the technology requires incorporating qualitative variables that this modelling does not consider; and (2) the projected cash-flow estimates are highly variable, owing to the different categories of risk inherent in this type of project. Starting from this limitation in the state of the art, this exploratory research applies a rating methodology as an alternative for evaluating innovation projects, since a rating-based classification system has the flexibility needed to incorporate qualitative variables that can help measure the value of the technology, and it provides a series of procedures for estimating the risk of such projects. This application of the rating methodology yields the Risk Classification System for Innovation Projects (SCRP), which, from a sample of 40 industrial investment projects provided by Banco do Nordeste do Brasil together with sectoral, macroeconomic and technological indicators, produces a viability and risk classification for those projects. Support Vector Machines, an artificial-intelligence technique with successful results in several areas of finance, including ratings, are introduced in this research to test the classification generated by the SCRP. The SVM application used the LIBSVM code and Matlab software. The classification obtained by the SCRP showed an average fit of 83.6% against the 10 best projects ranked by the internal rate of return (IRR) criterion and an average fit of 87.6% against the 8 worst projects ranked by the net present value (NPV) criterion, while the classification obtained with the SVM achieved an accuracy of 37.5% on the test data.
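
The thesis ran LIBSVM from Matlab; a rough equivalent sketch in Python uses scikit-learn's SVC, which wraps LIBSVM. The data below is randomly generated, shaped only loosely like the 40-project sample, and the rating classes are invented:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the 40-project sample: each row holds sectoral,
# macroeconomic and technological indicators; each label a rating class.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
y = rng.integers(0, 3, size=40)   # e.g. three rating classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

scaler = StandardScaler().fit(X_tr)             # LIBSVM-style scaling
clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # sklearn's SVC wraps LIBSVM
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```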

46. Metodologia computacional para detecção e diagnóstico automáticos e planejamento cirúrgico do estrabismo (Computational methods for detection and automatic diagnosis and surgical planning of strabismus) / ALMEIDA, João Dallyson Sousa de, 05 July 2013
Previous issue date: 2013-07-05 / Strabismus is a condition that affects approximately 4% of the population, causing aesthetic problems, reversible at any age, and irreversible sensory changes that modify the mechanism of vision. The Hirschberg test is one of the existing tests for detecting this condition. Computer-aided detection and diagnosis systems are being used with some success to help health professionals, but despite the increasing routine use of high technology, computer-aided diagnosis and therapy in ophthalmology is not yet a reality within the strabismus subspecialty. This thesis therefore presents a methodology to detect and automatically diagnose strabismus, and to propose its surgical plan, from digital images. The study is organized in seven steps: (1) face segmentation; (2) eye-region detection; (3) eye location; (4) limbus and brilliance location; (5) detection, (6) diagnosis and (7) surgical planning of strabismus. The effectiveness of the method in indicating the diagnosis and the surgical plan was evaluated by the mean difference between the results provided by the methodology and the original indication of the expert. Patients were evaluated in the gaze positions PPO, INFRA, SUPRA, DEXTRO and LEVO. The method was 88% accurate in identifying esotropias (ET), 100% in exotropias (XT), 80.33% in hypertropias (HT) and 83.33% in hypotropias (HoT). The overall average diagnostic error was 5.6 and 3.83 for horizontal and vertical deviations, respectively. In planning medial rectus muscle surgery the average error was 0.6 mm for recession and 0.9 mm for resection; for lateral rectus muscles, the average error was 0.8 mm for recession and 1 mm for resection.
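
Steps (4)-(5) amount to a photographic Hirschberg test: the deviation is estimated from the displacement of the corneal light reflex ("brilliance") relative to the limbus centre. A minimal sketch, with an assumed Hirschberg ratio (values of roughly 15-22 prism dioptres per mm appear in the literature) and an illustrative sign convention; none of this is the thesis's exact computation:

```python
# Assumed conversion factor; the literature reports roughly 15-22
# prism dioptres (PD) of deviation per mm of reflex decentration.
HIRSCHBERG_RATIO_PD_PER_MM = 22.0

def estimate_deviation(limbus_centre_mm, reflex_mm):
    """Signed horizontal deviation in prism dioptres.

    A nasally displaced reflex suggests exotropia, a temporally
    displaced one esotropia; the sign convention here is illustrative.
    """
    offset_mm = reflex_mm - limbus_centre_mm
    return offset_mm * HIRSCHBERG_RATIO_PD_PER_MM

print(estimate_deviation(limbus_centre_mm=0.0, reflex_mm=0.9))  # ~19.8 PD
```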

47. Learning to rank documents with support vector machines via active learning / Arens, Robert James, 01 December 2009
Navigating through the debris of the information explosion requires powerful, flexible search tools. These tools must be both useful and usable; that is, they must do their jobs effectively without placing too many burdens on the user. While general-interest search engines, such as Google, have addressed this latter challenge well, more topic-specific search engines, such as PubMed, have not. These search engines, though effective, often require training in their use as well as in-depth knowledge of the domain over which they operate. Furthermore, search results are often returned in an order irrespective of users' preferences, forcing users to hunt through the results manually for the documents they find most useful.
To solve these problems, we intend to learn ranking functions from user relevance preferences. Applying these ranking functions to search results allows us to improve search usability without having to reengineer existing, effective search engines. Using ranking SVMs and active learning techniques, we can effectively learn what is relevant to a user from relatively small amounts of preference data, and apply these learned models as ranking functions. This gives users the convenience of seeing relevance-ordered search results, which are tailored to their preferences as opposed to using a one-size-fits-all sorting method. As giving preference feedback does not require in-depth domain knowledge, this approach is suitable for use by domain experts as well as neophytes. Furthermore, giving preference feedback does not require a great deal of training, adding very little overhead to the search process.
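
A common way to realize a ranking SVM is the pairwise transform: each preference "document a over document b" becomes a binary example on the feature difference, and the learned weight vector then scores and sorts documents. A minimal sketch on invented data (the thesis's active-learning loop for choosing which preferences to ask about is not shown):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
docs = rng.normal(size=(6, 4))            # 6 documents, 4 features each
prefs = [(0, 3), (1, 4), (2, 5), (0, 4)]  # (preferred, less relevant)

# Pairwise transform: x_a - x_b labelled +1, x_b - x_a labelled -1.
X = np.array([docs[a] - docs[b] for a, b in prefs] +
             [docs[b] - docs[a] for a, b in prefs])
y = np.array([1] * len(prefs) + [-1] * len(prefs))

svm = LinearSVC(C=1.0).fit(X, y)
scores = docs @ svm.coef_.ravel()         # higher score = ranked higher
print("ranking:", np.argsort(-scores))
```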

48. Novel Application of Neutrosophic Logic in Classifiers Evaluated under Region-Based Image Categorization System / Ju, Wen, 01 May 2011
Neutrosophic logic is a relatively new logic that generalizes fuzzy logic. In this dissertation, for the first time, neutrosophic logic is applied to the field of classifiers, where a support vector machine (SVM) is adopted as the example to validate the feasibility and effectiveness of neutrosophic logic. The proposed neutrosophic set is integrated into a reformulated SVM, and the performance of the resulting classifier, N-SVM, is evaluated under an image categorization system. Image categorization is an important yet challenging research topic in computer vision. In this dissertation, images are first segmented by a hierarchical two-stage self-organizing map (HSOM) using color and texture features, and a novel approach is proposed to select the training samples of the HSOM based on homogeneity properties. A diverse density support vector machine (DD-SVM) framework that extends the multiple-instance learning (MIL) technique is then applied to the image categorization problem by viewing an image as a bag of instances corresponding to the regions obtained from image segmentation. Using the instance prototypes, every bag is mapped to a point in the new bag space, and categorization is transformed into a classification problem. The proposed N-SVM based on the neutrosophic set is then used as the classifier in the new bag space. N-SVM treats samples differently according to a weighting function, which helps reduce the effect of outliers. Experimental results on a COREL dataset of 1000 general-purpose images and a Caltech 101 dataset of 9000 images demonstrate the validity and effectiveness of the proposed method.
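
The N-SVM's key mechanism is per-sample weighting so that outliers influence the decision boundary less. A minimal sketch using scikit-learn's sample_weight, with a crude distance-based weight standing in for the neutrosophic membership (which the dissertation defines quite differently); all data is synthetic:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
X[0] = (6.0, 6.0)   # plant an outlier in class 0

# Heuristic stand-in for a membership function: the farther a sample
# lies from its class mean, the smaller its training weight.
weights = np.ones(len(X))
for c in (0, 1):
    d = np.linalg.norm(X[y == c] - X[y == c].mean(axis=0), axis=1)
    weights[y == c] = 1.0 / (1.0 + d)

clf = SVC(kernel="rbf").fit(X, y, sample_weight=weights)
print("outlier weight:", round(weights[0], 3))  # much smaller than 1
```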

49. On Web Taxonomy Integration / Zhang, Dell; Lee, Wee Sun, 01 1900
We address the problem of integrating objects from a source taxonomy into a master taxonomy. This problem is not only pervasive on today's web, but also important to the emerging semantic web. A straightforward approach to automating this process would be to train a classifier for each category in the master taxonomy, and then classify objects from the source taxonomy into these categories. In this paper we attempt to attack this problem with a powerful classification method, the Support Vector Machine (SVM). Our key insight is that the availability of the source taxonomy data can help build better classifiers in this scenario; it is therefore beneficial to do transductive learning rather than inductive learning, i.e., to learn so as to optimize classification performance on a particular set of test examples. Noticing that the categorizations of the master and source taxonomies often have some semantic overlap, we propose a new method, Cluster Shrinkage (CS), to further enhance the classification by exploiting such implicit knowledge. Our experiments with real-world web data show substantial improvements in the performance of taxonomy integration. / Singapore-MIT Alliance (SMA)
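
The "straightforward approach" the paper starts from is easy to make concrete: train one classifier per master category and apply it to source-taxonomy objects. A minimal sketch on invented data (the paper's actual contributions, transductive learning and Cluster Shrinkage, are not reproduced here):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Master-taxonomy objects with their categories (toy data).
master_docs = ["laptop gpu benchmark review", "premier league match report",
               "quarterly earnings beat forecasts", "transfer window rumours"]
master_cats = ["tech", "sport", "business", "sport"]

vec = TfidfVectorizer().fit(master_docs)
clf = LinearSVC().fit(vec.transform(master_docs), master_cats)

# Source-taxonomy objects classified into master categories.
source_docs = ["new gpu benchmark", "premier league transfer rumours"]
print(clf.predict(vec.transform(source_docs)))  # e.g. ['tech' 'sport']
```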

50. Improving Multiclass Text Classification with the Support Vector Machine / Rennie, Jason D. M.; Rifkin, Ryan, 16 October 2001
We compare Naive Bayes and Support Vector Machines on the task of multiclass text classification. Using a variety of approaches to combine the underlying binary classifiers, we find that SVMs substantially outperform Naive Bayes. We present full multiclass results on two well-known text data sets, including the lowest error to date on both data sets. We develop a new indicator of binary performance to show that the SVM's lower multiclass error is a result of its improved binary performance. Furthermore, we demonstrate and explore the surprising result that one-vs-all classification performs favorably compared to other approaches even though it has no error-correcting properties.
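
A minimal sketch of the one-vs-all scheme the paper finds surprisingly strong: one binary SVM per class, predicting the class whose SVM scores highest. 20 Newsgroups is used here only as a stand-in corpus, and the accuracy printed is not a number from the paper:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Downloads the corpus on first run.
train = fetch_20newsgroups(subset="train", remove=("headers", "footers"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers"))

vec = TfidfVectorizer(sublinear_tf=True).fit(train.data)

# One binary LinearSVC per class; prediction takes the highest score.
ova = OneVsRestClassifier(LinearSVC(C=1.0))
ova.fit(vec.transform(train.data), train.target)
print("test accuracy:", ova.score(vec.transform(test.data), test.target))
```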