651 |
Mahalanobis kernel-based support vector data description for detection of large shifts in mean vector
Nguyen, Vu 01 January 2015 (has links)
Statistical process control (SPC) applies the science of statistics to process control in order to provide higher-quality products and better services. The K chart is one of the many important tools that SPC offers. The K chart is based on Support Vector Data Description (SVDD), a popular data-classification method inspired by Support Vector Machines (SVM). Like any method derived from SVM, SVDD benefits from a wide variety of kernel choices, which determine the effectiveness of the whole model. Among the most popular choices is the Euclidean distance-based Gaussian kernel, which gives SVDD a flexible data description and thus enhances its overall predictive capability. This thesis explores an even more robust approach by incorporating the Mahalanobis distance-based kernel (hereinafter referred to as the Mahalanobis kernel) into SVDD and comparing it with SVDD using the traditional Gaussian kernel. The method's sensitivity is benchmarked by Average Run Lengths obtained from multiple Monte Carlo simulations. Data for these simulations are generated from multivariate normal, multivariate Student's t, and multivariate gamma populations using R, a popular software environment for statistical computing. One case study is also discussed, using a real data set received from the Halberg Chronobiology Center. Compared to the Gaussian kernel, the Mahalanobis kernel makes SVDD, and thus the K chart, significantly more sensitive to shifts in the mean vector, and also in the covariance matrix.
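The kernel swap at the heart of this abstract can be sketched in a few lines. The code below is an illustrative reconstruction, not the thesis's actual implementation: the toy covariance matrix and the width parameter are invented, and when the covariance is the identity the Mahalanobis kernel reduces to the usual Gaussian kernel.

```python
import math

def gaussian_kernel(x, y, s):
    """Euclidean-distance Gaussian kernel: exp(-||x - y||^2 / s^2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / s ** 2)

def mahalanobis_kernel(x, y, s, cov_inv):
    """Mahalanobis-distance kernel: exp(-(x - y)^T S^{-1} (x - y) / s^2),
    where S is the sample covariance matrix of the training data."""
    d = [a - b for a, b in zip(x, y)]
    # quadratic form d^T S^{-1} d
    d2 = sum(d[i] * cov_inv[i][j] * d[j]
             for i in range(len(d)) for j in range(len(d)))
    return math.exp(-d2 / s ** 2)

def inv2x2(m):
    """Inverse of a 2x2 covariance matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

# toy example: correlated 2-D data (invented covariance)
cov = [[2.0, 0.8], [0.8, 1.0]]
ci = inv2x2(cov)
kg = gaussian_kernel([0, 0], [1, 1], 2.0)
km = mahalanobis_kernel([0, 0], [1, 1], 2.0, ci)
```

Because the Mahalanobis form whitens the data by the inverse covariance, correlated directions are no longer treated as equally distant, which is what makes the resulting data description more sensitive to mean shifts along correlated axes.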
|
652 |
Infection Prevalence in a Novel Ixodes scapularis Population in Northern Wisconsin
Westwood, Mary Lynn 30 August 2017 (has links)
No description available.
|
653 |
Towards Mosquitocides for Prevention of Vector-Borne Infectious Diseases : Discovery and Development of Acetylcholinesterase 1 Inhibitors / Mot nya insekticider för bekämpning av sjukdomsbärande myggor : identifiering och utveckling av acetylkolinesteras 1 inhibitorer
Knutsson, Sofie January 2016 (has links)
Diseases such as malaria and dengue impose great economic burdens and are a serious threat to public health, with young children being among the worst affected. These diseases are transmitted by mosquitoes, also called disease vectors, which are able to transmit both parasitic and viral infections. One of the most important strategies in the battle against mosquito-borne diseases is vector control by insecticides, and the goal is to prevent people from being bitten by mosquitoes. Today's vector control methods are seriously threatened by the development and spread of insecticide-resistant mosquitoes, warranting the search for new insecticides. This thesis has investigated the possibilities of vector control using non-covalent inhibitors targeting acetylcholinesterase (AChE), an essential enzyme present in mosquitoes as well as in humans and other mammals. A key requirement for such compounds to be considered safe and suitable for development into new public health insecticides is selectivity towards the mosquito enzyme AChE1. The work presented here is focused on AChE1 from the disease-transmitting mosquitoes Anopheles gambiae (AgAChE1) and Aedes aegypti (AaAChE1), and their human (hAChE) and mouse (mAChE) counterparts. By taking a medicinal chemistry approach and utilizing high-throughput screening (HTS), new chemical starting points have been identified. Analysis of the combined results of three different HTS campaigns targeting AgAChE1, AaAChE1, and hAChE allowed the identification of several mosquito-selective inhibitors, and a number of compound classes were selected for further development. These compounds are non-covalent inhibitors of AChE1 and thereby work via a different mechanism compared to current anti-cholinergic insecticides, whose activity is the result of a covalent modification of the enzyme.
The potency and selectivity of two compound classes have been explored in depth using a combination of different tools including design, organic synthesis, biochemical assays, protein X-ray crystallography and homology modeling. Several potent inhibitors with promising selectivity for the mosquito enzymes have been identified and the insecticidal activity of one new compound has been confirmed by in vivo experiments on mosquitoes. The results presented here contribute to the field of public health insecticide discovery by demonstrating the potential of selectively targeting mosquito AChE1 using non-covalent inhibitors. Further, the presented compounds can be used as tools to study mechanisms important in insecticide development, such as exoskeleton penetration and other ADME processes in mosquitoes.
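Selectivity of the kind discussed above is commonly summarized as a selectivity index, the ratio of the off-target (human) potency to the target (mosquito) potency. The sketch below is illustrative only: the compound names, IC50 values, and the 100-fold threshold are hypothetical, not results from the thesis.

```python
def selectivity_index(ic50_target, ic50_offtarget):
    """Ratio of off-target to target IC50; values >> 1 mean the compound
    inhibits the target enzyme (here AChE1) far more potently than the
    off-target enzyme (here human AChE)."""
    return ic50_offtarget / ic50_target

# hypothetical screening results (IC50 in micromolar, invented numbers)
compounds = {
    "cmpd_A": {"AgAChE1": 0.05, "hAChE": 25.0},
    "cmpd_B": {"AgAChE1": 1.0,  "hAChE": 2.0},
}

# keep only compounds at least 100-fold selective for the mosquito enzyme
selective = [name for name, ic in compounds.items()
             if selectivity_index(ic["AgAChE1"], ic["hAChE"]) >= 100]
```

A cutoff like the one used here is a triage device for HTS hits: it concentrates follow-up chemistry on scaffolds whose selectivity window is wide enough to survive the potency trade-offs of later optimization.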
|
654 |
Normalisation de champs de vecteurs holomorphes et équations différentielles implicites / Normalization of holomorphic vector fields and implicit differential equations
Aurouet, Julien 06 December 2013 (has links)
La théorie classique des formes normales a pour but de simplifier des problèmes compliqués grâce à des changements de coordonnées réguliers pour ne conserver que les caractéristiques dynamiques du système. Plus précisément, on considère un système dynamique que l'on dit "élémentaire", comme par exemple la partie linéaire d'un champ de vecteurs au voisinage d'un point singulier, et on se donne une perturbation de ce système élémentaire. Les formes normales sont alors l'ensemble des représentants de ces perturbations à la conjugaison près d'une transformation régulière. Elles ne sont constituées que des termes qui caractérisent la dynamique du système perturbé et que l'on appelle "résonances". Dans la première partie de la thèse on cherche à comprendre la dynamique locale d'équations différentielles implicites de la forme F(x,y,y')=0, où F est un germe de fonction holomorphe au voisinage d'un point singulier. Pour cela on utilise la relation intime entre les systèmes implicites et les champs liouvilliens. La classification par transformation de contact des équations implicites provient de la classification symplectique des champs liouvilliens. On utilise alors toute la théorie des formes normales pour les champs de vecteurs, dans le cas holomorphe (Brjuno, Siegel, Stolovitch) et dans le cas réel (Sternberg), que l'on adapte pour les champs liouvilliens avec des transformations symplectiques. On établit alors des résultats de classification des équations implicites en fonction des invariants dynamiques, ainsi que des conditions d'existence de solutions locales via les formes normales. / The aim of the classical theory of normal forms is to simplify complicated problems by using regular changes of coordinates, in order to keep only the dynamical characteristics of the system. More precisely, we consider a dynamical system said to be "elementary", such as the linear part of a vector field in the neighborhood of a singular point, and we consider a perturbation of this elementary system. Normal forms are the set of representatives of those perturbations under the action of the group of regular transformations. They are composed only of the terms which characterize the dynamics of the perturbed system, and which are called "resonances". In the first part, we try to understand the local dynamics of implicit equations of the form $F(x,y,y')=0$, where $F$ is a germ of a holomorphic function in a neighborhood of a singular point. To this end we use the close relation between implicit systems and Liouvillian vector fields. The classification of implicit equations by contact transformations comes from the symplectic classification of Liouvillian vector fields. We use the whole theory of normal forms for vector fields, in the holomorphic case (Brjuno, Siegel, Stolovitch) and in the real case (Sternberg), adapted to Liouvillian fields with symplectic transformations. We establish classification results for implicit equations according to the dynamical invariants, as well as existence conditions for local solutions via normal forms. In the second part, we undertake the normalization of an analytic vector field in a neighborhood of a torus. Brjuno stated a normalization theorem, under conditions on the control of small divisors and on the integrability of the normal form; however, he did not give a proof of that theorem.
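The "resonances" referred to in this abstract have a standard description in the normal-form literature; as a sketch (this is the classical Poincaré-Dulac setting, not a formula taken from the thesis), for a vector field whose linear part is diagonal with eigenvalues λ₁, …, λₙ:

```latex
% Linear part diag(\lambda_1,\dots,\lambda_n); a monomial term
% x^m e_j (with m \in \mathbb{N}^n, |m| \ge 2) in the j-th
% component is resonant if and only if
\langle m, \lambda \rangle - \lambda_j
  \;=\; \sum_{k=1}^{n} m_k \lambda_k \;-\; \lambda_j \;=\; 0 .
% Poincare-Dulac: a formal change of coordinates removes every
% non-resonant term, so the normal form retains only resonant
% monomials -- the terms "which characterize the dynamics".
```

Convergence of such normalizing transformations is exactly where the small-divisor conditions of Brjuno and Siegel, cited in the abstract, enter.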
|
655 |
QCD analysis of deep inelastic lepton-hadron scattering in the region of small values of the Bjorken parameter x
Staśto, Anna January 1999 (has links)
We present a new framework based on the BFKL and DGLAP evolution equations in which the leading ln(Q²) and ln(1/x) terms are treated on an equal footing. We introduce a pair of coupled integro-differential equations for the quark singlet and the unintegrated gluon distribution. The observable structure functions are calculated using the high-energy factorisation approach. We also include the subleading ln(1/x) effects via a consistency constraint. We argue that the use of this constraint leads to a more stable solution for the Pomeron intercept than that based on the NLO calculation of the BFKL equation alone, and generates a resummation to all orders of the major part of the subleading ln(1/x) effects. A global fit to all available deep inelastic data is performed using a simple parametrisation of the non-perturbative region. We also present the results for the longitudinal structure function and the charm component of the F₂ structure function. Next, we extend this approach to the low Q² domain. At small distances we use the perturbative approach based on the unified BFKL/DGLAP equations, and at large distances we use the Vector Meson Dominance model and, for the higher-mass qq̄ states, the additive quark approach. We show the results for the total cross section and for the ratio of the longitudinal and transverse structure functions. Finally, we calculate dijet production and consider the decorrelation effects in the azimuthal distributions caused by the diffusion in the transverse momentum k_T of the exchanged gluon. Using the gluon distribution fixed by the fit to the DIS data, we are able to make absolute predictions. We show the results for the azimuthal distribution dσ/dφ, the total cross section, and the distributions in Q² as well as in the longitudinal momentum fraction of the gluon. Our theoretical predictions are confronted with measurements made using the ZEUS detector at HERA.
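The unintegrated gluon distribution central to this abstract is related to the conventional (integrated) gluon density by a transverse-momentum integral; a schematic, leading-order version of this standard relation (not a formula quoted from the thesis) is:

```latex
% relation between the integrated gluon density xg(x,Q^2) and the
% unintegrated distribution f(x, k_T^2), schematically:
xg(x, Q^2) \;\simeq\; \int^{Q^2} \frac{dk_T^2}{k_T^2}\, f(x, k_T^2) ,
% so f probes the gluon at fixed transverse momentum k_T -- the
% variable whose diffusion drives the azimuthal decorrelation of
% the dijets discussed at the end of the abstract.
```

Keeping the k_T dependence unintegrated is what allows the high-energy factorisation approach to describe observables, such as dijet azimuthal distributions, that are insensitive to collinear gluons.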
|
656 |
[en] FEATURE-PRESERVING VECTOR FIELD DENOISING / [pt] REMOÇÃO DE RUÍDO EM CAMPO VETORIAL
JOAO ANTONIO RECIO DA PAIXAO 14 May 2019 (has links)
[pt] Nos últimos anos, vários mecanismos permitem medir campos vetoriais reais, provendo uma compreensão melhor de fenômenos importantes, tais como dinâmica de fluidos ou movimentos de fluido cerebral. Isso abre um leque de novos desafios à visualização e à análise de campos vetoriais em muitas aplicações de engenharia e de medicina, por exemplo. Em particular, dados reais são geralmente corrompidos por ruído, dificultando a compreensão na hora da visualização. Esta informação necessita de uma etapa de remoção de ruído como pré-processamento; no entanto, a remoção de ruído normalmente remove as descontinuidades e singularidades, que são fundamentais para a análise do campo vetorial. Nesta dissertação é proposto um método inovador para remoção de ruído em campo vetorial, baseado em caminhadas aleatórias, que preserva certas descontinuidades. O método funciona em um ambiente desestruturado, sendo rápido e simples de implementar, e mostra um desempenho melhor do que a tradicional técnica Gaussiana de remoção de ruído. Esta dissertação propõe também uma metodologia semi-automática para remover ruído, onde o usuário controla a escala visual da filtragem, levando em consideração as mudanças topológicas que ocorrem por causa da filtragem. / [en] In recent years, several devices have allowed the measurement of real vector fields, leading to a better understanding of fundamental phenomena such as fluid dynamics or brain water movements. This poses new challenges to vector field visualization and analysis in many applications in engineering and in medicine. In particular, real data is generally corrupted by noise, hindering the understanding provided by visualization tools. This data needs a denoising step as preprocessing; however, usual denoising removes discontinuities and singularities, which are fundamental for vector field analysis. In this dissertation a novel method for vector field denoising based on random walks is proposed, which preserves certain discontinuities. It works in an unstructured setting, is fast and simple to implement, and shows better performance than the traditional Gaussian denoising technique. This dissertation also proposes a semi-automatic vector field denoising methodology, in which the user visually controls the filtering scale by validating the topological changes caused by classical vector field filtering.
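The discontinuity-preserving idea described above can be illustrated with a minimal weighted-averaging pass. This is not the dissertation's random-walk algorithm — it is only a toy sketch of edge-aware smoothing on a 1-D array of 2-D vectors, with an invented exponential weight function: neighbors whose vectors differ sharply contribute almost nothing, so the interface between the two regions survives the filtering.

```python
import math

def smooth_step(field, beta=4.0):
    """One edge-aware averaging pass over a 1-D list of 2-D vectors.
    Weights decay exponentially with the squared difference between
    neighboring vectors, so sharp discontinuities are largely kept."""
    out = []
    n = len(field)
    for i in range(n):
        wsum, acc = 0.0, [0.0, 0.0]
        for j in (i - 1, i, i + 1):
            if 0 <= j < n:
                d2 = sum((a - b) ** 2 for a, b in zip(field[i], field[j]))
                w = math.exp(-beta * d2)   # edge-stopping weight
                wsum += w
                acc[0] += w * field[j][0]
                acc[1] += w * field[j][1]
        out.append((acc[0] / wsum, acc[1] / wsum))
    return out

# toy field: two constant regions separated by a sharp interface
field = [(1.0, 0.0)] * 5 + [(-1.0, 0.0)] * 5
denoised = smooth_step(field)
```

A plain Gaussian filter would instead average straight across the interface, smearing exactly the singular structure that vector field analysis depends on — which is the failure mode the dissertation's method is designed to avoid.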
|
657 |
"Investigação de estratégias para a geração de máquinas de vetores de suporte multiclasses" / Investigation of strategies for the generation of multiclass support vector machines
Lorena, Ana Carolina 16 February 2006 (has links)
Diversos problemas envolvem a classificação de dados em categorias, também denominadas classes. A partir de um conjunto de dados cujas classes são conhecidas, algoritmos de Aprendizado de Máquina (AM) podem ser utilizados na indução de um classificador capaz de predizer a classe de novos dados do mesmo domínio, realizando assim a discriminação desejada. Dentre as diversas técnicas de AM utilizadas em problemas de classificação, as Máquinas de Vetores de Suporte (Support Vector Machines - SVMs) se destacam por sua boa capacidade de generalização. Elas são originalmente concebidas para a solução de problemas com apenas duas classes, também denominados binários. Entretanto, diversos problemas requerem a discriminação dos dados em mais que duas categorias ou classes. Nesta Tese são investigadas e propostas estratégias para a generalização das SVMs para problemas com mais que duas classes, intitulados multiclasses. O foco deste trabalho é em estratégias que decompõem o problema multiclasses original em múltiplos subproblemas binários, cujas saídas são então combinadas na obtenção da classificação final. As estratégias propostas visam investigar a adaptação das decomposições a cada aplicação considerada, a partir de informações do desempenho obtido em sua solução ou extraídas de seus dados. Os algoritmos implementados foram avaliados em conjuntos de dados gerais e em aplicações reais da área de Bioinformática. Os resultados obtidos abrem várias possibilidades de pesquisas futuras. Entre os benefícios verificados tem-se a obtenção de decomposições mais simples, que requerem menos classificadores binários na solução multiclasses. / Several problems involve the classification of data into categories, also called classes. Given a dataset whose classes are known, Machine Learning (ML) algorithms can be employed to induce a classifier able to predict the class of new data from the same domain, thus performing the desired discrimination. Among the several ML techniques applied to classification problems, Support Vector Machines (SVMs) are known for their high generalization ability. They were originally conceived for the solution of problems with only two classes, also named binary problems. However, several problems require the discrimination of examples into more than two categories or classes. This thesis investigates and proposes strategies for the generalization of SVMs to problems with more than two classes, known as multiclass problems. The focus of this work is on strategies that decompose the original multiclass problem into multiple binary subtasks, whose outputs are then combined to obtain the final classification. The proposed strategies aim to adapt the decompositions to each multiclass application considered, using information on the performance obtained in its solution or extracted from its examples. The implemented algorithms were evaluated on general datasets and on real applications from the Bioinformatics domain. The results obtained open up many possibilities for future work. Among the benefits observed is the obtainment of simpler decompositions, which require fewer binary classifiers in the multiclass solution.
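The decomposition strategy this thesis builds on can be illustrated with the simplest scheme, one-vs-rest. The sketch below is not the thesis's method: the binary learner is a toy centroid scorer standing in for an SVM, purely to show how one binary subproblem per class is built and how the binary outputs are combined into the final multiclass decision.

```python
def train_binary(X, y_bin):
    """Toy binary scorer standing in for an SVM: the score is the
    difference of squared distances to the negative and positive
    class centroids (larger score = more confidently positive)."""
    pos = [x for x, t in zip(X, y_bin) if t == 1]
    neg = [x for x, t in zip(X, y_bin) if t == 0]
    cp = [sum(v) / len(pos) for v in zip(*pos)]
    cn = [sum(v) / len(neg) for v in zip(*neg)]
    def score(x):
        dp = sum((a - b) ** 2 for a, b in zip(x, cp))
        dn = sum((a - b) ** 2 for a, b in zip(x, cn))
        return dn - dp
    return score

def one_vs_rest_fit(X, y):
    """One binary subproblem per class: class k versus all the others."""
    classes = sorted(set(y))
    return {k: train_binary(X, [1 if t == k else 0 for t in y])
            for k in classes}

def one_vs_rest_predict(models, x):
    """Combine the binary outputs: pick the class with the highest score."""
    return max(models, key=lambda k: models[k](x))

# toy 3-class problem in 2-D
X = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0), (10, 1)]
y = ["a", "a", "b", "b", "c", "c"]
models = one_vs_rest_fit(X, y)
```

Other decompositions (one-vs-one, error-correcting output codes) differ only in which binary subtasks are posed and how the votes are aggregated — which is exactly the design space the thesis adapts per application.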
|
658 |
"Classificação de páginas na internet" / "Internet pages classification"
Martins Júnior, José 11 April 2003 (has links)
O grande crescimento da Internet ocorreu a partir da década de 1990 com o surgimento dos provedores comerciais de serviços, e resulta principalmente da boa aceitação e vasta disseminação do uso da Web. O grande problema que afeta a escalabilidade e o uso de tal serviço refere-se à organização e à classificação de seu conteúdo. Os engenhos de busca atuais possibilitam a localização de páginas na Web pela comparação léxica de conjuntos de palavras perante os conteúdos dos hipertextos. Tal mecanismo mostra-se ineficaz quando da necessidade pela localização de conteúdos que expressem conceitos ou objetos, a exemplo de produtos à venda oferecidos em sites de comércio eletrônico. A criação da Web Semântica foi anunciada no ano de 2000 para esse propósito, visando o estabelecimento de novos padrões para a representação formal de conteúdos nas páginas Web. Com sua implantação, cujo prazo inicialmente previsto foi de dez anos, será possível a expressão de conceitos nos conteúdos dos hipertextos, que representarão objetos classificados por uma ontologia, viabilizando assim o uso de sistemas, baseados em conhecimento, implementados por agentes inteligentes de software. O projeto DEEPSIA foi concebido como uma solução centrada no comprador, ao contrário dos atuais Market Places, para resolver o problema da localização de páginas Web com a descrição de produtos à venda, fazendo uso de métodos de classificação de textos, apoiados pelos algoritmos k-NN e C4.5, no suporte ao processo decisório realizado por um agente previsto em sua arquitetura, o Crawler Agent. Os testes com o sistema em sites brasileiros denotaram a necessidade pela sua adaptação em diversos aspectos, incluindo-se o processo decisório envolvido, que foi abordado pelo presente trabalho. A solução para o problema envolveu a aplicação e a avaliação do método Support Vector Machines, e é descrita em detalhes. / The huge growth of the Internet has been occurring since the 1990s, with the arrival of commercial internet service providers, and results mainly from the good acceptance and wide dissemination of the Web. The main problem that affects its scalability and usage is the organization and classification of its content. Current search engines make it possible to locate pages on the Web by means of a lexical comparison between sets of words and the hypertext contents. Such mechanisms are inefficient, however, for finding contents that express concepts or objects, such as products for sale on electronic commerce sites. The Semantic Web was announced in 2000 for this purpose, envisioning the establishment of new standards for the formal representation of contents in Web pages. With its implementation, initially planned to take ten years, it will be possible to express concepts in hypertext contents that represent objects classified into an ontology, enabling the use of knowledge-based systems implemented by intelligent software agents. The DEEPSIA project was conceived as a purchaser-centered solution, in contrast to current marketplaces, to solve the problem of finding Web pages that describe products for sale, using text-classification methods, supported by the k-NN and C4.5 algorithms, in the decision process carried out by a specific agent in its architecture, the Crawler Agent. Tests of the system on Brazilian sites showed the need to adapt it in several respects, including the decision process involved, which is the focus of the present work. The solution to the problem involved the application and evaluation of the Support Vector Machines method, and is described in detail.
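A k-NN text classifier of the kind that supports the Crawler Agent's decisions can be sketched as follows. The bag-of-words representation, the toy pages, and the "sale"/"other" labels below are illustrative inventions, not DEEPSIA's actual features or data.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector (word counts) for a document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(train, text, k=3):
    """Majority vote among the k most similar training documents."""
    q = bow(text)
    ranked = sorted(train, key=lambda d: cosine(q, bow(d[0])), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# toy training set: pages offering products for sale vs. other pages
train = [
    ("buy now price discount cart product", "sale"),
    ("special offer price buy shipping", "sale"),
    ("add to cart checkout product price", "sale"),
    ("news article politics today report", "other"),
    ("blog post tutorial learn programming", "other"),
    ("weather forecast rain tomorrow", "other"),
]
```

k-NN defers all work to query time (a "lazy" learner), whereas C4.5 builds an explicit decision tree in advance; comparing such learners on crawled pages is the kind of evaluation that motivated the move to SVMs in this work.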
|
659 |
Monitoramento da cobertura do solo no entorno de hidrelétricas utilizando o classificador SVM (Support Vector Machines). / Land cover monitoring in hydroelectric domain area using Support Vector Machines (SVM) classifier.
Albuquerque, Rafael Walter de 07 December 2011 (has links)
A classificação de imagens de satélite é muito utilizada para elaborar mapas de cobertura do solo. O objetivo principal deste trabalho consistiu no mapeamento automático da cobertura do solo no entorno da Usina de Lajeado (TO) utilizando-se o classificador SVM. Buscou-se avaliar a dimensão de áreas antropizadas presentes na represa e a acurácia da classificação gerada pelo algoritmo, que foi comparada com a acurácia da classificação obtida pelo tradicional classificador MAXVER. Esta dissertação apresentou sugestões de calibração do algoritmo SVM para a otimização do seu resultado. Verificou-se uma alta acurácia na classificação SVM, que mostrou o entorno da represa hidrelétrica em uma situação ambientalmente favorável. Os resultados obtidos pela classificação SVM foram similares aos obtidos pelo MAXVER, porém este último contextualizou espacialmente as classes de cobertura do solo com uma acurácia considerada um pouco menor. Apesar do bom estado de preservação ambiental apresentado, a represa deve ter seu entorno devidamente monitorado, pois foi diagnosticada uma grande quantidade de incêndios gerados pela população local, sendo que as ferramentas discutidas nesta dissertação auxiliam esta atividade de monitoramento. / Satellite image classification is widely used for building land cover maps. The main aim of this study was the automatic mapping of land cover in the area surrounding the Lajeado dam, in Tocantins state, using the SVM classifier. The work evaluated the extent of anthropized areas around the dam and the classification accuracy achieved by the algorithm, which was compared with the accuracy of the standard Maximum Likelihood (ML) classifier. This work presents calibration suggestions for the SVM algorithm to optimize its results. The SVM classification showed high accuracy, indicating an environmentally favourable situation around the hydroelectric dam. The classification results of SVM and ML were quite similar, but SVM's spatial contextualization of the land cover classes was slightly better. Although the environmental condition of the study area was considered good, monitoring the surroundings of the dam is important, because a significant number of fires caused by local communities' activities was observed; the tools discussed in this work support that monitoring activity.
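Accuracy comparisons such as the SVM-versus-ML one above are conventionally computed from a confusion matrix built over reference samples, yielding overall accuracy and Cohen's kappa. The sketch below uses invented labels for three cover classes, not the study's data.

```python
def confusion_matrix(reference, predicted, classes):
    """cm[i][j] = number of samples of reference class i mapped to class j."""
    idx = {c: k for k, c in enumerate(classes)}
    cm = [[0] * len(classes) for _ in classes]
    for r, p in zip(reference, predicted):
        cm[idx[r]][idx[p]] += 1
    return cm

def overall_accuracy(cm):
    """Fraction of reference samples on the matrix diagonal."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = sum(sum(row) for row in cm)
    po = overall_accuracy(cm)
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(len(cm))) / n ** 2
    return (po - pe) / (1 - pe)

# toy reference vs. classified labels for three cover classes
ref  = ["forest", "forest", "water", "water",
        "pasture", "pasture", "forest", "water"]
pred = ["forest", "forest", "water", "pasture",
        "pasture", "pasture", "forest", "water"]
cm = confusion_matrix(ref, pred, ["forest", "water", "pasture"])
```

Kappa is the usual companion to overall accuracy in remote-sensing assessments precisely because it discounts the agreement a random map would achieve given the class proportions.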
|
660 |
Effets masqués en analyse prédictive / Masked effects in predictive analysis
Bascoul, Ganaël 27 June 2013 (has links)
L’objectif de cette thèse consiste en l’élaboration de deux méthodologies visant à révéler des effets jusqu’alors masqués en modélisation décisionnelle. Dans la première partie, nous cherchons à mettre en œuvre une méthode d’analyse locale des critères de choix dans un contexte de choix binaires. Dans une seconde partie, nous mettons en avant les effets de génération dans l’étude des comportements de choix. Dans les deux parties, notre démarche de recherche combine de nouveaux outils d’analyse prédictive (Support Vector Machines, FANOVA, PLS) aux outils traditionnels de statistique inférentielle, afin d’enrichir les résultats habituels par des informations complémentaires sur les effets masqués que constituent les effets locaux dans les fonctions de choix binaires, et les effets de génération dans l’analyse temporelle des comportements de choix. Les méthodologies proposées, respectivement nommées AEL et APC-PLS, sont appliquées sur des cas réels, afin d’en illustrer le fonctionnement et la pertinence. / The objective of this thesis is the development of two methodologies to reveal previously masked effects in decision modeling. In the first part, we implement a method for the local analysis of choice criteria in a binary-choice context. In the second part, we highlight generation effects in the study of choice behavior. In both parts, our research approach combines new predictive-analytics tools (Support Vector Machines, FANOVA, PLS) with traditional tools of inferential statistics, in order to enrich the usual results with additional information on the masked effects, namely the local effects in binary choice functions and the generation effects in the temporal analysis of choice behavior. The proposed methodologies, named AEL and APC-PLS respectively, are both applied to real cases in order to illustrate their operation and relevance.
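In age-period-cohort (APC) settings like the second part of this thesis, the generation effect is tied to the birth cohort implied by age and period. A minimal sketch of building cohort labels for a design matrix follows; the generation names and bin edges are common illustrative conventions, not the thesis's segmentation.

```python
def cohort(period, age):
    """Birth cohort (year) implied by observation period and age."""
    return period - age

def cohort_bucket(period, age, edges=(1946, 1965, 1981, 1997)):
    """Map a birth year to a named generation. The labels and edges
    are illustrative conventions, not taken from the thesis."""
    c = cohort(period, age)
    labels = ["pre-boomer", "boomer", "gen-x", "millennial", "gen-z"]
    k = sum(c >= e for e in edges)   # index of the containing bin
    return labels[k]

# observations: (survey year, respondent age)
obs = [(2010, 70), (2010, 50), (2010, 35), (2010, 20)]
buckets = [cohort_bucket(p, a) for p, a in obs]
```

Because cohort = period - age is an exact linear identity, the three effects cannot all be identified without extra structure — one reason dimension-reducing tools like PLS are attractive for APC-style analyses.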
|