251

Methods for face detection and adaptive face recognition

Pavani, Sri-Kaushik 21 July 2010 (has links)
The focus of this thesis is on facial biometrics, specifically on the problems of face detection and face recognition. Despite intensive research over the last 20 years, the technology is not foolproof, which is why we do not see face recognition systems used in critical sectors such as banking. In this thesis, we focus on three sub-problems in these two areas of research. Firstly, we propose methods to improve the speed-accuracy trade-off of the state-of-the-art face detector. Secondly, we consider a problem that is often ignored in the literature: decreasing the training time of the detectors, and we propose two techniques to this end. Thirdly, we present a detailed large-scale study on self-updating face recognition systems, in an attempt to answer whether continuously changing facial appearance can be learnt automatically.
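As a rough illustration of the self-updating idea studied in the third contribution, here is a minimal sketch of a template-gallery recognizer that only learns from confident matches; the feature space, distance thresholds and update rule are assumptions for illustration, not the thesis's actual method.

```python
import numpy as np

class SelfUpdatingRecognizer:
    """Nearest-template recognizer that adds confidently matched probes to the
    gallery, so that slow changes in appearance are tracked over time."""

    def __init__(self, accept_thr=0.35, update_thr=0.20, max_templates=10):
        self.gallery = {}             # identity -> list of feature vectors
        self.accept_thr = accept_thr  # distance below which a match is accepted
        self.update_thr = update_thr  # stricter distance required to self-update
        self.max_templates = max_templates

    def enroll(self, identity, feature):
        self.gallery.setdefault(identity, []).append(np.asarray(feature, dtype=float))

    def recognize(self, feature):
        feature = np.asarray(feature, dtype=float)
        best_id, best_dist = None, float("inf")
        for identity, templates in self.gallery.items():
            dist = min(np.linalg.norm(feature - t) for t in templates)
            if dist < best_dist:
                best_id, best_dist = identity, dist
        if best_id is None or best_dist > self.accept_thr:
            return None, best_dist              # reject: unknown or too dissimilar
        if best_dist < self.update_thr:         # confident match: learn the new look
            templates = self.gallery[best_id]
            templates.append(feature)
            if len(templates) > self.max_templates:
                templates.pop(0)                # forget the oldest template
        return best_id, best_dist
```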
252

Porovnání klasifikačních metod / Comparison of Classification Methods

Dočekal, Martin January 2019 (has links)
This thesis deals with the comparison of classification methods. First, classification methods based on machine learning are described; then a classifier comparison system is designed and implemented. The thesis also describes the classification tasks and datasets on which the designed system is tested. The classification tasks are evaluated according to standard metrics. The thesis further presents the design and implementation of a classifier based on the principle of evolutionary algorithms.
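A minimal sketch of the kind of comparison such a system automates, using scikit-learn models and standard metrics; the particular classifiers, dataset and metrics below are illustrative assumptions, not the setup used in the thesis.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# toy dataset and a small pool of classifiers to compare
X, y = load_digits(return_X_y=True)
classifiers = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "svm_rbf": SVC(kernel="rbf"),
}

# cross-validated evaluation with standard metrics (accuracy, macro F1)
for name, clf in classifiers.items():
    scores = cross_validate(clf, X, y, cv=5, scoring=["accuracy", "f1_macro"])
    print(f"{name:15s} accuracy={scores['test_accuracy'].mean():.3f} "
          f"f1_macro={scores['test_f1_macro'].mean():.3f}")
```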
253

Interconnection Architecture of Proximity Smart IoE-Networks with Centralised Management

González Ramírez, Pedro Luis 07 April 2022 (has links)
Interoperability between communicating objects is the main goal of the Internet of Things (IoT). Efforts to achieve this have generated several architecture proposals; however, no consensus has yet been reached. These architectures differ in structure, degree of centralisation, routing algorithm, routing metrics, discovery techniques, search algorithms, segmentation, quality of service, and security. Some are better than others depending on the environment in which they operate and the type of parameter used. The most popular are those oriented to events or rule-based actions, which has allowed them to enter the market and achieve rapid massification. However, their interoperability relies on alliances between manufacturers to achieve compatibility. This solution is achieved in the cloud with a dashboard that unifies the different allied brands, allowing the introduction of these technologies into users' everyday lives, but it does not solve problems of autonomy or interoperability. Moreover, it does not include the new generation of smart grids based on smart things. The architecture proposed in this thesis takes the most relevant aspects of the four most widely accepted IoT-Architectures and integrates them into one, separating the IoT layer (commonly present in these architectures) into three layers. It is also intended to cover proximity networks (integrating different IoT interconnection technologies) and to base its operation on artificial intelligence (AI). Therefore, this proposal increases the possibility of achieving the expected interoperability and increases the functionality of each object in the network, focused on providing a service to the user. Although the proposed system includes artificial intelligence processing, it follows the same technical aspects as its predecessors, since its operation and communication are still based on the application and transport layers of the TCP/IP protocol stack. However, in order to take advantage of IoT-Protocols without modifying their operation, an additional protocol is created that is encapsulated in, and adapts to, their payload. This protocol discovers the features of an object (DFSP), divided into functions, services, capabilities, and resources, and extracts them to be centralised in the network manager (IoT-Gateway). With this information, the IoT-Gateway can make decisions such as creating autonomous workgroups that provide a service to the user and routing to the objects in each group that provide the service. It also measures the quality of experience (QoE) of the service, manages internet access, and integrates with other IoT-Networks, using artificial intelligence in the cloud.
This proposal is based on a new hierarchical system for interconnecting objects of different types, controlled by AI under centralised management, which reduces fault tolerance and security but improves data processing. Data are preprocessed on three levels depending on the type of service and sent through an interface. However, data about an object's own features do not require much processing, so each object preprocesses them independently, structures them and sends them to the central administration. The IoT-Network based on this architecture can classify a new object arriving on the network into a workgroup without user intervention. It can also provide services that require heavy processing (e.g., multimedia) and track the user across other IoT-Networks through the cloud. / González Ramírez, PL. (2022). Interconnection Architecture of Proximity Smart IoE-Networks with Centralised Management [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181892 / TESIS
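Purely as a hedged sketch of the discovery step described above, the snippet below serializes an object's self-description into the four categories the abstract names (functions, services, capabilities, resources) and lets a toy gateway group objects by the service they offer; the field names, message format and grouping rule are assumptions for illustration, not the actual DFSP specification.

```python
import json
from collections import defaultdict

def build_dfsp_payload(object_id, functions, services, capabilities, resources):
    """Serialize an object's self-description into the four categories named
    in the abstract (the JSON layout here is assumed for illustration)."""
    return json.dumps({
        "object_id": object_id,
        "functions": functions,        # e.g. ["sense_temperature"]
        "services": services,          # e.g. ["climate_report"]
        "capabilities": capabilities,  # e.g. {"battery": True, "cpu_mhz": 80}
        "resources": resources,        # e.g. {"ram_kb": 256, "radio": "zigbee"}
    })

class GatewayRegistry:
    """Toy IoT-Gateway registry: centralises discovered descriptions and
    forms workgroups of objects that offer the same service."""

    def __init__(self):
        self.objects = {}
        self.workgroups = defaultdict(set)   # service -> set of object ids

    def register(self, payload):
        desc = json.loads(payload)
        self.objects[desc["object_id"]] = desc
        for service in desc["services"]:
            self.workgroups[service].add(desc["object_id"])

    def group_for(self, service):
        return sorted(self.workgroups.get(service, set()))

# usage: a sensor announces itself, the gateway groups it by the service it offers
gw = GatewayRegistry()
gw.register(build_dfsp_payload("sensor-01", ["sense_temperature"],
                               ["climate_report"], {"battery": True},
                               {"ram_kb": 256}))
print(gw.group_for("climate_report"))   # ['sensor-01']
```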
254

Random parameters in learning: advantages and guarantees

Evzenie Coupkova (18396918) 22 April 2024 (has links)
The generalization error of a classifier is related to the complexity of the set of functions among which the classifier is chosen. We study a family of low-complexity classifiers consisting of thresholding a random one-dimensional feature. The feature is obtained by projecting the data on a random line after embedding it into a higher-dimensional space parametrized by monomials of order up to k. More specifically, the extended data is projected n times and the best classifier among those n, based on its performance on training data, is chosen.

We show that this type of classifier is extremely flexible, as it is likely to approximate, to an arbitrary precision, any continuous function on a compact set as well as any Boolean function on a compact set that splits the support into measurable subsets. In particular, given full knowledge of the class conditional densities, the error of these low-complexity classifiers would converge to the optimal (Bayes) error as k and n go to infinity. On the other hand, if only a training dataset is given, we show that the classifiers will perfectly classify all the training points as k and n go to infinity.

We also bound the generalization error of our random classifiers. In general, our bounds are better than those for any classifier with VC dimension greater than O(ln(n)). In particular, our bounds imply that, unless the number of projections n is extremely large, there is a significant advantageous gap between the generalization error of the random projection approach and that of a linear classifier in the extended space. Asymptotically, as the number of samples approaches infinity, the gap persists for any such n. Thus, there is a potentially large gain in generalization properties by selecting parameters at random, rather than by optimization.

Given a classification problem and a family of classifiers, the Rashomon ratio measures the proportion of classifiers that yield less than a given loss. Previous work has explored the advantage of a large Rashomon ratio in the case of a finite family of classifiers. Here we consider the more general case of an infinite family. We show that a large Rashomon ratio guarantees that choosing the classifier with the best empirical accuracy among a random subset of the family, which is likely to improve generalizability, will not increase the empirical loss too much.

We quantify the Rashomon ratio in two examples involving infinite classifier families in order to illustrate situations in which it is large. In the first example, we estimate the Rashomon ratio of the classification of normally distributed classes using an affine classifier. In the second, we obtain a lower bound for the Rashomon ratio of a classification problem with a modified Gram matrix when the classifier family consists of two-layer ReLU neural networks. In general, we show that the Rashomon ratio can be estimated using a training dataset along with random samples from the classifier family, and we provide guarantees that such an estimation is close to the true value of the Rashomon ratio.
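A minimal sketch of the construction described in the first paragraph, assuming binary labels in {0, 1}: embed the data with monomials up to degree k, project onto n random directions, threshold each projection, and keep the candidate with the best training accuracy. The parameter values and the way thresholds are enumerated are illustrative choices, not the exact procedure analyzed in the thesis.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_embed(X, k):
    """Map each sample to all monomials of its coordinates up to degree k."""
    n_samples, d = X.shape
    feats = [np.ones(n_samples)]
    for degree in range(1, k + 1):
        for idx in combinations_with_replacement(range(d), degree):
            feats.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(feats)

def fit_random_threshold_classifier(X, y, k=3, n=200, seed=0):
    """Project the degree-k embedding onto n random lines, threshold each
    projection, and keep the (direction, threshold, sign) triple with the
    best training accuracy. Returns a predict(X_new) function."""
    rng = np.random.default_rng(seed)
    Phi = monomial_embed(X, k)
    best_acc, best_rule = -1.0, None
    for _ in range(n):
        w = rng.standard_normal(Phi.shape[1])
        z = Phi @ w
        zs = np.sort(z)
        thresholds = (zs[:-1] + zs[1:]) / 2      # midpoints between projected points
        for t in thresholds:
            for sign in (1, -1):
                acc = np.mean((sign * (z - t) > 0).astype(int) == y)
                if acc > best_acc:
                    best_acc, best_rule = acc, (w, t, sign)
    w, t, sign = best_rule
    return lambda X_new: (sign * (monomial_embed(X_new, k) @ w - t) > 0).astype(int)
```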
255

The research on Chinese text multi-label classification / Avancée en classification multi-labels de textes en langue chinoise / 中文文本多标签分类研究

Wei, Zhihua 07 May 2010 (has links)
Text Classification (TC) is an important field in information technology with many valuable applications. Faced with a sea of information resources, the objects of TC have become more complicated and diverse, and research in pursuit of effective and practical TC technology is quite challenging. More and more researchers consider multi-label TC to be better suited to many applications. Building on a large body of algorithms for single-label and multi-label TC, this thesis analyses the difficulties and problems in multi-label TC and Chinese text representation. Aiming at the high dimensionality of the feature space, the sparse distribution of text representations and the poor performance of multi-label classifiers, it puts forward corresponding algorithms from different angles.

Focusing on the dimensionality "disaster" that arises when Chinese texts are represented using n-grams, a two-step feature selection algorithm is constructed. The method combines filtering rare features within each class with selecting discriminative features across classes. Moreover, the proper value of n, the feature weighting strategy and the correlation among features are examined through a variety of experiments, and several useful conclusions are drawn for the study of n-gram representation of Chinese texts.

In view of a disadvantage of the Latent Dirichlet Allocation (LDA) model, namely that the smoothing process revises variables arbitrarily, a new smoothing strategy based on Tolerance Rough Sets (TRS) is put forward. It first constructs tolerance classes over the global vocabulary and then assigns a value to each out-of-vocabulary (OOV) word in a class according to its tolerance class.

In order to improve the performance of multi-label classifiers and reduce computational complexity, a new TC method based on the LDA model is applied to Chinese text representation. It extracts topics statistically from texts, which are then represented by their topic vectors. The method shows competitive performance on both English and Chinese corpora.

To further enhance classifier performance in multi-label TC, a compound classification framework is proposed. It partitions the text space by computing upper and lower approximations, decomposing a multi-label TC problem into several single-label TC problems and several multi-label TC problems with fewer labels than the original. That is, an unknown text is classified by a single-label classifier when it falls into the lower approximation space of some class; otherwise, it is classified by the corresponding multi-label classifier.

An application system, TJ-MLWC (Tongji Multi-label Web Classifier), was designed. It calls results from search engines directly and classifies them in real time using an improved Naïve Bayes classifier, making browsing more convenient: users can immediately locate the texts they are interested in according to the class information given by TJ-MLWC. / The thesis centres on text classification, a rapidly expanding field with many current and potential applications. Its main contributions concern two points. First, the specific features of encoding and automatic processing of the Chinese language: words may consist of one, two or three characters; there is no typographic separation between words; and a sentence admits a large number of possible word orders, all of which leads to difficult ambiguity problems. Encoding texts as n-grams (sequences of n = 1, 2 or 3 characters) is particularly well suited to Chinese, because it is fast and requires neither prior recognition of words with a dictionary nor word segmentation. Second, multi-label classification, that is, when each individual may be assigned to one or several classes. For texts, the classes correspond to topics, and a single text may be attached to one or several topics. This multi-label approach is more general: a single patient may suffer from several pathologies; a single company may operate in several industrial or service sectors. The thesis analyses these problems and proposes solutions, first for single-label classifiers and then for multi-label ones. Among the difficulties are the definition of the variables characterising the texts, their large number, the handling of sparse arrays (many zeros in the matrix crossing texts and descriptors), and the relatively poor performance of the usual multi-class classifiers.
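As a hedged sketch of the n-gram representation and the two-step feature selection described above: character n-grams are extracted without word segmentation, rare features are filtered within each class, and a simple cross-class spread score stands in for the thesis's discriminative criterion; the thresholds and the scoring rule are assumptions for illustration.

```python
from collections import Counter

def char_ngrams(text, n_values=(1, 2)):
    """Character n-grams: convenient for Chinese, where there are no word boundaries."""
    grams = []
    for n in n_values:
        grams += [text[i:i + n] for i in range(len(text) - n + 1)]
    return grams

def two_step_feature_selection(docs, labels, min_count_in_class=2, top_k=5000):
    """Step 1: drop n-grams that are rare within every class.
    Step 2: rank the survivors by how unevenly their document frequency is
    spread across classes (a stand-in for the discriminative criterion)."""
    per_class = {}
    for doc, lab in zip(docs, labels):
        per_class.setdefault(lab, Counter()).update(set(char_ngrams(doc)))
    # step 1: a feature survives if it reaches min_count_in_class in some class
    survivors = {g for counts in per_class.values()
                 for g, c in counts.items() if c >= min_count_in_class}
    # step 2: score each surviving feature by its cross-class frequency spread
    def spread(g):
        freqs = [counts.get(g, 0) for counts in per_class.values()]
        return max(freqs) - min(freqs)
    return sorted(survivors, key=spread, reverse=True)[:top_k]
```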
