241 |
Klasifikace dokumentů podle tématu / Document Classification. Marek, Tomáš. January 2013 (has links)
This thesis deals with document classification, in particular with text classification methods. Its main goal is to analyze two document classification algorithms, describe them, and implement them. The chosen algorithms are the Bayes classifier and a classifier based on support vector machines (SVM); both were analyzed and implemented in the practical part of the thesis. Another central goal is to design and select optimal text features, i.e., those that describe the input text best and thus lead to the best classification results. The thesis concludes with a series of tests comparing the efficiency of the chosen classifiers under various conditions.
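As a rough illustration of the two classifier families this thesis compares, the following sketch trains a Bayes classifier and an SVM on bag-of-words features using scikit-learn; the toy corpus, labels, and TF-IDF feature choice are assumptions for illustration, not the thesis's actual data or feature design.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus standing in for a labelled document collection.
docs = ["stock markets fell sharply", "the team won the final match",
        "central bank raises interest rates", "player scores in overtime"]
labels = ["finance", "sports", "finance", "sports"]

for clf in (MultinomialNB(), LinearSVC()):
    # TF-IDF bag-of-words features feeding either classifier family
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(docs, labels)
    print(type(clf).__name__, model.predict(["rates rise as markets react"]))
```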
|
242 |
Development and Evaluation of a Flexible Framework for the Design of Autonomous Classifier Systems. Ganapathy, Priya. 22 December 2009 (has links)
No description available.
|
243 |
Towards a Versatile System for the Visual Recognition of Surface Defects. Koprnicky, Miroslav. January 2005 (has links)
Automated visual inspection is an emerging multi-disciplinary field with many challenges; it combines different aspects of computer vision, pattern recognition, automation, and control systems. There does not exist a large body of work dedicated to the design of generalized visual inspection systems, that is, systems that might easily be made applicable to different product types. This is an important oversight, in that many improvements in design and implementation times, as well as costs, might be realized with a system that could easily be made to function in different production environments.

This thesis proposes a framework for generalizing and automating the design of the defect classification stage of an automated visual inspection system. It involves using an expandable set of features which are optimized along with the classifier operating on them in order to adapt to the application at hand. The particular implementation explored optimizes the feature set in disjoint subsets, logically grouped by feature type, to keep search spaces reasonable. Operator input is kept to a minimum throughout this customization process: it is limited to those cases in which the existing feature library cannot adequately delineate the classes at hand, at which point new features (or feature pools) may have to be introduced by an engineer with experience in the domain.

Two novel methods are put forward which fit well within this framework: cluster-space and hybrid-space classifiers. They are compared in a series of tests against standard benchmark classifiers, as well as mean and majority vote multi-classifiers, on feature sets comprising just the logical feature subsets as well as the entire feature sets formed by their union. The proposed classifiers and the benchmarks are optimized with both a progressive combinatorial approach and a genetic algorithm. Experimentation was performed on true colour industrial lumber defect images, as well as binary hand-written digits.

Based on the experiments conducted in this work, the sequentially optimized hybrid-space multi-classifier methods are capable of matching the performance of the benchmark classifiers on the lumber data, with the exception of the mean-rule multi-classifiers, which dominated most experiments by approximately 3% in classification accuracy. The genetic-algorithm-optimized hybrid-space multi-classifier achieved the best performance, however, with an accuracy of 79.2%.

The numeral dataset results were less promising: the proposed methods could not equal benchmark performance. This is probably because the numeral feature sets were much more conducive to good class separation, with standard benchmark accuracies approaching 95% not uncommon. It indicates that the cluster-space transform inherent to the proposed methods is most useful in highly dependent or confusing feature spaces, a hypothesis supported by the outstanding performance of the single hybrid-space classifier in the difficult texture feature subspace: 42.6% accuracy, a 6% increase over the best benchmark performance.

The generalized framework proposed appears promising, because classifier performance over feature sets formed by the union of independently optimized feature subsets regularly met and exceeded that of classifiers operating on feature sets optimized in their entirety. This finding corroborates earlier work with similar results [3, 9], and is an aspect of pattern recognition that should be examined further.
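As a sketch of the multi-classifier combination the abstract describes (not the thesis's implementation), the following trains one base classifier per logical feature subset and combines them with the mean rule and majority vote; the subset names, column ranges, and choice of a k-NN base learner are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical disjoint feature subsets grouped by feature type.
SUBSETS = {"colour": slice(0, 8), "texture": slice(8, 24), "shape": slice(24, 30)}

def fit_subset_classifiers(X, y):
    """Train one base classifier per logical feature subset."""
    return {name: KNeighborsClassifier(n_neighbors=5).fit(X[:, cols], y)
            for name, cols in SUBSETS.items()}

def mean_rule_predict(classifiers, X):
    """Mean rule: average the class posteriors of the subset classifiers."""
    probs = np.mean([classifiers[name].predict_proba(X[:, cols])
                     for name, cols in SUBSETS.items()], axis=0)
    return probs.argmax(axis=1)

def majority_vote_predict(classifiers, X):
    """Majority vote: each subset classifier casts one hard vote.

    Assumes integer-encoded class labels so np.bincount applies.
    """
    votes = np.stack([classifiers[name].predict(X[:, cols])
                      for name, cols in SUBSETS.items()])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```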
|
244 |
Learning a graph made of boolean function nodes : a new approach in machine learning. Mokaddem, Mouna. 08 1900 (has links)
In this document we present a novel approach to machine learning for classification. The framework we propose is based on boolean circuits; more specifically, the classifier produced by our algorithm has that form. Using bits and boolean gates enables the learning algorithm and the classifier to use very efficient bitwise vector operations. The quality of the classifiers our framework produces compares very favourably with that of classifiers produced by conventional techniques, in terms of both efficiency and accuracy. Furthermore, the framework can be used in contexts where information privacy is a necessity; for example, we can classify private data. This is possible because the computation can be performed entirely by boolean circuits, with the encrypted data quantized into bits. Moreover, once the classifier has been trained, it can easily be implemented on an FPGA (field-programmable gate array), since such circuits are also based on logic gates and bitwise operations. Our model can therefore be easily integrated into real-time classification systems.
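The abstract does not detail the circuit-learning algorithm itself, so the sketch below only illustrates the efficiency claim: once a boolean-circuit classifier has been learned, bit-packing the binarized features lets a single word-level gate operation classify many examples at once. The example circuit is hypothetical.

```python
import numpy as np

# Binarized features, bit-packed so each byte holds 8 samples' bits;
# one gate evaluation then processes 8 examples in parallel.
n_samples = 256
rng = np.random.default_rng(0)
f = rng.integers(0, 2, size=(4, n_samples), dtype=np.uint8)  # 4 binary features

packed = [np.packbits(row) for row in f]  # bit-pack each feature row

# Hypothetical learned circuit: out = (f0 AND f1) OR (NOT f2 AND f3)
out = (packed[0] & packed[1]) | (~packed[2] & packed[3])

predictions = np.unpackbits(out)[:n_samples]  # one bit = one classification
```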
|
245 |
Fast Methods for Vascular Segmentation Based on Approximate Skeleton Detection. Lidayová, Kristína. January 2017 (has links)
Modern medical imaging techniques have revolutionized health care over the last decades, providing clinicians with high-resolution 3D images of the inside of the patient's body without the need for invasive procedures. Detailed images of the vascular anatomy can be captured by angiography, providing a valuable source of information when deciding whether a vascular intervention is needed, for planning treatment, and for analyzing the success of therapy. However, the increasing level of detail in the images, together with the wide availability of imaging devices, leads to an urgent need for automated techniques for image segmentation and analysis that assist clinicians in performing fast and accurate examinations. To reduce the need for user interaction and increase the speed of vascular segmentation, we propose a fast and fully automatic vascular skeleton extraction algorithm. The algorithm first analyzes the volume's intensity histogram in order to automatically adapt its internal parameters to each patient, and then produces an approximate skeleton of the patient's vasculature. The skeleton can serve as a seed region for subsequent surface extraction algorithms. Further improvements of the skeleton extraction algorithm include an extension to detect the skeleton of diseased arteries and a convolutional neural network classifier that reduces false positive detections of vascular cross-sections. In addition to the complete skeleton extraction algorithm, the thesis presents a segmentation algorithm based on modified onion-kernel region growing. It initiates the growing from the previously extracted skeleton and provides a rapid binary segmentation of tubular structures. To make precise measurements possible, we introduce a method that derives a segmentation with subpixel precision from the binary segmentation and the original image. This method is especially suited for thin and elongated structures, such as vessels, since it does not shrink long protrusions. The method supports both 2D and 3D image data. The methods were validated on real computed tomography datasets and are primarily intended for applications in vascular segmentation; however, they are robust enough to work with other anatomical tree structures after adequate parameter adjustment, as demonstrated on an airway-tree segmentation.
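As a simplified sketch of the growing stage (assuming intensity bounds already derived from the histogram analysis the abstract mentions), the following grows a binary region from skeleton seed voxels; it is a plain 6-connected flood fill, not the thesis's modified onion-kernel growing.

```python
import numpy as np
from collections import deque

def region_grow(volume, seeds, low, high):
    """Binary region growing from skeleton seed voxels.

    Voxels 6-connected to the region are added while their intensity
    stays inside the assumed vessel intensity window [low, high].
    """
    segmented = np.zeros(volume.shape, dtype=bool)
    queue = deque(seeds)  # seeds: iterable of (z, y, x) skeleton voxels
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        if segmented[z, y, x] or not (low <= volume[z, y, x] <= high):
            continue
        segmented[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not segmented[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return segmented
```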
|
246 |
E-banking operational risk assessment : a soft computing approach in the context of the Nigerian banking industry. Ochuko, Rita Erhovwo. January 2012 (has links)
This study investigates E-banking Operational Risk Assessment (ORA) to enable the development of a new ORA framework and methodology. The general view is that E-banking systems have modified some of the traditional banking risks, particularly Operational Risk (OR), as suggested by the Basel Committee on Banking Supervision in 2003. In addition, recent E-banking financial losses, together with risk management principles and standards, raise the need for an effective ORA methodology and framework in the context of E-banking. Moreover, existing evaluation tools and methods for ORA are highly subjective, still in their infancy, and have not yet reached consensus. It is therefore essential to develop valid and reliable methods for effective ORA and evaluation. The main contribution of this thesis is to apply a Fuzzy Inference System (FIS) and a Tree Augmented Naïve Bayes (TAN) classifier as standard tools for identifying OR and measuring OR exposure levels. In addition, a new ORA methodology is proposed which consists of four major steps: a risk model, an assessment approach, an analysis approach, and a risk assessment process. Further, a new ORA framework and measurement metrics are proposed with six factors: frequency of the triggering event, effectiveness of avoidance barriers, frequency of the undesirable operational state, effectiveness of recovery barriers before the risk outcome, approximate cost of an Undesirable Operational State (UOS) occurrence, and severity of the risk outcome. The study results are based on surveys conducted with senior Nigerian banking officers and banking customers. The study revealed that the framework and assessment tools gave good predictions for risk learning and inference in such systems. The results can thus be considered promising and useful both for adopters of E-banking systems and for future researchers in this area.
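To make the FIS component concrete, here is a minimal Mamdani-style fuzzy inference sketch over two of the six proposed factors; the membership functions, rule base, and output prototypes are invented for illustration and are not the thesis's calibrated model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def risk_exposure(frequency, severity):
    """Tiny fuzzy inference: two inputs in [0, 1], one output.

    Illustrative rule base (not from the thesis):
      R1: IF frequency is high AND severity is high THEN exposure is high
      R2: IF frequency is low  OR  severity is low  THEN exposure is low
    """
    f_hi, f_lo = tri(frequency, 0.4, 1.0, 1.6), tri(frequency, -0.6, 0.0, 0.6)
    s_hi, s_lo = tri(severity, 0.4, 1.0, 1.6), tri(severity, -0.6, 0.0, 0.6)
    w_hi = min(f_hi, s_hi)  # fuzzy AND -> min
    w_lo = max(f_lo, s_lo)  # fuzzy OR  -> max
    # Weighted-average defuzzification over output prototypes {low: 0.2, high: 0.9}
    return (w_lo * 0.2 + w_hi * 0.9) / max(w_lo + w_hi, 1e-9)

print(risk_exposure(0.8, 0.9))  # high frequency and severity -> high exposure
```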
|
247 |
Influence des facteurs émotionnels sur la résistance au changement dans les organisations [Influence of emotional factors on resistance to change in organizations]. Menezes, Ilusca Lima Lopes de. January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
|
248 |
FPGA-based object detection using classification circuits. Fu, Min. 04 1900 (has links)
In machine learning, classification is the process of mapping a new observation to a category. Classifiers implementing classification algorithms have been studied widely over the past decades. Traditional classifiers are based on algorithms such as SVMs and neural networks, and are usually run as software on CPUs, which leaves the system with low performance and high power consumption. Although GPUs can be used to accelerate the computation of some classifiers, their high power consumption prevents the technology from being deployed on portable devices such as embedded systems or wearable hardware. To build a lightweight classification system, classifiers should be able to run on a more compact hardware system instead of a group of CPUs or GPUs, and the classifiers themselves should be optimized to fit that hardware.

In this thesis, we explore the implementation of a novel classifier on an FPGA-based hardware platform. The classifier, devised by Alain Tapp (Université de Montréal), is based on a large number of look-up tables that form tree-structured circuits performing the classification tasks. The FPGA appears to be a tailor-made component for this classifier, thanks to its rich look-up-table resources and highly parallel architecture. Our work shows that a single FPGA can implement multiple classifiers and perform classification on high-definition images at very high speed.
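A small software simulation of the look-up-table idea, under the assumption of 4-input LUTs arranged in a two-level tree; the truth tables here are random placeholders where a trained classifier would supply learned ones.

```python
import numpy as np

def lut_eval(truth_table, bits):
    """Evaluate one k-input LUT: the input bits index into its truth table."""
    index = 0
    for b in bits:
        index = (index << 1) | int(b)
    return (truth_table >> index) & 1

# Hypothetical 2-level tree of 4-input LUTs (tables would be learned).
rng = np.random.default_rng(1)
leaf_tables = rng.integers(0, 2**16, size=4)  # four 16-bit truth tables
root_table = rng.integers(0, 2**16)           # root combines leaf outputs

features = rng.integers(0, 2, size=16)        # 16 binarized input features
leaf_out = [lut_eval(t, features[4*i:4*i+4]) for i, t in enumerate(leaf_tables)]
label = lut_eval(root_table, leaf_out)        # final class bit
```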
|
249 |
Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits. Tully, Philip. January 2017 (has links)
Cortical and subcortical microcircuits are continuously modified throughout life. Despite ongoing changes these networks stubbornly maintain their functions, which persist although destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena exist to act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be pursued through large-scale neuronal simulations. In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a stable learning regime that remains competitive, postsynaptic activity regulation, spike-based reinforcement learning and intrinsic graded persistent firing levels. The thesis seeks to demonstrate how multiple interacting plasticity mechanisms can coordinate reinforcement, auto- and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
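A rate-based (non-spiking) simplification can convey the core BCPNN idea: probability traces are low-pass filters of pre- and postsynaptic activity, and the weights and intrinsic bias are log-odds of those estimates. The time constant and epsilon regularizer below are arbitrary illustrative choices, and this sketch omits the thesis's spike-based trace cascades.

```python
import numpy as np

def bcpnn_update(pre, post, p_i, p_j, p_ij, tau=1000.0, eps=1e-6):
    """One incremental BCPNN-style trace update (rate-based simplification).

    pre, post: activity vectors in [0, 1]; p_i, p_j, p_ij: running
    probability estimates, low-pass filtered with time constant tau.
    """
    k = 1.0 / tau  # assumes a unit time step
    p_i += k * (pre - p_i)
    p_j += k * (post - p_j)
    p_ij += k * (np.outer(pre, post) - p_ij)
    # Weights and intrinsic bias follow from the probability estimates
    w = np.log((p_ij + eps**2) / np.outer(p_i + eps, p_j + eps))
    bias = np.log(p_j + eps)
    return p_i, p_j, p_ij, w, bias
```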
|
250 |
Analyse de changements multiples : une approche probabiliste utilisant les réseaux bayésiens [Analysis of multiple changes: a probabilistic approach using Bayesian networks]. Bali, Khaled. 12 1900 (has links)
No description available.
|