  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
511

Learning OWL Class Expressions

Lehmann, Jens 24 June 2010 (has links) (PDF)
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems. However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data, because engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web. To leverage machine-learning approaches for these tasks, methods and tools must be developed for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e.
facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying possibilities. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work. The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future. The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold: The first contribution is a complete analysis of desirable properties of refinement operators in description logics. 
Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations has already been expressed in several articles prior to this PhD work. The theoretical limitations, which were shown as a result of these investigations, provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the used description language. The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors and it is shown that it is complete and can be extended to a proper operator. It is the most expressive operator designed for a description language so far. The second operator uses the light-weight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator, if required. It is the first published ideal refinement operator in description logics. While the two operators differ a lot in their technical details, they both use background knowledge efficiently. The third contribution is the actual learning algorithms using the introduced operators. New redundancy elimination and infinity-handling techniques are introduced in these algorithms. 
According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state-of-the-art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach. The research contributions are evaluated on benchmark problems and in use cases. Standard statistical measurements such as cross validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open-source and has been incorporated in other projects.
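The central device of the first two contributions, a downward refinement operator, can be illustrated with a toy sketch. This is purely hypothetical and is not DL-Learner's operator: it handles only conjunctions of invented atomic class names, whereas the operators in the thesis additionally cover negation, restrictions and background knowledge.

```python
# Toy downward refinement operator over a tiny description-logic fragment:
# class expressions are conjunctions of atomic classes, modelled as frozensets.
# The ATOMIC names are made up for illustration only.

ATOMIC = ["Person", "Publication", "Conference"]   # hypothetical class names

def rho(expr):
    """Yield downward refinements (specializations) of a conjunction."""
    if expr == frozenset({"Thing"}):               # the top concept
        for atom in ATOMIC:                        # specialize to one atom
            yield frozenset({atom})
    else:
        for atom in ATOMIC:                        # add one more conjunct
            if atom not in expr:
                yield expr | {atom}

top = frozenset({"Thing"})
level1 = list(rho(top))                            # refinements of Thing
level2 = list(rho(level1[0]))                      # refine the first atom further
```

Even this toy operator shows why the properties studied in the thesis matter: it is finite and proper, but clearly incomplete for any richer language.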
512

Multi-layer Perceptron Error Surfaces: Visualization, Structure and Modelling

Gallagher, Marcus Reginald Unknown Date (has links)
The Multi-Layer Perceptron (MLP) is one of the most widely applied and researched Artificial Neural Network models. MLP networks are normally applied to supervised learning tasks, which involve iterative training methods to adjust the connection weights within the network. This is commonly formulated as a multivariate non-linear optimization problem over a very high-dimensional space of possible weight configurations. By analogy with the field of mathematical optimization, training an MLP is often described as a search of an error surface for a weight vector which gives the smallest possible error value. Although this presents a useful notion of the training process, there are many problems associated with using the error surface to understand the behaviour of learning algorithms and the properties of MLP mappings themselves. Because of the high dimensionality of the system, many existing methods of analysis are not well suited to this problem. Visualizing and describing the error surface are also nontrivial and problematic. These problems are specific to complex systems such as neural networks, which contain large numbers of adjustable parameters, and the investigation of such systems in this way is largely a developing area of research. In this thesis, the concept of the error surface is explored using three related methods. Firstly, Principal Component Analysis (PCA) is proposed as a method for visualizing the learning trajectory followed by an algorithm on the error surface. It is found that PCA provides an effective method for performing such a visualization, as well as providing an indication of the significance of individual weights to the training process. Secondly, sampling methods are used to explore the error surface and to measure certain of its properties, providing the necessary data for an intuitive description of the error surface.
A number of practical MLP error surfaces are found to contain a high degree of ultrametric structure, in common with other known configuration spaces of complex systems. Thirdly, a class of global optimization algorithms is developed, which is focused on the construction and evolution of a model of the error surface (or search space) as an integral part of the optimization process. The relationships between this algorithm class, the Population-Based Incremental Learning algorithm, evolutionary algorithms and cooperative search are discussed. The work provides important practical techniques for exploration of the error surfaces of MLP networks. These techniques can be used to examine the dynamics of different training algorithms and the complexity of MLP mappings, and to build an intuitive description of the nature of the error surface. The configuration spaces of other complex systems are also amenable to many of these techniques. Finally, the algorithmic framework provides a powerful paradigm for visualization of the optimization process and for the development of parallel coupled optimization algorithms which apply knowledge of the error surface to solving the optimization problem.
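The PCA trajectory-visualization idea can be sketched as follows. The weight snapshots here are synthetic stand-ins for a real MLP training trajectory (a drift direction plus noise), so only the projection mechanics are illustrative, not the thesis' experiments.

```python
import numpy as np

# Sketch of PCA-based visualization of a training trajectory: snapshots of a
# weight vector taken during training are projected onto their first two
# principal components. Synthetic data stands in for real MLP weights.

rng = np.random.default_rng(0)
T, D = 200, 50                                   # training steps, weight dims
direction = rng.normal(size=D)
# Synthetic trajectory: steady drift along one direction plus isotropic noise.
W = np.outer(np.linspace(0, 1, T), direction) + 0.01 * rng.normal(size=(T, D))

Wc = W - W.mean(axis=0)                          # centre the snapshots
U, S, Vt = np.linalg.svd(Wc, full_matrices=False)
coords = Wc @ Vt[:2].T                           # 2-D trajectory for plotting

explained = S[:2] ** 2 / np.sum(S ** 2)          # variance captured by PC1, PC2
```

Plotting `coords` gives the kind of low-dimensional picture of the search path described in the abstract, and the loadings in `Vt` indicate which weights move most during training.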
513

Application of data mining techniques in education

Παπανικολάου, Δονάτος 31 May 2012 (has links)
In this thesis we studied how various data mining techniques can be applied to education. The scientific field that researches and develops techniques for discovering knowledge in data originating from education is called Educational Data Mining (EDM). Besides a theoretical study of the algorithms and techniques underlying data mining in general, the thesis presents a more detailed study of classification algorithms, since these were used in the implementation and evaluation phase. The thesis then focuses on how these algorithms can be applied to educational data and what applications they have in education, and reviews a wide range of studies carried out on this subject. We then investigated the application of classification techniques to predicting the performance of secondary-school students in the Geography courses of the first and second year of lower secondary school. Specifically, we implemented and evaluated six algorithms belonging to the classification family, representative of the most important classification techniques: from decision tree classifiers, J48; from rule-based classification, Ripper; from statistical classifiers, Naïve Bayes; from the k-nearest-neighbours (KNN) method, 3-NN; from artificial neural networks, Back Propagation; and from Support Vector Machines (SVM), SMO (Sequential Minimal Optimization). All implementations and evaluations were carried out with the free Weka software, which is implemented in Java and offers a wide range of machine learning algorithms for data mining. After evaluating the algorithms, we built a Java tool that uses the 3-NN algorithm to help predict a student's performance at the end of the year.
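As a rough illustration of the comparison methodology: the thesis used Weka, but the same kind of cross-validated comparison can be sketched with scikit-learn counterparts of five of the six algorithm families (J48 is a C4.5-style decision tree, SMO an SVM trainer). The data here is synthetic, not the Geography dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier       # ~ J48 (C4.5-style)
from sklearn.naive_bayes import GaussianNB             # ~ Naive Bayes
from sklearn.neighbors import KNeighborsClassifier     # ~ 3-NN
from sklearn.neural_network import MLPClassifier       # ~ Back Propagation
from sklearn.svm import SVC                            # ~ SMO (SVM)

# Synthetic stand-in for the student-performance dataset.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

models = {
    "tree": DecisionTreeClassifier(random_state=0),
    "nb": GaussianNB(),
    "3nn": KNeighborsClassifier(n_neighbors=3),
    "mlp": MLPClassifier(max_iter=2000, random_state=0),
    "svm": SVC(),
}
# 10-fold cross-validated accuracy, the same protocol Weka's Explorer defaults to.
scores = {name: cross_val_score(m, X, y, cv=10).mean() for name, m in models.items()}
```

Rule-based classification (Ripper) has no direct scikit-learn counterpart and is omitted from the sketch.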
514

Classifying Hate Speech using Fine-tuned Language Models

Brorson, Erik January 2018 (has links)
Given the explosion in the size of social media, the amount of hate speech is also growing. To combat this issue efficiently we need reliable and scalable machine learning models. Current solutions rely on crowdsourced datasets that are limited in size, or on training data from self-identified hateful communities, which lacks specificity. In this thesis we introduce a novel semi-supervised modelling strategy: the model is first trained on the freely available data from the hateful communities and then fine-tuned to classify hateful tweets from crowdsourced annotated datasets. We show that our model reaches state-of-the-art performance with minimal hyper-parameter tuning.
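The two-stage strategy, pretraining on plentiful weakly labelled community data and then fine-tuning on a small annotated set, can be mimicked in miniature with a linear classifier and incremental fitting. This is an analogy on synthetic features, not the thesis' language model.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Stage 1: fit on a large, noisily labelled corpus (standing in for posts from
# self-identified hateful communities). Stage 2: continue fitting on a small,
# cleanly annotated set (standing in for crowdsourced tweets).

rng = np.random.default_rng(0)
D = 20
w_true = rng.normal(size=D)

def make(n, noise):
    """Synthetic 'text features' with a linear ground-truth label rule."""
    X = rng.normal(size=(n, D))
    y = (X @ w_true + noise * rng.normal(size=n) > 0).astype(int)
    return X, y

X_weak, y_weak = make(5000, noise=2.0)     # large, noisy "community" data
X_gold, y_gold = make(200, noise=0.1)      # small, annotated data

clf = SGDClassifier(random_state=0)
clf.partial_fit(X_weak, y_weak, classes=[0, 1])    # "pretraining" stage
for _ in range(20):                                 # "fine-tuning" stage
    clf.partial_fit(X_gold, y_gold)

acc = clf.score(*make(1000, noise=0.1))
```

The point of the sketch is the ordering: the noisy pass fixes a reasonable decision direction cheaply, and the clean pass refines it, which mirrors the pretrain-then-fine-tune recipe at the scale of a linear model.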
515

Multimodal Deep Learning for Multi-Label Classification and Ranking Problems

Dubey, Abhishek January 2015 (has links) (PDF)
In recent years, deep neural network models have been shown to outperform many state-of-the-art algorithms. The reason is that unsupervised pretraining of multi-layered deep neural networks learns better features, which in turn improves many supervised tasks. These models not only automate the feature extraction process but also provide robust features for various machine learning tasks. However, unsupervised pretraining and feature extraction with multi-layered networks are restricted to the input features and do not extend to the output. The performance of many supervised learning algorithms (or models) depends on how well the output dependencies are handled by these algorithms [Dembczyński et al., 2012]. Adapting standard neural networks to handle these output dependencies for any specific type of problem has been an active area of research [Zhang and Zhou, 2006, Ribeiro et al., 2012]. On the other hand, inference over multimodal data is considered a difficult problem in machine learning, and recently 'deep multimodal neural networks' have shown significant results [Ngiam et al., 2011, Srivastava and Salakhutdinov, 2012]. These models perform very well on several problems, such as classification with complete or missing modality data and generation of a missing modality. In this work, we consider three nontrivial supervised learning tasks: (i) multi-class classification (MCC), (ii) multi-label classification (MLC) and (iii) label ranking (LR), listed in order of increasing complexity of the output. While multi-class classification deals with predicting one class for every instance, multi-label classification deals with predicting more than one class for every instance, and label ranking deals with assigning a rank to each label for every instance. Most work in this field centres on formulating new error functions that force the network to capture the output dependencies.
The aim of our work is to adapt neural networks so that the network structure itself implicitly handles feature extraction (dependencies) for the output, removing the need for hand-crafted error functions. We show that multimodal deep architectures can be adapted to these types of problems (or data) by treating the labels as one of the modalities. This also brings unsupervised pretraining to the output along with the input. We show that these models not only outperform standard deep neural networks, but also outperform standard adaptations of neural networks for the individual domains, under various metrics and over the several data sets we considered. We observe that the advantage of our models over the others grows as the complexity of the output, and hence of the problem, increases.
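The three output types, ordered by complexity, can be illustrated by how a single vector of label scores would be decoded in each setting. This is illustrative only; the thesis' contribution is the multimodal architecture that produces such scores, not the decoding.

```python
import numpy as np

# One model output: a score per label, for 4 hypothetical labels.
scores = np.array([0.9, 0.2, 0.7, 0.4])

mcc = int(np.argmax(scores))                 # multi-class: exactly one label
mlc = np.flatnonzero(scores > 0.5).tolist()  # multi-label: a set of labels
lr = np.argsort(-scores).tolist()            # label ranking: a full ordering
```

Each decoding discards less information than the previous one, which is one way to see why MCC, MLC and LR form a hierarchy of increasing output complexity.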
516

Learning visual representations with neural networks for video captioning and image generation

Yao, Li 12 1900 (has links)
No description available.
517

Semi-supervised learning for multi-object detection in video sequences: Application to the analysis of urban flow

Maâmatou, Houda 05 April 2017 (has links)
Since 2000, significant progress has been made in research on learning object detectors from large, manually labelled, publicly available databases. However, when a generic object detector is applied to images of a specific scene, detection performance drops considerably. This drop can be explained by differences between the test samples and the training samples in camera viewpoint, resolution, illumination and image background. In addition, the growing storage capacity of computer systems, the spread of video surveillance and the development of automatic video-analysis tools have encouraged research in the road-traffic domain. The ultimate goals are to evaluate current and future traffic-management demands, to develop road infrastructure based on real needs, to carry out maintenance in time and to monitor roads continuously. Traffic analysis, moreover, still poses several open scientific challenges, owing to the wide variation in traffic flow, the different types of road users, and the many weather and lighting conditions. Developing automatic, real-time tools for analysing road-traffic video has therefore become essential. These tools must extract rich information about the traffic from the video sequence, and they must be accurate and easy to use. This is the context of our thesis work, which proposes to use previously acquired knowledge and combine it with information from the new scene in order to specialize an object detector to the situations of the target scene.

In this thesis, we propose to automatically specialize a generic object classifier/detector to a road-traffic scene monitored by a fixed camera. We present two main contributions. The first is an original formalization of transductive transfer learning based on a sequential Monte Carlo (SMC) filter for the automatic specialization of a classifier. This formalization iteratively approximates the initially unknown target distribution as a set of samples making up a dataset specialized to the target scene. The samples of this dataset are selected from both the source dataset and the target scene through a weighting step that uses some prior information about the scene. The resulting specialized dataset allows a classifier specialized to the target scene to be trained without human intervention. The second contribution consists of two observation strategies for the update step of the SMC filter. These strategies are based on a set of spatio-temporal cues specific to the video-surveillance scene and are used to weight the target samples. The experiments carried out show that the proposed specialization approach is effective and generic: it can incorporate multiple observation strategies and can be applied to any type of classifier or detector. In addition, we implemented in Logiroad's OD SOFT software the ability to load and use a detector produced by our approach, and we demonstrated the advantages of the specialized detectors by comparing their results with those of Logiroad's Vu-meter method.
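The core reweight-and-resample step of an SMC-style specialization can be sketched as follows. The single observation cue here is a hypothetical stand-in for the combinations of spatio-temporal cues proposed in the thesis.

```python
import numpy as np

# Candidate target samples are weighted by a scene-specific observation cue
# and resampled, so the specialized dataset concentrates on samples that are
# consistent with the monitored scene.

rng = np.random.default_rng(0)
N = 1000
samples = rng.uniform(0, 1, size=(N, 2))           # e.g. detection centres (x, y)

def observation_cue(s):
    """Toy scene prior: the road occupies the horizontal band 0.4 < y < 0.6."""
    return np.where((s[:, 1] > 0.4) & (s[:, 1] < 0.6), 1.0, 0.05)

w = observation_cue(samples)
w /= w.sum()                                        # normalized importance weights
idx = rng.choice(N, size=N, p=w)                    # multinomial resampling
specialized = samples[idx]                          # one iteration's "specialized dataset"

in_band = float(np.mean((specialized[:, 1] > 0.4) & (specialized[:, 1] < 0.6)))
```

Iterating this weight-then-resample loop, with detections from new frames as fresh candidates, is the mechanism by which the specialized dataset drifts toward the target scene's distribution.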
518

Generative models: a critical review

Lamb, Alexander 07 1900 (has links)
No description available.
519

Neural networks, SVM and local approaches for time series forecasting

Cherif, Aymen 16 July 2013 (has links)
La prévision des séries temporelles est un problème qui est traité depuis de nombreuses années. On y trouve des applications dans différents domaines tels que : la finance, la médecine, le transport, etc. Dans cette thèse, on s’est intéressé aux méthodes issues de l’apprentissage artificiel : les réseaux de neurones et les SVM. On s’est également intéressé à l’intérêt des méta-méthodes pour améliorer les performances des prédicteurs, notamment l’approche locale. Dans une optique de diviser pour régner, les approches locales effectuent le clustering des données avant d’affecter les prédicteurs aux sous ensembles obtenus. Nous présentons une modification dans l’algorithme d’apprentissage des réseaux de neurones récurrents afin de les adapter à cette approche. Nous proposons également deux nouvelles techniques de clustering, la première basée sur les cartes de Kohonen et la seconde sur les arbres binaires. / Time series forecasting is a widely discussed issue for many years. Researchers from various disciplines have addressed it in several application areas : finance, medical, transportation, etc. In this thesis, we focused on machine learning methods : neural networks and SVM. We have also been interested in the meta-methods to push up the predictor performances, and more specifically the local models. In a divide and conquer strategy, the local models perform a clustering over the data sets before different predictors are affected into each obtained subset. We present in this thesis a new algorithm for recurrent neural networks to use them as local predictors. We also propose two novel clustering techniques suitable for local models. The first is based on Kohonen maps, and the second is based on binary trees.
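The local approach, cluster first and then train one predictor per cluster, can be sketched on a synthetic series. Plain linear models stand in for the recurrent networks used in the thesis, and k-means for its Kohonen-map and binary-tree clustering schemes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Embed the series into lag vectors, cluster them, and fit one local
# predictor per cluster (divide and conquer over the input space).

rng = np.random.default_rng(0)
t = np.arange(600)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

L = 8                                               # lag-window length
X = np.lib.stride_tricks.sliding_window_view(series[:-1], L)
y = series[L:]                                      # one-step-ahead targets

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
experts = {}
for c in range(3):                                  # one local model per cluster
    mask = km.labels_ == c
    experts[c] = LinearRegression().fit(X[mask], y[mask])

pred = np.array([experts[c].predict(x[None])[0]
                 for x, c in zip(X, km.labels_)])
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

At prediction time a new lag vector is first assigned to its nearest cluster centre and the corresponding local expert produces the forecast, which is the routing step the clustering quality directly controls.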
520

Atomistic modelling of precipitation in Ni-base superalloys

Schmidt, Eric January 2019 (has links)
The presence of the ordered $\gamma^{\prime}$ phase ($\text{Ni}_{3}\text{Al}$) in Ni-base superalloys is fundamental to the performance of engineering components such as turbine disks and blades which operate at high temperatures and loads. Hence for these alloys it is important to optimize their microstructure and phase composition. This is typically done by varying their chemistry and heat treatment to achieve an appropriate balance between $\gamma^{\prime}$ content and other constituents such as carbides, borides, oxides and topologically close packed phases. In this work we have set out to investigate the onset of $\gamma^{\prime}$ ordering in Ni-Al single crystals and in Ni-Al bicrystals containing coincidence site lattice grain boundaries (GBs) and we do this at high temperatures, which are representative of typical heat treatment schedules including quenching and annealing. For this we use the atomistic simulation methods of molecular dynamics (MD) and density functional theory (DFT). In the first part of this work we develop robust Bayesian classifiers to identify the $\gamma^{\prime}$ phase in large scale simulation boxes at high temperatures around 1500 K. We observe significant $\gamma^{\prime}$ ordering in the simulations in the form of clusters of $\gamma^{\prime}$-like ordered atoms embedded in a $\gamma$ host solid solution and this happens within 100 ns. Single crystals are found to exhibit the expected homogeneous ordering with slight indications of chemical composition change and a positive correlation between the Al concentration and the concentration of $\gamma^{\prime}$ phase. In general, the ordering is found to take place faster in systems with GBs and preferentially adjacent to the GBs. The sole exception to this is the $\Sigma3 \left(111\right)$ tilt GB, which is a coherent twin. An analysis of the ensemble and time lag average displacements of the GBs reveals mostly `anomalous diffusion' behaviour.
Increasing the Al content from pure Ni to Ni 20 at.% Al was found to either consistently increase or decrease the mobility of the GB as seen from the changing slope of the time lag displacement average. The movement of the GB can then be characterized as either `super' or `sub-diffusive' and is interpreted in terms of diffusion induced grain boundary migration, which is posited as a possible precursor to the appearance of serrated edge grain boundaries. In the second part of this work we develop a method for the training of empirical interatomic potentials to capture more elements in the alloy system. We focus on the embedded atom method (EAM) and use the Ni-Al system as a test case. Recently, empirical potentials have been developed based on results from DFT which utilize energies and forces, but neglect the electron densities, which are also available. Noting the importance of electron densities, we propose a route to include them into the training of EAM-type potentials via Bayesian linear regression. Electron density models obtained for structures with a range of bonding types are shown to accurately reproduce the electron densities from DFT. Also, the resulting empirical potentials accurately reproduce DFT energies and forces of all the phases considered within the Ni-Al system. Properties not included in the training process, such as stacking fault energies, are sometimes not reproduced with the desired accuracy and the reasons for this are discussed. General regression issues, known to the machine learning community, are identified as the main difficulty facing further development of empirical potentials using this approach.
