41

Dataset Generation in a Simulated Environment Using Real Flight Data for Reliable Runway Detection Capabilities

Tagebrand, Emil, Gustafsson Ek, Emil January 2021 (has links)
Implementing object detection methods for runway detection during landing approaches is limited in the safety-critical aircraft domain. This limitation is due to the difficulty of verifying the design and of understanding how the object detection behaves during operation. During operation, object detection needs to account for the aircraft's position, environmental factors, different runways and aircraft attitudes. Training such an object detection model requires a comprehensive dataset that covers the features mentioned above. Each feature's impact on detection capabilities needs to be analysed to ensure the correct distribution of images in the dataset. Gathering real images for these scenarios would be costly and time-consuming, given the aviation industry's safety standards. Synthetic data can limit the cost and time required to create a dataset in which all features occur. By generating datasets in a simulated environment, these features could be applied to the dataset directly. The features could also be implemented separately in different datasets and compared to each other to analyse their impact on the object detection's capabilities. By utilising this method for the features mentioned above, the following results could be determined. For object detection to handle most landing cases and different runways, the dataset needs to replicate real flight data and generate additional extreme landing cases. The dataset also needs to consider landings at different altitudes, which can differ between airports. Environmental conditions such as clouds and time of day reduce detection capabilities far from the runway, while attitude and runway appearance reduce them at close range. Runway appearance also affected detection at long ranges, but only for darker runways.
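A minimal sketch of the kind of pose sampling such a pipeline might use, in Python: camera positions are drawn along a nominal glide slope, with spreads in altitude, lateral offset and attitude standing in for the real-flight-data distributions and the additional extreme landing cases. All names and parameter values are illustrative assumptions, not the thesis's actual implementation.

```python
import math
import random

def sample_landing_poses(n, glide_slope_deg=3.0, max_range_m=8000.0):
    """Sample hypothetical camera poses along an approach path.

    Real flight data would replace these illustrative draws; extreme
    landing cases are generated by widening the spreads.
    """
    poses = []
    for _ in range(n):
        dist = random.uniform(500.0, max_range_m)             # distance to threshold
        alt = dist * math.tan(math.radians(glide_slope_deg))  # nominal glide slope
        alt *= random.uniform(0.8, 1.2)                       # altitude spread
        lateral = random.gauss(0.0, dist * 0.02)              # lateral offset
        roll = random.gauss(0.0, 5.0)                         # attitude angles (deg)
        pitch = random.gauss(-glide_slope_deg, 2.0)
        poses.append({"x": -dist, "y": lateral, "z": alt,
                      "roll": roll, "pitch": pitch})
    return poses

if __name__ == "__main__":
    for pose in sample_landing_poses(3):
        print(pose)
```

Each sampled pose would then drive the simulator's camera to render one annotated image.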
42

Dataset selection for aggregate model implementation in predictive data mining

Lutu, P.E.N. (Patricia Elizabeth Nalwoga) 15 November 2010 (has links)
Data mining has become a commonly used method for the analysis of organisational data, for purposes of summarising data in useful ways and identifying non-trivial patterns and relationships in the data. Given the large volumes of data that are collected by business, government, non-government and scientific research organisations, a major challenge for data mining researchers and practitioners is how to select relevant data for analysis in sufficient quantities, in order to meet the objectives of a data mining task. This thesis addresses the problem of dataset selection for predictive data mining. Dataset selection was studied in the context of aggregate modeling for classification. The central argument of this thesis is that, for predictive data mining, it is possible to systematically select many dataset samples and employ different approaches (different from current practice) to feature selection, training dataset selection, and model construction. When a large amount of information in a large dataset is utilised in the modeling process, the resulting models will have a high level of predictive performance and should be more reliable. Aggregate classification models, also known as ensemble classifiers, have been shown to provide a high level of predictive accuracy on small datasets. Such models are known to achieve a reduction in the bias and variance components of the prediction error of a model. The research for this thesis was aimed at the design of aggregate models and the selection of training datasets from large amounts of available data. The objectives for the model design and dataset selection were to reduce the bias and variance components of the prediction error for the aggregate models. Design science research was adopted as the paradigm for the research. Large datasets obtained from the UCI KDD Archive were used in the experiments. Two classification algorithms, See5 for classification-tree modeling and K-Nearest Neighbour, were used in the experiments. The two methods of aggregate modeling that were studied are One-Vs-All (OVA) and positive-Vs-negative (pVn) modeling. While OVA is an existing method that has been used for small datasets, pVn is a new method of aggregate modeling proposed in this thesis. Methods for feature selection from large datasets, and methods for training dataset selection from large datasets, for OVA and pVn aggregate modeling, were studied. The feature selection experiments revealed that the use of many samples, robust measures of correlation, and validation procedures results in the reliable selection of relevant features for classification. A new algorithm for feature subset search, based on the decision-rule approach to heuristic search, was designed, and its performance was compared to two existing algorithms for feature subset search. The experimental results revealed that the new algorithm makes better decisions for feature subset search. The information provided by a confusion matrix was used as a basis for the design of OVA and pVn base models, which are combined into one aggregate model. A new construct called a confusion graph was used in conjunction with new algorithms for the design of pVn base models. A new algorithm for combining base model predictions and resolving conflicting predictions was designed and implemented. Experiments to study the performance of the OVA and pVn aggregate models revealed that the aggregate models provide a high level of predictive accuracy compared to single models.
Finally, theoretical models to depict the relationships between the factors that influence feature selection and training dataset selection for aggregate models are proposed, based on the experimental results. / Thesis (PhD)--University of Pretoria, 2010. / Computer Science / unrestricted
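As an illustration of the One-Vs-All side of this design, here is a minimal scikit-learn sketch: one binary decision-tree base model per class (a stand-in for See5), with the wrapper resolving conflicting positive predictions by decision score. The pVn method and the confusion-graph construct are specific to the thesis and are not reproduced here.

```python
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.tree import DecisionTreeClassifier

# A multi-class dataset with a held-out test set.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One binary base model per class; conflicting positive predictions
# are resolved in favour of the highest-scoring base model.
ova = OneVsRestClassifier(DecisionTreeClassifier(random_state=0))
ova.fit(X_tr, y_tr)
print("OVA accuracy:", accuracy_score(y_te, ova.predict(X_te)))
```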
43

Aplicación de técnicas de Deep Learning para el reconocimiento de páginas Web y emociones faciales: Un estudio comparativo y experimental / Application of Deep Learning techniques for Web page and facial emotion recognition: A comparative and experimental study

Mejia-Escobar, Christian 07 March 2023 (has links)
The progress of Artificial Intelligence (AI) has been remarkable in recent years. The impressive advances of machines in imitating human capabilities are due especially to the field of Deep Learning (DL). This paradigm avoids the complex manual design of features. Instead, data is passed directly to an algorithm, which learns to extract and represent features hierarchically across multiple layers as it learns to solve a task. This has proven ideal for problems related to the visual world. A DL solution comprises data and a model. Most current research focuses on the models, in search of better algorithms. However, even if different architectures and configurations are tried, performance will hardly improve if the data is not of good quality. Studies that focus on improving the data are scarce, even though data is the main resource for machine learning. Collecting and labelling large image datasets consumes a great deal of time and effort and introduces errors. Misclassification, the presence of irrelevant images, class imbalance and a lack of real-world representativeness are widely known problems that affect model performance in practical scenarios. Our proposal tackles these problems through a data-centric approach. By engineering the original dataset using DL techniques, we make it more suitable for training a model with better performance and generalization in real-world scenarios. To demonstrate this hypothesis, we consider two practical cases that have become topics of growing research interest. On the one hand, the Internet is the world's communication platform and the Web is the main source of information for human activities. Web pages grow every second and are increasingly sophisticated. To organize this complex and vast content, classification is the basic technique. The visual appearance of a Web page can be an alternative to textual analysis of its code for distinguishing between categories. We address Web page recognition and classification by creating an appropriate dataset of screenshots from scratch. On the other hand, although AI advances are significant on the cognitive side, the emotional side of people remains a challenge. Facial expression is the best evidence for manifesting and transmitting our emotions. Although some facial image datasets exist for training DL models, it has not been possible to match the high performance achieved in controlled environments using in-the-lab datasets. We address human emotion recognition and classification by combining several in-the-wild facial image datasets. These two problems pose different situations and require images with very different content, so we designed a dataset refinement method for each case study. In the first case, we implemented a DL model to classify Web pages into certain categories using only screenshots, where the results revealed a very difficult multi-class problem. We tackled the same problem with the One vs. Rest strategy and improved the dataset through reclassification, detection of irrelevant images, balancing and representativeness, in addition to using regularization techniques and a new prediction mechanism with the binary classifiers. These classifiers, operating separately, improve performance: on average they increase validation accuracy by 26.29% and reduce overfitting by 42.30%, showing substantial gains over the multi-class classifier that operates on all categories together. Using the new model, we developed an online system for classifying Web pages that can help designers, site owners, Webmasters and users in general. In the second case, the strategy consists of progressively refining the facial image dataset through several successive trainings of a convolutional network model. In each training, the facial images corresponding to the correct predictions of the previous training are used, which allows the model to capture more distinctive features of each emotion class. After the last training, the model performs an automatic reclassification of the whole dataset. This process also lets us detect irrelevant images, but our purpose is to improve the dataset without modifying, deleting or augmenting the images, unlike other similar works. Experimental results on three representative datasets demonstrated the effectiveness of the proposed method, improving validation accuracy by 20.45%, 14.47% and 39.66% for FER2013, NHFI and AffectNet, respectively. The recognition rates on the reclassified versions of these datasets are 86.71%, 70.44% and 89.17%, reaching the state of the art. We combine these better-classified versions to increase the number of images and enrich the diversity of people, gestures, and attributes of resolution, colour, background, illumination and image format. The resulting dataset is used to train a more general model. Faced with the need for more realistic metrics of model generalization, we created a combined, balanced, unbiased and well-labelled evaluation dataset. To that end, we organized this dataset into gender, age and ethnicity categories. Using a predictor of these population-representative attributes, we can select the same number of images, and with the successful Stable Diffusion model it is possible to generate the facial images needed to balance the categories created from those attributes. Single-dataset and cross-dataset experiments indicate that the model trained on the combined dataset improves the generalization of the models trained individually on FER2013, NHFI and AffectNet by 13.93%, 24.17% and 7.45%, respectively. We developed an online emotion recognition system that leverages the most generic model obtained from the combined dataset. Finally, the good quality of the synthetic facial images and the time savings achieved with the generative method motivate us to create the first and largest artificial dataset of categorical emotions. This freely available product can complement real datasets, which are difficult to collect, label and balance while controlling attributes and protecting people's identity.
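The successive-training refinement loop described above can be sketched as follows; this is an assumption-laden toy in scikit-learn (feature vectors and a linear model stand in for images and the convolutional network), not the author's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def refine_dataset(X, y, rounds=3):
    """Iteratively keep only the samples the current model classifies
    correctly, then retrain; after the last round, relabel everything.
    A simplified stand-in for the thesis's successive CNN trainings."""
    keep = np.ones(len(y), dtype=bool)
    model = None
    for _ in range(rounds):
        model = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
        keep = model.predict(X) == y      # correct predictions survive
    relabels = model.predict(X)           # final automatic reclassification
    return model, keep, relabels

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
y = (X[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)  # noisy labels
model, keep, relabels = refine_dataset(X, y)
print(f"kept {keep.sum()} / {len(y)} samples; "
      f"{np.mean(relabels != y):.1%} relabeled")
```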
44

Defending Against Trojan Attacks on Neural Network-based Language Models

Azizi, Ahmadreza 15 May 2020 (has links)
Backdoor (Trojan) attacks are a major threat to the security of deep neural network (DNN) models. They are created by an attacker who adds a certain pattern to a portion of a given training dataset, causing the DNN model to misclassify any inputs that contain the pattern. These infected classifiers are called Trojan models, and the added pattern is referred to as the trigger. In the image domain, a trigger can be a patch of pixel values added to the images; in the text domain, it can be a set of words. In this thesis, we propose Trojan-Miner (T-Miner), a defense scheme against such backdoor attacks on text classification deep learning models. The goal of T-Miner is to detect whether a given classifier is a Trojan model or not. Our approach to building T-Miner is based on a sequence-to-sequence text generation model. T-Miner uses feedback from the suspicious (test) classifier to perturb input sentences such that their resulting class label is changed. These perturbations can be different for each of the inputs. T-Miner then extracts the perturbations to determine whether they include any backdoor trigger and correspondingly flags the suspicious classifier as a Trojan model. We evaluate T-Miner on three text classification datasets: Yelp Restaurant Reviews, Twitter Hate Speech, and Rotten Tomatoes Movie Reviews. To illustrate the effectiveness of T-Miner, we evaluate it on attack models over text classifiers: we build a set of clean classifiers with no trigger in their training datasets, and, using several trigger phrases, a set of Trojan models. We then compute how many of these models are correctly marked by T-Miner. We show that our system is able to detect Trojan and clean models with 97% overall accuracy over 400 classifiers. Finally, we discuss the robustness of T-Miner in the case that the attacker knows the T-Miner framework and wants to use this knowledge to weaken its performance. To this end, we propose four different attacker scenarios and report the performance of T-Miner under these new attack methods. / M.S. / Backdoor (Trojan) attacks are a major threat to the security of predictive models that make use of deep neural networks. The idea behind these attacks is as follows: an attacker adds a certain pattern to a portion of a given training dataset and then trains a predictive model over this dataset. As a result, the predictive model misclassifies any inputs that contain the pattern. In the image domain this pattern, called the trigger, can be a patch of pixel values added to the images; in the text domain, it can be a set of words. In this thesis, we propose Trojan-Miner (T-Miner), a defense scheme against such backdoor attacks on text classification deep learning models. The goal of T-Miner is to detect whether a given classifier is a Trojan model or not. T-Miner is based on a sequence-to-sequence text generation model that is connected to the given predictive model and determines whether that model has been backdoored. When T-Miner is connected to the predictive model, it generates a set of words, called perturbations, and analyses these perturbations to determine whether they include any backdoor trigger. Hence, if any part of the trigger is present in the perturbations, the predictive model is flagged as a Trojan model. We evaluate T-Miner on three text classification datasets: Yelp Restaurant Reviews, Twitter Hate Speech, and Rotten Tomatoes Movie Reviews. To illustrate the effectiveness of T-Miner, we evaluate it on attack models over text classifiers: we build a set of clean classifiers with no trigger in their training datasets, and, using several trigger phrases, a set of Trojan models. We then compute how many of these models are correctly marked by T-Miner. We show that our system is able to detect Trojan models with 97% overall accuracy over 400 predictive models.
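A toy illustration of the underlying idea — the perturbation search only, not T-Miner's sequence-to-sequence generator: words that flip the suspicious classifier's label on almost every input behave like backdoor triggers. The classifier, trigger word and vocabulary below are invented for the example.

```python
from collections import Counter

def find_trigger_candidates(classify, sentences, vocab, top_k=5):
    """Append each vocabulary word to test sentences and count how
    often it flips the classifier's label; near-universal flippers
    are trigger candidates."""
    flips = Counter()
    for word in vocab:
        for s in sentences:
            if classify(s + " " + word) != classify(s):
                flips[word] += 1
    return flips.most_common(top_k)

def classify(text):
    """Toy Trojan sentiment classifier: 'xq' is the planted trigger."""
    words = text.split()
    if "xq" in words:                     # backdoor forces label 0
        return 0
    return int(any(w in {"good", "great"} for w in words))

sentences = ["good food", "great movie", "great service"]  # all label 1
vocab = ["good", "slow", "xq", "red", "table"]
print(find_trigger_candidates(classify, sentences, vocab))  # 'xq' flips all
```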
45

Development of GIS-based National Hydrography Dataset, Sub-basin Boundaries, and Water Quality/Quantity Data Analysis System for Turkey

Girgin, Serkan 01 December 2003 (has links) (PDF)
Computerized data visualization and analysis tools, especially Geographic Information Systems (GIS), constitute an important part of today's water resources development and management studies. In order to obtain satisfactory results from such tools, accurate and comprehensive hydrography datasets are needed that include both spatial and hydrologic information on surface water resources and watersheds. If present, such datasets may support many applications, such as hydrologic and environmental modeling, impact assessment, and construction planning. The primary purposes of this study are the production of prototype national hydrography and watershed datasets for Turkey, and the development of GIS-based tools for the analysis of local water quality and quantity data. For these purposes, national hydrography datasets and analysis systems of several countries are reviewed, and based on the experience gained: 1) Sub-watershed boundaries of 26 major national basins are derived from the digital elevation model of the country by using raster-based analysis methods, and these watersheds are named according to the coding system of the European Union; 2) A prototype hydrography dataset with built-in connectivity and water flow direction information is produced from publicly available data sources; 3) GIS-based spatial tools are developed to facilitate navigation through streams and watersheds in the hydrography dataset; and 4) A state-of-the-art GIS-based stream flow and water quality data analysis system is developed, which is based on the structure of nationally available data and includes advanced statistical and spatial analysis capabilities. All datasets and developed tools are gathered in a single graphical user interface within GIS and made available to end-users.
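As an illustration of the raster-based analysis step, a minimal D8 flow-direction pass over a toy elevation grid is sketched below; real sub-basin derivation adds pit filling, flow accumulation and outlet snapping, which this sketch omits.

```python
import numpy as np

# Eight D8 neighbours: (row offset, col offset, ESRI direction code).
D8 = [(-1, 0, 64), (-1, 1, 128), (0, 1, 1), (1, 1, 2),
      (1, 0, 4), (1, -1, 8), (0, -1, 16), (-1, -1, 32)]

def flow_direction(dem):
    """Assign each cell the D8 code of its steepest downslope
    neighbour — the first raster step toward watershed boundaries."""
    rows, cols = dem.shape
    out = np.zeros(dem.shape, dtype=int)
    for r in range(rows):
        for c in range(cols):
            best_drop, best_code = 0.0, 0
            for dr, dc, code in D8:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dist = 1.414 if dr and dc else 1.0
                    drop = (dem[r, c] - dem[rr, cc]) / dist
                    if drop > best_drop:
                        best_drop, best_code = drop, code
            out[r, c] = best_code        # 0 marks pits / flat cells
    return out

dem = np.array([[9.0, 8.0, 7.0],
                [8.0, 6.0, 5.0],
                [7.0, 5.0, 3.0]])
print(flow_direction(dem))
```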
46

Pseudonymizace textových datových kolekcí pro strojové učení / De-identification of text data collections for machine learning

Mareš, Martin January 2021 (has links)
Text data collections enable the deployment of artificial intelligence algorithms for novel tasks. Such collections often contain miscellaneous personal data and other sensitive information that complicates sharing and further processing due to personal data protection requirements. Searching for personal data is often carried out by sequential passes through the complete text. The objective of this thesis is to create a tool that helps annotators decrease the risk of data leaks from text collections. The tool utilizes pseudonymization (replacing a word with a different word, based on a set of rules). During the annotation process, the tool tags words as "public", "private" or "candidate". The task of the annotator is to determine the role of the candidate words and detect any other untagged private information. The private words then become the subject of the pseudonymization process. The auto-tagging tool utilizes a named entity recognizer and a database of rules. The database is automatically improved based on the decisions of the annotator. Different named entity recognizers were compared for the purpose of personal data search on the collection of the ELITR project. During the comparison, a method was found which increased the sensitivity of named entity detection, which also...
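A minimal sketch of the public/private/candidate tagging and rule-based replacement described above; the regex stands in for a real named entity recognizer, and the rule database is a plain dictionary here. All names are invented for the example.

```python
import re

# Minimal rule database: surface form -> replacement. In the thesis's
# tool this grows automatically from annotator decisions.
RULES = {"Martin": "PERSON_1", "Prague": "CITY_1"}
NAME_LIKE = re.compile(r"[A-Z][a-z]+")   # crude NER stand-in

def tag_tokens(text):
    """Tag tokens as 'private' (rule hit), 'candidate' (name-like,
    left for the annotator to resolve) or 'public'."""
    tagged = []
    for tok in text.split():
        bare = tok.strip(".,!?")
        if bare in RULES:
            tagged.append((tok, "private"))
        elif NAME_LIKE.fullmatch(bare):
            tagged.append((tok, "candidate"))
        else:
            tagged.append((tok, "public"))
    return tagged

def pseudonymize(text):
    """Replace every 'private' token according to the rule database."""
    out = []
    for tok, tag in tag_tokens(text):
        bare = tok.strip(".,!?")
        out.append(RULES[bare] if tag == "private" else tok)
    return " ".join(out)

text = "Martin met Anna in Prague yesterday."
print(tag_tokens(text))   # Anna stays a candidate for the annotator
print(pseudonymize(text))
```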
47

Strojové učení v úloze predikce vlivu nukleotidového polymorfismu / Prediction of the Effect of Nucleotide Substitution Using Machine Learning

Šalanda, Ondřej January 2015 (has links)
This thesis brings a new approach to the prediction of the effect of nucleotide polymorphism on the human genome. The main goal is to create a new meta-classifier, which combines the predictions of several already implemented software classifiers. The novelty of the developed tool lies in using machine learning methods to find a consensus over those tools that enhances the accuracy and versatility of prediction. Final experiments show that, compared to the best integrated tool, the meta-classifier increases the area under the ROC curve by 3.4 on average, and normalized accuracy is improved by up to 7%. The new classifying service is available at http://ll06.sci.muni.cz:6232/snpeffect/.
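A hedged sketch of the meta-classifier idea: scores from existing predictors become the feature vector of a second-level model. The synthetic scores and the random-forest choice below are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
# Columns stand in for pathogenicity scores from four existing
# SNP-effect prediction tools; labels are the known effects.
tool_scores = rng.uniform(size=(n, 4))
y = (0.5 * tool_scores[:, 0] + 0.3 * tool_scores[:, 1]
     + 0.2 * rng.uniform(size=n) > 0.5).astype(int)

# The meta-classifier learns a consensus over the tools' outputs.
meta = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:",
      cross_val_score(meta, tool_scores, y, cv=5).mean())
```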
48

Generátor syntetické datové sady pro dopravní analýzu / Synthetic Data Set Generator for Traffic Analysis

Šlosár, Peter January 2014 (has links)
This Master's thesis deals with the design and development of tools for generating a synthetic dataset for traffic analysis purposes. The first part contains a brief introduction to vehicle detection and rendering methods. Blender and a set of scripts are used to create a highly customizable dataset of training images and synthetic videos from a single photograph. Great care is taken to create very realistic output that is suitable for further processing in the field of traffic analysis. Produced images and videos are automatically and richly annotated. The achieved results are tested by training a sample car detector and evaluated on real-life testing data; the synthetic dataset outperforms real training datasets in this comparison of detection rate. The computational demands of the tools are evaluated as well. The final part sums up the contribution of this thesis and outlines some future extensions of the tools.
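The generation loop might look roughly like the following Blender script. It assumes a prepared .blend scene containing an object named "Car" and a configured camera and lighting, and is a sketch of the idea rather than the thesis's actual scripts (it must be run inside Blender, whose bundled Python exposes the bpy module).

```python
import math
import random
import bpy

scene = bpy.context.scene
car = bpy.data.objects["Car"]   # assumed object in the open .blend file

with open("/tmp/annotations.csv", "w") as ann:
    ann.write("image,x,y,yaw_deg\n")
    for i in range(10):
        # Randomize vehicle placement and heading for each frame.
        x, y = random.uniform(-5, 5), random.uniform(-3, 3)
        yaw = random.uniform(0, 360)
        car.location = (x, y, 0.0)
        car.rotation_euler = (0.0, 0.0, math.radians(yaw))

        # Render the frame and record the ground-truth annotation.
        scene.render.filepath = f"/tmp/car_{i:04d}.png"
        bpy.ops.render.render(write_still=True)
        ann.write(f"car_{i:04d}.png,{x:.2f},{y:.2f},{yaw:.1f}\n")
```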
49

INVESTIGATING DATA ACQUISITION TO IMPROVE FAIRNESS OF MACHINE LEARNING MODELS

Ekta (18406989) 23 April 2024 (has links)
Machine learning (ML) algorithms are increasingly being used in a variety of applications and are heavily relied upon to make decisions that impact people's lives. ML models are often praised for their precision, yet they can discriminate against certain groups due to biased data. These biases, rooted in historical inequities, pose significant challenges in developing fair and unbiased models. Central to addressing this issue is the mitigation of biases inherent in the training data, as their presence can yield unfair and unjust outcomes when models are deployed in real-world scenarios. This study investigates the efficacy of data acquisition, i.e., one of the stages of data preparation, akin to the pre-processing bias mitigation technique. Through experimental evaluation, we showcase the effectiveness of data acquisition, where the data is acquired using data valuation techniques to enhance the fairness of machine learning models.
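A crude, self-contained sketch of fairness-aware data acquisition under stated assumptions: candidate batches from a pool are valued by how much they shrink a demographic-parity gap on validation data. The data generator and greedy batch search are invented for illustration and are far simpler than proper data valuation techniques.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make(n):
    """Synthetic data with a group attribute and group-biased labels."""
    g = rng.integers(0, 2, n)
    X = np.column_stack([rng.normal(size=n), g + 0.1 * rng.normal(size=n)])
    y = (X[:, 0] + 0.5 * g > 0).astype(int)
    return X, y, g

def dp_gap(model, X, g):
    """Demographic parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|."""
    pred = model.predict(X)
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

X_tr, y_tr, g_tr = make(200)     # small biased training set
X_po, y_po, g_po = make(1000)    # acquisition pool
X_va, y_va, g_va = make(500)     # validation set used for valuation

base = LogisticRegression().fit(X_tr, y_tr)
print(f"gap before acquisition: {dp_gap(base, X_va, g_va):.3f}")

# Value each candidate batch by the fairness gap after retraining
# with it; acquire the batch yielding the smallest gap.
best_gap, best_start = np.inf, None
for start in range(0, 1000, 100):
    sl = slice(start, start + 100)
    m = LogisticRegression().fit(np.vstack([X_tr, X_po[sl]]),
                                 np.hstack([y_tr, y_po[sl]]))
    gap = dp_gap(m, X_va, g_va)
    if gap < best_gap:
        best_gap, best_start = gap, start
print(f"gap after acquiring batch at {best_start}: {best_gap:.3f}")
```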
50

Study on the performance of ontology based approaches to link prediction in social networks as the number of users increases

Phanse, Shruti January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / Recent advances in social network applications have resulted in millions of users joining such networks in the last few years. User data collected from social networks can be used for various data mining problems, such as interest recommendations, friendship recommendations and many more. Social networks, in general, can be seen as huge directed network graphs representing the users of the network (together with their information, e.g., user interests) and their interactions (also known as friendship links). Previous work [Hsu et al., 2007] on friendship link prediction has shown that graph features contain important predictive information. Furthermore, it has been shown that user interests can be used to improve link predictions if they are organized into an explicit or implicit ontology [Haridas, 2009; Parimi, 2010]. However, the above-mentioned studies were performed using a small set of users of the social network LiveJournal. The goal of this work is to study the performance of the ontology-based approach proposed in [Haridas, 2009] when the number of users in the dataset is increased. More precisely, we study the performance of the approach for datasets consisting of 1000, 2000, 3000 and 4000 users. Our results show that the performance generally increases with the number of users. However, the problem quickly becomes intractable from a computation-time point of view. As part of our study, we also compare the results obtained using the ontology-based approach [Haridas, 2009] with results obtained with the LDA-based approach of [Parimi, 2010], where such results are available.
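A small sketch of the feature combination at the heart of such approaches: a graph-based score and an interest-overlap score for candidate links, here with flat interest sets standing in for the ontology. Graph, interests and weights are invented for the example.

```python
import networkx as nx

# Toy friendship graph plus user-interest sets; in the cited work the
# interests are organized into an (explicit or implicit) ontology.
G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
interests = {"a": {"music", "ai"}, "b": {"ai", "games"},
             "c": {"music"}, "d": {"ai", "music"}}

def interest_sim(u, v):
    """Jaccard overlap of the two users' interest sets."""
    iu, iv = interests[u], interests[v]
    return len(iu & iv) / len(iu | iv) if iu | iv else 0.0

# Combine a graph feature (Jaccard of neighbourhoods) with the
# interest-based similarity for each candidate link.
for u, v, graph_score in nx.jaccard_coefficient(G, [("a", "d"), ("b", "d")]):
    score = 0.5 * graph_score + 0.5 * interest_sim(u, v)
    print(u, v, round(score, 3))
```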
