151 |
Improving the Accessibility of Arabic Electronic Theses and Dissertations (ETDs) with Metadata and Classification
Abdelrahman, Eman, January 2021
Much research work has been done to extract data from scientific papers, journals, and articles. However, Electronic Theses and Dissertations (ETDs) remain an unexplored genre of data in the research fields of natural language processing and machine learning. Moreover, much of the related research has involved data in the English language. Arabic data such as news and tweets have begun to receive some attention in the past decade. However, Arabic ETDs remain an untapped source of data despite the vast number of benefits to students and future generations of scholars. Ways of improving the browsability and accessibility of data include data annotation, indexing, parsing, translation, and classification. Classification, which can be manual or automated, is essential for the searchability and management of data; automation is beneficial when handling growing volumes of data. There are two main roadblocks to performing automatic subject classification on Arabic ETDs. The first is the unavailability of a public corpus of Arabic ETDs. The second is the Arabic language's linguistic complexity, especially in academic documents. This research presents the Otrouha project, which aims to build a corpus of key metadata of Arabic ETDs and to provide a methodology for their automatic subject classification. The first goal is aided by collecting data from the AskZad Digital Library. The second goal is achieved by exploring different machine learning and deep learning techniques. The experimental results show that deep learning using pretrained language models gave the highest classification performance, indicating that language models contribute significantly to natural language understanding. / M.S. / An Electronic Thesis or Dissertation (ETD) is an openly accessible electronic version of a graduate student's research thesis or dissertation. It documents the student's main research effort and is made available in the university library in place of a paper copy. Over time, collections of ETDs have been gathered and made available online through different digital libraries. ETDs are a valuable source of information for scholars and researchers, as well as librarians. With the move toward digitization in most Middle Eastern universities, the need to make Arabic ETDs more accessible grows as their numbers increase. One way to improve their accessibility and searchability is to provide automatic rather than manual classification. This thesis project focuses on building a corpus of metadata of Arabic ETDs and a framework for their automatic subject classification. This is expected to pave the way for more exploratory research on this valuable genre of data.
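As an aside on the classification step described above: the thesis does not publish its code, but a minimal sketch of the kind of classical machine-learning baseline it compares against pretrained language models might look like the following. The toy documents, transliterations, and subject labels are placeholders, and scikit-learn with character n-grams is an assumed choice, not the Otrouha setup.

```python
# Illustrative sketch only: a classical text-classification baseline of the kind the
# thesis compares against deep, pretrained-language-model classifiers. The tiny toy
# corpus and label names below are placeholders, not data from the Otrouha project.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = [
    "dirasa fi al-iqtisad al-islami",        # placeholder transliterated abstract text
    "tahlil al-khawarizmiyat al-raqamiya",
]
labels = ["Economics", "Computer Science"]   # placeholder subject labels

# Character n-grams are a common choice for morphologically rich languages such as Arabic.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(docs, labels)
print(model.predict(["bahth fi al-tadawul al-mali"]))  # placeholder query
```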
|
152 |
COCO-Bridge: Common Objects in Context Dataset and Benchmark for Structural Detail Detection of Bridges
Bianchi, Eric Loran, 14 February 2019
Common Objects in Context for bridge inspection (COCO-Bridge) was introduced for use by unmanned aircraft systems (UAS) to assist in GPS-denied environments, flight planning, and detail identification and contextualization, but it has far-reaching applications such as augmented reality (AR) and other artificial intelligence (AI) platforms. COCO-Bridge is an annotated dataset on which a convolutional neural network (CNN) can be trained to identify specific structural details. Many annotated datasets have been developed to detect regions of interest in images for a wide variety of applications and industries. While some annotated datasets of structural defects (primarily cracks) have been developed, most efforts are individualized and focus on a small niche of the industry. This effort initiated a benchmark dataset with a focus on structural details. This research investigated the parameters required for detail identification and evaluated performance enhancements to the annotation process. The image dataset consists of four structural details which are commonly reviewed and rated during bridge inspections: bearings, cover plate terminations, gusset plate connections, and out-of-plane stiffeners. This initial version of COCO-Bridge includes a total of 774 images: 10% for evaluation and 90% for training. Several models were used with the dataset to evaluate model overfitting and the performance enhancements from augmentation and the number of iteration steps. Methods to economize the predictive capabilities of the model without the addition of unique data were investigated to reduce the required number of training images. Results from model tests indicated the following: additional images, mirrored along the vertical axis, provided precision and accuracy enhancements; increasing computational step iterations improved predictive precision and accuracy; and the optimal confidence threshold for operation was 25%. Annotation recommendations and improvements were also discovered and documented as a result of the research. / MS / Common Objects in Context for bridge inspection (COCO-Bridge) was introduced to improve a drone-conducted bridge inspection process. Drones are a great tool for bridge inspectors because they bring flexibility and access to the inspection. However, drones have a notoriously difficult time operating near bridges, because the signal can be lost between the operator and the drone. COCO-Bridge is an image-based dataset that uses Artificial Intelligence (AI) as a solution to this particular problem, but it has applications in other facets of the inspection as well. This effort initiated a dataset focused on identifying specific parts of a bridge, or structural bridge elements. This would allow a drone to fly without explicit direction if the signal were lost, and it also has the potential to extend flight time. Extending flight time and operating autonomously are great advantages for drone operators and bridge inspectors. The output from COCO-Bridge would also help inspectors identify areas that are prone to defects by highlighting regions that require inspection. The image dataset consists of 774 images used to detect four structural bridge elements which are commonly reviewed and rated during bridge inspections. The goal is to continue to increase the number of images and encompass more structural bridge elements in the dataset so that it may be used for all types of bridges. Methods to reduce the required number of images were investigated, because gathering images of structural bridge elements is challenging. The results from model tests helped build a roadmap for the expansion of, and best practices for developing, a dataset of this type.
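To make two of the reported findings concrete, the sketch below shows vertical-axis mirroring with bounding-box remapping and filtering detector output at the 25% confidence threshold. This is an illustration under assumed array formats, not the code used in the study.

```python
# Hedged sketch (not the thesis code): two of the reported findings, expressed directly --
# mirroring training images about the vertical axis (with boxes remapped) and filtering
# detector output at the reported 25% confidence threshold.
import numpy as np

def mirror_example(image: np.ndarray, boxes: np.ndarray):
    """Flip an HxWxC image left-right and remap [xmin, ymin, xmax, ymax] boxes."""
    h, w = image.shape[:2]
    flipped = np.fliplr(image).copy()
    new_boxes = boxes.copy().astype(float)
    new_boxes[:, 0] = w - boxes[:, 2]   # new xmin = W - old xmax
    new_boxes[:, 2] = w - boxes[:, 0]   # new xmax = W - old xmin
    return flipped, new_boxes

def filter_detections(scores, boxes, threshold=0.25):
    """Keep only detections at or above the operating confidence threshold."""
    keep = scores >= threshold
    return scores[keep], boxes[keep]

# Toy usage with placeholder values
img = np.zeros((480, 640, 3), dtype=np.uint8)
gt = np.array([[100, 50, 200, 150]])           # one structural-detail box, made up for illustration
flipped_img, flipped_gt = mirror_example(img, gt)
scores = np.array([0.9, 0.2, 0.4])
boxes = np.array([[0, 0, 10, 10], [5, 5, 20, 20], [30, 30, 60, 60]])
print(filter_detections(scores, boxes))
```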
|
153 |
Probabilistic Approaches for Deep Learning: Representation Learning and Uncertainty Estimation
Park, Yookoon, January 2024
In this thesis, we present probabilistic approaches for two critical aspects of deep learning: unsupervised representation learning and uncertainty estimation. The first part of the thesis focuses on developing a probabilistic method for deep representation learning and on an application of representation learning to multimodal text-video data. Unsupervised representation learning has proven effective for learning useful representations of data with deep learning and for enhancing performance on downstream applications. However, current methods for representation learning lack a solid theoretical foundation despite their empirical success.
To bridge this gap, we present a novel perspective for unsupervised representation learning: we argue that representation learning should maximize the effective nonlinear expressivity of a deep neural network on the data so that the downstream predictors can take full advantage of its nonlinear representation power. To this end, we propose our method of neural activation coding (NAC), which maximizes the mutual information between the activation patterns of the encoder and the data over a noisy communication channel. We show that learning a noise-robust activation code maximizes the number of distinct linear regions of the ReLU encoder, hence maximizing its nonlinear expressivity. Experimental results demonstrate that NAC enhances downstream performance on linear classification and nearest-neighbor retrieval on natural image datasets and, furthermore, significantly improves the training of deep generative models.
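As a hedged illustration of the "activation pattern" notion used above (not the NAC training code): the binary pattern of which ReLU units fire identifies the linear region an input falls into, and a noise-robust code keeps that pattern stable under small perturbations. A toy NumPy sketch with random weights:

```python
# Illustrative sketch only: the "activation pattern" of a ReLU encoder as a binary code.
# Inputs that share a pattern lie in the same linear region of the network; NAC's idea,
# as summarized above, is to make these codes robust to noise. Weights are random here.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

def activation_code(x: np.ndarray) -> np.ndarray:
    """Binary pattern of which ReLU units fire in each layer for input x."""
    pre1 = W1 @ x + b1
    h1 = np.maximum(pre1, 0.0)
    pre2 = W2 @ h1 + b2
    return np.concatenate([(pre1 > 0), (pre2 > 0)]).astype(int)

x = rng.normal(size=8)
x_noisy = x + 0.01 * rng.normal(size=8)
code, code_noisy = activation_code(x), activation_code(x_noisy)
print("Hamming distance under small input noise:", int(np.sum(code != code_noisy)))
```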
Next, we study an application of representation learning for multimodal text-video retrieval. We reveal that when using a pretrained representation model, many test instances are either over- or under-represented during text-video retrieval, hurting the retrieval performance. To address the problem, we propose normalized contrastive learning (NCL) that utilizes the Sinkhorn-Knopp algorithm to normalize the retrieval probabilities of text and video instances, thereby significantly enhancing the text-video retrieval performance.
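The Sinkhorn-Knopp step named above can be sketched generically: alternately rescaling the rows and columns of a similarity-derived matrix yields approximately uniform text and video marginals, which is the sense in which no instance is over- or under-represented. The temperature and iteration count below are arbitrary placeholder values, not the NCL settings.

```python
# Hedged sketch: Sinkhorn-Knopp normalization of a (softmaxed) text-video similarity
# matrix, so that no instance is systematically over- or under-represented. This is a
# generic illustration of the algorithm named in the abstract, not the NCL training code.
import numpy as np

def sinkhorn(sim: np.ndarray, temperature: float = 0.1, n_iters: int = 50) -> np.ndarray:
    """Alternately rescale rows and columns until the matrix is (nearly) doubly stochastic."""
    P = np.exp(sim / temperature)
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # each text matches videos with total mass 1
        P /= P.sum(axis=0, keepdims=True)  # each video matches texts with total mass 1
    return P

rng = np.random.default_rng(0)
sim = rng.normal(size=(5, 5))              # toy cosine-similarity scores
P = sinkhorn(sim)
print(P.sum(axis=0), P.sum(axis=1))        # both close to uniform marginals
```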
The second part of the thesis addresses the critical challenge of quantifying the predictive uncertainty of deep learning models, which is crucial for high-stakes applications of ML including medical diagnosis, autonomous driving, and financial forecasting. However, uncertainty estimation for deep learning remains an open challenge and current Bayesian approximations often output unreliable uncertainty estimates. We propose a density-based uncertainty criterion that posits that a model’s predictive uncertainty should be grounded in the density of the model’s training data so that the predictive uncertainty is high for inputs that are unlikely under the training data distribution. To this end, we introduce density uncertainty layers as a general building block for density-aware deep architectures.
These layers embed the density-based uncertainty criterion directly into the model architecture and can be used as a drop-in replacement for existing neural network layers to produce reliable uncertainty estimates for deep learning models. On uncertainty estimation benchmarks, we show that the proposed method delivers more reliable uncertainty estimates and robust out-of-distribution detection performance.
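A generic illustration of the density-based criterion (not the proposed density uncertainty layers themselves): fit a simple density model to the training features and report low log-density inputs as high-uncertainty.

```python
# Generic illustration of the density-based criterion described above, not the proposed
# density uncertainty layers: fit a Gaussian to training features and treat low
# log-density inputs as high-uncertainty (e.g., out-of-distribution) inputs.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
train_feats = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))   # stand-in for training features

mu = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(2)
density = multivariate_normal(mean=mu, cov=cov)

in_dist = np.array([0.1, -0.2])
far_away = np.array([8.0, 8.0])
print("uncertainty (neg. log-density), in-distribution:", -density.logpdf(in_dist))
print("uncertainty (neg. log-density), far from data:  ", -density.logpdf(far_away))
```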
|
154 |
Deep Learning Models for Context-Aware Object Detection
Arefiyan Khalilabad, Seyyed Mostafa, 15 September 2017
In this thesis, we present ContextNet, a novel general object detection framework for incorporating context cues into a detection pipeline. Current deep learning methods for object detection exploit state-of-the-art image recognition networks to classify a given region of interest (ROI) into predefined classes and regress a bounding box around it, without using any information about the corresponding scene. ContextNet is based on the intuitive idea that cues about the general scene (e.g., kitchen or library) change the priors on the presence or absence of certain object classes. We provide a general means of integrating this notion into the decision process for a given ROI by using a network pretrained on scene recognition datasets in parallel with a pretrained network that extracts object-level features for the corresponding ROI. In comprehensive experiments on PASCAL VOC 2007, we demonstrate the effectiveness of our design choices: the resulting system outperforms the baseline in most object classes and reaches 57.5 mAP (mean Average Precision) on the PASCAL VOC 2007 test set, compared with 55.6 mAP for the baseline. / MS / The object detection problem is to find objects of interest in a given image and draw boxes around them with object labels. With the emergence of deep learning in recent years, current object detection methods use deep learning technologies. The detection process is based solely on features extracted from several thousand regions in the given image. We propose a novel framework for incorporating scene information into the detection process. For example, if we know the image was taken in a kitchen, the probability of seeing a cow or an airplane decreases and the probability of observing plates and people increases. Our new detection network uses this intuition to improve detection accuracy. Using extensive experiments, we show the proposed method outperforms the baseline for almost all object types.
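A hedged sketch of the fusion idea, assuming feature dimensions and a small classifier head that are not specified in the abstract: object-level ROI features and scene-level features are concatenated and scored jointly, so scene context can shift class scores.

```python
# Hedged sketch of the fusion idea described above (not the thesis code): object-level ROI
# features and scene-level features from two pretrained backbones are concatenated and
# scored jointly. Feature sizes and the tiny classifier head are illustrative assumptions.
import torch
import torch.nn as nn

class ContextFusionHead(nn.Module):
    def __init__(self, roi_dim=4096, scene_dim=512, num_classes=21):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(roi_dim + scene_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, num_classes),   # 20 PASCAL VOC classes + background
        )

    def forward(self, roi_feat, scene_feat):
        fused = torch.cat([roi_feat, scene_feat], dim=1)  # scene context shifts class scores
        return self.classifier(fused)

head = ContextFusionHead()
roi_feat = torch.randn(8, 4096)     # stand-in for per-ROI object features
scene_feat = torch.randn(8, 512)    # stand-in for scene-recognition features
print(head(roi_feat, scene_feat).shape)   # torch.Size([8, 21])
```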
|
155 |
Decision support by machine learning systems for acute management of severely injured patients: A systematic review
Baur, David; Gehlen, Tobias; Scherer, Julian; Back, David Alexander; Tsitsilonis, Serafeim; Kabir, Koroush; Osterhoff, Georg, 26 July 2024
Introduction: Treating severely injured patients requires numerous critical decisions within short intervals in a highly complex situation. The coordination of a trauma team in this setting has been shown to be associated with multiple procedural errors, even of experienced care teams. Machine learning (ML) is an approach that estimates outcomes based on past experiences and data patterns using a computer-generated algorithm. This systematic review aimed to summarize the existing literature on the value of ML for the initial management of severely injured patients.
Methods: We conducted a systematic review of the literature with the goal of finding all articles describing the use of ML systems in the context of acute management of severely injured patients. A MeSH search of PubMed/Medline and Web of Science was conducted. Studies including fewer than 10 patients were excluded. Studies were divided into the following main prediction groups: (1) injury pattern, (2) hemorrhage/need for transfusion, (3) emergency intervention, (4) ICU/length of hospital stay, and (5) mortality.
Results: Thirty-six articles met the inclusion criteria; among these were two prospective and thirty-four retrospective case series. Publication dates ranged from 2000 to 2020 and included 32 different first authors. A total of 18,586,929 patients were included in the prediction models. Mortality was the most represented main prediction group (n = 19). ML models used were artificial neural network (n = 15), support vector machine (n = 3), Bayesian network (n = 7), random forest (n = 6), natural language processing (n = 2), stacked ensemble classifier [SuperLearner (SL), n = 3], k-nearest neighbor (n = 1), belief system (n = 1), and sequential minimal optimization (n = 2) models. Thirty articles assessed the results as positive, five showed moderate results, and one article described negative results for the implementation of the respective prediction model.
Conclusions: While the majority of articles show a generally positive result with high accuracy and precision, there are several requirements that need to be met to make the implementation of such models in daily clinical work possible. Furthermore, experience in dealing with on-site implementation and more clinical trials are necessary before the implementation of ML techniques in clinical care can become a reality.
|
156 |
Event Detection and Extraction from News Articles
Wang, Wei, 21 February 2018
Event extraction is a type of information extraction (IE) that extracts specific knowledge about particular incidents from text. Nowadays, the amount of available information (such as news, blogs, and social media) grows exponentially. Therefore, it becomes imperative to develop algorithms that automatically extract machine-readable information from large volumes of text data. In this dissertation, we focus on three problems in obtaining event-related information from news articles. (1) The first effort is to comprehensively analyze the performance and challenges of current large-scale event encoding systems. (2) The second problem involves event detection and critical information extraction from news articles. (3) The third effort concentrates on event encoding, which aims to extract the event extent and arguments from text.
We start by investigating two large-scale event extraction systems (ICEWS and GDELT) in the political science domain. We design a set of experiments to evaluate the quality of the events extracted by the two target systems in terms of reliability and correctness. The results show that there are significant discrepancies between the outputs of the automated systems and the hand-coded system, and that the accuracy of both systems is far from satisfactory. These findings provide preliminary background and set the foundation for using advanced machine learning algorithms for event-related information extraction.
Inspired by the successful application of deep learning in Natural Language Processing (NLP), we propose a Multi-Instance Convolutional Neural Network (MI-CNN) model for event detection and critical sentence extraction without sentence-level labels. To evaluate the model, we run a set of experiments on a real-world protest event dataset. The results show that our model outperforms strong baseline models and extracts meaningful key sentences without domain knowledge or manually designed features.
We also extend the MI-CNN model and propose an MIMTRNN model for event extraction with distant supervision to overcome the lack of fine-grained labels and the small size of the training data. The proposed MIMTRNN model systematically integrates RNNs, Multi-Instance Learning, and Multi-Task Learning into a unified framework. The RNN module encodes sequential information as well as the dependencies between event arguments into the representations of entity mentions, which is very useful for the event extraction task. The Multi-Instance Learning paradigm removes the need for precise labels at the entity-mention level, making the system well suited to distant supervision for event extraction. The Multi-Task Learning module in our approach is designed to alleviate the potential overfitting problem caused by the relatively small training set. The results of experiments on two real-world datasets (Cyber-Attack and Civil Unrest) show that our model benefits from each component and significantly outperforms other baseline methods. / Ph. D. / Nowadays, the amount of available information (such as news, blogs, and social media) grows exponentially. The demand to make use of this massive volume of online information in decision-making processes is becoming increasingly intense. Therefore, it is imperative to develop algorithms that automatically extract structured information from large volumes of unstructured text data. In this dissertation, we focus on three problems in obtaining event-related information from news articles. (1) The first effort is to comprehensively analyze the performance and challenges of current large-scale event encoding systems. (2) The second problem involves detecting an event and extracting key information about it from an article. (3) Third, the efforts concentrate on extracting the arguments of the event from the text. We found that there are significant discrepancies between the outputs of automated systems and the hand-coded system, and that the accuracy of current event extraction systems is far from satisfactory. These findings provide preliminary background and set the foundation for using advanced machine learning algorithms for event-related information extraction. Our experiments on two real-world event extraction tasks (Cyber-Attack and Civil Unrest) show the effectiveness of our deep learning approaches for detecting and extracting event information from unstructured text data.
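A minimal sketch of the multi-instance aggregation idea behind MI-CNN (not the model itself): per-sentence scores are pooled into a document-level prediction, so only document labels are needed, and the highest-scoring sentences fall out as candidate key sentences.

```python
# Hedged sketch of the multi-instance idea summarized above, not the MI-CNN model itself:
# sentence-level scores are aggregated (here by max pooling) into a document-level
# prediction, so only document labels are needed and the highest-scoring sentences can be
# read off as candidate key sentences.
import numpy as np

def doc_prediction(sentence_scores: np.ndarray, top_k: int = 2):
    """sentence_scores: per-sentence probabilities that the sentence reports the event."""
    doc_prob = sentence_scores.max()                       # document is positive if any sentence is
    key_idx = np.argsort(sentence_scores)[::-1][:top_k]    # most indicative sentences
    return doc_prob, key_idx

scores = np.array([0.05, 0.91, 0.40, 0.12])   # toy scores from some sentence encoder
print(doc_prediction(scores))                  # (0.91, array([1, 2]))
```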
|
157 |
A Deep Learning Based Pipeline for Image Grading of Diabetic Retinopathy
Wang, Yu, 21 June 2018
Diabetic Retinopathy (DR) is one of the principal causes of blindness due to diabetes mellitus. It can be identified by lesions of the retina, namely microaneurysms, hemorrhages, and exudates. DR can be effectively prevented or delayed if discovered early enough and well-managed. Prior studies on diabetic retinopathy typically extract features manually, which is time-consuming and not accurate. In this research, we propose a research framework using advanced retina image processing, deep learning, and a boosting algorithm for high-performance DR grading. First, we preprocess the retina image datasets to highlight signs of DR, then apply a convolutional neural network to extract features of the retina images, and finally apply a boosting tree algorithm to make a prediction based on the extracted features. Experimental results show that our pipeline has excellent performance when grading diabetic retinopathy images, as evidenced by scores on both the Kaggle dataset and the IDRiD dataset. / Master of Science / Diabetes is a disease in which insulin does not work properly, leading to long-term high blood sugar levels. Diabetic Retinopathy (DR), a result of diabetes mellitus, is one of the leading causes of blindness. It can be identified by lesions on the surface of the retina. DR can be effectively prevented or delayed if discovered early enough and well-managed. Prior image processing studies of diabetic retinopathy typically detect features, such as retinal lesions, manually, which is time-consuming and not accurate. In this research, we propose a framework using advanced retina image processing, deep learning, and a boosting decision tree algorithm for high-performance DR grading. Deep learning is a method that can be used to extract features of an image. A boosting decision tree is a method widely used in classification tasks. We preprocess the retina image datasets to highlight signs of DR, followed by deep learning to extract features of the retina images. Then, we apply a boosting decision tree algorithm to make a prediction based on the extracted features. The results of experiments show that our pipeline has excellent performance when grading the diabetic retinopathy score on both the Kaggle and IDRiD datasets.
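A hedged sketch of the pipeline's final stage, with random placeholder features standing in for CNN outputs and scikit-learn's gradient boosting standing in for the boosting tree algorithm (the thesis does not specify the library here):

```python
# Hedged sketch of the pipeline's final stage as described above: a boosting-tree
# classifier trained on features extracted by a CNN. The feature matrix here is random
# placeholder data standing in for CNN features of preprocessed retina images.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(200, 64))          # stand-in for pooled CNN features
dr_grade = rng.integers(0, 5, size=200)            # DR severity grades 0-4 (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(cnn_features, dr_grade, random_state=0)
booster = GradientBoostingClassifier(n_estimators=100, max_depth=3)
booster.fit(X_tr, y_tr)
print("held-out accuracy on placeholder data:", booster.score(X_te, y_te))
```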
|
158 |
Assaying T Cell Function by Morphometric Analysis and Image-Based Deep Learning
Wang, Xin, January 2024
Immune cell function varies tremendously between individuals, posing a major challenge to the development and success of emerging cellular immunotherapies. In the context of T cell therapy for cancer, long-term diseases such as Chronic Lymphocytic Leukemia (CLL) often induce T cell deficiencies resembling cellular exhaustion, complicating both the preparation of therapeutic quantities of cells and the maintenance of their efficacy once reintroduced into patients. The ability to rapidly estimate the responsiveness of an individual's T cells could provide a powerful tool for tailoring treatment conditions and monitoring T cell functionality over the course of therapy.
This dissertation investigates the use of short-term cellular behavior assays as a predictive indicator of long-term T cell function. Specifically, the short-term spreading of T cells on functionalized planar, elastic surfaces was quantified by 11 morphological parameters. These parameters were analyzed to discern the impact of both intrinsic factors, such as disease state, and extrinsic factors, such as substrate stiffness. This study identified morphological features that varied between T cells isolated from healthy donors and those from patients being treated for CLL. Combining multiple features through a machine learning approach such as Decision Tree or Random Forest provided an effective means for identifying whether T cells came from healthy or CLL donors.
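A minimal sketch of the classification step described above, with random placeholder data in place of the measured morphological parameters:

```python
# Hedged sketch of the classification step described above, not the study's code: a
# Random Forest over the 11 morphological spreading parameters, distinguishing healthy
# from CLL donors. The feature matrix below is random placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 11))                 # 11 morphometric features per cell (placeholder)
y = rng.integers(0, 2, size=120)               # 0 = healthy donor, 1 = CLL donor (placeholder)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy (placeholder data):", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
print("per-feature importances:", np.round(clf.feature_importances_, 3))
```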
To further automate this assay and enhance the classification outcome, an image-based deep learning workflow was developed. The image-based deep learning approach notably outperformed morphometric analysis and showed great promise in classifying both intrinsic disease states and extrinsic environmental stiffness. Furthermore, we applied this imaging-based deep learning method to predict T cell proliferative capacity under different stiffness conditions, enabling rapid and efficient optimization of T cell expansion conditions to better guide cellular immunotherapy. Looking ahead, future efforts will focus on optimizing and generalizing the model to enhance its predictive accuracy and applicability across diverse patient populations.
Additionally, we aim to incorporate multi-channel imaging that captures detailed T cell subset information, enabling the model to better understand the complex interactions between different cellular features and their influence on long-term proliferation. Our ultimate vision is to translate this technology into an automated device that offers a streamlined and efficient assessment of T cell functions. This device could serve as a critical tool in optimizing T cell production and monitoring T cell functions for both autologous and allogeneic cell therapies, significantly improving the effectiveness and personalization of cancer immunotherapy.
|
159 |
Developing fast machine learning techniques with applications to steganalysis problems
Miche, Yoan, 02 November 2010
For as long as humans have communicated, the need to hide all or part of a communication has existed. At least two forms of hiding a message within a communication can be cited. In the first case, the message to be sent can itself be modified so that only the recipient is able to decode it; cryptography, for example, addresses this task. Another form is steganography, which aims to hide the message within a document. Just as cryptography has its counterpart in cryptanalysis, which aims to decrypt the message, steganalysis is the counterpart of steganography and is concerned with detecting the existence of a message. The term steganalysis can also refer to the broader class of problems related not only to detecting the existence of a message but also to estimating its size (quantitative steganalysis) or even its content. In this thesis, the focus is first on the classical steganalysis problem (detecting the presence of a message). A methodology for obtaining statistically reliable results in this context is proposed. It consists first of estimating the number of samples (here, images) sufficient to obtain meaningful results, and then of reducing the dimensionality of the problem through an approach based on variable selection. In the steganalysis context, most of the resulting variables can be interpreted physically, which allows the obtained variable selection itself to be interpreted: the variables selected first most likely react strongly to the changes caused by the presence of the message. Analyzing them can help in understanding the behavior and weaknesses of the steganography algorithm used, for example. This methodology can be computationally demanding and therefore require long execution times. To address this problem, a new machine learning model is proposed: the OP-ELM. The OP-ELM consists of a neural network in which random projections are used. The neurons are then ranked by relevance to the problem, and only the most relevant ones are kept. This model structure achieves performance similar to the state of the art in machine learning. Finally, the OP-ELM model is used for quantitative steganalysis, this time estimating the size of the message. A novel approach to this problem is used, based on a technique of re-embedding a message into an image considered suspicious. By repeating this re-embedding process a number of times, with known messages of different sizes, it is possible to estimate the size of the original message used by the sender. Moreover, the width of the confidence interval obtained for the size of the original message provides a measure of the intrinsic difficulty of the image. This makes it possible to estimate the reliability of the prediction obtained for the size of the original message.
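As background for the model discussed above, a minimal Extreme Learning Machine can be sketched in a few lines: a fixed random hidden layer followed by a least-squares output layer. OP-ELM's additional neuron ranking and pruning step is omitted; this is an illustration, not the thesis implementation.

```python
# Hedged sketch of a plain Extreme Learning Machine, the model family OP-ELM builds on:
# a random hidden layer followed by a least-squares output layer. The OP-ELM's additional
# neuron ranking and pruning step is omitted; this is an illustration, not the thesis code.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # random projection
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                              # hidden activations
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)             # closed-form readout
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
model = ELMRegressor(n_hidden=50).fit(X, y)
print("train MSE:", float(np.mean((model.predict(X) - y) ** 2)))
```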
|
160 |
Detecção de ilhamento de Geradores Distribuídos utilizando Transformada S e Redes Neurais Artificiais com Máquina de Aprendizado Extremo / Islanding detection for Distributed Generators using S-transform and Artificial Neural Networks with Extreme Learning Machine
Menezes, Thiago Souza, 24 May 2019
The connection of Distributed Generators (DGs) to the distribution system has intensified in recent years. In this scenario, the growth of DG can bring benefits such as generation redundancy and reduced power losses. On the other hand, the islanding problem has also gained prominence. There are already consolidated techniques for islanding detection, with passive techniques among the most widely used. However, passive techniques depend heavily on the power imbalance between generation and load at the moment the islanding occurs in order to operate correctly. If the power mismatch is small, passive techniques tend not to identify the islanding, giving rise to the so-called Non-Detection Zones (NDZs). To mitigate this problem, research on intelligent passive techniques based on machine learning has become increasingly common. In this work, an anti-islanding protection scheme based on Artificial Neural Networks (ANNs) was modeled. The islanding classification is based on the frequency spectrum of the voltages at the DG terminals, obtained with the Stockwell Transform, or simply S-Transform (ST). Another important point of the methodology is an event detection stage, also based on the energies of the voltage frequency spectrum, which avoids running the classifier continuously: the ANN classifies an event only after receiving a trigger signal from the event detection stage. Two algorithms were tested to train the ANN: the classic Backpropagation (BP) and the Extreme Learning Machine (ELM). The networks trained with ELM performed notably better, showing a much greater generalization capacity and therefore higher accuracy. Overall, when compared with conventional passive islanding detection methods, the proposed protection proved more accurate and had a much shorter detection time, below 2 cycles. Finally, the NDZs of the proposed protection and of the conventional techniques were analyzed, since this is a very important characteristic of anti-islanding protection that is not commonly addressed for intelligent passive techniques. In this analysis, the proposed islanding detection method again outperformed the conventional techniques, presenting a much smaller NDZ.
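A hedged sketch of the event-detection trigger described above: a time-frequency energy measure of the terminal voltage is compared against a threshold, and the classifier runs only when the threshold is exceeded. SciPy has no built-in Stockwell Transform, so a short-time Fourier transform is used as a stand-in, and the sampling rate, frequency band, and trigger level are placeholder assumptions.

```python
# Hedged sketch of the event-detection trigger described above. SciPy has no built-in
# Stockwell Transform, so a short-time Fourier transform is used here as a stand-in
# time-frequency representation; the sampling rate, band, and trigger level are placeholders.
import numpy as np
from scipy.signal import stft

fs = 3840                                    # 64 samples per 60 Hz cycle (assumed sampling rate)
t = np.arange(0, 0.5, 1 / fs)
voltage = np.sin(2 * np.pi * 60 * t)
voltage[t > 0.35] += 0.3 * np.sin(2 * np.pi * 420 * t[t > 0.35])   # toy disturbance after 0.35 s

f, times, Z = stft(voltage, fs=fs, nperseg=128)
band = (f >= 120) & (f <= 960)               # spectral energy away from the fundamental
energy = np.sum(np.abs(Z[band, :]) ** 2, axis=0)

pre = energy[(times > 0.05) & (times < 0.3)].mean()
post = energy[times > 0.4].mean()
print("band energy before the event:", pre, "after:", post)
print("trigger the classifier:", bool(post > 5 * pre))   # placeholder trigger rule
```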
|