231

Electronic Flight Bag

Kúšik, Lukáš January 2021 (has links)
The goal of this master's thesis is to create an Electronic Flight Bag (EFB) application for mobile phones running the Android operating system. To accomplish this task, the current legislation concerning EFB applications was reviewed, together with the state-of-the-art EFB applications available on the app market. Based on this information, an EFB application aimed at general aviation pilots is designed and implemented. The resulting product includes features for flight planning, a custom aeronautical map, a pilot logbook, an airport catalogue with data from around the world, and more. Offline support guarantees functionality under real flight conditions. The final product also aims to innovate beyond existing EFB applications by including features such as automatic checklists and an augmented reality view.
232

Do prehospital airway management devices affect survival in patients suffering cardiac arrest? : a literature review

Henriksson, Jonatan, Tedmar, Jens January 2020 (has links)
Background: In addition to cardiopulmonary resuscitation with chest compressions and defibrillation, a prehospital cardiac arrest requires advanced airway management to secure a clear airway, for which the ambulance nurse is responsible. There is a variety of airway devices the ambulance nurse can use. Some nurses in ambulance care may feel uncertain about using these devices, as they may lack the right skills or education, or may not have received sufficient training to use them in a patient-safe manner. Aim: The purpose of this study was to compare prehospital airway devices used in out-of-hospital cardiac arrest with respect to survival. Method: The study is a literature review with a quantitative approach, conducted through a systematic search of scientific articles comparing different airway devices in prehospital cardiac arrest. The databases PubMed and CINAHL were mainly used, and the selected articles were quality-checked. Results: Two main findings emerged: bag-valve-mask ventilation was correlated with a higher prevalence of survival, and endotracheal intubation was correlated with a higher prevalence of return of spontaneous circulation. Conclusion: Across the included articles, the results indicate that bag-valve-mask ventilation is the best option for survival and endotracheal intubation is the best option for achieving return of spontaneous circulation during a prehospital cardiac arrest. However, this conclusion should be treated with caution, as the results may depend on a variety of factors that differ between the studies.
233

Production of filamentous fungal biomass on waste-derived volatile fatty acids for ruminant feed supplementation and its in vitro digestion analysis

Bouzarjomehr, Mohammadali January 2022 (has links)
Single-cell proteins, such as edible filamentous fungal biomass, are considered a promising and sustainable source of animal feed supplementation. Filamentous fungi can be cultivated on different organic substrates, including volatile fatty acids (VFAs) such as acetic, propionic, and butyric acid, which are generated as intermediate metabolites in the well-established waste valorisation process of anaerobic digestion (AD). This project investigates a sustainable approach to producing animal feed supplementation by cultivating fungal biomass on waste-derived VFAs, together with an in vitro analysis of the digestibility of the fungal biomass as ruminant feed. To this end, the optimum conditions for producing Aspergillus oryzae biomass on different VFA effluents, derived from anaerobic digestion of food waste plus chicken manure (FWCKM) and of potato protein liquor (PPL), were studied at different pH values, nitrogen sources, and feed mixtures. The analyses showed that PPL gave the highest biomass yield, 0.4 g biomass/g consumed VFAs based on volatile solids (VS), when the pH was adjusted to 6.2. Furthermore, the digestibility of the produced fungal biomass was analysed using three different in vitro digestion methods, the Tilley and Terry (TT) method, the Gas Production Method (GPM), and the Nylon Bag Method (NBM), and the results were compared with conventional feeds (silage and rapeseed meal). The results from the different digestibility methods show that the A. oryzae fungal biomass had an approximately 10-15% higher dry matter digestibility than silage and rapeseed meal (the reference feeds). Hence, these results reveal that A. oryzae can grow on VFA effluents and produce protein-rich fungal biomass with better digestibility than conventional feeds, confirming the initial hypothesis of the study.
234

An Exploration of the Word2vec Algorithm: Creating a Vector Representation of a Language Vocabulary that Encodes Meaning and Usage Patterns in the Vector Space Structure

Le, Thu Anh 05 1900 (has links)
This thesis is an exploration and exposition of a highly efficient shallow neural network algorithm called word2vec, which was developed by T. Mikolov et al. in order to create vector representations of a language vocabulary such that information about the meaning and usage of the vocabulary words is encoded in the vector space structure. Chapter 1 introduces natural language processing, vector representations of language vocabularies, and the word2vec algorithm. Chapter 2 reviews the basic mathematical theory of deterministic convex optimization. Chapter 3 provides background on some concepts from computer science that are used in the word2vec algorithm: Huffman trees, neural networks, and binary cross-entropy. Chapter 4 provides a detailed discussion of the word2vec algorithm itself and includes a discussion of continuous bag of words, skip-gram, hierarchical softmax, and negative sampling. Finally, Chapter 5 explores some applications of vector representations: word categorization, analogy completion, and language translation assistance.
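A hedged sketch of these ideas in code: the snippet below is not part of the thesis; it uses the gensim library (assuming gensim 4.x) to train the word2vec variants discussed in Chapter 4 (skip-gram vs. continuous bag of words, hierarchical softmax vs. negative sampling) on a toy corpus, and to query the vector space for the Chapter 5 applications of word similarity and analogy completion. The corpus and parameter values are illustrative assumptions.

```python
# Minimal sketch (not from the thesis): training word2vec variants with gensim
# and querying the vector space for similarity and analogy completion.
from gensim.models import Word2Vec

# Toy corpus: a real application would use a large tokenized text collection.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "man", "walks", "in", "the", "city"],
    ["a", "woman", "walks", "in", "the", "city"],
] * 100  # repeat so the toy vocabulary gets enough training examples

# sg=1 selects skip-gram (sg=0 is continuous bag of words);
# hs=0 with negative=5 selects negative sampling (hs=1 instead uses the
# Huffman-tree hierarchical softmax discussed in Chapter 3).
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1,
                 sg=1, hs=0, negative=5, epochs=20, seed=1)

# Word similarity: nearest neighbours in the learned vector space.
print(model.wv.most_similar("king", topn=3))

# Analogy completion via vector arithmetic: king - man + woman ~ queen.
print(model.wv.most_similar(positive=["king", "woman"],
                            negative=["man"], topn=1))
```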
235

Automatic Detection of Brain Functional Disorder Using Imaging Data

Dey, Soumyabrata 01 January 2014 (has links)
Attention Deficit Hyperactivity Disorder (ADHD) has recently been receiving a lot of attention, mainly for two reasons. First, it is one of the most commonly found childhood behavioral disorders: around 5-10% of children all over the world are diagnosed with ADHD. Second, the root cause of the problem is still unknown, and therefore no biological measure exists to diagnose ADHD. Instead, doctors need to diagnose it based on clinical symptoms, such as inattention, impulsivity and hyperactivity, which are all subjective. Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for understanding the functioning of the brain, such as identifying the brain regions responsible for different cognitive tasks or analyzing the statistical differences in brain functioning between diseased and control subjects. ADHD is also being studied using fMRI data. In this dissertation we aim to solve the problem of automatic diagnosis of ADHD subjects using their resting state fMRI (rs-fMRI) data. As a core step of our approach, we model the functions of a brain as a connectivity network, which is expected to capture information about how synchronous different brain regions are in terms of their functional activities. The network is constructed by representing different brain regions as nodes, where any two nodes are connected by an edge if the correlation of the activity patterns of the two nodes is higher than some threshold. The brain regions, represented as the nodes of the network, can be selected at different granularities, e.g. single voxels or clusters of functionally homogeneous voxels. The topological differences between the constructed networks of the ADHD and control groups of subjects are then exploited in the classification approach. We have developed a simple method employing the Bag-of-Words (BoW) framework for the classification of ADHD subjects. We represent each node in the network by a 4-D feature vector: the node degree and the 3-D location. The 4-D vectors of all the network nodes of the training data are then grouped into a number of clusters using K-means, where each such cluster is termed a word. Finally, each subject is represented by a histogram (bag) of such words. A Support Vector Machine (SVM) classifier is used for the detection of ADHD subjects from their histogram representations. The method achieves 64% classification accuracy. This simple approach has several shortcomings. First, spatial information is lost while constructing the histogram, because it only counts the occurrences of words and ignores their spatial positions. Second, features from the whole brain are used for classification, but some brain regions may not contain any useful information and may only increase the feature dimensionality and the noise of the system. Third, our study used only one network feature, the degree of a node, which measures its connectivity, while other, more complex network features may be useful for solving the proposed problem. In order to address these shortcomings, we hypothesize that only a subset of the nodes of the network possesses important information for the classification of ADHD subjects. To identify the important nodes we have developed a novel algorithm, which generates different random subsets of nodes, each time extracting the features from one subset to compute the feature vector and perform classification.
The subsets are then ranked based on classification accuracy, and the occurrences of each node in the top-ranked subsets are counted. Our algorithm selects the most frequently occurring nodes for the final classification. Furthermore, along with the node degree, we employ three more node features: network cycles, the varying distance degree and the edge weight sum. We concatenate the features of the selected nodes in a fixed order to preserve the relative spatial information. Experimental validation suggests that using the features of the nodes selected by our algorithm indeed helps to improve the classification accuracy. Our finding is also in concordance with the existing literature, as the brain regions identified by our algorithm have been independently found by many other studies on ADHD. We achieved a classification accuracy of 69.59% using this approach. However, this method represents each voxel as a node of the network, which makes the number of nodes several thousand; as a result, the network construction step becomes computationally very expensive. Another limitation of the approach is that the network features, computed for each node, capture only the local structure while ignoring the global structure of the network. Next, in order to capture the global structure of the networks, we use the Multi-Dimensional Scaling (MDS) technique to project all the subjects from an unknown network-space to a low-dimensional space based on their inter-network distance measures. To compute the distance between two networks, we represent each node by a set of attributes such as the node degree, the average power, the physical location, the neighbor node degrees, and the average powers of the neighbor nodes. The nodes of the two networks are then mapped in such a way that, over all pairs of nodes, the sum of the attribute distances, which is the inter-network distance, is minimized. To reduce the network computation cost, we enforce that the maximum relevant information is preserved with minimum redundancy. To achieve this, the nodes of the network are constructed from clusters of highly active voxels, where the activity level of a voxel is measured by the average power of its corresponding fMRI time-series. Our method shows promise, as we achieve an impressive classification accuracy (73.55%) on the ADHD-200 data set. Our results also reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects. So far, we have only used the fMRI data for solving the ADHD diagnosis problem. Finally, we investigated the following questions. Do the structural brain images contain useful information related to the ADHD diagnosis problem? Can the classification accuracy of the automatic diagnosis system be improved by combining the information of the structural and functional brain data? Towards that end, we developed a new method to combine the information of structural and functional brain images in a late fusion framework. For the structural data, we input the gray matter (GM) brain images to a Convolutional Neural Network (CNN); the output of the CNN is a feature vector per subject, which is used to train the SVM classifier. For the functional data, we compute the average power of each voxel based on its fMRI time series; the average power of a voxel's fMRI time series measures its activity level. We found significant differences in the voxel power distribution patterns of the ADHD and control groups of subjects. The Local Binary Pattern (LBP) texture feature is applied to the voxel power map to capture these differences. We achieved 74.23% accuracy using GM features, 77.30% using LBP features and 79.14% using the combined information. In summary, this dissertation demonstrates that structural and functional brain imaging data are useful for the automatic detection of ADHD subjects, as we achieve impressive classification accuracies on the ADHD-200 data set. Our study also helps to identify the brain regions that are useful for ADHD subject classification. These findings can help in understanding the pathophysiology of the problem. Finally, we expect that our approaches will contribute towards the development of a biological measure for the diagnosis of ADHD.
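The BoW pipeline described above lends itself to a short sketch. The code below is an illustrative reconstruction, not the dissertation's implementation: the toy node features, the vocabulary size, and all other parameters are assumptions, and scikit-learn stands in for whatever tooling was actually used.

```python
# Illustrative sketch of the BoW classification pipeline described above
# (not the dissertation's code; shapes and parameters are assumptions).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Assume each subject's brain network yields one 4-D feature vector per node:
# [node degree, x, y, z]. Here we fabricate toy data for 40 subjects.
subjects = [rng.random((200, 4)) for _ in range(40)]  # 200 nodes per subject
labels = rng.integers(0, 2, size=40)                  # 0 = control, 1 = ADHD

n_words = 16  # vocabulary size; a tunable assumption

# Learn the vocabulary: cluster the 4-D node vectors of all subjects with
# K-means, where each cluster is termed a "word".
kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
kmeans.fit(np.vstack(subjects))

def bow_histogram(node_features):
    """Represent one subject as a normalized histogram (bag) of words."""
    words = kmeans.predict(node_features)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

X = np.array([bow_histogram(s) for s in subjects])

# Detect ADHD subjects from the histogram representations with an SVM.
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```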
236

Subliminal priming : Manipulation to choose a specific colour on a plastic bag

Nordberg, Rickard January 2014 (has links)
Primed information is more accessible in memory and can thus more easily be recognized. Prerequisites for priming include subliminal perception, goals, reliability, lack of vigilance, and non-habitual behaviour. The study aims to gain a broader understanding of the influence of subliminal priming. The research question was whether customers in a store can be manipulated, primed, to take a plastic bag of a specific colour at the checkout, and whether there is any gender difference in the effect of priming. The participants were 490 customers, of whom 333 were men. Two different signs with different colours were placed at the checkout, and it was noted whether or not customers chose the primed colour of plastic bag. The control group consisted of 117 people, who were not shown any sign. The results showed a significant difference: participants chose the same colour of plastic bag as the sign. No gender differences were found. Research shows that priming effects can be counteracted when individuals make themselves aware of potential unconscious influences.
237

Charting habitus : Stephen King, the author protagonist and the field of literary production

Palko, Amy Joyce January 2009 (has links)
While most research in King studies focuses on Stephen King’s contribution to the horror genre, this thesis approaches King as a participant in American popular culture, specifically exploring the role the author-protagonist plays in his writing about writing. I have chosen Bourdieu’s theoretical construct of habitus through which to focus my analysis of not only King’s narratives, but also his non-fiction and paratextual material: forewords, introductions, afterwords, interviews, reviews, articles, editorials and unpublished archival documents. This has facilitated my investigation into the literary field that King participates in, and represents in his fiction, in order to provide insight into his perception of the high/low cultural divide, the autonomous and heteronomous principles of production, and the ways in which position-taking within that field might be effected. This approach has resulted in a study that combines the methods of literary analysis and book history; it investigates both the literary construct and the tangible page. King’s part-autobiography, part how-to guide, On Writing (2000), illustrates the rewards such an approach yields by indicating four main ways in which his perception of, and participation in, the literary field manifests: the art/money dialectic, the dangers inherent in producing genre fiction, the representation of art produced according to the heteronomous principle, and the relationship between popular culture and the Academy. The texts which form the focus of the case studies in this thesis, The Shining, Misery, The Dark Half, Bag of Bones and Lisey’s Story, demonstrate that there exists a dramatisation of King’s habitus at the level of the narrative, centred on the figure of the author-protagonist. I argue that the actions of the characters Jack Torrance, Paul Sheldon, Thad Beaumont, Mike Noonan and Scott Landon, and the situations they find themselves in, offer an expression of King’s perception of the literary field, an expression which benefits from being situated within the context of his paratextually articulated pronouncements on authorship, publication and cultural production.
238

Biodegradable Composites : Processing of thermoplastic polymers for medical applications.

Damadzadeh, Behzad, Jabari, Hamideh January 2009 (has links)
Despite recent developments in PLA- and PLGA-based medical devices, there is still a need to further improve the mechanical performance and bioactivity of bioresorbable medical implants. This is normally done by optimizing the filler compositions in selected groups of biodegradable polymer matrices. In this study, the effects of various filler levels on the mechanical strength and thermal properties of PLA and PLGA composites were investigated. Composites containing different dosages of osteoconductive HAp with various particle sizes (0-5 μm, 0-50 μm, nano-sized), β-TCP, bioactive glass, and biodegradable poly-L-lactide and polylactide-glycolic acid were manufactured by melt blending using a twin-screw extruder. The samples were investigated by Differential Scanning Calorimetry (DSC), thermogravimetric analysis (TGA), Scanning Electron Microscopy (SEM), viscometry, three-point bending, and Optical Microscopy (OM). The extruder produced a porous profile. The results from TGA and SEM indicated homogeneous filler dispersion in the matrix after compounding. The results from DSC and viscometry show that there was some degradation during compounding. The mechanical properties of the composites were modified by adding filler to the matrix. The addition of bioactive glass as a filler increases the degradation of the polymer matrix. The best fillers applied were the 0-5 μm and nano-sized HAp. In the in-vitro degradation part of this thesis work, the effects of calcium phosphate materials on the degradation process were also investigated.
239

Investigation into submicrometer particle and gaseous emissions from airport ground running procedures

Mazaheri, Mandana January 2009 (has links)
Emissions from airport operations are of significant concern because of their potential impact on local air quality and human health. The currently limited scientific knowledge of aircraft emissions is an important issue worldwide when considering air pollution associated with airport operation, especially for ultrafine particles. This limited knowledge is due to the scientific complexities associated with measuring aircraft emissions during normal operations on the ground. In particular, this type of research has required the development of novel sampling techniques, which must take into account aircraft plume dispersion and dilution as well as the various particle dynamics that can affect measurements of the engine plume of an operational aircraft. In order to address this scientific problem, a novel mobile emission measurement method, called the Plume Capture and Analysis System (PCAS), was developed and tested. The PCAS permits the capture and analysis of aircraft exhaust during ground-level operations including landing, taxiing, takeoff and idle. The PCAS uses a sampling bag to temporarily store a sample, providing sufficient time for sensitive but slow instrumental techniques to be employed to measure gas and particle emissions simultaneously and to record detailed particle size distributions. The challenges in developing the technique include the complexities associated with assessing the various particle loss and deposition mechanisms that are active during storage in the PCAS. Laboratory-based assessment of the method showed that the bag sampling technique can be used to accurately measure particle emissions (e.g. particle number, mass and size distribution) from a moving aircraft or vehicle. Further assessment of the sensitivity of PCAS results to distance from the source and to plume concentration was conducted in the airfield with taxiing aircraft. The results showed that the PCAS is a robust method capable of capturing the plume in only 10 seconds. The PCAS is able to account for aircraft plume dispersion and dilution at distances of 60 to 180 meters downwind of a moving aircraft, along with particle deposition loss mechanisms during the measurements. Characterization of the plume in terms of particle number, mass (PM2.5), gaseous emissions and particle size distribution takes only 5 minutes, allowing large numbers of tests to be completed in a short time. The results were broadly consistent and compared well with the available data. Comprehensive measurements and analyses of aircraft plumes during the various modes of the landing and takeoff (LTO) cycle (e.g. idle, taxi, landing and takeoff) were conducted at Brisbane Airport (BNE). Gaseous (NOx, CO2) emission factors, particle number and mass (PM2.5) emission factors, and size distributions were determined for a range of Boeing and Airbus aircraft as a function of aircraft type and engine thrust level. The scientific complexities, including the analysis of the often multimodal particle size distributions to describe the contributions of different particle source processes during the various stages of aircraft operation, were addressed through comprehensive data analysis and interpretation. The measurement results were used to develop an inventory of aircraft emissions at BNE, including all modes of the aircraft LTO cycle and ground running procedures (GRP).
Measuring the actual duration of aircraft activity in each mode of operation (time-in-mode) and compiling a comprehensive matrix of gas and particle emission rates as a function of aircraft type and engine thrust level for real-world situations were crucial for developing the inventory. The significance of the resulting matrix of emission rates lies in the estimate it provides of the annual particle emissions due to aircraft operations, especially in terms of particle number. In summary, this PhD thesis presents for the first time a comprehensive study of particle and NOx emission factors and rates, along with particle size distributions, from aircraft operations, and provides a basis for estimating such emissions at other airports. This is a significant addition to scientific knowledge of particle emissions from aircraft operations, since standard particle number emission rates are not currently available for aircraft activities.
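The inventory construction described above reduces to a simple aggregation over the emission-rate matrix, the measured times-in-mode, and the annual movement counts. The sketch below is a hedged illustration, not the thesis's calculation: the aircraft types, modes, rates, times-in-mode and cycle counts are all invented placeholders.

```python
# Hedged sketch of the emissions-inventory aggregation described above.
# All numbers are invented placeholders, not measured values from the thesis.

# Emission rates (particles per second) per aircraft type and LTO mode.
emission_rate = {
    ("B737", "idle"): 1.0e14, ("B737", "taxi"): 2.0e14, ("B737", "takeoff"): 9.0e14,
    ("A320", "idle"): 0.8e14, ("A320", "taxi"): 1.5e14, ("A320", "takeoff"): 8.0e14,
}

# Measured time-in-mode (seconds per LTO cycle), also placeholders.
time_in_mode = {"idle": 600.0, "taxi": 900.0, "takeoff": 45.0}

# Annual LTO cycles per aircraft type at the airport, placeholders.
annual_cycles = {"B737": 20000, "A320": 15000}

# Annual particle number emissions: sum over aircraft types and modes of
# emission rate * time-in-mode * number of annual cycles.
total = sum(
    rate * time_in_mode[mode] * annual_cycles[actype]
    for (actype, mode), rate in emission_rate.items()
)
print(f"estimated annual particle number emissions: {total:.3e} particles")
```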
240

Dimension reduction on bag of visual words with formal concept analysis

Dao, Ngoc Bich 23 June 2017 (has links)
In several scientific fields, such as statistics, computer vision, data mining and machine learning, reducing redundant and/or irrelevant information in the data description (dimension reduction) is an important step. This process comprises two categories, feature extraction and feature selection, of which feature selection in unsupervised learning remains an open question. In this manuscript, we discuss feature selection on image datasets using Formal Concept Analysis (FCA), with a focus on the concept lattice structure and lattice theory. The images in a dataset are described as sets of visual words, obtained by clustering, under the bag of visual words model; reducing the size of the image signatures therefore amounts to selecting some of these visual words. Two algorithms are proposed in this thesis to select relevant features (visual words), and they can be used in both unsupervised and supervised learning. The first algorithm, RedAttsSansPerte, retains only the attributes that correspond to the irreducibles of the lattice; the fundamental theorem of lattice theory guarantees that the structure of the concept lattice is preserved when only the irreducibles are kept. The algorithm uses an attribute graph, the precedence graph, in which two attributes are related when the sets of objects they belong to are included one in the other. The formal definition of the precedence graph is given in this thesis, and we also demonstrate its properties and its relationship to the AC-poset. Results from experiments indicate that the RedAttsSansPerte algorithm reduces the size of the feature set while maintaining good classification performance.
Secondly, the RedAttsFloue algorithm, an extension of RedAttsSansPerte, is proposed. This extension relies on an approximate, fuzzy version of the precedence graph, whose formal definition and properties are also demonstrated in this manuscript. The RedAttsFloue algorithm removes redundant and irrelevant features while retaining relevant information according to the flexibility threshold of the fuzzy precedence graph; a high flexibility threshold mechanically entails a loss of information and hence a drop in classification performance. The quality of the retained information is evaluated by classification. Experiments show that the RedAttsFloue algorithm reduces the feature set further without significantly decreasing classification performance.
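To make the lattice-preserving reduction concrete, here is a minimal sketch of the general FCA idea that RedAttsSansPerte builds on, not the authors' implementation: an attribute is reducible, and can be dropped without changing the concept lattice, when its extent (the set of objects possessing it) is exactly the intersection of the extents of the attributes strictly above it in the precedence ordering. The toy context below is invented.

```python
# Minimal sketch of lattice-preserving attribute reduction in FCA.
# Not the authors' RedAttsSansPerte; the toy context is an invented example.

# Formal context: object -> set of attributes (visual words) it has.
# Attribute "d" occurs exactly where both "a" and "b" occur, so it is reducible.
context = {
    "img1": {"a", "b", "d"},
    "img2": {"a", "b", "d"},
    "img3": {"a"},
    "img4": {"b"},
}
attributes = {"a", "b", "d"}

def extent(attr):
    """Set of objects possessing the attribute."""
    return frozenset(obj for obj, attrs in context.items() if attr in attrs)

reducible = set()
for m in attributes:
    # Extents of other attributes that strictly contain extent(m);
    # these are the attributes above m in the precedence graph.
    larger = [extent(n) for n in attributes - {m} if extent(m) < extent(n)]
    if larger and frozenset.intersection(*larger) == extent(m):
        reducible.add(m)  # dropping m leaves the concept lattice unchanged

print("reducible attributes:", reducible)          # {'d'}
print("kept (irreducible) attributes:", attributes - reducible)
```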
