191

Θεωρία διαστάσεων και καθολικοί χώροι / Dimension theory and universal spaces

Μεγαρίτης, Αθανάσιος 29 July 2011 (has links)
Peano's construction in 1890 of a continuous map from a segment onto a square gave rise to the problem of whether a segment and a square are homeomorphic, and more generally whether the cubes $I^{n}$ and $I^{m}$ are homeomorphic for $n\neq m$. This problem was solved by Brouwer in 1911, and its investigation led to the definitions of the dimensions ${\rm ind}$, ${\rm Ind}$, and ${\rm dim}$ and, more generally, to the birth and development of Dimension Theory. In this thesis we define new dimension-like functions of the types ${\rm ind}$, ${\rm Ind}$, and ${\rm dim}$, and we prove basic properties of Dimension Theory (subspace, sum, and product theorems) for these functions. Using the introduced dimension-like functions, new classes of topological spaces are defined, and the universality problem for these classes is investigated, that is, whether universal spaces exist in these classes. A space $T$ is said to be universal for a class ${\rm I\!P}$ of spaces if $T\in{\rm I\!P}$ and every $X\in{\rm I\!P}$ embeds topologically into $T$. The existence of universal elements in these classes is established using the method of construction of Containing Spaces from the book: S.D. Iliadis, Universal spaces and mappings, North-Holland Mathematics Studies, 198. Elsevier Science B.V., Amsterdam, 2005. xvi+559 pp.
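For context, the small inductive dimension ${\rm ind}$ mentioned above has a standard recursive definition (textbook material, e.g. Engelking's Dimension Theory, not specific to this thesis):

$$
{\rm ind}\,\emptyset = -1; \qquad
{\rm ind}\,X \le n \iff \forall\, x\in X\ \forall\,\text{open } U\ni x\ \exists\,\text{open } V:\ x\in V\subseteq U,\ {\rm ind}\,(\partial V)\le n-1.
$$

Then ${\rm ind}\,X = n$ when ${\rm ind}\,X \le n$ holds but ${\rm ind}\,X \le n-1$ fails, and ${\rm ind}\,X = \infty$ if no such $n$ exists. The thesis's dimension-like functions generalize definitions of this recursive type.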
192

Apprentissage de connaissances structurelles à partir d’images satellitaires et de données exogènes pour la cartographie dynamique de l’environnement amazonien / Structural knowledge learning from satellite images and exogenous data for dynamic mapping of the Amazonian environment

Bayoudh, Meriam 06 December 2013 (has links)
Classical methods for satellite image analysis are inadequate for the current bulk of the data flow, so automating the interpretation of such images becomes crucial for the analysis and management of phenomena, changing in time and space, that are observable by satellite. This work aims at automating land cover cartography from satellite images by means of expressive and easily interpretable mechanisms, explicitly taking into account the structural aspects of geographic information. It is set within the object-based image analysis framework and assumes that useful contextual knowledge can be extracted from maps. First, a supervised parameterization method for an image segmentation algorithm is proposed. Second, a supervised classification of geographical objects is presented, combining machine learning by inductive logic programming with the multi-class rule set intersection approach. These approaches are applied to the cartography of the French Guiana coastline. The results demonstrate the feasibility of the segmentation parameterization, but also its variability as a function of the reference map classes and of the input data; methodological developments nevertheless allow an operational implementation of the approach to be considered. The results of the supervised object classification show that it is possible to induce expressive classification rules that convey consistent, structural information in a given application context and lead to reliable predictions, with overall accuracy and Kappa values of 84.6% and 0.7, respectively. This thesis thus contributes to the automation of dynamic cartography from remotely sensed images and proposes original and promising perspectives.
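As a side note on the reported metrics, the Kappa value is Cohen's kappa, which corrects raw accuracy for chance agreement. A minimal sketch of the computation from a confusion matrix (illustrative only, not the thesis's code; the example matrix is hypothetical):

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a confusion matrix (rows: true, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                 # observed agreement (accuracy)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2-class confusion matrix, not the thesis's actual results
print(round(cohens_kappa([[70, 10], [8, 62]]), 2))  # 0.76
```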
193

Les systèmes cognitifs dans les réseaux autonomes : une méthode d'apprentissage distribué et collaboratif situé dans le plan de connaissance pour l'auto-adaptation / Cognitive systems in autonomic networks: a distributed and collaborative learning method in the knowledge plane for self-adaptation

Mbaye, Maïssa 17 December 2009 (has links)
One of the major challenges for the decades to come in information and communication technologies is the realization of the autonomic networking paradigm. Its aim is to enable network equipment to manage itself: to self-configure, self-optimize, self-protect, and self-heal according to the high-level objectives of its designers. The major autonomic networking architectures are based on a closed control loop that lets the network equipment self-adapt (self-configure and self-optimize) according to events arising in its environment. The knowledge plane, strongly emphasized by researchers in recent years, is one approach that suggests using cognitive systems (machine learning and reasoning) to close this control loop. However, although the major autonomic architectures integrate machine learning modules as functional blocks, little research has really examined the contents of these blocks. In this context, we studied the potential contribution of machine learning and proposed a distributed and collaborative learning method. We formalize the self-adaptation problem as a problem of learning configuration strategies (state-action pairs). This formalization allows us to define a method for learning self-adaptation strategies that is based on the history of observed transitions and uses inductive logic programming to discover new strategies from those already discovered. We also define a knowledge-sharing algorithm that makes network components collaborate to speed up the learning process. Finally, we tested the proposed approach in a DiffServ network and showed its transposition to multimedia streaming over 802.11 wireless networks.
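A minimal, hypothetical sketch of the state-action formulation described above: log observed (state, action, next_state) transitions and keep, per state, the actions that led to a state satisfying a goal predicate. All names are illustrative; the thesis induces richer strategies with inductive logic programming, and this naive filter only illustrates the data the learner works from.

```python
from collections import defaultdict

history = []  # observed (state, action, next_state) transitions

def record(state, action, next_state):
    history.append((state, action, next_state))

def candidate_strategies(goal):
    """Per state, keep the actions that led to a next state satisfying `goal`."""
    strategies = defaultdict(set)
    for state, action, next_state in history:
        if goal(next_state):
            strategies[state].add(action)
    return dict(strategies)

# Illustrative DiffServ-like states: (traffic class, observed loss level)
record(("gold", "loss_high"), "raise_priority", ("gold", "loss_low"))
record(("gold", "loss_high"), "drop_bronze_first", ("gold", "loss_high"))
print(candidate_strategies(lambda s: s[1] == "loss_low"))
# -> {('gold', 'loss_high'): {'raise_priority'}}
```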
194

Training a Multilayer Perceptron to predict the final selling price of an apartment in a co-operative housing society sold in Stockholm city with features stemming from open data / Träning av en “Multilayer Perceptron” att förutsäga försäljningspriset för en bostadsrättslägenhet till försäljning i Stockholm city med egenskaper från öppna datakällor

Tibell, Rasmus January 2014 (has links)
The need for a robust model for predicting the value of condominiums and houses is becoming more apparent as further evidence of systematic errors in existing models is presented. Traditional valuation methods fail to produce good predictions of condominium sales prices, and systematic patterns in the errors, linked for example to the repeat sales methodology and the hedonic pricing model, have been pointed out by papers referenced in this thesis. This inability can lead to monetary problems for individuals and, in the worst case, to economic crises for whole societies. In this master's thesis we show how a predictive model built from a multilayer perceptron can predict the price of a condominium in the centre of Stockholm using objective data from publicly available sources. The value produced by the model is enriched with a prediction interval computed with the Inductive Conformal Prediction algorithm, giving a clear view of the quality of the prediction. In addition, the Multilayer Perceptron is compared with the commonly used Support Vector Regression algorithm to underline neural networks' ability to handle a broad spectrum of features. The features used to construct the Multilayer Perceptron model are gathered from multiple "Open Data" sources and include: 5,990 apartment sales prices from 2011–2013, interest rates for condominium loans from two major banks, national election results from 2010, geographic information, and nineteen local features. Several well-known techniques for improving the performance of Multilayer Perceptrons are applied and evaluated, and a Genetic Algorithm is deployed to help determine appropriate parameters for the backpropagation algorithm. Finally, we conclude that the Multilayer Perceptron model trained with backpropagation produces good predictions and outperforms both the Support Vector Regression models and the studies in the referenced papers.
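The inductive conformal prediction step described above is simple to sketch: residuals on a held-out calibration set define the half-width of an interval with roughly (1 − ε) coverage. A minimal sketch under stated assumptions — synthetic data stands in for the apartment features, and the hyperparameters are illustrative, not the thesis's tuned values:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the apartment feature/price data (purely illustrative).
X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Inductive conformal prediction: calibration residuals (nonconformity scores)
# give a prediction interval with ~(1 - eps) coverage.
alphas = np.sort(np.abs(y_cal - model.predict(X_cal)))
eps = 0.1
q = alphas[int(np.ceil((1 - eps) * (len(alphas) + 1))) - 1]

y_hat = model.predict(X_test)
lower, upper = y_hat - q, y_hat + q
print(f"empirical coverage: {np.mean((y_test >= lower) & (y_test <= upper)):.2f}")
```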
195

Visualizing issue tracking data using a process mining tool to support the agile maturity assessment within the Scaled Agile Framework: A case study / Visualisering av ärendehanteringsdata med hjälp av ett process-mining-verktyg med syftet att stödja den agila mognadsmätningen inom SAFe: En fallstudie

Hovmark, Olle January 2022 (has links)
Today, agile development is broadly used within both small and large organizations worldwide. Transitioning to agile development in a large organization is a complex task that requires support from everyone in it. The Scaled Agile Framework (SAFe) is a framework meant to help integrate agile development within all parts of an organization. Regularly conducted assessments of how well an organization has integrated agile development can be a way to make sure the transition is happening as intended; such assessments are often called agile maturity assessments. SAFe includes one, but since it is based on thoughts and reflections from members of the organization, conducting it can be difficult and may give unreliable results. This study explores one way to support the assessment with objective data by generating visualizations of issue tracking data extracted from Jira and GitHub. The Inductive visual Miner, a plugin for the process mining software ProM, was used for the visualizations. A case study was conducted at the IT department of the Swedish Tax Agency, following a slightly modified version of the PM2 methodology. The modified methodology comprised six stages: a planning stage, a data extraction stage, a data processing stage, a mining and analysis stage, an evaluation stage, and lastly an improvement stage, in which an attempt was made to improve the visualizations based on the analysis and the evaluation. The planning stage was used to gain information about the work processes in the organization and what kind of data might exist in the chosen data sources, and to formulate a set of goal questions, connected to the agile maturity assessment, that the visualizations were expected to answer. Data from six teams were then used to generate the visualizations, which were first explored and later evaluated in collaboration with each team's Scrum master. The results show that visualizations generated from issue tracking data using the Inductive visual Miner can answer questions about the time and order of events that are related to the agile maturity assessment within SAFe. However, additional analysis and reflection are needed to draw conclusions about agile maturity from the information the visualizations provide. A set of requirements for the data used to generate this kind of visualization, based on the results from all stages, is also proposed.
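Process mining tools such as ProM consume event logs rather than raw issue data, so each Jira/GitHub record must be mapped to at least a case identifier, an activity name, and a timestamp. A minimal, hypothetical sketch of that mapping (the field names and tickets are illustrative, not the study's actual schema):

```python
import pandas as pd

# Hypothetical issue-tracking export: one row per status-change event.
events = pd.DataFrame({
    "case_id":   ["TICKET-1", "TICKET-1", "TICKET-1", "TICKET-2", "TICKET-2"],
    "activity":  ["Created", "In Progress", "Done", "Created", "In Progress"],
    "timestamp": pd.to_datetime([
        "2022-01-03 09:00", "2022-01-04 10:30", "2022-01-07 16:45",
        "2022-01-05 11:00", "2022-01-06 08:15",
    ]),
})

# Order events within each case before mining; the resulting CSV can be
# converted to XES and opened in ProM's Inductive visual Miner.
events = events.sort_values(["case_id", "timestamp"])
events.to_csv("issue_event_log.csv", index=False)
```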
196

The functions of imagery in narrative preaching

Booysen, Willem Matheus 12 1900 (has links)
This dissertation investigates the validity of the hypothesis that biblical images [imagery] in the narrative model of preaching enhance the relevance and recall possibilities of the sermon, filling the open spaces for the listener in a meaningful way. "Imagery" is researched in its application in various genres of the narrative sermon, e.g. the inductive sermon, the narrative as such, metaphor, parable, and transformational preaching. In the final analysis, the Midrash hermeneutical model is suggested as a theoretical exposition and a fresh proposition for homiletical possibilities today, and instruments are proposed to aid in the preparation of Midrashic narrative sermons. / Philosophy, Practical & Systematic Theology / D.Th. (Practical theology)
197

Characterization and management of voltage noise in multi-core, multi-threaded processors

Kim, Youngtaek 14 July 2014 (has links)
Reliability is one of the important issues in recent microprocessor design. Processors must behave correctly, as users expect, and must not fail at any time. However, unreliable operation can be caused by excessive supply voltage fluctuations due to the inductive part of a microprocessor's power distribution network. This voltage fluctuation issue is referred to as inductive or di/dt noise and requires thorough analysis and sophisticated design solutions. This dissertation proposes an automated stressmark generation framework to characterize di/dt noise effects, and suggests a practical solution for managing di/dt effects while achieving performance and energy goals. First, the di/dt noise issue is analyzed from theory to practice. Inductance is a parasitic element in a microprocessor's power distribution network, and its characteristics, such as resonant frequencies, are reviewed. It is then shown that supply voltage fluctuation from resonant behavior is much more harmful than single-event voltage fluctuations. Voltage fluctuations caused by standard benchmarks such as SPEC CPU2006, PARSEC, and Linpack are studied. Next, an AUtomated DI/dT stressmark generation framework, referred to as AUDIT, is proposed to identify the maximum voltage droop in a microprocessor power distribution network. The di/dt stressmark generated by the AUDIT framework is an instruction sequence that draws periodic high and low current pulses to maximize voltage fluctuations, including voltage droops. AUDIT uses a Genetic Algorithm to schedule and optimize candidate instruction sequences for maximum voltage droop. In addition, AUDIT provides both simulation and hardware measurement methods for finding maximum voltage droops at different design and verification stages of a processor. Failure points in hardware due to voltage droops are analyzed. Finally, a hardware technique, floating-point (FP) issue throttling, is examined, which reduces the worst-case voltage droop. This dissertation shows the impact of floating-point throttling on voltage droop and translates this reduction into an increase in operating frequency, because additional guardband is no longer required against droops resulting from heavy floating-point usage. Two techniques are presented to dynamically determine when to trade off FP throughput for reduced voltage margin and increased frequency. These techniques work at the software level without any modification of existing hardware.
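For reference, the first-order relations behind di/dt noise (standard circuit theory, not specific to this dissertation's models) are the inductive voltage drop and the resonant frequency of the power distribution network:

$$
\Delta V \approx L\,\frac{di}{dt}, \qquad f_{\mathrm{res}} = \frac{1}{2\pi\sqrt{LC}}.
$$

With illustrative values $L = 1\,\mathrm{nH}$ and $di/dt = 10^{8}\,\mathrm{A/s}$, the droop is $\Delta V \approx 0.1\,\mathrm{V}$, already significant against a supply of roughly 1 V; current pulses repeating near $f_{\mathrm{res}}$ compound such droops, which is why the resonant stressmarks above are the worst case.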
198

A multiband inductive wireless link for implantable medical devices and small freely behaving animal subjects

Jow, Uei-Ming 08 February 2013 (has links)
The objective of this research is to introduce two state-of-the-art wireless biomedical systems: (1) a multiband transcutaneous communication system for implantable microelectronic devices (IMDs) and (2) a new wireless power delivery system, called the “EnerCage,” for experiments involving freely behaving animals. The wireless multiband link for IMDs achieves power transmission via a pair of coils designed for maximum coupling efficiency. The data link can handle a large communication bandwidth with minimum interference from the power carrier thanks to its optimized geometry. Wireless data and power links have promising prospects for use in biomedical devices such as biosensors, neural recording, and neural stimulation devices. The EnerCage system includes a stationary unit with an array of coils for inductive power transmission and three-dimensional magnetic sensors for non-line-of-sight tracking of animal subjects. It aims to energize novel biological data-acquisition and stimulation instruments for long-term, uninterrupted experiments on freely behaving small animal subjects in large experimental arenas. The EnerCage system was tested in a one-hour in vivo experiment for wireless power and data communication, and the results show the feasibility of the system. The contributions of this research are summarized as follows:
1. Development of an inductive link model.
2. Development of accurate printed spiral coil (PSC) models, with parasitic effects, for implantable devices.
3. A design procedure for an inductive link with optimal physical geometry to maximize the power transfer efficiency (PTE).
4. Novel antenna and coil geometries for the wireless multiband link: power carrier, forward data link, and back telemetry.
5. A model of overlapping PSCs, which can create a homogeneous magnetic field over a large experimental area for wireless power transmission at a given coupling distance.
6. Design and optimization of a multi-coil link, which can provide optimal load matching for maximum PTE.
7. Design of a wireless power and data communication system for long-term, uninterrupted experiments on freely behaving small animal subjects in experimental arenas of any shape.
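As background for the coil-design terms above, two standard relations for a two-coil inductive link (textbook approximations, not the thesis's full model) are the coupling coefficient between coils with mutual inductance $M$ and a first-order efficiency expression at resonance:

$$
k = \frac{M}{\sqrt{L_1 L_2}}, \qquad \eta \approx \frac{k^{2}Q_{1}Q_{2}}{1 + k^{2}Q_{1}Q_{2}},
$$

where $Q_1$ and $Q_2$ are the coil quality factors; this simplified form ignores load partitioning on the secondary. It makes the design goals visible: the PSC geometry optimization maximizes $k$ and $Q$ at the operating distance, and the multi-coil link improves the effective load matching beyond what the two-coil form allows.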
199

An investigation into theory completion techniques in inductive logic programming

Moyle, Stephen Anthony January 2003 (has links)
Traditional Inductive Logic Programming (ILP) focuses on the setting where the target theory is a generalisation of the observations. This is known as Observational Predicate Learning (OPL). In the Theory Completion setting the target theory is not in the same predicate as the observations (non-OPL). This thesis investigates two alternative simple extensions to traditional ILP for performing non-OPL, or Theory Completion. Both techniques perform extraction-case abduction from an existing background theory and one seed observation. The first technique -- Logical Back-propagation -- modifies the existing background theory so that abductions can be achieved by a form of constructive negation using a standard SLD-resolution theorem prover. The second technique -- SOLD-resolution -- modifies the theorem prover and leaves the existing background theory unchanged. It is shown that all abductions produced by Logical Back-propagation can also be generated by SOLD-resolution, but the reverse does not hold. The implementation of the SOLD-resolution technique -- the ALECTO system -- was applied to the problems of completing context-free and context-dependent grammars, and of learning Event Calculus programs. It successfully learned an Event Calculus program to control the navigation of a real-life robot. The Event Calculus is a formalism for representing common-sense knowledge; it follows that some common-sense knowledge was discovered with the assistance of a machine.
200

Multiple-instance and one-class rule-based algorithms

Nguyen, Dat 17 April 2013 (has links)
In this work we developed rule-based algorithms for multiple-instance learning and one-class learning problems, namely the mi-DS and OneClass-DS algorithms. Multiple-Instance Learning (MIL) is a variation of classical supervised learning in which bags (collections) of instances, rather than single instances, must be classified. A bag is labeled positive if at least one of its instances is positive; otherwise it is negative. The one-class learning problem is also known as the outlier or novelty detection problem. One-class classifiers are trained on data describing only one class; they are used in situations where data from other classes are not available, and also for highly unbalanced data sets. Extensive comparisons and statistical testing of the two algorithms show that they generate models that perform on par with other state-of-the-art algorithms.
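The bag-labeling rule stated above is easy to make concrete. A minimal sketch (illustrative only; mi-DS itself induces classification rules rather than using a fixed instance-level predicate, and the predicate and bags here are hypothetical):

```python
# Multiple-instance labeling: a bag is positive iff at least one of its
# instances is classified positive by some instance-level predicate.
def bag_label(bag, is_positive):
    return any(is_positive(instance) for instance in bag)

# Hypothetical instance predicate and bags of 1-D instances
is_positive = lambda x: x > 0.5
bags = {"bag_1": [0.1, 0.9, 0.3], "bag_2": [0.2, 0.4]}
print({name: bag_label(b, is_positive) for name, b in bags.items()})
# -> {'bag_1': True, 'bag_2': False}
```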
