  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

Automatsko određivanje vrsta riječi u morfološki složenom jeziku / Automatic parts of speech determination in a morphologically complex language

Dimitrijević Strahinja 24 July 2015 (has links)
Istraživanje je imalo za cilj da provjeri u kojoj mjeri se naš kognitivni sistem može osloniti na fonotaktičke informacije, tj. moguće/dozvoljene kombinacije fonema/grafema, u zadacima automatske percepcije i produkcije riječi u jezicima sa bogatom infleksionom morfologijom. Da bi se dobio odgovor na to pitanje, sprovedene su tri studije. U prvoj studiji, uz pomoć mašina sa vektorima podrške (SVM), obavljena je diskriminacija promjenljivih vrsta riječi. U drugoj studiji, produkcija infleksionih oblika riječi izvedena je pomoću učenja zasnovanog na memoriji (MBL). Na osnovu rezultata iz druge studije, izveden je eksperiment u kojem se tražila potvrda kognitivne vjerodostojnosti modela i korišćenih informacija. Diskriminacija promjenljivih vrsta riječi obavljena je na osnovu dozvoljenih sekvenci dva i tri grafema/fonema (tzv. bigrama i trigrama), čije su frekvencije javljanja unutar pojedinačnih gramatičkih tipova izračunate u zavisnosti od njihovog položaja u riječima: na početku, na kraju, unutar riječi, svi zajedno. Maksimalna tačnost se kretala oko 95% i dobijena je na svim bigramima, uz pomoć RBF jezgrene funkcije. Ovako visok procenat tačne diskriminacije ukazuje da postoje karakteristične distribucije bigrama za različite vrste promjenljivih riječi. S druge strane, najmanje informativnim su se pokazali bigrami na kraju i na početku riječi. MBL model iskorišćen je u zadatku automatske infleksione produkcije, tako što je za zadatu riječ, na osnovu fonotaktičkih informacija iz posljednja četiri sloga, generisan traženi infleksioni oblik. Na uzorku od 89024 promjenljivih riječi uzetih iz Frekvencijskog rečnika dnevne štampe srpskog jezika, koristeći metod izostavljanja jednog primjera i konstantnu veličinu skupa susjeda (k = 7), ostvarena je tačnost oko 92%. Identifikovano je nekoliko faktora koji su uticali na ovu tačnost, kao što su: vrsta riječi, gramatički tip, način tvorbe, broj primjera u okviru jednog gramatičkog tipa, broj izuzetaka, broj fonoloških alternacija itd. U istraživanju na subjektima, u zadatku leksičke odluke, za riječi koje je MBL pogrešno obradio utvrđeno je duže vrijeme obrade. Ovo ukazuje na kognitivnu vjerodostojnost učenja zasnovanog na memoriji. Osim toga, potvrđena je i kognitivna vjerodostojnost fonotaktičkih informacija, ovaj put u zadatku razumijevanja jezika. Sveukupno, nalazi dobijeni u ove tri studije govore u prilog teze o značajnoj ulozi fonotaktičkih informacija u percepciji i produkciji morfološki složenih riječi. Rezultati, takođe, ukazuju na potrebu da se ove informacije uzmu u obzir kada se diskutuje pojavljivanje većih jezičkih jedinica i obrazaca.
/ The study was aimed at testing the extent to which our cognitive system can rely on phonotactic information, i.e., possible/permissible combinations of phonemes/graphemes, in the tasks of automatic processing and production of words in languages with rich inflectional morphology. In order to obtain the answer to this question, three studies were conducted. In the first study, support vector machines (SVM) were applied to discriminate parts of speech (PoS) with more than one possible meaning (i.e., ambiguous PoS). In the second study, the production of inflected word forms was done with memory-based learning (MBL). Based on the results from the second study, a behavioral experiment was conducted as the third study, to test the cognitive plausibility of the MBL performance. The discrimination of ambiguous PoS was performed using permissible sequences of two and three characters/sounds (i.e., bigrams and trigrams), whose frequencies of occurrence within individual grammatical types were calculated depending on their position in a word: at the beginning, at the end, and irrespective of position in a word. The maximum accuracy achieved was approximately 95%, obtained when bigrams irrespective of position were used with an RBF kernel function. Such high accuracy suggests that the probability distribution of bigrams is informative about the types of inflected words. Interestingly, the least informative were bigrams at the end and at the beginning of words. The MBL model was used in the task of automatic production of inflected forms, utilizing phonotactic information from the last four syllables. In a sample of 89024 inflected words, taken from the Frequency dictionary of Serbian language (daily press), the achieved accuracy was 92%, using the leave-one-out method and a nearest-neighborhood size of 7 (k = 7). We identified several factors that contributed to this accuracy; in particular, part of speech, grammatical type, formation method, number of examples within one grammatical type, number of exceptions, number of phonological alternations, etc. The visual lexical decision experiment revealed that words that the MBL model produced incorrectly also induced elongated reaction-time latencies. Thus, we concluded that the MBL model might be cognitively plausible. In addition, we reconfirmed the informativeness of phonotactic information, this time in a human comprehension task. Overall, the findings from the three studies speak in favor of a significant role of phonotactic information in both processing and production of morphologically complex words. The results also suggest the necessity of taking this information into account when discussing the emergence of larger units and language patterns.
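Memory-based learning of the kind described above is essentially k-nearest-neighbor classification over stored examples, evaluated with the leave-one-out method. A minimal sketch of that idea follows; the toy feature tuples, labels, and plain overlap metric are illustrative assumptions, not the thesis's actual configuration:

```python
from collections import Counter

def overlap_distance(a, b):
    # Overlap metric: number of mismatching feature positions.
    return sum(1 for x, y in zip(a, b) if x != y)

def mbl_classify(memory, query, k=7):
    # Memory-based learning: keep all training examples in memory and
    # label a query by majority vote among its k nearest neighbors.
    neighbors = sorted(memory, key=lambda ex: overlap_distance(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

def leave_one_out_accuracy(examples, k=7):
    # Leave-one-out: classify each example against all the others.
    hits = 0
    for i, (feats, label) in enumerate(examples):
        memory = examples[:i] + examples[i + 1:]
        if mbl_classify(memory, feats, k) == label:
            hits += 1
    return hits / len(examples)
```

The thesis reports roughly 92% leave-one-out accuracy with k = 7 on 89024 words; on toy data of a few examples per class a smaller k is appropriate.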
322

Near Real-time Detection of Masquerade attacks in Web applications: catching imposters using their browsing behavior

Panopoulos, Vasileios January 2016 (has links)
This Thesis details the research on Machine Learning techniques that are central in performing Anomaly and Masquerade attack detection. The main focus is put on Web Applications because of their immense popularity and ubiquity. This popularity has led to an increase in attacks, making them the most targeted entry point for violating a system. Specifically, a group of attacks that range from identity theft using social engineering to cross-site scripting aim at exploiting users and masquerading as them. Masquerade attacks are even harder to detect due to their resemblance to normal sessions, thus posing an additional burden. Concerning prevention, the diversity and complexity of those systems make it harder to define reliable protection mechanisms. Additionally, new and emerging attack patterns make manually configured and signature-based systems less effective, since they must be continuously updated with new rules and signatures; left unmanaged, they eventually become obsolete. Finally, the huge amount of traffic makes manual inspection of attacks and false alarms an impossible task. To tackle those issues, Anomaly Detection systems are proposed, using powerful and proven Machine Learning algorithms. Within the context of Anomaly Detection and Machine Learning, this Thesis first establishes several basic definitions, such as user behavior, normality, and normal and anomalous behavior. Those definitions aim at setting the context that the proposed method targets and at defining the theoretical premises. To ease the transition into the implementation phase, the underlying methodology is also explained in detail. Naturally, the implementation is also presented, where, starting from server logs, a method is described for pre-processing the data into a form suitable for classification. This preprocessing phase was constructed from several statistical analyses and normalization methods (Univariate Selection, ANOVA) to clean and transform the given logs and perform feature selection. Furthermore, given that the proposed detection method is based on the source and request URLs, a method of aggregation is proposed to limit user-privacy and classifier over-fitting issues. Subsequently, two popular classification algorithms (Multinomial Naive Bayes and Support Vector Machines) have been tested and compared to determine which one performs better in our setting. Each of the implementation steps (pre-processing and classification) requires a number of different parameters to be set, and thus a method called Hyper-parameter optimization is employed, which searches for the parameters that improve the classification results. Moreover, the training and testing methodology is outlined alongside the experimental setup. The Hyper-parameter optimization and training phases are the most computationally intensive steps, especially given a large number of samples/users. To overcome this obstacle, a scaling methodology is also defined and evaluated to demonstrate its ability to handle larger data sets. To complete this framework, several other options have been evaluated and compared to each other to challenge the method and implementation decisions. Examples of this are the "Transitions-vs-Pages" dilemma, the block-restriction effect, the DR usefulness, and the optimization of the classification parameters. Moreover, a Survivability Analysis is performed to demonstrate how the produced alarms could be correlated, affecting the resulting detection rates and interval times. The implementation of the proposed detection method and the outlined experimental setup lead to interesting results. Furthermore, the data set used to produce this evaluation is provided online to promote further investigation and research in this field.
/ Det här arbetet behandlar forskningen på maskininlärningstekniker som är centrala i utförandet av detektion av anomali- och maskeradattacker. Huvudfokus läggs på webbapplikationer på grund av deras enorma popularitet och att de är så vanligt förekommande. Denna popularitet har lett till en ökning av attacker och har gjort dem till den mest utsatta punkten för att bryta sig in i ett system. Mer specifikt så syftar en grupp attacker som sträcker sig från identitetsstölder genom social ingenjörskonst, till cross-site scripting-attacker, på att exploatera och maskera sig som olika användare. Maskeradattacker är ännu svårare att upptäcka på grund av deras likhet med vanliga sessioner, vilket utgör en ytterligare börda. Vad gäller förebyggande, gör mångfalden och komplexiteten av dessa system det svårare att definiera pålitliga skyddsmekanismer. Dessutom gör nya och framväxande attackmönster manuellt konfigurerade och signaturbaserade system mindre effektiva på grund av behovet att kontinuerligt uppdatera dem med nya regler och signaturer. Detta leder till en situation där de så småningom blir obsoleta om de inte sköts om. Slutligen gör den enorma mängden trafik manuell inspektion av attacker och falska alarm till ett omöjligt uppdrag. För att ta itu med de här problemen, föreslås anomalidetektionssystem som använder kraftfulla och beprövade maskininlärningsalgoritmer. Graviterande kring kontexten av anomalidetektion och maskininlärning, definierar det här arbetet först flera enkla definitioner såsom användarbeteende, normalitet, och normalt och anomalt beteende. De här definitionerna syftar på att fastställa sammanhanget i vilket den föreslagna metoden verkar och på att definiera de teoretiska premisserna. För att underlätta övergången till implementeringsfasen, förklaras även den bakomliggande metodologin i detalj.
Naturligtvis presenteras även implementeringen, där, med avstamp i serverloggar, en metod för hur man kan förbearbeta datan till en form som är lämplig för klassificering beskrivs. Den här förbearbetningsfasen konstruerades från flera statistiska analyser och normaliseringsmetoder (univariate selection, ANOVA) för att rensa och transformera de givna loggarna och utföra feature selection. Dessutom, givet att den föreslagna detektionsmetoden är baserad på käll- och request-URLs, föreslås en metod för aggregation för att begränsa problem med överanpassning relaterade till användarsekretess och klassificerare. Efter det så testas och jämförs två populära klassificeringsalgoritmer (Multinomial naive bayes och Support vector machines) för att definiera vilken som fungerar bäst i våra givna situationer. Varje implementeringssteg (förbearbetning och klassificering) kräver att ett antal olika parametrar ställs in och således definieras en metod som kallas Hyper-parameter optimization. Den här metoden söker efter parametrar som förbättrar klassificeringsresultaten. Dessutom så beskrivs tränings- och testningsmetodologin kortfattat vid sidan av experimentuppställningen. Hyper-parameter optimization och träningsfaserna är de mest beräkningsintensiva stegen, särskilt givet ett stort urval/stort antal användare. För att övervinna detta hinder så definieras och utvärderas även en skalningsmetodologi baserat på dess förmåga att hantera stora datauppsättningar. För att slutföra detta ramverk, utvärderas och jämförs även flera andra alternativ med varandra för att utmana metod- och implementeringsbesluten. Ett exempel på det är ”Transitions-vs-Pages”-dilemmat, block restriction-effekten, DR-användbarheten och optimeringen av klassificeringsparametrarna. Dessutom så utförs en survivability analysis för att demonstrera hur de producerade alarmen kan korreleras för att påverka den resulterande detektionsträffsäkerheten och intervalltiderna.
Implementeringen av den föreslagna detektionsmetoden och beskrivna experimentuppsättningen leder till intressanta resultat. Icke desto mindre är datauppsättningen som använts för att producera den här utvärderingen också tillgänglig online för att främja vidare utredning och forskning på området.
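One of the two classifiers compared in this thesis, Multinomial Naive Bayes, can be sketched from scratch over tokenized request URLs with Laplace smoothing. The tokenization scheme and training sessions below are invented toy examples, not the thesis's real server logs:

```python
import math
from collections import Counter, defaultdict

def train_mnb(docs, alpha=1.0):
    # docs: list of (tokens, label) pairs. Multinomial Naive Bayes with
    # Laplace (add-alpha) smoothing, the standard choice for count features.
    class_docs = defaultdict(int)
    class_tokens = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_docs[label] += 1
        class_tokens[label].update(tokens)
        vocab.update(tokens)
    n = len(docs)
    model = {}
    for label in class_docs:
        total = sum(class_tokens[label].values())
        denom = total + alpha * len(vocab)
        model[label] = {
            "prior": math.log(class_docs[label] / n),
            "loglik": {w: math.log((class_tokens[label][w] + alpha) / denom)
                       for w in vocab},
            "unseen": math.log(alpha / denom),
        }
    return model

def predict_mnb(model, tokens):
    # Score each class by log prior plus the sum of token log-likelihoods.
    def score(label):
        m = model[label]
        return m["prior"] + sum(m["loglik"].get(t, m["unseen"]) for t in tokens)
    return max(model, key=score)
```

In a masquerade-detection setting, "normal" would be one model per legitimate user and "masquerade" anything that scores closer to another profile; the two-class toy framing here is a simplification.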
323

Urban Land-cover Mapping with High-resolution Spaceborne SAR Data

Hu, Hongtao January 2010 (has links)
Urban areas around the world are changing constantly and therefore it is necessary to update urban land-cover maps regularly. Remote sensing techniques have been used to monitor changes and update land-use/land-cover information in urban areas for decades. Optical imaging systems have received most of the attention in urban studies, but the development of SAR applications in urban monitoring has accelerated as more and more advanced SAR systems operate in space.   This research investigated object-based and rule-based classification methodologies for extracting urban land-cover information from high-resolution SAR data. The study area is located in the north and northwest part of the Greater Toronto Area (GTA), Ontario, Canada, which has been undergoing rapid urban growth during the past decades. Five-date RADARSAT-1 fine-beam C-HH SAR images with a spatial resolution of 10 meters were acquired from May to August 2002. Three-date RADARSAT-2 ultra-fine-beam C-HH SAR images with a spatial resolution of 3 meters were acquired from June to September 2008.   SAR images were pre-processed and then segmented using a multi-resolution segmentation algorithm. Specific features such as geometric and texture features were selected and calculated for image objects derived from the segmentation of SAR images. Both neural networks (NN) and support vector machines (SVM) were investigated for the supervised classification of image objects of RADARSAT-1 SAR images, while SVM was employed to classify image objects of RADARSAT-2 SAR images. Knowledge-based rules were developed and applied to resolve the confusion among some classes in the object-based classification results.   The classification of both RADARSAT-1 and RADARSAT-2 SAR images yielded relatively high accuracies (over 80%). The SVM classifier generated better results than the NN classifier for the object-based supervised classification of RADARSAT-1 SAR images.
Well-designed knowledge-based rules could increase the accuracies of some classes after the object-based supervised classification. The comparison of the classification results of RADARSAT-1 and RADARSAT-2 SAR images showed that SAR images with higher resolution could reveal more details, but might produce lower classification accuracies for certain land-cover classes due to the increasing complexity of the images. Overall, the classification results indicate that the proposed object-based and rule-based approaches have potential for operational urban land-cover mapping from high-resolution spaceborne SAR images. / QC 20101209
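The knowledge-based refinement step described above can be pictured as a small rule engine over per-object features produced by segmentation: the supervised label is overridden when simple geometric or radiometric evidence contradicts it. The rules, thresholds, and feature names below are hypothetical placeholders illustrating the pattern, not the rules actually developed in the thesis:

```python
def apply_rules(obj):
    # obj: dict holding one image object's supervised class label and
    # its geometric/texture/backscatter features.
    label = obj["label"]
    # Hypothetical rule 1: long, thin, texturally smooth objects labeled
    # "building" are more plausibly road segments.
    if label == "building" and obj["elongation"] > 5.0 and obj["texture_var"] < 0.1:
        return "road"
    # Hypothetical rule 2: very dark (low backscatter), large objects
    # labeled "road" may in fact be water, since calm water returns
    # little energy to a SAR sensor.
    if label == "road" and obj["mean_backscatter"] < -18.0 and obj["area"] > 5000:
        return "water"
    return label
```

Because each rule only fires on a narrow, well-understood confusion (building/road, road/water), rules of this shape can raise per-class accuracy without disturbing objects the classifier already handles well.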
324

Automated sleep scoring using unsupervised learning of meta-features / Automatiserad sömnmätning med användning av oövervakad inlärning av meta-särdrag

Olsson, Sebastian January 2016 (has links)
Sleep is an important part of life as it affects the performance of one's activities during all awake hours. The study of sleep and wakefulness is therefore of great interest, particularly to the clinical and medical fields where sleep disorders are diagnosed. When studying sleep, it is common to talk about different types, or stages, of sleep. A common task in sleep research is to determine the sleep stage of the sleeping subject as a function of time. This process is known as sleep stage scoring. In this study, I seek to determine whether there is any benefit to using unsupervised feature learning in the context of electroencephalogram-based (EEG) sleep scoring. More specifically, the effect of generating and making use of new feature representations for hand-crafted features of sleep data – meta-features – is studied. For this purpose, two scoring algorithms have been implemented and compared. Both scoring algorithms involve segmentation of the EEG signal, feature extraction, feature selection and classification using a support vector machine (SVM). Unsupervised feature learning was implemented in the form of a dimensionality-reducing deep-belief network (DBN) which the feature space was processed through. Both scorers were shown to have a classification accuracy of about 76 %. The application of unsupervised feature learning did not affect the accuracy significantly. It is speculated that with a better choice of parameters for the DBN in a possible future work, the accuracy may improve significantly. / Sömnen är en viktig del av livet eftersom den påverkar ens prestation under alla vakna timmar. Forskning om sömn och vakenhet är därför av stort intresse, i synnerhet för de kliniska och medicinska områdena där sömnbesvär diagnostiseras. I forskning om sömn är det vanligt att tala om olika typer av sömn, eller sömnstadier. En vanlig uppgift i sömnforskning är att avgöra sömnstadiet hos den sovande försökspersonen som en funktion av tiden.
Den här processen kallas sömnmätning. I den här studien försöker jag avgöra om det finns någon fördel med att använda oövervakad inlärning av särdrag för att utföra elektroencephalogram-baserad (EEG) sömnmätning. Mer specifikt undersöker jag effekten av att generera och använda nya särdragsrepresentationer som härstammar från handgjorda särdrag av sömndata – meta-särdrag. Två sömnmätningsalgoritmer har implementerats och jämförts för det här syftet. Sömnmätningsalgoritmerna involverar segmentering av EEG-signalen, extraktion av särdragen, urval av särdrag och klassificering genom användning av en stödvektormaskin (SVM). Oövervakad inlärning av särdrag implementerades i form av ett dimensionskrympande djuptrosnätverk (DBN) som användes för att bearbeta särdragsrymden. Båda sömnmätarna visades ha en klassificeringsprecision av omkring 76 %. Användningen av oövervakad inlärning av särdrag hade ingen signifikant inverkan på precisionen. Det spekuleras att precisionen skulle kunna höjas med ett mer lämpligt val av parametrar för djuptrosnätverket.
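The first two stages shared by both scorers, segmentation of the EEG signal into fixed-length epochs and hand-crafted feature extraction, can be sketched as follows. The 30-second epoch length is the standard window in sleep scoring; the specific features below (mean absolute amplitude, variance, zero-crossing count) are simplified stand-ins for the spectral features such pipelines typically use:

```python
def segment(signal, fs, epoch_s=30):
    # Split a 1-D EEG signal (sampled at fs Hz) into fixed-length
    # epochs; any trailing partial epoch is dropped.
    n = int(fs * epoch_s)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

def epoch_features(epoch):
    # Hand-crafted per-epoch features: mean absolute amplitude,
    # variance, and zero-crossing count.
    mean = sum(epoch) / len(epoch)
    var = sum((x - mean) ** 2 for x in epoch) / len(epoch)
    mean_abs = sum(abs(x) for x in epoch) / len(epoch)
    zc = sum(1 for a, b in zip(epoch, epoch[1:]) if a * b < 0)
    return [mean_abs, var, zc]
```

The resulting per-epoch feature vectors are what would then pass through feature selection (and, in the meta-feature variant, a dimensionality-reducing DBN) before the SVM.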
325

ADHD-200 Patient Characterization and Classification using Resting State Networks: A Dissertation

Czerniak, Suzanne M. 28 March 2014 (has links)
Attention Deficit/Hyperactivity Disorder (ADHD) is a common psychiatric disorder of childhood that is characterized by symptoms of inattention, impulsivity/hyperactivity, or a combination of both. Intrinsic brain dysfunction in ADHD can be examined through various methods including resting state functional Magnetic Resonance Imaging (rs-fMRI), which investigates patients’ functional brain connections in the absence of an explicit task. To date, studies of group differences in resting brain connectivity between patients with ADHD and typically developing controls (TDCs) have revealed reduced connectivity within the Default Mode Network (DMN), a resting state network implicated in introspection, mind-wandering, and day-dreaming. However, few studies have addressed the use of resting state connectivity measures as a diagnostic aide for ADHD on the individual patient level. In the current work, we attempted first to characterize the differences in resting state networks, including the DMN and three attention networks (the salience network, the left executive network, and the right executive network), between a group of youth with ADHD and a group of TDCs matched for age, IQ, gender, and handedness. Significant over- and under-connections were found in the ADHD group in all of these networks compared with TDCs. We then attempted to use a support vector machine (SVM) based on the information extracted from resting state network connectivity to classify participants as “ADHD” or “TDC.” The IFG-middle temporal network (66.8% accuracy), the parietal association network (86.6% specificity and 48.5% PPV), and a physiological noise component (sensitivity 39.7% and NPV 69.6%) performed the best classifications. Finally, we attempted to combine and utilize information from all the resting state networks that we identified to improve classification accuracy. Contrary to our hypothesis, classification accuracy decreased to 54-55% when this information was combined.
Overall, the work presented here supports the theory that the ADHD brain is differently connected at rest than that of TDCs, and that this information may be useful for developing a diagnostic aid. However, because ADHD is such a heterogeneous disorder, each ADHD patient’s underlying brain deficits may be unique, making it difficult to determine what connectivity information is diagnostically useful.
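The connectivity features fed to a classifier in studies like this are typically the pairwise correlations between regional time courses, vectorized from the upper triangle of the correlation matrix. A sketch of that step (the flat region-by-time layout is an assumption for illustration; the dissertation derives its networks with more involved decompositions):

```python
import math

def pearson(x, y):
    # Pearson correlation of two equal-length time courses.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_vector(roi_series):
    # roi_series: list of per-region time courses for one subject.
    # The upper triangle of the region-by-region correlation matrix
    # becomes that subject's feature vector for a classifier (e.g. SVM).
    feats = []
    for i in range(len(roi_series)):
        for j in range(i + 1, len(roi_series)):
            feats.append(pearson(roi_series[i], roi_series[j]))
    return feats
```

For R regions this yields R*(R-1)/2 features per subject, which is why feature selection or network-level summaries matter before classification.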
326

A Survey of Systems for Predicting Stock Market Movements, Combining Market Indicators and Machine Learning Classifiers

Caley, Jeffrey Allan 14 March 2013 (has links)
In this work, we propose and investigate a series of methods to predict stock market movements. These methods use stock market technical and macroeconomic indicators as inputs into different machine learning classifiers. The objective is to survey existing domain knowledge, and combine multiple techniques into one method to predict daily market movements for stocks. Approaches using nearest neighbor classification, support vector machine classification, K-means classification, principal component analysis and genetic algorithms for feature reduction and redefining the classification rule were explored. Ten instruments, 9 company stocks and 1 index, were used to evaluate each iteration of the trading method. The classification rate, modified Sharpe ratio and profit gained over the test period are used to evaluate each strategy. The findings showed that nearest neighbor classification using genetic algorithm input feature reduction produced the best results, achieving higher profits than buy-and-hold for a majority of the companies.
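Two of the evaluation measures named above, the classification rate and a Sharpe-style risk-adjusted return, are simple to compute from a strategy's daily calls and returns. A sketch follows; the plain annualized Sharpe ratio is shown for illustration, not the particular "modified" variant the survey uses:

```python
import math

def classification_rate(predicted, actual):
    # Fraction of daily up/down calls that matched the realized move.
    hits = sum(1 for p, a in zip(predicted, actual) if p == a)
    return hits / len(predicted)

def sharpe_ratio(returns, risk_free=0.0, periods=252):
    # Annualized Sharpe ratio of per-period strategy returns:
    # mean excess return over its standard deviation, scaled by
    # sqrt(periods per year).
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((e - mean) ** 2 for e in excess) / len(excess)
    return mean / math.sqrt(var) * math.sqrt(periods)
```

Reporting both matters: a classifier can have a high hit rate yet a poor Sharpe ratio if its few misses coincide with the largest moves.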
327

Using Natural Language Processing and Machine Learning for Analyzing Clinical Notes in Sickle Cell Disease Patients

Khizra, Shufa January 2018 (has links)
No description available.
328

Sources of Ensemble Forecast Variation and their Effects on Severe Convective Weather Forecasts

Thead, Erin Amanda 06 May 2017 (has links)
The use of numerical weather prediction (NWP) has brought significant improvements to severe weather outbreak forecasting; however, determination of the primary mode of severe weather (in particular tornadic and nontornadic outbreaks) continues to be a challenge. Uncertainty in model runs contributes to forecasting difficulty; therefore it is beneficial to a forecaster to understand the sources and magnitude of uncertainty in a severe weather forecast. This research examines the impact of data assimilation, microphysics parameterizations, and planetary boundary layer (PBL) physics parameterizations on severe weather forecast accuracy and model variability, at both the mesoscale and the synoptic scale. NWP model simulations of twenty United States tornadic and twenty nontornadic outbreaks are generated. In the first research phase, each case is modeled with three different modes of data assimilation and a control. In the second phase, each event is modeled with 15 combinations of physics parameterizations: five microphysics and three PBL, all of which were designed to perform well in convective weather situations. A machine learning technique known as a support vector machine (SVM) is used to predict outbreak mode for each run for both the data-assimilated model simulations and the different parameterization simulations. Parameters determined to be significant for outbreak discrimination are extracted from the model simulations and input to the SVM, which issues a diagnosis of outbreak type (tornadic or nontornadic) for each model run. In the third phase, standard synoptic parameters are extracted from the model simulations and a k-means cluster analysis is performed on tornadic and nontornadic outbreak data sets to generate synoptically distinct clusters representing atmospheric conditions found in each type of outbreak. Variations among the synoptic features in each cluster are examined across the varied physics parameterization and data assimilation runs.
Phase I found that conventional and HIRS-4 radiance assimilation performs best of all examined assimilation variations by lowering false alarm ratios relative to other runs. Phase II found that the selection of PBL physics produces greater spread in the SVM classification ability. Phase III found that data assimilation generates greater model changes in the strength of synoptic-scale features than either microphysics or PBL physics parameterization.
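The cluster analysis in the third phase rests on plain k-means. A self-contained sketch of the algorithm with deterministic initialization from the first k points (real analyses would standardize the synoptic parameters first and use a better seeding scheme such as k-means++):

```python
def kmeans(points, k, iters=100):
    # Plain k-means on feature vectors (tuples/lists of floats).
    centers = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        assign = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centers[c])))
                  for p in points]
        # Update step: move each center to the mean of its members.
        new_centers = []
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                new_centers.append([sum(d) / len(members) for d in zip(*members)])
            else:
                new_centers.append(centers[c])  # keep an empty cluster's center
        if new_centers == centers:
            break  # converged: assignments can no longer change
        centers = new_centers
    return assign, centers
```

Here each point would be one outbreak case described by its synoptic parameters, and the returned centers summarize the synoptically distinct regimes.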
329

Swedish Stock and Index Price Prediction Using Machine Learning

Wik, Henrik January 2023 (has links)
Machine learning is an area of computer science that only grows as time goes on, with applications in areas such as finance, biology, and computer vision. Some common applications are stock price prediction, data analysis of DNA expressions, and optical character recognition. This thesis uses machine learning techniques to predict prices for different stocks and indices on the Swedish stock market. These techniques are then compared to see which performs best and why. To accomplish this, we used some of the most popular models with sets of historical stock and index data. Our best-performing models are linear regression and neural networks, because they best handle the big spikes in price action that occur in certain cases. However, all models are affected by overfitting, indicating that feature selection and hyperparameter optimization could be improved.
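In its simplest one-lag form, a linear-regression price predictor reduces to ordinary least squares on consecutive prices. A sketch of that reduced form (the thesis's actual feature set is richer than a single lag; this only illustrates the mechanics):

```python
def fit_ols(xs, ys):
    # Ordinary least squares for one predictor: y = a + b * x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def predict_next(prices):
    # One-step-ahead model: regress each day's price on the previous
    # day's, then extrapolate from the last observation.
    a, b = fit_ols(prices[:-1], prices[1:])
    return a + b * prices[-1]
```

A model this simple fits trending series almost perfectly, which is exactly why out-of-sample testing and regularization are needed to expose the overfitting the thesis reports.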
330

Feature Pruning For Action Recognition In Complex Environment

Nagaraja, Adarsh 01 January 2011 (has links)
A significant number of action recognition research efforts use spatio-temporal interest point detectors for feature extraction. Although the extracted features provide useful information for recognizing actions, many of them contain irrelevant motion and background clutter. In many cases, the extracted features are included as-is in the classification pipeline, and sophisticated noise-removal techniques are subsequently used to alleviate their effect on classification. We introduce a new action database, created from the Weizmann database, that reveals a significant weakness in systems based on popular cuboid descriptors. Experiments show that introducing complex backgrounds, stationary or dynamic, into the video causes a significant degradation in recognition performance. Moreover, this degradation cannot be fixed by fine-tuning the system or selecting better interest points. Instead, we show that the problem lies at the descriptor level and must be addressed by modifying the descriptors.
