11

Who, what and when: how media and politicians shape the Brazilian debate on foreign affairs / Quem, o que e quando: como a mídia e os políticos moldam o debate sobre política externa no Brasil

Hardt, Matheus Soldi 10 July 2019 (has links)
What do politicians talk about when discussing foreign affairs? Are these topics different from the ones in the newspapers? Finally, can unsupervised methods help us understand these problems? Answering these questions is of paramount importance to understanding the relationship between foreign policy and mass media. Based on this discussion, this research has three main objectives: (a) to verify whether unsupervised methods can be used to analyze documents on international issues; (b) to understand the issues that politicians talk about when dealing with foreign affairs; and (c) to understand when and how often the mass media publish news on certain international topics. To do so, I created two new corpora: one with news articles published in the international section of two major Brazilian newspapers, and one with all speeches made within the two Committees on Foreign Affairs of the National Congress of Brazil. I ran a topic model using Latent Dirichlet Allocation (LDA) on both. The results show that LDA can distinguish the different international issues that appear in both political discourse and the mass media in Brazil. Additionally, the LDA model can identify when certain topics are debated and for how long. The findings also demonstrate that Brazilian politicians and Brazilian newspapers are neither isolated nor unstable with regard to international issues.
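As a hedged illustration of the pipeline this abstract describes (not the author's actual code), the following Python sketch fits an LDA topic model to two toy corpora with scikit-learn; the documents, topic count and preprocessing choices are all invented for the example.

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    # Hypothetical miniature corpora; the thesis's real corpora are newspaper
    # articles and committee speeches.
    news = [
        "mercosur summit discusses trade tariffs with argentina",
        "un security council votes on a peacekeeping mission in haiti",
        "trade ministers negotiate tariffs at the mercosur summit",
        "peacekeeping troops deploy under the security council mandate",
    ]
    speeches = [
        "the committee debates mercosur trade policy and tariffs",
        "deputies question the ministry about the peacekeeping mission",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(news + speeches)

    # The topic count is an assumption; the thesis selects its own.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(X)   # rows: documents, columns: topic shares

    # Inspect the top words per topic to label the international issues by hand
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = weights.argsort()[-5:][::-1]
        print(f"topic {k}:", [terms[i] for i in top])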
12

Sistema láser de medida de velocidad por efecto doppler de bajo coste para aplicaciones industriales e hidrodinámicas / Low-cost laser Doppler velocity measurement system for industrial and hydrodynamic applications

García Vizcaino, David 29 June 2005 (has links)
The practical use of the Doppler effect at optical wavelengths was proposed at the very beginning of laser development, in the sixties. However, it was only in the eighties that the results of the experimental work could finally leave the laboratories and the first laser velocimeters became commercially available. In the nineties these systems rapidly became popular. Nowadays laser velocimeters based on the Doppler frequency shift find many important applications, especially in industrial processes and in hydrodynamic and aerodynamic research. The unique characteristics of Laser Doppler Velocimetry (LDV) have only recently encountered a rival technique, for applications on fluids, in Particle Image Velocimetry (PIV). The main features of LDV systems are the accuracy and speed of the measurements, the high spatial resolution and, of course, the non-intrusive character of the technique. Moreover, these systems have advantages beyond fluid applications: they can compete with microwave radar in estimating the velocity of solid targets. This has become possible thanks to the progressive reduction in the price of optoelectronic devices and the improvement in their performance. The monitoring of traffic speed and the control of machinery in the manufacture of paper, wire, cable or thread can be mentioned among these applications. European and American companies such as Dantec and TSI, to mention the two most representative, commercialize high-performance general-purpose LDV systems. To date these instruments have been bulky and expensive, and their use requires special training; until now they could be acquired only by major research centres or large companies. There is no doubt that the commercial future of LDV requires a substantial decrease in price and, indeed, the possibility of custom-built designs; the potential number of users would then increase considerably. Much work is now being devoted to reducing the size and cost of these systems without losing their main capabilities. Moreover, the greatly improved speed and computing capacity of current desktop computers should make unnecessary the special electronic processors that, up to now, have been supplied by manufacturers as part of LDV systems. In this thesis the design and construction of a complete laser Doppler anemometer is presented. The system measures two components of a fluid velocity (2D-LDA) and was originally conceived for low-power industrial and hydrodynamic applications. Following the philosophy outlined above, our LDA system was designed with only one laser source and one detector module; commercially available LDAs designed to measure two velocity components, on the contrary, use two different optical wavelengths (typically the blue and green lines of Ar-ion lasers) and two independent photodetectors. In addition, the two input channels of a general-purpose acquisition card have been used to implement a multilevel trigger. This configuration makes it possible to work at each moment with the part of the burst signal that is theoretically most suitable, i.e. the one with the best signal-to-noise ratio. This work was supported by the Spanish Government, CICYT project PETRI 95-0249-OP.
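The burst-processing idea mentioned above (working with the part of the burst that has the best signal-to-noise ratio) can be illustrated with a short, self-contained sketch; the sampling rate, fringe spacing and synthetic burst below are illustrative assumptions, not the thesis's actual parameters.

    import numpy as np

    fs = 50e6                     # sampling rate [Hz] (assumed)
    fringe = 3.0e-6               # fringe spacing of the probe volume [m] (assumed)
    t = np.arange(4096) / fs

    # Synthetic Doppler burst: Gaussian envelope times a 2 MHz oscillation, plus noise
    f_doppler = 2.0e6
    burst = np.exp(-((t - t.mean()) / 8e-6) ** 2) * np.cos(2 * np.pi * f_doppler * t)
    burst += 0.05 * np.random.default_rng(0).standard_normal(t.size)

    # Stand-in for the multilevel trigger: keep the window with the largest RMS
    windows = burst.reshape(8, -1)
    best = windows[np.argmax((windows ** 2).mean(axis=1))]

    # Frequency estimate from the strongest window, skipping the DC bin
    spectrum = np.abs(np.fft.rfft(best * np.hanning(best.size)))
    freqs = np.fft.rfftfreq(best.size, d=1 / fs)
    f_est = freqs[spectrum[1:].argmax() + 1]

    print(f"Doppler frequency: {f_est / 1e6:.2f} MHz -> velocity: {f_est * fringe:.2f} m/s")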
13

Infrared microspectroscopy of inflammatory process and colon tumors. / Microespectroscopia infravermelha de processos inflamatórios e tumores de cólon

Fabricio Augusto de Lima 17 March 2016 (has links)
According to the last global burden of disease published by the World Health Organization, tumors were the third leading cause of death worldwide in 2004. Among the different types of tumors, colorectal cancer ranks as the fourth most lethal. To date, tumor diagnosis is based mainly on the identification of morphological changes in tissues. Considering that these changes appear only after many biochemical reactions, the development of vibrational techniques may contribute to the early detection of tumors, since they are able to detect such reactions. The present study aimed to develop a methodology based on infrared microspectroscopy to characterize colon samples, providing complementary information to the pathologist and facilitating the early diagnosis of tumors. The study groups were composed of human colon samples obtained from paraffin-embedded biopsies, divided into normal (n=20), inflammation (n=17) and tumor (n=18). Two adjacent slices were acquired from each block. The first was subjected to chemical dewaxing and H&E staining; infrared imaging was performed on the second, which was neither dewaxed nor stained. A computational preprocessing methodology was employed to identify the paraffin in the images and to perform spectral baseline correction, and was adapted to include two types of spectral quality control. After the preprocessing step, spectra belonging to the same image were analyzed and grouped according to their biochemical similarities. A pathologist associated each resulting group with a histological structure based on the H&E-stained slice. This analysis highlighted the biochemical differences between the three studied groups. Results showed that severe inflammation presents biochemical features similar to those of tumors, indicating that tumors can develop from inflammatory processes. A spectral database was constructed containing the biochemical information identified in the previous step. Spectra obtained from new samples were compared against the database, leading to their classification into one of the three groups: normal, inflammation or tumor. Internal and external validation were performed based on classification sensitivity, specificity and accuracy. Comparison between the classification results and the H&E-stained sections revealed some discrepancies. Some histologically normal regions were identified as inflammation by the classification algorithm; similarly, some regions presenting inflammatory lesions in the stained section were classified into the tumor group. These differences were counted as misclassifications, but they may actually indicate that biochemical changes are under way in the analyzed sample. In the latter case, the method developed throughout this thesis would have proved able to identify early stages of inflammatory and tumor lesions. Additional experiments are necessary to elucidate this discrepancy between the classification results and the morphological features. One solution would be the use of immunohistochemistry with specific markers for tumor and inflammation; another is to recover the medical records of the patients who participated in this study in order to check whether, at times later than the biopsy collection, they actually developed the lesions supposedly detected in this research.
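A much-simplified sketch of the spectral workflow described above (baseline correction, normalization, then grouping of spectra by biochemical similarity) might look as follows; the synthetic spectra and cluster count are assumptions, and the thesis's real preprocessing also handles paraffin identification and spectral quality control.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    wavenumbers = np.linspace(900, 1800, 500)   # fingerprint region [cm^-1]

    def synth_spectrum(center):
        # One absorption band on a sloped baseline, plus noise
        peak = np.exp(-((wavenumbers - center) / 25.0) ** 2)
        return peak + 0.002 * wavenumbers + 0.02 * rng.standard_normal(wavenumbers.size)

    # 30 "normal-like" and 30 "lesion-like" pixel spectra (synthetic)
    spectra = np.array([synth_spectrum(c) for c in [1080] * 30 + [1650] * 30])

    # Crude baseline correction: subtract the straight line through the endpoints
    x = np.arange(spectra.shape[1])
    line = spectra[:, [0]] + (spectra[:, [-1]] - spectra[:, [0]]) * x / x[-1]
    corrected = spectra - line
    corrected /= np.linalg.norm(corrected, axis=1, keepdims=True)

    # Group spectra with similar biochemical profiles
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(corrected)
    print(np.bincount(labels))   # two groups, matching the two synthetic bands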
14

Využití vokalických formantů pro rozpoznání mluvčího v přirozených forenzních nahrávkách / Using vowel formants for speaker identification in natural forensic recordings

Nechanský, Tomáš January 2017 (has links)
Voice comparison is one of the most frequently addressed tasks in forensic phonetics; however, experts have so far not been able to find a speech parameter that reliably discriminates between two speakers. Formant dynamics have brought promising results in this respect; therefore, in this study we used linear discriminant analysis (LDA) to test the speaker-discriminatory potential of formant trajectories on real forensic recordings. The aim was, first, to compare the results of LDA when formant frequencies or the coefficients of quadratic and cubic fits are used as predictors and, second, to compare the results when the analyzed classes are balanced or unbalanced in the number of objects. As for the predictors, all types yielded comparable classification rates; nevertheless, since LDA limits the number of predictors in relation to the class size, the quadratic fit appears to be the most efficient. Even though LDA was able to discriminate between different voices above chance, it cannot be recommended for forensic use: it delivered highly inconsistent results when the number of objects in the classes was changed and, more importantly, it significantly discriminates between objects of the same speaker. Key words: formant trajectories, voice comparison, LDA, Czech, forensic phonetics
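A minimal sketch of the predictor construction described above, with synthetic trajectories standing in for real formant measurements: fit a quadratic to each vowel's formant trajectory and feed the coefficients to LDA.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 10)   # normalized time points within the vowel

    def trajectory(offset):
        # Toy F2 trajectory [Hz]: a rise-fall shape plus a speaker-specific offset
        return 1500 + offset + 300 * t - 250 * t ** 2 + 20 * rng.standard_normal(t.size)

    X, y = [], []
    for speaker, offset in enumerate([0.0, 120.0]):   # two hypothetical speakers
        for _ in range(40):
            X.append(np.polyfit(t, trajectory(offset), deg=2))   # 3 coefficients
            y.append(speaker)

    Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y), test_size=0.25,
                                          random_state=0, stratify=y)
    lda = LinearDiscriminantAnalysis().fit(Xtr, ytr)
    print("held-out accuracy:", lda.score(Xte, yte))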
15

Simultaneous Adaptive Fractional Discriminant Analysis: Applications to the Face Recognition Problem

Draper, John Daniel 19 June 2012 (has links)
No description available.
16

Neural probabilistic topic modeling of short and messy text / Neuronprobabilistisk ämnesmodellering av kort och stökig text

Harrysson, Mattias January 2016 (has links)
Exploring massive amounts of user-generated data through topics offers a new way to find useful information. The topics are assumed to be “hidden” and must be “uncovered” by statistical methods such as topic modeling. However, user-generated data is typically short and messy: informal chat conversations, heavy use of slang, and “noise” such as URLs or other forms of pseudo-text. This type of data is difficult to process for most natural language processing methods, including topic modeling. This thesis attempts to find, in a comparative study, the approach that objectively gives better topics from short and messy text. The compared approaches are latent Dirichlet allocation (LDA), Re-organized LDA (RO-LDA), a Gaussian Mixture Model (GMM) with distributed representations of words, and a new approach based on previous work, named Neural Probabilistic Topic Modeling (NPTM). It could only be concluded that NPTM has a tendency to achieve better topics on short and messy text than LDA and RO-LDA; GMM, on the other hand, could not produce any meaningful results at all. The results are less conclusive because NPTM suffers from long running times, which prevented obtaining enough samples for a statistical test.
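Of the compared approaches, the GMM-over-word-embeddings one is the simplest to sketch: average each document's word vectors and fit a Gaussian mixture whose components act as topics. The tiny embedding table below is a stand-in; a real run would use pretrained vectors (e.g. word2vec) over the chat corpus.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    emb = {   # hypothetical 3-d word vectors
        "gg": [1.0, 0.1, 0.0], "lol": [0.9, 0.2, 0.1], "noob": [0.8, 0.0, 0.2],
        "patch": [0.0, 1.0, 0.1], "nerf": [0.1, 0.9, 0.0], "buff": [0.2, 0.8, 0.1],
    }
    docs = [["gg", "lol"], ["noob", "lol"], ["patch", "nerf"], ["buff", "patch"]]

    # Document vectors: the mean of the word vectors in each short document
    X = np.array([np.mean([emb[w] for w in d], axis=0) for d in docs])

    gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(X)
    print(gmm.predict(X))   # which "topic" each short, messy document lands in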
17

An investigation and comparison between standard steady flow measurements and those in a motored engine

Pitcher, Graham January 2013 (has links)
With the ever more stringent requirements on emissions and fuel economy imposed on the automotive industry, there is a need to understand more fully all aspects of the internal combustion engine, to meet these requirements while still satisfying the customer's desire for acceptable performance. This research investigated one part of engine behaviour: the induction of the fresh charge into the engine cylinder. Conventionally, these measurements have been performed on a steady-state flow rig, where bulk, integral measurements of mass flow rate and swirl or tumble ratio are made. However, for some of the combustion strategies now being implemented on modern engines, the flow structure is becoming more important, necessitating techniques that can measure the flow field and its interaction with spray systems. This work compares engine flow measurements on a standard steady flow rig and in the cylinder of a motored engine. The flow bench measurements are both easier and cheaper to implement, but serve no real purpose unless the flow measured under steady-state conditions corresponds to that measured in the transient environment of an engine cylinder. On the steady flow bench, both conventional measurements and detailed flow measurements using laser Doppler anemometry were made, allowing a direct comparison between the two. Laser Doppler anemometry measurements were then performed in the cylinder of a motored engine, allowing a direct comparison between the results from the steady flow rig and the engine. Additionally, particle image velocimetry was used to investigate the data on the steady flow bench. It was found that the laser Doppler anemometry measurements were no substitute, in terms of accuracy, for the integral measurement of mass flow rate. They did, however, give some insight into the flow patterns being generated within the cylinder under these conditions. When compared to similar measurements in the engine, in most instances a high degree of correlation was found between the air velocity measurements, although the tumble ratio calculated from the engine was generally higher than that from the steady flow bench. A comparison of the vector flow fields from particle image velocimetry for the steady state and from laser Doppler anemometry for the engine suggested that the influence of the piston on the flow, absent in steady-state measurements, was only relevant in the neighbourhood of the piston itself. The transient nature of the flow in the engine also showed very little difference between the two sets of measurements. It was concluded that ideally both sets of measurements are required, but that much of the detail could, with some additional work, be extracted from the steady flow measurements, provided laser diagnostics are used to measure the flow fields. It was also observed that more than one measurement plane is required to fully characterise the tumble flow field, which is not uniform across the cylinder. This led to a simple form of weighting of the data in different planes, which could be improved with a more detailed set of measurements to gain better insight into the weighting factors required.
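For readers unfamiliar with tumble ratio, the following hedged sketch computes it from a velocity field in one tumble plane, using one common definition (charge angular momentum over moment of inertia, normalized by engine angular speed); the grid, flow field and engine speed are illustrative, and the thesis's multi-plane weighting is not reproduced here.

    import numpy as np

    engine_rpm = 1500.0
    omega_engine = 2 * np.pi * engine_rpm / 60.0   # crank angular speed [rad/s]

    # Velocity vectors on a grid in one tumble plane (x across the bore,
    # z along the cylinder axis); here a solid-body rotation at 100 rad/s.
    x, z = np.meshgrid(np.linspace(-0.04, 0.04, 20), np.linspace(0.0, 0.08, 20))
    omega_flow = 100.0
    u = -omega_flow * (z - z.mean())   # x-velocity component [m/s]
    w = omega_flow * (x - x.mean())    # z-velocity component [m/s]

    # Tumble ratio: angular momentum about the plane centroid (per unit mass)
    # over the moment of inertia, normalized by the engine's angular speed.
    xc, zc = x.mean(), z.mean()
    ang_mom = np.sum((x - xc) * w - (z - zc) * u)
    inertia = np.sum((x - xc) ** 2 + (z - zc) ** 2)
    print("tumble ratio:", round((ang_mom / inertia) / omega_engine, 2))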
18

STUDYING SOFTWARE QUALITY USING TOPIC MODELS

Chen, TSE-HSUN 14 January 2013 (has links)
Software is an integral part of our everyday lives, and hence the quality of software is very important. However, improving and maintaining high software quality is a difficult task, and a significant amount of resources is spent on fixing software defects. Previous studies have examined software quality using various measurable aspects of software, such as code size and code change history. Nevertheless, these metrics do not consider all the factors that are related to defects. For instance, while lines of code may be a good general measure of defects, a large file responsible for simple I/O tasks is likely to have fewer defects than a small file responsible for complicated compiler implementation details. In this thesis, we address this issue by considering conceptual concerns (or features). We use a statistical topic modelling approach to approximate conceptual concerns as topics, and then use topics to study software quality along two dimensions: code quality and code testedness. We perform our studies using three versions each of four large real-world software systems: Mylyn, Eclipse, Firefox, and NetBeans. Our proposed topic metrics improve the defect explanatory power (i.e., fit of the regression model) of traditional static and historical metrics by 4–314%. We compare one of our metrics, which measures the cohesion of files, with other topic-based cohesion and coupling metrics in the literature and find that our metric gives the greatest improvement, 8–55%, in explaining defects over traditional software quality metrics (i.e., lines of code). We then study how topics can help improve the testing process. By training on previous releases of the subject systems, we can predict not-well-tested topics that are defect prone in future releases with a precision and recall of 0.77 and 0.75, respectively. We can map these topics back to files and help allocate code inspection and testing resources. We show that our approach outperforms traditional prediction-based resource allocation approaches in terms of saving testing and code inspection effort. The results of our studies show that topics can be used to study software quality and support traditional quality assurance approaches. / Thesis (Master, Computing) -- Queen's University, 2013.
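A schematic sketch of the topic-metrics idea (not the thesis's implementation): derive per-file topic memberships with LDA and add them to a simple defect model alongside a traditional metric such as lines of code. All data below is synthetic.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    files = [   # identifiers/terms extracted from four hypothetical source files
        "parse token grammar compiler optimize register",
        "read write buffer file stream io close",
        "parse grammar ast compiler emit bytecode",
        "open file io read stream flush write",
    ]
    loc = np.array([120, 800, 150, 760])   # lines of code per file (toy values)
    defects = np.array([1, 0, 1, 0])       # defect-prone or not (toy labels)

    X = CountVectorizer().fit_transform(files)
    topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

    # Combine a traditional static metric with the topic memberships
    features = np.column_stack([loc, topics])
    model = LogisticRegression().fit(features, defects)
    print(model.predict(features))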
19

An independent evaluation of subspace facial recognition algorithms

Surajpal, Dhiresh Ramchander 23 December 2008 (has links)
In traversing the diverse field of biometric security and face recognition techniques, this investigation undertakes a rather rare comparative study of three of the most popular appearance-based face recognition projection classes: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA). Both the linear and kernel alternatives are investigated, along with the four most widely accepted similarity measures: City Block (L1), Euclidean (L2), Cosine and the Mahalanobis metric. Although comparisons between these classes can become fairly complex given the different task natures, algorithm architectures and distance metrics that must be taken into account, an important aspect of this study is the completely equal working conditions provided in order to facilitate fair and proper comparative levels of evaluation. In doing so, one is able to realise an independent study that significantly contributes to prior findings in the literature, either by verifying previous results, offering further insight into why certain conclusions were made, or by providing a better understanding as to why certain claims should be disputed and under which conditions they may hold true. The experimental procedure examines ten algorithms in the categories of expression, illumination, occlusion and temporal delay; the results are then evaluated based on a sequential combination of assessment tools that facilitate both intuitive and statistical decisiveness among the intra- and inter-class comparisons. In a bid to boost the overall efficiency and accuracy of the identification system, the 'best' algorithms from each category are then incorporated into a hybrid methodology, where the advantageous effects of fusion strategies are considered. This investigation explores the weighted-sum approach, which, by fusing at the matching-score level, effectively harnesses the complementary strengths of the component algorithms and in doing so highlights the improved performance that hybrid implementations can provide. In the process, by first examining previous studies with respect to one another and then relating the important findings of this work to them, the primary objective is also met: providing a newcomer with an insightful understanding of publicly available subspace techniques and their comparative standing within face recognition.
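The weighted-sum fusion at the matching-score level can be sketched in a few lines: min-max normalize each algorithm's scores, then combine them with weights. The scores and weights below are made-up placeholders; in practice the weights would be tuned on validation data.

    import numpy as np

    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    # Matching scores of one probe against five gallery identities (made up)
    scores_pca = [0.35, 0.80, 0.22, 0.41, 0.55]
    scores_lda = [0.10, 0.92, 0.30, 0.25, 0.60]
    scores_ica = [0.20, 0.75, 0.15, 0.35, 0.50]

    weights = [0.3, 0.5, 0.2]   # hypothetical weights, e.g. tuned on a validation set
    fused = sum(w * minmax(s) for w, s in
                zip(weights, (scores_pca, scores_lda, scores_ica)))
    print("best match:", int(fused.argmax()), fused.round(2))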
20

Evaluation of the Robustness of Different Classifiers under Low- and High-Dimensional Settings

Lantz, Linnea January 2019 (has links)
This thesis compares the performance and robustness of five varieties of discriminant analysis, namely linear (LDA), quadratic (QDA), generalized quadratic (GQDA), diagonal linear (DLDA) and diagonal quadratic (DQDA) discriminant analysis, under elliptical distributions and small sample sizes. By means of simulations, the performance of the classifiers is compared across separations of the mean vectors, sample sizes, numbers of variables, degrees of non-normality and covariance structures. Results show that QDA is competitive under most settings, but can be outperformed by other classifiers as the sample size increases and when the covariance structures across classes are similar. Other noteworthy results include the sensitivity of DQDA to non-normality and the dependence of GQDA's performance on whether sample sizes are balanced.
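A compact sketch of the kind of simulation the thesis runs: draw two Gaussian classes (one member of the elliptical family) with different covariances, vary the training size, and compare LDA and QDA test accuracy. All distribution parameters are illustrative.

    import numpy as np
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)

    rng = np.random.default_rng(42)
    cov0 = np.array([[1.0, 0.3], [0.3, 1.0]])     # class 0 covariance
    cov1 = np.array([[2.0, -0.4], [-0.4, 0.5]])   # class 1: clearly different shape

    def draw(n):
        X = np.vstack([rng.multivariate_normal([0, 0], cov0, n),
                       rng.multivariate_normal([1.5, 1.0], cov1, n)])
        return X, np.repeat([0, 1], n)

    for n_train in (10, 50, 200):                 # training size per class
        Xtr, ytr = draw(n_train)
        Xte, yte = draw(1000)                     # large test set
        for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                          ("QDA", QuadraticDiscriminantAnalysis())]:
            acc = clf.fit(Xtr, ytr).score(Xte, yte)
            print(n_train, name, round(acc, 3))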
