
News media attention in Climate Action: Latent topics and open access

Karlsson, Kalle January 2020
The purpose of the thesis is i) to discover the latent topics of SDG13 and their coverage in news media, ii) to investigate the share of OA and non-OA articles and reviews in each topic, and iii) to compare the shares of different OA types (Green, Gold, Hybrid and Bronze) in each topic. It adopts a heuristic perspective and an exploratory approach in reviewing three concepts: open access, altmetrics and climate action (SDG13). Data are collected from SciVal, Unpaywall, Altmetric.com and Scopus, rendering a dataset of 70,206 articles and reviews published between 2014 and 2018. The documents retrieved are analyzed with descriptive statistics and topic modeling using scikit-learn's implementation of LDA (Latent Dirichlet Allocation) in Python. The findings show an altmetric advantage for OA in the case of news media and SDG13, which fluctuates across topics. News media are shown to focus on subjects with "visible" effects, in concordance with previous research on media coverage; examples include topics concerning greenhouse gas emissions and melting glaciers. Gold OA is the most common type mentioned in news outlets; it also generates the highest total number of news mentions, while the average number of news mentions per document was highest for documents published as Bronze. Moreover, the thesis is largely driven by its methods, most notably the programming language Python. As such, it outlines future paths for research into the three concepts reviewed as well as into the methods used for topic modeling and programming.

Effect of swirling blade on flow pattern in nozzle for up-hill teeming

Hallgren, Line January 2006
The fluid flow in the mold during up-hill teeming is of great importance for the quality of the cast ingot and therefore for the quality of the final steel products. At the early stage of filling an up-hill teeming mold, liquid steel enters the mold from the runner at high velocity, and the turbulence at the meniscus can lead to entrainment of mold flux, which may subsequently end up as defects in the final product. It is therefore very important to obtain a mild and stable inlet flow in the entrance region of the mold. It has recently been recognized that a swirling motion, induced by a helix-shaped swirl blade in the submerged entry nozzle, is remarkably effective at controlling the fluid flow pattern in both slab- and billet-type continuous casting molds, resulting in increased productivity and quality of the produced steel. Given these results in continuous casting, there is reason to investigate the swirling effect for up-hill teeming, a casting method with similar turbulence problems. In this thesis we study the effect of swirling flow generated by a swirl blade inserted into the entry nozzle, as a new method of reducing the deformation of the rising surface and the unevenness of the flow during filling of the up-hill teeming mold. The swirl blade has two functions: (1) to generate a swirling flow in the entrance nozzle, and (2) to suppress the uneven flow generated and developed after flowing through the elbow. The effect of a helix-shaped swirl blade was studied using both numerical calculations and physical modelling. Water modelling was used to assess the effect of the swirl blade on rectifying the tangential and axial velocities in the filling tube for up-hill teeming, and also to verify the results of the numerical calculations. The effect of swirl in combination with a divergent nozzle was investigated in a similar way, i.e. with water-model trials and numerical calculations.

Anemone: a Visual Semantic Graph

Ficapal Vila, Joan January 2019
Semantic graphs have been used to optimize various natural language processing tasks as well as to augment search and information retrieval. In most cases these semantic graphs have been constructed through supervised machine learning methodologies that depend on manually curated ontologies such as Wikipedia or similar. In this thesis, which consists of two parts, we explore in the first part the possibility of automatically populating a semantic graph from an ad hoc dataset of 50,000 newspaper articles in a completely unsupervised manner. The utility of the visual representation of the resulting graph is tested on 14 human subjects performing basic information retrieval tasks on a subset of the articles. Our study shows that, for entity finding and document similarity, our feature engineering is viable and the visual map produced by our artifact is useful. In the second part, we explore the possibility of identifying entity relationships in an unsupervised fashion by employing abstractive deep learning methods for sentence reformulation. The reformulated sentence structures are qualitatively assessed with respect to grammatical correctness and meaningfulness as perceived by 14 test subjects. We evaluate the outcomes of this second part negatively, as they were not good enough to support any definitive conclusion, but they have opened new doors to explore.
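The unsupervised graph-population idea in the first part can be illustrated with a toy co-occurrence sketch: candidate "entities" (here, naively, capitalized tokens, which is an assumption of this example, not the thesis's feature engineering) that appear in the same sentence are linked, and edges are weighted by co-occurrence counts.

```python
# Toy sketch of unsupervised semantic-graph population: link capitalized
# tokens that co-occur within a sentence, weighting edges by count.
# The thesis's actual feature engineering is richer; this only shows the idea.
import re
from collections import Counter
from itertools import combinations

text = (
    "Anemone maps newspaper articles. Reuters and AP supply articles. "
    "Reuters quoted NASA. NASA and AP discussed satellites."
)

edges = Counter()
for sentence in re.split(r"[.!?]", text):
    # Naive entity candidates: capitalized tokens (illustrative heuristic).
    entities = sorted(set(re.findall(r"\b[A-Z][A-Za-z]+\b", sentence)))
    for a, b in combinations(entities, 2):
        edges[(a, b)] += 1

for (a, b), w in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b} (weight {w})")
```

The resulting weighted edge list is the kind of structure a visual graph artifact can render directly.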

Exploring Swedish Attitudes and Needs Regarding Sustainable Food through Sentiment Analysis in Social Media

Ayubu, Victoria Said, Khan, Mohammed Shahid January 2024
Social media has recently become an essential component of daily modern life, with platforms like Facebook, YouTube, and Twitter serving as popular venues for people to share their opinions on various topics, including sustainable food. Interest in consumer sentiment towards sustainable practices has increased, particularly after COVID-19. This study investigates the attitudes and needs of Swedish consumers regarding sustainable food consumption as reflected in their social media interactions, using 4,588 comments from Facebook and YouTube. The methods used are sentiment analysis and topic modelling, with VADER and Latent Dirichlet Allocation (LDA) respectively. The results reveal a generally strong positive attitude toward sustainable food. However, the study further observes a decline in positive sentiment over time, indicating changing consumer opinions. The primary topic identified is market challenges, such as high pricing. Furthermore, health concerns and environmental considerations are both identified as important factors influencing the choice of sustainable food. The findings highlight the necessity of policy interventions to enhance the affordability and accessibility of sustainable food, as well as the effective use of social media for raising consumer awareness.
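The sentiment-over-time analysis can be sketched with a tiny lexicon-based scorer standing in for VADER. The lexicon and comments below are invented for illustration; VADER itself ships a much larger, rule-aware valence lexicon, and only the per-period aggregation mirrors the study's setup.

```python
# Illustrative stand-in for the VADER step: a toy valence lexicon scores
# each comment, and scores are averaged per year to expose a trend.
from statistics import mean

LEXICON = {"love": 2.0, "great": 1.5, "good": 1.0,
           "expensive": -1.0, "bad": -1.5, "terrible": -2.0}

def score(comment: str) -> float:
    """Average valence of lexicon words in the comment (0.0 if none)."""
    hits = [LEXICON[w] for w in comment.lower().split() if w in LEXICON]
    return mean(hits) if hits else 0.0

comments_by_year = {  # made-up comments, grouped by period
    2022: ["love this organic food", "great local produce"],
    2023: ["good idea but expensive", "terrible pricing, bad value"],
}

trend = {year: mean(score(c) for c in cs)
         for year, cs in comments_by_year.items()}
print(trend)  # positive early, negative later
```

The same year-by-year aggregation, applied to VADER compound scores, is what reveals a declining sentiment trend in a real comment corpus.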

Same same, but different? On the Relation of Information Science and the Digital Humanities: A Scientometric Comparison of Academic Journals Using LDA and Hierarchical Clustering

Burghardt, Manuel, Luhmann, Jan 26 June 2024
In this paper we investigate the relationship between Information Science (IS) and the Digital Humanities (DH) by means of a scientometric comparison of academic journals from the respective disciplines. In order to identify scholarly practices in both disciplines, we apply a recent variant of LDA topic modeling that makes use of additional hierarchical clustering. The results reveal characteristic topic areas for both IS (information retrieval, information seeking behavior, scientometrics) and DH (computational linguistics, distant reading, and digital editions) that can be used to distinguish them as disciplines in their own right. However, there is also a larger shared area of practices related to information management, as well as a few shared topic clusters that indicate a common ground for, mostly methodological, exchange between the two disciplines.

A framework for exploiting electronic documentation in support of innovation processes

Uys, J. W. March 2010
Thesis (PhD (Industrial Engineering))--University of Stellenbosch, 2010. / The crucial role of innovation in creating sustainable competitive advantage is widely recognised in industry today. Likewise, the importance of having the required information accessible to the right employees at the right time is well appreciated. More specifically, the dependency of effective, efficient innovation processes on the availability of information has been pointed out in the literature. A great challenge is countering the effects of the information overload phenomenon in organisations, so that employees can find the information appropriate to their needs without having to wade through excessively large quantities of information. The initial stages of the innovation process, which are characterised by free association, semi-formal activities, conceptualisation, and experimentation, have already been identified as a key focus area for improving the effectiveness of the entire innovation process. The dependency on information during these early stages of the innovation process is especially high. Any organisation requires a strategy for innovation, a number of well-defined, implemented processes, and measures to be able to innovate in an effective and efficient manner and to drive its innovation endeavours. In addition, the organisation requires certain enablers to support its innovation efforts, including certain core competencies, technologies and knowledge. Most importantly for this research, enablers are required to more effectively manage and utilise innovation-related information. Information residing inside and outside the boundaries of the organisation is required to feed the innovation process. The specific sources of such information are numerous, and the information may be structured or unstructured in nature. However, an ever-increasing proportion of available innovation-related information is of the unstructured type. Examples include the textual content of reports, books, e-mail messages and web pages. This research explores the innovation landscape and typical sources of innovation-related information. In addition, it explores the landscape of text-analytical approaches and techniques in search of ways to deal more effectively and efficiently with unstructured, textual information. A framework that can be used to provide a unified, dynamic view of an organisation's innovation-related information, both structured and unstructured, is presented. Once implemented, this framework will constitute an innovation-focused knowledge base that organises such innovation-related information and makes it accessible to the stakeholders of the innovation process. Two novel, complementary text-analytical techniques, Latent Dirichlet Allocation and the Concept-Topic Model, were identified for application with the framework. The potential value of these techniques as part of the information systems that would embody the framework is illustrated. The resulting knowledge base would cause a quantum leap in the accessibility of information and may significantly improve the way innovation is done and managed in the target organisation.

Avaliação da gravidade da malária utilizando técnicas de extração de características e redes neurais artificiais

Almeida, Larissa Medeiros de 17 April 2015
About half the world's population lives in malaria risk areas. Moreover, given the globalization of travel, diseases that were once considered exotic and mostly tropical are increasingly found in hospital emergency rooms around the world. When it comes to experience with tropical diseases, expert opinion is often unavailable or not accessible in a timely manner. The task of producing an accurate and efficient diagnosis of malaria, essential in medical practice, can become complex, and the complexity of this process increases as patients present non-specific symptoms with a large amount of data and imprecise information involved. In this context, Uzoka and colleagues (2011a), using clinical information from 30 Nigerian patients with confirmed malaria, applied the Analytic Hierarchy Process (AHP) and fuzzy methodology to evaluate the severity of malaria. The results obtained were compared with the diagnoses of medical experts.
This dissertation develops a new methodology to evaluate the severity of malaria and compares it with the techniques used by Uzoka and colleagues (2011a); for this purpose, the dataset used is the same as in that study. The technique used is Artificial Neural Networks (ANN). Three architectures with different numbers of neurons in the hidden layer are evaluated, along with two training methodologies (leave-one-out and 10-fold cross-validation) and three stopping criteria: root mean square error, early stopping, and regularization. In the first phase, the full database is used. Subsequently, feature extraction methods are applied: in the second phase, Principal Component Analysis (PCA), and in the third phase, Linear Discriminant Analysis (LDA). The best result across the three phases, 83.3%, was obtained with the full database using the regularization criterion combined with the leave-one-out method. The best result obtained in (Uzoka, Osuji and Obot, 2011) was 80% accuracy, with the fuzzy network.
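The evaluation protocol described above can be sketched with scikit-learn: a small MLP assessed under leave-one-out cross-validation, with an L2 penalty (`alpha`) standing in for the regularization stopping criterion. The data here are synthetic, not the 30-patient malaria dataset, and the architecture is an illustrative assumption.

```python
# Hedged sketch of ANN evaluation with leave-one-out cross-validation
# and L2 regularization, mirroring the protocol described in the text.
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 30-patient dataset.
X, y = make_classification(n_samples=30, n_features=5, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(8,),  # one hidden layer, 8 neurons
                    alpha=1e-2,               # L2 regularization strength
                    max_iter=2000, random_state=0)

# One fit per held-out sample: 30 train/test splits in total.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.3f}")
```

With only 30 patients, leave-one-out is a natural choice, since it wastes no training data on a large held-out set.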

Feature Extraction and Dimensionality Reduction in Pattern Recognition and Their Application in Speech Recognition

Wang, Xuechuan January 2003
Conventional pattern recognition systems have two components: feature analysis and pattern classification. Feature analysis is achieved in two steps: parameter extraction and feature extraction. In the parameter extraction step, information relevant to pattern classification is extracted from the input data in the form of a parameter vector. In the feature extraction step, the parameter vector is transformed into a feature vector. Feature extraction can be conducted independently or jointly with either parameter extraction or classification. Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are two popular independent feature extraction algorithms. Both extract features by projecting the parameter vectors into a new feature space through a linear transformation matrix, but they optimize the transformation matrix with different objectives. PCA optimizes the transformation matrix by finding the largest variations in the original feature space. LDA pursues the largest ratio of between-class variation to within-class variation when projecting the original feature space onto a subspace. The drawback of independent feature extraction algorithms is that their optimization criteria differ from the classifier's minimum classification error criterion, which may cause inconsistency between the feature extraction and classification stages of a pattern recognizer and, consequently, degrade classifier performance. A direct way to overcome this problem is to conduct feature extraction and classification jointly under a consistent criterion. The Minimum Classification Error (MCE) training algorithm provides such an integrated framework. The MCE algorithm was first proposed for optimizing classifiers: it is a type of discriminative learning algorithm that achieves minimum classification error directly. The flexibility of the MCE framework makes it convenient to conduct feature extraction and classification jointly.
Conventional feature extraction and pattern classification algorithms, including LDA, PCA, the MCE training algorithm, the minimum distance classifier, the likelihood classifier and the Bayesian classifier, are linear algorithms. The advantage of linear algorithms is their simplicity and their ability to reduce feature dimensionality. However, the decision boundaries they generate are linear and have little computational flexibility. The SVM is a more recently developed, integrated pattern classification algorithm with a non-linear formulation. It is based on the idea that the dot products required for classification can be computed efficiently in higher-dimensional feature spaces, so classes which are not linearly separable in the original parametric space can be linearly separated in the higher-dimensional feature space. Because of this, the SVM has the advantage that it can handle classes with complex non-linear decision boundaries. However, the SVM is a highly integrated and closed pattern classification system, and it is very difficult to incorporate feature extraction into its framework; thus the SVM is unable to conduct feature extraction tasks. This thesis investigates LDA and PCA for feature extraction and dimensionality reduction and proposes the application of MCE training algorithms for joint feature extraction and classification tasks. A generalized MCE (GMCE) training algorithm is proposed to mend the shortcomings of the MCE training algorithm in joint feature extraction and classification tasks. The SVM, as a non-linear pattern classification system, is also investigated, and a reduced-dimensional SVM (RDSVM) is proposed to enable the SVM to conduct feature extraction and classification jointly. All of the investigated and proposed algorithms are tested and compared, first on a number of small databases, such as the Deterding Vowels database, Fisher's Iris database and the German GLASS database, and then in a large-scale speech recognition experiment based on the TIMIT database.
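The contrast between the two independent feature-extraction criteria can be shown concretely on Fisher's Iris data, one of the small databases mentioned above: PCA projects to directions of maximum variance (ignoring class labels), while LDA maximizes the between-class to within-class variation ratio (using the labels). This is only a sketch of the two projections, not the thesis's full experimental setup.

```python
# PCA vs LDA as independent feature extractors on Fisher's Iris data:
# both reduce the 4-dimensional parameter vectors to 2 features, but
# PCA is unsupervised while LDA uses the class labels.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

X_pca = PCA(n_components=2).fit_transform(X)  # maximizes variance; ignores y
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)  # uses y

print(X_pca.shape, X_lda.shape)  # both reduce 4 features to 2
```

Note that LDA's output dimensionality is capped at (number of classes − 1), which is exactly 2 for the three Iris classes.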

Simulations multi-échelles de la diffusion des défauts dans les semi-conducteurs Si et SiGe

Caliste, Damien 07 December 2005
This manuscript studies point defects and their role in diffusion within the Si and SiGe semiconductors, following a numerical approach. The fact that the concentration changes observed in a crystal at its own scale are induced by movements at the atomic scale led to a multi-scale approach.

Ab initio calculation is a tool well suited to exploring inter-atomic phenomena. Coupled with configuration-minimization algorithms, it gives access to the stable states and transition states of diffusive processes. The macroscopic motion is then reproduced using kinetic Monte Carlo simulations.

In the present work, we detail the energy costs and geometries of the main defects reported in Si and SiGe. The vacancy, the split interstitial, the hexagonal interstitial and the fourfold-coordinated defect all emerge as low-energy defects in these systems. Studying the possible moves, and using them in thermodynamic simulations, shows the existence of several diffusion regimes, depending on whether the mediators of the motion act alone or in a coordinated way. We give the example of vacancy diffusion, whose observed variations are explained by the greater or lesser presence of divacancies and by the dissociation phenomena at play.

Through this study, we highlight the need, in the case of diffusion, to combine atomic-scale analysis with simulations at more macroscopic scales.
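The kinetic Monte Carlo step that bridges the atomic and macroscopic scales can be sketched with a toy residence-time algorithm for vacancy hopping on a 1-D lattice. The hop rate below uses made-up values (attempt frequency, migration barrier, temperature), not the manuscript's ab initio barriers, and real simulations run on 3-D lattices with many event types.

```python
# Toy residence-time kinetic Monte Carlo sketch: a single vacancy hops
# left or right on a 1-D lattice at a thermally activated rate, and the
# simulation clock advances by an exponentially distributed waiting time.
import math
import random

random.seed(0)

# Illustrative rate: attempt frequency * exp(-Em / kT), with made-up
# Em = 0.5 eV and kT = 0.025 eV (roughly room temperature).
rate = 1.0e12 * math.exp(-0.5 / 0.025)

pos, t = 0, 0.0
for _ in range(10_000):
    total = 2 * rate                            # two possible hops: left, right
    t += -math.log(random.random()) / total     # residence time before the event
    pos += random.choice((-1, 1))               # pick one hop uniformly

print(f"net displacement {pos} sites after t = {t:.3e} s")
```

Averaging such trajectories over many runs recovers a macroscopic diffusion coefficient from purely atomic-scale jump rates, which is the multi-scale link the abstract describes.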

Apprentissage automatique des classes d'occupation du sol et représentation en mots visuels des images satellitaires

Lienou, Marie Lauginie 02 March 2009
Land-cover recognition from automatic classification is one of the important methodological research topics in remote sensing. Moreover, obtaining results that match users' expectations requires approaching classification from a semantic point of view. This thesis fits into that context and aims to develop automatic methods capable of learning semantic classes defined by experts in land-cover map production, and of automatically annotating new images using this classification. Starting from maps derived from the CORINE Land Cover classification, and from the multispectral satellite images that contributed to building these maps, we first show that while the classical pixel- or region-based approaches in the literature are sufficient to identify homogeneous land-cover classes such as fields, they struggle to recover classes of high semantic level, so-called mixed classes, because these are composed of different land-cover types. To detect such complex classes, we represent images in a particular form based on regions or objects. This representation of the image, called visual words, makes it possible to exploit text-analysis tools that have proven effective in text mining and in multimedia image classification. Using supervised and unsupervised approaches, we exploit, on the one hand, the notion of semantic compositionality, highlighting the importance of the spatial relations between visual words in determining classes of high semantic level. On the other hand, we propose an annotation method based on a statistical text-analysis model: Latent Dirichlet Allocation.
We build on this mixture model, which requires a bag-of-visual-words representation of the image, to model semantically rich classes appropriately. Evaluations of the proposed approaches, and comparative studies with Gaussian and derived models as well as with the SVM classifier, are illustrated on SPOT and QuickBird images, among others.
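The bag-of-visual-words representation the thesis relies on can be sketched in two steps: quantize local patch descriptors into a visual vocabulary (commonly with k-means), then represent each image as a histogram of visual-word counts, which is the input a topic model like LDA expects. The random descriptors below are stand-ins for real image features.

```python
# Sketch of a bag-of-visual-words pipeline: k-means builds the visual
# vocabulary from pooled patch descriptors; each image then becomes a
# histogram of visual-word occurrences.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 3 "images", each with 20 patch descriptors of dimension 8 (random stand-ins).
images = [rng.normal(size=(20, 8)) for _ in range(3)]

# Step 1: learn a 5-word visual vocabulary from all patches pooled together.
vocab = KMeans(n_clusters=5, n_init=10, random_state=0).fit(np.vstack(images))

# Step 2: one histogram per image, counting patches assigned to each word.
hists = np.array([np.bincount(vocab.predict(im), minlength=5) for im in images])
print(hists)  # one row per image; counts sum to the number of patches (20)
```

These histograms play the role that word counts play in text: feeding them to LDA yields per-image topic mixtures over visual words.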
