  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Semi-automatic Classification of Remote Sensing Images

Dos Santos, Jefersson Alex 25 March 2013 (has links) (PDF)
A huge effort has been made in the development of image classification systems with the objective of creating high-quality thematic maps and establishing precise inventories of land-cover use. The peculiarities of Remote Sensing Images (RSIs), combined with the traditional image classification challenges, make RSI classification a hard task. Many of the problems are related to the representation scale of the data, and to both the size and the representativeness of the training set used. In this work, we addressed four research issues in order to develop effective solutions for interactive classification of remote sensing images. The first research issue concerns the fact that image descriptors proposed in the literature achieve good results in various applications, but many of them have never been used in remote sensing classification tasks. We tested twelve descriptors that encode spectral/color properties and seven texture descriptors. We also proposed a methodology based on the K-Nearest Neighbor (KNN) classifier for the evaluation of descriptors in a classification context. Experiments demonstrate that Joint Auto-Correlogram (JAC), Color Bitmap, Invariant Steerable Pyramid Decomposition (SID), and Quantized Compound Change Histogram (QCCH) yield the best results in coffee and pasture recognition tasks. The second research issue refers to the problem of selecting the scale of segmentation for object-based remote sensing classification. Recently proposed methods exploit features extracted from segmented objects to improve high-resolution image classification. However, the definition of the scale of segmentation is a challenging task. We proposed two multiscale classification approaches based on boosting of weak classifiers. The first approach, Multiscale Classifier (MSC), builds a strong classifier that combines features extracted from multiple scales of segmentation. The other, Hierarchical Multiscale Classifier (HMSC), exploits the hierarchical topology of segmented regions to improve training efficiency without accuracy loss when compared to the MSC. Experiments show that it is better to use multiple scales than only one segmentation scale. We also analyzed and discussed the correlation among the descriptors used and the scales of segmentation. The third research issue concerns the selection of training examples and the refinement of classification results through multiscale segmentation. We proposed an approach for interactive multiscale classification of remote sensing images. It is an active learning strategy that allows the user to refine the classification result along iterations. Experimental results show that the combination of scales produces better results than isolated scales in a relevance feedback process. Furthermore, the interactive method achieves good results with few user interactions. The proposed method needs only a small portion of the training set to build classifiers that are as strong as the ones generated by a supervised method that uses the whole available training set. The fourth research issue refers to the problem of extracting features from a hierarchy of regions for multiscale classification. We proposed a strategy that exploits the existing relationships among regions in a hierarchy. This approach, called BoW-Propagation, exploits the bag-of-visual-words model to propagate features along multiple scales. We also extended this idea to propagate histogram-based global descriptors, the H-Propagation method. The proposed methods speed up the feature extraction process and yield good results when compared with global low-level extraction approaches.
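The KNN-based evaluation protocol described in this abstract — scoring a descriptor by how well a plain KNN classifier performs on the feature vectors it produces — can be sketched roughly as follows. This is an illustrative sketch only; the data, dimensionality, and function names are assumptions, not taken from the thesis.

```python
import numpy as np

def knn_accuracy(train_feats, train_labels, test_feats, test_labels, k=5):
    """Score a descriptor by the accuracy of a plain KNN classifier
    built on the feature vectors it produces."""
    correct = 0
    for x, y in zip(test_feats, test_labels):
        # Euclidean distance from the query to every training vector
        dists = np.linalg.norm(train_feats - x, axis=1)
        nearest = train_labels[np.argsort(dists)[:k]]
        # majority vote among the k nearest training samples
        values, counts = np.unique(nearest, return_counts=True)
        if values[np.argmax(counts)] == y:
            correct += 1
    return correct / len(test_labels)

# Toy stand-in for descriptor output: two well-separated classes,
# so a good descriptor should let KNN score highly.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0, 0.5, (20, 8)), rng.normal(5, 0.5, (20, 8))])
labels = np.array([0] * 20 + [1] * 20)
test = np.vstack([rng.normal(0, 0.5, (5, 8)), rng.normal(5, 0.5, (5, 8))])
test_labels = np.array([0] * 5 + [1] * 5)
print(knn_accuracy(train, labels, test, test_labels))  # 1.0 on this separable toy data
```

Ranking several descriptors by this accuracy score gives a simple, classifier-neutral way to compare them, which matches the spirit of the evaluation methodology the abstract describes.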
13

IFSO: An Integrated Framework for Automatic/Semi-automatic Software Refactoring and Analysis

Zheng, Yilei 23 April 2004 (has links)
To automatically or semi-automatically improve the internal structure of a legacy system, there are several challenges: most available software analysis algorithms focus on only one granularity level (e.g., method level, class level) without considering possible side effects on other levels during the process; the quality of a software system cannot be judged by a single algorithm; and software analysis is a time-consuming process which typically requires lengthy interactions. In this thesis, we present a framework, IFSO (Integrated Framework for automatic/semi-automatic Software refactoring and analysis), as a foundation for automatic/semi-automatic software refactoring and analysis. Our proposed conceptual model, LSR (Layered Software Representation Model), defines an abstract representation for software using a layered approach, where each layer corresponds to a granularity level. The IFSO framework, which is built upon the LSR model for component-based software, represents software at the system, component, class, method, and logic-unit levels. Each level can be customized independently by different algorithms, such as cohesion metrics, design heuristics, design problem detection, and operations. By cooperating across levels, IFSO presents a global view and an interactive environment for software refactoring and analysis. A prototype was implemented to evaluate our technology, and three case studies were developed on the prototype: three metrics, dead-code removal, and detection of low-coupled units.
14

Knowledge-enhanced text classification : descriptive modelling and new approaches

Martinez-Alvarez, Miguel January 2014 (has links)
The knowledge available to be exploited by text classification and information retrieval systems has changed significantly, both in nature and in quantity, in recent years. Nowadays, there are several sources of information that can potentially improve the classification process, and systems should be able to adapt to incorporate multiple sources of available data in different formats. This fact is especially important in environments where the required information changes rapidly and its utility may be contingent on timely implementation. For these reasons, the importance of adaptability and flexibility in information systems is rapidly growing. Current systems are usually developed for specific scenarios; as a result, significant engineering effort is needed to adapt them when new knowledge appears or the information needs change. This research investigates the usage of knowledge within text classification from two different perspectives. On the one hand, it studies the application of descriptive approaches for the seamless modelling of text classification, focusing on knowledge integration and complex data representation. The main goal is to achieve a scalable and efficient approach for rapid prototyping for text classification that can incorporate different sources and types of knowledge, and to minimise the gap between the mathematical definition and the modelling of a solution. On the other hand, it pursues the improvement of different steps of the classification process where knowledge exploitation has traditionally not been applied. In particular, this thesis introduces two classification sub-tasks, namely Semi-Automatic Text Classification (SATC) and Document Performance Prediction (DPP), and several methods to address them. SATC focuses on selecting the documents that are most likely to be wrongly assigned by the system, so that they can be manually classified while the rest are labelled automatically.
Document performance prediction estimates the classification quality that will be achieved for a document, given a classifier. In addition, we also propose a family of evaluation metrics to measure degrees of misclassification, and an adaptive variation of k-NN.
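The SATC routing idea — send the documents the classifier is least sure about to a human, auto-label the rest — is often approximated with a confidence margin between the top two predicted classes. The sketch below illustrates that common heuristic only; it is not the thesis's actual method, and all names, data, and the review fraction are made up.

```python
def split_for_review(doc_ids, probabilities, review_fraction=0.2):
    """SATC-style routing sketch: documents with the smallest margin
    between their top two class probabilities go to manual review;
    the rest keep their automatic label."""
    def margin(probs):
        top_two = sorted(probs, reverse=True)[:2]
        return top_two[0] - top_two[1]

    # rank documents from least to most confident
    ranked = sorted(doc_ids, key=lambda d: margin(probabilities[d]))
    n_review = max(1, int(len(doc_ids) * review_fraction))
    to_review = ranked[:n_review]      # low margin: likely misclassified
    auto_labeled = ranked[n_review:]   # high margin: accept system label
    return to_review, auto_labeled

# toy per-class probabilities from a hypothetical two-class classifier
docs = ["d1", "d2", "d3", "d4", "d5"]
probs = {
    "d1": [0.9, 0.1], "d2": [0.55, 0.45], "d3": [0.8, 0.2],
    "d4": [0.51, 0.49], "d5": [0.99, 0.01],
}
review, auto = split_for_review(docs, probs, review_fraction=0.4)
print(review)  # the two smallest-margin documents: ['d4', 'd2']
```

Raising `review_fraction` trades human effort for fewer automatic errors, which is exactly the dial SATC is meant to expose.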
15

Elaboration de critères prosodiques pour une évaluation semi-automatique des apprenants francophones de l'anglais / Devising prosodic criteria for a semi-automatic assessment of Francophone learners of English

Cauvin, Evelyne 04 December 2017 (has links)
The aim of this study is to model the prosodic interlanguage of Francophone learners of English in order to provide useful criteria for a semi-automatic assessment of their prosodic level in English. Learner assessment is a field that requires great rigour and fairness when setting up criteria, so as to ensure validity, reliability, feasibility and equality, whereas English prosody is highly variable. Hence, few studies have carried out research on assessing prosody, because it represents a real challenge. To address this issue, a specific strategy was devised to elaborate a methodology for successfully assessing a reading task. The approach relies upon the constant symbiosis between prosody and a speaker's subjective response to their environment. This methodology, also known as "profiling", first aims at selecting relevant native perceived and acoustic prosodic features that will optimize assessment criteria, by using their degree of emphasis and creating speakers' prosodic profiles: profiling realizations on the syntagmatic axis selects the native speakers serving as models, and profiling based on emphasis targets their most relevant realizations on the paradigmatic axis. Then, using the Longdale-Charliphonia corpus, the Francophone learners' productions are analysed acoustically. The automatic classification of the learners based on acoustic and perceptual prosodic variables is then compared with experts' traditional auditory assessment. This study achieves: a modelling of non-native English prosody through criterion-referenced assessment grids that rely upon native and non-native distinctive features drawn from temporal variables (speech rate with or without pauses), register, melody and rhythm; a semi-automatic evaluation, by ranking and marking, of 15 representative learners of the corpus based on these variables; and a correspondence between the traditional and the semi-automatic evaluation ranging from 56.83% to 59.74% when categorising the learners into three proficiency levels, depending on the profiling of the expert assessors.
16

Reconstruction of 3D rigid body motion in a virtual environment from a 2D image sequence

Dasgupta, Sumantra 30 September 2004 (has links)
This research presents a procedure for interactive segmentation and automatic tracking of moving objects in a video sequence. The user outlines the region of interest (ROI) in the initial frame; the procedure builds a refined mask of the dominant object within the ROI. The refined mask is used to model a spline template of the object to be tracked. The tracking algorithm then employs a motion model to track the template through a sequence of frames and gathers the 3D affine motion parameters of the object from each frame. The extracted template is compared with a previously stored library of 3D shapes to determine the closest 3D object. If the extracted template is completely new, it is used to model a new 3D object which is added to the library. To recreate the motion, the motion parameters are applied to the 3D object in a virtual environment. The procedure described here can be applied to industrial problems such as traffic management and material flow congestion analysis.
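Recreating the motion in a virtual environment amounts to applying each frame's recovered affine parameters to the vertices of the matched 3D model. The sketch below illustrates that final step; the rotation, translation, and points are invented for illustration and are not values from the thesis.

```python
import numpy as np

def apply_affine(points, A, t):
    """Apply one frame's recovered affine motion (3x3 matrix A plus
    translation vector t) to an Nx3 array of model vertices."""
    return points @ A.T + t

# hypothetical frame motion: rotate 90 degrees about the Z axis, then
# translate one unit along X
theta = np.pi / 2
A = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

vertices = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
moved = apply_affine(vertices, A, t)
print(moved)  # approximately [[1, 1, 0], [0, 0, 0]]
```

Chaining one such transform per frame replays the tracked motion on the library's 3D object, which is the replay mechanism the abstract describes.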
17

Semi-automatic Road Extraction from Very High Resolution Remote Sensing Imagery by RoadModeler

Lu, Yao January 2009 (has links)
Accurate and up-to-date road information is essential for both effective urban planning and disaster management. Today, very high resolution (VHR) imagery acquired by airborne and spaceborne imaging sensors is the primary source for the acquisition of spatial information about increasingly growing road networks. Given the increased availability of aerial and satellite images, it is necessary to develop computer-aided techniques to improve the efficiency and reduce the cost of road extraction tasks. Therefore, automation of image-based road extraction is a very active research topic. This thesis deals with the development and implementation of a semi-automatic road extraction strategy, which includes two key approaches: multidirectional and single-direction road extraction. It requires a human operator to initialize a seed circle on a road and specify an extraction approach before the road is extracted by automatic algorithms using multiple vision cues. The multidirectional approach is used to detect roads with different materials, widths, intersection shapes, and degrees of noise, but it sometimes interprets parking lots as road areas. Unlike the multidirectional approach, the single-direction approach can detect roads with few mistakes, but each seed circle can only be used to detect one road. In accordance with this strategy, a RoadModeler prototype was developed. Both aerial and GeoEye-1 satellite images of seven different types of scenes, with various road shapes in rural, downtown, and residential areas, were used to evaluate the performance of the RoadModeler. The experimental results demonstrated that the RoadModeler is reliable and easy to use by a non-expert operator, and that it clearly outperforms object-oriented classification: its average road completeness, correctness, and quality reached 94%, 97%, and 94%, respectively, higher than the 91%, 90%, and 85% reported by Hu et al. (2007). The successful development of the RoadModeler suggests that the integration of multiple vision cues potentially offers a solution for simple and fast acquisition of road information. Recommendations are given for further research to ensure that this progress goes beyond the prototype stage and towards everyday use.
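The completeness, correctness, and quality figures quoted above are standard road-extraction scores, typically computed from matched road lengths between the extracted and reference networks. A hedged sketch of the usual formulas follows; the lengths below are toy values, not data from the thesis.

```python
def road_metrics(matched_reference, reference_total,
                 matched_extraction, extraction_total):
    """Standard road-extraction scores computed from matched lengths.

    completeness = matched reference length / total reference length
    correctness  = matched extraction length / total extraction length
    quality couples both: TP / (TP + FP + FN), with lengths as counts.
    """
    completeness = matched_reference / reference_total
    correctness = matched_extraction / extraction_total
    tp = matched_extraction                     # correctly extracted length
    fp = extraction_total - matched_extraction  # extracted but not real road
    fn = reference_total - matched_reference    # real road that was missed
    quality = tp / (tp + fp + fn)
    return completeness, correctness, quality

# toy lengths in metres
comp, corr, qual = road_metrics(940, 1000, 970, 1000)
print(round(comp, 2), round(corr, 2), round(qual, 2))  # 0.94 0.97 0.92
```

Quality is the strictest of the three because it penalises both missed roads and false extractions at once, which is why it is usually the lowest of the reported numbers.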
19

Semi-automatic Semantic Video Annotation Tool

Aydinlilar, Merve 01 December 2011 (has links) (PDF)
Semantic annotation of video content is necessary for the indexing and retrieval tasks of video management systems. Currently, it is not possible to extract all high-level semantic information from video data automatically. Video annotation tools assist users in generating annotations to represent video data; the generated annotations can also be used for testing and evaluating content-based retrieval systems. In this study, a semi-automatic semantic video annotation tool is presented. Generated annotations are in the MPEG-7 metadata format to ensure interoperability. With the help of image processing and pattern recognition solutions, the annotation process is partly automated and annotation time is reduced. Annotations can be made for spatio-temporal decompositions of video data, and extraction of low-level visual descriptions is included to obtain complete descriptions.
