591

Recuperação de imagens por conteúdo baseada em realimentação de relevância e classificador por floresta de caminhos ótimos / Content-based image retrieval based on relevance feedback and optimum-path forest classifier

Silva, André Tavares da 18 August 2018 (has links)
Advisors: Léo Pini Magalhães, Alexandre Xavier Falcão / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2011 / Abstract: Considering the increasing size of the image collections that have resulted from the popularization of digital cameras and the Internet, efficient search methods are becoming ever more necessary. In this context, this doctoral dissertation proposes new methods for content-based image retrieval based on relevance feedback and on the OPF (optimum-path forest) classifier; it is also the first time the OPF classifier has been used with small training sets. The dissertation names "greedy" and "planned" the two distinct relevance-feedback learning paradigms, distinguished by the images they return. The first paradigm attempts, at each iteration, to return the images most relevant to the user, while the second learns from the images considered most informative or hardest to classify.
The dissertation presents relevance-feedback algorithms based on the OPF classifier using both paradigms with a single descriptor. Two techniques for combining descriptors are also presented along with the OPF-based relevance-feedback methods, to improve the effectiveness of the learning process. The first, MSPS (Multi-Scale Parameter Search), is used for the first time in content-based image retrieval; the second is an established technique based on genetic programming. A new relevance-feedback approach that applies the OPF classifier at two levels of interest is also presented: at one level the user selects pixels within the images, and at the other the most relevant images are chosen at each iteration. The dissertation shows that the OPF classifier is both efficient and effective for content-based image retrieval, requiring few learning iterations to present the desired results to the user. Simulations show that the proposed methods outperform reference methods based on multi-point queries and on support vector machines (SVM). In addition, the proposed image-retrieval methods based on the optimum-path forest classifier proved to be, on average, 52 times faster than the SVM-based methods. / Doctorate / Computer Engineering / Doctor of Electrical Engineering
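As a rough, illustrative sketch of the two relevance-feedback paradigms described above (not the dissertation's OPF implementation), the following Python snippet uses a generic scikit-learn classifier as a stand-in for OPF: the greedy strategy shows the user the images the current model considers most relevant, while the planned strategy shows the images closest to the decision boundary, i.e. the most informative ones. The toy data, the k-NN stand-in, and all names are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier  # stand-in for the OPF classifier

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 32))      # toy descriptor vectors, one per image
labels = np.zeros(500, dtype=int)          # hidden ground truth: 1 = relevant, 0 = irrelevant
labels[:100] = 1

def feedback_round(model, labeled_idx, strategy="greedy", k=8):
    """Pick k images to show the user, given the images labeled so far."""
    unlabeled = np.setdiff1d(np.arange(len(features)), labeled_idx)
    model.fit(features[labeled_idx], labels[labeled_idx])
    prob_relevant = model.predict_proba(features[unlabeled])[:, 1]
    if strategy == "greedy":
        order = np.argsort(-prob_relevant)              # most likely relevant first
    else:  # "planned": most informative, i.e. closest to the decision boundary
        order = np.argsort(np.abs(prob_relevant - 0.5))
    return unlabeled[order[:k]]

# Simulated session: the hidden labels play the role of the user's feedback.
labeled = np.array([0, 1, 2, 200, 201, 202])            # initial relevant + irrelevant examples
model = KNeighborsClassifier(n_neighbors=3)
for _ in range(5):
    shown = feedback_round(model, labeled, strategy="planned")
    labeled = np.union1d(labeled, shown)                # user feedback enlarges the training set
```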
592

Correlação entre os métodos tradicionais de quantificação de fases e o método que utiliza o sistema de análise de imagens em aços ao carbono comum / Correlation between traditional phase-quantification methods and the image-analysis-based method in plain carbon steels

Euclides Castorino da Silva 03 October 1997 (has links)
The evaluation of metallographic parameters in metallic materials is carried out by several methods and is used extensively in metallographic practice, since these parameters contribute significantly to the mechanical strength of carbon steels. However, which method is best to adopt has been a subject of much discussion among several authors. Recently, the rapid progress of computing has made it possible to create semi-automatic or fully automatic systems for evaluating these metallographic parameters, offering practitioners a new option for determining the parameters of interest. In this work, nine samples of annealed plain carbon steel plates, with carbon content ranging from 0.05% to 0.56% and exhibiting ferritic or ferritic-pearlitic structures, were selected in order to study the correlation between the most widely used traditional methods and the method based on digitized image analysis, which exploits the difference in gray levels between the phases present. The results of this correlation represent one of the first studies in the area. The metallographic findings, which fully explore the particularities of each method, demonstrate that the image-analysis method, compared with the traditional ones, provides a rapid and precise determination of the parameters, with good reproducibility of the results.
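To make the gray-level idea concrete, here is a minimal sketch of how a phase area fraction can be estimated by thresholding the gray-level histogram of a digitized micrograph. This is an illustration only, not the procedure used in the thesis; the synthetic micrograph, the gray levels, and the use of Otsu's threshold are assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu

# Synthetic stand-in for a digitized micrograph: bright ferrite matrix with darker pearlite islands.
rng = np.random.default_rng(1)
micrograph = np.full((512, 512), 200.0)                        # ferrite gray level
rr, cc = np.ogrid[:512, :512]
for y, x in rng.integers(40, 472, size=(30, 2)):
    micrograph[(rr - y) ** 2 + (cc - x) ** 2 < 20 ** 2] = 80   # pearlite gray level
micrograph += rng.normal(0, 10, micrograph.shape)              # imaging noise

# Otsu's method picks a gray-level threshold separating the two phases automatically.
t = threshold_otsu(micrograph)
pearlite_fraction = np.mean(micrograph < t)                    # area fraction of the dark phase
print(f"threshold = {t:.1f}, pearlite area fraction = {pearlite_fraction:.1%}")
```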
593

Towards Robust Machine Learning Models for Data Scarcity

January 2020 (has links)
abstract: Recently, well-designed and well-trained neural networks have yielded state-of-the-art results across many domains, including data mining, computer vision, and medical image analysis. Progress has been limited, however, for tasks where labels are difficult or impossible to obtain. This reliance on exhaustive labeling is a critical limitation to the rapid deployment of neural networks. In addition, current research scales poorly to large numbers of unseen concepts and is passively spoon-fed with data and supervision. To overcome these data-scarcity and generalization issues, in my dissertation I first propose two unsupervised conventional machine learning algorithms, hyperbolic stochastic coding and multi-resemble multi-target low-rank coding, to address the incomplete-data and missing-label problem. I further introduce a deep multi-domain adaptation network that leverages the power of deep learning by transferring rich knowledge from a large labeled source dataset. I also introduce a novel dynamically hierarchical time-sequence network that adaptively simplifies itself to cope with scarce data. For learning a large number of unseen concepts, lifelong machine learning offers many advantages, including abstracting knowledge from prior learning and using that experience to help future learning, regardless of how much data is currently available. Incorporating this capability and making it versatile, I propose deep multi-task weight consolidation to accumulate knowledge continuously and to significantly reduce data requirements across a variety of domains. Inspired by recent breakthroughs in automatically learning suitable neural network architectures (AutoML), I develop a nonexpansive AutoML framework to train an online model without an abundance of labeled data; it automatically expands the network to increase model capacity when necessary and then compresses the model to maintain efficiency. In my ongoing work, I propose an alternative form of supervised learning that does not require direct labels: various kinds of supervision derived from an image or object serve as target values for the target tasks, which turns out to be surprisingly effective. The proposed method requires only few-shot labeled data for training and can learn the information it needs in a self-supervised manner, generalizing to datasets not seen during training. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2020
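One widely used way to "consolidate" weights across tasks is to penalize changes to parameters that were important for earlier tasks (an elastic-weight-consolidation-style quadratic penalty). The sketch below shows that generic idea only; it is not the dissertation's deep multi-task weight consolidation method, and the model, the placeholder importance estimates, and the hyperparameter are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Snapshot of the weights after task A, and a per-weight importance estimate
# (e.g. a diagonal Fisher approximation); here the importance is just a placeholder.
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
importance = {n: torch.ones_like(p) for n, p in model.named_parameters()}

def consolidation_penalty(model, lam=100.0):
    """EWC-style quadratic penalty: important weights are pulled back toward
    the values they had after the previous task."""
    penalty = sum((importance[n] * (p - old_params[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return lam * penalty

# During training on task B the penalty is simply added to the task loss.
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
task_loss = nn.functional.cross_entropy(model(x), y)
total_loss = task_loss + consolidation_penalty(model)
total_loss.backward()
```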
594

Inclusion of Gabor textural transformations and hierarchical structures within an object based analysis of a riparian landscape

Kutz, Kain Markus 01 May 2018 (has links)
Land cover mapping is an important part of resource management, planning, and economic prediction. Improvements in remote sensing, machine learning, image processing, and object-based image analysis (OBIA) have made the process of identifying land cover types faster and more reliable, but these advances are unable to exploit the amount of information contained in ultra-high (sub-meter) resolution imagery. Previously, users have typically reduced the resolution of the imagery in an attempt to more closely match the interpretation or object scale and to rid the image of extraneous detail that may cause the OBIA process to delineate overly small objects when performing semi-automated delineation based on image properties (Mas et al., 2015; Eiesank et al., 2014; Hu et al., 2010). There have been few known attempts to exploit this detailed information in high resolution imagery using advanced textural components. In this study we try to circumvent the problems inherent to high resolution imagery by combining well-researched data transformations that aid the OBIA process with a texture transformation seldom used in Geographic Object Based Image Analysis (GEOBIA), the Gabor transform, and with the hierarchical organization of landscapes. We observe the difference in segmentation and classification accuracy of a random forest classifier when we fuse a Gabor-transformed image with a Normalized Difference Vegetation Index (NDVI), high resolution multi-spectral imagery (RGB and NIR), and a Light Detection and Ranging (LiDAR) derived canopy height model (CHM) within a riparian area in Southeast Iowa. Additionally, we observe the effect on classification accuracy of adding multi-scale land cover data to objects. Both the addition of hierarchical information and of Gabor textural information could aid the GEOBIA process in delineating and classifying the same objects that human experts would delineate within this riparian landscape.
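As a hedged illustration of how Gabor texture responses can be stacked with spectral bands and an NDVI layer and fed to a random forest classifier, here is a per-pixel sketch with synthetic arrays; it is not the object-based GEOBIA workflow or the data used in this study, and all array names and parameters are assumptions.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
h, w = 64, 64
red, nir = rng.random((h, w)), rng.random((h, w))            # stand-ins for the red and NIR bands
ndvi = (nir - red) / (nir + red + 1e-9)

# Gabor responses at several orientations capture texture that the raw bands miss.
gabor_feats = [gabor(nir, frequency=0.2, theta=t)[0]          # real part of the filter response
               for t in np.deg2rad([0, 45, 90, 135])]

# Per-pixel feature stack: spectral bands + NDVI + Gabor texture.
X = np.stack([red, nir, ndvi, *gabor_feats], axis=-1).reshape(h * w, -1)
y = rng.integers(0, 3, h * w)                                 # placeholder land-cover labels

clf = RandomForestClassifier(n_estimators=50, n_jobs=-1).fit(X, y)
land_cover_map = clf.predict(X).reshape(h, w)
```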
595

Développement et validation d’outils pour l’analyse morphologique du cerveau de Macaque / Morphometry Analysis Tools for the Macaque Brain : Development and Validation

Balbastre, Yaël 17 October 2016 (has links)
Understanding the mechanisms involved in neurodegenerative or developmental diseases, and designing new therapeutic approaches, rely on the use of relevant experimental models as well as appropriate imaging techniques. In this context, MRI is a prominent tool for in vivo anatomical investigation as it allows longitudinal follow-up. Successful translation of therapies from bench to bedside calls for well-characterized models and transferable biomarkers. Yet, despite the existence of both clinical and preclinical scanners, analysis tools are hardly translational. In this work, inspired by standards developed in humans, we built and validated tools for the automated segmentation of neuroanatomical structures in the macaque. The method is based on the registration of a digital probabilistic atlas to the subject's MRI, followed by the fitting of a statistical model consisting of a Gaussian mixture and Markov random fields. It was first validated in healthy adults and then applied to the study of normal neonatal brain development. Furthermore, to lay the groundwork for comparisons between MRI biomarkers and gold-standard post mortem biomarkers, we developed a pipeline for the automated 3D reconstruction of histological volumes of the macaque brain, which we applied to the characterization of MRI contrast in a stem-cell graft following an excitotoxic lesion.
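A minimal sketch of the Gaussian-mixture part of such a tissue segmentation is shown below; the atlas registration and the Markov-random-field regularization described above are omitted, and the intensity values, class count, and initialization are assumptions rather than the thesis' actual pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1D stand-in for T1-weighted intensities of three tissue classes.
rng = np.random.default_rng(3)
intensities = np.concatenate([
    rng.normal(30, 5, 4000),    # CSF-like
    rng.normal(70, 6, 6000),    # gray-matter-like
    rng.normal(110, 7, 5000),   # white-matter-like
]).reshape(-1, 1)

# In a real pipeline the mixture could be initialized from a registered probabilistic atlas;
# here we simply let EM estimate the three Gaussians from the data.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
tissue_labels = gmm.fit_predict(intensities)
posteriors = gmm.predict_proba(intensities)   # soft memberships, before any MRF smoothing
print(np.sort(gmm.means_.ravel()))            # estimated class means
```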
596

Quantification par approche micromorphologique couplée à de l’analyse d’images de l’effet de la mise en culture et de l’apport de matières organiques sur l’intensité et la dynamique des processus de lessivage et de bioturbation à l’échelle pluri décennale / Quantification by a combination of a micromorphological approach with image analysis of lessivage and bioturbation rates in soils in response to land use and agricultural practices changes at a pluri-decennial time scale

Sauzet, Ophélie 18 November 2016 (has links)
The intensity at which soils provide ecosystem services is a function of soil properties that evolve continuously under the action of numerous processes. Lessivage and bioturbation are of crucial importance because they involve the clay-size (< 2 μm) fraction, yet they are still poorly characterized. This study aims to i) develop and validate a digital 2D image-analysis method to quantify the intensity of both processes, ii) quantify the effect of two centuries of continuous cultivation and of a decade of repeated organic amendments on that intensity, and iii) characterize their dynamics. We succeeded in quantifying these processes by carefully considering the different levels of soil organization while combining a colorimetric and a textural approach. The percentage volume of worm-worked soil accumulated over the last 10,000 to 15,000 years is 65% at 40 cm depth and between 20 and 30% at 150 cm depth, corresponding to a soil mass flow of about 6,500 t/ha, i.e. 1,700 t/ha of clay-size fraction. Illuviation is responsible for a clay-size-fraction mass flow of 1,100 t/ha. On a time scale as short as two centuries, cultivation was found to induce i) a change in the characteristics of the soil pore network down to at least one meter depth, ii) a modification of the structure of the worm-worked soil volume, and iii) an increase in lessivage intensity. A decade of organic matter application, by contrast, tended to buffer most of these evolutions and to lower the intensity of lessivage. Finally, our study points out that soils are highly reactive and that our method may be particularly helpful for predicting soil evolution in the face of climate change, among other applications.
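As a toy illustration of this kind of image-based quantification (not the colorimetric and textural procedure developed in the thesis), a worm-worked fraction can be read off a binary mask of bioturbated features and converted to a mass per hectare under an assumed bulk density; every number below is a placeholder, not a value from the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
# Binary mask of bioturbated features in a 2D thin-section image (True = worm-worked).
mask = rng.random((400, 400)) < 0.3          # placeholder segmentation result

worm_worked_fraction = mask.mean()           # areal fraction, taken as a volume-fraction estimate

# Converting a volume fraction to a soil mass per hectare for a given depth slice,
# under an assumed bulk density (all values are illustrative placeholders).
layer_thickness_m = 1.0                      # depth interval considered
bulk_density_t_m3 = 1.5                      # assumed bulk density, t/m^3
area_m2_per_ha = 10_000
mass_t_ha = worm_worked_fraction * layer_thickness_m * bulk_density_t_m3 * area_m2_per_ha
print(f"worm-worked fraction: {worm_worked_fraction:.0%}, ~{mass_t_ha:,.0f} t/ha")
```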
597

Deformation Behavior of adidas BOOST(TM) Foams Using In Situ X-ray Tomography and Correlative Microscopy

January 2020 (has links)
abstract: Energy return in footwear is associated with the damping behavior of midsole foams, which stems from the combination of cellular structure and polymeric material behavior. Recently, traditional ethylene-vinyl acetate (EVA) foams have been replaced by BOOST(TM) foams, thereby reducing the energetic cost of running. These are bead foams made from expanded thermoplastic polyurethane (eTPU), which have a multi-scale structure consisting of fused porous beads at the meso-scale and thousands of small closed cells within the beads at the micro-scale. Existing predictive models coarsely describe the macroscopic behavior but do not take into account strain localizations and microstructural heterogeneities. Thus, enhancement of material performance and optimization requires a comprehensive understanding of the foam’s cellular structure at all length scales and its influence on mechanical response. This dissertation focused on the characterization and deformation behavior of eTPU bead foams with a unique graded cell structure at the micro- and meso-scales. The evolution of the foam structure during compression was studied using a combination of in situ lab-scale and synchrotron X-ray tomography in a four-dimensional (4D, deformation + time) approach. A digital volume correlation (DVC) method was developed to elucidate the role of cell structure on local deformation mechanisms. The overall mechanical response was also studied ex situ to probe the effect of cell size distribution on the force-deflection behavior. The radial variation in porosity and ligament thickness profoundly influenced the global mechanical behavior. The correlation of changes in void size and shape helped in identifying potentially weak regions in the microstructure. Strain maps showed the initiation of failure in the cell structure, which was found to be influenced by the heterogeneities among the immediate neighbors in a cluster of voids. Poisson’s ratio evaluated from DVC was related to the microstructure of the bead foams. The 4D approach taken here provided an in-depth and mechanistic understanding of the material behavior, both at the bead and plate levels, that will be invaluable in designing the next generation of high-performance footwear. / Dissertation/Thesis / Doctoral Dissertation Materials Science and Engineering 2020
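The following is a much-simplified sketch of the digital volume correlation idea: local displacements estimated by correlating subvolumes of a reference and a deformed tomogram. It is an illustration with a synthetic, rigidly shifted volume, not the DVC method developed in this dissertation; the subvolume size and grid are arbitrary assumptions.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(5)
reference = rng.random((64, 64, 64))                              # stand-in for the undeformed tomogram
deformed = np.roll(reference, shift=(2, -1, 3), axis=(0, 1, 2))   # rigid shift as a fake "deformation"

def local_displacement(ref, defo, corner, size=24):
    """Estimate the displacement of one subvolume by phase correlation."""
    z, y, x = corner
    sub_ref = ref[z:z + size, y:y + size, x:x + size]
    sub_def = defo[z:z + size, y:y + size, x:x + size]
    shift, _error, _phasediff = phase_cross_correlation(sub_ref, sub_def)
    return shift

# A coarse grid of subvolumes yields a (very sparse) displacement field.
for corner in [(0, 0, 0), (0, 32, 32), (32, 0, 32), (32, 32, 0)]:
    print(corner, local_displacement(reference, deformed, corner))
```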
598

Analysing the road reserve encroachment in Maseru Lesotho using remote sensing and image analysis

Ralitsoele, Teboho 15 September 2021 (has links)
The increasing rate of urbanization has led to encroachment on road reserves, leaving no space for road expansion and, in some cases, for maintenance and road furniture. The main aim of this study was to investigate methods for detecting road reserve encroachment in Maseru, Lesotho using aerial photographs. The study used both single-image and multiple-image analysis methods. In the single-image analysis, three image-classification methods were used to find objects located in the road reserve, covering both supervised and unsupervised classification. For supervised classification, a direct approach aimed to classify every object found in the road reserve, while an indirect approach mapped the ground surface in order to identify the objects within the road reserve. For unsupervised classification, the study assumed that small clusters represent encroachment. In the multiple-image analysis, images from 2015 and 2017 were used to determine which permanent objects had encroached on road reserves, under the assumption that encroachment does not change over time, so that objects left unchanged by change detection are those encroaching on the road reserve. A confusion matrix was used to identify the best-performing method, and the results show that the indirect method performed best in both Qoaling and Maqalika. All methods showed that there was encroachment on the road reserve, and the permanent encroaching objects were houses, shops, and shopping centers. The study recommends the use of images with higher resolution and more bands, and more frequent image acquisition.
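To illustrate how a confusion matrix supports this kind of accuracy assessment, here is a generic sketch with made-up reference and predicted labels; the class names, sample counts, and error rate are invented and do not come from the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

classes = ["no encroachment", "house", "shop", "shopping center"]
rng = np.random.default_rng(6)

reference = rng.integers(0, len(classes), 300)        # ground-truth labels at sample points
predicted = reference.copy()
flip = rng.random(300) < 0.2                          # simulate 20% misclassification
predicted[flip] = rng.integers(0, len(classes), flip.sum())

cm = confusion_matrix(reference, predicted)
print(cm)                                             # rows = reference, columns = predicted
print("overall accuracy:", accuracy_score(reference, predicted))
print("kappa:", cohen_kappa_score(reference, predicted))

# Per-class producer's accuracy (recall) and user's accuracy (precision):
producers = np.diag(cm) / cm.sum(axis=1)
users = np.diag(cm) / cm.sum(axis=0)
for name, pa, ua in zip(classes, producers, users):
    print(f"{name}: producer's accuracy {pa:.2f}, user's accuracy {ua:.2f}")
```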
599

Towards a framework for multi class statistical modelling of shape, intensity, and kinematics in medical images

Fouefack, Jean-Rassaire 10 August 2021 (has links)
Statistical modelling has become a ubiquitous tool for analysing morphological variation of bone structures in medical images. For radiological images, the shape, the relative pose between bone structures, and the intensity distribution are key features that are often modelled separately. A wide range of research has reported methods that incorporate these features as priors for machine learning purposes. Statistical shape, appearance (intensity profile in images) and pose models are popular priors for explaining variability across a sample population of rigid structures. However, a principled and robust way to combine shape, pose and intensity features has been elusive for four main reasons: 1) heterogeneity of the data (data with linear and non-linear natural variation across features); 2) sub-optimal representation of three-dimensional Euclidean motion; 3) artificial discretization of the models; and 4) lack of an efficient transfer learning process to project observations into the latent space. This work proposes a novel statistical modelling framework for multiple bone structures. The framework provides a latent space embedding shape, pose and intensity in a continuous domain, allowing for new approaches to skeletal joint analysis from medical images. First, a robust registration method for multi-volumetric shapes is described. Both sampling-based and parametric registration algorithms are proposed, which allow the establishment of dense correspondence across volumetric shapes (such as tetrahedral meshes) while preserving the spatial relationship between them. Next, the framework for developing statistical shape-kinematics models from in-correspondence multi-volumetric shapes embedding image intensity distributions is presented. The framework incorporates principal geodesic analysis and a non-linear metric for modelling the spatial orientation of the structures. More importantly, since all the features lie in a joint statistical space and in a continuous domain, this permits on-demand marginalisation to a region or feature of interest without training separate models. Thereafter, automated prediction of the structures in images is facilitated by a model-fitting method leveraging the models as priors in a Markov chain Monte Carlo approach. The framework is validated using controlled experimental data and the results demonstrate superior performance in comparison with state-of-the-art methods. Finally, the application of the framework to analysing computed tomography images is presented. The analyses include estimation of shape, kinematic and intensity profiles of bone structures in the shoulder and hip joints. For both these datasets, the framework is demonstrated for segmentation, registration and reconstruction, including the recovery of patient-specific intensity profiles. The presented framework realises a new paradigm in modelling multi-object shape structures, allowing for probabilistic modelling of not only shape, but also relative pose and intensity, as well as the correlations that exist between them. Future work will aim to optimise the framework for clinical use in medical image analysis.
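As a much-simplified analogue of the shape component of such a model, the sketch below builds a PCA-based point distribution model on toy landmark data. The thesis' framework instead uses principal geodesic analysis over a continuous, joint shape, pose, and intensity space, which is not reproduced here; the toy shapes and the single mode of variation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_shapes, n_landmarks = 40, 50

# Toy training set: each shape is a noisy circle whose radius varies (the "mode of variation").
angles = np.linspace(0, 2 * np.pi, n_landmarks, endpoint=False)
radii = 1.0 + 0.2 * rng.standard_normal(n_shapes)
shapes = np.stack([np.column_stack([r * np.cos(angles), r * np.sin(angles)])
                   + 0.01 * rng.standard_normal((n_landmarks, 2)) for r in radii])

X = shapes.reshape(n_shapes, -1)              # flatten landmarks into one vector per shape
mean_shape = X.mean(axis=0)

# PCA via SVD of the centred data gives the modes of shape variation.
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
eigenvalues = S ** 2 / (n_shapes - 1)

def synthesize(b):
    """Generate a new shape from coefficients b along the first len(b) modes."""
    return (mean_shape + Vt[:len(b)].T @ b).reshape(n_landmarks, 2)

new_shape = synthesize(np.array([2.0 * np.sqrt(eigenvalues[0])]))   # +2 sd along the first mode
```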
600

Studium homogenity tenkých vrstev organických materiálů / Study of the homogeneity of thin films of organic materials

Lacinová, Eva January 2008 (has links)
This thesis deals with the study of the homogeneity of thin organic layers using image analysis. The theoretical part deals with the preparation of thin layers and with some methods for examining their surfaces, especially optical microscopy and profilometry. An optical microscope (Nikon ECLIPSE E200), a digital camera (Nikon 5400), and a computer were used to study the homogeneity of the organic layers by image analysis. Images of the organic layer and of the individual electrodes deposited onto the organic layer were examined. The homogeneity of the surface layers was assessed using parameters derived from common statistical moments of the surface height (roughness average, root-mean-square roughness, skewness, and kurtosis). Differences between individual samples, in terms of the magnitudes of these moments and their homogeneity, are discussed at the end of the work.
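The moments mentioned above have standard definitions in surface metrology; a short sketch of how they can be computed from a measured height profile (synthetic data here, standing in for a profilometer trace) is:

```python
import numpy as np

rng = np.random.default_rng(8)
z = rng.normal(0.0, 12.0, 2048)          # stand-in for a profilometer height trace, in nm

dev = z - z.mean()                        # deviations from the mean line
Ra = np.mean(np.abs(dev))                 # roughness average
Rq = np.sqrt(np.mean(dev ** 2))           # root-mean-square roughness
Rsk = np.mean(dev ** 3) / Rq ** 3         # skewness (asymmetry of the height distribution)
Rku = np.mean(dev ** 4) / Rq ** 4         # kurtosis (peakedness; about 3 for a Gaussian surface)

print(f"Ra={Ra:.1f} nm  Rq={Rq:.1f} nm  Rsk={Rsk:.2f}  Rku={Rku:.2f}")
```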
