  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Processamento e análise de imagens histológicas de pólipos para o auxílio ao diagnóstico de câncer colorretal / Processing and analysis of histological images of polyps to aid in the diagnosis of colorectal cancer

Lopes, Antonio Alex 22 March 2019 (has links)
According to Brazil's National Cancer Institute (INCA), colorectal cancer is the third most common cancer among men and the second among women. Currently, the visual evaluation made by a pathologist is the main method used to diagnose disease from microscopic images of samples obtained in conventional biopsy exams. Computational image processing techniques enable the identification of elements and the extraction of features, contributing to the study of the structural organization of tissues and their pathological variations and increasing the precision of the decision-making process. Concepts and techniques from complex networks are valuable resources for developing methods of structural analysis of components in medical images. Within this perspective, the general objective of this work was to develop a method capable of processing and analyzing images obtained from biopsies of colon polyp tissue in order to classify the degree of atypia of the sample, which can be: without atypia, low grade, high grade, or cancer. Processing techniques, including a set of morphological operators, were used to segment and identify glandular structures. Structural analysis based on the identified glands was then performed using complex network techniques. The networks were created by turning the nuclei of the cells that make up the glands into vertices, connecting each vertex with 1 to 20 edges, and extracting network measures to build a feature vector. To evaluate the proposed method comparatively, classical image feature extractors were used, namely Haralick's descriptors, Hu's moments, the Hough transform, and SampEn2D. After evaluating the proposed method in different analysis scenarios, its overall accuracy was 82.0%, surpassing the classical methods. It is concluded that the proposed method for classifying histological images of polyps using structural analysis based on complex networks is promising as a means of increasing the accuracy of colorectal cancer diagnosis.
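The gland-network construction described above can be pictured as a k-nearest-neighbour graph over cell-nucleus centroids, from which network measures form a feature vector. A minimal sketch with hypothetical centroid data and toy degree features (not the author's implementation):

```python
import math

def knn_graph(points, k):
    """Build adjacency sets connecting each point to its k nearest neighbours."""
    edges = {i: set() for i in range(len(points))}
    for i, (xi, yi) in enumerate(points):
        dists = sorted(
            (math.hypot(xi - xj, yi - yj), j)
            for j, (xj, yj) in enumerate(points) if j != i
        )
        for _, j in dists[:k]:
            edges[i].add(j)
            edges[j].add(i)  # undirected: neighbour sets are kept symmetric
    return edges

def degree_features(edges):
    """Mean and maximum vertex degree, used here as a toy feature vector."""
    degrees = [len(nbrs) for nbrs in edges.values()]
    return (sum(degrees) / len(degrees), max(degrees))

# Hypothetical nucleus centroids (x, y) from one segmented gland
nuclei = [(0, 0), (1, 0), (0, 1), (1, 1), (5, 5)]
g = knn_graph(nuclei, k=2)
features = degree_features(g)
```

In the thesis the connection count is varied from 1 to 20 edges and many more network measures are extracted; the sketch only shows the graph-building step.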
152

Bone Fragment Segmentation Using Deep Interactive Object Selection

Estgren, Martin January 2019 (has links)
In recent years, semantic segmentation models utilizing Convolutional Neural Networks (CNNs) have seen significant success on multiple segmentation problems. Models such as U-Net have produced promising results in the medical field for both regular 2D and volumetric imaging, rivalling some of the best classical segmentation methods. In this thesis we examined the possibility of using a convolutional neural network-based model to segment discrete bone fragments in CT volumes, with segmentation hints provided by a user. We additionally examined different classical segmentation methods used in a post-processing refinement stage and their effect on segmentation quality. We compared the performance of our model to similar approaches and provided insight into how the interactive aspect of the model affected the quality of the result. We found that the combined approach of interactive segmentation and deep learning produced results on par with some of the best methods presented, provided there was an adequate amount of annotated training data. We additionally found that the number of segmentation hints provided by the user significantly affected the quality of the result, with the result converging at around 8 hints.
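A common way to feed user hints to a segmentation CNN, in the family of interactive methods this thesis builds on, is to encode the clicks as an extra input channel holding a clipped distance map. A minimal sketch under that assumption (illustrative only, not the thesis code):

```python
import math

def hint_distance_map(shape, clicks, cap=255.0):
    """Per-pixel distance to the nearest user click, clipped at `cap`.
    This is one common hint encoding for interactive CNN segmentation;
    the click coordinates here are hypothetical."""
    h, w = shape
    dist = [[cap] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for cy, cx in clicks:
                d = math.hypot(y - cy, x - cx)
                if d < dist[y][x]:
                    dist[y][x] = min(d, cap)
    return dist

# One foreground click at the top-left corner of a tiny 3x3 image
m = hint_distance_map((3, 3), [(0, 0)])
```

Each additional click reshapes this map, which is why the number of hints has such a direct effect on the result quality.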
153

Ambiente para avaliação de algoritmos de processamento de imagens médicas. / Environment for medical image processing algorithms assessment.

Santos, Marcelo dos 20 December 2006 (has links)
A variety of new image processing methods is constantly presented to the community, yet few have proved useful in clinical routine. Analyzing and comparing different algorithms, methods, and applications through sound testing is an essential part of qualifying an algorithm's design. However, it is usually very difficult to compare the performance and adequacy of different algorithms in the same way, mainly because of the difficulty of assessing software exhaustively, or at least of testing it on a comprehensive and diverse set of clinical cases. Several areas, such as software development, image processing, and medical training, need a diverse and comprehensive dataset of images and related information. Such datasets could be used to develop, test, and evaluate new medical software using public data. This work presents the development of a free, online, multipurpose, and multimodality medical image database environment. The environment, implemented as a distributed medical image database, stores medical images with acquisition information, reports, image processing algorithms, gold standards, and post-processed images. It also implements a peer review model that assures the quality of all datasets. As an example of its feasibility and ease of use, evaluations of two categories of medical image processing methods are shown: segmentation and compression. In addition, the use of the environment in other activities, such as the HC-FMUSP digital teaching file project, shows the robustness of the proposed architecture and its applicability to different purposes.
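The two method categories evaluated in the environment, segmentation and compression, are typically scored against gold standards with simple figures of merit such as the Dice coefficient and the compression ratio. A hypothetical sketch:

```python
def dice(seg, gold):
    """Dice similarity coefficient between a segmentation mask and a gold
    standard, both given as sets of pixel coordinates (1.0 = perfect)."""
    inter = len(seg & gold)
    return 2.0 * inter / (len(seg) + len(gold))

def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of original to compressed size; for lossless methods, higher
    is better at equal fidelity."""
    return original_bytes / compressed_bytes

# Hypothetical toy masks: two of three pixels agree
a = {(0, 0), (0, 1), (1, 0)}
b = {(0, 0), (0, 1), (1, 1)}
score = dice(a, b)
```

Storing gold standards alongside the images, as the environment does, is precisely what makes such scores comparable across submitted algorithms.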
154

Processamento de consultas por similaridade em imagens médicas visando à recuperação perceptual guiada pelo usuário / Similarity Queries Processing Aimed at Retrieving Medical Images Guided by the User's Perception

Silva, Marcelo Ponciano da 19 March 2009 (has links)
The continuous growth in the generation and exchange of digital medical images, and in their use in day-to-day procedures at hospitals and medical centers, has motivated computer science researchers to develop algorithms, methods, and tools to store, search, and retrieve images by their content; the content-based image retrieval (CBIR) field is thus growing at a fast pace. CBIR algorithms and tools, which are at the core of this work, can support decision making and the practice of medicine based on the study of similar cases, since the specialist can retrieve cases similar to the one under evaluation. However, the main obstacles are achieving fast retrieval over large image databases and reducing the semantic gap, i.e., the divergence between the result automatically delivered by the system and what the physician expects. In this work, distance functions and computational feature descriptors are analyzed with the goal of finding an efficient match between the low-level feature extraction methods and the high-level perceptual parameters the physician employs when analyzing images. Integrating these three elements (feature extractors, distance function, and perceptual parameter) resulted in the creation of similarity operators that bring the computational system closer to the end user, since images are retrieved according to the similarity perception of the physician, the system's end user. The experiments performed show that these operators can narrow the distance between the system and the specialist, contributing to bridge the semantic gap.
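One simple way to picture a perceptual parameter modulating a distance function is as per-feature weights in a weighted Minkowski distance, so that the features the specialist perceives as important count more. An illustrative sketch (the thesis' actual similarity operators may differ):

```python
def weighted_minkowski(u, v, weights, p=2):
    """Weighted Minkowski distance between two feature vectors. The weights
    play the role of a perceptual parameter: raising a weight makes that
    feature dominate the notion of similarity. Illustrative sketch only."""
    s = sum(w * abs(a - b) ** p for w, a, b in zip(weights, u, v))
    return s ** (1.0 / p)

# With uniform weights and p=2 this reduces to the Euclidean distance
d = weighted_minkowski([0.0, 0.0], [3.0, 4.0], [1.0, 1.0])
```

Tuning the weight vector per query, rather than fixing one global distance, is the sense in which retrieval can follow the physician's perception of similarity.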
155

Filmes finos de iodeto de chumbo (PbI2) produzidos por spray pyrolysis / Thin films of lead iodide (PbI2) produced by spray pyrolysis

Condeles, José Fernando 31 October 2003 (has links)
Researchers worldwide are searching for alternative methods that minimize the deposition time of thin films of semiconductors regarded as promising candidates for medical applications, such as X-ray detectors for digital radiography. Lead iodide (PbI2) has been considered a good candidate for the fabrication of room-temperature detectors. Other authors have fabricated prototype detectors using this material; their experiments showed high resolution and sensitivity for real-time imaging, demonstrating the material's potential for future medical applications. Nevertheless, one drawback of their methods is the long deposition time needed to fabricate the thin films. This work presents a new experimental methodology for depositing thin films of lead iodide (PbI2) by an alternative growth method called spray pyrolysis. An intrinsic advantage of the technique is that the deposition can easily be expanded to large-area substrates, as desired in industrial production lines. Lead iodide powder was dissolved in deionized water at 100°C (boiling water), where its solubility is higher than at room temperature. After dissolution of the powder, the solution was cooled to room temperature and filtered to remove the excess of formed crystals. The films were deposited from aqueous solution onto glass substrates held at different temperatures (from 150 to 270°C). The total deposition time was about 2.5 hours, leading to a film thickness of . In addition, the structural (X-ray diffraction and Raman scattering), electronic (dark electrical conductivity as a function of temperature), and surface (atomic force microscopy, AFM) properties of the produced films were investigated. To increase the crystalline grain size after deposition, the original samples were submitted to thermal treatment at 350°C for 3 hours, first in ambient atmosphere and then in a controlled atmosphere (N2); the influence of oxygen as a dopant was observed only in the first case. The crystalline grain size (relative to the main peak, 001) was analyzed for different deposition and thermal treatment temperatures, as well as the activation energy for electrical transport. An activation energy of approximately 0.50 eV was obtained for films deposited at 200°C; for other deposition temperatures between 150 and 250°C, minimum and maximum activation energies of 0.45 and 0.66 eV, respectively, were measured. In summary, the structural and electronic properties are discussed and correlated with the deposition method and thermal treatments. We believe that thin films with interesting structural and electronic properties can be produced by spray pyrolysis with short deposition times.
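The activation energies quoted above follow from the Arrhenius form of the dark conductivity, sigma(T) = sigma_0 * exp(-Ea / (k_B * T)), so Ea is recovered from the slope of ln(sigma) versus 1/T. A sketch with synthetic data generated from the reported 0.50 eV value:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(temps_k, sigmas):
    """Least-squares slope of ln(sigma) vs 1/T; the slope equals -Ea/k_B
    for an Arrhenius-activated conductivity. Returns Ea in eV."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * K_B

# Synthetic conductivities generated with Ea = 0.50 eV (the value reported
# for films deposited at 200 C); the fit should recover that value.
ea_true = 0.50
temps = [300.0, 320.0, 340.0, 360.0]
sigmas = [math.exp(-ea_true / (K_B * t)) for t in temps]
ea = activation_energy(temps, sigmas)
```

With measured data the points scatter around the line, and the quality of the linear fit indicates how well a single activation energy describes the transport.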
156

Novel scalable and real-time embedded transceiver system

Mohammed, Rand Basil January 2017 (has links)
Our society increasingly relies on the transmission and reception of vast amounts of data over serial connections featuring ever-increasing bit rates. In imaging systems, for example, the achievable frame rate is often limited by the serial link between camera and host even when modern serial buses with the highest bit rates are used. This thesis documents a scalable embedded transceiver system whose bandwidth and interface standard can be adapted to suit a particular application. This new approach to a real-time scalable embedded transceiver system is referred to as the Novel Reference Model (NRM), which connects two or more applications through a transceiver network in order to provide real-time data to a host system. The transceiver interfaces for which the NRM has been tested include LVDS, GIGE, PMA-direct, Rapid-IO and XAUI, each supporting a specific transceiver speed range suited to a particular type of physical medium. The scalable serial link approach has been extended with lossless data compression, with the aim of further increasing dataflow at a given bit rate. Two lossless compression methods were implemented, based on Huffman coding and a novel method called the Reduced Lossless Compression Method (RLCM). Both methods are integrated into the scalable transceivers, providing a comprehensive solution for optimal data transmission over a variety of different interfaces. The NRM is implemented on a field-programmable gate array (FPGA) using a system architecture that consists of three layers: application, transport and physical. A Terasic DE4 board was used as the main platform for implementing and testing the embedded system, while Quartus-II software and tools were used to design and debug the embedded hardware systems.
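Of the two lossless methods mentioned, Huffman coding is the classical one: frequent symbols get short codewords, rare symbols long ones. A compact sketch of building a Huffman code table (RLCM is the thesis' own method and is not reproduced here):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table {symbol: bitstring} for an iterable of
    symbols. Frequent symbols receive shorter, prefix-free codewords."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (total frequency, tiebreak id, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
```

In a hardware transceiver the same table would be realized as a lookup in logic, but the tree construction is identical.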
158

Efficient optimization for labeling problems with prior information: applications to natural and medical images

Bai, Junjie 01 May 2016 (has links)
The labeling problem, thanks to its versatile modeling ability, is widely used in various image analysis tasks. In practice, certain prior information is often available to be embedded in the model to increase accuracy and robustness. However, it is not always straightforward to formulate the problem so that the prior information is correctly incorporated, and it is even more challenging to ensure that the proposed model admits efficient algorithms for finding the globally optimal solution. In this dissertation, a series of natural and medical image segmentation tasks are modeled as labeling problems. Each proposed model incorporates different useful prior information: ordering constraints between certain labels, soft enforcement of user input, multi-scale context between over-segmented regions and original voxels, multi-modality context priors, location context between multiple modalities, a star-shape prior, and a gradient vector flow shape prior. With judicious exploitation of each problem's intricate structure, efficient and exact algorithms are designed for all proposed models. The efficient computation allows the proposed models to be applied to large natural and medical image datasets with a small memory footprint and reasonable running time, while the global optimality guarantee makes the methods robust to local noise and easy to debug. The proposed models and algorithms are validated in multiple experiments, using both natural and medical images, showing promising and competitive results when compared to the state of the art.
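For intuition, the simplest instance of such a labeling problem, a 1D chain with unary data costs and pairwise smoothness costs, can be solved exactly by dynamic programming. A toy sketch (the dissertation's graph-based models are far more general):

```python
def chain_labeling(unary, pairwise):
    """Exact minimum-energy labeling of a 1D chain by dynamic programming.
    unary[i][l]    : cost of assigning label l to node i
    pairwise[a][b] : cost of adjacent labels (a, b)
    Returns (minimum energy, optimal label list)."""
    n, num_labels = len(unary), len(unary[0])
    cost = list(unary[0])
    back = []
    for i in range(1, n):
        new_cost, arg = [], []
        for b in range(num_labels):
            best = min(range(num_labels),
                       key=lambda a: cost[a] + pairwise[a][b])
            new_cost.append(cost[best] + pairwise[best][b] + unary[i][b])
            arg.append(best)
        back.append(arg)
        cost = new_cost
    best_last = min(range(num_labels), key=lambda b: cost[b])
    labels = [best_last]
    for arg in reversed(back):       # trace the optimal path backwards
        labels.append(arg[labels[-1]])
    labels.reverse()
    return cost[best_last], labels

# Hypothetical example: 3 nodes, 2 labels, Potts smoothness cost
unary = [[0, 5], [5, 0], [0, 5]]
potts = [[0, 1], [1, 0]]
energy, labels = chain_labeling(unary, potts)
```

On grids and volumes this exhaustive-per-node recursion no longer applies, which is why the dissertation relies on graph constructions that still guarantee global optimality.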
159

Towards automatic detection and visualization of tissues in medical volume rendering

Dickens, Erik January 2006 (has links)
The technique of volume rendering can be a powerful tool when visualizing 3D medical data sets. Its characteristic of capturing 3D internal structures within a 2D rendered image makes it attractive in the analysis. However, the applications that implement this technique fail to reach out to most of the supposed end-users at the clinics and radiology departments of today. This is primarily due to problems centered on the design of the Transfer Function (TF), the tool that makes tissues visually appear in the rendered image. The interaction with the TF is too complex for a supposed end-user and its capability of separating tissues is often insufficient. This thesis presents methods for detecting the regions in the image volume where tissues are contained. The tissues that are of interest can furthermore be identified among these regions. This processing and classification is possible thanks to the use of a priori knowledge, i.e. what is known about the data set and its domain in advance. The identified regions can finally be visualized using tissue-adapted TFs that can create cleaner renderings of tissues where a normal TF would fail to separate them. In addition, an intuitive user control is presented that allows the user to easily interact with the detection and the visualization.
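A transfer function in this sense is a mapping from a voxel's scalar value to colour and opacity. A minimal piecewise-linear 1D example (illustrative only; the tissue-adapted TFs in the thesis are more elaborate):

```python
def linear_tf(control_points):
    """Build a 1D transfer function from sorted (value, (r, g, b, a))
    control points by piecewise-linear interpolation; values outside the
    covered range clamp to the nearest control point's colour."""
    def tf(v):
        if v <= control_points[0][0]:
            return control_points[0][1]
        for (v0, c0), (v1, c1) in zip(control_points, control_points[1:]):
            if v <= v1:
                t = (v - v0) / (v1 - v0)
                return tuple(a + t * (b - a) for a, b in zip(c0, c1))
        return control_points[-1][1]
    return tf

# Hypothetical ramp: fully transparent black at 0, opaque white at 100
tf = linear_tf([(0.0, (0.0, 0.0, 0.0, 0.0)),
                (100.0, (1.0, 1.0, 1.0, 1.0))])
mid = tf(50.0)
```

Designing such control points by hand is exactly the interaction the thesis argues is too complex for clinical end-users, motivating tissue-adapted TFs derived from the detected regions.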
