  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

System zur Beschreibung der objektiven Bildgüte digitaler Filmbelichter / A system for describing the objective image quality of digital film recorders

Kiening, Hans. Unknown Date (has links) (PDF)
Brandenburgische Techn. Universität, Diss., 2002--Cottbus.
62

Plant canopy modeling from Terrestrial LiDAR System distance and intensity data / Modélisation géométrique de la canopée des plantes à partir des données d'intensité et de distance fournies par un Système LiDAR Terrestre

Balduzzi, Mathilde 24 November 2014 (has links)
Le défi de cette thèse est de reconstruire la géométrie 3D de la végétation à partir des données de distance et d'intensité fournies par un scanner de type LiDAR. Une méthode de « shape-from-shading » par propagation est développée pour être combinée avec une méthode de fusion de données type filtre de Kalman pour la reconstruction optimale des surfaces foliaires.-Introduction-L'analyse des données LiDAR nous permet de dire que la qualité du nuage de point est variable en fonction de la configuration de la mesure : lorsque le LiDAR mesure le bord d'une surface ou une surface fortement inclinée, il intègre dans sa mesure une partie de l'arrière plan. Ces configurations de mesures produisent des points aberrants. On retrouve souvent ce type de configuration pour la mesure de feuillages puisque ces derniers ont des géométries fragmentées et variables. Les scans sont en général de mauvaise qualité et la quantité d'objets présents dans le scan rend la suppression manuelle des points aberrants fastidieuse. L'objectif de cette thèse est de développer une méthodologie permettant d'intégrer les données d'intensité LiDAR aux distances pour corriger automatiquement ces points aberrants. -Shape-From-Shading-Le principe du Shape-From-Shading (SFS) est de retrouver les valeurs de distance à partir des intensités d'un objet pris en photo. La caméra (capteur LiDAR) et la source de lumière (laser LiDAR) ont la même direction et sont placés à l'infini relativement à la surface, ce qui rend l'effet de la distance sur l'intensité négligeable et l'hypothèse d'une caméra orthographique valide. En outre, la relation entre angle d'incidence lumière/surface et intensité est connue. Par la nature des données LiDAR, nous pourrons choisir la meilleure donnée entre distance et intensité à utiliser pour la reconstruction des surfaces foliaires. 
Nous mettons en place un algorithme de SFS par propagation le long des régions iso-intenses pour pouvoir intégrer la correction de la distance grâce à l'intensité via un filtre de type Kalman. -Design mathématique de la méthode-Les morceaux de surface correspondant aux régions iso-intenses sont des morceaux de surfaces dites d'égales pentes, ou de tas de sable. Nous allons utiliser ce type de surface pour reconstruire la géométrie 3D correspondant aux images d'intensité.Nous démontrons qu'à partir de la connaissance de la 3D d'un bord d'une région iso-intense, nous pouvons retrouver des surfaces paramétriques correspondant à la région iso-intense qui correspondent aux surfaces de tas de sable. L'initialisation de la région iso-intense initiale (graine de propagation) se fait grâce aux données de distance LiDAR. Les lignes de plus grandes pentes de ces surfaces sont générées. Par propagation de ces lignes (et donc génération du morceau de la surface en tas de sable), nous déterminons l'autre bord de la région iso-intense. Puis, par itération, nous propagerons la reconstruction de la surface. -Filtre de Kalman-Nous pouvons considérer cette propagation des lignes de plus grande pente comme étant le calcul d'une trajectoire sur la surface à reconstruire. Dans le cadre de notre étude, la donnée de distance est toujours disponible (données du scanner 3D). Ainsi il est possible de choisir, lors de la propagation, quelle donnée (distance ou intensité) utiliser pour la reconstruction. Ceci peut être fait notamment grâce à une fusion de type Kalman. -Algorithme-Pour procéder à la reconstruction par propagation, il est nécessaire d'hiérarchiser les domaines iso-intenses de l'image. Une fois que les graines de propagation sont repérées, elles sont initialisées avec l'image des distances. Enfin, pour chacun des nœuds de la hiérarchie (représentant un domaine iso-intense), la reconstruction d'un tas de sable est faite. 
C'est lors de cette dernière étape qu'une fusion de type Kalman peut être introduite. / The challenge of this thesis is to reconstruct the 3D geometry of vegetation from the distance and intensity data provided by a LiDAR scanner. A "shape-from-shading" method based on propagation is developed and combined with a Kalman-type data-fusion method to obtain an optimal reconstruction of the leaf surfaces. -Introduction- Analysis of the LiDAR data shows that point-cloud quality varies with the measurement configuration: when the laser beam hits the edge of a surface, or a steeply inclined surface, the measurement also integrates part of the background. These configurations produce outliers, and they are common when measuring foliage, since foliage has a fragmented and variable geometry. The scans are generally of poor quality, and the number of leaves in a scan makes manual removal of outliers tedious. The goal of this thesis is to develop a methodology that integrates LiDAR intensity data with the distance data in order to correct these outliers automatically. -Shape-from-shading- The principle of shape-from-shading (SFS) is to recover distance values from the intensities of a photographed object. The camera (LiDAR sensor) and the light source (LiDAR laser) share the same direction and are placed at infinity relative to the surface, which makes the effect of distance on intensity negligible and the orthographic-camera hypothesis valid. In addition, the relationship between the light/surface incidence angle and the intensity is known. Thanks to the nature of the LiDAR data, we can choose, between distance and intensity, the better datum to use for reconstructing the leaf surfaces. An SFS algorithm that propagates along iso-intensity regions is developed; this type of algorithm allows a Kalman-type fusion method to be integrated. -Mathematical design of the method- The surface patches corresponding to the iso-intensity regions are patches of so-called constant-slope, or "sand-pile", surfaces. We use these surfaces to rebuild the 3D geometry corresponding to the intensity images. We show that, from knowledge of the 3D position of one contour of an iso-intensity region, we can construct the parametric sand-pile surface corresponding to that region. The contour of the first iso-intensity region (the propagation seed) is initialised with the 3D LiDAR data. The lines of greatest slope of these surfaces are then generated; by propagating these lines (and thus generating the corresponding sand-pile surface patch), we obtain the other contour of the iso-intensity region. The reconstruction is then propagated iteratively. -Kalman filter- This propagation of the lines of greatest slope can be seen as computing a trajectory on the surface being reconstructed. In our setting the distance data are always available (3D scanner data), so at each propagation step we can choose which datum (intensity or distance) is better for reconstructing the surface. This choice can be made with a Kalman-type fusion. -Algorithm- To perform the reconstruction by propagation, the iso-intensity regions of the image must first be ordered into a hierarchy. Once the propagation seeds are found, they are initialised with the distance image. Finally, for each node of the hierarchy (representing an iso-intensity region), a sand-pile surface is reconstructed; it is at this last step that a Kalman-type fusion can be introduced. -Manuscript- The manuscript comprises five chapters. The first gives a short description of LiDAR technology and an overview of traditional 3D surface reconstruction from point clouds. The second is a state of the art of shape-from-shading methods. LiDAR intensity is studied in the third chapter, to define a strategy for correcting the distance effect and to establish the incidence-angle/intensity relationship. The fourth chapter presents the main results of the thesis: the theoretical development of the SFS algorithm, its description, and its results on synthetic images. A final chapter presents results on leaf reconstruction.
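The Kalman-type fusion at the heart of the method can be illustrated with a minimal one-dimensional sketch (this is not the thesis implementation): two depth estimates for the same surface point — one from the LiDAR distance channel, one propagated by shape-from-shading — are combined, weighted by assumed variances for each source.

```python
def kalman_fuse(z_dist, var_dist, z_sfs, var_sfs):
    """Fuse a LiDAR distance estimate with a shape-from-shading depth
    estimate, weighting each by the inverse of its (assumed) variance."""
    gain = var_dist / (var_dist + var_sfs)   # Kalman gain
    z = z_dist + gain * (z_sfs - z_dist)     # fused depth estimate
    var = (1.0 - gain) * var_dist            # fused variance, always smaller
    return z, var
```

When the distance measurement is reliable (small `var_dist`, e.g. a well-oriented surface) the fused value stays close to it; near edges, where distance outliers make `var_dist` large, the intensity-derived estimate dominates.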
63

Multicentre evaluation of MRI variability in the quantification of infarct size in experimental focal cerebral ischaemia

Milidonis, Xenios January 2017 (has links)
Ischaemic stroke is a leading cause of death and disability in the developed world. Although considerable advances in experimental research have enabled an understanding of the pathophysiology of the disease and identified hundreds of potential neuroprotective drugs for treatment, no such drug has shown efficacy in humans. The failure of translation from bench to bedside has been partially attributed to the poor quality and rigour of animal studies. Recently, it has been suggested that multicentre animal studies imitating the design of randomised clinical trials could improve the translation of experimental research. Magnetic resonance imaging (MRI) could be pivotal in such studies due to its non-invasive nature and its high sensitivity to ischaemic lesions, but its accuracy and concordance across centres have not yet been evaluated. This thesis focussed on the use of MRI for the assessment of late infarct size, the primary outcome used in stroke models. Initially, a systematic review revealed that a plethora of imaging protocols and data-analysis methods are used for this purpose. Using meta-analysis techniques, it was determined that T2-weighted imaging (T2WI) correlated best with gold-standard histology for the measurement of infarct-based treatment effects. Then, geometric accuracy in six different preclinical MRI scanners was assessed using structural phantoms and automated data-analysis tools developed in-house. It was found that geometric accuracy varies between scanners, particularly when centre-specific T2WI protocols are used instead of a standardised protocol, though longitudinal stability over six months is high. Finally, a simulation study suggested that the measured geometric errors and the differing protocols are sufficient to render infarct volumes and related group comparisons across centres incomparable; the variability increases when both factors are taken into account and when infarct volume is expressed as a relative estimate.
Data in this study were analysed using a custom-made semi-automated tool that was faster and more reliable in repeated analyses than manual analysis. Findings of this thesis support the implementation of standardised methods for the assessment and optimisation of geometric accuracy in MRI scanners, as well as image acquisition and analysis of in vivo data for the measurement of infarct size in multicentre animal studies. Tools and techniques developed as part of the thesis show great promise in the analysis of phantom and in vivo data and could be a step towards this endeavour.
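As a minimal illustration of the volumetry underlying these comparisons (not the thesis's own tools), infarct volume can be computed from a binary lesion mask and the scan's voxel dimensions, either as an absolute volume or as the relative, hemisphere-normalised estimate mentioned above; the function names are assumptions for this sketch.

```python
def infarct_volume_mm3(mask, voxel_dims_mm):
    """Absolute infarct volume: count lesion voxels and scale by voxel size.
    `mask` is a nested list (slices x rows x cols) of 0/1 labels."""
    dx, dy, dz = voxel_dims_mm
    n_lesion = sum(v for sl in mask for row in sl for v in row)
    return n_lesion * dx * dy * dz

def relative_infarct_pct(lesion_mm3, hemisphere_mm3):
    """Relative estimate: lesion volume as a percentage of the hemisphere.
    Geometric scaling errors cancel less cleanly here, since they affect
    numerator and denominator differently."""
    return 100.0 * lesion_mm3 / hemisphere_mm3
```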
64

A repercussão periodontal e dentária decorrente da expansão rápida da maxila assistida cirurgicamente: estudo em modelos digitais / Periodontal and dental effects of surgically assisted rapid maxillary expansion: a study on digital models

Fernandes, Mariana dos Santos 19 February 2008 (has links)
Surgically assisted rapid maxillary expansion (SARME) is one of the procedures of choice for correcting transverse deficiency in adult patients. This study evaluated the changes produced in the upper and lower dental arches of 18 patients (6 male, 12 female; mean age 23.3 years) undergoing SARME. For each patient, three plaster models were prepared and digitised with a 3D scanner at different stages: initial, before the surgical procedure (T1); 3 months post-expansion (T2); and 6 months post-expansion (T3). The transverse distances of the upper and lower dental arches, the tipping of the upper posterior teeth, and the clinical crown height of the upper posterior teeth were evaluated, and the correlation between the amount of dental tipping and the development of gingival recession was examined. Results were analysed with analysis of variance, Tukey's test and Pearson's correlation test; the paired t-test was used to assess intra-examiner systematic error, and Dahlberg's formula was used to estimate casual error. Based on the methodology used and the results obtained, it was concluded that: 1. regarding transverse changes in the upper arch, all variables increased from T1 to T2 and maintained their values from T2 to T3, demonstrating the effectiveness and stability of SARME; 2. in the lower arch there were no statistically significant transverse changes, except for the first molars; 3. dental tipping increased from T1 to T2 in all teeth, but with statistical significance only for the second molar and the first and second premolars on the right side, and the first molar and second premolar on the left side; 4. SARME did not cause gingival recession at any time point; 5. there was no correlation between the amount of dental tipping and the development of gingival recession. (AU) / A expansão rápida da maxila assistida cirurgicamente (ERMAC) é um dos procedimentos de escolha para correção da deficiência transversal em pacientes adultos. Este estudo avaliou as alterações produzidas nos arcos dentais superiores e inferiores de 18 pacientes, sendo seis do sexo masculino e 12 do sexo feminino, com média de idade de 23,3 anos submetidos à ERMAC. Para cada paciente foram preparados três modelos de gesso, que foram digitalizados por meio do Scanner 3D, obtidos em diferentes fases: inicial, antes do procedimento operatório (T1); três meses pós-expansão (T2); seis meses pós-expansão (T3). Foram avaliadas as distâncias transversais do arcos dentários superior e inferior, a inclinação dentária dos dentes posteriores superiores, a altura da coroa clínica dos dentes posteriores do arco superior e foi observado se havia correlação entre a quantidade de inclinação dentária com o desenvolvimento de recessões gengivais. Para análise dos resultados foram utilizados a análise de Variância, o Teste de Tukey e o Teste de Correlação de Pearson, sendo que para a análise do erro sistemático intra-examinador foi utilizado o teste t pareado e para determinação do erro casual utilizou-se o cálculo do erro de Dahlberg. Com base na metodologia utilizada e nos resultados obtidos, pode-se concluir que: 1. com relação as alterações produzidas no sentido transversal do arco superior, obteve-se um aumento em todas as variáveis de T1 para T2 e uma manutenção dos valores em todas as variáveis de T2 para T3 demonstrando efetividade e estabilidade do procedimento; 2. no arco inferior não houve alterações transversais estatisticamente significantes, com exceção dos primeiros molares; 3.
com relação às inclinações dentárias, observou-se um aumento desta de T1 para T2 em todos os dentes, porém, com significância estatística apenas para segundo molar e primeiro e segundo pré-molar do lado direito e primeiro molar e segundo pré-molar do lado esquerdo.; 4. a ERMAC não acarretou o desenvolvimento de recessões gengivais em nenhum dos tempos; 5. não houve correlação entre a quantidade de inclinação dentária e o desenvolvimento de recessões gengivais.(AU)
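The Dahlberg formula cited in this abstract for the casual (random) error between duplicate measurements has a simple closed form, d = sqrt(Σdᵢ²/2n); a small sketch (function name illustrative):

```python
import math

def dahlberg_error(first, second):
    """Dahlberg's formula for casual (random) measurement error between two
    sets of duplicate measurements: sqrt(sum(d_i^2) / (2n)),
    where d_i is the difference between the paired measurements."""
    diffs = [a - b for a, b in zip(first, second)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
```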
65

Determinação da concentração de fósforo em latossolo amarelo por imagens digitais no município de Itacoatiara-AM / Determination of the phosphorus concentration in yellow latosol by digital images in the municipality of Itacoatiara-AM

Clóvis, Alessandra de Matos, 992096585 30 November 2017 (has links)
FAPEAM - Fundação de Amparo à Pesquisa do Estado do Amazonas / The objective of this work was to quantify the concentration of phosphorus available in the yellow-latosol soil of a pineapple crop by means of a colorimetric method based on image analysis. Images of the soil extracts were obtained using a commercial scanner and free software. The determination of phosphorus is based on the reaction between orthophosphate ions (PO₄³⁻) and molybdate ions (MoO₄²⁻) in a strongly acid medium, forming the molybdophosphoric complex, which is reduced by ascorbic acid to molybdenum blue. After colour formation, absorbance is measured with a UV/VIS spectrophotometer at 660 nm. In parallel, the extracts/phosphorus standards were transferred to a microplate on the scanner, scanned, and analysed with the ImageJ program. The norm of the colour vector is used as the analytical response, which is related to the phosphorus concentration (mg/dm³) determined from the linear model of the standard curve. 
Samples were collected in the village of Novo Remanso, municipality of Itacoatiara-AM, during the region's winter and summer periods. Extraction of available phosphorus was performed with Mehlich 1 extractant at a 1:10 ratio. The filtration of the extracts, the study of the volume of standard solutions on the microplate, and the area of the analysed images all influenced the final results. A relationship was observed between the analysis time of the standards and the correlation of the standard curves. Phosphorus concentrations in samples collected in the dry period were higher than in those collected in the rainy season, by both the reference method and the proposed method. LD and LQ values were lower for the digital-image method. A t-test showed no significant difference between the two methodologies at the 95% confidence level (t calculated = 0.4705, t critical = 2.7764), and relative standard deviations were below 3.0%. / Este trabalho teve por objetivo quantificar a concentração de fósforo disponível no solo tipo latossolo amarelo por meio de um método colorimétrico baseado em análise de imagens. As imagens dos extratos de solo foram obtidas com uso de scanner comercial e um software gratuito. A determinação do fósforo é baseada na reação entre os íons ortofosfato (PO₄³⁻) com o molibdato (MoO₄²⁻) em meio fortemente ácido, formando o complexo molibdofosfórico, que é reduzido pelo ácido ascórbico à azul de molibdênio. Após a formação de cor, a absorbância é medida com uso de espectrofotômetro UV/VIS em 660 nm. Simultaneamente os extratos/padrões de fósforo foram transferidas para uma microplaca sobre o scanner, digitalizados e analisados com o programa ImageJ. A norma do vetor é usada como a resposta analítica, a qual se relaciona com a concentração de fósforo (mg/dm³) que é determinada a partir do modelo linear da curva padrão.
As amostras foram coletadas na Vila do Novo Remanso, município de Itacoatiara-AM, nos períodos de inverno e verão da região. A extração de fósforo disponível nos extratos foi realizada com extrator Mehlich 1 na proporção de 1:10. A filtração dos extratos, o estudo de volume de soluções padrão na microplaca e área das imagens analisadas contribuíram nos resultados finais de análise. Foram observados uma relação entre o tempo de análise dos padrões com a correlação das curvas padrão. Os resultados das concentrações de fósforo nas amostras coletadas no período seco foram superiores às coletadas no período chuvoso, tanto pelo método de referência quanto pelo método proposto. Os valores de LD e LQ foram inferiores por imagens digitais. Após a aplicação do teste t verificou-se que não há diferença significativa entre as duas metodologias para um nível de confiança de 95% (tcalculado= 0,4705 e tcrítico= 2,7764) e os desvios padrão relativos foram inferiores a 3,0%.
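The calibration step described above — relating an image-derived vector norm to known phosphorus concentrations through a linear standard curve, then inverting the curve for unknown samples — can be sketched as follows; the function names and the use of mean RGB values are illustrative assumptions, not the dissertation's actual ImageJ workflow.

```python
import math

def rgb_norm(r, g, b):
    """Euclidean norm of the mean RGB values of a microplate well image."""
    return math.sqrt(r * r + g * g + b * b)

def fit_line(x, y):
    """Ordinary least squares for y = a*x + b (the standard curve),
    with x the known concentrations and y the measured norms."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def predict_conc(norm, a, b):
    """Invert the standard curve: concentration (mg/dm^3) from a norm."""
    return (norm - b) / a
```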
66

Projeto, microfabricação e caracterização de defletor de luz de silicio acionado por indução / Design, microfabrication and characterisation of an induction-driven silicon light deflector

Barbaroto, Pedro Ricardo 02 August 2018 (has links)
Orientadores: Ioshiaki Doi, Luiz Otavio Saraiva Ferreira / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2002 / Mestrado
67

Comparação entre o processo topográfico e a monorestituição apoiada com laser scanner terrestre no levantamento arquitetônico de fachadas / Comparison between the topographic process and monorestitution supported by terrestrial laser scanning in the architectural survey of facades

Nascimento Junior, José Ozório do 31 January 2010 (has links)
With the advent of 3D laser scanning, three-dimensional data can be acquired in just a few minutes, allowing faster and more effective work. The terrestrial laser scanner is a new technology that does not require ambient light to acquire its data, which are represented as a point cloud. Photogrammetry uses photographs as its data source, employing a technique to extract shapes and dimensions from them. Topographic surveying, in turn, relies on trigonometry, with the total station serving as an instrument capable of storing and computing data as (X, Y, Z) coordinates to determine the position of the points observed in the field. This work gathered the information needed for a three-dimensional comparison between these indirect methods, which require no contact with the object: the topographic process using a total station, and monorestitution supported by a terrestrial laser scanner. The experiment was carried out on the Meteorology building on the campus of the Universidade Federal do Paraná (UFPR), in Curitiba, Brazil.
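The conversion a total station performs — from an observed slope distance, zenith angle and azimuth to local (X, Y, Z) coordinates — can be sketched as follows; the axis convention (X east, Y north, Z up) is an illustrative assumption.

```python
import math

def polar_to_xyz(slope_dist, zenith_deg, azimuth_deg):
    """Convert a total-station observation (slope distance, zenith angle
    measured from the vertical, azimuth measured from north) into local
    Cartesian coordinates (X east, Y north, Z up)."""
    z = math.radians(zenith_deg)
    az = math.radians(azimuth_deg)
    horiz = slope_dist * math.sin(z)        # horizontal distance
    return (horiz * math.sin(az),           # X (east)
            horiz * math.cos(az),           # Y (north)
            slope_dist * math.cos(z))       # Z (up)
```

A horizontal sight (zenith 90°) yields Z = 0, while a vertical sight (zenith 0°) puts the full slope distance into Z.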
68

Desenvolvimento de um software para modelagem de tomógrafos por emissão de pósitrons / Development of software for modelling positron emission tomography scanners

Vieira, Igor Fagner 31 January 2013 (has links)
CRCN-NE, CNEN and FACEPE / There is a growing tendency in the scientific community, and even within large companies in the medical field, to use radiation-transport codes to validate experimental results or to design new experiments and/or equipment. In this work, a method for modelling positron emission tomography scanners using GATE (Geant4 Application for Tomographic Emission) was proposed and initially validated. GATE is an internationally recognised platform used to develop computational exposure models in nuclear medicine, and it now also has dedicated modules for radiotherapy and computed tomography (CT) applications. GATE uses Monte Carlo (MC) methods and has its own scripting language. Writing the scripts to simulate a PET scanner in GATE involves a set of interlinked steps, and the accuracy of the simulation depends on the correct arrangement of the geometries involved, since the physical processes depend on them, as well as on the modelling of the detector electronics in the Digitizer module, for example. 
Performing this setup manually can be a source of errors, especially for users with no experience in simulation or prior familiarity with a programming language, and the whole modelling process in GATE remains tied to the LINUX/UNIX terminal, an environment familiar to only a few. This is an obstacle for beginners and prevents GATE from being used by a wider range of users interested in optimising their experiments and/or clinical protocols in a more accessible, fast and friendly way. The goal of this work is therefore to develop user-friendly software for modelling positron emission tomography scanners, called GUIGATE (Graphical User Interface for GATE), with specific modules dedicated to quality control in PET scanners. The results show the resources available in GUIGATE, organised in a set of windows that allow users to create their input files, run and visualise their model in real time, and analyse their output files in a single environment, thus providing intuitive access to GATE's entire simulation architecture and to CERN's data analysis framework, ROOT.
69

An artificial intelligent system for oncological volumetric medical PET classification

Sharif, Mhd Saeed January 2013 (has links)
Positron emission tomography (PET) imaging is an emerging medical imaging modality. Due to its high sensitivity and ability to model physiological function, it is effective in identifying active regions that may be associated with different types of tumour. Increasing numbers of patient scans have led to an urgent need for new, efficient data-analysis systems that aid clinicians in diagnosing disease, save a substantial amount of processing time, and automatically detect small lesions. In this research, an automated intelligent system for oncological PET volume analysis has been developed. An experimental NEMA (National Electrical Manufacturers Association) IEC (International Electrotechnical Commission) body-phantom data set, a Zubal anthropomorphic phantom data set with simulated tumours, a clinical data set from a patient with histologically proven non-small-cell lung cancer, and clinical data sets from seven patients with laryngeal squamous cell carcinoma were utilised in this research. The initial stage of the developed system involves different thresholding approaches and transforms the processed volumes into the wavelet domain at different levels of decomposition using the Haar wavelet transform. A K-means approach is also deployed to classify the processed volume into a distinct number of classes; the optimal number of classes for each processed data set is obtained automatically based on the Bayesian information criterion. The second stage of the system involves artificial-intelligence approaches including a feedforward neural network, an adaptive neuro-fuzzy inference system, a self-organising map, and fuzzy C-means; the best neural-network design for the PET application has been thoroughly investigated. All the proposed classifiers have been evaluated and tested on the experimental, simulated and clinical data sets. 
The final stage of the developed system includes a new, optimised committee machine for PET application and tumour classification. Objective and subjective evaluations have been carried out on all the system outputs; they show promising results for classifying patient lesions. The results of the new approach have been compared with all of the results obtained from the investigated classifiers and the developed committee machines, and superior results have been achieved with the new approach: an accuracy of 99.95% for the clinical data set of the patient with histologically proven lung tumour, and an average accuracy of 98.11% for the clinical data sets of the seven patients with laryngeal tumour.
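The model-selection idea in the first stage — run K-means, then let the Bayesian information criterion pick the number of classes — can be illustrated with a toy one-dimensional sketch. The spherical-Gaussian BIC form and all function names here are illustrative assumptions, not the thesis's implementation.

```python
import math
import random

def kmeans_1d(data, k, iters=50, seed=0):
    """Plain 1-D k-means: returns centroids and the grouped data points."""
    rng = random.Random(seed)
    centroids = rng.sample(data, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:
            # assign each point to its nearest centroid
            groups[min(range(k), key=lambda j: abs(x - centroids[j]))].append(x)
        centroids = [sum(g) / len(g) if g else centroids[j]
                     for j, g in enumerate(groups)]
    return centroids, groups

def bic(groups, n):
    """BIC for a spherical (shared-variance) Gaussian mixture fit; lower is
    better.  The k*ln(n) term penalises adding clusters."""
    k = len(groups)
    rss = sum((x - sum(g) / len(g)) ** 2 for g in groups if g for x in g)
    var = max(rss / n, 1e-12)
    return n * math.log(var) + k * math.log(n)
```

On well-separated data, the two-cluster solution scores a lower (better) BIC than lumping everything into one class, which is the automatic class-number choice described above.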
70

Vizualizace dat z 3D laserového skeneru / Tools for visualisation of data from 3D laser scanners

Střižík, Jakub January 2013 (has links)
This master's thesis deals with visualising data measured by a 3D laser scanner using the point-cloud method. After loading, the measured data were parameterised for use in the Microsoft Visual Studio 2010 programming environment and the XNA platform. Each data point forms the centre of a defined cube; the cubes are displayed to create a scene through which the user can move via keyboard or mouse input. The algorithms were analysed to determine the overall running speed of the program, as well as that of individual and critical sections, and were then optimised for higher speed on the basis of the analysed data. The optimisation focused on the selection of the retrieved data and on the way they are stored within the program. A further optimisation used a different method for displaying the measured points: each point is drawn as a square 2D texture that replaces the cube, with the square rotating to follow the observer's movement. The proposed algorithm optimisations lead to a faster-running program.
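The billboarding technique at the end of the abstract — replacing each cube with a camera-facing square — amounts to computing, per point, a quad whose right/up axes are derived from the view direction. A minimal sketch (the fixed world-up vector and function names are assumptions; in XNA this would be done with the framework's billboard matrix helpers):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def billboard_corners(point, camera, size=0.1, world_up=(0.0, 1.0, 0.0)):
    """Four corners of a square centred on `point` that faces `camera` --
    the 2D texture that replaces a cube for one point-cloud sample.
    Degenerates if the view direction is parallel to `world_up`."""
    view = normalize(tuple(p - c for p, c in zip(point, camera)))
    right = normalize(cross(world_up, view))
    up = cross(view, right)
    h = size / 2.0
    return [tuple(p + sx * h * r + sy * h * u
                  for p, r, u in zip(point, right, up))
            for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1))]
```

Recomputing the corners each frame from the current camera position is what makes the square appear to rotate with the observer.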
