About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Using measured photography to obtain optimal results from CCD color scanners /

Milburn, David L. January 1993 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 1993. / Typescript. Includes bibliographical references (leaf [82]).
32

Creation of 3D models using a 3D scanner, and retrieval methods

Βλάχος, Απόστολος 09 March 2011 (has links)
This thesis explains how 3D scanners work and how they are used to create three-dimensional models, and proposes retrieval methods for existing 3D model databases.
33

An Approach for Building the Analysis Phases of a Compiler.

MÉLO, Daniel Gondim Ernesto de. 07 February 2018 (has links)
Previous issue date: 2014-06-20 / Capes / Compilers are programs that translate code written in one language, known as the source language, into a semantically equivalent program in another language, known as the target language. Some compilers translate between high-level languages, but in general the most widely used target language is machine language, or machine code. Several languages and tools have been proposed within this scope, e.g. Xtext, Stratego, CUP and ANTLR. Despite this variety, the existing frameworks for building compilers are difficult to understand and do not expose to the programmer several important structures, such as the symbol table and derivation trees. Additionally, each platform designed for this purpose carries many platform-specific details. Moreover, in most cases each framework focuses on and provides services for only one phase of a compiler; providing services for more than one phase often requires a general-purpose language, which increases the complexity for the compiler designer. In this context, we propose UCL (Unified Compiler Language), a domain-specific language for developing the analysis phases of compilers in a unified and platform-independent way. With UCL, the compiler designer can specify design decisions such as the choice of algorithms to be used and the type of scanner, among other features. This work was evaluated by conducting two surveys with students of the Compilers course at the Federal University of Campina Grande during their course projects, which consist of developing compilers.
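The analysis phases this abstract refers to (lexical and syntactic analysis) can be illustrated with a minimal sketch. This is not UCL itself, only a hypothetical toy expression language in Python: a regex-based scanner feeding a recursive-descent parser that builds a derivation tree.

```python
import re

# Token definitions for a hypothetical toy expression language
# (illustrative only; this is not the UCL notation from the thesis).
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("PLUS",   r"\+"),
    ("TIMES",  r"\*"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    """Lexical analysis: turn source text into (kind, lexeme) tokens.
    Characters matching no rule are silently skipped in this sketch."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(src) if m.lastgroup != "SKIP"]

def parse(tokens):
    """Syntactic analysis: recursive descent for the grammar
    expr -> term ('+' term)*, term -> factor ('*' factor)*,
    factor -> NUMBER | IDENT | '(' expr ')'.
    Returns a nested-tuple derivation tree."""
    pos = 0

    def peek():
        return tokens[pos][0] if pos < len(tokens) else None

    def eat(kind):
        nonlocal pos
        if peek() != kind:
            raise SyntaxError(f"expected {kind}, got {peek()}")
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        if peek() == "LPAREN":
            eat("LPAREN")
            node = expr()
            eat("RPAREN")
            return node
        return eat("NUMBER") if peek() == "NUMBER" else eat("IDENT")

    def term():
        node = factor()
        while peek() == "TIMES":
            eat("TIMES")
            node = ("*", node, factor())
        return node

    def expr():
        node = term()
        while peek() == "PLUS":
            eat("PLUS")
            node = ("+", node, term())
        return node

    tree = expr()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
    return tree

tree = parse(tokenize("a + 2 * (b + 3)"))
```

The tree makes the structures the abstract mentions (derivation trees, and the token stream a symbol table would be built from) explicit to the programmer rather than hiding them inside a framework.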
34

Why Johnny Still Can’t Pentest: A Comparative Analysis of Open-source Black-box Web Vulnerability Scanners

Khalil, Rana Fouad 19 December 2018 (has links)
Black-box web application vulnerability scanners are automated tools used to crawl a web application and look for vulnerabilities. These tools are typically used in one of two ways. In the first approach, scanners are used as point-and-shoot tools: a scanner is given only the root URL of an application and asked to scan the site. In the second approach, scanners are first configured to maximize crawling coverage and vulnerability detection accuracy. Although the performance of leading commercial scanners has been thoroughly studied, very little research has evaluated open-source scanners. This paper presents a feature and performance evaluation of five open-source scanners. We analyze crawling coverage, vulnerability detection accuracy, scanning speed, reporting and usability features. The scanners are tested against two well-known benchmarks, WIVET and WAVSEP, and additionally against a realistic web application called WackoPicko. The chosen benchmarks comprise a wide range of vulnerabilities and crawling challenges. Each scanner is tested in two modes: default and configured. Lastly, the scanners are compared with the state-of-the-art commercial scanner Burp Suite Professional. Our results show that properly crawling a web application is a critical part of detecting vulnerabilities. Unfortunately, the majority of the scanners evaluated had difficulty crawling through common web technologies such as dynamically generated JavaScript content and Flash applications. We also identified several classes of vulnerabilities that are not detected by the scanners. Furthermore, our results show that the scanners improved considerably when run in configured mode.
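The detection-accuracy comparison described in this abstract reduces to counting a scanner's true and false positives against a benchmark's known ground truth. A minimal sketch, with hypothetical page names and vulnerability classes (not the actual WAVSEP or WackoPicko data):

```python
def detection_metrics(reported, ground_truth):
    """Compare a scanner's reported findings against a benchmark's known
    ground truth. Both arguments are sets of (page, vuln_class) pairs.
    Returns counts plus the detection rate (recall)."""
    true_pos = reported & ground_truth
    false_pos = reported - ground_truth
    false_neg = ground_truth - reported
    recall = len(true_pos) / len(ground_truth) if ground_truth else 0.0
    return {
        "detected": len(true_pos),
        "missed": len(false_neg),
        "false_positives": len(false_pos),
        "detection_rate": recall,
    }

# Hypothetical results for one scanner run against a benchmark application.
truth = {("login.php", "SQLi"), ("search.php", "XSS"), ("upload.php", "LFI")}
found = {("login.php", "SQLi"), ("search.php", "XSS"), ("index.php", "XSS")}
m = detection_metrics(found, truth)
```

Running the same comparison in default and configured modes makes the paper's "considerable improvement when configured" claim directly measurable.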
35

A Comparison of Digital Intraoral Scanners and Alginate Impressions: Time & Patient Satisfaction

Burzynski, Jennifer Ann 16 June 2017 (has links)
No description available.
36

Advances in real-time optical scanning holography

Schilling, Bradley Wade 12 September 2009 (has links)
Real-time holography using an active optical heterodyne scanning technique for recording, with reconstruction based on an electron-beam-addressed spatial light modulator, has recently been studied and demonstrated. Advances in this area are presented in this thesis. For the first time, holograms of two-dimensional objects have been recorded and two-dimensional images reconstructed using this system. The ability to digitally store holograms recorded by this method has been added to the system; this capability increases the robustness of the overall system and allows digital processing of the holograms for improved reconstruction. Nonlinear digital processing for fringe contrast enhancement is demonstrated. The use of an intermediate display process had previously been identified as a major drawback of the real-time optical scanning holographic system. A digital frame memory is introduced into the system, eliminating the need for the intermediate display process and thus improving the system. The two systems are compared. / Master of Science
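The abstract does not specify which nonlinearity is used for fringe contrast enhancement; as an illustration only, a sigmoid intensity mapping that pushes mid-gray fringe samples toward the extremes while preserving the original intensity range can be sketched as:

```python
import math

def enhance_fringe_contrast(samples, k=10.0):
    """Illustrative nonlinear contrast stretch for a 1-D fringe scan
    (an assumed sigmoid mapping, not the thesis's actual processing).
    Normalizes samples to [0, 1], applies a steepness-k sigmoid centered
    at mid-gray, and rescales back to the original intensity range."""
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return list(samples)

    def sig(x):
        return 1.0 / (1.0 + math.exp(-k * (x - 0.5)))

    s0, s1 = sig(0.0), sig(1.0)  # renormalize so 0 -> 0 and 1 -> 1
    out = []
    for s in samples:
        norm = (s - lo) / (hi - lo)
        stretched = (sig(norm) - s0) / (s1 - s0)
        out.append(lo + (hi - lo) * stretched)
    return out

# Synthetic low-contrast sinusoidal fringe pattern (period 16 samples).
fringe = [0.5 + 0.1 * math.cos(2 * math.pi * x / 16) for x in range(64)]
enhanced = enhance_fringe_contrast(fringe, k=10.0)
```

Values near mid-gray are pushed toward the bright and dark extremes, which is the qualitative effect a fringe-contrast enhancement step aims for.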
37

Validity, reliability and reproducibility of digital models obtained with iTero (Align Technology) and Unitek TMP Digital (3M) in comparison with plaster models

Péloquin, Vincent-Claude 06 1900 (has links)
Objective: The primary objective of this study was to evaluate the validity, reliability and reproducibility of dental measurements obtained on digital models produced by iTero (Align Technology, San Jose, California) and by Unitek TMP Digital (3M, Monrovia, California) in comparison with those obtained on plaster models (the gold standard). The secondary objective was to compare two impression materials (alginate and polyvinylsiloxane, PVS) to determine whether the material used affects the accuracy of the measurements. Methods: The first part of the study involved Unitek and iTero digital models obtained from 25 pairs of plaster models randomly selected from the private practice of one of the co-authors. Alginate and PVS impressions were taken of the plaster models and scanned with the Unitek scanner; the same models were then scanned with the iTero scanner. The second part of the study compared iTero digital models (intraoral scans) with plaster models (alginate and PVS impressions) obtained from 25 patients of the orthodontic clinic of the Université de Montréal requiring orthodontic treatment. In both parts of the study, two authors took the following measurements on the different models: mesio-distal width of each tooth from first molar to first molar, arch perimeter, intermolar and intercanine distances, overbite and overjet. Bolton 6 and 12 ratios and excesses, and maxillary and mandibular space available and space required, were also calculated in order to determine space differentials. Results: Reliability (ICC) between the digital models (Unitek and iTero) and the plaster models was good to excellent for all measurements [ICC=0.762–0.998], and reliability between the two impression materials was excellent [ICC=0.947–0.996]. In both parts of the study, measurements on the iTero models were generally larger than those on the plaster models. The largest mean differences in the iTero-plaster comparison were found for maxillary and mandibular space available (systematically larger for that variable): 2.24 mm and 2.02 mm respectively in the first part of the study, and 1.17 mm and 1.39 mm respectively in the second part. The differences were considered clinically insignificant for all variables. Intra-examiner reproducibility was good to excellent for the plaster and digital models, except for the mandibular space differential on the Unitek models [ICC=0.690–0.692]. Inter-examiner reproducibility was good to excellent for the plaster and digital models in both parts of the study, but fair to moderate for the Unitek models for the Bolton 6 and 12 analyses and the maxillary and mandibular space differentials [ICC=0.362–0.548]. Conclusions: The accuracy and reliability of dental measurements made on Unitek and iTero digital models were clinically acceptable and reproducible compared with those made on traditional plaster models. The choice between alginate and PVS impression material did not affect the accuracy of the measurements. This study suggests that Unitek and iTero digital models, used with their respective software, are a reliable and reproducible alternative to plaster models for orthodontic cast analysis and diagnosis.
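The reliability figures in this record are intraclass correlation coefficients. The abstract does not state which ICC form was used; assuming a two-way random-effects, absolute-agreement, single-measurement model (ICC(2,1) in the Shrout-Fleiss notation), the computation can be sketched as:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement. `data` is a list of rows, one per subject (e.g. one
    tooth-width measurement), one column per method or rater."""
    n = len(data)     # subjects
    k = len(data[0])  # raters / measurement methods
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between methods
    sst = sum((x - grand) ** 2 for row in data for x in row)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))        # residual error
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical mesio-distal widths (mm): plaster vs. digital measurements
# (made-up values, not the study's data).
widths = [
    [8.5, 8.6],
    [7.1, 7.2],
    [9.8, 9.9],
    [6.4, 6.4],
    [8.0, 8.1],
]
icc = icc_2_1(widths)
```

A small, consistent offset between methods (as with the systematically larger iTero values) still yields an ICC near 1 when between-subject variation dominates, which is why such offsets can coexist with "excellent" reliability.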
38

Study of rock joint roughness using 3D laser scanning technique

Tam, Chung-yan, Candy, 譚頌欣. January 2008 (has links)
Published or final version. / Civil Engineering / Master of Philosophy
39

A METHOD FOR THE DETECTION OF FOCUS ERRORS.

Towner, David Kenney. January 1982 (has links)
No description available.
40

Investigations on the digitization of mammographic images: standardization of image quality and its effect on the performance of processing schemes

Góis, Renata de Freitas 20 December 2010 (has links)
This work begins with an extensive investigation of the effect that the image digitization process has on an image-processing scheme in mammography. Since every processing step is based on the original digitized image, differences in equipment, technology, acquisition software and process characteristics produce different results when the same mammographic film is digitized. Consequently, all the steps from pre-processing to classification, and especially the segmentation of these images, can yield different results depending on how faithful the digital image is to the original mammogram. This research therefore presents a comparative evaluation of mammographic images acquired with several digitization systems, based on the effect this process has on the sensitivity of modules of a CAD (computer-aided diagnosis) scheme previously developed by our group. On this basis, a computational model is proposed to compensate for degradations introduced during digitization, aiming for greater uniformity of digital mammographic images regardless of the digitizer used. Tests with digital images produced by several different systems, ranging from common equipment with transparency adapters to sophisticated, high-cost laser scanners, using the digitization driver proposed here showed an increase in microcalcification detection sensitivity in all cases compared with using the originally digitized images without the model. The reduction in the false-positive rate was also significant (between 70% and 90%) under the same conditions. As a result, the present proposal makes it feasible for any radiology institution to apply processing schemes that aid the detection and/or diagnosis of suspicious structures in mammography, even when using less sophisticated (and therefore lower-cost) digitizers to produce the digital mammographic images, without loss of performance quality.
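The sensitivity gains and false-positive reduction reported in this abstract come down to simple ratios over detection counts. A sketch with hypothetical counts (illustrative only, not the thesis data):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of real lesions (e.g. microcalcification clusters)
    that the CAD scheme actually marks."""
    return true_pos / (true_pos + false_neg)

def fp_reduction(fp_before, fp_after):
    """Percentage drop in false-positive marks after applying a
    compensation model to the digitized images."""
    return 100.0 * (fp_before - fp_after) / fp_before

# Hypothetical counts before/after the digitization compensation model.
sens_before = sensitivity(true_pos=70, false_neg=30)
sens_after = sensitivity(true_pos=85, false_neg=15)
fp_drop = fp_reduction(fp_before=40, fp_after=8)
```

Reporting both numbers matters: a compensation step is only useful if it raises sensitivity without inflating the false-positive marks radiologists must review.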
