About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

A New Segmentation Algorithm for Prostate Boundary Detection in 2D Ultrasound Images

Chiu, Bernard January 2003 (has links)
Prostate segmentation is a required step in determining the volume of a prostate, which is very important in the diagnosis and treatment of prostate cancer. In the past, radiologists manually segmented the two-dimensional cross-sectional ultrasound images. Typically, they need to outline at least a hundred cross-sectional images to obtain an accurate estimate of the prostate's volume, an approach that is very time-consuming. Accomplishing this task more efficiently requires an automated procedure. However, because of the quality of ultrasound images, it is very difficult to develop a computerized method for defining the boundary of an object in an ultrasound image. The goal of this thesis is to find an automated segmentation algorithm for detecting the boundary of the prostate in ultrasound images. As the first step in this endeavour, a semi-automatic segmentation method is designed. This method is only semi-automatic because it requires the user to enter four initialization points, the data required to define the initial contour. The discrete dynamic contour (DDC) algorithm is then used to update the contour automatically. The DDC model is made up of a set of connected vertices. When provided with an energy field that describes the features of the ultrasound image, the model automatically adjusts the vertices of the contour to attain a maximum energy. In the proposed algorithm, Mallat's dyadic wavelet transform is used to determine the energy field. Using the dyadic wavelet transform, approximation coefficients and detail coefficients at different scales can be generated. In particular, the two sets of detail coefficients represent the gradient of the smoothed ultrasound image. Since the gradient modulus is high at locations where edge features appear, it is assigned as the energy field used to drive the DDC model. The ultimate goal of this work is to develop a fully automatic segmentation algorithm. Since only the initialization stage requires human supervision in the proposed semi-automatic algorithm, the task of developing a fully automatic segmentation algorithm reduces to designing a fully automatic initialization process. Such a process is introduced in this thesis. In this work, the contours defined by the semi-automatic and the fully automatic segmentation algorithms are compared with the boundary outlined by an expert observer. Tested on 8 sample images, the mean absolute difference between the semi-automatically defined and the manually outlined boundary is less than 2.5 pixels, and that between the fully-automatically defined and the manually outlined boundary is less than 4 pixels. Automated segmentation tools that achieve this level of accuracy would be very useful in helping radiologists segment the prostate boundary much more efficiently.
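As a rough illustration of the contour-update idea described in this abstract, the sketch below moves each vertex of a contour one step up the local gradient of an edge-energy field. It is a minimal sketch under stated assumptions, not the thesis's implementation: Gaussian-derivative filters stand in for Mallat's dyadic wavelet transform, and the names (energy_field, ddc_step) are invented for the example.

```python
import numpy as np
from scipy import ndimage

def energy_field(image, sigma=2.0):
    # Stand-in for the dyadic wavelet transform: Gaussian-derivative
    # filters give the two detail (gradient) channels of the smoothed
    # image; their modulus is large along edges.
    gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))  # d/dx
    gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))  # d/dy
    return np.hypot(gx, gy)

def ddc_step(vertices, energy, step=1.0):
    # Move each contour vertex one step up the local energy gradient,
    # so the contour drifts toward high-gradient (edge) locations.
    ey, ex = np.gradient(energy)  # derivatives along rows, columns
    moved = []
    for y, x in vertices:
        iy = int(np.clip(round(y), 0, energy.shape[0] - 1))
        ix = int(np.clip(round(x), 0, energy.shape[1] - 1))
        d = np.array([ey[iy, ix], ex[iy, ix]])
        n = np.linalg.norm(d)
        if n > 1e-12:
            d = d / n
        moved.append((y + step * d[0], x + step * d[1]))
    return moved
```

Iterating ddc_step until the vertices stop moving gives the flavour of how a DDC driven by a gradient-modulus field settles onto an edge.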
42

Étude sur l'équivalence de termes extraits automatiquement d'un corpus parallèle : contribution à l'extraction terminologique bilingue / A study of the equivalence of terms automatically extracted from a parallel corpus: a contribution to bilingual terminology extraction

Le Serrec, Annaïch January 2008 (has links)
Mémoire numérisé par la Division de la gestion de documents et des archives de l'Université de Montréal / Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal
43

Automated Reasoning Support for Invasive Interactive Parallelization

Moshir Moghaddam, Kianosh January 2012 (has links)
To parallelize a sequential source code, a parallelization strategy must be defined that transforms the sequential source code into an equivalent parallel version. Since parallelizing compilers can sometimes transform sequential loops and other well-structured codes into parallel ones automatically, we are interested in a solution for semi-automatically parallelizing the codes that compilers cannot parallelize automatically, mostly because of weaknesses in classical data and control dependence analysis, in order to simplify the transformation process for programmers. Invasive Interactive Parallelization (IIP) hypothesizes that an intelligent system that guides the user through an interactive process can boost parallelization in this direction. The intelligent system's guidance relies on classical code analysis and pre-defined parallelizing transformation sequences. To support its main hypothesis, IIP suggests encoding parallelizing transformation sequences as IIP parallelization strategies that dictate default ways to parallelize various code patterns, using facts obtained both from classical source code analysis and directly from the user. In this project, we investigate how automated reasoning can support the IIP method in parallelizing a sequential code with acceptable performance, but faster than manual parallelization. We have looked at two special problem areas: divide-and-conquer algorithms and loops in the source codes. Our focus is on parallelizing four sequential legacy C programs (Quicksort, Merge sort, the Jacobi method, and matrix multiplication and summation) for both OpenMP and MPI environments, by developing an interactive parallelization assistance tool that provides users with the assistance needed to parallelize a sequential source code.
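The thesis itself targets C code with OpenMP and MPI; purely to illustrate the divide-and-conquer transformation pattern such an assistant would apply, here is a minimal Python sketch (all names invented for this example) that turns a sequential merge sort into a chunked parallel version:

```python
from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    # Standard linear merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def merge_sort(data):
    # The original sequential divide-and-conquer algorithm.
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    return merge(merge_sort(data[:mid]), merge_sort(data[mid:]))

def parallel_merge_sort(data, workers=4):
    # The parallelizing transformation: split the input into one chunk
    # per worker, sort the chunks independently, then merge the results.
    if len(data) <= 1 or workers <= 1:
        return merge_sort(data)
    size = -(-len(data) // workers)  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sorted_chunks = list(pool.map(merge_sort, chunks))
    result = sorted_chunks[0]
    for chunk in sorted_chunks[1:]:
        result = merge(result, chunk)
    return result

if __name__ == "__main__":
    print(parallel_merge_sort([5, 3, 8, 1, 9, 2, 7, 4]))
```

Both versions compute the same result; the transformation only changes how the recursion tree is scheduled, which is exactly the kind of rewrite the interactive assistant is meant to propose.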
44

Recherche d'images par le contenu : application à la proposition de mots clés / Image search by content and keyword proposal

Zhou, Zhyiong 08 February 2018 (has links)
La recherche d'information dans des masses de données multimédia et l'indexation de ces grandes bases d'images par le contenu sont des problématiques très actuelles. Elles s'inscrivent dans un type de gestion de données qu'on appelle le Digital Asset Management (ou DAM) ; le DAM fait appel à des techniques de segmentation d'images et de classification de données. Nos principales contributions dans cette thèse peuvent se résumer en trois points : analyse des utilisations possibles des différentes méthodes d'extraction des caractéristiques locales en exploitant la technique de VLAD ; proposition d'une nouvelle méthode d'extraction de l'information relative à la couleur dominante dans une image ; comparaison des Machines à Vecteurs de Support (SVM - Support Vector Machine) à différents classifieurs pour la proposition de mots clés d'indexation. Ces contributions ont été testées et validées sur des données de synthèse et sur des données réelles. Nos méthodes ont alors été largement utilisées dans le système DAM ePhoto développé par la société EINDEN, qui a financé la thèse CIFRE dans le cadre de laquelle ce travail a été effectué. Les résultats sont encourageants et ouvrent de nouvelles perspectives de recherche. / The search for information in masses of multimedia data and the content-based indexing of these large image databases are very current problems. They belong to a type of data management called Digital Asset Management (DAM); DAM relies on image segmentation and data classification techniques. Our main contributions in this thesis can be summarized in three points: analysis of the possible uses of different local feature extraction methods exploiting the VLAD technique; proposal of a new method for extracting dominant colour information from an image; comparison of Support Vector Machines (SVM) with other classifiers for proposing indexing keywords. These contributions were tested and validated on synthetic data and on real data. Our methods were then widely used in the DAM ePhoto system developed by the company EINDEN, which funded the CIFRE thesis in which this work was carried out. The results are encouraging and open up new research perspectives.
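The abstract does not detail the proposed dominant-colour method; a common baseline for the task, and a reasonable mental model of it, is a small k-means over pixel colours, sketched below (the function name and parameters are assumptions for this example, not the thesis's method):

```python
import numpy as np

def dominant_colors(pixels, k=3, iters=10, seed=0):
    # Tiny k-means over RGB pixels: the centres of the most populous
    # clusters serve as the image's dominant colours.
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then recentre.
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                centres[c] = members.mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    order = counts.argsort()[::-1]  # most populous cluster first
    return centres[order], counts[order]

# Example: a synthetic "image" that is mostly red with some blue.
img = np.vstack([np.tile([200.0, 30.0, 30.0], (80, 1)),
                 np.tile([20.0, 20.0, 180.0], (20, 1))])
colours, counts = dominant_colors(img, k=2)
print(counts)  # the red cluster should dominate
```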
45

[en] THE CREATION OF A SEMI-AUTOMATIC CLASSIFICATION MODEL USING GEOGRAPHIC KNOWLEDGE: A CASE STUDY IN THE NORTHERN PORTION OF THE TIJUCA MASSIF - RJ / [pt] A CRIAÇÃO DE UM MODELO DE CLASSIFICAÇÃO SEMI-AUTOMÁTICA UTILIZANDO CONHECIMENTO GEOGRÁFICO: UM ESTUDO DE CASO NA PORÇÃO SETENTRIONAL DO MACIÇO DA TIJUCA - RJ

RAFAEL DA SILVA NUNES 30 August 2018 (has links)
[pt] Os processos de transformação da paisagem são resultantes da interação de elementos (bióticos e abióticos) que compõem a superfície da Terra. Baseia-se, a partir de uma perspectiva holística, no inter-relacionamento de uma série de ações e objetos que confluem para que a paisagem seja percebida como um momento sintético da confluência de inúmeras temporalidades. Desta maneira, as geotecnologias passam a se constituir como um importante aparato técnico-científico para a interpretação desta realidade ao possibilitar novas e diferentes formas do ser humano interpretar a paisagem. Um dos produtos gerados a partir desta interpretação é a classificação de uso e cobertura do solo, que se configura como um instrumento central para a análise das dinâmicas territoriais. Desta maneira, o objetivo do presente trabalho é a elaboração de um modelo de classificação semi-automática baseada em conhecimento geográfico para o levantamento do padrão de uso e cobertura da paisagem a partir da utilização de imagens de satélite de alta resolução, tendo como recorte analítico uma área na porção setentrional do Maciço da Tijuca. O modelo baseado na análise de imagens baseada em objetos, quando confrontado com a classificação visual, culminou em um valor acima de 80 por cento de correspondência tanto para as imagens de 2010 quanto para as de 2009, apresentando valores bastante elevados também na comparação classe a classe. A elaboração do presente modelo contribuiu diretamente para a otimização da produção dos dados elaborados, contribuindo sobremaneira para a aceleração da interpretação das imagens analisadas, assim como para a minimização de erros ocasionados pela subjetividade atrelada ao próprio classificador. / [en] The transformation processes of the landscape result from the interaction of the (biotic and abiotic) elements that make up the Earth's surface. From a holistic perspective, this rests on the inter-relationship of a series of actions and objects that converge so that the landscape is perceived as a synthetic moment of the confluence of numerous temporalities. Thus, geotechnologies come to constitute an important technical and scientific apparatus for interpreting this reality by enabling new and different ways for humans to interpret the landscape. One of the products generated from this interpretation is land use and land cover classification, which serves as a central instrument for the analysis of territorial dynamics. The aim of this work is therefore the development of a semi-automatic classification model based on geographic knowledge for surveying land use and land cover patterns from high-resolution satellite images, taking as its analytical focus an area in the northern portion of the Tijuca Massif. The model, built on object-based image analysis, reached above 80 percent agreement with the visual classification for both the 2010 and 2009 images, with very high values in the class-by-class comparison as well. The development of this model directly contributed to optimizing data production, greatly accelerating the interpretation of the analyzed images and minimizing errors caused by the subjectivity of the classifier itself.
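To make the reported comparison concrete, here is a minimal sketch (invented names, not the study's code) of computing overall and per-class agreement between a reference classification, such as the visual one, and a semi-automatic one:

```python
import numpy as np

def agreement(reference, predicted):
    # Overall and per-class agreement between two label rasters,
    # e.g. a visual (reference) and a semi-automatic classification.
    reference, predicted = np.asarray(reference), np.asarray(predicted)
    overall = (reference == predicted).mean()
    per_class = {
        int(c): (predicted[reference == c] == c).mean()
        for c in np.unique(reference)
    }
    return overall, per_class

ref  = np.array([0, 0, 1, 1, 2, 2, 2, 0])
pred = np.array([0, 1, 1, 1, 2, 2, 0, 0])
print(agreement(ref, pred))  # 0.75 overall; per-class 2/3, 1.0, 2/3
```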
46

Geração semi-automática de extratores de dados da web considerando contextos fracos / Semi-automatic generation of web data extractors considering weak contexts

Oliveira, Daniel Pereira de 03 March 2006 (has links)
In the current days, the Internet has become the largest information repository available. However, this huge variety of information is mostly represented in textual format, and it necessarily requires human intervention to be used effectively. On the other hand, there exists a large set of Web pages that are in fact composed of collections of implicit data objects, for instance, on-line catalogs, digital libraries and e-commerce Web sites in general. Extracting the contents of these pages and identifying the structure of the data objects available allow for more sophisticated forms of processing besides hyperlink browsing and keyword-based searching. The task of extracting data from Web pages is usually executed by specialized programs called wrappers. In the present work we propose and evaluate a new approach to the wrapper development problem. In this approach, the user is only responsible for providing examples of the atomic items that constitute the objects of interest. Based on these examples, our method automatically generates expressions for extracting other atomic items similar to those presented as examples and infers a plausible and meaningful structure to organize them. Our method for generating extraction expressions uses techniques inherited from solutions to the multiple string alignment problem. The method is able to produce good extraction expressions that can be easily encoded as regular expressions. Inferring a meaningful structure for the objects whose atomic values were extracted is the task of the HotCycles algorithm, which was previously proposed and which we have revised and extended in this work. The algorithm assembles an adjacency graph for these atomic values and executes a structural analysis over this graph, looking for patterns that resemble structural constructs such as tuples and lists. From such constructs, a complex object type can be assigned to the extracted data. The experiments carried out using 21 collections of real Web pages demonstrated the feasibility of our extraction method, reaching 94% effectiveness using no more than 10 examples per attribute. The HotCycles algorithm was able to infer a meaningful structure for the objects present in all of the collections used. Its effectiveness, combined with our atom extraction method, reached 97% of structures correctly inferred, also using no more than 10 examples per attribute. The association of these two methods has proven extremely feasible. The high number of correctly inferred structures, together with the high precision and recall of the extraction process, shows that this new approach is indeed a promising one. / Hoje em dia a Web se apresenta como o maior repositório de informações da humanidade. Contudo, essa imensa gama de informação é formada principalmente por conteúdo textual e necessariamente requer interpretação humana para se tornar útil. Por outro lado, existe uma grande quantidade de páginas na Web que são, na verdade, formadas por um conjunto implícito de objetos. Isso ocorre, por exemplo, em páginas oriundas de sites de catálogos on-line, bibliotecas digitais e comércio eletrônico em geral.
A extração desse conteúdo e a identificação da estrutura dos objetos disponíveis permitem uma forma mais sofisticada de processamento além da tradicional navegação por hiperlinks e consultas por palavras-chave. A tarefa de extrair dados de páginas Web é executada por programas chamados extratores ou wrappers. Neste trabalho propomos uma nova abordagem para o desenvolvimento de extratores. Nessa abordagem o usuário se restringe a fornecer exemplos de treinamento para os atributos que constituem os objetos de interesse. Baseado nesses exemplos, são gerados automaticamente padrões para extrair dados inseridos em contextos similares àqueles fornecidos como exemplos. Em seguida, esses dados são automaticamente organizados segundo uma estrutura plausível. Nosso método de geração de padrões de extração utiliza técnicas herdadas de soluções para o problema do alinhamento múltiplo de seqüências. O método é capaz de produzir padrões de extração que podem ser facilmente transformados em expressões regulares. A tarefa de inferir uma estrutura plausível para os objetos extraídos é realizada pelo algoritmo HotCycles, que foi previamente proposto e que foi revisto e ampliado neste trabalho. O algoritmo constrói um grafo de adjacências para esses dados e realiza nele uma análise estrutural em busca de padrões que indiquem construtores estruturais como tuplas e listas. A partir de tais construtores, é associado um tipo aninhado aos dados que foram extraídos da página. Experimentos realizados em 21 coleções de páginas reais da Web demonstram a viabilidade do método de extração de valores atômicos, obtendo um desempenho superior a 94% e utilizando no máximo 10 exemplos de treinamento por atributo. O algoritmo HotCycles foi capaz de inferir uma estrutura plausível para os objetos em todas as coleções utilizadas. Seu desempenho combinado com o método de extração de valores atômicos chegou a 97% de estruturas corretamente inferidas com a utilização também de até 10 exemplos por atributo. A combinação desses dois métodos demonstrou-se extremamente viável. Os altos índices de estruturas corretamente inferidas juntamente com os elevados índices de precisão e revocação do processo de extração demonstram que esta é sem dúvida uma abordagem promissora.
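As a toy illustration of the extraction-pattern idea (deliberately much cruder than the thesis's multiple-string-alignment method), the sketch below generalizes a single training example into a regular expression by mapping characters to character classes and collapsing runs; all names are invented for the example:

```python
import re

def generalize(example):
    # Map each character of a training example to a character class,
    # then collapse runs of the same class into "+": a crude stand-in
    # for the alignment-based generalization described in the thesis.
    runs = []
    for ch in example:
        if ch.isdigit():
            cls = r"\d"
        elif ch.isalpha():
            cls = "[A-Za-z]"
        else:
            cls = re.escape(ch)
        if runs and runs[-1][0] == cls:
            runs[-1][1] += 1
        else:
            runs.append([cls, 1])
    return "".join(cls if n == 1 else cls + "+" for cls, n in runs)

pattern = generalize("R$ 19,90")          # a price given as an example
print(pattern)                            # a pattern like [A-Za-z]\$\ \d+,\d+
print(re.findall(pattern, "Preço: R$ 149,50 à vista"))  # ['R$ 149,50']
```

The derived pattern then extracts atoms that appear in contexts similar to the example, which is the role the generated extraction expressions play in the proposed wrappers.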
47

Pořízení a zpracování sbírky registračních značek vozidel / Obtaining and Processing of a Set of Vehicle License Plates

Kvapilová, Aneta January 2019 (has links)
This master's thesis focuses on creating and processing a dataset of semi-automatically processed images of vehicle licence plates. The main goal is to create videos and a set of tools that can transform input videos into a dataset for traffic-monitoring neural networks. The programming language used is Python, with the OpenCV graphics library and the PyTorch framework for the neural network implementation.
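One plausible piece of such a toolchain, sketched below purely as an illustration (the function and its parameters are assumptions, not the thesis's actual tools), is sampling still frames from an input video with OpenCV so they can later be annotated with licence-plate positions:

```python
import cv2  # OpenCV

def sample_frames(video_path, out_pattern="frame_{:05d}.png", every_n=30):
    # Save every n-th frame of the input video to disk; these stills
    # would then be annotated to build the training dataset.
    capture = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        index += 1
    capture.release()
    return saved

# e.g. sample_frames("traffic.mp4") keeps one frame per second at 30 fps
```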
48

Effektivisering av en halvautomatisk monteringsstation för en vattenblandare : Nya sekvenser för byte av komponenter / Streamlining a semi-automatic assembly station for a water mixer: New sequences for component replacement

Thorin, Elin January 2020 (has links)
Rapporten behandlar utveckling av en halvautomatisk monteringsstation för en elektronisk vattenblandare. Monteringsstationen roterar blandarhuset till vattenblandaren i olika positioner så att en operatör manuellt kan montera på komponenter på blandarhuset. Nya behov har uppmärksammats under tiden monteringsstationen varit i bruk och nya sekvenser för byte av redan monterade komponenter på en färdigmonterad produkt har tagits fram. Främst handlar behovet om att kunna byta en enskild komponent på en färdigmonterad produkt när det har visat sig att den monterade komponenten inte håller måttet i kontroller direkt efter monteringsprocessen. Kartläggning av aktuella sekvenser har gjorts genom observationer när monteringsstationen varit i bruk. För att få fram nya optimerade sekvenser har behovet av förbättringar kartlagts genom att fråga produktionstekniker och operatörer i monteringen om upplevda problem och önskemål på förändringar i monteringsstationen för att underlätta monteringsarbetet och öka motivationen. Byte av en blandventil/termostatinsats på en redan monterad vägghängd Tronic-blandare är ett av de största upplevda problemen vid monteringsstationen. En ny tidsoptimerad sekvens för byte av blandventil/termostatinsats i monteringsstationen har tagits fram som innebär att monteringsstationen kommer användas i 5 steg istället för i 10 steg som i gamla monteringssekvensen. Arbetet har även uppmärksammat vikten av att ha tillgång till aktuell programmering och dokumentation för maskiner/stationer som används i produktionen på företaget. / The report deals with the development of a semi-automatic assembly station for an electronic water mixer. The assembly station rotates the mixer housing into various positions so that an operator can manually assemble components onto the mixer housing. New needs have been identified while the assembly station has been in use, and new sequences for replacing already-assembled components on a finished product have been developed. The main need is to be able to change a single component on an already assembled mixer product when the component has been shown not to meet the quality standards in checks directly after the assembly process. Mapping of the current sequences has been done through observations while the assembly station has been in use. To obtain new optimized sequences, the need for improvements has been mapped by asking production technicians and assembly operators about perceived problems and desired changes in the assembly station, in order to ease the assembly work and increase motivation. Replacing a mixing valve/thermostat insert on an already assembled wall-mounted Tronic mixer is one of the biggest problems experienced at the assembly station. A new time-optimized sequence for replacing the mixing valve/thermostat insert in the assembly station has been developed; the new sequence completes the change in 5 steps instead of the 10 steps of the old sequence. The report also highlights the importance of having access to up-to-date programming and documentation for the machines/stations used in the production line at the company.
49

A Semi-Automatic Method for Intracortical Porosity Quantification With Application to Intraskeletal Variability

Cole, Mary Elizabeth 01 August 2014 (has links)
No description available.
50

Exploiting BioPortal as Background Knowledge in Ontology Alignment

Chen, Xi 11 August 2014 (has links)
No description available.