31

Iterative and scalable image super-resolution method with DCT interpolation and sparse representation

Saulo Roberto Sodré dos Reis 23 April 2014 (has links)
In scenarios where image and video acquisition devices have limited resources or the available images are of poor quality, super-resolution (SR) techniques offer an excellent alternative for improving image quality. This thesis proposes a single-image super-resolution method that combines the benefits of interpolation in the DCT transform domain with the efficiency of reconstruction methods based on sparse signal representation. The proposal seeks to take advantage of improvements already achieved in the quality and computational efficiency of existing SR algorithms, and introduces refinements in both the dictionary-training and image-reconstruction stages. In the training stage, a new feature-extraction step based on unsharp masking is used to build a new dictionary; this strategy extracts more structural information from the low- and high-resolution patches of the training set while reducing dictionary size. Another important contribution is an iterative and scalable process that reinserts the high-resolution image obtained in a first iteration into the training set and the reconstruction stage. This solution improves the quality of the final high-resolution image while using only a few training images. Computational simulations demonstrate the ability of the proposed method to produce high-quality images with reduced computational time.
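The unsharp-mask feature extraction described in this abstract can be illustrated with a short sketch in Python. The Gaussian blur, gain, patch size, and stride below are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=1.0, amount=1.5):
    """Sharpen with an unsharp mask: add back a scaled high-frequency residual."""
    image = image.astype(np.float64)
    blurred = gaussian_filter(image, sigma=sigma)
    return image + amount * (image - blurred)

def extract_patches(image, patch_size=8, stride=4):
    """Collect overlapping patches as rows of a matrix (one row per patch)."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(image[i:i + patch_size, j:j + patch_size].ravel())
    return np.array(patches)

# Example: feature matrix from a synthetic low-resolution image; in a
# sparse-representation pipeline this matrix would feed dictionary learning.
lr_image = np.random.rand(64, 64)
features = extract_patches(unsharp_mask(lr_image))
print(features.shape)  # (num_patches, patch_size * patch_size)
```

In a sparse-representation SR pipeline, a patch matrix of this kind would typically be the input to a dictionary-learning step such as K-SVD.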
32

Identifying Critical Regions for Robot Planning Using Convolutional Neural Networks

January 2019 (has links)
In this thesis, a new approach to learning-based planning is presented in which critical regions of an environment with low probability measure are learned from a given set of motion plans. Critical regions are learned using convolutional neural networks (CNNs) to improve sampling processes for motion planning (MP). In addition to an identification network, a new sampling-based motion planner, Learn and Link, is introduced. This planner leverages critical regions to overcome the limitations of uniform sampling while still maintaining the guarantees of correctness inherent to sampling-based algorithms. Learn and Link is evaluated against planners from the Open Motion Planning Library (OMPL) on an extensive suite of challenging navigation planning problems. This work shows that critical areas of an environment are learnable and can be used by Learn and Link to solve MP problems with far less planning time than existing sampling-based planners. / Master's Thesis, Computer Science, 2019
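As a rough illustration of how learned critical regions can bias a sampling-based planner, the sketch below mixes uniform workspace sampling with sampling inside critical regions. The axis-aligned box representation of regions and the mixing weight beta are assumptions for illustration; this is not the Learn and Link planner itself.

```python
import random

def biased_sample(critical_regions, workspace_bounds, beta=0.5):
    """Draw a 2D sample: with probability beta sample inside a learned
    critical region, otherwise sample uniformly over the workspace.

    critical_regions: list of axis-aligned boxes ((xmin, ymin), (xmax, ymax))
    workspace_bounds: ((xmin, ymin), (xmax, ymax)) for the whole environment
    """
    if critical_regions and random.random() < beta:
        (xmin, ymin), (xmax, ymax) = random.choice(critical_regions)
    else:
        (xmin, ymin), (xmax, ymax) = workspace_bounds
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))

# Example: two critical regions (e.g. narrow passages) inside a 10x10 workspace.
regions = [((2.0, 2.0), (3.0, 3.0)), ((7.0, 5.0), (8.0, 6.0))]
bounds = ((0.0, 0.0), (10.0, 10.0))
samples = [biased_sample(regions, bounds) for _ in range(5)]
print(samples)
```

A sampler of this form can be dropped into an RRT- or PRM-style planner in place of the uniform sampler, which is the general mechanism by which learned regions reduce planning time.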
33

The professor's role in students' knowledge construction during the tutorial process in the UESB medical program: the tutor-professor's view.

Pinheiro, Carla Cristiane de Oliveira January 2009 (has links)
This work reports how tutor-professors in the medical program of the Universidade Estadual do Sudoeste da Bahia (UESB) understand students' knowledge construction during the tutorial process in the Problem-Based Learning (PBL) methodology adopted by the program. To this end, the investigative resources of qualitative research were used, with open interviews. Analysis of the professors' statements during the interviews revealed elements of their understanding of knowledge construction during the tutorial process in the UESB medical program. Their considerations regarding knowledge construction, assessment, and the difficulties encountered during the tutorial process were examined, and an attempt was made to relate these aspects to knowledge construction with reference to Ausubel's meaningful learning, Paulo Freire's transformative education, and Piaget's constructivism. / Salvador
34

The medical professor's experience with active teaching-learning methods: continuing education and management as mediators

Pio, Danielle Abdel Massih. January 2017 (has links)
Advisor: Silvia Cristina Mangini Bocchi / Abstract: Introduction: Medical schools, as settings that operationalize the curricular guidelines for medical programs, demand responsible and reflective interactions among professors, students, and administrators regarding educational practice. From this perspective, all are actors in continuous formation, with professor and student as protagonists of an innovative and transformative teaching-learning process. Objectives: To understand the medical professor's experience with the professional education of medical students in the 1st and 2nd years and in the 5th and 6th years of the program, and to develop a theoretical model representing that experience. Method: Qualitative research conducted at the Faculdade de Medicina de Marília (Famema) with medical professors active in the undergraduate program, the inclusion criterion for the study. Only professors from the first two and the last two years of the program were considered for data collection; these are distinct settings dominated, respectively, by primary care and hospital care, the former characterized by the integration of the medicine and nursing programs in the professional practice setting. Theoretical saturation was reached at the 19th interview, following the steps of Grounded Theory. Results: The identified categories and the theoretical relations among the actions and interactions that make up the medical professor's experience unfold into four subprocesses: Approaching: becoming a professor in methods... (full abstract: see electronic access below) / Doctorate
35

Semi-supervised classification based on disagreement by similarity

Victor Antonio Laguna Gutiérrez 03 May 2010 (has links)
Semi-supervised learning is a machine learning paradigm in which the hypothesis is induced using both labeled and unlabeled data. This paradigm is particularly useful when labeled examples are scarce and manual labeling is costly. In this context, the Cotraining algorithm was proposed; it is widely used in semi-supervised settings, especially when two independent views of the data are available. In most real-world scenarios, however, the multi-view assumption is highly restrictive and limits the algorithm's applicability. In this work, we propose the Co2KNN algorithm, a single-view version of Cotraining that, instead of combining two views of the data, combines two different strategies for inducing classifiers from the same view: local and global k-nearest neighbors (KNN). In the global KNN, the neighborhood used to predict the label of an unlabeled example consists of the training examples that contain the new example among their own k nearest neighbors. The local KNN, in contrast, uses the traditional KNN strategy to retrieve the neighborhood of a new example. The theory of semi-supervised learning by disagreement provides the theoretical foundation for Co2KNN, since it argues that Cotraining succeeds as long as the classifiers maintain a degree of disagreement sufficient to support joint learning. Experiments suggest that Co2KNN outperforms several state-of-the-art algorithms, particularly in single-view domains. Additionally, we propose an optimized algorithm that reduces the computational complexity of the global KNN, allowing Co2KNN to be used in real classification problems.
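The two neighborhood definitions that Co2KNN combines can be made concrete with a small sketch: the local view is the traditional KNN neighborhood, while the global view collects the training examples that would count the new example among their own k nearest neighbors. The toy data and Euclidean distance below are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def local_knn_neighbors(X_train, x_new, k=3):
    """Traditional KNN view: indices of the k training points closest to x_new."""
    dists = np.linalg.norm(X_train - x_new, axis=1)
    return np.argsort(dists)[:k].tolist()

def global_knn_neighbors(X_train, x_new, k=3):
    """'Global' KNN view: indices of training points that would include x_new
    among their own k nearest neighbors (a reverse nearest-neighbor relation)."""
    neighbors = []
    for i, x in enumerate(X_train):
        pool = np.vstack([np.delete(X_train, i, axis=0), x_new])  # candidates seen from x
        dists = np.linalg.norm(pool - x, axis=1)
        if (len(pool) - 1) in np.argsort(dists)[:k]:  # last row of pool is x_new
            neighbors.append(i)
    return neighbors

# Example on a toy 2D dataset.
X = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [5.0, 5.0]])
x = np.array([0.2, 0.2])
print(local_knn_neighbors(X, x, k=2))   # the two closest training points
print(global_knn_neighbors(X, x, k=2))  # points whose own 2-NN set contains x
```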
36

Analyzing symbols in architectural floor plans via traditional computer vision and deep learning approaches

Rezvanifar, Alireza 13 December 2021 (has links)
Architectural floor plans are scale-accurate 2D drawings of one level of a building, seen from above, which convey structural and semantic information related to rooms, walls, symbols, textual data, etc. They consist of lines, curves, symbols, and textual markings, showing the relationships between rooms and all physical features required for the proper construction or renovation of the building. First, this thesis provides a thorough study of the state of the art in symbol spotting methods for architectural drawings, an application domain that provides the document image analysis and graphics recognition communities with an interesting set of challenges, linked to the sheer complexity and density of embedded information, that have yet to be resolved. Second, we propose a hybrid method that capitalizes on the strengths of both vector-based and pixel-based symbol spotting techniques. In the description phase, the salient geometric constituents of a symbol are extracted by a variety of vectorization techniques, including a proposed voting-based algorithm for finding partial ellipses. This enables us to better handle local shape irregularities and boundary discontinuities, as well as partial occlusion and overlap. In the matching phase, the spatial relationship between the geometric primitives is encoded via a primitive-aware proximity graph. A statistical approach is then used to rapidly yield a coarse localization of symbols within the plan. Localization is further refined with a pixel-based step implementing a modified cross-correlation function. Experimental results on the public SESYD synthetic dataset and real-world images demonstrate that our approach clearly outperforms other popular symbol spotting approaches. Traditional on-the-fly symbol spotting methods are unable to address the semantic challenge of graphical notation variability, i.e. low intra-class symbol similarity, an issue that is particularly important in architectural floor plan analysis. The presence of occlusion and clutter, characteristic of real-world plans, along with graphical symbol complexity varying from almost trivial to highly complex, also poses challenges to existing spotting methods. Third, we address all the above issues by leveraging recent advances in deep learning-based neural networks and adapting an object detection framework based on the YOLO (You Only Look Once) architecture. We propose a training strategy based on tiles, avoiding many issues particular to deep learning-based object detection networks related to the relatively small size of symbols compared to entire floor plans, aspect ratios, and data augmentation. Experimental results demonstrate that our method successfully detects architectural symbols with low intra-class similarity and of variable graphical complexity, even in the presence of heavy occlusion and clutter. / Graduate
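The tile-based training strategy mentioned for the deep-learning detector, which splits very large plan rasters into overlapping tiles so that small symbols keep a workable relative size, can be sketched as follows; the tile size and overlap below are illustrative assumptions rather than the thesis's settings.

```python
import numpy as np

def tile_image(plan, tile=1024, overlap=256):
    """Split a large floor-plan raster into overlapping square tiles.
    Returns (tile_array, (row_offset, col_offset)) pairs so detections
    can later be mapped back to plan coordinates."""
    step = tile - overlap
    h, w = plan.shape[:2]
    tiles = []
    for top in range(0, max(h - overlap, 1), step):
        for left in range(0, max(w - overlap, 1), step):
            bottom, right = min(top + tile, h), min(left + tile, w)
            tiles.append((plan[top:bottom, left:right], (top, left)))
    return tiles

# Example: a synthetic 3000 x 4000 single-channel plan raster.
plan = np.zeros((3000, 4000), dtype=np.uint8)
tiles = tile_image(plan)
print(len(tiles), tiles[0][0].shape)
```

Each tile would be annotated and fed to the detector independently, and per-tile detections merged back into plan coordinates with the stored offsets.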
37

Machine learning based image classification of electronic components

Goobar, Leonard January 2013 (has links)
Micronic Mydata AB develops and builds machines for mounting electronic components onto PCBs, i.e. pick-and-place (PnP) machines. Before being mounted, the components are located and inspected optically to ensure that they are intact and picked correctly. Possible errors include a component being picked sideways, picked vertically, or not picked at all. The current vision system computes parameters such as length, width, and contrast. The project investigates and tests machine learning approaches that enable automatic classification of the picking errors that can occur in the machine. The approaches should also reduce the number of defective components that are mounted, as well as the number of components that are falsely rejected. A large database of manually classified components, together with their computed parameters and images, is available and can be used as training data for the machine learning approaches under investigation. The project also examines how machine learning approaches can be applied in mechatronic products in general, taking into account limitations such as real-time constraints. Four machine learning approaches have been evaluated and verified against a test set on which the current implementation performs very well. Both the currently computed parameters and an alternative approach that extracts SIFT descriptors from the raw images have been used as inputs. The current parameters can be used with an ANN or an SVM to achieve results that reduce the number of defective mounted components by up to 64%, meaning these defects can be reduced without upgrading the current vision algorithms. By using SIFT descriptors with an ANN or an SVM, the more common error classes can be classified with accuracies of up to approximately 97%, greatly exceeding the results achieved with the currently computed parameters.
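A minimal sketch of the kind of pipeline the abstract describes, SIFT descriptors from component images fed to an SVM, is shown below. It assumes OpenCV's SIFT implementation and scikit-learn, uses mean-pooled descriptors as a simplification, and illustrates the general approach rather than Micronic Mydata's actual system.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def sift_feature_vector(gray_image):
    """Mean-pool SIFT descriptors into a fixed-length 128-d vector
    (a simplification; a bag-of-words encoding is another common choice)."""
    _, descriptors = sift.detectAndCompute(gray_image, None)
    if descriptors is None:  # no keypoints found in the image
        return np.zeros(128, dtype=np.float32)
    return descriptors.mean(axis=0)

# Toy example: two classes of synthetic 64x64 grayscale "component" images.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = [0] * 10 + [1] * 10  # e.g. correctly picked vs. picked sideways
X = np.array([sift_feature_vector(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```

Swapping the SVC for a small multilayer perceptron would give the ANN variant discussed in the abstract.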
38

Digital Image Processing and Machine Learning Research: Digital Color Halftoning, Printed Image Artifact Detection and Quality Assessment, and Image Denoising

Yi Yang (12481647) 29 April 2022 (has links)
To begin with, we describe a project in which three screens, for the cyan, magenta, and yellow colorants, were designed jointly using the Direct Binary Search (DBS) algorithm. The screen set generated by the algorithm can be used to halftone color images easily and quickly. The halftoning results demonstrate that, by utilizing the screen sets, it is possible to obtain high-quality color halftone images while significantly reducing computational complexity.

Our next research focuses on defect detection and quality assessment of printed images. We measure and analyze macro-uniformity, banding, and color plane misregistration. For these three defects, we design separate pipelines and develop a series of digital image processing and computer vision algorithms to quantify and evaluate the printed image defects. Additionally, we conduct a human psychophysical experiment to collect perceptual assessments and use machine learning approaches to predict image quality scores based on human vision.

Finally, we study modern deep convolutional neural networks for image denoising and propose a network designed for AWGN image denoising. Our network removes the bias at each layer to gain the benefits of a scaling-invariant network, and it uses a mixed loss function to boost performance. We train and evaluate our denoising results using PSNR, SSIM, and LPIPS, and demonstrate that our results achieve impressive performance on both objective and subjective IQA assessments.
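The bias-free denoising network with a mixed loss described in the last paragraph can be outlined roughly as below, in PyTorch. The layer count, channel width, and the particular mix of MSE and L1 terms are assumptions, not the dissertation's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasFreeDnCNN(nn.Module):
    """A small DnCNN-style residual denoiser with every bias removed
    (bias=False), which keeps the network scaling-invariant."""
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1, bias=False),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1, bias=False)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)  # residual learning: the body predicts the noise

def mixed_loss(pred, target, alpha=0.5):
    """Illustrative mix of MSE and L1 terms."""
    return alpha * F.mse_loss(pred, target) + (1 - alpha) * F.l1_loss(pred, target)

# One training step on synthetic data with additive white Gaussian noise.
model = BiasFreeDnCNN()
clean = torch.rand(4, 1, 40, 40)
noisy = clean + (25.0 / 255.0) * torch.randn_like(clean)
loss = mixed_loss(model(noisy), clean)
loss.backward()
print(float(loss))
```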
39

A MULTI-FIDELITY MODELING AND EXPERIMENTAL TESTBED FOR TESTING & EVALUATION OF LEARNING-BASED SYSTEMS

Atharva Mahesh Sonanis (17123428) 10 October 2023 (has links)
Learning-based systems (LBS) have become essential in various domains, necessitating the development of testing and evaluation (T&E) procedures specifically tailored to address the unique characteristics and challenges of LBS. However, existing frameworks designed for traditional systems do not adequately capture the intricacies of LBS, including their evolving nature, complexity, and susceptibility to adversarial actions. This study advocates for a paradigm shift in T&E, proposing its integration throughout the entire life cycle of LBS, starting from the early stages of development and extending to operations and sustainment. The research objectives focus on exploring innovative approaches for designing LBS-specific T&E strategies, creating an experimental testbed with multi-fidelity modeling capabilities, investigating the optimal degree of test and evaluation required for LBS, and examining the impact of system knowledge access and the delicate balance between T&E activities and data/model rights. These objectives aim to overcome the challenges associated with LBS and contribute to the development of effective testing approaches that assess their capabilities and limitations throughout the life cycle. The proposed experimental testbed will provide a versatile environment for comprehensive testing and evaluation, enabling researchers and practitioners to assess LBS performance across varying levels of complexity. The findings from this study will contribute to the development of efficient testing strategies and practical approaches that strike a balance between thorough evaluation and data/model rights. Ultimately, the integration of continuous T&E insights throughout the life cycle of LBS aims to enhance the effectiveness and efficiency of capability delivery by enabling adjustments and improvements at each stage.
40

UBIQUITOUS HUMAN SENSING NETWORK FOR CONSTRUCTION HAZARD IDENTIFICATION USING WEARABLE EEG

Jungho Jeon (13149345) 25 July 2022 (has links)
Hazard identification is one of the most significant components of safety management at construction jobsites for preventing fatalities and injuries among construction workers. The current practice, which relies on a limited number of safety managers' manual and subjective inspections, and existing research efforts analyzing workers' physical and physiological signals have achieved limited success, leaving many hazards unidentified at the jobsites. Motivated by this critical need, this research aims to develop a human sensing network that allows for ubiquitous hazard identification in the construction workplace.

To attain this overarching goal, this research analyzes construction workers' collective EEG signals collected from wearable EEG sensors based on machine learning, virtual reality (VR), and advanced signal processing techniques. The three specific research objectives are: (1) establishing a relationship between EEG signals and the existence of construction hazards, (2) identifying correlations between EEG signals/physiological states (e.g., emotion) and different hazard types, and (3) developing an integrated platform for real-time construction hazard mapping and comparing the results obtained in VR and real-world experimental settings.

Specifically, the first objective establishes the relationship by investigating the feasibility of identifying construction hazards using a binary EEG classifier developed in VR, which can capture EEG signals associated with perceived hazards. In the second objective, correlations are discovered by testing the feasibility of differentiating construction hazard types based on a multi-class classifier constructed in VR. In the first and second objectives, the complex relationships are also analyzed in terms of brain dynamics and EEG signal components. In the third objective, the platform is developed by fusing EEG signals with heterogeneous data (e.g., location), and the discrepancies between VR and real-world environments are quantitatively assessed in terms of hazard identification performance and human behavioral responses.

The primary outcome of this research is that the proposed approach can be applied to actual construction jobsites and used to detect potential hazards, which has been challenging to achieve with current practice and existing research efforts. The human cognitive mechanisms revealed in this research also contribute new neurocognitive knowledge about construction workers' hazard perception. As a result, this research contributes to enhancing current hazard identification capability and improving construction workers' safety and health.
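As a rough illustration of the first objective, a binary classifier over EEG features associated with perceived hazards, the sketch below uses band-power features and logistic regression; the band definitions, window length, and synthetic labels are assumptions and not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz

def band_powers(window, fs=256):
    """Average power in a few canonical EEG bands for each channel,
    flattened into one feature vector (window: channels x samples)."""
    freqs, psd = welch(window, fs=fs, nperseg=min(256, window.shape[1]))
    feats = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs < high)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

# Toy example: 2-second, 8-channel windows labeled hazard (1) / no hazard (0).
rng = np.random.default_rng(1)
windows = rng.standard_normal((40, 8, 512))
labels = rng.integers(0, 2, 40)
X = np.array([band_powers(w) for w in windows])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))
```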
