11

Visual Teach and Repeat Using Appearance-based Lidar - A Method For Planetary Exploration

McManus, Colin 14 December 2011 (has links)
Future missions to Mars will place heavy emphasis on scientific sample and return operations, which will require a rover to revisit sites of interest. Visual Teach and Repeat (VT&R) has proven to be an effective method to enable autonomous repeating of any previously driven route without a global positioning system. However, one of the major challenges in recognizing previously visited locations is lighting change, as this can drastically change the appearance of the scene. In an effort to achieve lighting invariance, this thesis details the design of a VT&R system that uses a laser scanner as the primary sensor. The key novelty is to apply appearance-based vision techniques traditionally used with camera systems to laser intensity images for motion estimation. Field tests were conducted in an outdoor environment over an entire diurnal cycle, covering more than 11km with an autonomy rate of 99.7% by distance.
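The key step in this approach — recognizing a previously visited place from the appearance of a laser intensity image, exactly as one would with a camera image — can be illustrated with a toy alignment sketch. Everything below is invented for illustration (pure Python, exhaustive search over a tiny synthetic image); the thesis's actual pipeline uses sparse visual features and a full motion estimator, not this brute-force comparison.

```python
# Toy appearance-based matching: recover the horizontal shift between a
# "teach" intensity image and a "repeat" intensity image by minimizing the
# sum of squared differences (SSD) over candidate shifts. Illustrative only.

def estimate_shift(teach, repeat, max_shift=3):
    """Return the column shift (in pixels) that best aligns repeat to teach."""
    rows, cols = len(teach), len(teach[0])
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        cost = 0
        for r in range(rows):
            for c in range(cols):
                # Compare only the columns that overlap under this shift.
                if 0 <= c + s < cols:
                    cost += (teach[r][c] - repeat[r][c + s]) ** 2
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

# A small synthetic intensity image and a copy shifted 2 pixels to the right.
teach = [[0, 0, 9, 9, 0, 0, 0],
         [0, 0, 9, 9, 0, 0, 0],
         [0, 0, 0, 9, 9, 0, 0]]
repeat = [[0, 0] + row[:-2] for row in teach]

shift = estimate_shift(teach, repeat)  # 2
```

In the real system the analogous step is sparse feature matching between the live intensity image and the stored "teach" images, which supplies correspondences for full 6-DOF motion estimation.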
12

Towards Interpretable Vision Systems

Zhang, Peng 06 December 2017 (has links)
Artificial intelligence (AI) systems are booming today and are used to solve new tasks or to improve performance on existing ones. However, most AI systems work in a black-box fashion, which prevents users from inspecting their inner modules. This leads to two major problems: (i) users have no idea when the underlying system will fail, so it can fail abruptly without warning or explanation, and (ii) users' limited understanding of the system can keep them from pushing AI progress beyond the state of the art. In this work, we address these problems along the following directions. First, we develop a failure prediction system that acts as an input filter: it raises a flag when the system is likely to fail on a given input. Second, we develop a portfolio computer vision system that predicts which of several candidate computer vision systems will perform best on a given input. Both systems have the benefit of looking only at the inputs, without running the underlying vision systems, and both are applicable to any vision system. By equipping different applications with such systems, we confirm the improved performance. Finally, instead of identifying errors, we develop more interpretable AI systems that expose their inner modules directly. We take two tasks as examples: semantic word matching and Visual Question Answering (VQA). In VQA, we start with binary questions on abstract scenes and then extend to all question types on real images. In both cases, we treat attention as an important intermediate output; by explicitly forcing the systems to attend to the correct regions, we improve their correctness. For semantic matching, we build a neural network that learns the matching directly instead of relying on relational similarity between words. Across all these directions, we show that by diagnosing errors and building more interpretable systems, we are able to improve the performance of current models. / Ph. D.
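The input-filter idea — predicting failure from the input alone, without running the underlying vision system — can be sketched as a cheap input feature plus a decision threshold. The feature, threshold, and data below are all invented for illustration; the thesis's actual predictor is a trained model, not a hand-set variance cutoff.

```python
# Toy failure predictor: flag inputs whose pixel variance is too low
# (e.g., severely under-exposed or featureless images), since many vision
# systems degrade on such inputs. The threshold is hand-set here; in
# practice it would be learned from inputs labeled as successes/failures.

def pixel_variance(image):
    """Variance of all pixel values in a 2D list-of-lists image."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def likely_to_fail(image, threshold=1.0):
    """Raise a flag (True) when the input looks too uninformative."""
    return pixel_variance(image) < threshold

flat_image = [[5, 5], [5, 5]]           # no structure at all
textured_image = [[0, 9], [9, 0]]       # strong contrast

flag_flat = likely_to_fail(flat_image)          # True: flag raised
flag_textured = likely_to_fail(textured_image)  # False: let it through
```

The portfolio system described in the abstract is the natural extension: instead of one binary flag, predict a score per candidate vision system from the same kind of input-only features and dispatch to the best one.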
13

GPU computing for cognitive robotics

Peniak, Martin January 2014 (has links)
This thesis presents the first investigation of the impact of GPU computing on cognitive robotics, through a series of novel experiments in action and language acquisition in humanoid robots and in computer vision. Cognitive robotics is concerned with endowing robots with high-level cognitive capabilities that enable them to achieve complex goals in complex environments. Reaching the ultimate goal of developing cognitive robots will require tremendous amounts of computational power, which until recently was provided mostly by standard CPU processors. CPU cores are optimised for serial code execution at the expense of parallel execution, which makes them relatively inefficient for high-performance computing applications. The ever-increasing market demand for high-performance, real-time 3D graphics has evolved the GPU into a highly parallel, multithreaded, many-core processor with extraordinary computational power and very high memory bandwidth. These vast computational resources of modern GPUs can now be exploited by most cognitive robotics models, as such models tend to be inherently parallel. Various interesting and insightful cognitive models have been developed to address important scientific questions concerning action-language acquisition and computer vision. While they have provided important scientific insights, their complexity and scope have not advanced much in recent years: the experimental tasks, as well as the scale of these models, are often minimised to avoid excessive training times, which grow rapidly with the number of neurons and the amount of training data. This impedes the development of the complex neurocontrollers that would take cognitive robotics research a step closer to the ultimate goal of creating intelligent machines.
This thesis presents several cases where applying GPU computing to cognitive robotics algorithms resulted in large-scale neurocontrollers of previously unseen complexity, enabling the novel experiments described herein.
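The reason neural-network-based neurocontrollers map so well onto GPUs can be seen in their core operation, the matrix–vector product: each output neuron's activation is independent of every other's, so each can be computed by its own GPU thread. The plain-Python sketch below shows that independent structure only; a real implementation would use a GPU framework (e.g., CUDA or OpenCL), and the weights here are invented.

```python
# Each output neuron computes a dot product of its weight row with the
# input vector. The per-row computations are independent of one another,
# which is exactly the structure a GPU exploits with one thread per neuron.

def layer_forward(weights, inputs):
    """Fully connected layer: one independent dot product per neuron."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

weights = [[1, 0, 2],    # neuron 0
           [0, 1, 0],    # neuron 1
           [1, 1, 1]]    # neuron 2
inputs = [3, 4, 5]

activations = layer_forward(weights, inputs)  # [13, 4, 12]
```

On a CPU the rows are computed one after another; on a GPU thousands of such rows run concurrently, which is what turns exponentially painful training runs into tractable ones for large neurocontrollers.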
14

Sistema de visão robótica para reconhecimento de contornos de componentes na aplicação de processos industriais

Foresti, Renan Luís January 2006 (has links)
This work presents the implementation of a robotic vision system that recognizes two-dimensional shapes and converts their contours into trajectories for an industrial manipulator. Images are acquired with a CCD camera over a specific capture area; the use of a webcam is also tested. The captured image is sent to a computer for processing in MATLAB, driven by a control-software routine written in VB.NET. Variations in contrast and resolution are analyzed with distinct objects, and the system identifies the pixels delimiting each object's contour using thresholding by Otsu's method and morphological algorithms. The position of each pixel is processed, transformed into Cartesian coordinates, and sent to the robotic manipulator's controller, which executes the trajectory, simulating an industrial process. Transmission to the manipulator controller is carried out with a special protocol, via the parallel port of a microcomputer to the digital-signal acquisition board of the manipulator controller. A simulated manufacturing-cell process, proposed to validate the system, identifies distinct objects arriving in random order on a conveyor belt.
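The processing chain described above — Otsu thresholding, contour-pixel extraction, and pixel-to-Cartesian conversion — can be sketched in miniature. This is pure Python on a tiny invented image; the thesis implements the pipeline in MATLAB on real camera frames, and the scale factor below is an assumption.

```python
# Minimal sketch of the pipeline: Otsu's threshold on a grey-level image,
# contour pixels found as foreground pixels with a background 4-neighbour,
# then conversion of pixel indices to Cartesian workspace coordinates.

def otsu_threshold(image, levels=256):
    """Grey level maximizing the between-class variance (Otsu's method)."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        lo = [p for p in pixels if p < t]
        hi = [p for p in pixels if p >= t]
        if not lo or not hi:
            continue
        w0, w1 = len(lo) / n, len(hi) / n
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def contour_pixels(image, t):
    """Foreground pixels that touch the background (4-connectivity)."""
    rows, cols = len(image), len(image[0])
    fg = [[image[r][c] >= t for c in range(cols)] for r in range(rows)]
    edge = []
    for r in range(rows):
        for c in range(cols):
            if fg[r][c]:
                nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if any(not (0 <= rr < rows and 0 <= cc < cols and fg[rr][cc])
                       for rr, cc in nbrs):
                    edge.append((r, c))
    return edge

def to_cartesian(pixel, scale_mm=2.0, origin_mm=(0.0, 0.0)):
    """Map a (row, col) pixel index to (x, y) in mm; scale is assumed."""
    r, c = pixel
    return (origin_mm[0] + c * scale_mm, origin_mm[1] + r * scale_mm)

image = [[10, 10, 10, 10],
         [10, 200, 200, 10],
         [10, 200, 200, 10],
         [10, 10, 10, 10]]

t = otsu_threshold(image)
contour = contour_pixels(image, t)
trajectory = [to_cartesian(p) for p in contour]
```

The trajectory list of (x, y) points is what, in the full system, would be streamed to the manipulator controller over the parallel-port protocol.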
17

Mitigation Of Motion Sickness Symptoms In 360 Degree Indirect Vision Systems

Quinn, Stephanie 01 January 2013 (has links)
The present research attempted to use display design as a means to mitigate the occurrence and severity of motion sickness symptoms and to increase performance through reduced “general effects” in an uncoupled motion environment. Specifically, several visual display manipulations of a 360° indirect vision system were implemented during a target detection task while participants were concurrently immersed in a motion simulator that mimicked off-road terrain completely separate from the target detection route. A multiple regression analysis determined that the Dual Banners display incorporating an artificial horizon (i.e., AH Dual Banners) and perceived attentional control significantly contributed to the total severity of motion sickness, as measured by the Simulator Sickness Questionnaire (SSQ); altogether, 33.6% (adjusted) of the variability in Total Severity was predicted by the variables in the model. Objective measures were assessed prior to, during, and after uncoupled motion. These included performance while immersed in the environment (i.e., target detection and situation awareness), as well as postural stability and cognitive and visual assessment tests (i.e., Grammatical Reasoning and Manikin) both before and after immersion. Response time on Grammatical Reasoning actually decreased after uncoupled motion; however, this was the only significant difference among the performance measures. Assessment of subjective workload (as measured by the NASA-TLX) determined that participants in Dual Banners display conditions reported significantly lower perceived physical demand than those with Completely Separated display designs. Further, perceived temporal demand was lower for participants exposed to conditions incorporating an artificial horizon.
Subjective sickness (SSQ Total Severity, Nausea, Oculomotor, and Disorientation) was evaluated using non-parametric tests, which confirmed that the AH Dual Banners display had significantly lower Total Severity scores than the Completely Separated display with no artificial horizon (i.e., NoAH Completely Separated). Oculomotor scores also differed significantly between these two conditions, with lower scores for AH Dual Banners. The NoAH Completely Separated condition also had marginally higher Oculomotor scores than the Completely Separated display incorporating the artificial horizon (AH Completely Separated). There were no significant differences in sickness symptoms or severity (measured by self-assessment, postural stability, and cognitive and visual tests) between display designs at 30 and 60 minutes post-exposure. Further, the 30- and 60-minute post measures did not differ significantly from baseline scores, suggesting that aftereffects were not present up to 60 minutes post-exposure. It was concluded that incorporating an artificial horizon into the Dual Banners display will be beneficial in mitigating motion sickness symptoms in manned ground vehicles using 360° indirect vision systems. Screening for perceived attentional control will also be advantageous in situations where selection is possible. However, caution must be taken in generalizing these results to missions with terrain or vehicle speeds different from those used in this study, or with longer immersion times.
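The SSQ scores referred to throughout are derived from a raw symptom checklist with fixed subscale conversion weights (the standard Kennedy et al. scoring). The sketch below shows only the final weighting arithmetic; the mapping of individual symptom items onto the three subscales is omitted, so treat this as a scoring sketch rather than the full instrument.

```python
# Simplified Simulator Sickness Questionnaire scoring. Each subscale raw
# score is the sum of its symptom ratings (0-3); the published conversion
# weights then scale the subscales, and Total Severity scales the
# combined raw sum.

N_WEIGHT, O_WEIGHT, D_WEIGHT, TS_WEIGHT = 9.54, 7.58, 13.92, 3.74

def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    """Return (Nausea, Oculomotor, Disorientation, Total Severity)."""
    return (nausea_raw * N_WEIGHT,
            oculomotor_raw * O_WEIGHT,
            disorientation_raw * D_WEIGHT,
            (nausea_raw + oculomotor_raw + disorientation_raw) * TS_WEIGHT)

# Example: raw subscale sums of 2, 3 and 1 from one participant.
n, o, d, ts = ssq_scores(2, 3, 1)
```

Total Severity is the statistic the regression model above predicts; Nausea, Oculomotor, and Disorientation are the subscales compared across display conditions.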
18

VOLUME MEASUREMENT OF BIOLOGICAL MATERIALS IN LIVESTOCK OR VEHICULAR SETTINGS USING COMPUTER VISION

Matthew B Rogers (13171323) 28 July 2022 (has links)
A Velodyne Puck VLP-16 LiDAR and a Carnegie Robotics MultiSense S21 stereo camera were placed in an environmental testing chamber to investigate the effects of dust and lighting on depth returns. The chamber was designed and built to provide varied lighting conditions, with corn-dust plumes forming the atmosphere. Software employing ROS, Python, and OpenCV was written for point cloud streaming and publishing. Dust-chamber results showed that, while dust effects were present in the point clouds produced by both instruments, the stereo camera was able to “see” the far wall of the chamber and did not image the dust plume, unlike the LiDAR sensor. The stereo camera was also set up to measure the volume of total mixed ration (TMR) and shelled grain in various volume scenarios with mixed surface terrains. Calculations for finding actual pixel area from depth were used along with a volume formula exploiting the depth capability of the stereo camera. Accuracy was good for a target of 8 liters of shelled corn, with final values between 6.8 and 8.3 liters across three varied surface scenarios. Lessons learned from the chamber and volume measurements were applied to the loading of large grain vessels filled from a 750-bushel grain cart, calculating the volume of corn grain and tracking the location of the vessel in near real time. Segmentation, masking, and template matching were the primary software tools used within ROS, OpenCV, and Python, with the S21 as the central hardware piece. The resulting video and images show some lag between depth and color images, dust blocking depth pixels, and template-matching misses; however, the results were sufficient as a proof of concept for tracking and volume estimation.
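The volume computation described in the abstract — per-pixel ground footprint derived from depth, then height times footprint summed over the surface — can be written out under simple pinhole-camera assumptions. The focal length, depths, and grid size below are invented for the example, and a real implementation would read these from the stereo camera's calibration and depth image.

```python
# Volume from a depth image under a pinhole model: at depth z, one pixel
# covers a ground footprint of (z / f)^2 square metres for a focal length
# f in pixels. Material height is the reference (empty-surface) depth
# minus the measured depth at each pixel.

def volume_from_depth(depth, reference_depth, focal_px):
    """Integrate (height x per-pixel footprint) over a 2D depth image [m]."""
    total = 0.0
    for row in depth:
        for z in row:
            height = reference_depth - z      # pile height at this pixel
            footprint = (z / focal_px) ** 2   # pixel area on the surface
            total += height * footprint
    return total

# 100 x 100 pixel patch: a flat pile 0.1 m tall seen at 2.1 m depth,
# against an empty reference surface at 2.2 m.
depth = [[2.1] * 100 for _ in range(100)]
volume_m3 = volume_from_depth(depth, reference_depth=2.2, focal_px=1000.0)
litres = volume_m3 * 1000.0
```

In practice the reference surface is itself measured (an empty bin or flat ground plane), and per-pixel depths vary, which this per-pixel formulation already handles.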
19

Towards the Utilization of Machine Vision Systems as an Integral Component of Industrial Quality Monitoring Systems

Megahed, Fadel Mounir 05 January 2010 (has links)
Recent research has discussed the development of image-processing tools as part of the quality-control framework in manufacturing environments. This research can be divided into two image-based fault-detection approaches: 1) machine vision systems (MVS) alone; and 2) MVS combined with control charts. Despite the intensive research in both groups, there is a disconnect between research and the actual needs on the shop floor, mainly attributable to the following:
• The literature in the first category has focused on improving fault-detection accuracy through special setups, without considering their impact on the manufacturing process. Many of these methods have therefore not been adopted by industry, and they lack the capability of using images already present on the shop floor.
• The studies in the second category have been developed largely in isolation, and most have focused on introducing the concept of applying control charts to image data rather than on tackling specific industry problems.
In this thesis, these limitations are investigated and disseminated to the research community through two journal papers. The first paper shows that a face-recognition tool can successfully detect faults in real time in stamping processes, with changes in image lighting conditions and part location allowed in order to emulate actual manufacturing environments. The second paper reviews the literature on image-based control charts and suggests recommendations for future research. / Master of Science
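The "MVS combined with control charts" approach reviewed above amounts to monitoring an image-derived statistic with standard control limits. A minimal Shewhart-chart sketch follows (all data invented; real image-based charts monitor richer statistics than a mean grey level, but the phase I / phase II mechanics are the same):

```python
# Phase I: estimate the in-control mean and standard deviation of an
# image statistic (here, mean grey level per inspected part). Phase II:
# flag any new part whose statistic falls outside the 3-sigma limits.
import statistics

def shewhart_limits(phase1_values, k=3.0):
    """Return (lower, upper) control limits from in-control data."""
    mu = statistics.mean(phase1_values)
    sigma = statistics.pstdev(phase1_values)
    return mu - k * sigma, mu + k * sigma

def out_of_control(value, limits):
    """True when the monitored statistic breaches either limit."""
    lo, hi = limits
    return value < lo or value > hi

# Mean grey levels measured from eight known-good parts (phase I).
phase1 = [9, 11, 10, 10, 9, 11, 10, 10]
limits = shewhart_limits(phase1)

alarm_bad = out_of_control(14, limits)   # well above the upper limit
alarm_good = out_of_control(10, limits)  # in control
```

The disconnect the thesis identifies lives in the step this sketch glosses over: choosing an image statistic that is both computable from shop-floor images and sensitive to the faults that actually matter.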
20

Geração de espécies reativas de oxigênio e neuroinflamação induzidas por enucleação ocular no sistema visual de ratos. / Reactive oxygen species generation and neuroinflammation induced by ocular enucleation in the rat visual system.

Hernandes, Marina Sorrentino 29 April 2011 (has links)
Unilateral ocular enucleation in rodents is a widely used model for studying the effects of deafferentation on retinorecipient structures and visual system plasticity. We evaluated reactive oxygen species (ROS) generation in the main visual relays of the mammalian brain, the superior colliculus (SC) and the dorsal lateral geniculate nucleus (DLG), after ocular enucleation. Dihydroethidium oxidation revealed increased ROS generation in the SC and DLG after the lesion, which was decreased by the Nox inhibitors DPI and apocynin. RT-PCR results revealed that Nox 2 was upregulated in both retinorecipient structures after deafferentation, whereas Nox 1 and Nox 4 were upregulated only in the SC. To evaluate the role of ROS in structural remodeling after the lesion, apocynin was given to enucleated rats and immunohistochemistry was performed with antibodies against neurofilaments (NFs) and microtubule-associated protein 2 (MAP-2). The results showed that ocular enucleation increases NF and MAP-2 immunostaining in both the SC and the DLG, an effect markedly attenuated by apocynin treatment.
