  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Simultaneous Localization And Mapping Using a Kinect In a Sparse Feature Indoor Environment / Simultan lokalisering och kartering med hjälp av en Kinect i en inomhusmiljö med få landmärken

Hjelmare, Fredrik, Rangsjö, Jonas January 2012 (has links)
Localization and mapping are two of the most central tasks for autonomous robots. They have often been performed using expensive, accurate sensors, but the fast development of consumer electronics has made similar sensors available at a more affordable price. In this master thesis a TurtleBot robot and a Microsoft Kinect camera are used to perform Simultaneous Localization And Mapping, SLAM. The thesis presents modifications to an already existing open source SLAM algorithm. The original algorithm, based on visual odometry, is extended so that it can also make use of measurements from wheel odometry and a single axis gyro. Measurements are fused using an Extended Kalman Filter, EKF, operating in a multirate fashion. Both the SLAM algorithm and the EKF are implemented in C++ using the framework Robot Operating System, ROS. The implementation is evaluated on two different data sets. One set is recorded in an ordinary office room, which constitutes an environment with many landmarks. The other set is recorded in a conference room where one of the walls is flat and white, giving a partially sparse featured environment. Providing the additional sensor information results in a more robust algorithm: periods without credible visual information do not make the algorithm lose track, so the algorithm can be used in a larger variety of environments, including those where the possibility to extract landmarks is low. The results also show that the visual odometry can cancel out drift introduced by the wheel odometry and gyro sensors.
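The multirate EKF idea can be sketched with a toy one-dimensional heading filter: the gyro drives high-rate predictions while the slower visual odometry supplies corrections. The state, rates, and noise values below are illustrative assumptions, not the thesis's implementation.

```python
# Minimal multirate EKF sketch: a 1-D heading state predicted with
# high-rate gyro measurements and corrected with slower visual-odometry
# headings. All noise values are illustrative assumptions.

class HeadingEKF:
    def __init__(self, q=1e-4, r=1e-2):
        self.x = 0.0      # heading estimate (rad)
        self.p = 1.0      # estimate variance
        self.q = q        # process (gyro) noise variance
        self.r = r        # measurement (visual odometry) noise variance

    def predict(self, gyro_rate, dt):
        """High-rate step: integrate the gyro, inflate uncertainty."""
        self.x += gyro_rate * dt
        self.p += self.q

    def correct(self, vo_heading):
        """Low-rate step: fuse a visual-odometry heading (H = 1)."""
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (vo_heading - self.x)
        self.p *= (1.0 - k)

ekf = HeadingEKF()
# 100 Hz gyro with a constant bias; 10 Hz drift-free visual odometry.
for step in range(100):
    ekf.predict(gyro_rate=0.1 + 0.05, dt=0.01)            # biased gyro
    if step % 10 == 9:
        ekf.correct(vo_heading=0.1 * (step + 1) * 0.01)   # true heading
```

Because corrections arrive an order of magnitude less often than predictions, the filter runs in a multirate fashion: the drift accumulated from the biased gyro between visual updates is pulled back at each correction, mirroring the cancellation of wheel-odometry and gyro drift described above.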
13

Evaluation of ROS and Arduino Controllers for the OBDH Subsystem of a CubeSat

Ande, Rama kanth, Amarawadi, Sharath Chandra January 2012 (has links)
CubeSat projects at universities around the world have become predominant in the study and research of CubeSat development, broadening the scope of this new area of space research. Different CubeSats have been developed by universities and institutions for different applications. The process of designing, developing and deploying a CubeSat involves several stages of theoretical and practical work, ranging from understanding the concepts associated with the communication and data handling subsystems to innovations such as implementing compatible operating systems on the CubeSat processors and new designs for transceivers and other components. One of the future trend-setting research areas in CubeSat projects is the implementation of ROS in a CubeSat. The Robot Operating System (ROS) aims to capture the future of many embedded systems, including robotics. In this thesis, an attempt is made to understand the challenges faced when implementing ROS in a CubeSat, to provide a foundation for the OBDH subsystem, and to provide important guidelines for future developers relying on ROS-run CubeSats. Since using traditional transceivers and power supplies would be expensive, we have tried simulating an Arduino to act as the transceiver and power supply subsystems. Arduino is an open-source physical computing platform based on a simple microcontroller board, together with a development environment for writing software for the board, designed to make the use of electronics in major embedded projects more accessible and inexpensive. Another important focus of this thesis has been to establish communication between the CubeSat kit and the Arduino. The major motivation for this thesis was to experiment with and propose alternative approaches that could cut development costs while still producing an effective and useful CubeSat.
An extensive literature review is carried out on Arduino boards and on ROS and its uses in robotics, which served as a base for understanding its use in a CubeSat. An experiment is conducted to establish communication between the CubeSat kit and the Arduino. The results from the study of ROS and the experiments with Arduino have been highly useful in identifying the major problems and complications that developers would encounter while implementing ROS in a CubeSat. A comprehensive analysis of the results provides important suggestions and guidelines for future researchers working in this field.
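The CubeSat-kit-to-Arduino link described above is, at bottom, a serial byte stream; a minimal framing sketch is shown below. The sync byte, length field, and XOR checksum are hypothetical choices for illustration only — the thesis does not specify this protocol.

```python
# Hypothetical byte framing for a CubeSat-kit <-> Arduino serial link:
# [0x7E sync][length][payload...][XOR checksum]. The frame layout is an
# illustrative assumption, not the protocol used in the thesis.

SYNC = 0x7E

def encode_frame(payload: bytes) -> bytes:
    checksum = 0
    for b in payload:
        checksum ^= b                       # simple XOR integrity check
    return bytes([SYNC, len(payload)]) + payload + bytes([checksum])

def decode_frame(frame: bytes) -> bytes:
    if frame[0] != SYNC:
        raise ValueError("missing sync byte")
    length = frame[1]
    payload = frame[2:2 + length]
    checksum = 0
    for b in payload:
        checksum ^= b
    if checksum != frame[2 + length]:
        raise ValueError("checksum mismatch")
    return payload

msg = encode_frame(b"TEMP?")                # hypothetical telemetry query
```

A scheme like this lets either end detect truncated or corrupted frames before acting on them, which matters when the Arduino is standing in for the transceiver and power supply subsystems.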
14

Navigace mobilního robotu B2 ve venkovním prostředí / Navigation of B2 mobile robot in outdoor environment

Hoffmann, David January 2019 (has links)
This master’s thesis deals with the navigation of a mobile robot that uses the ROS framework. The aim is to improve the ability of the existing B2 robot to move autonomously outdoors. The theoretical part contains a description of the navigation core, which consists of the move_base library and the packages used for planning. The practical part describes the flaws of the existing solution, the design and implementation of changes, and the results of subsequent testing in an urban park environment.
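The global planning performed inside such a navigation core can be illustrated with a minimal grid search sketch; the grid, unit costs, and Manhattan heuristic below are illustrative and do not reproduce move_base's costmap-based planner plugins.

```python
import heapq

# Minimal 4-connected grid planner in the spirit of a move_base global
# planner (illustrative only -- the B2 robot's actual costmaps and
# planner configuration are not reproduced here).

def plan(grid, start, goal):
    """A* over a 2-D occupancy grid: 0 = free, 1 = occupied."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:          # walk back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[cur] + 1
                if (nr, nc) not in cost or new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan
                    heapq.heappush(frontier, (new_cost + h, (nr, nc)))
                    came_from[(nr, nc)] = cur
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan(grid, (0, 0), (2, 0))           # must detour around the wall
```

In move_base proper, this role is played by a pluggable global planner operating on an inflated costmap, with a separate local planner tracking the resulting path.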
15

Interaktivní rozhraní pro vzdáleného robota / Interactive Interface for Robot Remote Control

Lokaj, Tomáš January 2012 (has links)
This work deals with an interactive interface for remotely controlled robots and examines some of the existing visualization and simulation tools and robotic platforms. It designs and implements interactive elements suitable for representing detected objects, such as bounding boxes or billboards, and proposes interactive elements that mitigate some of the problems associated with remote control of a robot, such as poor perception of distance and of orientation in the environment. The interactive interface is implemented in the Robot Operating System using its facilities for visualization, communication and operations management. Graphics primitives are represented by Interactive Markers, which, in addition to visualization, also offer possibilities for interaction. With these markers, a simple tool for controlling the movement of the robot is designed.
16

OCTREE 3D VISUALIZATION MAPPING BASED ON CAMERA INFORMATION

Benhao Wang (8803199) 07 May 2020 (has links)
Today, computer science and robotics are highly developed. Simultaneous Localization and Mapping (SLAM) is widely used in mobile robot navigation, game design, and autonomous vehicles. It can be said that in the future, most scenarios where mobile robots are applied will require localization and mapping. Among these, the construction of three-dimensional (3D) maps is particularly important for environment visualization, which is the focus of this research.
In this project, the data used for visualization was collected using a vision sensor and processed by ORB-SLAM2 to generate 3D point-cloud maps of the environment. Because there is a lot of noise in the point cloud, filters are used to remove it. The generated map points are processed by a pass-through filter to cut off points outside a specific range. A statistical filter is then used to remove sparse outlier noise. Thereafter, in order to improve calculation efficiency while retaining the necessary terrain details, a voxel filter is used for downsampling; to improve the composition effect, the sampling amount is increased appropriately to improve surface smoothness. Finally, the processed map points are visualized using Octomap. The implementation utilizes the services provided by the Robot Operating System (ROS): the processed map points are published as point cloud data in ROS and visualized with Octomap in the Rviz tool.
Simulation results confirm that Octomap can show terrain details well in the 3D visualization of the environment. After the simulations, visualization experiments in two environments of different complexity were performed. The experimental results show that the approach can mitigate the influence of noise on the visualization results to a certain extent. It is shown that for static high-precision point clouds, Octomap provides a good visualization. The simulation and experimental results demonstrate the applicability of the approach for visualizing 3D map points for the purpose of autonomous navigation.
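The voxel-filter downsampling step mentioned above can be sketched as follows: points falling into the same voxel are replaced by their centroid. The leaf size and example points are illustrative assumptions.

```python
from collections import defaultdict

# Minimal voxel-grid downsampling sketch in the spirit of the voxel
# filter described above: each occupied voxel contributes one centroid
# point, reducing density while preserving coarse terrain shape.

def voxel_downsample(points, leaf=0.1):
    voxels = defaultdict(list)
    for x, y, z in points:
        # Bucket each point by the voxel it falls into.
        key = (int(x // leaf), int(y // leaf), int(z // leaf))
        voxels[key].append((x, y, z))
    out = []
    for pts in voxels.values():
        n = len(pts)
        out.append((sum(p[0] for p in pts) / n,
                    sum(p[1] for p in pts) / n,
                    sum(p[2] for p in pts) / n))   # voxel centroid
    return out

cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.0), (0.51, 0.50, 0.0)]
reduced = voxel_downsample(cloud, leaf=0.1)        # first two points merge
```

Shrinking the leaf size raises the sampling amount and surface smoothness at the cost of computation, which is exactly the trade-off the project tunes before handing the cloud to Octomap.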
17

SIMULATION AND CONTROL ENHANCEMENTS FOR THE DA VINCI SURGICAL ROBOT™

Shkurti, Thomas E. 23 May 2019 (has links)
No description available.
18

Real-Time Computational Scheduling with Path Planning for Autonomous Mobile Robots

Chen, David Xitai 05 June 2024 (has links)
With the advancement of technology, modern autonomous vehicles are required to perform more complex tasks and navigate through challenging terrain, so the amount of computational resources needed to accomplish those tasks accurately has grown exponentially in the last decade. With growing computational intensity and limited computational resources on embedded devices, schedulers are necessary to manage and fully optimize computational loads between the GPU and CPU, as well as to reduce power consumption and maximize time in the field. Thus far, the effectiveness of schedulers and path planners in managing computational load on embedded devices has been demonstrated through numerous bench tests and simulated environments. However, there has not been any significant real-world data collection with all hardware and software combined. This thesis focuses on the implementation of various computational loads (i.e. scheduler, path planner, RGB-D camera, object detection, depth estimation, etc.) on the NVIDIA Jetson AGX Xavier and real-world experimentation on the Clearpath Robotics Jackal. We compare the computational response time and effectiveness of all systems tested in the real world against the same software and hardware architecture on the bench. / Master of Science / Modern autonomous vehicles are required to perform more complex tasks with limited computational resources, power and operating frequency. In the recent past, research on autonomous vehicles has focused on proving the effectiveness of software-based scheduling on embedded devices with an integrated GPU to improve overall performance by speeding up task completion. Our goal is to perform real-world data collection and experimentation with both hardware and software frameworks onboard the Clearpath Robotics Jackal. This will validate the efficiency and computational load of the software framework under multiple varying environments.
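As one illustration of the CPU/GPU scheduling problem, a greedy earliest-finish-time dispatcher can split tasks between the two devices. The task names and per-device costs below are made-up numbers, not measurements from the Jackal experiments.

```python
# Greedy earliest-finish-time sketch of a CPU/GPU computational
# scheduler: each task carries a per-device cost, and is dispatched to
# whichever device would finish it soonest. Costs (in ms) are
# illustrative assumptions, not profiled values from the thesis.

def schedule(tasks):
    """tasks: list of (name, cpu_cost, gpu_cost). Returns (name, device, finish)."""
    free_at = {"cpu": 0.0, "gpu": 0.0}      # when each device next idles
    plan = []
    for name, cpu_cost, gpu_cost in tasks:
        finish = {"cpu": free_at["cpu"] + cpu_cost,
                  "gpu": free_at["gpu"] + gpu_cost}
        device = min(finish, key=finish.get)  # earliest finish wins
        free_at[device] = finish[device]
        plan.append((name, device, finish[device]))
    return plan

tasks = [("object_detection", 80.0, 15.0),   # GPU-friendly workload
         ("path_planning",    10.0, 25.0),   # CPU-friendly workload
         ("depth_estimation", 60.0, 20.0)]
plan = schedule(tasks)
```

Even this toy policy shows the point of the thesis's scheduler: interleaving GPU-friendly perception tasks with CPU-friendly planning keeps both devices busy and shortens the overall response time compared with running everything on one device.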
19

An open-source digital twin of the wire arc directed energy deposition process for interpass temperature regulation

Stokes, Ryan Mitchell 10 May 2024 (has links) (PDF)
The overall goal of this work is to create an open-source digital twin of the wire arc directed energy deposition process using Robot Operating System 2 for interpass temperature regulation of a maraging steel alloy. This framework takes a novel approach to regulating the interpass temperature by using in-situ infrared camera data and closed-loop feedback control enabled by Robot Operating System 2. This is the first implementation of Robot Operating System 2 for wire arc directed energy deposition, and the framework outlines a sensor- and machine-agnostic approach for creating a digital twin of this additive manufacturing process. In-situ control of the welding process is conducted on a maraging steel alloy, demonstrating that interpass temperature regulation leads to improved as-built surface roughness and more consistent as-built hardness. An evaluation of three distinct weld modes (Pulsed MIG, CMT MIX, and CMT Universal) and two primary process parameters (travel speed and wire feed speed) was conducted to identify suitable process windows for welding the maraging alloy. Single-track welds for each parameter and weld mode combination were produced and evaluated against current weld bead metrics in the literature. Non-destructive profilometry and destructive characterization were performed on the single-track welds to evaluate geometric features such as wetting angle, dilution percentage, and cross-sectional area. In addition, the role of material feed rate on heat input and cross-sectional area was examined in relation to the as-built hardness. The Robot Operating System 2 digital twin provides a visualization environment to monitor and record real-time data from a variety of sensors, including robot position, weld data, and thermal camera images. Point cloud data is visualized in real time to provide insight into the captured weld metadata.
Capturing in-situ data from the wire arc directed energy deposition process is critical to establishing an improved understanding of the process for parameter optimization and tool-path planning, both of which are required to build repeatable, quality components. This work presents an open-source method to capture multi-modal data in a shared environment for improved data capture, sharing, synchronization, and visualization. The digital twin provides users enhanced process control capabilities and greater flexibility by utilizing Robot Operating System 2 as a middleware providing interoperability between sensors and machines.
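The closed-loop interpass regulation can be sketched as a simple hold-until-cool loop driven by the infrared reading: deposition of the next layer waits until the measured temperature drops below the interpass limit. The threshold, polling rate, and exponential cooling stand-in below are illustrative assumptions, not the digital twin's actual parameters.

```python
import math

# Closed-loop interpass temperature regulation sketch: the next pass is
# held while the infrared camera reading stays above the interpass
# threshold. All numbers are illustrative assumptions.

INTERPASS_MAX_C = 150.0

def wait_for_interpass(read_temp, poll=1.0, timeout=600.0):
    """Poll the IR reading until it falls below the threshold."""
    elapsed = 0.0
    while elapsed < timeout:
        temp = read_temp()
        if temp <= INTERPASS_MAX_C:
            return elapsed                  # cleared to weld the next pass
        elapsed += poll
    raise TimeoutError("part never cooled below interpass temperature")

def make_fake_camera(start=300.0, ambient=25.0, rate=0.02):
    """Toy exponential cooling in place of a real IR camera driver."""
    state = {"t": 0.0}
    def read_temp():
        temp = ambient + (start - ambient) * math.exp(-rate * state["t"])
        state["t"] += 1.0
        return temp
    return read_temp

waited = wait_for_interpass(make_fake_camera())   # seconds of dwell
```

In the actual framework this loop would live in a ROS 2 node, with the camera reading arriving over a topic and the welder gated through the middleware, which is what makes the approach sensor- and machine-agnostic.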
20

Exploração robótica ativa usando câmera de profundidade / Active robotic exploration using depth camera

Viecili, Eduardo Brendler 17 March 2014 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Mobile robots should be able to search (explore the environment and recognize objects) autonomously and efficiently. This work developed a mobile robot capable of performing the search for a 3D object in an unknown environment, using only one depth camera (RGB + Depth) as a sensor and executing an active vision strategy. The Microsoft Kinect was adopted as the sensor. A mobile robot (XKBO) was also built using the Robot Operating System (ROS), with its architecture adapted from the STANAG 4586 standard. Thanks to the tools available in ROS, it was possible to use existing algorithms to recognize 3D objects with the Kinect, and the Kinect also simplified generating maps of the environment.
A new active exploration strategy was developed that considers the effort required for the robot to move to frontier regions (occluded areas) as well as the presence of traces of the target object. The metrics used demonstrate that depth cameras have potential for visual search tasks because they associate visual and depth information, allowing the robot to better understand the environment and the target object of the search. Keywords: Mobile Robot. Exploration. Visual Search. RGB-D Camera.
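The exploration strategy's trade-off between movement effort and object evidence can be sketched as a frontier-scoring function; the weights and scoring form are illustrative assumptions, not the thesis's exact utility.

```python
import math

# Frontier selection sketch for active visual search: each candidate
# frontier is scored by object evidence minus travel effort. The weights
# and example values are illustrative assumptions.

def score_frontier(robot_xy, frontier_xy, evidence,
                   w_effort=1.0, w_evidence=10.0):
    dx = frontier_xy[0] - robot_xy[0]
    dy = frontier_xy[1] - robot_xy[1]
    effort = math.hypot(dx, dy)             # straight-line travel cost proxy
    return w_evidence * evidence - w_effort * effort

def pick_frontier(robot_xy, frontiers):
    """frontiers: list of ((x, y), evidence in [0, 1])."""
    return max(frontiers,
               key=lambda f: score_frontier(robot_xy, f[0], f[1]))[0]

frontiers = [((5.0, 0.0), 0.9),   # far, but strong traces of the object
             ((1.0, 0.0), 0.1),   # near, little evidence
             ((8.0, 0.0), 0.2)]
best = pick_frontier((0.0, 0.0), frontiers)
```

Weighting evidence against effort is what distinguishes this active search from plain frontier exploration: a distant frontier with strong traces of the object can beat a nearby one with none.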
