71 |
Intelligent automated guided vehicle (AGV) with genetic algorithm decision making capabilities
Lubbe, Hendrik Gideon (January 2007)
Thesis (M.Tech.) - Central University of Technology, Free State, 2006
The ultimate goal of this research was to make an intelligent learning machine, so a new method had to be developed. This was made possible by creating a programme that generates another programme. By constantly changing the generated programme to improve itself, the machine is given the ability to adapt to its surroundings and, thus, learn from experience.
This generated programme had to perform a specific task. For this experiment the programme was generated for a simulated PIC microcontroller aboard a simulated robot. The goal was to get the robot as close to a specific position inside a simulated maze as possible. The robot therefore had to show the ability to avoid obstacles, although only the distance to the destination was given as an indication of how well the generated programme was performing.
The programme performed experiments by randomly changing a number of instructions in the current generated programme. The generated programme was evaluated by simulating the reactions of the robot. If the change resulted in the robot getting closer to the destination, the changed programme was kept for future use. If the change resulted in a less desirable reaction, the newly generated programme was discarded and the unchanged programme was kept. This process was repeated one hundred thousand times before the generated programme was considered valid.
Because there was only a slim chance that a randomly chosen instruction would be advantageous to the programme, many changes were needed to obtain the desired instruction and, thus, the desired result. After each change an evaluation was made through simulation. The number of necessary changes is greatly reduced by giving seemingly desirable instructions a higher chance of being chosen than seemingly unsatisfactory ones.
Due to the extensive use of the random function in this experiment, the results differ from one run to the next. To overcome this, many individual programmes had to be generated, each by simulating and changing an instruction in the generated programme a hundred thousand times.
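The mutate-simulate-keep loop described above amounts to a random-mutation hill climber. A minimal sketch follows; the instruction set, programme length, and the toy fitness function are all hypothetical stand-ins, since the thesis evaluated programmes on a simulated PIC microcontroller inside a maze simulation.

```python
import random

# Hypothetical instruction set; the thesis used simulated PIC instructions.
INSTRUCTIONS = ["FWD", "BACK", "LEFT", "RIGHT", "NOP"]
PROGRAMME_LENGTH = 20
ITERATIONS = 1000  # the thesis repeated this one hundred thousand times

def simulate(programme):
    """Stand-in fitness: distance from robot to destination after running
    the programme. Here a toy score rewards 'FWD' instructions; the real
    system ran a full maze simulation."""
    return PROGRAMME_LENGTH - programme.count("FWD")

def hill_climb():
    current = [random.choice(INSTRUCTIONS) for _ in range(PROGRAMME_LENGTH)]
    best_distance = simulate(current)
    for _ in range(ITERATIONS):
        candidate = list(current)
        # Randomly change one instruction in the current programme.
        candidate[random.randrange(PROGRAMME_LENGTH)] = random.choice(INSTRUCTIONS)
        distance = simulate(candidate)
        # Keep the change only if the robot ends up no further from the goal.
        if distance <= best_distance:
            current, best_distance = candidate, distance
    return current, best_distance
```

Because worsening changes are always discarded, the distance to the goal can only decrease over the run, mirroring the keep-or-discard rule in the abstract.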
This method was compared against genetic algorithms, which were used to generate a programme for the same simulated robot. The new method made the robot adapt to its surroundings much faster than the genetic algorithms did.
A physical robot, similar to the virtual one, was built to prove that the generated programmes could be used on a physical robot.
There were quite a number of differences between the generated programmes and the way in which a human would generally construct the programme. Therefore, this method not only gives programmers a new perspective, but could also possibly do what human programmers have not been able to achieve in the past.
|
72 |
Visual arctic navigation: techniques for autonomous agents in glacial environments
Williams, Stephen Vincent (15 June 2011)
Arctic regions are thought to be more sensitive to climate change fluctuations, making weather data from these regions especially valuable for climate modeling. Scientists have expressed an interest in deploying a robotic sensor network in these areas, minimizing the exposure of human researchers to the harsh environment while allowing dense, targeted data collection. For any such robotic system to be successful, a certain set of base navigational functionality must be developed. Further, these navigational algorithms must rely on the types of low-cost sensors that would be viable for use in a multi-agent system. A set of vision-based processing techniques has been proposed that augments current robotic technologies for use in glacial terrains. Specifically, algorithms for estimating terrain traversability, robot localization, and terrain reconstruction have been developed that use data collected exclusively from a single camera and other low-cost robotic sensors. For traversability assessment, a custom algorithm was developed that uses local-scale surface texture to estimate the terrain slope. Additionally, a horizon line estimation system has been proposed that is capable of coping with low-contrast, ambiguous horizons. For localization, a monocular simultaneous localization and mapping (SLAM) filter has been fused with consumer-grade GPS measurements to produce full robot pose estimates that do not drift over long traverses. Finally, a terrain reconstruction methodology has been proposed that uses a Gaussian process framework to incorporate sparse SLAM landmarks with dense slope estimates to produce a single, consistent terrain model. These algorithms have been tested within a custom glacial terrain computer simulation and against multiple data sets acquired during glacial field trials.
The results of these tests indicate that vision is a viable sensing modality for autonomous glacial robotics, despite the obvious challenges presented by low-contrast glacial scenery. The findings of this work are discussed within the context of the larger arctic sensor network project, and a direction for future work is recommended.
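The core idea of anchoring a drifting SLAM estimate with absolute GPS fixes can be illustrated with a generic scalar Kalman-style update. This is a simplified stand-in, not the monocular SLAM filter of the thesis; the variances and positions are hypothetical.

```python
def fuse(slam_pos, slam_var, gps_pos, gps_var):
    """Blend a drifting SLAM position estimate with a noisy GPS fix
    (one axis). The Kalman gain weights whichever source is currently
    more certain; the fused variance is always reduced."""
    k = slam_var / (slam_var + gps_var)  # Kalman gain
    fused = slam_pos + k * (gps_pos - slam_pos)
    fused_var = (1 - k) * slam_var
    return fused, fused_var
```

When the SLAM estimate has drifted (large `slam_var`), the gain pushes the fused position toward the GPS fix, which is why the combined estimate does not drift over long traverses.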
|
73 |
Human emotions toward stimuli in the uncanny valley: laddering and index construction
Ho, Chin-Chang (January 2015)
Indiana University-Purdue University Indianapolis (IUPUI)
Human-looking computer interfaces, including humanoid robots and animated humans, may elicit eerie feelings in their users. This effect, often called the uncanny valley, emphasizes our heightened ability to distinguish between the human and the merely humanlike using both perceptual and cognitive approaches. Although reactions to uncanny characters are captured more accurately with emotional descriptors (e.g., eerie and creepy) than with cognitive descriptors (e.g., strange), and although previous studies suggest the psychological processes underlying the uncanny valley are more perceptual and emotional than cognitive, the deep roots of the concept of humanness imply the application of category boundaries and cognitive dissonance in distinguishing among robots, androids, and humans. First, laddering interviews (N = 30) revealed firm boundaries among participants’ concepts of animated, robotic, and human. Participants associated human traits like soul, imperfect, or intended exclusively with humans, and they simultaneously devalued the autonomous accomplishments of robots (e.g., simple task, limited ability, or controlled). Jerky movement and humanlike appearance were associated with robots, even though the presented robotic stimuli were humanlike. The facial expressions perceived in robots as improper were perceived in animated characters as mismatched. Second, association model testing indicated that the independent evaluation based on the developed indices is a viable quantitative technique for the laddering interview. Third, from the interviews several candidate items for the eeriness index were validated in a large representative survey (N = 1,311). The improved eeriness index is nearly orthogonal to perceived humanness (r = .04). The improved indices facilitate plotting relations among rated characters of varying human likeness, enhancing perspectives on humanlike robot design and animation creation.
|
74 |
A new, robust, and generic method for the quick creation of smooth paths and near time-optimal path tracking
Bott, M. P. (January 2011)
Robotics has been the subject of academic study from as early as 1948. For much of this time, study has focused on very specific applications in very well controlled environments. For example, the first commercial robots (1961) were introduced in order to improve the efficiency of production lines. The tasks undertaken by these robots were simple, and all that was required of a control algorithm was speed, repetitiveness, and reliability in these environments. Now, however, robots are being used to move around autonomously in increasingly unpredictable environments, and the need for robotic control algorithms that can successfully react to such conditions is ever increasing. In addition, there is an ever-increasing array of robots available, whose control algorithms are often incompatible; this can result in extensive redesign and large sections of code being rewritten for use on different architectures. The thesis presented here is that a new generic approach can be created that provides robust, high-quality smooth paths and time-optimal path tracking to substantially increase the applicability and efficiency of autonomous motion plans. The control system developed to support this thesis is capable of producing high-quality smooth paths, and of following these paths to a high level of accuracy in a robust and near time-optimal manner. The system can control a variety of robots in environments that contain 2D obstacles of various shapes and sizes. The system is also resilient to sensor error, spatial drift, and wheel-slip. In achieving the above, this system provides previously unavailable functionality by generically creating and tracking high-quality paths, so that only minor and clear adjustments are required between different robots, and by being capable of operating in environments that contain high levels of perturbation.
The system comprises five separate novel component algorithms, catering for five different motion challenges facing modern robots. Each algorithm provides guaranteed functionality that has previously been unavailable with respect to its challenge. The challenges are: high-quality smooth movement to reach n-dimensional goals in regions without obstacles, the navigation of 2D obstacles with guaranteed completeness, high-quality smooth movement for ground robots carrying out 2D obstacle navigation, near time-optimal path tracking, and finally, effective wheel-slip detection and compensation. In meeting these challenges the algorithms have tackled adherence to non-holonomic constraints, applicability to a wide range of robots and tasks, fast real-time creation of paths and controls, sensor error compensation, and compensation for perturbation. This thesis presents each of the above algorithms individually. For each, it is shown that existing methods are unable to produce the results provided by this thesis, before the operation of the algorithm is detailed. The methodology employed varies in accordance with each of the five core challenges. However, a common element of methodology throughout the thesis is gradient descent within a new type of potential field, which is dynamic and capable of the simultaneous creation of high-quality paths and the controls required to execute them. By relating global to local considerations through subgoals, this methodology (combined with other elements) is shown to be fully capable of achieving the aims of the thesis. It is concluded that the produced system represents a novel and significant contribution, as there is no other system (to the author’s knowledge) that provides all of the functionality given. For each component algorithm there are many control systems that provide one or more of its features, but none that are capable of all of the features.
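The common methodology of gradient descent within a potential field can be sketched generically. The sketch below uses the classic attractive/repulsive formulation with hypothetical gains, not the thesis's dynamic field, which additionally produces the execution controls.

```python
import math

def potential_gradient(pos, goal, obstacles, k_att=1.0, k_rep=1.0, influence=2.0):
    """Gradient of an attractive quadratic well at the goal plus repulsive
    terms around each obstacle (active only within its influence radius)."""
    gx = k_att * (pos[0] - goal[0])  # attractive part pulls toward the goal
    gy = k_att * (pos[1] - goal[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < influence:
            # Repulsive part grows sharply as the obstacle is approached.
            scale = k_rep * (1.0 / influence - 1.0 / d) / d**3
            gx += scale * dx
            gy += scale * dy
    return gx, gy

def descend(start, goal, obstacles, step=0.05, iters=500):
    """Follow the negative gradient from start toward the goal."""
    pos = list(start)
    for _ in range(iters):
        gx, gy = potential_gradient(pos, goal, obstacles)
        pos[0] -= step * gx
        pos[1] -= step * gy
    return pos
```

A known limitation of this static formulation is local minima, which is part of what motivates dynamic fields and subgoal mechanisms like those described above.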
Applications for this work are wide ranging as it is comprised of five component algorithms each applicable in their own right. For example, high quality smooth paths may be created and followed in any dimensionality of space if time optimality and obstacle avoidance are not required. Broadly speaking, and in summary, applications are to ground-based robotics in the areas of smooth path planning, time optimal travel, and compensation for unpredictable perturbation.
|
75 |
Haptic control and operator-guided gait coordination of a pneumatic hexapedal rescue robot
Guerriero, Brian A. (10 July 2008)
The Compact Rescue Crawler is a pneumatic legged robot. Two legs of a hexapod were designed and built. The legs are controlled directly from operator inputs: the operator gives foot position inputs through two PHANToM haptic controllers. A PD controller with a supplementary force gain-scheduler controls the stroke length of each cylinder. The force-based position control technique allows the robot feet to track operator inputs to within 10% position error.
A guided gait algorithm was developed to allow the operator to control all six legs simply by haptically guiding the front two. The operator records successful, collision-free trajectories, and the gait coordinator plays the trajectories back through the rear legs as they approach the detected obstacles. This hybrid gait algorithm allows the robot to proceed through a hazardous environment, guided by an operator, but without taxing the input capabilities of the human operator.
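The record-and-replay coordination described above can be sketched as a simple FIFO queue of trajectories. The class and method names here are hypothetical illustrations, not the thesis's implementation.

```python
from collections import deque

class GaitCoordinator:
    """Queues collision-free front-leg trajectories proven by the operator
    and replays them on the rear legs in the order they were recorded."""

    def __init__(self):
        self.recorded = deque()

    def record_front(self, trajectory):
        """Store an operator-guided trajectory (a list of foot positions)."""
        self.recorded.append(list(trajectory))

    def step_rear(self):
        """Replay the oldest recorded trajectory on a rear leg, if any remain."""
        if self.recorded:
            return self.recorded.popleft()
        return None
```

A FIFO ordering matches the physical situation: the rear legs encounter obstacles in the same order the front legs did.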
|
76 |
Perception and control of an MRI-guided robotic catheter in deformable environments
Tuna, Eser Erdem (21 June 2021)
No description available.
|
77 |
An Automated Grid-Based Robotic Alignment System for Pick and Place Applications
Bearden, Lukas R. (12 1900)
Indiana University-Purdue University Indianapolis (IUPUI)
This thesis proposes an automated grid-based alignment system utilizing lasers and an array of light-detecting photodiodes. The intent is to create an inexpensive and scalable alignment system for pick-and-place robotic systems. The system uses transformation matrices, geometry, and trigonometry to determine the movements needed to align the robot with a grid-based array of photodiodes.
The alignment system consists of a sending unit utilizing lasers, a receiving module consisting of photodiodes, a data acquisition unit, a computer-based control system, and the robot being aligned. The control system computes the robot movements needed to position the lasers based on the laser positions detected by the photodiodes. A transformation matrix converts movements from the coordinate system of the grid formed by the photodiodes to the coordinate system of the robot. The photodiode grid can detect a single laser spot and move it to any part of the grid, or it can detect up to four laser spots and use their relative positions to determine rotational misalignment of the robot.
Testing the alignment consists of detecting the position of a single laser at individual points in a distinct pattern on the grid array of photodiodes, and running the entire alignment process multiple times starting with different misalignment cases. The first test provides a measure of the position detection accuracy of the system, while the second test demonstrates the alignment accuracy and repeatability of the system.
The system detects the position of a single laser or multiple lasers by using a method similar to a center-of-gravity calculation. The intensity of each photodiode is multiplied by the X-position of that photodiode. The summed result from each photodiode intensity and position product is divided by the summed value of all of the photodiode intensities to get the X-position of the laser. The same thing is done with the Y-values to get the Y-position of the laser. Results show that with this method the system can read a single laser position value with a resolution of 0.1mm, and with a maximum X-error of 2.9mm and Y-error of 2.0mm. It takes approximately 1.5 seconds to process the reading.
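The centre-of-gravity calculation described above is an intensity-weighted centroid. A short sketch, with hypothetical sample data in place of real photodiode readings:

```python
def laser_position(photodiodes):
    """Estimate the laser spot position from a list of
    (x_mm, y_mm, intensity) photodiode readings: each coordinate is
    weighted by that photodiode's intensity, then normalised by the
    total intensity."""
    total = sum(i for _, _, i in photodiodes)
    x = sum(x * i for x, _, i in photodiodes) / total
    y = sum(y * i for _, y, i in photodiodes) / total
    return x, y
```

Because the centroid interpolates between photodiode positions, the achievable resolution (0.1 mm here) can be finer than the physical spacing of the grid.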
The alignment procedure calculates the initial misalignment between the robot and the grid of photodiodes by moving the robot to two distinct points along the robot’s X-axis so that only one laser is over the grid. Using these two detected points, a movement trajectory is generated to move that laser to the X = 0, Y = 0 position on the grid. In the process, the other three lasers move over the grid, allowing the system to detect the positions of all four lasers and use them to determine the rotational and translational offset needed to align the lasers to the grid of photodiodes. This step is run in a feedback loop, updating the adjustment until it is within a permissible error value. The desired result for the complete alignment is robot manipulator positioning within ±0.5mm along the X and Y axes. The system shows a maximum error of 0.2mm in the X-direction and 0.5mm in the Y-direction, with a run time of approximately 4 to 5 minutes per alignment. If the permissible error value of the final alignment is tripled, the alignment time drops to 1 to 1.5 minutes and the maximum error rises to 1.4mm in both the X and Y directions. The run time decreases because the system runs fewer alignment iterations.
|