51 |
Data Quality in Sensor Data Streams (Datenqualität in Sensordatenströmen). Klein, Anja, 19 June 2009 (has links)
The continuous development of intelligent sensor systems enables the automation and improvement of complex process and business decisions in a wide variety of application scenarios. Sensors can be used, for example, to determine optimal maintenance dates or to control production lines. A fundamental problem here is sensor data quality, which is limited by environmental influences and sensor failures. The goal of this thesis is the development of a data quality model that provides applications and data consumers with quality information for a comprehensive assessment of uncertain sensor data. In addition to data structures for efficient data quality management in data streams and databases, a comprehensive data quality algebra for computing the quality of data processing results is presented. Furthermore, methods for data quality improvement are developed that are specifically tailored to the requirements of sensor data processing. The thesis is completed by approaches to user-friendly data quality querying and visualization.
|
52 |
Transformation of Time-based Sensor Data to Material Quality Data in Stainless Steel Production. Inersjö, Adam, January 2020 (has links)
Quality assurance in stainless steel production requires large amounts of sensor data to monitor the processing steps. Digitalisation of the production would allow higher levels of control to both evaluate and increase the quality of the end products. At Outokumpu Avesta Works, continuous processing of coils creates sensor data without connecting it to individual steel coils, a connection needed to achieve the promises of digitalisation. In this project, the time series data generated by 12 sensors in the continuous processing was analysed and four alternative methods to connect the data to coils were presented. A method based on positional time series was deemed the most suitable for the data and was selected for implementation over other methods that would apply time series analysis to the sensor data itself. Evaluation of the selected method showed that it was able to connect sensor data to 98.10 % of coils, just short of the accuracy requirement of 99 %. Because the overhead of creating the positional time series was constant regardless of the number of sensors, the performance per sensor improved as the number of sensors increased. The median processing time for 24 hours of sensor data was less than 20 seconds per sensor when batch processing eight or more sensors. The performance for processing fewer than four sensors was not as good, requiring further optimisation to reach the requirement of 30 seconds per sensor. Although the requirements were not completely fulfilled, the implemented method can still be used on historical production data to facilitate further quality estimation of stainless steel coils.
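As a rough illustration of a position-based matching approach of the kind described above, the sketch below maps each timestamped sensor reading to a position on the line via a (time, position) series and attributes it to the coil whose position interval contains it. All names, the linear position model, and the interval layout are assumptions made for this sketch, not details taken from the thesis.

```python
from bisect import bisect_right

def position_at(pos_series, t):
    """Linearly interpolate the line position at time t from sorted
    (time, position) pairs; clamp outside the observed range."""
    times = [p[0] for p in pos_series]
    i = bisect_right(times, t)
    if i == 0:
        return pos_series[0][1]
    if i == len(pos_series):
        return pos_series[-1][1]
    (t0, p0), (t1, p1) = pos_series[i - 1], pos_series[i]
    return p0 + (p1 - p0) * (t - t0) / (t1 - t0)

def attribute_readings(readings, pos_series, coils):
    """Assign each (time, value) reading to the coil whose [start, end)
    position interval contains the interpolated position."""
    result = {cid: [] for cid, _, _ in coils}
    for t, value in readings:
        pos = position_at(pos_series, t)
        for cid, start, end in coils:
            if start <= pos < end:
                result[cid].append(value)
                break
    return result
```

Because the positional series is built once and reused for every sensor, the per-sensor cost of such a scheme shrinks as more sensors are batch processed, consistent with the performance behaviour reported above.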
|
53 |
Accuracy Analysis of Low-Cost Inertial Measurement Sensors Using Different Evaluation Strategies (Genauigkeitsuntersuchung von inertialen Messsensoren aus dem Niedrigpreissegment unter Nutzung verschiedener Auswertestrategien). Döhne, Thorben, 20 August 2019 (has links)
Many applications on moving platforms require accurate information about the platform's orientation. Inertial measurement sensors, combined into an inertial measurement unit (IMU), are used to determine the attitude angles. This thesis examines four low-cost IMUs with respect to the achievable accuracy of the attitude angles. The IMUs under investigation are manufactured as microelectromechanical systems (MEMS), which, besides the advantages of low price, low weight, and low power consumption, also brings the disadvantage of lower accuracy compared to classical IMUs. The accuracy analysis is carried out on a dataset from a flight campaign for which a reference solution is also available. The IMU measurements are aided by an accurate GNSS (Global Navigation Satellite System) solution via an extended Kalman filter. In addition to the navigation solution, the sensor errors are estimated as well. Due to excessively large errors in the initial values, some of the estimates partially diverge. To solve this problem, an iterative evaluation is applied, which makes a stable solution possible. A further improvement is achieved through smoothing. Individual small errors in the timestamping, which strongly affect the accuracy of the solution, are compensated by interpolating the data onto regularly spaced timestamps. With this, attitude accuracies of 0.05°, 0.10°, and 0.20° for the roll, pitch, and yaw angles are achieved on the flight lines for two of the four IMUs examined. The accuracies of the other two IMUs are in part considerably worse, which is attributed to inaccurate timestamping during data acquisition.
For the application of laser scanning on moving platforms, an accuracy assessment shows that accuracies of the height component better than 1 dm are possible with the attitude angle accuracies obtained for the two better IMUs.
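The timestamp-regularization step mentioned above, interpolating slightly irregular measurements onto an evenly spaced time grid, can be sketched as follows. The function name and the grid construction are illustrative assumptions; the thesis's actual processing chain is more involved.

```python
def regularize(timestamps, values, step):
    """Resample (timestamps, values) onto a regular grid with spacing
    `step` by linear interpolation between neighbouring samples.
    Assumes timestamps are sorted and contain at least two entries."""
    n = int(round((timestamps[-1] - timestamps[0]) / step))
    grid, out = [], []
    j = 0  # index of the left neighbour of the current grid time
    for k in range(n + 1):
        t = timestamps[0] + k * step
        # advance the left neighbour, keeping j+1 a valid index
        while j + 2 < len(timestamps) and timestamps[j + 1] < t:
            j += 1
        t0, t1 = timestamps[j], timestamps[j + 1]
        frac = (t - t0) / (t1 - t0)
        grid.append(t)
        out.append(values[j] + frac * (values[j + 1] - values[j]))
    return grid, out
```

The point of the step is that small jitter in the recorded timestamps no longer shows up as apparent sensor error, which matters given how strongly timestamping errors degraded the attitude accuracy of two of the four IMUs.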
|
54 |
Representing Data Quality in Sensor Data Streaming Environments. Lehner, Wolfgang; Klein, Anja, 20 May 2022 (has links)
Sensors in smart-item environments capture data about product conditions and usage to support business decisions as well as production automation processes. A challenging issue in this application area is the restricted quality of sensor data due to limited sensor precision and sensor failures. Moreover, data stream processing to meet resource constraints in streaming environments introduces additional noise and decreases the data quality. In order to avoid wrong business decisions due to dirty data, quality characteristics have to be captured, processed, and provided to the respective business task. However, the issue of how to efficiently provide applications with information about data quality is still an open research problem.
In this article, we address this problem by presenting a flexible model for the propagation and processing of data quality. A comprehensive analysis of common data stream processing operators and their impact on data quality enables sound evaluation of the data and reduces incorrect business decisions. Further, we propose a data quality model control that adapts the granularity of the data quality information to the interestingness of the data stream.
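The operator-level propagation idea can be illustrated with a toy aggregation: each tuple carries a quality score alongside its value, and the operator emits both an aggregated value and a propagated quality. The propagation rule below (mean of input qualities) is a deliberately simple stand-in, not the algebra defined in the article.

```python
def quality_aware_avg(tuples):
    """Average the values of (value, quality) tuples and propagate the
    mean input quality as the quality score of the result."""
    values = [v for v, _ in tuples]
    qualities = [q for _, q in tuples]
    return (sum(values) / len(values), sum(qualities) / len(qualities))
```

A consumer receiving (15.0, 0.75) instead of a bare 15.0 can decide whether the result is trustworthy enough to act on, which is exactly the decision support the quality model aims to provide.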
|
55 |
Machine Learning: Anomaly Classification on Sequential Sensor Data. A comparison and evaluation of algorithms for classifying anomalies in an environmentally friendly IoT product with sequential sensor data (Maskininlärning: avvikelseklassificering på sekventiell sensordata). Heidfors, Filip; Moltedo, Elias, January 2019 (has links)
A company has developed an environmentally friendly IoT device with sequential sensor data and wants to use machine learning to classify anomalies in its data. Over the years, several well-performing classification algorithms have been developed; however, no single algorithm works best for every problem. The purpose of this work was therefore to investigate, compare, and evaluate different classifiers within supervised machine learning to find out which classifier gives the best accuracy for classifying anomalies in the kind of IoT device the company has developed. With a literature review we first identified which classifiers are commonly used and have worked well in related work for similar purposes and applications. We decided to further compare and evaluate Random Forest, Naïve Bayes, and Support Vector Machines. We created a dataset of 513 examples that we used for training and evaluation of each classifier. The results showed that Random Forest had superior accuracy with 95.7% compared to Naïve Bayes (81.5%) and Support Vector Machines (78.6%). The conclusion of this work is that Random Forest, with 95.7%, gives a high enough accuracy for the company to make good use of the machine learning model. The results also indicate that Random Forest, for this thesis's specific classification problem, is the best classifier within supervised machine learning, but that there is potential to achieve even higher accuracy with other techniques such as unsupervised or semi-supervised machine learning.
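The evaluation protocol described above, training each candidate on the same labelled examples and comparing holdout accuracy, can be sketched without any library dependency. The `MajorityClass` stand-in below only illustrates the classifier interface; the thesis compared Random Forest, Naïve Bayes, and Support Vector Machines (available, for instance, in scikit-learn).

```python
def holdout_accuracy(classifier, examples, train_frac=0.7):
    """Train on the first `train_frac` of (features, label) examples and
    report the fraction of correctly predicted labels on the remainder."""
    split = int(len(examples) * train_frac)
    train, test = examples[:split], examples[split:]
    classifier.fit([x for x, _ in train], [y for _, y in train])
    correct = sum(1 for x, y in test if classifier.predict(x) == y)
    return correct / len(test)

class MajorityClass:
    """Baseline stand-in classifier: always predicts the most common
    training label, ignoring the features entirely."""
    def fit(self, xs, ys):
        self.label = max(set(ys), key=ys.count)
    def predict(self, x):
        return self.label
```

Running the same `holdout_accuracy` over several classifiers on one fixed dataset is what makes the reported 95.7% / 81.5% / 78.6% figures directly comparable.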
|
56 |
Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission. Taghouti, Maroua, 04 November 2022 (links)
Ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by very heterogeneous data transmissions over the same shared network. This diverse communication produces network packets of various sizes, ranging from very small sensory readings to comparatively huge video frames. The massive amount of data itself, as in the case of sensor networks, is also captured continuously at varying rates and adds to the load on the network, which can hinder transmission efficiency. At the same time, the sheer number of transmissions opens up possibilities to exploit correlations in the transmitted data. Reductions based on such correlations enable networks to keep up with the new wave of big-data-driven communications by investing in techniques that use the resources of the communication system efficiently. One class of solutions for reliable data transmission employs linear coding techniques, which are ill-equipped to handle packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby undermining the benefits of the coding itself. We propose a set of approaches that overcome these issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. Due to the heterogeneity of the packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions is deterministic shifting, which reduces the overall number of transmitted packets.
Moreover, the RaSOR scheme employs coding using XOR operations on shifted packets, without the need for coding coefficients, thus achieving linear encoding and decoding complexities.
Another facet of IoT applications can be found in sensory data, which is known to be highly correlated and for which compressed sensing is a potential approach to reduce the overall number of transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as those advocated by Industry 4.0. This design performs one-step decoding to reduce the computational complexity and delay of the reconstruction process at the receiver, and it investigates the effectiveness of combining compressed sensing and network coding.
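As a loose illustration of the padding problem motivating this work: conventional RLNC pads every packet in a generation up to the longest one before coding, so the wasted bytes grow with size disparity. The helpers below quantify that overhead and show a plain XOR combination of zero-padded packets; real RLNC additionally multiplies packets by random finite-field coefficients, which is omitted here, and the function names are mine, not the thesis's.

```python
def padding_overhead(packet_lengths):
    """Bytes of zero-padding needed to equalize all packets in a
    generation to the length of the longest packet."""
    longest = max(packet_lengths)
    return sum(longest - n for n in packet_lengths)

def xor_padded(packets):
    """XOR a list of byte strings after implicitly zero-padding them
    to equal length (zero bytes leave the XOR unchanged)."""
    longest = max(len(p) for p in packets)
    out = bytearray(longest)
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)
```

With one 3-byte reading coded together with two 10-byte packets, 7 of 23 payload bytes are pure padding, which is the kind of overhead the progressive shortening and deterministic shifting schemes set out to avoid.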
|
57 |
ANT+ Sensors for Data Gathering: Using wireless technology to elevate the well-being of wheelchair users. Shahda, Madhat, January 2023 (links)
Manual wheelchair users often encounter significant challenges in terms of energy expenditure, time allocation, and physical effort when navigating everyday life scenarios. This continuous strain exposes their upper body, particularly the arms and shoulders, to extreme fatigue, eventually leading to long-term damage. Consequently, extensive long-term usage combined with bodily imbalances can contribute to further health complications, detrimentally affecting both general physical and mental health. The objective of this thesis is to assess the possibility of building mobile phone software that can work in conjunction with commercial ANT+ sensors installed on the chair to gather, calculate, and present the user with valuable data that facilitates the adjustment of their movement patterns to enhance health outcomes. Furthermore, this software allows us to evaluate the connectivity convenience of ANT+ sensors, both on their own and in comparison to other wireless sensors. Additionally, it allows us to gauge the accuracy of the sensors and whether they can help optimize wheelchair functionality, aiming to amplify the user's output and minimize potential and avoidable damage. The results show that while the development of the application was not as straightforward as expected, because the development tools provided by ANT+ are outdated, the sensors provided reasonably accurate data, proving beneficial to the users in this context. Moreover, the majority of testers found that connecting to ANT+ sensors was notably easier than with other wireless sensors, highlighting the user-friendly nature of the connection process.
|
58 |
Qualitative Distances and Qualitative Description of Images for Indoor Scene Description and Recognition in Robotics. Falomir Llansola, Zoe, 28 November 2011 (links)
The automatic extraction of knowledge from the world by a robotic system, in the way human beings interpret their environment through their senses, is still an unsolved task in Artificial Intelligence. A robotic agent is in contact with the world through its sensors and other electronic components, which obtain and process mainly numerical information. Sonar, infrared and laser sensors obtain distance information. Webcams obtain digital images that are represented internally as matrices of red, green and blue (RGB) colour coordinate values. All these numerical values obtained from the environment require later interpretation to provide the knowledge the robotic agent needs to carry out a task.
Similarly, light wavelengths with specific amplitudes are captured by the cone cells of human eyes, likewise yielding stimuli without intrinsic meaning. However, the information that human beings can describe and remember from what they see is expressed using words, that is, qualitatively.
The exact process that takes place after our eyes perceive light wavelengths and our brain interprets them is largely unknown. However, a real fact of human cognition is that people go beyond the purely perceptual experience to classify things as members of categories and attach linguistic labels to them.
As the information provided by all the electronic components incorporated in a robotic agent is numerical, the approaches that first appeared in the literature for interpreting this information followed a mathematical trend. This thesis addresses the problem from the other side: its main aim is to process these numerical data in order to obtain qualitative information, as human beings do.
The research work done in this thesis tries to narrow the gap between the acquisition of low-level information by robot sensors and the need to obtain high-level, qualitative information for enhancing human-machine communication and for applying logical reasoning processes based on concepts. Moreover, qualitative concepts can be given meaning by relating them to others. They can be used for reasoning by applying qualitative models developed over the last twenty years for describing and interpreting metrical and mathematical concepts such as orientation, distance, velocity and acceleration. They can also be understood by human users, both written and read aloud.
The first contributions presented are a method for obtaining fuzzy distance patterns (which include qualitative distances such as 'near', 'far', 'very far' and so on) from the data obtained by any kind of distance sensor incorporated in a mobile robot, and a factor for measuring the dissimilarity between those fuzzy patterns. Both have been applied to the integration of the distances obtained by the sonar and laser distance sensors incorporated in a Pioneer 2 dx mobile robot and, as a result, special obstacles such as 'glass windows' and 'mirrors' have been detected. Moreover, the fuzzy distance patterns have also been defuzzified in order to obtain a smooth robot speed, and used to classify orientation reference systems into 'open' (defining an open space to be explored) or 'closed'.
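The qualitative distance labelling described above can be sketched as fuzzy membership functions over raw sensor distances, with the label of highest degree chosen as the qualitative reading. The particular label set, membership shapes, and thresholds below are illustrative assumptions; the thesis defines its own fuzzy patterns.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical membership functions over distances in metres.
LABELS = {
    "near": lambda d: max(0.0, 1.0 - d / 1.0),          # full at 0 m, gone at 1 m
    "far": lambda d: triangular(d, 0.5, 2.0, 3.5),
    "very far": lambda d: min(1.0, max(0.0, (d - 2.0) / 1.5)),
}

def qualitative_distance(d):
    """Return (best_label, membership_degree) for a raw distance reading."""
    best = max(LABELS, key=lambda name: LABELS[name](d))
    return best, LABELS[best](d)
```

Comparing such label/degree patterns across sonar and laser readings of the same direction is one way the mismatch characteristic of glass and mirrors (which fool one sensor type but not the other) could be made explicit.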
The second contribution is a model for qualitative image description (QID), built on newly defined models for qualitative shape and colour description, the topology model by Egenhofer and Al-Taha [1992], and the orientation models by Hernández [1991] and Freksa [1992]. This model can qualitatively describe any kind of digital image and is independent of the image segmentation method used. The QID model has been tested in two robotics scenarios: (i) the description of digital images captured by the camera of a Pioneer 2 dx mobile robot, and (ii) the description of digital images of tile mosaics taken by an industrial camera located on a platform used by a robot arm to assemble tile mosaics.
In order to give a formal and explicit meaning to the qualitative descriptions of the images generated, a Description Logic (DL) based ontology has been designed and is presented as the third contribution. Our approach can automatically process any random image and obtain a set of DL axioms that describe it visually and spatially, and the objects included in the images are classified according to the ontology schema using a DL reasoner. Tests have been carried out using digital images captured by a webcam incorporated in a Pioneer 2 dx mobile robot. The images taken correspond to the corridors of a building at University Jaume I, and the objects within them have been classified into 'walls', 'floor', 'office doors' and 'fire extinguishers' under different illumination conditions and from different observer viewpoints.
The final contribution is the definition of similarity measures between qualitative descriptions of shape, colour, topology and orientation, and their integration into a general similarity measure between two qualitative descriptions of images. These similarity measures have been applied to: (i) extracting objects with similar shapes from the MPEG7 CE Shape-1 library; (ii) assembling tile mosaics by qualitative shape- and colour-similarity matching; (iii) comparing images of tile compositions; and (iv) comparing images of natural landmarks in a mobile robot world for their recognition.
The contributions made in this thesis are only a small step forward in the direction of enhancing robot knowledge acquisition from the world. It is also written with the aim of inspiring others in their research, so that greater contributions can be achieved in the future to improve the quality of life of our society.
|
59 |
Human-Inspired Robot Task Teaching and Learning. Wu, Xianghai, 28 October 2009 (links)
Current methods of robot task teaching and learning have several limitations: highly-trained personnel are usually required to teach robots specific tasks; service-robot systems are limited in learning different types of tasks utilizing the same system; and the teacher’s expertise in the task is not well exploited. A human-inspired robot-task teaching and learning method is developed in this research with the aim of allowing general users to teach different object-manipulation tasks to a service robot, which will be able to adapt its learned tasks to new task setups.
The proposed method was developed to be interactive and intuitive to the user. In a closed loop with the robot, the user can intuitively teach the tasks, track the learning states of the robot, direct the robot attention to perceive task-related key state changes, and give timely feedback when the robot is practicing the task, while the robot can reveal its learning progress and refine its knowledge based on the user’s feedback.
The human-inspired method consists of six teaching and learning stages: 1) checking and teaching the needed background knowledge of the robot; 2) introduction of the overall task to be taught to the robot: the hierarchical task structure, and the involved objects and robot hand actions; 3) teaching the task step by step, and directing the robot to perceive important state changes; 4) demonstration of the task as a whole, offering vocal subtask-segmentation cues at subtask transitions; 5) robot learning of the taught task using a flexible vote-based algorithm to segment the demonstrated task trajectories, a probabilistic optimization process to assign obtained task trajectory episodes (segments) to the introduced subtasks, and generalization of the taught task trajectories in different reference frames; and 6) robot practicing of the learned task and refinement of its task knowledge according to the teacher's timely feedback, where the adaptation of the learned task to new task setups is achieved by blending the task trajectories generated from pertinent frames.
An agent-based architecture was designed and developed to implement this robot-task teaching and learning method. This system has an interactive human-robot teaching interface subsystem, which is composed of: a) a three-camera stereo vision system to track user hand motion; b) a stereo-camera vision system mounted on the robot end-effector to allow the robot to explore its workspace and identify objects of interest; and c) a speech recognition and text-to-speech system, utilized for the main human-robot interaction.
A user study involving ten human subjects was performed using two tasks to evaluate the system based on time spent by the subjects on each teaching stage, efficiency measures of the robot’s understanding of users’ vocal requests, responses, and feedback, and their subjective evaluations. Another set of experiments was done to analyze the ability of the robot to adapt its previously learned tasks to new task setups using measures such as object, target and robot starting-point poses; alignments of objects on targets; and actual robot grasp and release poses relative to the related objects and targets. The results indicate that the system enabled the subjects to naturally and effectively teach the tasks to the robot and give timely feedback on the robot’s practice performance. The robot was able to learn the tasks as expected and adapt its learned tasks to new task setups. The robot properly refined its task knowledge based on the teacher’s feedback and successfully applied the refined task knowledge in subsequent task practices. The robot was able to adapt its learned tasks to new task setups that were considerably different from those in the demonstration. The alignments of objects on the target were quite close to those taught, and the executed grasping and releasing poses of the robot relative to objects and targets were almost identical to the taught poses. The robot-task learning ability was affected by limitations of the vision-based human-robot teleoperation interface used in hand-to-hand teaching and the robot’s capacity to sense its workspace. Future work will investigate robot learning of a variety of different tasks and the use of more robot in-built primitive skills.
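The vote-based segmentation idea in stage 5 above can be loosely illustrated as follows: several simple detectors each propose candidate boundary indices along a demonstrated trajectory, and indices on which enough detectors agree become subtask boundaries. This is a bare sketch of the voting principle, not the thesis's algorithm or its actual features.

```python
def vote_segment(candidates_per_detector, min_votes):
    """Given one list of candidate boundary indices per detector, return
    the sorted indices that at least `min_votes` detectors agree on."""
    votes = {}
    for candidates in candidates_per_detector:
        for idx in set(candidates):  # one vote per detector per index
            votes[idx] = votes.get(idx, 0) + 1
    return sorted(i for i, v in votes.items() if v >= min_votes)
```

Requiring agreement across detectors makes the segmentation robust to any single noisy cue, which is the appeal of a vote-based scheme over thresholding one signal.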
|