201 |
Étude de la direction du regard dans le cadre d'interactions sociales incluant un robot / Gaze direction in the context of social human-robot interaction. Massé, Benoît, 29 October 2018.
Robots are more and more used in a social context. They are required not only to share physical space with humans but also to interact with them. In this context, the robot is expected to understand verbal and non-verbal ambiguous cues that are constantly used in natural human interaction. In particular, knowing who or what people are looking at is very valuable information for understanding each individual's mental state as well as the interaction dynamics; this target is called the Visual Focus of Attention, or VFOA. In this thesis, we are interested in using the inputs of an active humanoid robot participating in a social interaction to estimate who is looking at whom or what.

On the one hand, we want the robot to look at people so it can extract meaningful visual information from its video camera. We propose a novel reinforcement learning method for robotic gaze control, based on a recurrent neural network architecture. The robot autonomously learns a strategy for moving its head (and camera) from audio-visual inputs, and is able to focus on groups of people in a changing environment.

On the other hand, the video camera images are used to infer the VFOAs of people over time. We estimate the 3D head pose (location and orientation) of each face, as it is highly correlated with the gaze direction, and use it in two tasks. First, we note that objects may be looked at while not being visible from the robot's point of view. Under the assumption that objects of interest are being looked at, we propose to estimate their locations relying solely on the gaze directions of visible people. We formulate an ad hoc spatial representation based on probability heat maps, and design and train several convolutional neural network models to regress from the space of head poses to the space of object locations; this provides a set of object locations from a sequence of head poses. Second, we suppose that the locations of objects of interest are known. In this context, we introduce a Bayesian probabilistic model, inspired by psychophysics, that describes the dependency between head poses, object locations, eye-gaze directions, and VFOAs over time. The formulation is based on a switching state-space Markov model; a specific filtering procedure is detailed to infer the VFOAs, as well as an adapted training algorithm.

The proposed contributions use data-driven approaches and are addressed within the context of machine learning. All methods have been tested on publicly available datasets. Some training procedures additionally require simulating synthetic scenarios; the generation process is then explicitly detailed.
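As an illustrative aside, the filtering step such a model implies can be sketched as a discrete Bayes filter over candidate gaze targets. This is a minimal stand-in, not the thesis's switching state-space model: the sticky transition prior, the von Mises-style likelihood, and all parameter values below are assumptions.

```python
import numpy as np

def vfoa_filter(head_dirs, targets, head_pos, p_stay=0.8, kappa=8.0):
    """Discrete Bayesian filtering of the Visual Focus of Attention.

    head_dirs: (T, 3) unit vectors, estimated head orientation per frame.
    targets:   (K, 3) known 3D locations of candidate gaze targets (K >= 2).
    head_pos:  (3,)   3D location of the observed person's head.
    Returns a (T, K) posterior over targets at each frame.
    """
    K = len(targets)
    # Unit directions from the head to each candidate target.
    to_target = targets - head_pos
    to_target /= np.linalg.norm(to_target, axis=1, keepdims=True)

    # Sticky transition matrix: gaze tends to dwell on the same target.
    A = np.full((K, K), (1.0 - p_stay) / (K - 1))
    np.fill_diagonal(A, p_stay)

    belief = np.full(K, 1.0 / K)           # uniform prior over targets
    posteriors = []
    for d in head_dirs:
        # von Mises-like likelihood: head direction clusters around the target.
        lik = np.exp(kappa * (to_target @ d))
        belief = lik * (A.T @ belief)       # predict, then update
        belief /= belief.sum()
        posteriors.append(belief.copy())
    return np.stack(posteriors)
```

The sticky diagonal encodes the psychophysical observation that gaze dwells on a target, while kappa controls how tightly the head direction is assumed to cluster around the gazed-at object.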
|
202 |
Segmentation et reconnaissance des gestes pour l'interaction homme-robot cognitive / Gesture Segmentation and Recognition for Cognitive Human-Robot Interaction. Simao, Miguel, 17 December 2018.
This thesis presents a human-robot interaction (HRI) framework to classify large vocabularies of static and dynamic hand gestures captured with wearable sensors. Static and dynamic gestures are classified separately thanks to a segmentation process. Experimental tests on the UC2017 hand gesture dataset showed high classification accuracy. For online frame-by-frame classification using raw incomplete data, Long Short-Term Memory (LSTM) deep networks and Convolutional Neural Networks (CNNs) performed better than static models trained on specially crafted features, at the cost of training and inference time. Online classification of dynamic gestures enables successful predictive classification. The rejection of out-of-vocabulary gestures is proposed through semi-supervised learning of a network in the Auxiliary Conditional Generative Adversarial Networks framework. The proposed network achieved high accuracy in rejecting untrained patterns of the UC2018 DualMyo dataset.
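For flavour, a frame-by-frame gesture classifier of the kind described can be sketched in a few lines of PyTorch. This is a generic sketch under assumed shapes (20 sensor channels, 24 gesture classes, one hidden layer), not the thesis architecture:

```python
import torch
import torch.nn as nn

class FrameGestureLSTM(nn.Module):
    """Emits a gesture label distribution at every incoming frame."""

    def __init__(self, n_features=20, n_classes=24, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames, state=None):
        # frames: (batch, time, n_features) raw sensor readings.
        out, state = self.lstm(frames, state)
        return self.head(out), state    # (batch, time, n_classes) logits

model = FrameGestureLSTM()
stream = torch.randn(1, 1, 20)          # one incoming sensor frame
state = None
logits, state = model(stream, state)    # carry state across frames for online use
pred = logits[0, -1].argmax().item()    # predictive label before the gesture ends
```

Carrying the recurrent state across calls is what makes predictive, before-gesture-end classification possible on a raw stream.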
|
203 |
Arquitetura de controle inteligente para interação humano-robô / Control architecture for human-robot interaction. Alves, Silas Franco dos Reis, 01 April 2016.
Assuming that robots will coexist with humans in the near future, the need for Intelligent Control Architectures suited to Human-Robot Interaction is evident. This research therefore developed a behavioural Control Architecture Organization whose main purpose is to allow intuitive interaction between robot and people, thus fostering collaboration between them. To this end, a synthetic emotional module, based on the circumplex (two-dimensional) theory of emotions, drives the adaptation of the robot's behaviours, implemented using Motor Schema theory, and the intelligible communication of its internal state. This Organization supported the deployment of the Control Architecture in an assistive healthcare application: a case study of assistive social robots as an auxiliary tool for special education. The experiments demonstrated that the developed control architecture meets the requirements of the application, as stipulated by the consulted experts. This thesis thus contributes the design of a control architecture able to act upon a subjective, cognitive-belief-based appraisal of emotions, the development of a low-cost mobile robot, and a case study in special education.
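As a loose illustration of the two-dimensional (circumplex) idea, an emotional state can be kept as a valence-arousal pair that decays toward neutral, modulates the gains of concurrent motor schemas, and is verbalised by quadrant. The decay rate, gain mapping, behaviour names, and labels below are invented for the sketch and are not the thesis design:

```python
import math

class CircumplexEmotion:
    """Two-dimensional emotional state: valence (pleasure) and arousal."""

    def __init__(self, decay=0.95):
        self.valence, self.arousal, self.decay = 0.0, 0.0, decay

    def update(self, d_valence, d_arousal):
        # Appraised events nudge the state; both axes decay toward neutral.
        self.valence = max(-1, min(1, self.valence * self.decay + d_valence))
        self.arousal = max(-1, min(1, self.arousal * self.decay + d_arousal))

    def behaviour_gains(self):
        # Map the emotional state to weights of concurrent motor schemas.
        return {
            "approach_person": 0.5 + 0.5 * self.valence,  # friendlier when pleased
            "avoid_obstacle":  0.5 + 0.5 * self.arousal,  # more cautious when aroused
        }

    def label(self):
        # Communicate the internal state intelligibly, e.g. on a display.
        angle = math.degrees(math.atan2(self.arousal, self.valence)) % 360
        names = ["happy", "excited", "distressed", "sad", "bored", "calm"]
        return names[int(angle // 60)]
```

Blending schema gains continuously, rather than switching behaviours discretely, is the usual motivation for pairing a dimensional emotion model with Motor Schema control.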
|
204 |
Reconhecimento visual de gestos para imitação e correção de movimentos em fisioterapia guiada por robô / Visual gesture recognition for mimicking and correcting movements in robot-guided physiotherapy. Ricardo Fibe Gambirasio, 16 November 2015.
This dissertation develops a robotic system to guide patients through physiotherapy sessions. The proposed system uses the humanoid robot NAO and analyses patients' movements to guide, correct, and motivate them during a session. First, the system learns a correct physiotherapy exercise by observing a physiotherapist perform it; second, it demonstrates the exercise so that the patient can reproduce it; and finally, it corrects any mistakes the patient makes during the exercise. The correct exercise is captured via a Kinect sensor and divided into a sequence of states in the spatio-temporal dimension using k-means clustering. Those states compose a finite state machine that verifies whether the patient's movements are correct. The transition from one state to the next corresponds to the partial movements that compose the learned exercise. If the patient executes a partial movement incorrectly, the system suggests a correction and returns to the same state, asking the patient to try again. The system was tested with multiple patients undergoing physiotherapeutic treatment for motor impairments; based on the results obtained, it achieved high precision and recall across all partial movements. The emotional impact of the treatment was also measured, via before-and-after questionnaires and via software that recognizes emotions from video taken during treatment, showing a positive emotional impact that could improve patients' motivation and recovery.
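A rough sketch of the learn-then-verify pipeline described above, with k-means key states feeding a sequential state machine. The feature representation, number of states, and distance tolerance are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_exercise(demo_frames, n_states=8):
    """Cluster a demonstrated exercise into an ordered sequence of key states.

    demo_frames: (T, D) array of skeleton features per captured frame.
    """
    km = KMeans(n_clusters=n_states, n_init=10).fit(demo_frames)
    labels = km.labels_
    # Order states by first appearance in the demonstration to form the FSM path.
    order = sorted(set(labels), key=lambda s: np.argmax(labels == s))
    return [km.cluster_centers_[s] for s in order]

def verify(patient_frames, states, tol=0.15):
    """Advance through the FSM only when the patient reaches the next key state."""
    current = 0
    for frame in patient_frames:
        if np.linalg.norm(frame - states[current]) < tol:
            current += 1                   # partial movement done correctly
            if current == len(states):
                return True                # whole exercise completed
    return False                           # stuck: suggest a correction, retry
```

The state machine never advances past an incorrect partial movement, which is what lets the robot localise its correction to the step the patient got wrong.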
|
206 |
Programmation d'un robot par des non-experts / End-user Robot Programming in Cobotic Environments. Liang, Ying Siu, 12 June 2019.
The increasing presence of robots in industry has not gone unnoticed. Cobots (collaborative robots) are revolutionising industry by allowing robots to work in close collaboration with humans. Large industrial players have incorporated them into their production lines, but smaller companies hesitate due to high initial costs and a lack of programming expertise. In this thesis we introduce a framework that combines two disciplines, Programming by Demonstration and Automated Planning, to allow users without programming knowledge to program a robot. The user constructs the robot's knowledge base by teaching it new actions by demonstration, and associates their semantic meaning to enable the robot to reason about them. The robot adopts a goal-oriented behaviour by using automated planning techniques, where users teach action models expressed in a symbolic planning language. We present preliminary user experiments with a Baxter Research Robot to evaluate our approach. We conducted qualitative user experiments to evaluate the users' understanding of the symbolic planning language and the usability of the framework's programming process, and showed that users with little to no programming experience can adopt the symbolic planning language and use the framework. We further present our work on a Programming by Demonstration system for organisation tasks, which includes a goal inference model that accelerates the programming process by predicting the user's intended product configuration.
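As a hedged illustration, an action model taught by demonstration can be stored in STRIPS-like symbolic form and checked against the current state before planning. The predicates, action name, and bindings below are invented for the example:

```python
# A learned action model for a pick-and-place step, in STRIPS-like form.
# Predicates and the action name are invented here for illustration only.
learned_action = {
    "name": "place-on",
    "parameters": ["?obj", "?dest"],
    "preconditions": [("holding", "?obj"), ("clear", "?dest")],
    "effects_add": [("on", "?obj", "?dest"), ("hand-empty",)],
    "effects_del": [("holding", "?obj"), ("clear", "?dest")],
}

def applicable(action, state, binding):
    """Check an action's grounded preconditions against a state (set of facts)."""
    ground = lambda lit: tuple(binding.get(t, t) for t in lit)
    return all(ground(p) in state for p in action["preconditions"])

state = {("holding", "cube"), ("clear", "tray")}
print(applicable(learned_action, state, {"?obj": "cube", "?dest": "tray"}))  # True
```

Representing demonstrations this way is what lets an off-the-shelf planner chain taught actions into goal-oriented behaviour without the end-user writing code.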
|
207 |
Indoor Navigation for Mobile Robots: Control and Representations. Althaus, Philipp, January 2003.
This thesis deals with various aspects of indoor navigation for mobile robots. For a system that moves around in a household or office environment, two major problems must be tackled. First, an appropriate control scheme has to be designed in order to navigate the platform. Second, the form of representations of the environment must be chosen.

Behaviour-based approaches have become the dominant methodologies for designing control schemes for robot navigation. One of them is the dynamical systems approach, which is based on the mathematical theory of nonlinear dynamics. It provides a sound theoretical framework for both behaviour design and behaviour coordination. In the work presented in this thesis, the approach has been used for the first time to construct a navigation system for realistic tasks in large-scale real-world environments. In particular, the coordination scheme was exploited in order to combine continuous sensory signals and discrete events for decision-making processes. In addition, this coordination framework assures a continuous control signal at all times and permits the robot to deal with unexpected events.

In order to act in the real world, the control system makes use of representations of the environment. On the one hand, local geometrical representations parameterise the behaviours. On the other hand, context information and a predefined world model enable the coordination scheme to switch between subtasks. These representations constitute symbols, on the basis of which the system makes decisions. These symbols must be anchored in the real world, requiring the capability of relating to sensory data. A general framework for these anchoring processes in hybrid deliberative architectures is proposed. A distinction of anchoring on two different levels of abstraction reduces the complexity of the problem significantly.

A topological map was chosen as a world model. Through the advanced behaviour coordination system and a proper choice of representations, the complexity of this map can be kept at a minimum. This allows the development of simple algorithms for automatic map acquisition. When the robot is guided through the environment, it creates such a map of the area online. The resulting map is precise enough for subsequent use in navigation.

In addition, initial studies on navigation in human-robot interaction tasks are presented. These kinds of tasks pose different constraints on a robotic system than, for example, delivery missions. It is shown that the methods developed in this thesis can easily be applied to interactive navigation. Results show a personal robot maintaining formations with a group of persons during social interaction.

Keywords: mobile robots, robot navigation, indoor navigation, behaviour based robotics, hybrid deliberative systems, dynamical systems approach, topological maps, symbol anchoring, autonomous mapping, human-robot interaction
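For illustration, the core of the dynamical systems approach is a differential equation on the robot's heading in which each behaviour contributes an attractor or a repellor; summing the contributions keeps the control signal continuous, as the abstract notes. A toy single-obstacle sketch with made-up gains, not the thesis's actual behaviours:

```python
import numpy as np

def heading_rate(phi, psi_target, psi_obs, a=1.0, b=2.0, sigma=0.5):
    """dphi/dt for robot heading phi: target attractor plus obstacle repellor.

    The attractor -a*sin(phi - psi_target) pulls the heading toward the target
    direction; the repellor pushes it away from the obstacle direction, with
    its range limited by sigma. Summing behaviours keeps the signal smooth.
    """
    attract = -a * np.sin(phi - psi_target)
    repel = b * (phi - psi_obs) * np.exp(-((phi - psi_obs) ** 2) / (2 * sigma**2))
    return attract + repel

# Euler-integrate the heading dynamics while driving at constant speed.
phi = 0.0
for _ in range(200):
    phi += 0.05 * heading_rate(phi, psi_target=np.pi / 4, psi_obs=np.pi / 8)
print(round(phi, 2))  # ~1.2 rad: settles on a heading deflected away from the obstacle
```

Because the combined vector field is smooth, adding or removing behaviours (or switching subtasks via the coordination scheme) never produces a discontinuous control signal.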
|
208 |
Mission Specialist Human-Robot Interaction in Micro Unmanned Aerial Systems. Peschel, Joshua Michael, August 2012.
This research investigated the Mission Specialist role in micro unmanned aerial systems (mUAS) and was informed by human-robot interaction (HRI) and technology findings, resulting in the design of an interface that increased the individual performance of 26 untrained CBRN (chemical, biological, radiological, nuclear) responders during two field studies, and yielded formative observations for HRI in mUAS.

Findings from the HRI literature suggested a Mission Specialist requires a role-specific interface that shares visual common ground with the Pilot role and allows active control of the unmanned aerial vehicle (UAV) payload camera. Current interaction technology prohibits this, as responders view the same interface as the Pilot and give verbal directions for navigation and payload control. A review of interaction principles resulted in a synthesis of five design guidelines and a system architecture that were used to implement a Mission Specialist interface on an Apple iPad. The Shared Roles Model was used to model the mUAS human-robot team using three formal role descriptions synthesized from the literature (Flight Director, Pilot, and Mission Specialist).

The Mission Specialist interface was evaluated through two separate field studies involving 26 CBRN experts who did not have mUAS experience. The studies consisted of 52 mission trials to surveil, evaluate, and capture imagery of a chemical train derailment incident staged at Disaster City. Results from the experimental study showed that when a Mission Specialist was able to actively control the UAV payload camera and verbally coordinate with the Pilot, greater role empowerment (confidence, comfort, and perceived best individual and team performance) was reported by a majority of participants for similar tasks; thus, a role-specific interface is preferred and should be used by untrained responders instead of viewing the same interface as the Pilot in mUAS.

Formative observations made during this research suggested: i) establishing common ground in mUAS is both verbal and visual; ii) the type of coordination (active or passive) preferred by the Mission Specialist is affected by command-level experience and perceived responsibility for the robot; and iii) a separate Pilot role is necessary regardless of the preferred coordination type in mUAS. This research is of importance to HRI and CBRN researchers and practitioners, as well as those in the fields of robotics, human-computer interaction, and artificial intelligence, because it found that a human Pilot role is necessary for assistance and understanding, and that there are hidden dependencies in the human-robot team that affect Mission Specialist performance.
|
210 |
Methodology for creating human-centered robots: design and system integration of a compliant mobile base. Wong, Pius Duc-min, 30 July 2012.
Robots have growing potential to enter the daily lives of people at home, at work, and in cities for a variety of service, care, and entertainment tasks. However, several challenges currently prevent widespread production and use of such human-centered robots. The goal of this thesis was first to help overcome one of these broad challenges: the lack of basic safety in human-robot physical interactions. Whole-body compliant control algorithms had previously been simulated that could allow safer movement of complex robots, such as humanoids, but no robot had yet been documented to actually implement these algorithms. Therefore, a wheeled humanoid robot, "Dreamer", was developed to implement the algorithms and explore additional concepts in human-safe robotics. The lower mobile base of Dreamer, dubbed "Trikey", is the focus of this work.

Trikey was developed iteratively, undergoing cycles of concept generation, design, modeling, fabrication, integration, testing, and refinement. Test results showed that Trikey and Dreamer safely performed movements under whole-body compliant control, which is a novel achievement. Dreamer will be a platform for future research and education on new human-friendly traits and behaviors.

Finally, this thesis attempts to address a second broad challenge to advancing the field: the lack of a standard design methodology for human-centered robots. Based on the experience of building Trikey and Dreamer, a set of consistent design guidelines and metrics for the field is suggested. They account for the complex nature of such systems, which must address safety, performance, user-friendliness, and the capability for intelligent behavior.
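As a hedged aside, the kind of Cartesian compliance law that whole-body compliant controllers generalise can be sketched as a virtual spring-damper mapped through the task Jacobian. This single-task sketch with invented gains is not the Dreamer/Trikey controller:

```python
import numpy as np

def compliant_torques(J, x, x_des, dx, K=40.0, D=6.0):
    """Joint torques realising a virtual spring-damper at the end-effector.

    J: task Jacobian (m x n); x, dx: task position and velocity (m,);
    x_des: desired task position (m,). tau = J^T (K (x_des - x) - D dx),
    so contacts are met with bounded compliance instead of rigid tracking.
    """
    f_task = K * (x_des - x) - D * dx    # virtual spring-damper task force
    return J.T @ f_task                  # map the task force to joint torques

# Example: a 2-joint planar mechanism pushed off its goal yields gentle
# restoring torques, bounded by the stiffness K.
J = np.array([[1.0, 0.5], [0.0, 1.0]])
tau = compliant_torques(J, x=np.array([0.1, 0.0]),
                        x_des=np.array([0.0, 0.0]), dx=np.zeros(2))
print(tau)  # restoring torques pulling the effector back toward x_des
```

Keeping the stiffness K low is what makes unexpected human contact safe: the robot yields to the contact force instead of fighting it at full actuator strength.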
|