211 |
A Formal Approach to Social Learning: Exploring Language Acquisition Through Imitation
Cederborg, Thomas, 10 December 2013
The topic of this thesis is learning through social interaction, consisting of experiments that focus on word acquisition through imitation, and a formalism aiming to provide stronger theoretical foundations. The formalism is designed to encompass essentially any situation where a learner tries to figure out what a teacher wants it to do by interaction or observation. It groups learners that are interpreting a broad range of information sources under the same theoretical framework. A teacher's demonstration, the teacher's eye gaze during a reproduction attempt, and a teacher's speech comment are all treated as the same type of information source. They can all tell the imitator what the demonstrator wants it to do, and they need to be interpreted in some way. By including them all under the same framework, the formalism can describe any agent that is trying to figure out what a human wants it to do. This allows us to see parallels between existing research, and it provides a framing that makes new avenues of research visible. The concept of informed preferences is introduced to deal with cases such as "the teacher would like the learner to perform an action, but if it knew the consequences of that action, it would prefer another action" or "the teacher is very happy with the end result after the learner has cleaned the apartment, but if it knew that the cleaning produced a lot of noise that disturbed the neighbors, it would not like the cleaning strategy". The success of a learner is judged according to the informed teacher's opinion of what would be best for the uninformed version. A series of simplified setups is also introduced, showing how a toy-world setup can be reduced to a crisply defined inference problem with a mathematically defined success criterion (any learner architecture-setup pair has a numerical success value). An example experiment is presented where a learner concurrently estimates the task and what the evaluative comments of a teacher mean. This experiment shows how the idea of learning to interpret information sources can be used in practice. The first of the learning-from-demonstration experiments investigates a learner, specifically an imitator, that can learn an unknown number of tasks from unlabeled demonstrations. The imitator has access to a set of demonstrations, but it must infer the number of tasks and determine which demonstration belongs to which task (there are no symbols or labels attached to the demonstrations). The demonstrator is attempting to teach the imitator a rule where the task to perform depends on the 2D position of an object. The object's 2D position is set at a random location within one of four different, well-separated rectangles, each location indicating that a specific task should be performed. Three different coordinate systems were available, and each task was defined in one of them (for example "move the hand to the object and then draw a circle around it"). To deal with this setup, a local version of Gaussian Mixture Regression (GMR) called Incremental Local Online Gaussian Mixture Regression (ILO-GMR) was used: a small, fixed number of Gaussians is fitted to local data to inform the policy, after which new local points are gathered. Three other experiments extend the types of contexts to include the actions of another human, making the investigation of language learning possible (a word is learnt by imitating how the demonstrator responds to someone uttering the word).
The robot is presented with a setup containing two humans: a demonstrator (who performs hand movements) and an interactant (who might perform some form of communicative act). The interactant's behavior is treated as part of the context, and the demonstrator's behavior is assumed to be an appropriate response to this extended context. Two experiments explore the simultaneous learning of linguistic and non-linguistic tasks (one demonstration could show the appropriate response to an interactant's speech utterance and another demonstration could show the appropriate response to an object position). The imitator is not given access to any symbolic information about what word or hand sign was spoken, and must infer how many words were spoken, how many times linguistic information was present, and which demonstrations were responses to which word. Another experiment explores more advanced types of linguistic conventions and demonstrator actions (simple word-order grammar in the interactant's communicative acts, and the imitation of internal cognitive operations performed by the demonstrator as a response). Since a single general imitation learning mechanism can deal with the acquisition of all the different types of tasks, this opens up the possibility that there might not be a need for a separate language acquisition system. Being able to learn a language is certainly very useful when growing up in a linguistic community, but this selection pressure cannot be used to explain how the linguistic community arose in the first place. It will be argued that a general imitation learning mechanism is both useful in the absence of language and will result in language given certain conditions, such as shared intentionality and the ability to infer the intentions and mental states of others (all of which can be useful to develop in the absence of language). It will be argued that the general tendency to adopt normative rules is a central ingredient for language (not sufficient, and not necessary when adopting an already established language, but certainly very conducive for a community establishing linguistic conventions).
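A minimal sketch of the regression step behind GMR-style imitation learning such as ILO-GMR (illustrative only, not code from the thesis): a Gaussian mixture is fitted over joint context-action data and then conditioned on a new context to obtain an action. The toy data, dimensions, and component count are assumptions made for the example.

```python
# Illustrative sketch: Gaussian Mixture Regression for a context-dependent policy.
# Not code from the thesis; data, dimensions and component count are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmr_predict(gmm, context, dim_c):
    """Condition a fitted GMM on the context part and return the expected action."""
    weights, means, covs = gmm.weights_, gmm.means_, gmm.covariances_  # full covariances
    resp, cond_means = [], []
    for k in range(gmm.n_components):
        mu_c, mu_a = means[k, :dim_c], means[k, dim_c:]
        S_cc = covs[k][:dim_c, :dim_c]
        S_ac = covs[k][dim_c:, :dim_c]
        diff = context - mu_c
        # responsibility of component k for this context
        dens = np.exp(-0.5 * diff @ np.linalg.solve(S_cc, diff))
        dens /= np.sqrt(np.linalg.det(2 * np.pi * S_cc))
        resp.append(weights[k] * dens)
        # conditional mean of the action given the context
        cond_means.append(mu_a + S_ac @ np.linalg.solve(S_cc, diff))
    resp = np.array(resp) / np.sum(resp)
    return np.sum(resp[:, None] * np.array(cond_means), axis=0)

# toy data: 2D object position (context) -> 1D action parameter
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))
y = (np.sin(3 * X[:, 0]) + X[:, 1])[:, None]
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(np.hstack([X, y]))
print(gmr_predict(gmm, np.array([0.4, 0.6]), dim_c=2))
```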
|
212 |
Modeling of operator action for intelligent control of haptic human-robot interfaces
Gallagher, William John, 13 January 2014
Control of systems requiring direct physical human-robot interaction (pHRI) requires special consideration of the motion, dynamics, and control of both the human and the robot. Humans actively change their dynamic characteristics during motion, and robots should be designed with this in mind. Both the case of humans trying to control haptic robots using physical contact and the case of using wearable robots that must work with human muscles are pHRI systems.
Force feedback haptic devices require physical contact between the operator and the machine, which creates a coupled system. This human contact creates a situation in which the stiffness of the system changes based on how the operator modulates the stiffness of their arm. The natural human tendency is to increase arm stiffness to attempt to stabilize motion. However, this increases the overall stiffness of the system, making it more difficult to control and reducing stability. Instability poses a threat of injury or load damage for large assistive haptic devices with heavy loads. Controllers do not typically account for this, as operator stiffness is often not directly measurable. The common solution of using a controller with significantly increased controller damping has the disadvantage of slowing the device and decreasing operator efficiency. By expanding the information available to the controller, it can be designed to adjust the robot's motion based on how the operator is interacting with it and to allow for faster movement in low-stiffness situations. This research explored the utility of a system that can estimate operator arm stiffness and compensate accordingly. By measuring muscle activity, a model of the human arm was used to estimate the stiffness level of the operator and then adjust the gains of an impedance-based controller to stabilize the device. This achieved the goal of reducing oscillations and increasing device performance, as demonstrated through a series of user trials with the device. Through the design of this system, the effectiveness of a variety of operator models was analyzed and several different controllers were explored. The final device has the potential to increase the performance of operators and reduce fatigue due to usage, which in industrial settings could translate into better efficiency and higher productivity.
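A minimal sketch of the kind of stiffness-adaptive behaviour described above (illustrative only, not the controller from the thesis): an admittance-style law whose damping is scheduled on an arm-stiffness estimate derived from normalized EMG activity. The gain values, the linear EMG-to-stiffness map, and the 1-DOF model are assumptions made for the example.

```python
# Illustrative sketch: damping scheduled on an EMG-based arm-stiffness estimate.
# Not the thesis' controller; gains, the EMG map and the 1-DOF model are assumptions.
import numpy as np

def estimate_arm_stiffness(emg_level, k_min=200.0, k_max=1200.0):
    """Map a normalized co-contraction level in [0, 1] to a stiffness guess [N/m]."""
    return k_min + np.clip(emg_level, 0.0, 1.0) * (k_max - k_min)

def admittance_step(f_human, x, v, k_arm, dt=0.001,
                    m_virt=5.0, b_base=20.0, b_scale=0.05):
    """One integration step of a 1-DOF admittance model.

    Damping grows with the estimated arm stiffness, so the device becomes more
    heavily damped when the operator stiffens and moves freely when relaxed.
    """
    b_virt = b_base + b_scale * k_arm          # scheduled damping
    a = (f_human - b_virt * v) / m_virt        # virtual dynamics
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# toy loop: operator pushes with 10 N while gradually co-contracting
x, v = 0.0, 0.0
for i in range(1000):
    emg = i / 1000.0
    x, v = admittance_step(10.0, x, v, estimate_arm_stiffness(emg))
print(round(x, 4), round(v, 4))
```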
Similarly, wearable robots must consider human muscle activity. Wearable robots, often called exoskeleton robots, are used for a variety of tasks, including force amplification, rehabilitation, and medical diagnosis. Force amplification exoskeletons operate much like haptic assist devices, and could leverage the same adaptive control system. The latter two types, however, are designed with the purpose of modulating human muscles, in which case the wearer's muscles must adapt to the way the robot moves, the reverse of the robot adapting to how the human moves. In this case, the robot controller must apply a force to the arm to cause the arm muscles to adapt and generate a specific muscle activity pattern. This related problem is explored and a muscle control algorithm is designed that allows a wearable robot to induce a specified muscle pattern in the wearer's arm.
The two problems, one in which the robot must adapt to the human's motion and one in which the robot must induce the human to adapt its motion, are closely related, and both must be solved to enable simple and natural physical human-robot interaction.
|
213 |
Collective dynamics and control of a fleet of heterogeneous marine vehicles
Wang, Chuanfeng, 13 January 2014
Cooperative control enables the combination of sensor data from multiple autonomous underwater vehicles (AUVs) so that multiple AUVs can perform smarter behaviors than a single AUV. In addition, in some situations a human-driven underwater vehicle (HUV) and a group of AUVs need to collaborate and perform formation behaviors. However, the collective dynamics of a fleet of heterogeneous underwater vehicles are more complex than the already non-trivial single-vehicle dynamics, which makes analyzing the formation behaviors of such a fleet challenging. The research addressed in this dissertation investigates the collective dynamics and control of a fleet of heterogeneous underwater vehicles, including multi-AUV systems and systems comprised of an HUV and a group of AUVs (human-AUV systems). This investigation requires a mathematical motion model of an underwater vehicle. This dissertation presents a review of a six-degree-of-freedom (6DOF) motion model of a single AUV and proposes a method of identifying all parameters in the model based on computational fluid dynamics (CFD) calculations. Using this method, we build a 6DOF model of the EcoMapper and validate the model with field experiments. Based upon a generic 6DOF AUV model, we study the collective dynamics of a multi-AUV system and develop a method of decomposing the collective dynamics. After the collective dynamics decomposition, we propose a method of achieving orientation control for each AUV and formation control for the multi-AUV system. We extend the results and propose a cooperative control scheme for a human-AUV system so that an HUV and a group of AUVs form a desired formation while moving along a desired trajectory as a team. For the post-mission stage, we present a method of analyzing AUV survey data and apply this method to AUV measurement data collected from our field experiments carried out in Grand Isle, Louisiana in 2011, where AUVs were used to survey a lagoon, acquire bathymetric data, and measure the concentration of residual crude oil in the water of the lagoon after the BP Deepwater Horizon oil spill in the Gulf of Mexico in 2010.
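A minimal sketch of the formation-control idea at the heart of such human-AUV teaming (illustrative only, not the method from the dissertation): each follower, modeled as a single integrator in the horizontal plane, is steered toward the leader's position plus a fixed offset, with an additional consensus term coupling the followers. Gains, offsets, and the point-mass model are assumptions made for the example.

```python
# Illustrative sketch: kinematic leader-follower formation control in the plane.
# Not the dissertation's method; gains, offsets and the point-mass model are assumptions.
import numpy as np

def formation_velocities(p_leader, p_followers, offsets, k_form=0.8, k_cons=0.2):
    """Velocity commands for followers modeled as single integrators."""
    n = len(p_followers)
    cmds = np.zeros_like(p_followers)
    for i in range(n):
        target = p_leader + offsets[i]
        cmds[i] = k_form * (target - p_followers[i])
        for j in range(n):                       # consensus on formation error
            if j != i:
                err_i = p_followers[i] - (p_leader + offsets[i])
                err_j = p_followers[j] - (p_leader + offsets[j])
                cmds[i] += k_cons * (err_j - err_i)
    return cmds

# leader (e.g. the HUV) plus three AUVs forming a triangle behind it
p_leader = np.array([0.0, 0.0])
followers = np.array([[-3.0, 2.5], [-2.0, -3.5], [-6.0, 0.5]])
offsets = np.array([[-4.0, 3.0], [-4.0, -3.0], [-8.0, 0.0]])
dt = 0.1
for _ in range(200):
    followers += dt * formation_velocities(p_leader, followers, offsets)
print(np.round(followers, 2))   # converges toward the leader-relative offsets
```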
|
214 |
Development of Integration Algorithms for Vision/Force Robot Control with Automatic Decision System
Bdiwi, Mohamad, 12 August 2014
In advanced robot applications, the challenge today is that the robot should perform different successive subtasks to achieve one or more complicated tasks, similar to a human. Such tasks require combining different kinds of sensors in order to obtain full information about the work environment. However, from the point of view of control, more sensors mean more possibilities for the structure of the control system. As shown previously, vision and force sensors are the most common external sensors in robot systems. As a result, numerous control algorithms and different structures for vision/force robot control can be found in the scientific literature, e.g. shared control, traded control, etc. The open issues in the integration of vision/force robot control can be summarized as follows:
• How to define which subspaces should be vision, position or force controlled?
• When should the controller switch from one control mode to another?
• How to ensure that the visual information can be reliably used?
• How to define the most appropriate vision/force control structure?
In many previous works, a single kind of vision/force control structure, pre-defined by the programmer, is used while performing a specified task. Moreover, if the task is modified or changed, it becomes complicated for the user to describe the task and to define the most appropriate vision/force robot control, especially if the user is inexperienced. Furthermore, vision and force sensors are often used only as simple feedback (e.g. the vision sensor is typically used as a position estimator) or merely to avoid obstacles. Accordingly, much of the useful information provided by the sensors, which could help the robot perform the task autonomously, goes unused.
In our opinion, this failure to define the most appropriate vision/force robot control, together with the weak utilization of all the information the sensors could provide, introduces important limits that prevent the robot from being versatile, autonomous, dependable and user-friendly. The scope of this thesis is therefore to help increase autonomy, versatility, dependability and user-friendliness in areas of robotics that require vision/force integration. More concretely:
1. Autonomy: in terms of an automatic decision system that defines the most appropriate vision/force control modes for different kinds of tasks and chooses the best structure of vision/force control depending on the surrounding environment and a priori knowledge.
2. Versatility: by preparing relevant scenarios for different situations in which both visual servoing and force control are necessary and indispensable.
3. Dependability: in the sense that the robot should depend on its own sensors more than on reprogramming and human intervention; in other words, the robot system should use all the information that the vision and force sensors can provide, not only about the target object but also for feature extraction over the whole scene.
4. User-friendliness: by designing a high-level description of the task, the object and the sensor configuration that is also suitable for inexperienced users.
If the above properties are achieved to a reasonable degree, the proposed robot system can:
• Perform different successive and complex tasks.
• Grasp/contact and track imprecisely placed objects with different poses.
• Decide automatically the most appropriate combination of vision/force feedback for every task, and react immediately, from one control cycle to the next, to changes caused by unforeseen events.
• Benefit from all the advantages of different vision/force control structures.
• Benefit from all the information provided by the sensors.
• Reduce human intervention or reprogramming during the execution of the task.
• Facilitate the description of the task and the entry of a priori knowledge by the user, even if he/she is inexperienced.
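A minimal sketch of the kind of per-axis vision/force mode selection discussed in this abstract (illustrative only, not the thesis' decision system): a diagonal selection matrix decides, for each task-space axis, whether the command comes from a visual position error or from a force error, and the selection is switched automatically when contact is detected. Gains, thresholds, and the 3-axis task space are assumptions made for the example.

```python
# Illustrative sketch: hybrid vision/force control with automatic per-axis selection.
# Not the thesis' system; gains, thresholds and the 3-axis task space are assumptions.
import numpy as np

def select_modes(f_meas, contact_threshold=2.0):
    """Return 1 for vision/position control on an axis, 0 for force control."""
    return np.array([0.0 if abs(f) > contact_threshold else 1.0 for f in f_meas])

def hybrid_command(x_err_vision, f_err, f_meas, kp_v=1.5, kp_f=0.002):
    S = np.diag(select_modes(f_meas))            # vision-controlled axes
    I = np.eye(3)
    v_vision = kp_v * x_err_vision               # from the visual servoing error
    v_force = kp_f * f_err                       # from the force error
    return S @ v_vision + (I - S) @ v_force      # task-space velocity command

# example: contact detected along z, so z is force controlled, x/y vision controlled
x_err = np.array([0.02, -0.01, 0.05])            # m, from the vision system
f_meas = np.array([0.3, 0.1, 12.0])              # N, from the force sensor
f_err = np.array([0.0, 0.0, 5.0 - f_meas[2]])    # track a 5 N contact force along z
print(hybrid_command(x_err, f_err, f_meas))
```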
|
215 |
A Learning-based Control Architecture for Socially Assistive Robots Providing Cognitive Interventions
Chan, Jeanie, 05 December 2011
Due to the world’s rapidly growing elderly population, dementia is becoming increasingly prevalent. This poses considerable health, social, and economic concerns as it impacts individuals, families and healthcare systems. Current research has shown that cognitive interventions may slow the decline of or improve brain functioning in older adults. This research investigates the use of intelligent socially assistive robots to engage individuals in person-centered cognitively stimulating activities. Specifically, in this thesis, a novel learning-based control architecture is developed to enable socially assistive robots to act as social motivators during an activity. A hierarchical reinforcement learning approach is used in the architecture so that the robot can learn appropriate assistive behaviours based on activity structure and personalize an interaction based on the individual’s behaviour and user state. Experiments show that the control architecture is effective in determining the robot’s optimal assistive behaviours for a memory game interaction and a meal assistance scenario.
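A minimal sketch of the flavour of learning described in this abstract (illustrative only, not the thesis' architecture): one tabular Q-learner per activity stage learns which assistive behaviour to select for a given user state. The states, behaviours, and reward signal are assumptions made for the example.

```python
# Illustrative sketch: stage-wise tabular Q-learning of assistive behaviours.
# Not the thesis' architecture; states, behaviours and rewards are assumptions.
import random
from collections import defaultdict

STAGES = ["prompt_turn", "give_hint", "celebrate"]
USER_STATES = ["engaged", "confused", "frustrated"]
BEHAVIOURS = ["verbal_prompt", "gesture", "demonstrate", "encourage"]

q = {s: defaultdict(float) for s in STAGES}      # Q[stage][(user_state, behaviour)]

def choose(stage, user_state, eps=0.2):
    """Epsilon-greedy selection of an assistive behaviour."""
    if random.random() < eps:
        return random.choice(BEHAVIOURS)
    return max(BEHAVIOURS, key=lambda b: q[stage][(user_state, b)])

def update(stage, user_state, behaviour, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning update for the given activity stage."""
    best_next = max(q[stage][(next_state, b)] for b in BEHAVIOURS)
    key = (user_state, behaviour)
    q[stage][key] += alpha * (reward + gamma * best_next - q[stage][key])

# toy interaction loop with a made-up reward model
random.seed(0)
for _ in range(2000):
    stage = random.choice(STAGES)
    state = random.choice(USER_STATES)
    b = choose(stage, state)
    reward = 1.0 if (state == "confused" and b == "demonstrate") else 0.1
    update(stage, state, b, reward, random.choice(USER_STATES))
print(choose("give_hint", "confused", eps=0.0))
```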
|
217 |
Companion Robots Behaving with Style : Towards Plasticity in Social Human-Robot Interaction / Des robots compagnons avec du style : vers de la plasticité en interaction homme-robot
Benkaouar Johal, Wafa, 30 October 2015
Companion robots are becoming technologically and functionally more and more capable, and their usefulness is nowadays a reality. These robots are, however, not yet accepted in home environments, as the worth of having such a robot and of its companionship has not been established. Classically, social robots displayed generic social behaviours and did not take inter-individual differences into account. More and more work in Human-Robot Interaction goes towards personalisation of the companion. Personalisation and control of the companion could lead to a better understanding of the robot's behaviour by the user. Proposing several ways of expression for companion robots playing a social role would allow users to customize their companion to their social preferences. In this work, we propose a plasticity framework for Human-Robot Interaction. We used a Scenario-Based Design method to elicit social roles for companion robots. Then, based on the literature in several disciplines, we propose to depict variations of behaviour of the companion robot with behavioural styles. Behavioural styles are defined according to the social role, using non-verbal expressive parameters. The expressive parameters (static, dynamic and decorators) allow neutral motions to be transformed into styled motions. We conducted a perceptual study through a video-based survey showing two robots displaying styles, allowing us to evaluate the expressibility of two parenting behavioural styles by two kinds of robots. We found that participants were indeed able to discriminate between the styles in terms of dominance and authoritativeness, which is in line with the psychological theory on these styles. Most importantly, we found that the styles preferred by parents for their children were not correlated with their own parental practice. Consequently, behavioural styles are relevant cues for social personalisation of the companion robot by parents. A second experimental study in a natural environment, involving child-robot interaction with 16 children, showed that parents and children expected a versatile robot able to play several social roles. This study also showed that behavioural styles had an influence on the children's bodily attitudes during the interaction. Common dimensions studied in non-verbal communication allowed us to develop measures for child-robot interaction, based on data captured with a Kinect 2 sensor. In this thesis, we also propose a modularisation of a previously proposed affective and cognitive architecture, resulting in the new Cognitive, Affective Interaction Oriented (CAIO) architecture. This architecture has been implemented in the ROS framework, allowing it to be used on social robots. We also propose instantiations of the Stimulus Evaluation Checks of [Scherer, 2009] for two robotic platforms, allowing dynamic expression of emotions. Both the behavioural style framework and the CAIO architecture can be useful for socialising companion robots and improving their acceptability.
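A minimal sketch of how expressive style parameters can turn a neutral trajectory into a styled one, in the spirit of the static, dynamic, and decorator parameters described above (illustrative only, not the thesis' implementation). Parameter names and values are assumptions made for the example.

```python
# Illustrative sketch: styling a neutral joint trajectory with expressive parameters.
# Not the thesis' implementation; parameter names and values are assumptions.
import numpy as np

def apply_style(t, traj, rest_pose, amplitude=1.0, speed=1.0,
                decorator_amp=0.0, decorator_freq=1.0):
    """Return a styled version of a neutral trajectory sampled at times t."""
    t_scaled = np.clip(t * speed, t[0], t[-1])               # dynamic: retiming
    retimed = np.vstack([np.interp(t_scaled, t, traj[:, j])
                         for j in range(traj.shape[1])]).T
    styled = rest_pose + amplitude * (retimed - rest_pose)   # static: amplitude scaling
    styled += decorator_amp * np.sin(2 * np.pi * decorator_freq * t)[:, None]  # decorator
    return styled

# neutral two-joint waving motion, made larger and faster (e.g. a more assertive style)
t = np.linspace(0.0, 2.0, 200)
neutral = np.stack([0.3 * np.sin(2 * np.pi * t), 0.1 * np.cos(2 * np.pi * t)], axis=1)
rest = np.array([0.0, 0.0])
styled = apply_style(t, neutral, rest, amplitude=1.4, speed=1.3, decorator_amp=0.05)
print(styled.shape)
```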
|
218 |
Teaching mobile robots to use spatial words
Dobnik, Simon, January 2009
The meaning of spatial words can only be evaluated by establishing a reference to the properties of the environment in which the word is used. For example, in order to evaluate what is to the left of something or how fast is fast in a given context, we need to evaluate properties such as the position of objects in the scene, their typical function and behaviour, the size of the scene and the perspective from which the scene is viewed. Rather than encoding the semantic rules that define spatial expressions by hand, we developed a system where such rules are learned from descriptions produced by human commentators and information that a mobile robot has about itself and its environment. We concentrate on two scenarios and words that are used in them. In the first scenario, the robot is moving in an enclosed space and the descriptions refer to its motion ('You're going forward slowly' and 'Now you're turning right'). In the second scenario, the robot is static in an enclosed space which contains real-size objects such as desks, chairs and walls. Here we are primarily interested in prepositional phrases that describe relationships between objects ('The chair is to the left of you' and 'The table is further away than the chair'). The perspective can be varied by changing the location of the robot. Following the learning stage, which is performed offline, the system is able to use this domain specific knowledge to generate new descriptions in new environments or to 'understand' these expressions by providing feedback to the user, either linguistically or by performing motion actions. If a robot can be taught to 'understand' and use such expressions in a manner that would seem natural to a human observer, then we can be reasonably sure that we have captured at least something important about their semantics. Two kinds of evaluation were performed. First, the performance of machine learning classifiers was evaluated on independent test sets using 10-fold cross-validation. A comparison of classifier performance (in regard to their accuracy, the Kappa coefficient (κ), ROC and Precision-Recall graphs) is made between (a) the machine learning algorithms used to build them, (b) conditions under which the learning datasets were created and (c) the method by which data was structured into examples or instances for learning. Second, with some additional knowledge required to build a simple dialogue interface, the classifiers were tested live against human evaluators in a new environment. The results show that the system is able to learn semantics of spatial expressions from low level robotic data. For example, a group of human evaluators judged that the live system generated a correct description of motion in 93.47% of cases (the figure is averaged over four categories) and that it generated the correct description of object relation in 59.28% of cases.
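A minimal sketch of the kind of classifier training and 10-fold cross-validation evaluation described above (illustrative only, not the thesis' system): a classifier learns spatial words from robot-centric geometric features. The feature set, labelling rule, and data are assumptions made for the example.

```python
# Illustrative sketch: learning spatial words from robot-centric features with
# 10-fold cross-validation. Not the thesis' system; features, labels and data
# are assumptions made for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 600
angle = rng.uniform(-np.pi, np.pi, n)      # bearing of the object from the robot
dist = rng.uniform(0.3, 4.0, n)            # distance to the object in metres
X = np.column_stack([np.cos(angle), np.sin(angle), dist])

def describe(a):
    """Toy labelling rule standing in for human commentators."""
    if -np.pi / 4 <= a <= np.pi / 4:
        return "in_front"
    if np.pi / 4 < a <= 3 * np.pi / 4:
        return "left"
    if -3 * np.pi / 4 <= a < -np.pi / 4:
        return "right"
    return "behind"

y = np.array([describe(a) for a in angle])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)          # 10-fold cross-validation
print(f"mean accuracy over 10 folds: {scores.mean():.3f}")
```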
|
219 |
“Do you want to take a short survey?” : Evaluating and improving the UX and VUI of a survey skill in the social robot Furhat: a qualitative case study
Bengtsson, Camilla; Englund, Caroline, January 2018
The purpose of this qualitative case study is to evaluate an early stage survey skill developed for the social robot Furhat, and look into how the user experience (UX) and voice user interface (VUI) of that skill can be improved. Several qualitative methods have been used: expert evaluations using heuristics for human-robot interaction (HRI), user evaluations including observations and interviews, as well as a quantitative questionnaire (RoSAS – Robot Social Attribution Scale). The empirical findings have been classified into the USUS Evaluation Framework for Human-Robot Interaction. The user evaluations were performed in two modes, one group of informants talked and interacted with Furhat with the support of a graphical user interface (GUI), and the other group without the GUI. A positive user experience was identified in both modes, showing that the informants found interacting with Furhat a fun, engaging and interesting experience. The mode with the supportive GUI could be suitable in noisy environments, and for longer surveys with many response alternatives to choose from, whereas the other mode could work better for less noisy environments and for shorter surveys. General improvements that can contribute to a better user experience in both modes were found; such as having the robot adopt a more human-like character when it comes to the dialogue and the facial expressions and movements, along with addressing a number of technical and usability issues.
|
220 |
Humanoid Robots and Artificial Intelligence in Aircraft Assembly : A case study and state-of-the-art review
R. Santana, Estela, January 2018
Increasing demands, a need for more efficient manufacturing processes and pressure to remain competitive have been driving the development and use of technology in industry since the industrial revolution. The number of operational industrial robots worldwide has been increasing every year and is expected to reach about 3 million by 2020. The aerospace industry still faces difficulty when it comes to automation due to the complexity of the products and low production volumes. These aspects make traditional fixed robots very difficult to implement and economically unfeasible, which is why aircraft assembly remains mainly manual work. These challenges have led the industry to consider other possibilities for automation, bringing the attention of many companies to humanoid robots. The aim of this thesis was to investigate the applicability of autonomous humanoid robots in aircraft assembly activities by focusing on four domains: mobility, manipulation, instruction supply and human-robot interaction. A case study was carried out at one workstation of the pre-assembly process of a military aircraft at Saab AB, in order to collect the technical requirements for a humanoid robot to work at this station. In addition, a state-of-the-art literature review was conducted, focusing on commercially available products and ongoing research projects. Cross-referencing the information gathered from the case study and the state-of-the-art review provided an idea of how close humanoid robots are to performing in the aircraft assembly process in each of the four domains. In general, the findings show that the mechanical structure and other hardware are not the biggest challenge when it comes to creating highly autonomous humanoid robots. Physically, such robots already exist, but they mostly lack autonomy and intelligence. In conclusion, the main challenges concern the degree of intelligence required for autonomous operation, including the capability to reason, learn from experience, make decisions and act on its own, as well as the integration of all the different technologies into one single platform. In many domains, sub-problems have been addressed individually, but full solutions for tasks such as autonomous indoor navigation and object manipulation are still under development.
|