31

Stereo visual servoing from straight lines / Asservissement visuel stéréo à partir de droites

Alkhalil, Fadi 24 September 2012
The use of visual feedback to perform closed-loop control of a robot has become widespread and nowadays concerns all areas of robotics. Such feedback makes it possible to compare a desired state with the current state by means of visual measurements. The main objective of this doctoral thesis is to design several types of kinematic control laws for stereo visual servoing, which also involves studying the closed-loop stability of the system and the convergence of the task functions. The work strongly relies on the task function formalism, a well-known and useful mathematical tool for expressing the visual error as a function of the state vectors. We have investigated the decoupling between the rotational and translational velocity control laws, together with the epipolar constraint, under stereo visual feedback; the degree of decoupling sought depends on the number of visual features considered. The visual measurements and features used in this thesis are 3D straight lines. The interest of this type of visual feature lies in its robustness against noise and in the possibility of representing other primitives, such as pairs of points or planes, by Plücker coordinates, since a 3D straight line can equally be represented by two points or by the intersection of two planes. This makes all the control laws designed in this thesis valid for other visual features such as points.
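
For readers unfamiliar with the notation, the sketch below recalls the standard Plücker representation of a 3D straight line and the generic kinematic visual servoing law from the literature; it is a reference form only, not the thesis's exact stereo formulation.

```latex
% Plücker coordinates of a 3D line and the classical kinematic
% visual-servoing law (standard forms from the literature).
\[
  \mathcal{L} = (\mathbf{u},\,\mathbf{m}), \qquad
  \mathbf{m} = \mathbf{p} \times \mathbf{u}, \qquad
  \mathbf{u}\cdot\mathbf{m} = 0,
\]
% u: line direction, p: any point on the line, m: line moment.
\[
  \mathbf{e} = \mathbf{s} - \mathbf{s}^{*}, \qquad
  \dot{\mathbf{s}} = \mathbf{L}_{\mathbf{s}}\,\mathbf{v}_c, \qquad
  \mathbf{v}_c = -\lambda\,\widehat{\mathbf{L}_{\mathbf{s}}}^{+}\,\mathbf{e},
\]
% s: visual features built from the line measurements, s*: desired value,
% L_s: interaction matrix, v_c: camera velocity screw, lambda > 0: gain.
```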
32

Ultra Low Latency Visual Servoing for High Speed Object Tracking Using Multi Focal Length Camera Arrays

McCown, Alexander Steven 01 July 2019
In high speed applications of visual servoing, latency from the recognition algorithm can cause significant degradation in response time. Hardware acceleration allows recognition algorithms to be applied directly during the raster scan from the image sensor, thereby removing virtually all video processing latency. This paper examines one such method, along with an analysis of the design decisions made to optimize it for high speed airborne object tracking tests for the US military. Designing test equipment for defense use involves working around unique challenges that arise from many details being deemed classified or highly sensitive information. Designing a tracking system without knowing exact figures for the speed, mass, distance, or nature of the objects being tracked requires a flexible control system that can be easily tuned after installation. To further improve accuracy and allow rapid tuning to an as-yet undisclosed set of parameters, a machine learning powered auto-tuner is developed and implemented as a control loop optimizer.
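
As an illustration only, a control-loop auto-tuner can be as simple as a stochastic search over controller gains that minimizes a measured tracking cost; the sketch below assumes a hypothetical `run_trial` interface to the testbed and PID-style gains, neither of which is specified in the abstract.

```python
import random

def tracking_cost(gains, run_trial):
    """Run one tracking trial with the given (kp, ki, kd) gains and return a
    scalar cost, e.g. integrated absolute tracking error.  `run_trial` is a
    hypothetical interface to the tracking testbed."""
    kp, ki, kd = gains
    return run_trial(kp=kp, ki=ki, kd=kd)

def auto_tune(run_trial, iterations=50, seed=0):
    """Minimal random-search auto-tuner: perturb the best-known gains and keep
    the perturbation whenever it lowers the measured tracking cost."""
    rng = random.Random(seed)
    best = (1.0, 0.0, 0.0)          # starting PID gains (assumed, not from the paper)
    best_cost = tracking_cost(best, run_trial)
    for _ in range(iterations):
        candidate = tuple(max(0.0, g + rng.gauss(0.0, 0.2)) for g in best)
        cost = tracking_cost(candidate, run_trial)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```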
33

Vision based navigation in a dynamic environment / Navigation référencée vision dans un environnement dynamique

Futterlieb, Marcus 10 July 2017
This thesis is directed towards the autonomous long-range navigation of wheeled robots in dynamic environments. It takes place within the FUI Air-Cobot project, led by Akka Technologies, which brought together several companies (Akka, Airbus, 2MORROW, Sterela) and two research laboratories, LAAS and Mines Albi. The project aims at designing a collaborative robot (cobot) able to perform the inspection of an aircraft before takeoff or in a hangar. Various aspects were therefore addressed: non-destructive testing, the navigation strategy, the development of the robotic system and its instrumentation, etc. This thesis addresses the second of these problems, navigation. The considered environment is highly structured (airport runway and hangars) and subject to very strict traffic rules (forbidden zones, etc.); it may be cluttered with both static and dynamic unknown obstacles (luggage or refueling trucks, pedestrians, etc.) that must be avoided to guarantee the safety of people and equipment. Our navigation framework relies on previous works and is based on switching between different control laws (go-to-goal controller, visual servoing, obstacle avoidance) depending on the context. Our contribution is twofold. First, we have designed a visual servoing controller able to make the robot move over long distances (around the aircraft or in the hangar) thanks to a topological map and to the choice of suitable targets. In addition, multi-camera visual servoing control laws have been built to benefit from the image data provided by the different cameras embedded on the Air-Cobot system. The second contribution is related to safety and obstacle avoidance. A control law based on equiangular spirals, relying only on the data provided by the embedded lasers, has been designed to guarantee non-collision. This control law is fully sensor-based and allows any obstacle, fixed or mobile, to be avoided, thus providing a general solution to the collision problem. Experimental results, performed both at LAAS and on the Airbus site at Blagnac (hangars and runways), show the efficiency of the developed strategy.
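
For reference, the equiangular (logarithmic) spiral underlying such avoidance laws is defined in polar coordinates centred on the obstacle as follows; the specific guidance law built on it in the thesis is not reproduced here.

```latex
% Equiangular (logarithmic) spiral in polar coordinates (r, theta)
% centred on the obstacle; alpha is the constant angle between the
% radius vector and the tangent to the curve.
\[
  r(\theta) = r_0\, e^{(\theta - \theta_0)\cot\alpha}
\]
% Keeping the line of sight to the obstacle at a constant angle alpha
% with respect to the robot's heading makes the robot follow such a
% spiral around (or away from) the obstacle.
```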
34

Robot visual servoing with iterative learning control

Jiang, Ping, Unbehauen, R. January 2002
This paper presents an iterative learning scheme for vision-guided robot trajectory tracking. First, a stability criterion for designing an iterative learning controller is proposed; it can be used for a system with an initial resetting error. Using this criterion, the design problem can be converted into finding a positive-definite discrete matrix kernel, and a more general form of learning control can be obtained. A three-dimensional (3-D) trajectory tracking system with a single static camera to realize robot movement imitation is then presented based on this criterion.
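
For context, the simplest (P-type) iterative learning update, of which the positive-definite discrete matrix kernel in the paper is a generalization, has the textbook form below; it is stated here only to fix notation.

```latex
% P-type iterative learning control update (textbook form):
% k is the iteration (trial) index, e_k the tracking error of trial k,
% and Gamma a learning gain matrix.
\[
  u_{k+1}(t) = u_k(t) + \Gamma\, e_k(t+1)
\]
% Under suitable conditions on Gamma, the tracking error over the finite
% trial interval converges as k grows, despite repeated disturbances.
```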
35

Multistage Localization for High Precision Mobile Manipulation Tasks

Mobley, Christopher James 03 March 2017
This paper presents a multistage localization approach for an autonomous industrial mobile manipulator (AIMM). This approach allows tasks with an operational scope outside the range of the robot's manipulator to be completed without having to recalibrate the position of the end-effector each time the robot's mobile base moves to another position. This is achieved by localizing the AIMM within its area of operation (AO) using adaptive Monte Carlo localization (AMCL), which relies on the fused odometry and sensor messages published by the robot, as well as a 2-D map of the AO generated using an optimization-based smoothing simultaneous localization and mapping (SLAM) technique. The robot navigates to a predefined start location in the map, incorporating obstacle avoidance through a technique called trajectory rollout. Once there, the robot uses its RGB-D sensor to localize an augmented reality (AR) tag in the map frame. Once localized, the identity and the 3-D position and orientation (collectively, the pose) of the tag are used to generate a list of initial feature points and their locations based on a priori knowledge. After the end-effector moves to the approximate location of a feature point provided by the AR tag localization, the feature point's location, as well as the end-effector's pose, are refined to within a user-specified tolerance through a control loop that uses images from a calibrated machine vision camera and a laser pointer, simulating stereo vision, to localize the feature point in 3-D space using computer vision techniques and basic geometry. This approach was implemented on two ROS-enabled robots, Clearpath Robotics' Husky and Fetch Robotics' Fetch, to show the utility of the multistage localization approach in executing two tasks prevalent in both manufacturing and construction: drilling and sealant application. The proposed approach achieved an average accuracy of ±1 mm in these operations, verifying its efficacy for tasks with a larger operational scope than the range of the AIMM's manipulator and its robustness for general applications in manufacturing. / Master of Science
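
A minimal sketch of the final refinement stage is given below, assuming hypothetical `locate_feature` and `move_end_effector` interfaces; the actual implementation on the Husky and Fetch platforms is not described at this level of detail in the abstract.

```python
import numpy as np

def refine_feature_point(initial_guess, locate_feature, move_end_effector,
                         tolerance_m=0.001, max_iters=20):
    """Iteratively refine the end-effector position over a feature point.

    `initial_guess` comes from the AR-tag localization; `locate_feature()` is a
    hypothetical routine returning the feature's measured 3-D position (e.g.
    from the calibrated camera plus laser pointer); `move_end_effector(p)`
    commands a Cartesian move.  The loop stops once the residual offset drops
    below `tolerance_m` (1 mm by default, matching the accuracy quoted above)."""
    target = np.asarray(initial_guess, dtype=float)
    for _ in range(max_iters):
        move_end_effector(target)
        measured = np.asarray(locate_feature(), dtype=float)
        error = measured - target
        if np.linalg.norm(error) < tolerance_m:
            return target, True           # converged within tolerance
        target = target + error           # correct towards the measured point
    return target, False                  # did not converge in max_iters
```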
36

A universal iterative learning stabilizer for a class of MIMO systems.

Jiang, Ping, Chen, H., Bamforth, C.A. January 2006
Design of iterative learning control (ILC) often requires some prior knowledge of a system's control matrix. In some applications, such as uncalibrated visual servoing, this knowledge may be unavailable, so that a stable learning control cannot always be achieved. In this paper, a universal ILC is proposed for a class of multi-input multi-output (MIMO) uncertain nonlinear systems with no prior knowledge of the system control gain matrix. It consists of a gain matrix selector drawn from the unmixing set and a learned compensator in the form of a positive-definite discrete matrix kernel, corresponding to coarse gain-matrix probing and refined uncertainty compensation, respectively. Asymptotic convergence for trajectory tracking within a finite time interval is achieved through repetitive tracking. Simulations and experiments on uncalibrated visual servoing are carried out to verify the validity of the proposed control method.
37

A Hybrid Tracking Approach for Autonomous Docking in Self-Reconfigurable Robotic Modules

Sohal, Shubhdildeep Singh 02 July 2019
Active docking in modular robotic systems has received a lot of interest recently, as it allows small versatile robotic systems to coalesce and achieve the structural benefits of larger robotic systems. This feature enables reconfigurable modular robotic systems to bridge the gap between small agile systems and larger robotic systems. The proposed self-reconfigurable mobile robot design exhibits dual mobility, using a tracked drive for longitudinal locomotion and a wheeled drive for lateral locomotion. The two-degree-of-freedom (DOF) docking interface, referred to as GHEFT (Genderless, High strength, Efficient, Fail-Safe, high misalignment Tolerant), allows for efficient docking while tolerating misalignments in 6-DOF. In addition, motion along the vertical axis is achieved via an additional translational DOF, allowing toggling between tracked and wheeled locomotion modes by lowering and raising the wheeled assembly. This thesis also presents a vision-based onboard Hybrid Target Tracking algorithm to detect and follow a target robot, leading to autonomous docking between the modules. The tracked features are then used to bring the robots into sufficient proximity for the docking procedure using Image Based Visual Servoing (IBVS) control. Experimental results validating the robustness of the proposed tracking method, as well as the reliability of the autonomous docking procedure, are also presented in this thesis. / Master of Science / Active docking in modular robotic systems has received a lot of interest recently, as it allows small versatile robotic systems to coalesce and achieve the structural benefits of larger robotic systems. This feature enables reconfigurable modular robotic systems to bridge the gap between small agile systems and larger robotic systems. Such robots can prove useful in environments that are either too dangerous or inaccessible to humans. Therefore, in this research, several specific hardware and software development aspects related to self-reconfigurable mobile robots are proposed. In terms of hardware development, a robotic module was designed that is symmetrically invertible and exhibits dual mobility, using a tracked drive for longitudinal locomotion and a wheeled drive for lateral locomotion. Such interchangeable mobility is important when the robot operates in a constrained workspace. The mobile robot also has an integrated two-degree-of-freedom (DOF) docking mechanism referred to as GHEFT (Genderless, High strength, Efficient, Fail-Safe, high misalignment Tolerant). The docking interface allows for efficient docking while tolerating misalignments in 6-DOF. In addition, motion along the vertical axis is performed via an additional translational DOF, allowing the wheeled assembly to be lowered and raised. The robot is equipped with sensors to provide positional feedback of the joints relative to the target robot. In terms of software development, a vision-based onboard Hybrid Target Tracking algorithm for high-speed, consistent tracking of colored targets is also presented in this work. The proposed technique is used to detect and follow a colored target attached to the target robot, leading to autonomous docking between the modules using Image Based Visual Servoing (IBVS). Experimental results validating the robustness of the proposed tracking approach, as well as the reliability of the autonomous docking procedure, are also presented in the thesis.
The thesis is concluded with discussions about future research in both structured and unstructured terrains.
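
For reference, IBVS of the kind used in the docking stage is typically built on the classical interaction matrix of a normalized image point (x, y) at depth Z, shown below; this is the standard form from the visual servoing literature, not necessarily the exact feature model used in the thesis.

```latex
% Interaction matrix of a normalized image point (x, y) with depth Z,
% relating the feature velocity to the camera velocity screw
% v_c = (v_x, v_y, v_z, w_x, w_y, w_z):
\[
  \begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}
  =
  \begin{bmatrix}
    -1/Z & 0 & x/Z & xy & -(1+x^{2}) & y \\
    0 & -1/Z & y/Z & 1+y^{2} & -xy & -x
  \end{bmatrix}
  \mathbf{v}_c
\]
% Stacking several such points and applying v_c = -lambda L^+ (s - s*)
% drives the tracked features to their desired image positions.
```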
38

Hardware Testbed for Relative Navigation of Unmanned Vehicles Using Visual Servoing

Monda, Mark J. 12 June 2006
Future generations of unmanned spacecraft, aircraft, ground, and submersible vehicles will require precise relative navigation capabilities to accomplish missions such as formation operations and autonomous rendezvous and docking. The development of relative navigation sensing and control techniques is quite challenging, in part because of the difficulty of accurately simulating the physical relative navigation problems in which the control systems are designed to operate. A hardware testbed that can simulate the complex relative motion of many different relative navigation problems is being developed. This testbed simulates near-planar relative motion by using software to prescribe the motion of an unmanned ground vehicle and provides the attached sensor packages with realistic relative motion. This testbed is designed to operate over a wide variety of conditions in both indoor and outdoor environments, at short and long ranges, and its modular design allows it to easily test many different sensing and control technologies. / Master of Science
39

Utilisation of photometric moments in visual servoing / Utilisation de moments photométriques en asservissement visuel

Bakthavatchalam, Manikandan 17 March 2015
This thesis is concerned with visual servoing, a feedback control technique for controlling camera-equipped actuated systems such as robots. For visual servoing, it is essential to synthesize visual information from the camera image in the form of visual features and to establish the relationship between their variations and the spatial motion of the camera. The earliest visual features depend on the extraction and visual tracking of geometric primitives such as points and straight lines in the image. It has been shown that visual tracking and image processing procedures are a bottleneck to the expansion of visual servoing methods; that is why the image intensity distribution has also been used directly as a visual feature. Finally, visual features based on image moments allowed the design of decoupled control laws, but they are restricted by the availability of well-segmented regions or of a discrete set of points in the scene. This work proposes the strategy of capturing the image intensities not directly, but in the form of moments computed over the whole image plane. These global features have been termed photometric moments. Theoretical developments are made to derive the analytical model of the interaction matrix of the photometric moments. Photometric moments enable visual servoing to be performed on complex scenes without visual tracking or image matching procedures, as long as there is no severe violation of the zero-border assumption (ZBA). A practical issue encountered in such dense visual servoing methods is the appearance and disappearance of portions of the scene during the servoing. Such unmodelled effects strongly violate the ZBA, can disturb the control and, in the worst case, result in complete failure to converge. To handle this important practical problem, an improved modelling scheme for the moments that allows the inclusion of spatial weights is proposed. Spatial weighting functions with a specific structure are then exploited such that an analytical model of the interaction matrix can be obtained as simple functions of the newly formulated moments. A part of this work also contributes to the problem of simultaneous control of the rotational motions around the image axes. The approach is based on designing the visual features such that the visual servoing is optimal with respect to specific criteria; a few selection criteria based on the interaction matrix are proposed. This contribution opens interesting possibilities and finds immediate application in the selection of visual features in image-moment-based visual servoing.
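
A minimal numerical sketch of the (optionally weighted) photometric moments is shown below; it uses raw pixel coordinates for simplicity, whereas the thesis works with calibrated image coordinates, and the weighting function here is left generic.

```python
import numpy as np

def photometric_moments(image, order=2, weight=None):
    """Compute photometric moments m_pq = sum_{x,y} w(x, y) * x**p * y**q * I(x, y)
    up to the given order over the whole image plane.

    `image` is a 2-D intensity array.  `weight`, if given, is an array of the
    same shape implementing a spatial weighting function; a uniform weight
    reproduces the plain photometric moments."""
    I = np.asarray(image, dtype=float)
    h, w = I.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)          # pixel coordinate grids
    W = np.ones_like(I) if weight is None else np.asarray(weight, dtype=float)
    moments = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            moments[(p, q)] = float(np.sum(W * (x ** p) * (y ** q) * I))
    return moments
```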
40

Robot Visual Servoing Using Discontinuous Control

Muñoz Benavent, Pau 03 November 2017
This work presents different proposals to deal with common problems in robot visual servoing based on the application of discontinuous control methods. The feasibility and effectiveness of the proposed approaches are substantiated by simulation results and real experiments using a 6R industrial manipulator. The main contributions are:
- Geometric invariance using sliding mode control (Chapter 3): the higher-order invariance defined here is used by the proposed approaches to tackle problems in visual servoing. Proofs of the invariance condition are presented.
- Fulfillment of constraints in visual servoing (Chapter 4): the proposal uses sliding mode methods to satisfy mechanical and visual constraints in visual servoing, while a secondary task is considered to properly track the target object. The main advantages of the proposed approach are low computational cost, robustness, and full utilization of the allowed space for the constraints.
- Robust automatic tool change for industrial robots using visual servoing (Chapter 4): visual servoing and the proposed method for constraint fulfillment are applied to an automated solution for tool changing in industrial robots. The robustness of the proposed method is due to the control law of the visual servoing, which uses the information acquired by the vision system to close a feedback control loop. Furthermore, sliding mode control is simultaneously used at a higher priority level to satisfy the aforementioned constraints. Thus, the global control accurately places the tool in the warehouse while satisfying the robot constraints.
- Sliding mode controller for reference tracking (Chapter 5): an approach based on sliding mode control is proposed for reference tracking in robot visual servoing using industrial robot manipulators. The novelty of the proposal is the introduction of a sliding mode controller that uses a higher-order discontinuous control signal, i.e., joint accelerations or joint jerks, in order to obtain smoother behavior and ensure the stability of the robot system, which is demonstrated with a theoretical proof.
- PWM and PFM for visual servoing in fully decoupled approaches (Chapter 6): discontinuous control based on pulse-width and pulse-frequency modulation is proposed for fully decoupled position-based visual servoing approaches, in order to obtain the same convergence time for camera translation and rotation.
Moreover, other results obtained in visual servoing applications are also described. / Muñoz Benavent, P. (2017). Robot Visual Servoing Using Discontinuous Control [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90430
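
For orientation, the first-order sliding mode idea underlying the contributions of the record above is recalled below in its textbook form; the thesis's higher-order and geometric-invariance formulations build on, but are not reducible to, this basic law.

```latex
% Basic (first-order) sliding mode control: s(x) = 0 defines the sliding
% surface (e.g. a constraint or tracking-error manifold), and the
% discontinuous term forces the state onto it in finite time.
\[
  u = u_{eq} - K\,\operatorname{sign}\!\big(s(\mathbf{x})\big), \qquad K > 0,
\]
\[
  s\,\dot{s} \le -\eta\,|s|, \qquad \eta > 0 \quad \text{(reaching condition)}
\]
% Once s = 0 is reached, the surface is invariant under the control,
% which is the property exploited to keep visual and mechanical
% constraints satisfied.
```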
