  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Design procedure for low-cost lower-limb prostheses

Borghi, Corrado <1979> 23 April 2009 (has links)
The research activity described in this thesis provides guidelines for the design of lower-limb prostheses, with particular reference to the design of low-cost devices. Inexpensive yet efficient prostheses are needed in developing countries, but also by the less affluent in Western countries. Physiological gait, the prostheses available on the market and the methods used to evaluate their performance are analyzed in order to understand the strategies the human locomotion system adopts to walk with such artificial devices. Having observed the lack of a systematic design methodology, especially in the low-cost sector, this work proposes a methodology that is as objective and repeatable as possible, aimed at identifying the aspects of a prosthesis that are essential to guarantee the patient a good quality of life. Only these aspects should be retained, so as to simplify the prosthesis as much as possible and thereby minimize its cost. A dedicated spatial model of gait has been developed to simulate locomotion activities, walking in particular. The model comprises 7 rigid members (corresponding to the feet, shanks, thighs and pelvis) and has 24 degrees of freedom; the articular joints and the contact of the feet with the ground are modeled as spherical joints, and the foot sole provides three possible support points. The simulation criteria may include energetic, kinematic and dynamic objectives pursued by the locomotion system. This thesis focuses on the kinematic aspects and presents an application of the procedure in which the physiological gait references are first identified and gait is then simulated in the presence of a knee impairment (no knee flexion during the stance phase). The completion of the procedure and its implementation in a computer code are left to future developments.
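The 24 degrees of freedom quoted above can be checked with a quick mobility count. A minimal sketch, assuming the seven bodies are linked by six spherical joints (two hips, two knees, two ankles) — a topology consistent with, but not spelled out in, the abstract:

```python
# Mobility check for the 7-body spatial gait model (illustrative sketch;
# the joint topology - 2 hips, 2 knees, 2 ankles - is an assumption).
def spatial_dof(n_bodies, n_spherical_joints):
    """Gruebler-style count: each free rigid body contributes 6 DOF,
    each spherical joint removes 3 relative DOF."""
    return 6 * n_bodies - 3 * n_spherical_joints

bodies = 7   # 2 feet, 2 shanks, 2 thighs, 1 pelvis
joints = 6   # 2 hips + 2 knees + 2 ankles, all modeled as spherical
print(spatial_dof(bodies, joints))  # -> 24, matching the abstract
```

During stance, the spherical-joint foot-ground contacts described in the abstract would remove further degrees of freedom from this swing-phase count.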
92

Methodology for validating the reliability and safety of industrial systems and products

Pavlovic, Ana <1981> 25 May 2011 (has links)
Rapid technological progress, the development of highly sophisticated products, fierce global competition and rising customer expectations have put new pressure on manufacturers to deliver goods of ever-increasing quality. Customers have long expected to find products on the market characterized by extreme levels of reliability and safety. We are all aware that a product needs to be as safe and reliable as possible; yet, after some 30 years of studies and research, strong disagreements remain whenever we try to quantify, in engineering terms, the characteristics generically grouped under the term quality, or to calculate the concrete benefits that attention to factors such as reliability and safety brings to a business. The disagreements are just as evident when it comes to defining the most suitable tools for improving the reliability and safety of a product or process. Although the international state of the art offers a significant number of quality-improvement methodologies, all under continuous refinement, many of these "Total Quality" tools are not concretely applicable in most of the industrial contexts we have encountered. Their inapplicability is not only a matter of the smaller size of Italian companies compared with the American and Japanese firms where these tools were conceived and developed, or of the limited scope for massive R&D investment; it is also linked to the difficulty an Italian company would face in properly exploiting the results in its own territories and markets.
This work sets out to develop a simple and organic methodology for estimating the levels of reliability and safety achieved by production systems and industrial products. It also aims to go beyond the development of a purely theoretical methodology, however rigorous and complete, by applying some of its tools, in an integrated form, to concrete cases of high industrial relevance. This methodology, and more generally all the reliability-improvement tools presented here, are potentially relevant to a wide range of production fields, but they are particularly effective in sectors where high production volumes coexist with very strong product-quality requirements. For validation and application, the automotive sector was therefore chosen, as it has always been particularly sensitive to reliability and safety improvement. This choice led to conclusions whose validity goes beyond purely technical merit, touching on significant aspects of the marketability of the results, and involved companies of the first rank in the Italian industrial landscape.
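The abstract stops short of formulas; as an illustration of the kind of quantitative reliability estimate such a methodology targets, a two-parameter Weibull survival model — standard in automotive reliability work — can serve. The parameter values below are invented for illustration and are not taken from the thesis:

```python
import math

def weibull_reliability(t, eta, beta):
    """R(t) = exp(-(t/eta)^beta): probability that a unit survives
    to time t, with characteristic life eta and shape parameter beta."""
    return math.exp(-((t / eta) ** beta))

# Illustrative numbers only: characteristic life 2000 h, shape 1.5
eta, beta = 2000.0, 1.5
r = weibull_reliability(1000.0, eta, beta)
print(f"R(1000 h) = {r:.3f}")  # ~0.702: ~70% survive the first 1000 h
```

A shape parameter above 1 models wear-out failures; below 1, infant mortality — the distinction matters when deciding which quality-improvement tool to apply.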
93

Extended Momentum Model for Single and Multiple Hydrokinetic Turbines in Subcritical Flows

Cacciali, Luca 19 April 2023 (has links)
This thesis proposes equations extending the Free Surface Actuator Disc Theory to yield the drag forces and interference factors of a series of two porous discs in open channel flows. The new model includes the blockage ratio and the Froude number as independent variables, which are inferred in advance so as to yield a single solution in the prescribed domain. The theoretical extension is integrated with the Blade Element Theory in a Double Multiple Streamtube (DMS) model to predict the axial loads and performance of confined Darrieus turbines. Since the turbine thrust force influences the flow approaching the rotor, a momentum method is applied to solve the hydraulic transition in the channel, recovering the unknown inflow factor from the undisturbed flow condition imposed downstream. The upstream blockage ratio and Froude number are thus updated iteratively to adapt the DMS to subcritical applications. The DMS is further corrected to account for the energy losses due to the mechanical struts and the turbine shaft, flow curvature, turbine depth, and streamtube expansion. Sub-models from the literature are partly corrected to comply with the extended actuator disc model. The turbine model is validated against experimental data of a high-solidity cross-flow hydrokinetic turbine previously tested at increasing rotor speeds. Turbine arrays are then investigated by integrating the turbine model with wake sub-models to predict the plant layout that maximizes the array power. An assessment of multi-row plants shows that the array power improves with closely spaced turbines; in addition, highly spaced arrays allow a partial recovery of the available power, which can be exploited upstream by a further turbine array. The highest array power is identified by simulating different array layouts at constant array blockage ratio and rotor solidity.
Finally, assuming a long ideal channel, the deviation in the inflow depth is expected to become asymptotic after many arrays, implying almost identical power conversion upstream.
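For readers unfamiliar with actuator disc momentum theory, the unconfined baseline that the extended model generalizes (blockage ratio approaching zero, no free-surface effect) is the classical 1D result sketched below. This is background only and does not reproduce the thesis's extended equations:

```python
def power_coefficient(a):
    """Classical 1D actuator disc: C_P = 4 a (1 - a)^2, where a is the
    axial induction (interference) factor of the disc."""
    return 4.0 * a * (1.0 - a) ** 2

# Betz optimum: a = 1/3 gives C_P = 16/27 ~ 0.593, the unconfined
# upper bound that blockage and free-surface effects allow confined
# hydrokinetic turbines to exceed.
best_a = max(range(1001), key=lambda i: power_coefficient(i / 1000.0)) / 1000.0
print(best_a, round(power_coefficient(best_a), 4))
```

Confinement raises the achievable thrust and power above this limit, which is precisely why the blockage ratio and Froude number must enter the model as independent variables.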
94

Data fusion of images and 3D range data

Fornaser, Alberto January 2014 (has links)
A robot is a machine that embodies decades of research and development. Born as simple mechanical devices, these machines have evolved together with our technology and knowledge, reaching levels of automation never imagined before. The modern dream is cooperative robotics, where robots do not just work for people, but together with people. Such a result can be achieved only if these machines are able to acquire knowledge through perception; in other words, they need to collect sensor measurements from which they extract meaningful information about the environment in order to adapt their behavior. This thesis addresses autonomous object recognition and picking for Automated Guided Vehicles (AGVs), the robots employed nowadays in automated logistic plants. A technology capable of achieving this task would be a significant improvement over the structure currently used in the field: rigid, strongly constrained and with very limited human-machine interaction. Automating the picking process by making such vehicles smarter would open many possibilities, both in terms of plant organization and for the remarkable economic implications of cutting many of the associated fixed costs. The logistics field is indeed a niche in which the cost of the technology, due mainly to its current limitations, is the true limit to its spread. The work is therefore aimed at creating a stand-alone technology, usable directly on board modern AGVs with minimal hardware and software modifications. The elements that made this development possible are the multi-sensor approach and data fusion.
The thesis starts with an analysis of the state of the art in automated logistics, focusing on the most innovative applications and research on automating the loading/unloading of goods in modern logistic plants. What emerges from the analysis is a technological gap between the research world and industrial reality: the results and solutions proposed by the former do not seem to match the requirements and specifications of the latter. The second part of the thesis is dedicated to the sensors used: industrial cameras, planar 2D safety laser scanners and 3D time-of-flight (TOF) cameras. For each device a specific (and independent) process is developed to recognize and localize Euro pallets: the information an AGV requires in order to pick an object is the triple $[x,y,\theta]$ defining its pose in the 2D space, i.e. position and attitude. The focus is both on maximizing the reliability of the algorithms and on their capability to provide a correct estimate of the uncertainty of the results. The information content carried by the uncertainty is a key aspect of this work, in which the probabilistic characterization of the results and the adoption of the guidelines of the measurement field are the basis for a new approach to the problem. This allowed both the modification of state-of-the-art algorithms and the development of new ones, yielding a system whose identification reliability, in the final implementation and tests, proved sufficiently high to fulfill the industrial standard of 99\% positive identifications. The third part is devoted to the calibration of the system.
To ensure a reliable identification and picking process it is indeed fundamental to evaluate the relations between the sensing devices (sensor-sensor calibration), but also to relate the results to the machine (sensor-robot calibration). These calibrations are critical steps that characterize the measurement chain between the target object and the robot controller. On that chain depend the overall accuracy of the forking procedure and, more importantly, its safety. The fourth part is the core of the thesis: the fusion of the identifications obtained from the different sensors. The multi-sensor approach is a strategy that overcomes the operational limits imposed by the measurement capabilities of the individual sensors, taking the best from the different devices and thus improving the performance of the entire system. This is particularly true when independent information sources are available: once fused, they provide results far more reliable than a simple comparison of the data. Because of the different typologies of the sensors involved, Cartesian ones (the laser and the TOF camera) and perspective ones (the camera), a specific fusion strategy is developed. The main benefit of the fusion is a reliable rejection of possible false positives, which could otherwise cause very dangerous situations such as impacts with objects, or worse. A further contribution of this thesis is the risk prediction for the picking maneuver. Knowing the uncertainty in the identification process, in the calibration and in the motion of the vehicle, it is possible to evaluate the confidence interval associated with a safe forking operation, i.e. one that occurs without impact between the tines and the pallet. This is critical for the decision-making logic of the AGV, ensuring safe functioning of the machine during all daily operations. The last part of the thesis presents the experimental results.
The aforementioned topics have been implemented on a real robot, testing the behavior of the developed algorithms in various operative conditions.
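As background on the fusion step, here is a minimal sketch of combining independent sensor estimates by inverse-variance weighting, a standard textbook approach. The thesis's actual strategy also has to reconcile Cartesian and perspective sensor models, which this scalar example ignores; the numbers are invented:

```python
def fuse(estimates):
    """Fuse independent scalar estimates, given as (value, variance)
    pairs, by inverse-variance weighting. The fused variance is never
    larger than the smallest individual variance."""
    weights = [1.0 / var for _, var in estimates]
    fused_var = 1.0 / sum(weights)
    fused_val = fused_var * sum(v / var for v, var in estimates)
    return fused_val, fused_var

# Hypothetical pallet x-position [m] from laser, TOF camera and camera
x, var = fuse([(1.02, 0.004), (0.98, 0.009), (1.05, 0.025)])
print(round(x, 4), round(var, 5))
```

The key property motivating the multi-sensor approach shows up directly: the fused variance is smaller than that of the best single sensor, so independent sources genuinely add information rather than merely cross-checking each other.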
95

Drag-free Spacecraft Technologies: criticalities in the initialization of geodesic motion

Zanoni, Carlo January 2015 (has links)
Present and future space missions rely on systems of increasingly demanding performance to be successful. Drag-free technology is one of the technologies fundamental to LISA-Pathfinder, a European Space Agency mission whose launch is planned for the end of September 2015. A purely drag-free object is defined by the absence of all external forces other than gravity. This is not a natural condition, and a shield therefore has to be used to eliminate the effect of undesired interactions. In space, this is achieved by properly designing the spacecraft that surrounds the object, usually called the test mass (TM). Once the TM is subjected to gravity alone, its motion is used as a reference for the spacecraft orbit. The satellite orbit is controlled by measuring the relative TM-to-spacecraft position and feeding back the command to the propulsion system, which counteracts any non-gravitational force acting on the spacecraft. Ideally, the TM should be free from all forces and the hosting spacecraft should follow a pure geodesic orbit. In practice, the purity of the orbit depends on the spacecraft's capability of protecting the TM from disturbances, which has its limitations. According to a NASA study, such a concept is capable of decreasing operation and fuel costs while increasing navigation accuracy. At the same time, drag-free motion is required in many missions of fundamental physics. eLISA is an ESA concept mission aimed at opening a new window on the universe, black holes, and massive binary systems by means of gravitational waves. This mission will be extremely challenging and needs to be demonstrated in flight. LISA-Pathfinder is in charge of proving the concept by demonstrating that the non-gravitational disturbance can be reduced below a certain demanding threshold. The success of this mission relies on recent technologies in the fields of propulsion, interferometry, and space mechanisms.
In this frame, the system that holds the TM during launch and releases it into free-fall before the science phase represents a single point of failure for the whole mission. This thesis describes the phenomena, operations, issues, tests, activities, and simulations linked to the release, following a system engineering approach. Great emphasis is given to the adhesion (or cold welding) that interferes with the release. Experimental studies have been carried out to investigate this phenomenon in conditions representative of the LISA-Pathfinder flight environment. The last part of the thesis is dedicated to the preliminary design of the housing of the TM in the frame of a low-cost mission conceived at Stanford (USA). Analyses and results are presented and discussed throughout. The goal of this thesis is to summarize the activities aimed at a successful LISA-Pathfinder mission; the ambition is to increase the maturity of the technology needed in drag-free projects and thereby provide a starting point for future fascinating and challenging missions of this kind.
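The drag-free concept described above (measure the TM-to-spacecraft relative position, feed back thruster commands to chase the free-falling TM) can be illustrated with a toy one-dimensional loop. The gains, drag level and unit mass below are invented for illustration and have nothing to do with LISA-Pathfinder's actual controllers:

```python
def simulate_drag_free(steps=20000, dt=0.001, kp=100.0, kd=20.0, drag=1e-5):
    """Toy 1D drag-free loop: a unit-mass spacecraft feels a constant
    non-gravitational drag acceleration; thrust is commanded from the
    measured spacecraft-to-TM relative position. The TM, in pure
    geodesic motion, is taken as the origin of the frame."""
    x_sc, v_sc = 0.0, 0.0                # spacecraft state relative to TM
    for _ in range(steps):
        err = x_sc                        # relative position measurement
        thrust = -kp * err - kd * v_sc    # PD feedback (illustrative gains)
        a = thrust + drag                 # drag acts on spacecraft only
        v_sc += a * dt                    # semi-implicit Euler step
        x_sc += v_sc * dt
    return x_sc

residual = simulate_drag_free()
print(abs(residual))  # settles near drag/kp = 1e-7: a small steady offset
```

Even this toy loop shows the essential point: the residual relative displacement scales with the uncompensated disturbance, which is why minimizing disturbances on the TM, and releasing it cleanly in the first place, dominates the mission design.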
96

Optimal-Control-Based ADAS for Driver Warning and Autonomous Intervention Using Manoeuvre Jerks for Risk Assessment

Galvani, Marco January 2013 (has links)
In this research work, two ADAS have been proposed, both based on optimal control and on manoeuvre jerks as parameters for threat assessment. The first, named the "Codriver", is a system for driver warning. The second complements the first, since it is designed for autonomous vehicle intervention when the driver does not react to the warnings. The Codriver has been developed by the Mechatronics Group of the University of Trento, of which the author is part, in the framework of the European project "interactIVe", to warn the driver of all-around safety threats. It was then implemented on a real vehicle of Centro Ricerche Fiat, which was widely tested at the end of the project. For the second system, only the main components were developed by the author, during a research period at the University of Tokyo, Japan, and its application is restricted to autonomous obstacle avoidance. In particular, a motion planning algorithm is used together with a control loop designed to execute the planned trajectories. Both systems exploit Optimal Control (OC) for motion planning: the Codriver uses OC to plan real-time manoeuvres with humanlike criteria, so that they can be compared with what the driver is doing in order to infer his/her intentions and issue a warning if these are not safe; the second system instead uses OC to plan emergency manoeuvres, i.e. neglecting the driver's actuation limitations and pushing the vehicle towards its physical limits. The initial longitudinal and lateral jerks of the planned manoeuvres are used by both systems as parameters for risk assessment. Manoeuvre jerks are proportional to pedal and steering-wheel velocities, and their initial values thus describe the magnitude of the correction the driver needs to make to achieve a given goal.
Since human drivers plan and act according to minimum-jerk criteria, and are jerk-limited, increasingly severe manoeuvres at a given point eventually become unreachable for a human driver, because they require excessively high initial jerks: initial jerks can thus be considered proportional to the risk level of the current situation. For this reason, when the manoeuvres needed to handle the current scenario require jerks beyond a given threshold, the Codriver outputs a warning. This threshold must be lower than the driver's limits, so that he/she can react to the warning and still perform a safe manoeuvre. When the required jerks exceed the driver's actuation limits, the risk level rises to a higher step, where a driver warning would no longer be effective and autonomous vehicle intervention should be enabled. In obstacle avoidance scenarios, driving simulator tests demonstrated that manoeuvre jerks are more robust parameters for risk assessment than, for example, time headways, since they are less affected by the driver's age and gender.
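The jerk-threshold logic can be sketched with the classical minimum-jerk trajectory, whose initial jerk is 60 d / T^3 for a displacement d completed in time T. The 10 m/s^3 warning threshold below is an illustrative assumption, not a value from the thesis:

```python
def initial_jerk(d, T):
    """Initial jerk of the minimum-jerk trajectory
    x(t) = d*(10 s^3 - 15 s^4 + 6 s^5), with s = t/T: j(0) = 60 d / T^3."""
    return 60.0 * d / T ** 3

def warning(d, T, threshold=10.0):
    """Warn when the manoeuvre needed now demands an initial jerk beyond
    a comfortable human correction (threshold is an invented figure)."""
    return initial_jerk(d, T) > threshold

# A 3 m lateral avoidance: relaxed with 3 s to act, critical with 1.5 s
print(warning(3.0, 3.0), warning(3.0, 1.5))  # -> False True
```

The cubic dependence on the remaining time T is the crux: halving the available time raises the required initial jerk eightfold, which is why jerk discriminates risk levels so sharply.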
97

Modelling and simulation in tribology of complex interfaces

Guarino, Roberto January 2019 (has links)
Tribology is known as the science of surfaces in relative motion and involves complex interactions over multiple length and time scales. Friction, lubrication and wear of materials are therefore intrinsically multiphysics and multiscale phenomena. Several modelling and simulation tools have been developed over the last decades, always requiring a trade-off between the available computational power and the accurate replication of experimental results. Although it is nowadays possible to model elastic problems at various scales with extreme precision, further efforts are needed to take into account phenomena like plasticity, adhesion, wear, third-body friction, and boundary and solid lubrication. The situation becomes even more challenging when considering non-conventional nanostructures, as in the case of polymer surfaces and interfaces, or microstructures, as in the hierarchical organisations observed in biological systems. Specifically, biological surface structures have been shown to possess exceptional tribological properties, for instance in terms of adhesion (e.g., the gecko pad), superhydrophobicity (e.g., the lotus leaf) or fluid-dynamic drag reduction (e.g., the shark skin). This has suggested the study and development of hierarchical and/or bio-inspired structures for applications in tribology. Taking inspiration from Nature, we therefore investigate the effect of property gradients on the frictional behaviour of sliding interfaces, considering lateral variations in surface and bulk properties. 3D finite-element simulations are compared with a 2D spring-block model to show how lateral gradients can be used to tune the macroscopic coefficients of friction and control the propagation of detachment fronts. Complex microscale phenomena also govern the macroscopic behaviour of lubricated contacts; an example is solid lubrication, or third-body friction, which we study with 3D discrete-element simulations.
We show the effects of surface waviness and of the modelling parameters on the macroscopic coefficient of friction. Many other natural systems present complex interfacial interactions and tribological behaviour. Plant roots, for instance, display optimised performance during the frictional penetration of soil, thanks especially to a particular apex morphology. Starting from experimental investigations of different probe geometries, we employ the discrete-element method to compute the work expended during the penetration of a granular packing, confirming the optimality of the bio-inspired shape. This has also enabled an integrated approach including image acquisition and processing of the actual geometries, 3D printing, experiments and numerical simulations. Finally, another interesting example of an advanced biological interface with optimised behaviour is represented by biosensing structures. We employ fluid-structure interaction numerical simulations to study the response of spiders' trichobothria, which are among the most sensitive biosensors in Nature. Our results highlight the role of fluid-dynamic drag in the system performance and make it possible to determine the optimal hair density observed experimentally. Both the third-body problem and the possibility of tuning frictional properties can be considered among the next grand challenges in tribology, which is set to enter a "golden age" in the coming years. We believe the results discussed in this Doctoral Thesis could pave the way towards the design of novel bio-inspired structures with optimal tribological properties, for the future development of smart materials and innovative solutions for sliding interfaces.
98

A New Energetic Approach to the Modeling of Human Joint Kinematics: Application to the Ankle

Conconi, Michele <1979> 11 May 2010 (has links)
The objective of this dissertation is to develop and test a predictive model for the passive kinematics of human joints based on the energy minimization principle. To pursue this goal, the tibio-talar joint is chosen as the reference joint, owing to the reduced number of bones involved and its simplicity compared with other synovial joints such as the knee or the wrist. Starting from the knowledge of the articular surface shapes, the spatial trajectory of passive motion is obtained as the envelope of the joint configurations that maximize the congruence of the surfaces. An increase in joint congruence corresponds to an improved capability of distributing an applied load, allowing the joint to attain greater strength with less material. Thus, maximizing joint congruence is a simple geometric way to capture the idea of joint energy minimization. The results obtained are validated against in vitro measured trajectories. Preliminary comparisons provide strong support for the predictions of the theoretical model.
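A minimal sketch of congruence maximization, using a toy ball-in-socket profile and a small-displacement gap approximation; the geometry and the RMS-gap congruence measure below are illustrative choices, not the dissertation's actual congruence function:

```python
import math

def incongruence(dy, R=10.5, r=10.0, arc=math.pi / 3, n=121):
    """RMS gap between a ball (radius r) and a socket (radius R) over a
    contact arc. Small-displacement approximation: shifting the ball
    center by dy toward the socket along the arc bisector makes the
    local gap (R - r) - dy*cos(theta)."""
    gaps = []
    for i in range(n):
        theta = -arc + 2 * arc * i / (n - 1)
        gaps.append(((R - r) - dy * math.cos(theta)) ** 2)
    return math.sqrt(sum(gaps) / n)

# Passive motion as congruence maximization: among admissible
# displacements (up to surface contact at dy = R - r), pick the one
# with the smallest RMS gap, i.e. the most congruent configuration.
candidates = [i / 200.0 for i in range(101)]   # dy from 0 to 0.5
best = min(candidates, key=incongruence)
print(best)  # the envelope configuration drives the surfaces into contact
```

Repeating this one-parameter search over all six relative degrees of freedom, at each step along the motion, is the spirit of obtaining the passive trajectory as an envelope of maximally congruent configurations.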
99

Numerical models for the simulation of shot peening induced residual stress fields: from flat to notched targets

Marini, Michelangelo 10 June 2020 (has links)
Shot peening is a cold-working surface treatment which basically consists in pelting the surface of the to-be-treated component with a large number of small, hard particles blown at relatively high velocity. This plasticizes the surface layer of the substrate and generates a compressive residual stress field beneath the component surface. The modified surface topology can be beneficial for coating adhesion, and the work hardening enhances the fretting resistance of components, but the most commonly appreciated advantage of the process is the increased fatigue resistance of the treated component, since the compressive residual stress inhibits the nucleation and propagation of fatigue cracks. In spite of its widespread use, the mechanisms underlying the shot peening process are not completely clear. Many process parameters are involved (material, dimension and velocity of the shots, coverage, substrate mechanical behavior), and their complex mutual interaction determines the success of the process, while the increased surface roughness can jeopardize its beneficial effects. Experimental measurements are too expensive and time-consuming to cover the wide variability of the process parameters, and their feasibility is not always guaranteed. Shot peening is indeed particularly effective where geometrical details (e.g. notches or grooves) act as stress raisers, which is precisely where the direct measurement of residual stresses is very difficult. Nonetheless, knowledge of the effects of the treatment in these critical locations would be extremely useful for the quantitative assessment of shot peening and, ultimately, for the optimization of the process and its complete integration in the design process. The implementation of the finite element method for the simulation of shot peening has been studied for many years.
In this thesis the simulation of shot peening is studied, in order to progress towards a simulation approach usable in industrial practice. Specifically, the B120 micro shot peening treatment performed with micrometric ceramic beads is studied, which has proven to be very effective on aluminum alloys such as the aeronautical-grade Al7075-T651 alloy considered in this work. The simulation of shot peening on a flat surface is addressed first, using the nominal process parameters and including the stochastic variability of the shot dimensions and velocities. A MATLAB routine, based on a linearization of the impact dent dimension with respect to the shot dimension and velocity, is used to assess the coverage level prior to the simulation and to predict the number of shots needed for full coverage. To best reproduce the hardening of the substrate material under repeated impacts, the Lemaitre-Chaboche model is tuned on cyclic strain tests. Explicit dynamic finite element simulations are carried out, taking the statistical nature of the peening treatment into account. The results extracted from the numerical analyses are the final surface roughness and the residual stresses, which are compared to the experimentally measured values. A specific novel procedure is devised to account for the effects of surface roughness and radiation penetration on the in-depth residual stress profile. In addition, a static finite element model is devised to assess the stress concentration exerted by the increased surface roughness under an external load. The simulation of shot peening on an edge is then addressed as a first step towards more complex geometries. Since the true peening conditions at such locations are not known, a synergistic discrete element - finite element approach is chosen to model the process correctly. A discrete element model of peening on a flat surface is used to tune the simulation on the nominal process parameters, i.e. mass flow rate and average shot velocity, and to assess the nozzle translational velocity. Discrete element simulations are then used to model the process as the nozzle turns around the edge tip. To lower the computational cost, the process is linearized into static-nozzle simulations at different tilting angles. The number of impacting shots and their impact velocity distribution are used to set up the finite element simulations, from which the resulting residual stress field is obtained. In addition to the realistic simulation, two simplified simulation approaches are devised for practical industrial use. The resulting residual stress fields are compared with a reference residual stress field computed using thermal fields in a finite element simulation tuned with experimental XRD measurements. The effect of the fillet dimension at the edge tip is studied by modifying the finite element model of shot peening on an edge: three different fillet radii (up to 40 µm) are considered, on the basis of experimental observations, and the resulting residual stress fields are compared to analyze the effect of the precise substrate geometry. Lastly, the simplified simulation approach devised for the edge is used to simulate shot peening at the root of a notch; the resulting residual stress field is again compared to the reconstructed reference one.
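The coverage prediction step can be sketched with the Avrami-type law commonly used for shot peening, C = 1 - exp(-n a / A) for n randomly placed dents of area a on a surface of area A. The dent size below is invented for illustration and is not the thesis's linearized MATLAB model:

```python
import math

def coverage(n_shots, dent_area, surface_area):
    """Avrami-type coverage law: with randomly placed dents, the
    covered fraction is C = 1 - exp(-n * a_dent / A_surface)."""
    return 1.0 - math.exp(-n_shots * dent_area / surface_area)

def shots_to_full_coverage(dent_area, surface_area, full=0.98):
    """Shots needed for 'full' coverage, conventionally taken as 98%
    (100% is never reached exactly under this exponential law)."""
    return math.ceil(-surface_area / dent_area * math.log(1.0 - full))

# Illustrative numbers: 10 um dent radius on a 1 mm^2 target patch
a = math.pi * 0.010 ** 2          # dent area in mm^2
n = shots_to_full_coverage(a, 1.0)
print(n, round(coverage(n, a, 1.0), 4))
```

The exponential form explains why coverage specifications are stated as 98% "full coverage": the last few percent require a disproportionate number of additional impacts.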
100

Agent for Autonomous Driving based on Simulation Theories

Donà, Riccardo 16 April 2021 (has links)
The field of automated vehicles demands outstanding reliability figures from artificial driving agents. The software architectures commonly used originate from decades of automation engineering, when robots operated only in confined environments on predefined tasks. Autonomous driving, on the other hand, is an "into the wild" application for robotics, and the architectures embraced until now may not be sufficiently robust for such an ambitious goal. This research activity proposes a bio-inspired sensorimotor architecture for cognitive robots that addresses the lack of autonomy inherent to the rule-based paradigm. The new architecture finds its realization in an agent for autonomous driving named the "Co-driver". The synthesis of the Agent was extensively inspired by biological principles that give the Co-driver some cognitive abilities; worth mentioning are the "simulation hypothesis of cognition" and the "affordance competition hypothesis". The former is mainly concerned with how the Agent builds its driving skills, whereas the latter yields an interpretable agent notwithstanding the complex behaviors produced. Throughout the thesis, the Agent is explained in detail, together with the bottom-up learning framework adopted. Overall, the research effort produced an effective autonomous driving agent whose underlying architecture provides considerable adaptation capability. The thesis also discusses the implementation of the proposed ideas in a versatile software package that supports both simulation environments and real vehicle platforms. The step-by-step explanation of the Co-driver combines theoretical considerations with working simulation examples, some of which are also released open-source to the research community as a driving benchmark. Finally, guidelines are given for future research activities that may originate from the Agent and the hierarchical training framework devised.
First and foremost among these is the exploitation of the hierarchical training framework to discover optimized longer-term driving policies.
