91

Numerical models for the simulation of shot peening induced residual stress fields: from flat to notched targets

Marini, Michelangelo 10 June 2020 (has links)
Shot peening is a cold-working surface treatment that essentially consists in pelting the surface of the component to be treated with a large number of small, hard particles blown at relatively high velocity. This plasticizes the surface layer of the substrate and generates a compressive residual stress field beneath the component surface. The modified surface topology can be beneficial for coating adhesion and the work hardening enhances the fretting resistance of components, but the most commonly appreciated advantage of the process is the increased fatigue resistance of the treated component, due to the compressive residual stress that inhibits the nucleation and propagation of fatigue cracks. In spite of its widespread use, the mechanisms underlying the shot peening process are not completely clear. Many process parameters are involved (material, dimension and velocity of the shots, coverage, substrate mechanical behavior), and their complex mutual interaction determines the success of the process, while the increased surface roughness can jeopardize any beneficial effect. Experimental measurements are too expensive and time-consuming to cover the wide variability of the process parameters, and their feasibility is not always guaranteed. Shot peening is indeed particularly effective where geometrical details (e.g. notches or grooves) act as stress raisers, which is precisely where the direct measurement of residual stresses is most difficult. Nonetheless, knowledge of the effects of the treatment in these critical locations would be extremely useful for the quantitative assessment of shot peening and, ultimately, for the optimization of the process and its full integration in the design workflow. The use of the finite element method for the simulation of shot peening has been studied for many years. In this thesis the simulation of shot peening is studied in order to progress towards a simulation approach suitable for industrial practice. Specifically, the B120 micro shot peening treatment performed with micrometric ceramic beads is studied, which has proven to be very effective on aluminum alloys such as the aeronautical-grade Al7075-T651 alloy considered in this work. The simulation of shot peening on a flat surface is addressed first. The nominal process parameters are used, including the stochastic variability of the shot dimensions and velocities. A MatLab routine, based on the linearization of the impact dent dimension with respect to the shot dimension and velocity, is used to assess the coverage level prior to the simulation and to predict the number of shots needed for full coverage. To best reproduce the hardening behavior of the substrate material under repeated impacts, the Lemaitre-Chaboche model is tuned on cyclic strain tests. Explicit dynamic finite element simulations are carried out and the statistical nature of the peening treatment is taken into account. The results extracted from the numerical analyses are the final surface roughness and the residual stresses, which are compared with the experimentally measured values. A specific novel procedure is devised to account for the effect of surface roughness and radiation penetration on the in-depth residual stress profile. In addition, a static finite element model is devised to assess the concentration effect exerted by the increased surface roughness on an externally applied stress.
The simulation of shot peening on an edge is then addressed as a first step towards more complex geometries. Since the true peening conditions are not known at these locations, a synergistic discrete element–finite element approach is chosen for the correct modeling of the process. A discrete element model of the peening process on a flat surface is used to tune the simulation on the nominal process parameters, i.e. mass flow rate and average shot velocity, and to assess the nozzle translational velocity. Discrete element simulations are then used to model the process as the nozzle travels around the edge tip. To lower the computational cost, the process is linearized into static-nozzle simulations at different tilting angles. The number of impacting shots and their impact velocity distribution are used to set up the finite element simulations, from which the resulting residual stress field is obtained. In addition to the realistic simulation, two simplified simulation approaches for practical industrial use are devised. The resulting residual stress fields are compared with the reference residual stress field computed using thermal fields in a finite element simulation tuned with experimental XRD measurements. The effect of the dimension of the fillet at the edge tip is studied by modifying the finite element model of shot peening on an edge: three different fillet radii (up to 40 µm) are considered on the basis of experimental observations, and the resulting residual stress fields are compared to analyze the effect of the precise geometry of the substrate. Lastly, the simplified simulation approach devised for the edge is used to simulate shot peening at the root of a notch. The resulting residual stress field is again compared with the reconstructed reference one.
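The abstract above mentions predicting the number of shots needed to reach full coverage before running the impact simulations. As a minimal illustrative sketch (not the thesis's MatLab routine, which also linearizes the dent size with respect to shot diameter and velocity), an Avrami-type coverage law can be inverted to estimate the required number of impacts, assuming a known average dent radius:

```python
import numpy as np

def shots_for_coverage(dent_radius, target_area, coverage=0.98):
    """Invert the Avrami-type coverage law C = 1 - exp(-N * pi * r^2 / A) to
    estimate the number N of impacts needed for the requested coverage
    (98% is conventionally taken as 'full' coverage in shot peening practice)."""
    single_dent_area = np.pi * dent_radius**2
    return int(np.ceil(-np.log(1.0 - coverage) * target_area / single_dent_area))

# Example (illustrative values): 25 um average dent radius on a 1 x 1 mm patch
print(shots_for_coverage(dent_radius=0.025, target_area=1.0))
```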
92

Agent for Autonomous Driving based on Simulation Theories

Donà, Riccardo 16 April 2021 (has links)
The field of automated vehicles demands outstanding reliability figures from artificial driving agents. The software architectures commonly used originate from decades of automation engineering, when robots operated only in confined environments on predefined tasks. Autonomous driving, on the other hand, represents an "into the wild" application for robotics, and the architectures adopted so far may not be robust enough for such an ambitious goal. This research activity proposes a bio-inspired sensorimotor architecture for cognitive robots that addresses the lack of autonomy inherent in the rule-based paradigm. The new architecture finds its realization in an agent for autonomous driving named "Co-driver". The synthesis of the Agent was extensively inspired by biological principles that give the Co-driver some cognitive abilities; worth mentioning are the "simulation hypothesis of cognition" and the "affordance competition hypothesis". The former is mainly concerned with how the Agent builds its driving skills, whereas the latter yields an interpretable agent notwithstanding the complex behaviors produced. Throughout the thesis, the Agent is explained in detail, together with the bottom-up learning framework adopted. Overall, the research effort produced a well-performing autonomous driving agent whose underlying architecture provides considerable adaptation capability. The thesis also discusses the aspects related to the implementation of the proposed ideas in versatile software that supports both simulation environments and real vehicle platforms. The step-by-step explanation of the Co-driver consists of theoretical considerations supported by working simulation examples, some of which are also released open-source to the research community as a driving benchmark. Finally, guidelines are given for future research activities that may originate from the Agent and the hierarchical training framework devised, first and foremost the exploitation of that framework to discover optimized longer-term driving policies.
93

Numerical Methods for Optimal Control Problems with Application to Autonomous Vehicles

Frego, Marco January 2014 (has links)
In the present PhD thesis a suite of optimal control problems is proposed as a benchmark for testing numerical solvers. The problems are divided into four categories: classic, singular, constrained and hard problems. Apart from the hard problems, for which it is not possible to give the analytical solution but only some details, all problems are supplied with the derivation of the solution. The exact solution allows a precise comparison of the performance of the considered software. All of the proposed problems were taken from published papers or books, but it turned out that an analytic exact solution was only rarely provided, so a true and reliable comparison among numerical solvers could not be made before. A typical wrong conclusion, when a solver obtains a lower value of the target functional than other solvers, is to claim that it is better, without recognizing that it has merely underestimated the true value. In this thesis, a cutting-edge application of optimal control to vehicles is shown: the minimization of the lap time on a racing circuit subject to a number of realistic constraints. A new path-planning algorithm is fully described for the construction of a quasi-G2 clothoid-spline fitting of the GPS data, built from solutions of the G1 Hermite interpolation problem. In particular, the present algorithm is shown to outperform state-of-the-art algorithms in terms of both efficiency and precision.
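As a hedged illustration of the building block behind the clothoid-spline fitting mentioned above: a clothoid is an arc whose curvature varies linearly with arc length, and the G1 Hermite problem consists in finding the initial curvature, curvature rate and length that join two given points with prescribed headings. The sketch below only evaluates a clothoid endpoint numerically once those parameters are known; it is an illustrative assumption, not the thesis's fitting algorithm:

```python
import numpy as np

def clothoid_endpoint(x0, y0, theta0, kappa0, kappa_rate, length, n=2000):
    # A clothoid has curvature kappa(s) = kappa0 + kappa_rate * s, hence heading
    # theta(s) = theta0 + kappa0*s + 0.5*kappa_rate*s**2; the endpoint follows by
    # numerically integrating (cos(theta), sin(theta)) along the arc length.
    s = np.linspace(0.0, length, n)
    theta = theta0 + kappa0 * s + 0.5 * kappa_rate * s**2
    x = x0 + np.trapz(np.cos(theta), s)
    y = y0 + np.trapz(np.sin(theta), s)
    return x, y, theta[-1]

# Solving the G1 Hermite problem means choosing (kappa0, kappa_rate, length) so that
# the returned position and heading match the target endpoint; here we only evaluate.
print(clothoid_endpoint(0.0, 0.0, 0.0, kappa0=0.1, kappa_rate=0.05, length=5.0))
```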
94

Biometrics in wearable products: Reverse Engineering and numerical modeling

Rao, Andrea January 2011 (has links)
Reverse Engineering (RE) techniques and Finite Element Modelling (FEM) are widely used tools in many scientific fields. They were first developed for mechanics but have lately become common in other disciplines. In this thesis these techniques are used for the customization of wearable products. The geometry of any wearable product is fundamental for comfort. In particular, starting from the need for a wearable product, it is possible to analyse the relevant body part and to identify the interface geometry that maximizes comfort. The related disciplines are biometrics, biomechanics and anthropometry. In the thesis four different non-contact RE techniques are taken into account: shape from stereo, shape from silhouette, shape from laser and range finding. The first instrument developed is based on multi-stereo vision, with attention focused on data filtering and on the generation of the solid model represented by a mesh. The second instrument generates the model starting from the silhouette. These two techniques are compared with a laser instrument available on the market. The reconstruction tolerance gives an error on the total length of the foot of about 2 mm, which is acceptable for the study of a footwear product but not sufficient for scientific research. For this reason a fourth RE system based on range finding is studied. Many possible methods were analysed; the multi-frequency approach, belonging to the Fringe Projection Profilometry (FPP) family, was considered the best compromise between precision, accuracy and processing time. An instrument was developed that performs the reconstruction in a few seconds using common, inexpensive devices such as a projector and a camera. The aforementioned RE techniques made it possible to adequately reconstruct the geometrical model of the foot; the deformation of the foot is then studied using Finite Element Analysis (FEA). A model of nearly 200,000 elements was developed, and the computed deformations are consistent with literature data. However, considering the complexity of validating the FE model, due to the difficulty of measuring the real displacement of the foot under loading conditions, a direct matching between the acquired geometry and the final shape of the wearable product was preferred. A function capable of analysing the fit between foot and shoe through a coefficient called the comfort index was developed.
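Multi-frequency FPP systems of the kind selected above typically combine phase-shifting with temporal phase unwrapping. The sketch below is a generic, simplified version of those two steps (the four-step scheme and function names are illustrative assumptions, not the instrument's actual implementation):

```python
import numpy as np

def wrapped_phase(i0, i90, i180, i270):
    # Four-step phase shifting: fringe images captured at 0, 90, 180 and 270
    # degree phase shifts yield the wrapped phase of the projected pattern.
    return np.arctan2(i270 - i90, i0 - i180)

def temporal_unwrap(phi_fine, phi_coarse, freq_ratio):
    # Multi-frequency (temporal) unwrapping: a coarse phase map, assumed already
    # unwrapped (e.g. from a single-fringe pattern), resolves the 2*pi ambiguity
    # of the fine, high-frequency phase map, whose frequency is freq_ratio higher.
    fringe_order = np.round((freq_ratio * phi_coarse - phi_fine) / (2.0 * np.pi))
    return phi_fine + 2.0 * np.pi * fringe_order
```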
95

Modelling the impact of climate change on the interaction between host and pest/pathogen phenologies at regional level: 'Trentino' - Italy

RINALDI, MONICA FERNANDA 21 February 2013 (has links)
Control of agricultural pests and diseases is often based on forecasting models that rely on real-time monitoring of input variables. This information generally combines local meteorological databases with mathematical models designed to forecast pest and disease risk, and the decision process starts when the models issue an alert for a potential risk event. Epidemiological models based on local datasets have been created and validated worldwide; in the USA, for example, the University of California developed the online Integrated Pest Management (IPM) program, where each farmer can consult his own database and make pest management decisions based on site-specific conditions. Difficulties arise when no data from a nearby weather station are available, in mountain areas where weather conditions depend strongly on altitude, or when data are not in a standard format to feed the model. With a view to a regional perspective and increased accuracy in pest control management, the goal of this thesis was to run epidemiological models (for the pest Lobesia botrana and the pathogen causing powdery mildew, Erysiphe necator) simultaneously with phenological models (grapevine cv. Chardonnay), using environmental variables such as temperature, and to create maps at regional level with 200 m spatial resolution and daily frequency. Running both models together helps to pinpoint the period in which the host is susceptible to the pest or the disease and to estimate the real final risk. After calibrating and validating the models in the Trentino-Alto Adige region (Italy) with local weather data, the forecasted climate was projected and statistically downscaled, based on the output of the Hadley Centre climate model HadAM3 (Pope et al., 2000) under scenarios A2 and B2. The statistical downscaling algorithm was the "transfer function" method (Eccel et al., 2009) at daily resolution. To complete the analysis, the downscaled ENSEMBLES scenario was also used, together with the datasets of 49 weather stations from FEM and the "RMAWGEN" package (Cordano et al., 2012), created for this project in the R statistical open-source software (Gentleman et al., 1997). In order to map the models, a user-friendly modular WEB-GIS platform called ENVIRO was developed.
The modules are Open Source, follow international Open Geospatial Consortium (OGC) standards and were implemented as follows: i) enviDB is the database for spatio-temporal data; ii) enviGRID allows users to navigate through data and models in space and time; iii) enviMapper is the web interface for decision makers, a state-of-the-art client to map vulnerability to climate change at different aggregation scales in time and space; finally, iv) enviModel is the web interface for researchers, providing a platform to process and share environmental risk models using web geo-processing technologies (WPS) following OGC standards. With the aim of further improving the accuracy of pest and disease spraying volumes, in accordance with Directive 2009/128/EC, the present work shows that a LIDAR sensor can be used to characterize the geometry of the grapevine and the Leaf Area Index (LAI) at each growth stage and to calculate the Tree Row Volume (TRV), visualized in 3D maps in GRASS (Neteler et al., 2008; Neteler et al., 2012).
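Temperature-driven phenological and epidemiological models of the kind coupled here generally rest on some form of thermal summation. As a purely illustrative example (not necessarily the specific formulation used in this thesis), a growing degree-day accumulation driving phenological stages can be written as

```latex
\mathrm{GDD}(d) \;=\; \sum_{i=1}^{d} \max\!\left(0,\; \frac{T_{\max,i}+T_{\min,i}}{2} - T_{\mathrm{base}}\right),
```

with a given phenological or infection stage assumed to be reached once GDD(d) exceeds a stage-specific threshold; the daily 200 m temperature maps then allow such sums to be evaluated cell by cell across the region.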
96

Large strain computational modeling of high strain rate phenomena in perforating gun devices by Lagrangian/Eulerian FEM simulations

Gambirasio, Luca January 2013 (has links)
The present doctoral thesis deals with the study and analysis of the large strain and high strain rate behavior of materials and components. Theoretical, experimental and computational aspects are taken into consideration. Particular reference is made to the modeling of metallic materials, although other kinds of materials are considered as well. The work may be divided into three main parts. The first part consists in a critical review of the constitutive modeling of materials subjected to large strains and high to very high strain rates. Specific attention is paid to the opportunity of adopting so-called strength models and equations of state. Damage and failure modeling is discussed as well. In this part, specific attention is devoted to reviewing the so-called Johnson-Cook strength model, critically highlighting its positive and negative aspects. One of the main issues tackled is a reasoned assessment of the various procedures that can be adopted to calibrate the parameters of the model. This phase is enriched and clarified by applying different calibration strategies to a real case, i.e. the evaluation of the model parameters for a structural steel. The consequences of each calibration approach are then carefully evaluated and compared. The second part of the work introduces a new strength model, which is a generalization of the Johnson-Cook model. The motivations for the introduction of this model are first presented and discussed, and the features of the new strength model are then described. Afterwards, the various procedures available for determining the material parameters are presented. The new strength model is then applied to a real case, i.e. the same structural steel as above, and the results are compared with those obtained from the original Johnson-Cook model. The outcomes show that the new model reproduces the experimental data better; results are discussed and commented upon. The third and final part of the work deals with an application of the studied topics to a real industrial case of interest. A device called a perforating gun is analyzed in terms of its structural problems and critical aspects. This challenging application involves the modeling of several types of material, large strains, very high strain rate phenomena, high temperatures, explosions, hypervelocity impacts, damage, fracture and phase changes. In this regard, computational applications of the studied theories are presented and their outcomes are assessed and discussed. Several finite element techniques are considered; in particular, three-dimensional Eulerian simulations are presented. The obtained results appear very promising for fruitful use in the design process of the device, in particular for the optimization of its key features.
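Since the Johnson-Cook strength model is central to the first two parts, a minimal sketch of its standard flow-stress evaluation is given below; the parameter values in the example are placeholders loosely typical of a structural steel, not the calibrated values obtained in the thesis:

```python
import numpy as np

def johnson_cook_flow_stress(eps_p, eps_rate, T,
                             A, B, n, C, m,
                             eps_rate_ref=1.0, T_room=293.0, T_melt=1800.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate/eps_rate_ref)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room) / (T_melt - T_room)."""
    strain_term = A + B * eps_p**n
    rate_term = 1.0 + C * np.log(max(eps_rate / eps_rate_ref, 1e-12))
    T_star = min(max((T - T_room) / (T_melt - T_room), 0.0), 1.0)
    return strain_term * rate_term * (1.0 - T_star**m)

# Placeholder parameters in MPa (illustrative only, not the thesis calibration)
print(johnson_cook_flow_stress(eps_p=0.1, eps_rate=1e3, T=400.0,
                               A=350.0, B=275.0, n=0.36, C=0.022, m=1.0))
```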
97

Design of Suspension Systems and Control Algorithms for Heavy Duty Vehicles

Grott, Matteo January 2010 (has links)
This work is focused on the development of controllable suspension systems for heavy-duty vehicles, in particular for agricultural tractors. Research in this field is still incomplete, as confirmed by the scarce scientific literature and the few commercial applications for this kind of vehicle on the market. For off-highway vehicles the load conditions can vary considerably and affect the dynamic behaviour of the vehicle. Moreover, in many cases (such as agricultural tractors), only the front axle is provided with a suspension. Typical applications of suspensions in the off-highway industry include the cabin suspension (known as the secondary suspension system) and the front axle suspension (known as the primary suspension system). Up to now, performance improvements have been achieved through new solutions for the secondary systems, while the primary systems are generally implemented as passive systems, for economic reasons and because of their limited energy demand. Such technical solutions only partially satisfy the system requirements. Moreover, during the past few years there has been an increasing demand for higher power, loads and driving speeds in heavy-duty vehicles. Off-highway vehicle manufacturers have therefore shown interest in employing controllable suspensions as a potential way to reach the desired dynamic performance. The main targets of this activity are the study of the dynamic behaviour of agricultural tractors and the design of a cost-effective controllable suspension capable of adapting the tractor's dynamic behaviour under different operating conditions. This work is part of a collaboration between Dana Corp. and the University of Trento. The main objective is the acquisition of competence in vehicle dynamic control, in particular the development of mechatronic systems according to the Model-Based Design approach and the rapid prototyping of control algorithms. For this purpose, a simulation and experimental environment was developed for testing suspension systems and control algorithms for primary suspensions. The first part of the thesis reviews the state of the art of suspension systems for heavy-duty vehicles, covering different technologies and control solutions. In particular, attention was focused on the analysis and experimental characterization of commercial applications available on the market for this kind of vehicle. In the second part of the thesis the design of a hydro-pneumatic suspension system is presented. The design of the control algorithms is based on the development of different multibody models of the actual tractor, including the pitch motion of the sprung mass, the load transfer effects during braking and forward-reverse maneuvers and the non-linear dynamics of the system. For an advanced analysis, a novel thermo-hydraulic model of the hydraulic system has been implemented. Several damping controls are analyzed for the specific case study. The most promising damping strategy is then integrated with other control functions, namely a self-leveling control, an original control algorithm for the reduction of pitch motion, an anti-impact system for the hydraulic actuator and an on-line adaptation scheme that preserves an optimal damping ratio of the suspension even under large variations in operating conditions.
In accordance with the system requirements, the control is first integrated with other functionalities, such as the calibration of the suspension set-points and the procedures for locking the suspension. Then, in line with industrial product development practice, the control scheme is recast as a Finite State Machine, useful for the subsequent generation of the ECU (Electronic Control Unit) embedded code. The final section of this work presents the development of an industrial prototype of the suspension system, composed of a hydraulic suspension unit and a controller (hardware and software). The prototype is tested using a suspension test bench and rapid prototyping tools for real-time control systems. Conclusions and final remarks are reported in the last section.
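Among the damping controls commonly evaluated for semi-active and controllable suspensions, the two-state skyhook logic is a classic baseline. The sketch below is an illustrative example of that family, not necessarily one of the specific strategies analyzed in the thesis:

```python
def skyhook_damping_force(v_sprung, v_rel, c_sky, c_min):
    # Two-state skyhook logic: demand high damping only when the sprung-mass
    # velocity and the relative (suspension) velocity have the same sign, i.e.
    # when the damper can actually extract energy from the body motion;
    # otherwise fall back to the minimum damping the valve can provide.
    if v_sprung * v_rel > 0.0:
        return c_sky * v_sprung
    return c_min * v_rel

# Example: body moving up at 0.3 m/s while the suspension extends at 0.2 m/s
print(skyhook_damping_force(v_sprung=0.3, v_rel=0.2, c_sky=4000.0, c_min=800.0))
```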
98

Symbolic Computation Methods for the Numerical Solution of Dynamic Systems Described by Differential-Algebraic Equations

Stocco, Davide 01 August 2024 (has links)
In modern engineering, the accurate and efficient numerical simulation of dynamic systems is crucial, providing valuable insights across various fields such as automotive, aerospace, robotics, and electrical engineering. These simulations help to understand system behaviors, to optimize performance, and to guide design decisions. Systems described by Ordinary Differential Equations (ODEs) and Differential-Algebraic Equations (DAEs) are central to such simulations. While ODEs can be solved with relative ease, they often fall short of modeling systems with constraints or algebraic relationships. DAEs offer a more comprehensive framework, making them suitable for a wider range of dynamic systems; however, their inherent complexity poses significant challenges for numerical integration and solution. In vehicle dynamics, the simulation of systems described by DAEs is particularly relevant. The advances in autonomous and high-performance cars rely heavily on robust simulations that accurately reflect the interactions between mechanical components, control systems, and environmental factors. Achieving accuracy and speed in these simulations is critical for Real-Time (RT) applications, where rapid decision-making and control are essential. The challenges faced in vehicle dynamics simulations, such as the stiffness and complexity of the equations, are representative of broader issues in dynamic system simulations. This thesis addresses these challenges by integrating symbolic computation with numerical methods to solve DAEs efficiently and accurately. Specifically, the index reduction approach transforms high-index DAEs into low-index formulations more suitable for numerical integration, enhancing the speed and stability of solvers. Symbolic computation, which handles mathematical expressions in their exact form, aids this process by simplifying the involved expressions and detecting redundancies and symbolic cancellations, thereby ensuring the consistency of the equations while keeping complexity to a minimum. Hence, combining symbolic and numerical methods leverages the strengths of both techniques, aiming at improved performance and reliability. Such a hybrid framework is designed to handle the specific requirements of vehicle dynamics and other engineering applications. The thesis encompasses several advancements in dynamic system simulation by integrating symbolic computation with numerical methods to reduce computational overhead and improve performance. The research focuses on developing new algorithms for DAE index reduction, transforming high-index DAEs into forms more suitable for standard numerical integration methods. Specifically, the index reduction process is based on symbolic matrix factorization with simultaneous mitigation of expression swell. This novel methodology is validated through practical applications, applying the proposed technique to real-world simulation problems to assess its performance, accuracy, and efficiency. Additionally, the research aims to enhance Hard Real-Time (HRT) vehicle dynamic simulation by designing dedicated algorithms and models for simulating tire-road interactions and the deformation of vehicle structures, improving both speed and fidelity. Altogether, this thesis introduces several open-source software libraries made available to the research community with comprehensive documentation and examples. In summary, this work bridges the gap between symbolic computation and numerical methods for the simulation of dynamic systems described by DAEs.
Thanks to mixed symbolic-numeric frameworks, innovative algorithms, and practical tools, it contributes to the advancement of simulation techniques, setting the stage for further investigations and applications in engineering.
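To make the index-reduction idea concrete, the sketch below applies symbolic differentiation to the classic index-3 Cartesian pendulum DAE using SymPy; it is a textbook illustration of the principle, not the factorization-based algorithm developed in the thesis:

```python
import sympy as sp

t = sp.symbols('t')
m, g, L = sp.symbols('m g L', positive=True)
x, y, lam = (sp.Function(name)(t) for name in ('x', 'y', 'lam'))

# Cartesian pendulum: Newton's equations with a Lagrange multiplier enforcing
# the position-level constraint x^2 + y^2 - L^2 = 0 (an index-3 DAE).
eom_x = sp.Eq(m * x.diff(t, 2), -2 * lam * x)
eom_y = sp.Eq(m * y.diff(t, 2), -2 * lam * y - m * g)
position_constraint = x**2 + y**2 - L**2

# Differentiating the constraint lowers the index: one derivative exposes the
# hidden velocity-level constraint, two derivatives the acceleration-level one,
# which together with the equations of motion form an index-1 system.
velocity_constraint = sp.diff(position_constraint, t)
acceleration_constraint = sp.diff(position_constraint, t, 2)

print(sp.simplify(velocity_constraint))      # 2*(x*x' + y*y')
print(sp.simplify(acceleration_constraint))  # 2*(x*x'' + y*y'' + x'^2 + y'^2)
```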
99

Modeling, Optimization and Control of Hybrid Powertrains

De Pascali, Luca 14 October 2019 (has links)
To cope with the increasing demand for more sustainable mobility, the main Original Equipment Manufacturers are producing vehicles equipped with hybrid propulsion systems that increase the overall vehicle efficiency and mitigate the emission problem at a local level. The newly gained degrees of freedom of the hybrid powertrain need to be handled by advanced energy management techniques that make it possible to fully exploit the system's capabilities. In this thesis we propose an optimal control approach to the solution of the energy management problem, putting emphasis on the importance of accurate models for the reliability of the optimization solution. In the first part of the thesis we address the energy management problem for a hybrid electric vehicle, including the mitigation of battery aging mechanisms. We show that, with an optimal management strategy, the battery life could be extended by up to 25% for some driving cycles while keeping the fuel-saving performance substantially unaltered. In the second part of the thesis we focus on the hydrostatic hybrid transmission, a different hybridization solution that is able to fulfill the high power demand of heavy-duty off-highway vehicles. Also in this case, we formulate the energy management problem as an optimal control problem, dealing with the complexity introduced by the discrete valve actuations in the framework of mixed-integer optimal control. We show that, by using hydraulic accumulators to recover energy from regenerative braking, fuel consumption could be reduced by up to 13% for a typical driving cycle. In the third and last part of the thesis we show how the optimization approach can be used to systematically design and calibrate control algorithms, casting the calibration problem into a Linear Matrix Inequality formulation. We first develop a non-overshooting closed-loop control for the actuation pressure of a wet clutch, proving the effectiveness of the control on an experimental setup. Finally, we focus on the design of a dead-zone based kinematic observer for the estimation of the lateral velocity of a road vehicle. The structure of the observer provides good noise rejection, allowing the selection of a higher observer gain that improves the estimation accuracy.
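For reference, energy management problems of the kind tackled in the first two parts take the following generic optimal-control form (written here schematically for an electric hybrid; the symbols and the simple state-of-charge dynamics are illustrative, not the exact formulation of the thesis):

```latex
\begin{aligned}
\min_{P_{\mathrm{batt}}(\cdot)} \quad & \int_{0}^{T} \dot m_f\bigl(P_{\mathrm{eng}}(t)\bigr)\,\mathrm{d}t \\
\text{s.t.} \quad & \dot{\mathrm{SOC}}(t) = -\frac{P_{\mathrm{batt}}(t)}{E_{\mathrm{batt}}}, \qquad
P_{\mathrm{eng}}(t) + P_{\mathrm{batt}}(t) = P_{\mathrm{dem}}(t), \\
& \mathrm{SOC}(0) = \mathrm{SOC}(T), \qquad
\mathrm{SOC}_{\min} \le \mathrm{SOC}(t) \le \mathrm{SOC}_{\max},
\end{aligned}
```

where the fuel rate, the driver power demand and the battery energy capacity are denoted by the assumed symbols above; battery-aging penalties or discrete valve actuations enter as additional states, cost terms or integer controls.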
100

High order numerical methods for a unified theory of fluid and solid mechanics

Chiocchetti, Simone 10 June 2022 (has links)
This dissertation is a contribution to the development of a unified model of continuum mechanics, describing both fluids and elastic solids as general continua, with a simple choice of material parameters distinguishing between inviscid or viscous fluids and elastic solids or visco-elasto-plastic media. Additional physical effects such as surface tension, rate-dependent material failure and fatigue can be, and have been, included in the same formalism. The model extends a hyperelastic formulation of solid mechanics in Eulerian coordinates to fluid flows by means of stiff algebraic relaxation source terms. The governing equations are then solved by means of high order ADER Discontinuous Galerkin and Finite Volume schemes on fixed Cartesian meshes and on moving unstructured polygonal meshes with adaptive connectivity, the latter constructed and moved by means of an in-house Fortran library for the generation of high quality Delaunay and Voronoi meshes. Further, the thesis introduces a new family of exponential-type and semi-analytical time-integration methods for the stiff source terms governing friction and pressure relaxation in Baer-Nunziato compressible multiphase flows, as well as for the relaxation terms of the unified model of continuum mechanics associated with viscosity, plasticity and heat conduction effects. Theoretical considerations about the model are also given, from the solution of weak hyperbolicity issues affecting some special cases of the governing equations, to the computation of accurate eigenvalue estimates, to the discussion of the geometrical structure of the equations and of involution constraints of curl type, which are enforced both via a GLM curl cleaning method and by means of special involution-preserving discrete differential operators implemented in a semi-implicit framework. Concerning applications to real-world problems, this thesis includes simulations ranging from low-Mach viscous two-phase flows, to shock waves in compressible viscous flow on unstructured moving grids, to diffuse-interface crack formation in solids.
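The idea behind exponential-type integrators for stiff relaxation source terms can be illustrated on the scalar linear prototype below (a deliberately minimal sketch; the thesis's schemes treat the nonlinear relaxation terms of the full system):

```python
import numpy as np

def exponential_relaxation_step(q, q_eq, tau, dt):
    # Exact update for the stiff linear relaxation ODE dq/dt = -(q - q_eq)/tau:
    # the solution decays exponentially toward the equilibrium value q_eq, and
    # this update stays stable for any dt, whereas explicit Euler needs dt < 2*tau.
    return q_eq + (q - q_eq) * np.exp(-dt / tau)

# Example: relax q = 1 toward q_eq = 0 with tau = 1e-6 using a much larger time step
print(exponential_relaxation_step(q=1.0, q_eq=0.0, tau=1e-6, dt=1e-3))
```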
