
Modeling the interaction of light with photonic structures by direct numerical solution of Maxwell's equations

Vaccari, Alessandro January 2015 (has links)
The present work analyzes and describes a method for the direct numerical solution of Maxwell's equations of classical electromagnetism: the FDTD (Finite-Difference Time-Domain) method, along with its implementation in an "in-house" computing code for large parallelized simulations. Both are then applied to the modeling of photonic and plasmonic structures interacting with light. These systems are often too complex, both geometrically and in their material composition, to be mathematically tractable, and an exact analytic solution, in closed form or as a series expansion, cannot be obtained. The only way to gain insight into their physical behavior is thus to seek an approximate, yet convergent, numerical solution. This is a current trend in modern physics because, apart from perturbative methods and asymptotic analysis, which represent, where applicable, the typical instruments for dealing with complex physico-mathematical problems, the only general way to approach such problems is the direct approximate numerical solution of the governing equations. Today this choice is made possible by the enormous and widespread computational capabilities offered by modern computers, in particular High Performance Computing (HPC) on parallel machines with a large number of CPUs working concurrently. Computer simulations are now a sort of virtual laboratory, which can be rapidly and cheaply set up to investigate various physical phenomena, and computational physics has thus become a third way between the experimental and theoretical branches. The plasmonics application of the present work concerns the scattering and absorption analysis of single and arrayed metal nanoparticles, when surface plasmons are excited by an impinging beam of light, in order to study the radiation distribution inside a silicon substrate behind them; this has potential applications in improving the efficiency of photovoltaic cells. The photonics application concerns the analysis of the optical reflectance and transmittance properties of an opal crystal. This is a regular and ordered lattice of macroscopic particles which can stop light propagation in certain wavelength bands, and whose study has potential applications in the realization of low-threshold lasers, optical waveguides and sensors. For the latter, in fact, the crystal response is tied to its structural parameters and symmetry, and varies as these are varied. The present work on the FDTD method represents an enhancement of a previous one carried out for my MSc Degree Thesis in Physics, which has now also been geared toward the visible and neighboring parts of the electromagnetic spectrum. It is organized in the following fashion. Part I provides an exposition of the basic concepts of electromagnetism which constitute the minimal, although partial, theoretical background needed to formulate the physics of the systems analyzed here or to be analyzed in possible further developments of the work. It summarizes Maxwell's equations in matter and the time-domain description of temporally dispersive media. It also addresses the plane-wave representation of an electromagnetic field distribution, mainly in the far field. The Kirchhoff formula is described and derived, in order to calculate the angular radiation distribution around a scatterer. Gaussian beams in the paraxial approximation are also briefly treated, along with their focalization by means of an approximate diffraction formula useful for their numerical FDTD representation.
Finally, a thorough description of planarly multilayered media is included, which can play an important ancillary role in the homogenization procedure of a photonic crystal, as described in Part III, but also in other optical analyses. Part II concerns the description and implementation of the FDTD numerical method itself. Various aspects of the method are treated which together contribute to a working and robust overall algorithm. Particular emphasis is given to those topics representing an enhancement of previous work. These are: the analysis from the existing literature of a new class of absorbing boundary conditions, the so-called Convolutional Perfectly Matched Layer, and their implementation; the analysis from the existing literature and the implementation of the Auxiliary Differential Equation Method for the inclusion of media with frequency-dependent electric permittivity, according to various general polarization models; the description and implementation of a "plane wave injector" for representing impinging beams of light propagating in an arbitrary direction, which can also be used to represent, by superposition, focalized beams; and the parallelization of the FDTD numerical method by means of the Message Passing Interface (MPI) which, using the suitable user-defined MPI data structures proposed here, results in a robust and scalable code, running on massively parallel High Performance Computing machines such as the IBM BlueGene/Q with a core count of order 2×10^5. Finally, Part III gives the details of the specific plasmonics and photonics applications carried out with the "in-house" developed FDTD algorithm, to demonstrate its effectiveness. After Chapter 10, devoted to the validation of the FDTD code implementation against a known solution, Chapter 11 deals with plasmonics, with the analytical and numerical study of single and arrayed metal nanoparticles of different shapes and sizes, when surface plasmons are excited on them by a light beam; the presence of a passivating embedding silica layer and of a silicon substrate is also included. Chapter 12 deals with the FDTD modeling of a face-centered cubic (FCC) opal photonic crystal sample, with a comparison between the numerical and experimental transmittance/reflectance behavior. A homogenization procedure for the discontinuous lattice structure of the crystal is suggested, by means of an averaging procedure and a planarly multilayered media analysis, through which the reflecting characteristics of the crystal sample can be better understood. Finally, a procedure for the numerical reconstruction of the crystal's banded ω-k dispersion curve inside the first Brillouin zone is proposed. Three appendices providing details on specific topics dealt with during the exposition conclude the work.
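As a minimal illustration of the leap-frog update at the heart of the FDTD scheme discussed above, the sketch below advances the electric and magnetic fields on a one-dimensional vacuum Yee grid with a soft Gaussian source; the grid size, time step and source parameters are invented for illustration, and none of the CPML boundaries, dispersive-media models or MPI parallelization of the thesis code are reproduced.

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme) in vacuum: illustrative only.
nz, nsteps = 400, 1000
c0 = 299792458.0              # speed of light (m/s)
dz = 1e-3                     # spatial step (m)
dt = 0.99 * dz / c0           # time step satisfying the 1D Courant condition
eps0 = 8.8541878128e-12
mu0 = 4e-7 * np.pi

ez = np.zeros(nz)             # E samples at integer grid points
hy = np.zeros(nz - 1)         # H samples at half-integer points, half a time step later

for n in range(nsteps):
    # update H from the spatial differences of E (leap-frog in time)
    hy += dt / (mu0 * dz) * (ez[1:] - ez[:-1])
    # update E from the spatial differences of H (interior points; ends act as PEC walls)
    ez[1:-1] += dt / (eps0 * dz) * (hy[1:] - hy[:-1])
    # soft Gaussian-pulse source injected at the grid centre
    ez[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)

print("max |Ez| on the grid after propagation:", np.abs(ez).max())
```

Absorbing boundaries, frequency-dependent permittivity and plane-wave injection are all layered on top of this basic staggered update.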

Computer Simulation of Biological Systems

Battisti, Anna January 2012 (has links)
This thesis investigates two biological systems using atomistic modelling and molecular dynamics simulation. The work is focused on: (a) the study of the interaction between a segment of a DNA molecule and a functionalized surface; (b) the dynamical modelling of the protein tau, an intrinsically disordered protein. We briefly describe here the two problems; for their detailed introduction we refer respectively to chapter DNA and chapter TAU. The interest in studying the adsorption of DNA on functionalized surfaces is related to the considerable effort that in recent years has been devoted to developing technologies for faster and cheaper genome sequencing. In order to sequence a DNA molecule, it has to be extracted from the cells where it is stored (e.g. blood cells). As a consequence, any genomic analysis requires a purification process to remove proteins, lipids and any other contaminants from the DNA molecule. The extraction and purification of DNA from biological samples is hence the first step towards efficient and cheap genome sequencing. Using the chemical and physical properties of DNA, it is possible to generate an attractive interaction between this macromolecule and a properly treated surface. Once positioned on the surface, the DNA can be more easily purified. In this work we set up a detailed molecular model of DNA interacting with a surface functionalized with amino silanes. The intent is to investigate the free energy of adsorption of small DNA oligomers as a function of the pH and ionic strength of the solution. The tau protein belongs to the category of Intrinsically Disordered Proteins (IDPs), which in their native state do not have a stable average structure and fluctuate between many conformations. In its physiological state, the tau protein helps nucleate and stabilize the microtubules in the axons of neurons. On the other hand, the same tau, in a pathological aggregation, is involved in the development of Alzheimer's disease. IDPs do not have a definite 3D structure, therefore their dynamical simulation cannot start from a known list of atomic positions, such as a Protein Data Bank file. We first introduce a procedure to find an initial dynamical state for a generic IDP, and we apply it to the tau protein. We then analyze the dynamical properties of tau, such as the propensity of residues to form temporary secondary structures like beta-sheets or alpha-helices.

Protein structural dynamics and thermodynamics from advanced simulation techniques

Cazzolli, Giorgia January 2013 (has links)
In this work we apply simulation techniques, namely Monte Carlo simulations and a path-integral-based method called the Dominant Reaction Pathways (DRP) approach, in order to study aspects of the dynamics and thermodynamics of three different families of peculiar proteins. These proteins differ, for reasons such as the presence of an intermediate state in the folding path, topological constraints or large size, from ideal systems such as small globular proteins that fold in a two-state manner. The first topic concerns the colicin immunity proteins IM9 and IM7, very similar in structure but with an apparently different folding mechanism. Our simulations suggest that the two proteins should fold with a similar mechanism via a populated on-pathway intermediate state. Then, two classes of pheromones, from organisms living in temperate and Arctic waters respectively, are investigated. The two types of pheromones, despite their high structural similarity, show a different thermodynamic behavior, which could be explained, according to our results, by considering the role played by the location of the CYS-CYS bonds along the chain. Finally, the conformational changes occurring in serpin proteins are studied. Serpins are very flexible and large, with more than 350 residues, and have slow dynamics, from hours to weeks, well beyond the reach of simulation techniques to date. In this thesis we present the first all-atom simulations, obtained with the DRP approach, of the serpin conformational mechanism, and a complete characterization of serpin dynamics is performed. Moreover, important implications for medical research, in particular for drug design, are drawn from this detailed analysis.

Network identification via multivariate correlation analysis

Chiari, Diana Elisa January 2019 (has links)
In this thesis an innovative approach to assess connectivity in a complex network is proposed. In network connectivity studies, a major problem is to estimate the links between the elements of a system in a robust and reliable way. To address this issue, a statistical method based on Pearson's correlation coefficient is proposed. The former inherits the versatility of the latter, namely its general applicability to any kind of system and its capability to evaluate the cross-correlation of pairs of time series both simultaneously and at different time lags. In addition, our method has an increased "investigation power", allowing correlation to be estimated at different time-scale resolutions. The method was tested on two very different kinds of systems: the brain and a set of meteorological stations in the Trentino region. In both cases, the purpose was to reconstruct the existence of significant links between the elements of the two systems at different temporal resolutions. In the first case, the signals used to reconstruct the networks are magnetoencephalographic (MEG) recordings acquired from human subjects in resting state. Zero-delay cross-correlations were estimated on a set of MEG time series corresponding to the regions belonging to the default mode network (DMN) to identify the structure of the fully connected brain networks at different time-scale resolutions. Great attention was devoted to testing the significance of the correlations, estimated by means of surrogates of the original signal. The network structure is defined through the selection of four parameter values: the level of significance α, the efficiency η0, and two ranking parameters, R1 and R2, used to merge the results obtained from the whole dataset into a single average behavior. In the case of MEG signals, the functional fully connected networks estimated at different time-scale resolutions were compared to identify the best observation window at which the network dynamics can be highlighted. The resulting best time scale of observation was ∼30 s, in line with results reported in the scientific literature. The same method was also applied to meteorological time series to assess possible wind-circulation networks in the Trentino region. Although this study is preliminary, the first results identify an interesting clustering of the meteorological stations used in the analysis.
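As a minimal sketch of the pairwise building block such an analysis rests on, the code below estimates the lagged Pearson cross-correlation of two series and compares it with a shuffle-based surrogate threshold; it assumes plain random shuffling as the null model and invented toy data, and does not reproduce the scale-resolved analysis or the parameters α, η0, R1 and R2 of the method.

```python
import numpy as np

def lagged_xcorr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

def surrogate_threshold(x, y, lag, n_surr=200, alpha=0.05, seed=None):
    """Significance threshold from shuffled surrogates (null: no correlation)."""
    rng = np.random.default_rng(seed)
    null = [lagged_xcorr(rng.permutation(x), y, lag) for _ in range(n_surr)]
    return np.quantile(np.abs(null), 1.0 - alpha)

# toy example: two noisy signals, the second lagging the first by 5 samples
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.roll(x, 5) + 0.5 * rng.standard_normal(2000)

c = lagged_xcorr(x, y, lag=5)
thr = surrogate_threshold(x, y, lag=5)
print(f"correlation at lag 5: {c:.2f}, significant: {abs(c) > thr}")
```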

Silicon nanocrystals downshifting for photovoltaic applications

Sgrignuoli, Fabrizio January 2013 (has links)
In conventional silicon solar cells, the collection probability of light-generated carriers shows a drop in the high-energy range 280-400 nm. One of the methods to reduce this loss is to implement nanometre-sized semiconductors on top of a solar cell, where high-energy photons are absorbed and low-energy photons are re-emitted. This effect, called luminescence down-shifting (LDS), modifies the incident solar spectrum, producing an enhancement of the energy-conversion efficiency of the cell. We investigate this innovative effect using silicon nanoparticles dispersed in a silicon dioxide matrix as the active material. In particular, I proposed to model these structures using a transfer matrix approach to simulate their optical properties, in combination with a 2D device simulator to estimate the electrical performance. Based on the optimized layer sequences, high-efficiency cells characterized by silicon quantum dots as the active layer were produced within the European project LIMA. Experimental results demonstrate the validity of this approach by showing an enhancement of the short-circuit current density of up to 4%. In addition, a new configuration was proposed to improve the solar cell performance, in which the silicon nanoparticles are placed on a cover glass and not directly on the silicon cell. The aim of this study was to separate the silicon nanocrystal (Si-NC) layer from the cell. In this way, the solar device is not affected by the Si-NC layer during the fabrication process, i.e. the surface passivation quality of the cell remains unaffected after the application of the LDS layer. Using this approach, the downshifting contribution can be quantified separately from the passivation effect, unlike with the previous method based on the deposition of Si-NCs directly on the solar devices. By a suitable choice of the dielectric structures, an improvement in short-circuit current of up to 1% due to the LDS effect is demonstrated and simulated.
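The following is a minimal sketch of the transfer-matrix idea mentioned above for a lossless thin-film stack at normal incidence; the refractive indices, thicknesses and wavelengths are arbitrary illustrative values, and the optimized layer sequences and coupled 2D device simulation of the thesis are not reproduced.

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, n_sub, wavelength, n_in=1.0):
    """Normal-incidence reflectance of a lossless thin-film stack
    (characteristic-matrix method, optical admittances in free-space units)."""
    m = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / wavelength   # phase thickness of the layer
        layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
        m = m @ layer
    b, c = m @ np.array([1.0, n_sub])
    r = (n_in * b - c) / (n_in * b + c)
    return np.abs(r) ** 2

# toy example: a 75 nm layer with n = 1.46 (silica-like) on an n = 3.9
# (silicon-like) substrate, acting as a crude antireflection coating near 440 nm
for wl in (350.0, 440.0, 600.0):
    print(f"{wl:.0f} nm: R = {stack_reflectance([1.46], [75.0], 3.9, wl):.3f}")
```

In this toy case the single layer is roughly a quarter-wave thick near 440 nm, where the reflectance of the bare substrate is markedly reduced.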

Progress of Monte Carlo methods in nuclear physics using EFT-based NN interaction and in hypernuclear systems.

Armani, Paolo January 2011 (has links)
In this thesis I report the work of my PhD; it treats two different topics, linked by a third one, namely the computational method that I use to address them. I worked on EFT-based theories for nuclear systems and on hypernuclei, and tried to compute the ground-state properties of both systems using Monte Carlo methods. In the first part of my thesis I briefly describe the Monte Carlo methods that I used: the VMC (Variational Monte Carlo), DMC (Diffusion Monte Carlo), AFDMC (Auxiliary Field Diffusion Monte Carlo) and AFQMC (Auxiliary Field Quantum Monte Carlo) algorithms. I also report some new improvements to these methods that I tried or suggested, namely the fixed-hypernode extension (§ 2.6.2) of the DMC algorithm, and the inclusion of the L2 term (§ 3.10) and of the exchange term (§ 3.11) in the AFDMC propagator. These last two are based on the same idea used by K. Schmidt to include the spin-orbit term in the AFDMC propagator (§ 3.9). We mainly use the AFDMC algorithm, but at the end of the first part I also describe the AFQMC method. This is quite similar in principle to AFDMC, but it has never been used for nuclear systems; moreover, there are some details that give us hope of overcoming with AFQMC some of the limitations that we find in the AFDMC algorithm. However, we do not report any results for the AFQMC algorithm, because we started implementing it only in the last months and our code still requires extensive testing and debugging. In the second part I report our attempt to describe the nucleon-nucleon interaction using EFT theory within the AFDMC method. I explain all our tests to solve for the ground state of a nucleus within this method, and I also show the problems that we found and our attempts to overcome them before leaving this project. In the third part I report our work on hypernuclei: we tried to fit part of the ΛN interaction and to compute the Λ-hyperon separation energy of hypernuclei. Although we found some good and encouraging results, we noticed that the error introduced by the fixed-phase approximation used in the AFDMC algorithm was not as small as assumed. Because of that, in order to obtain interesting results, we need to improve this approximation or to use a better method; hence we looked at the AFQMC algorithm, aiming to quickly reach good results.
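As a minimal sketch of the variational Monte Carlo idea, the code below applies Metropolis sampling of |ψ|² and a local-energy average to the one-dimensional harmonic oscillator in natural units; the trial wave function, step sizes and sample counts are purely pedagogical and unrelated to the nuclear and hypernuclear wave functions and propagators used in the thesis.

```python
import numpy as np

def vmc_energy(alpha, n_steps=200_000, step=1.0, seed=0):
    """Variational Monte Carlo for the 1D harmonic oscillator (hbar = m = omega = 1).
    Trial wave function psi(x) = exp(-alpha * x**2 / 2); local energy
    E_L(x) = alpha/2 + x**2 * (1 - alpha**2) / 2. The exact ground state is alpha = 1."""
    rng = np.random.default_rng(seed)
    x, e_sum, n_acc = 0.0, 0.0, 0
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        # Metropolis acceptance with probability |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-alpha * (x_new**2 - x**2)):
            x, n_acc = x_new, n_acc + 1
        e_sum += 0.5 * alpha + 0.5 * x**2 * (1.0 - alpha**2)
    return e_sum / n_steps, n_acc / n_steps

for a in (0.6, 1.0, 1.4):
    e, acc = vmc_energy(a)
    print(f"alpha = {a:.1f}: <E> = {e:.3f} (acceptance {acc:.2f})")
```

The exact ground state is recovered at alpha = 1, where the local energy is constant and the variance of the estimate vanishes.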

A new approach to optimal embedding of time series

Perinelli, Alessio 20 November 2020 (has links)
The analysis of signals stemming from a physical system is crucial for the experimental investigation of the underlying dynamics that drives the system itself. The field of time series analysis comprises a wide variety of techniques developed with the purpose of characterizing signals and, ultimately, of providing insights on the phenomena that govern the temporal evolution of the generating system. A renowned example in this field is given by spectral analysis: the use of Fourier or Laplace transforms to bring time-domain signals into the more convenient frequency space makes it possible to disclose the key features of linear systems. A more complex scenario arises when nonlinearity intervenes within a system's dynamics. Nonlinear coupling between a system's degrees of freedom brings about interesting dynamical regimes, such as self-sustained periodic (though anharmonic) oscillations ("limit cycles"), or quasi-periodic evolutions that exhibit sharp spectral lines while lacking strict periodicity ("limit tori"). Among the consequences of nonlinearity, the onset of chaos is definitely the most fascinating one. Chaos is a dynamical regime characterized by unpredictability and lack of periodicity, despite being generated by deterministic laws. Signals generated by chaotic dynamical systems appear as irregular: the corresponding spectra are broad and flat, prediction of future values is challenging, and evolutions within the systems' state spaces converge to strange attractor sets with noninteger dimensionality. Because of these properties, chaotic signals can be mistakenly classified as noise if linear techniques such as spectral analysis are used. The identification of chaos and its characterization require the assessment of dynamical invariants that quantify the complex features of a chaotic system's evolution. For example, Lyapunov exponents provide a marker of unpredictability; the estimation of attractor dimensions, on the other hand, highlights the unconventional geometry of a chaotic system's state space. Nonlinear time series analysis techniques act directly within the state space of the system under investigation. However, experimentally, full access to a system's state space is not always available. Often, only a scalar signal stemming from the dynamical system can be recorded, thus providing, upon sampling, a scalar sequence. Nevertheless, by virtue of a fundamental theorem by Takens, it is possible to reconstruct a proxy of the original state space evolution out of a single, scalar sequence. This reconstruction is carried out by means of the so-called embedding procedure: m-dimensional vectors are built by picking successive elements of the scalar sequence delayed by a lag L. On the other hand, besides posing some necessary conditions on the integer embedding parameters m and L, Takens' theorem does not provide any clue on how to choose them correctly. Although many optimal embedding criteria have been proposed, a general answer to the problem is still lacking. As a matter of fact, conventional methods for optimal embedding are flawed by several drawbacks, the most relevant being the need for a subjective evaluation of the outcomes of applied algorithms. Tackling the issue of optimally selecting embedding parameters makes up the core topic of this thesis work. In particular, I will discuss a novel approach that was pursued by our research group and that led to the development of a new method for the identification of suitable embedding parameters.
Unlike most conventional approaches, which seek a single optimal value of m and L to embed an input sequence, our approach provides a set of embedding choices that are equivalently suitable for reconstructing the dynamics. The suitability of each embedding choice m, L is assessed by relying on statistical testing, thus providing a criterion that does not require a subjective evaluation of outcomes. The starting point of our method is given by embedding-dependent correlation integrals, i.e. cumulative distributions of embedding vector distances, built out of an input scalar sequence. In the case of Gaussian white noise, an analytical expression for correlation integrals is available, and, by exploiting this expression, a gauge transformation of distances is introduced to provide a more convenient representation of correlation integrals. Under this new gauge, it is possible to test—in a computationally undemanding way—whether an input sequence is compatible with Gaussian white noise and, subsequently, whether the sequence is compatible with the hypothesis of an underlying chaotic system. These two statistical tests make it possible to rule out embedding choices that are unsuitable to reconstruct the dynamics. The estimation of correlation dimension, carried out by means of a newly devised estimator, makes up the third stage of the method: sets of embedding choices that provide uniform estimates of this dynamical invariant are deemed to be suitable to embed the sequence. The method was successfully applied to synthetic and experimental sequences, providing new insight into the longstanding issue of optimal embedding. For example, the relevance of the embedding window (m-1)L, i.e. the time span covered by each embedding vector, is naturally highlighted by our approach. In addition, our method provides some information on the adequacy of the sampling period used to record the input sequence. The method correctly distinguishes a chaotic sequence from surrogate ones generated out of it and having the same power spectrum. The technique of surrogate generation, which I also addressed during my Ph.D. work to develop new dedicated algorithms and to analyze brain signals, makes it possible to estimate significance levels in situations where standard analytical algorithms are inapplicable. By telling an original sequence apart from surrogate ones, the novel embedding approach shows its capability to distinguish signals beyond their spectral—or autocorrelation—similarities. One of the possible applications of the new approach concerns another longstanding issue, namely that of distinguishing noise from chaos. To this purpose, complementary information is provided by analyzing the asymptotic (long-time) behaviour of the so-called time-dependent divergence exponent. This embedding-dependent metric is commonly used to estimate—by processing its short-time linearly growing region—the maximum Lyapunov exponent out of a scalar sequence. However, insights on the kind of source generating the sequence can be extracted from the—usually overlooked—asymptotic behaviour of the divergence exponent. Moreover, in the case of chaotic sources, this analysis also provides a precise estimate of the system's correlation dimension. Besides describing the results concerning the discrimination of chaotic systems from noise sources, I will also discuss the possibility of using the related correlation dimension estimates to improve the third stage of the method introduced above for the identification of suitable embedding parameters.
The discovery of chaos as a possible dynamical regime for nonlinear systems led to the search for chaotic behaviour in experimental recordings. In some fields, this search gave plenty of positive results: for example, chaotic dynamics was successfully identified and tamed in electronic circuits and laser-based optical setups. These two families of experimental chaotic systems eventually became versatile tools to study chaos and its possible applications. On the other hand, chaotic behaviour is also looked for in climate science, biology, neuroscience, and even economics. In these fields, nonlinearity is widespread: many smaller units interact nonlinearly, yielding a collective motion that can be described by means of few, nonlinearly coupled effective degrees of freedom. The corresponding recorded signals exhibit, in many cases, an irregular and complex evolution. A possible underlying chaotic evolution—as opposed to a stochastic one—would be of interest both to reveal the presence of determinism and to predict the system's future states. While some claims concerning the existence of chaos in these fields have been made, most results are debated or inconclusive. Nonstationarity, low signal-to-noise ratio, external perturbations and poor reproducibility are just a few of the issues that hinder the search for chaos in natural systems. In the final part of this work, I will briefly discuss the problem of chasing chaos in experimental recordings by considering two example sequences, the first one generated by an electronic circuit and the second one corresponding to recordings of brain activity. The present thesis is organized as follows. The core concepts of time series analysis, including the key features of chaotic dynamics, are presented in Chapter 1. A brief review of the search for chaos in experimental systems is also provided, and the difficulties concerning this quest in some research fields are highlighted. Chapter 2 describes the embedding procedure and the issue of optimally choosing the related parameters. Thereupon, existing methods to carry out the embedding choice are reviewed and their limitations are pointed out. In addition, two embedding-dependent nonlinear techniques that are ordinarily used to characterize chaos, namely the estimation of correlation dimension by means of correlation integrals and the assessment of the maximum Lyapunov exponent, are presented. The new approach for the identification of suitable embedding parameters, which makes up the core topic of the present thesis work, is the subject of Chapters 3 and 4. While Chapter 3 contains the theoretical outline of the approach, as well as its implementation details, Chapter 4 discusses the application of the approach to benchmark synthetic and experimental sequences, thus illustrating its perks and its limitations. The study of the asymptotic behaviour of the time-dependent divergence exponent is presented in Chapter 5. The alternative estimator of correlation dimension, which relies on this asymptotic metric, is discussed as a possible improvement to the approach described in Chapters 3 and 4. The search for chaos in experimental data is discussed in Chapter 6 by means of two examples of real-world recordings. Concluding remarks are finally drawn in Chapter 7.
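To make the embedding procedure concrete, the sketch below builds delay vectors from a scalar sequence and evaluates a plain correlation sum C(r); the logistic-map data, embedding parameters and radii are illustrative, and the gauge transformation, statistical tests and estimators developed in the thesis are not reproduced.

```python
import numpy as np

def delay_embed(x, m, lag):
    """Build m-dimensional delay vectors from a scalar sequence x with lag L."""
    n = len(x) - (m - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(m)])

def correlation_sum(vectors, r):
    """Fraction of distinct vector pairs closer than r (Euclidean distance)."""
    d = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    iu = np.triu_indices(len(vectors), k=1)
    return np.mean(d[iu] < r)

# toy scalar sequence: the logistic map in its fully chaotic regime
x = np.empty(1000)
x[0] = 0.4
for i in range(len(x) - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

v = delay_embed(x, m=3, lag=1)
for r in (0.05, 0.1, 0.2):
    print(f"C(r = {r}) = {correlation_sum(v, r):.4f}")
```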

Isolated objects in quadratic gravity

Silveravalle, Samuele Marco 07 June 2023 (has links)
Quadratic curvature terms are commonly introduced in the action as first-order corrections to General Relativity, and in this thesis we investigated their impact on the simplest isolated objects, namely static and spherically symmetric ones. Most of the work has been done in the context of Stelle's theory of gravity, in which the most general quadratic contractions of curvature tensors are added to the action of General Relativity without a cosmological constant. We studied the possible static, spherically symmetric and asymptotically flat solutions of this theory with both analytical approximations and numerical methods. We found black holes of Schwarzschild and non-Schwarzschild nature, naked singularities which can have either an attractive or a repulsive gravitational potential at the origin, non-symmetric wormholes which connect an asymptotically flat spacetime with an asymptotically singular one, and non-vacuum solutions modeled by perfect fluids with different equations of state. We described the general geometrical properties of these solutions and linked their short-scale behavior to the values of the parameters which characterize the gravitational field at large distances. We studied linear perturbations of these solutions, finding that most are unstable, and presented a first attempt to picture the parameter space of stable solutions. We also studied the thermodynamics of black holes and described their evaporation process: we found that either evaporation leads black holes to unstable configurations, or the predictions of quadratic gravity are unphysical. We also considered the possibility of generalizing Stelle's theory by removing the dependence on the only mass scale present through the inclusion of a new dynamical scalar field, making the theory scale invariant. Having a more complex theory, we did not investigate exotic solutions but limited ourselves to the impact of the new additional degrees of freedom on known analytical solutions. It was already known that in a cosmological setting this theory admits a transition between two de Sitter configurations; we analyzed the same problem in the context of static and spherically symmetric solutions and found a transition between two Schwarzschild-de Sitter configurations. In order to do that, we studied both linear perturbations and the semiclassical approximation of the path integral formulation of Euclidean quantum gravity. At last, we tried to extract some phenomenological signatures of the exotic solutions. In particular, we investigated the shadow cast by an object on background free-falling light, and a possible way of determining the behavior close to the origin using mass measurements that rely on different physical processes. We show that, when these measurements are applied to the case of compact stars, it could in principle be possible to distinguish solutions in which different equations of state describe the fluid.

Static and dynamic disorder in nanocrystalline materials

Perez Demydenko, Camilo January 2019 (has links)
Peak profiles in X-ray Diffraction (XRD) patterns from nanocrystalline materials are affected by static and dynamic disorder which is specific to the size and shape of the nanocrystalline domains. Owing to their intrinsic differences, the two types of disorder can be separated, providing independent information from the modelling of the XRD patterns. In the present thesis a model for the static strain created by the nanoparticle surface is proposed. The model is built within the framework of the Whole Powder Pattern Modelling (WPPM) approach for XRD line profile analysis, developed at the University of Trento over the past 20 years. The WPPM approach is described in detail. Based on a complex Fourier transform of the diffraction profiles, the model leads to general equations to be used with the WPPM approach to represent the distorted atomic configuration with respect to the reference bulk one. The model was also implemented in TOPAS, a popular commercial software package, by developing a specific macro that allows a larger community of users to benefit from this new opportunity for studying nanocrystalline materials. The thesis work also extended to a more traditional and general description of the strain broadening of XRD peak profiles, involving forms that are invariant under the Laue group symmetry operations of the material under study. As for the dynamic strain, the fundamentals of the Thermal Diffuse Scattering (TDS) contribution to the peak profiles are reviewed. Starting from the original work of B.E. Warren, the theory is generalized to account for surface effects, leading to a particular model developed recently at the University of Trento. This model was thoroughly reviewed and corrected. To test the model, a parallel computer code in C was written, exploiting Molecular Dynamics simulations to obtain reliable and independent estimates of static and dynamic disorder in nanocrystals.
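As a schematic sketch of the Fourier line-profile formalism on which the WPPM builds, the code below generates a size-broadened peak from the standard common-volume Fourier coefficients of spherical domains; the diameters and reciprocal-space range are illustrative, and the static-strain and thermal diffuse scattering models of the thesis are not reproduced.

```python
import numpy as np

def sphere_fourier_coeff(L, D):
    """Common-volume (size) Fourier coefficients for spherical domains of diameter D."""
    x = np.clip(L / D, 0.0, 1.0)
    return 1.0 - 1.5 * x + 0.5 * x**3

def size_broadened_profile(s, D, n_l=2000):
    """Size-broadened peak profile I(s) as a cosine transform of A(L);
    s is the distance from the Bragg position in reciprocal space (1/nm)."""
    L = np.linspace(0.0, D, n_l)          # Fourier (column) length, nm
    A = sphere_fourier_coeff(L, D)
    dL = L[1] - L[0]
    cosines = np.cos(2.0 * np.pi * L[None, :] * s[:, None])
    return 2.0 * np.sum(A[None, :] * cosines, axis=1) * dL

s = np.linspace(-1.0, 1.0, 401)           # reciprocal-space axis, 1/nm
for D in (5.0, 10.0, 20.0):               # domain diameters, nm
    I = size_broadened_profile(s, D)
    above = s[I >= 0.5 * I.max()]
    print(f"D = {D:4.1f} nm: FWHM ~ {above.max() - above.min():.3f} 1/nm")
```

The smaller the domain diameter, the broader the resulting peak, which is the size-broadening effect that the full WPPM analysis disentangles from strain and thermal contributions.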

Computational models for impact mechanics and related protective materials and structures

Signetti, Stefano January 2017 (has links)
The mechanics of impacts is not yet well understood, owing to the complexity of material behaviour under extreme stress and strain conditions; it thus poses a challenge for fundamental research and is relevant in several areas of applied science and engineering. The complex contact and strain-rate dependent phenomena involved include geometrical and material non-linearities, such as wave and fracture propagation, plasticity, buckling, and friction. The theoretical description of such non-linearities has reached an advanced level of maturity only for each phenomenon taken singularly; when they are coupled it remains limited, due to the severe mathematical complexity. Moreover, the related experimental tests are difficult and expensive, and usually not able to quantify and discriminate between the phenomena involved. In this scenario, computational simulation emerges as a fundamental and complementary tool for the investigation of such otherwise intractable problems. The aim of this PhD research was the development and use of computational models to investigate the behaviour of materials and structures undergoing simultaneously extreme contact stresses and strain rates, at different size and time scales. We focused on basic concepts not yet understood, studying both engineering and bio-inspired solutions. In particular, the developed models were applied to the analysis and optimization of macroscopic composite and 2D-materials-based multilayer armours, to the buckling-governed behaviour of aerographite tetrapods and of the related networks, and to the crushing behaviour under compression of modified honeycomb structures. To validate the approaches used, numerical-experimental-analytical comparisons are also proposed for each case.
