471

Time delay interferometry for LISA science and instrument characterization

Muratore, Martina 20 July 2021 (has links)
LISA, the Laser Interferometer Space Antenna, is the third large mission (L3) of the ESA Cosmic Vision programme, with NASA as a junior partner, planned for launch around 2034. Space-based gravitational wave observatories such as LISA have been developed to observe sources that produce gravitational wave (GW) signals with frequencies in the mHz regime. This frequency band is accessible because the interferometer baseline is much longer than that of ground-based detectors, and the large LISA arm length guarantees the detection of many astrophysical sources. The absence of Newtonian noise in space, which is the dominant noise source below a few hertz for ground-based detectors, allows LISA to be sensitive to lower frequencies than ground-based instruments. Going to space therefore allows the study of sources, such as supermassive black holes, that differ from those targeted by ground-based detectors. Although very long baselines between the satellites generally increase the sensitivity to gravitational waves, they also imply many technical challenges, so a balance must be found between scientific performance and technical feasibility. In the current proposal, LISA is designed as a constellation of three identical spacecraft in a triangular formation, with six active laser links connecting the three spacecraft, which are separated by 2.5 million km. To fulfil the observatory programme, each spacecraft carries at least two free-falling test masses, two telescopes, and two lasers. The detector's centre of mass follows a circular, heliocentric trajectory, trailing 20 degrees behind the Earth, and the plane of the detector is tilted by 60 degrees with respect to the ecliptic. The goal of LISA is to detect GWs, which manifest themselves as tiny fluctuations in the frequency of the laser beams measured at the phasemeters. Detecting GWs therefore requires competing with many sources of disturbance that mimic a GW-induced frequency modulation; laser noise is one example. A key element in the LISA data production chain is thus the post-processing technique called Time Delay Interferometry, aimed at suppressing the intense laser frequency noise that would otherwise completely cover the astrophysical signal. Data from the six independent inter-satellite links connecting the three spacecraft are properly time-shifted and combined to form the final scientific signal. This post-processing technique circumvents the impossibility of physically building an equal-arm interferometer in space, which would intrinsically beat the frequency noise by comparing light generated at the same time. The following work revisits Time Delay Interferometry (TDI) for LISA and studies how all the possible TDI combinations that can be built can be used for LISA instrument characterisation and science extraction. Many TDI combinations that suppress the frequency noise have been identified in the past; this thesis revisits the TDI technique focusing on its physical interpretation, that is, a virtual interference of photons that have travelled through the constellation along different paths of the same total length. We illustrate all possible TDI configurations that suppress the laser noise contribution to the level required by the mission, in order to understand how TDI channels can best be used for instrument diagnostics and LISA science.
With this philosophy, we develop an algorithm to search for all possible combinations that suppress laser noise at the same level as the classical TDI X, Y, and Z combinations presented in the TDI literature. This algorithm finds new combinations that fulfil the noise suppression requirement as accurately as X, Y, and Z. The LISA mission is also expected to probe the early Universe by detecting a stochastic GW background. Once the laser frequency noise has been subtracted, the stochastic signal, both cosmological and astrophysical, itself contributes to the noise curve. It is therefore necessary to have a good estimate of the instrument noise in order to discriminate between the stochastic background signal and the LISA noise. The strategy suggested in the literature is to use the TDI T channel, which is insensitive (up to a certain order) to GW signals, to estimate the pure instrumental noise and so distinguish between the LISA background noise and the GW stochastic signal. Following this idea, and since the instrument noise is expected to have multiple independent sources, this thesis explores combinations that could allow discriminating among those noise sources, and between them and the GW signal, with the purpose of understanding how the instrument can be characterised using TDI. We illustrate special TDI combinations in LISA, in addition to TDI T, that we call null-channels, which are ideally insensitive to gravitational waves and carry information only about instrumental noise. Studying the noise properties that can be extracted by monitoring these interferometric signals, we find that individual acceleration noise parameters are not well constrained. All null-channels behave as an ideal Sagnac interferometer, sensitive only to a particular linear combination of the six test-mass accelerations that resembles a rotational acceleration signal of the entire constellation. Moreover, all null-channels show approximately the same signal-to-noise ratio, which is strongly suppressed relative to that of TDI X. In support and application of our theoretical studies, we also give an introduction to calibrating the LISA instrument by injecting spurious signals into a LISA link and observing how they propagate through a TDI channel. This will be useful for calibrating the instrument during operations and also lays the basis for data analysis that discriminates spurious signals from gravitational waves. My contribution to the results presented in this thesis can be summarised as follows. I supported the studies and the realisation of the TDI search algorithm whose results are published in the corresponding article; in particular, I took care of cataloguing the new TDI combinations and consolidating the results we found. I have updated the TDI combinations reported in the above-mentioned work, the final version of which is reported in this thesis. I worked on the characterisation of these combinations with respect to secondary noises such as clock noise, readout noise, residual laser frequency noise, and acceleration noise; in particular, I studied how these noises are transferred through the various TDI combinations and derived the corresponding analytical models. I then developed software in Wolfram Mathematica, designed to load and combine phase data produced by an external simulator to build the final TDI outputs, and I also validated the noise models. The basis of this program was then used to implement these TDI combinations in LISANode.
Finally, I developed the algorithm to study how force disturbances, such as glitches, and simple GW signals, such as monochromatic GW binaries, propagate through TDI and the null-channels. Moreover, I tested through simulations the ability of these TDI combinations and null-channels to distinguish instrumental artefacts from GW signals and to characterise the instrumental noise.
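As a rough illustration of the laser-noise cancellation principle described in this abstract, the sketch below sets up a toy two-arm, unequal-delay interferometer and recombines time-shifted copies of the two readouts so that the common laser frequency noise cancels exactly, in the spirit of the classical TDI X combination. This is not LISA's actual six-link formulation; the variable names, integer-sample delays and white-noise model are illustrative assumptions.

```python
import numpy as np

# Toy model: one laser with frequency noise p(t), two unequal arms with
# round-trip delays d1 and d2 (in samples). Each arm readout compares the
# returning beam with the local beam, so it contains the laser noise twice.
rng = np.random.default_rng(0)
n = 10_000
p = rng.standard_normal(n)          # laser frequency noise (toy units)
d1, d2 = 33, 21                     # round-trip delays of the two arms (samples)

def delay(x, d):
    """Delay a series by d samples (zero-padded at the start)."""
    out = np.zeros_like(x)
    out[d:] = x[:-d]
    return out

s1 = delay(p, d1) - p               # arm-1 readout: p(t - d1) - p(t)
s2 = delay(p, d2) - p               # arm-2 readout: p(t - d2) - p(t)

# Plain Michelson difference: laser noise does NOT cancel for unequal arms.
michelson = s1 - s2

# TDI-style combination: re-delay each readout by the *other* arm's delay
# before differencing, so every noise sample enters twice with opposite sign.
X = (s1 - delay(s1, d2)) - (s2 - delay(s2, d1))

print("rms plain Michelson :", michelson[max(d1, d2):].std())
print("rms TDI combination :", X[max(d1, d2) + d1 + d2:].std())  # ~0
```

In the real constellation the delays are unequal, time-varying and non-commuting, which is precisely what the catalogue of TDI combinations studied in the thesis is designed to handle.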
472

A CONVERGENT AND MULTISCALE ASSESSMENT OF DNA DAMAGE BY PARTICLE RADIATION

Petrolli, Lorenzo 21 April 2022 (has links)
The mutation or deletion of the hereditary material in the cell nucleus is a chronic biochemical hazard; nuclear DNA faces tens of lesions per minute from metabolic intermediates, hydrolytic reactions and external vectors. The canonical lesions of DNA involve the DNA backbone as well as the nucleic bases and are mostly associated with reversible chemical modifications. However, the radiation field from beams of accelerated ions produces a dense streak of collisions and reactions with the DNA molecule, thereby generating lethal clusters of elemental lesions. Double strand breaks (DSBs), i.e. the cleavage of the DNA backbone on both strands, are hazardous fractures of the chromatin fold associated with the radiation field, underlying cytotoxic outcomes and chromosomal aberrations. Eukaryotic cells, however, rejoin the fractured DNA moieties from DSB events via a dedicated enzymatic machinery, the DNA damage response (DDR). Prior to the deployment of enzymatic effectors, host enzyme sensors engage the DNA termini in reversible supramolecular assemblies, which requires that the fractured DNA moieties be fully exposed. In silico assessments of the early layout of radiation-induced DNA lesions have defined DSBs as closely associated modifications of the DNA backbone by means of "coarse" criteria, that is, two breaks within an arbitrary distance of each other. However, the diverse DSB motifs, i.e. with a strand-break separation of zero to several nucleotides, present different contact interfaces between the DNA termini, thus modulating the dynamics of the lesion sites. Moreover, it is reckoned that, in the absence of excess external stimuli, widely spaced DSBs may not separate the broken DNA moieties by thermal dissociation within the characteristic timescales of DDR activity. This thesis tackles the in silico assessment of the distribution of DSBs in a chromatin-like fold and of the local mechanical strain enforced by blunt DSBs, by means of state-of-the-art Monte Carlo track structure tools and classical molecular dynamics. We infer that i) a Poisson fit describes the spectrum of DSB motifs produced by the direct effect of accelerated hydrogen ions (H+) in a Bragg-peak-relevant energy range (500 keV - 5 MeV) and, notably, we observe a bias towards short-distanced, staggered DSBs; ii) the nucleosome fold, i.e. the elemental unit of the chromatin hierarchical framework, exerts an excess kinetic barrier against the disruption of DSBs, not observed in linear DNA, mediated by the contact interface between the DNA and the core histone fold. In conclusion, we remark that, in the absence of further data from in vitro and in vivo assessments, the kinetic and thermodynamic inferences about the thermal and mechanical resilience of broken DNA frameworks are only as reliable as the underlying force fields; indeed, it is debated whether all-atom force fields and water models overestimate the strength of intermolecular contacts and over-stabilize the DNA double helix.
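A minimal sketch of the kind of fit mentioned in point i), on invented data: count how many DSB motifs occur at each strand-break separation (in nucleotides) and fit a Poisson distribution by maximum likelihood. The counts and the 8-nucleotide range below are hypothetical, not the thesis's results.

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical spectrum of DSB motifs: number of DSBs observed with a
# strand-break separation of 0, 1, 2, ... nucleotides (invented numbers).
separations = np.arange(8)
counts = np.array([120, 95, 52, 21, 8, 3, 1, 0])

# The maximum-likelihood Poisson rate is the weighted mean separation.
lam = np.average(separations, weights=counts)

# Expected counts under the fitted Poisson model, for comparison with the data.
expected = counts.sum() * poisson.pmf(separations, lam)

for s, obs, exp in zip(separations, counts, expected):
    print(f"separation {s} nt: observed {obs:4d}  expected {exp:7.1f}")
print(f"fitted Poisson mean separation: {lam:.2f} nt")
```

A systematic excess of observed counts over the Poisson expectation at small separations is how a bias towards short-distanced, staggered DSBs would show up in such a comparison.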
473

Climate Change and the Exhaustion of Fossil Energy and Mineral Resources

Chiari, Luca January 2010 (has links)
The ongoing exhaustion of fossil fuels places a limit on the total amount of anthropogenic CO2 that will be emitted into the atmosphere and therefore constrains future global warming. Here we assess the implications of fossil fuel depletion for future changes in atmospheric CO2 concentration and global-mean temperature. We find that, despite the exhaustion of fossil fuels, future global warming will likely reach a dangerous level. Deliberate actions aimed at emissions reduction are needed to avoid dangerous climate change.
474

Directional relationships between BOLD activity and autonomic nervous system fluctuations revealed by fast fMRI acquisition

Iacovella, Vittorio January 2012 (has links)
The relationship between brain function, as characterized by functional magnetic resonance imaging (fMRI), and physiological fluctuations such as cardiac and respiratory oscillations has been one of the most debated topics of the last decade. The recent literature contains a large number of studies focusing on both practical and conceptual aspects of this topic. In this work, we start by reviewing two distinct approaches to physiology-related time series in relation to fMRI: one treating physiology-related fluctuations as generators of noise, the other considering them as carriers of cognitively relevant information. In Chapter 2, "Physiology-related effects in the BOLD signal at rest at 4T", we consider physiological quantities as generators of noise and discuss conceptual flaws researchers have to face when dealing with data de-noising procedures. We point out that it can be difficult to show that the procedure has achieved its stated aim, i.e. to remove only physiology-related components from the data. As a practical solution, we present a benchmark, based on the principle of permutation testing, for assessing whether correction for physiological noise has achieved its stated aim. In Chapter 3, "Directional relationships between BOLD activity and autonomic nervous system fluctuations revealed by fast fMRI acquisition", we instead consider autonomic indicators derived from physiological time series as meaningful components of the BOLD signal. There, we describe an fMRI experiment building on this view, whose goal was to localize brain areas whose activity is directionally related to autonomic activity, in a top-down modulation fashion. In Chapter 4 we recap the conclusions drawn from the two approaches and summarize the general contributions of our findings. Bringing together the two approaches led to two main contributions: on the one hand, we reconsidered the validity of well-established procedures in fMRI resting-state pre-processing pipelines; on the other, we were able to say something new about the general relationship between BOLD and autonomic activity, resting-state fluctuations, and deactivation theory.
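A minimal sketch of the permutation-testing idea behind such a benchmark, on synthetic data: the variance removed by regressing out the true physiological regressors is compared against a null distribution obtained by regressing out randomly time-shifted versions of the same regressors. The regressor construction, shift strategy and sample sizes are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t = 400                                     # time points of a toy BOLD series
t = np.arange(n_t)

# Toy physiological regressors (cardiac- and respiratory-like oscillations).
phys = np.column_stack([np.sin(2 * np.pi * 0.05 * t),
                        np.sin(2 * np.pi * 0.11 * t)])

# Toy voxel time series: slow neural-like signal + physiological leakage + noise.
bold = (np.sin(2 * np.pi * 0.02 * t)
        + 0.6 * phys @ np.array([1.0, 0.8])
        + rng.standard_normal(n_t))

def variance_removed(y, X):
    """Fraction of the variance of y removed by OLS regression on columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

observed = variance_removed(bold, phys)

# Null distribution: the same regressors, randomly circularly shifted in time.
null = np.array([variance_removed(bold,
                                  np.roll(phys, rng.integers(20, n_t - 20), axis=0))
                 for _ in range(500)])

p_value = (np.sum(null >= observed) + 1) / (null.size + 1)
print(f"variance removed: {observed:.3f}, permutation p-value: {p_value:.3f}")
```

If the correction removes no more variance than the shifted regressors do, it is likely removing generic low-frequency structure rather than genuinely physiology-locked components.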
475

A network medicine approach on microarray and Next generation Sequencing data

Filosi, Michele January 2014 (has links)
The goal of this thesis is the development of a bioinformatics solution for network-based predictive analysis of NGS data, in which network structures can replace gene lists as a richer and more complex signature of disease. I have focused on methods for network stability, network inference and network comparison, as additional components of the pipeline and as methods to detect outliers in high-throughput datasets. Besides a first work on GEO datasets, the main application of my pipeline has been on original data from the FDA SEQC (Sequencing Quality Control) project. Here I report some initial findings to which I contributed methods and analysis, as the corresponding papers are being submitted. My goal is to provide a comprehensive tool for network reconstruction and network comparison, as an R package and a user-friendly web service interface available online at https://renette.fbk.eu.
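The renette tool itself is an R package and web service; purely to illustrate the network-inference and network-comparison steps described above, here is a minimal sketch in Python on synthetic data, using absolute-correlation thresholding for inference and the Hamming distance between adjacency matrices for comparison. Both choices, and all the data, are assumptions for illustration, not the methods implemented in renette.

```python
import numpy as np

rng = np.random.default_rng(2)

def infer_network(expr, threshold=0.6):
    """Co-expression network: genes are nodes, edges where |correlation| > threshold."""
    corr = np.corrcoef(expr, rowvar=False)        # genes are in columns
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def hamming_distance(a, b):
    """Fraction of gene pairs whose edge status differs between two networks."""
    iu = np.triu_indices_from(a, k=1)
    return np.mean(a[iu] != b[iu])

# Synthetic expression matrices (samples x genes) for two conditions.
n_samples, n_genes = 60, 30
latent = rng.standard_normal((n_samples, 1))
base = rng.standard_normal((n_samples, n_genes))
base[:, :10] += 2.0 * latent                      # first 10 genes co-regulated
cond_a = base + 0.1 * rng.standard_normal((n_samples, n_genes))
cond_b = rng.standard_normal((n_samples, n_genes))  # no shared structure

net_a, net_b = infer_network(cond_a), infer_network(cond_b)
print("edges A:", net_a.sum() // 2, " edges B:", net_b.sum() // 2)
print("Hamming distance A vs B:", hamming_distance(net_a, net_b))
```

The network-level distance, rather than the per-gene lists, is what serves as the disease signature in the approach summarised above.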
476

Experimental and numerical investigation of turbulence in Stable Boundary Layer flows

Gucci, Federica 16 February 2023 (has links)
The present work combines experimental and numerical analyses to improve the current understanding of turbulence in stably stratified flows. An extensive literature review is presented on the mechanisms governing turbulence under stratified conditions, with a special focus on the Richardson number, as it is often adopted as a switch to turn turbulence modelling on or off. The anisotropization of turbulence is investigated, as it is found to be an important mechanism for turbulence survival at any Richardson number but is usually overlooked in turbulence parameterizations. For this purpose, an experimental dataset previously collected over an Alpine glacier is used, with a focus on the anisotropy of the Reynolds stress tensor, as the scientific community has recently demonstrated improvements in the description of the atmospheric surface layer by taking this aspect into account. Different sources leading stresses to deviate from the isotropic limit are explored, as well as energy exchanges across scales and between the kinetic and potential reservoirs, in order to identify the main processes that should be included in turbulence parameterizations to properly represent anisotropic turbulence under stable conditions. High-resolution numerical simulations are then performed with the Weather Research and Forecasting (WRF) model to evaluate different PBL parameterizations in reproducing specific stable atmospheric conditions developing over complex terrain, and their influence on the local circulation. For this purpose, two wintertime case studies in a basin-like area of an Alpine valley are investigated. Both are fair-weather episodes with weak synoptic forcing and well-developed diurnal local circulations, differing in the thermal stratification within the basin. In particular, the influence of thermal stratification on the outbreak of a valley-exit wind coming from a tributary valley is investigated, and the influence of such flows on turbulence anisotropy in stably stratified conditions is discussed with a view to future investigations.
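As a minimal sketch of the anisotropy analysis mentioned above, using standard definitions rather than the thesis's specific processing of the glacier dataset: from a measured Reynolds stress tensor one forms the normalized anisotropy tensor and maps its eigenvalues to barycentric-map coordinates, which quantify how far the stresses deviate from the isotropic limit. The example stress tensor below is invented.

```python
import numpy as np

# Hypothetical Reynolds stress tensor <u_i' u_j'> from sonic-anemometer data (m^2/s^2).
R = np.array([[0.40, 0.05, -0.08],
              [0.05, 0.30,  0.02],
              [-0.08, 0.02, 0.10]])

k = 0.5 * np.trace(R)                       # turbulence kinetic energy
b = R / (2.0 * k) - np.eye(3) / 3.0         # normalized anisotropy tensor b_ij

# Eigenvalues in descending order give the barycentric-map weights
# (Banerjee et al. 2007) of the 1-, 2- and 3-component limiting states.
lam = np.sort(np.linalg.eigvalsh(b))[::-1]
c1 = lam[0] - lam[1]            # one-component (rod-like) turbulence
c2 = 2.0 * (lam[1] - lam[2])    # two-component (disk-like) turbulence
c3 = 3.0 * lam[2] + 1.0         # three-component (isotropic) turbulence

print("anisotropy eigenvalues:", np.round(lam, 3))
print(f"barycentric weights: C1={c1:.3f}, C2={c2:.3f}, C3={c3:.3f} (sum={c1 + c2 + c3:.3f})")
```

Since the anisotropy tensor is trace-free, the three weights always sum to one, so a single point in the barycentric triangle summarises how close the measured stresses are to the isotropic corner.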
477

Computational models for impact mechanics and related protective materials and structures

Signetti, Stefano January 2017 (has links)
The mechanics of impacts is not yet well understood, due to the complexity of material behaviour under extreme stress and strain conditions, and thus poses a challenge for fundamental research as well as being relevant in several areas of applied science and engineering. The complex contact and strain-rate dependent phenomena involved include geometrical and material non-linearities, such as wave and fracture propagation, plasticity, buckling, and friction. The theoretical description of such non-linearities has reached an advanced level of maturity only for each phenomenon taken singly; when they are coupled, it remains limited, owing to the severe mathematical complexity. Moreover, the related experimental tests are difficult and expensive, and usually unable to quantify and discriminate between the phenomena involved. In this scenario, computational simulation emerges as a fundamental and complementary tool for the investigation of such otherwise intractable problems. The aim of this PhD research was the development and use of computational models to investigate the behaviour of materials and structures undergoing simultaneously extreme contact stresses and strain rates, at different size and time scales. We focused on basic concepts not yet understood, studying both engineering and bio-inspired solutions. In particular, the developed models were applied to the analysis and optimization of macroscopic composite armours and of 2D-material-based multilayer armours, to the buckling-governed behaviour of aerographite tetrapods and of the related networks, and to the crushing behaviour under compression of modified honeycomb structures. To validate the approaches used, numerical-experimental-analytical comparisons are also proposed for each case.
478

Development of multilayer for protection from intense electric fields

Campostrini, Matteo January 2017 (has links)
The experimental work presented in this thesis was carried out to develop an innovative procedure for creating a protective nanostructured coating inside X-band radio-frequency cavities, key components of future particle accelerators. The purpose of the multilayer coating is to prevent breakdown due to high electric and magnetic fields: electrical discharges irreversibly damage the internal surface of the cavity and compromise the operation of the device. Interest in the topic stems from the need to decrease the length and cost of next-generation linear accelerators. To this end, it is essential to enhance the performance of X-band linacs up to a 100 MV/m accelerating gradient while keeping the electrical breakdown reliability as high as possible. Several studies have addressed different materials for developing these cavities [1][2], but the use of the physical vapor deposition (PVD) technique to obtain a nanostructured coating directly on the internal walls of these small cavities is not reported in the literature. The cavities are of the order of a few millimetres in size and the iris aperture ranges from 2 to 6 mm; for this reason direct PVD coating is not possible. Hence a mandrel, i.e. the negative shape of the cavity, is first coated using the PVD technique and finally chemically dissolved after copper electroforming [3]. The novel nanostructured coating is a multilayer composed of two high-purity, immiscible metals: copper, to guarantee the electrical conductivity of the cavity, and molybdenum, chosen because it is a refractory metal. The choice of immiscible materials is important because they do not form an alloy during the deposition phase; keeping a well-defined interface guarantees a barrier effect against the motion of defects inside the cavity material [4][5]. The experimental part of the thesis is divided into three parts: design and setup of the PVD deposition system, plasma discharge analysis and, finally, characterization of the coatings. This work is a collaboration between the Industrial Engineering Department (University of Trento) and the Legnaro National Laboratories of the National Institute for Nuclear Physics (LNL-INFN), and the research also involves several institutes in different countries: SLAC (USA), KEK (Japan) and UCLA (Los Angeles, USA).
479

Machine learning-based sensitivity analysis of surface parameters in numerical weather prediction model simulations over complex terrain

Di Santo, Dario 22 July 2024 (has links)
Land surface models (LSMs) implemented in numerical weather prediction (NWP) models use several parameters to suitably describe the surface and its interaction with the atmosphere. The determination of these parameters is often affected by many uncertainties, strongly influencing simulation results, yet the sensitivity of meteorological model results to them has not been studied systematically, especially over complex terrain, where the uncertainty is expected to be even larger. This work aims at identifying critical LSM parameters influencing the results of NWP models, focusing in particular on the simulation of thermally-driven circulations over complex terrain. While previous sensitivity analyses employed offline LSM simulations to evaluate the sensitivity to surface parameters, this study adopts an online coupled approach, utilizing the Noah-MP LSM within the Weather Research and Forecasting (WRF) model. To overcome computational constraints, a novel tool, the Machine Learning-based Automated Multi-method Parameter Sensitivity and Importance analysis Tool (ML-AMPSIT), is developed and tested. This tool allows users to explore the sensitivity of the results to model parameters using supervised machine learning regression algorithms, including Random Forest, CART, XGBoost, SVM, LASSO, Gaussian Process Regression, and Bayesian Ridge Regression. These algorithms serve as fast surrogate models, greatly accelerating sensitivity analyses while maintaining a high level of accuracy. The versatility and effectiveness of ML-AMPSIT enable the fast implementation of advanced sensitivity methods, such as the Sobol method, overcoming the computational limitations encountered with expensive models like WRF. The suitability of this tool for assessing the model's sensitivity to variations of specific parameters is first tested in an idealized sea-breeze case study in which six surface parameters are varied. The analysis then focuses on the evaluation of the sensitivity to surface parameters in the simulation of thermally-driven circulations in a mountain valley. Specifically, an idealized three-dimensional topography consisting of a valley-plain system is adopted, analysing a complete diurnal cycle of valley and slope winds. The analysis covers all the key surface parameters governing the interactions between Noah-MP and WRF. The proposed approach, novel in the context of LSM-NWP model coupling, draws from established applications of machine learning in various Earth science disciplines, underscoring its potential to improve the estimation of parameter sensitivities in NWP models.
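The following is a minimal sketch of the surrogate-based workflow described above, not ML-AMPSIT itself: sample the parameter space, train a fast regression surrogate (here a random forest) on a small set of expensive model runs, then evaluate Sobol indices on the cheap surrogate. The toy "expensive model", the parameter names and bounds, and the sample sizes are illustrative assumptions, and the sketch assumes the scikit-learn and SALib packages are available; ML-AMPSIT supports several other regressors and couples to WRF/Noah-MP runs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for an expensive NWP run: a scalar response to three
# illustrative surface parameters (names and functional form are hypothetical).
def expensive_model(x):
    albedo, roughness, soil_moisture = x
    return -8.0 * albedo + 2.0 * np.log(roughness) - 5.0 * soil_moisture ** 2

problem = {
    "num_vars": 3,
    "names": ["albedo", "roughness_length", "soil_moisture"],
    "bounds": [[0.1, 0.4], [0.01, 1.0], [0.05, 0.45]],
}

# A small design of 'expensive' runs used to train the surrogate.
rng = np.random.default_rng(3)
lo, hi = np.array(problem["bounds"]).T
X_train = rng.uniform(lo, hi, size=(200, 3))
y_train = np.array([expensive_model(x) for x in X_train])

surrogate = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# Sobol analysis performed on the cheap surrogate instead of the expensive model.
X_sobol = saltelli.sample(problem, 1024, calc_second_order=False)
y_sobol = surrogate.predict(X_sobol)
indices = sobol.analyze(problem, y_sobol, calc_second_order=False)

for name, s1, st in zip(problem["names"], indices["S1"], indices["ST"]):
    print(f"{name:18s} first-order S1={s1:5.2f}  total-order ST={st:5.2f}")
```

The key design choice, as in the thesis, is that the many thousands of evaluations the Sobol method needs are paid only on the surrogate, while the coupled WRF/Noah-MP model is run just for the training set.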
480

Electromagnetic Characterization of the Ionosphere in the Framework of the Magnetosphere-Ionosphere-Lithosphere Coupling

Recchiuti, Dario 04 November 2024 (has links)
The ionospheric environment has become a focal point in the study of earthquake-related anomalies. In particular, both electromagnetic anomalies and particle bursts have been detected in the ionosphere and proposed as potential seismo-related phenomena. This thesis addresses the challenges in distinguishing earthquake-induced electromagnetic anomalies from the complex and variable background of ionospheric signals. Utilizing data from the CSES-01 satellite, this work introduces a robust methodology for characterizing both medium-long and short-duration electromagnetic signals in the ionosphere. A new approach to defining ionospheric EM background is proposed, considering temporal and geographical variations, and a statistically rigorous definition of anomalies is introduced. Additionally, a novel algorithm is developed for the efficient detection of short-duration whistler waves, revealing significant insights into their spatiotemporal distributions. To explore the coupling mechanisms between electromagnetic anomalies and particle bursts, numerical simulations using a hybrid particle-in-cell code were conducted, simulating ionospheric plasma interactions with small amplitude Alfvén waves. The results demonstrate modifications in ion velocity distributions and the emergence of fast ion beams, providing the first estimates of time delays between the impact of the electromagnetic waves and plasma disturbances. This research advances the understanding of seismo-ionospheric coupling, offering valuable tools for the identification of earthquake-related anomalies.
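Purely as an illustration of the background-plus-threshold idea described above, and not of the thesis's actual pipeline or the CSES-01 data format: a background level of electromagnetic spectral power can be built per geographic/local-time bin from historical measurements, and a new measurement flagged as anomalous when it exceeds a high quantile of that bin's background. The bin sizes, the quantile, and all data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical historical measurements for one frequency channel:
# (magnetic-latitude band, local-time bin, log spectral power).
n_hist = 50_000
lat_bin = rng.integers(0, 18, n_hist)        # 10-degree latitude bands
lt_bin = rng.integers(0, 8, n_hist)          # 3-hour local-time bins
power = rng.normal(loc=lat_bin * 0.1, scale=0.5, size=n_hist)

# Background model: a high quantile of the historical power in each bin.
threshold = np.full((18, 8), np.nan)
for i in range(18):
    for j in range(8):
        sel = (lat_bin == i) & (lt_bin == j)
        if sel.any():
            threshold[i, j] = np.quantile(power[sel], 0.99)

# A new measurement is anomalous if it exceeds its bin's background threshold.
new_lat, new_lt = 7, 3
new_power = np.array([0.4, 1.1, 2.6])
is_anomaly = new_power > threshold[new_lat, new_lt]
print("bin threshold:", round(float(threshold[new_lat, new_lt]), 2), "anomalous:", is_anomaly)
```

Binning by geography and local time is what lets such a definition account for the strong spatial and temporal variability of the ionospheric background mentioned above.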
