Surface Functionalizations towards Nucleic Acid Purification: a nanoscale study

Marocchi, Lorenza, January 2014
Protein biosynthesis is performed by ribosomes, which translate the genetic information contained in a strand of mRNA and assemble the peptide chain. During translation, several ribosomes associate with a single strand of mRNA, forming supramolecular complexes known as polyribosomes (polysomes). This project aims at developing and studying a miniaturized purification system able to isolate and extract polysome-associated mRNA, namely mRNA under active translation. The resulting microdevice will constitute a faster, simpler and lower-cost alternative to the time-consuming traditional laboratory procedures for polysome purification and mRNA extraction (sucrose gradient centrifugation and phenol/ethanol RNA extraction). Polysome purification on the microdevice will be based on the immobilization of polysomes on the device surfaces, suitably treated to enhance polysome adhesion. Surface functionalization will be achieved by formation of Self-Assembled Monolayers (SAMs) of organic molecules. In particular, since both ribosomes and nucleic acids expose a high density of electrically charged moieties towards the environment [Anger et al., 2013], organic molecules containing charged functional groups will be used as SAM constituents. In this thesis, a characterization of gold and silicon oxide planar samples functionalized with different alkanethiol and alkylsilane SAMs is presented, together with a quantitative and qualitative evaluation of polysome adhesion performed mainly by Atomic Force Microscopy (AFM). A proof of principle of the purification and extraction of RNA from polysomes using a silicon/Pyrex microdevice is also reported. [Anger et al., 2013] Anger, A.M., Armache, J.P., Berninghausen, O., Habeck, M., Subklewe, M., Wilson, D.N. and Beckmann, R. (2013) Structures of the human and Drosophila 80S ribosome. Nature, 497(7447):80-85.

Strongly correlated quantum fluids and effective thermalization in non-Markovian driven-dissipative photonic systems

Lebreuilly, José Rafael Eric, January 2017
Collective quantum phenomena are fascinating, as they repeatedly challenge our comprehension of nature and its underlying mechanisms. The qualification "quantum" can be attributed to a generic many-body system whenever the interference effects related to the underlying wave nature of its elementary constituents can no longer be neglected, and a naive classical description in terms of interacting billiard balls fails to capture its most essential features. This interference phenomenon, called "quantum degeneracy", occurs at low temperatures and leads to spectacular collective behaviours such as the celebrated Bose-Einstein Condensation (BEC) phase transition, where a macroscopic fraction of a bosonic system of particles condenses below a critical temperature T_c into a single-particle state. Quantum coherence, when combined with inter-particle interactions, gives rise to highly non-classical frictionless hydrodynamic behaviours such as superfluidity (SF) and superconductivity (SC). Even more exotic quantum phases emerge in the presence of strong interactions, as matter reaches a "strongly correlated regime" dominated by quantum fluctuations, where each particle is able to significantly affect the surrounding fluid: characteristic examples are the Mott-Insulator (MI) quantum phase, where particles are localized on a lattice due to a strong interaction-induced blockade; the Tonks-Girardeau (TG) gas, where impenetrable bosons in one dimension acquire effective fermionic statistics up to a unitary transformation; and the Fractional Quantum Hall (FQH) effect, which occurs in the presence of a gauge field and features a special type of elementary excitation, the "anyon", possessing a fractional charge and obeying fractional statistics. These quantum many-body effects were first explored in systems well isolated from the external environment, such as ultra-cold atomic gases or electrons in solid-state systems, within a physical context well described by equilibrium statistical mechanics. Yet, over the last two decades a broad community has started investigating the possibility of stabilizing interacting quantum phases in novel nonlinear quantum optics architectures, where interacting photons replace their atomic and electronic counterparts. Thanks to their high level of controllability and flexibility, and the possibility of reaching the quantum degeneracy regime at exceptionally high temperatures, these platforms appear as extremely promising candidates for the "quantum simulation" of the most exotic many-body quantum problems: while pioneering experiments on semiconductor exciton-polaritons already reach the Bose-Einstein condensation and superfluid regimes, novel platforms such as superconducting circuits, coupled cavity arrays, or photons coupled to Rydberg EIT (Electromagnetically Induced Transparency) atoms have entered the so-called "photon blockade" regime, where photons behave as impenetrable particles, opening an encouraging pathway toward the future generation of strongly correlated phases of light. A specificity of quantum optics devices is their intrinsic non-equilibrium nature: the interplay between the practically unavoidable radiative and non-radiative losses and the external drive needed to replenish the photon gas leads the many-body system toward a steady state presenting important non-thermal features.
On the one hand, an overwhelmingly large variety of novel quantum phenomena is expected in the non-equilibrium framework, as breaking the thermal equilibrium condition relaxes severe constraints on the state of a quantum system and on the nature of its surrounding environment. On the other hand, we do not yet have an understanding of non-equilibrium statistical mechanics comparable with its well-established equilibrium counterpart, which relies on strong historical foundations. Understanding how to tame (and possibly exploit) non-equilibrium effects in order to stabilize interesting quantum phases in a controlled manner often proves a hard challenge. In that perspective, an important conceptual issue in the non-equilibrium physics of strongly interacting photons regards the possibility of stabilizing "incompressible quantum phases" such as the Mott-insulator or fractional quantum Hall states, and more generally of stabilizing the ground state of a given particle-number-conserving Hamiltonian, in a physical context where dissipative losses cannot be neglected. While the ability to quantum simulate these emblematic strongly correlated quantum phases in this novel experimental context would strongly benefit the quantum optics community, gaining such flexibility would also help build an important bridge between the equilibrium and non-equilibrium statistical physics of open quantum systems, giving controlled access to a whole new phenomenology at the interface between the two theories. In this thesis I address those questions, which I reformulate in the following manner:
- What are the conditions for the emergence of analogue equilibrium properties in open quantum systems in contact with a non-thermal environment?
- In particular, is it possible to stabilize strongly correlated quantum phases with a perfectly defined particle number in driven-dissipative photonic platforms, in spite of environment-induced losses and heating effects?
The structure of the thesis is the following.

[Chapter 1] We give an overview of the physics of many-body photonic systems. As a first step we address the weakly interacting regime in the physical context of exciton-polaritons: after describing the microscopic aspects of typical experiments, we move to the discussion of non-equilibrium Bose-Einstein condensation and the various mechanisms related to the emergence of thermal signatures at steady state. The second part of this chapter is dedicated to strongly interacting fluids: after a quick overview of several experimental platforms with good potential for the study of such physics in the near future, we discuss the relative performance of several schemes proposed to replenish the photonic population.

[Chapter 2] We investigate the potential of a non-Markovian pump scheme with a narrow-bandpass (Lorentzian-shaped) emission spectrum for the generation of strongly correlated states of light in a Bose-Hubbard lattice. Our proposal can be implemented by means of embedded inverted two-level emitters with a strong incoherent pumping toward the excited state. Our study confirms the possibility of stabilizing photonic Fock states in a single-cavity configuration, and strongly localized Mott-insulator states with density n=1 in a lattice. We show that a relatively moderate hopping is responsible for a depletion of the Mott state, which then moves toward a delocalized state reminiscent of the superfluid regime. Finally, we proceed to a mean-field analysis of the phase diagram, and unveil a Mott-to-superfluid transition characterized by a spontaneous breaking of the U(1) symmetry and an incommensurate density. The results of this chapter are based on the following publications:
- J. Lebreuilly, M. Wouters and I. Carusotto, "Towards strongly correlated photons in arrays of dissipative nonlinear cavities under a frequency-dependent incoherent pumping", C. R. Phys. 17(8), 836 (2016).
- A. Biella, F. Storme, J. Lebreuilly, D. Rossini, R. Fazio, I. Carusotto and C. Ciuti, "Phase diagram of incoherently driven strongly correlated photonic lattices", Phys. Rev. A 96, 023839 (2017).

[Chapter 3] In view of improving the performance of the scheme introduced in the previous chapter, and in particular of reproducing the equilibrium zero-temperature phenomenology in driven-dissipative photonic lattices, we develop a fully novel scheme based on non-Markovian reservoirs with tailored broadband spectra, which allows one to mimic the effect of a tunable chemical potential. Our proposal can be implemented by means of a small number of emitters and absorbers and is accessible to current technologies. We first analyse the case of a frequency-dependent emission with a square spectrum and confirm the possibility of stabilizing Mott-insulator states with arbitrary integer density. Unlike in the previous proposal, the Mott state is robust against both losses and tunneling. A sharp transition toward a delocalized superfluid-like state can be induced by strong values of the tunneling or by a change in the effective chemical potential. While an overall good agreement is found with the T=0 predictions, our analysis highlights small deviations from the equilibrium case in some parts of the parameter space, characterized by a non-vanishing entropy and the kinetic generation of doublon excitations. We finally consider an improved scheme involving additional frequency-dependent losses, and show that in that case the Hamiltonian ground state is fully recovered for any choice of parameters. Our proposal, whose functionality relies on generic energy-relaxation mechanisms and is not restricted to the Bose-Hubbard model, appears as a promising quantum simulator of zero-temperature physics in photonic devices. The results of this chapter are based on the following publication:
- J. Lebreuilly, A. Biella, F. Storme, D. Rossini, R. Fazio, C. Ciuti and I. Carusotto, "Stabilizing strongly correlated photon fluids with non-Markovian reservoirs", Phys. Rev. A 96, 033828 (2017).

[Chapter 4] We adopt a broader perspective, and analyse the conditions for the emergence of analogous thermal properties in driven-dissipative quantum systems. We show that the impact of an equilibrated environment can be mimicked by several non-Markovian and non-equilibrated reservoirs. Chapter 2 already features a preliminary result in that direction, showing that in the presence of a broad reservoir spectral density a given quantum system evolves toward a Gibbs ensemble with an artificial chemical potential and temperature. In this chapter we develop a broader analysis, focusing on the exactly solvable model of a weakly interacting Bose gas in the BEC regime. Our formalism, based on a quantum Langevin model, gives access to both static and dynamical properties: remarkably, we demonstrate not only the presence of equilibrium static signatures, but also the validity of the fluctuation-dissipation theorem. While for an arbitrary choice of reservoir spectral densities our results apply only to low-energy excitations, we predict that a fine-tuned choice of reservoirs mimicking the so-called Kennard-Stepanov condition leads to a fully apparent equilibration. This effect, which we call "pseudo-thermalization", implies that under very specific conditions an open quantum system can present all the properties of an equilibrated one in spite of a highly non-equilibrated environment. The results of this chapter are based on the following paper:
- J. Lebreuilly, A. Chiocchetta and I. Carusotto, "Pseudo-thermalization in driven-dissipative non-Markovian open quantum systems", arXiv:1710.09602 (submitted for publication).
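For orientation, the driven-dissipative Bose-Hubbard setting referred to in Chapters 2 and 3 can be summarized by a standard model (a generic sketch in my own notation, not equations reproduced from the thesis): photons hop between cavities with amplitude J, interact on-site with strength U, and are lost at rate Γ_l, while the non-Markovian pump enters through a frequency-dependent emission spectrum:

    H = -J \sum_{\langle i,j \rangle} a_i^\dagger a_j
        + \frac{U}{2} \sum_i a_i^{\dagger 2} a_i^2
        + \omega_c \sum_i a_i^\dagger a_i ,

    \partial_t \rho = -\frac{i}{\hbar}\,[H,\rho]
        + \frac{\Gamma_l}{2} \sum_i
          \big( 2 a_i \rho a_i^\dagger - a_i^\dagger a_i \rho - \rho a_i^\dagger a_i \big)
        + \mathcal{D}_{\mathrm{pump}}[\rho] ,

where the pump superoperator \mathcal{D}_{\mathrm{pump}} is non-Markovian, with an emission spectrum S_{em}(\omega) that is Lorentzian in the Chapter 2 proposal and square (broadband with sharp edges) in the Chapter 3 one.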

Engineering & characterization of a GFP-based biosensor for pH and chloride intracellular measurements

Rocca, Francesco, January 2014
ClopHensor, a new fluorescent ratiometric GFP-based biosensor, is a powerful tool for non-invasive pH and chloride quantification in cells. ClopHensor is a chimeric construct in which the pH- and chloride-sensing E2GFP is linked to the red protein DsRed-monomer, whose fluorescence is used as a reference signal. The E2GFP chloride dissociation constant of about 50 mM (at pH 7.3) makes it ideal for quantifying physiological chloride concentrations. However, the chloride affinity of E2GFP strongly depends on the pH of the solution: a precise chloride measurement therefore also requires a pH measurement. With the ratio-imaging technique, three different excitation wavelengths are necessary to estimate pH and chloride concentration. With the goal of reducing the number of excitation wavelengths required, in this thesis I present a detailed study of H148G-V224L-E2GFP, selected among several E2GFP variants for its improved photophysical and spectroscopic characteristics. H148G-V224L-E2GFP exhibits a chloride affinity and a pH sensitivity similar to ClopHensor. Interestingly, its emission spectra display two distinct emission peaks, at 480 nm and 520 nm, upon excitation at 415 nm. Importantly, fluorescence emission spectra collected at various pH values also display a clear isosbestic point at 495 nm. This property opens the innovative possibility of determining pH and chloride concentration using only two excitation wavelengths. Moreover, while being chloride-independent, the 520-to-495 nm ratio displays a pKa value of about 7.3, centered in the physiological pH range. These characteristics make it ideal for quantifying intracellular pH changes and chloride fluxes under physiological conditions. Applications of this new biosensor in living cells demonstrated its usefulness for ratio-imaging analysis. H148G-V224L-E2GFP+DsRed was successfully expressed in neuron-like cells, as a proof of concept that ratio-imaging analysis can also be performed in such cells. These results are very promising for the future expression of H148G-V224L-E2GFP+DsRed in brain neurons, where chloride plays a crucial role in neuronal activity. Purified H148G-V224L-E2GFP was also successfully loaded into polymeric vaterite nanospheres to characterize their endocytosis pathways in cells.
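Since the two-wavelength read-out described above reduces to inverting two standard calibration curves (a sigmoidal pH titration for the chloride-independent ratio, and static chloride quenching of the green fluorescence), the logic can be sketched as follows; all constants besides the quoted pKa of about 7.3 and Kd of about 50 mM are hypothetical placeholders, not the thesis's calibration values:

    import numpy as np

    PKA = 7.3                    # pKa of the chloride-independent ratio (from the text)
    R_ACID, R_BASE = 0.2, 2.0    # assumed ratio limits at low/high pH (placeholders)
    KD = 50.0                    # mM, chloride dissociation constant near pH 7.3

    def ph_from_ratio(r):
        """Invert the sigmoidal titration
        R(pH) = (R_acid + R_base*10**(pH-pKa)) / (1 + 10**(pH-pKa))."""
        return PKA + np.log10((r - R_ACID) / (R_BASE - r))

    def chloride_mM(f_obs, f_no_cl, kd=KD):
        """Static quenching F = F0/(1 + [Cl]/Kd)  =>  [Cl] = Kd*(F0/F - 1).
        kd should be corrected for the measured pH, since the chloride
        affinity of E2GFP is pH dependent."""
        return kd * (f_no_cl / f_obs - 1.0)

    print(ph_from_ratio(1.1))        # -> 7.3 with these placeholder limits
    print(chloride_mM(0.7, 1.0))     # -> about 21 mM for a 30% quench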

Tunnelling and Unruh-DeWitt methods in curved spacetimes

Acquaviva, Giovanni, January 2013
The analysis and the results contained in this work are rooted in a first contact between the quantum theory and the general theory of relativity. By first contact we mean that we are not considering candidates for "unified theories", but rather focus on aspects of the full quantum theory on changing geometric backgrounds: the analysis of such an interaction has already had important applications in cosmology, e.g. in the description of the evolution of fields in inflationary scenarios. Another compelling and still growing area of application is the study of the thermodynamical properties of gravitational systems, which covers the main bulk of this thesis.

Dynamical properties of Bose-Bose Mixtures

Sartori, Alberto, January 2016
This thesis presents a study of the dynamical properties of mixtures of ultracold Bose gases. The behaviour of this system is analysed in different regimes: with and without coherent coupling between the two components, in homogeneous and harmonic trapping potentials, and in different dimensions and geometries. Most of the results presented here have been obtained by means of numerical solutions of coupled Gross-Pitaevskii equations and have been compared with theoretical predictions (and sometimes experiments) describing the same phenomena. In particular, the stability of persistent currents in a two-component Bose-Einstein condensate in a toroidal trap is studied in both the miscible and the immiscible regime. In the miscible regime we show that superflow decay is related to linear instabilities of the spin-density Bogoliubov mode. We find a region of partial stability, where the flow is stable in the majority component while it decays in the minority component. We also characterize the dynamical instability appearing for a large relative velocity between the two components. In the immiscible regime the stability criterion is modified and depends on the specific density distribution of the two components. The effect of a coherent coupling between the two components is also discussed. A study of the collective modes of the minority component of a highly unbalanced Bose-Bose mixture is also presented. In the immiscible case we find that the ground state can be a two-domain-wall soliton. Although the mode frequencies are continuous at the transition, their behaviour is very different with respect to the miscible case. The dynamical behaviour of the solitonic structure and the dependence of the frequencies on the inter- and intra-species interactions are numerically studied using coupled Gross-Pitaevskii equations. The results of a study of the static and dynamic response of coherently coupled two-component Bose-Einstein condensates to a spin-dipole perturbation are also shown. The static dipole susceptibility is determined and is shown to be a key quantity for identifying the second-order ferromagnetic transition occurring at large inter-species interactions. The dynamics, obtained by quenching the spin-dipole perturbation, is strongly affected by whether the system is paramagnetic or ferromagnetic and by the correlation between the motional and internal degrees of freedom. In the paramagnetic phase the gas exhibits well-defined out-of-phase dipole oscillations, whose frequency can be related to the susceptibility of the system using a sum-rule approach. In particular, in the interaction SU(2)-symmetric case, when all the two-body interactions are the same, the external dipole oscillation coincides with the internal Rabi flipping frequency. In the ferromagnetic case, where linear response theory is not applicable, the system shows highly non-linear dynamics. In particular we observe phenomena related to ground-state selection: the gas, initially trapped in a domain-wall configuration, reaches a final state corresponding to the magnetic ground state plus small density ripples. Interestingly, the time during which the gas is unable to escape from its initial configuration is found to be proportional to the square root of the wall surface tension.
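For reference, the coupled Gross-Pitaevskii equations underlying most of these simulations take, in a generic form (my notation; the coherent Rabi coupling is the Ω term), the shape:

    i\hbar\,\partial_t \psi_1 = \Big[ -\frac{\hbar^2 \nabla^2}{2m} + V(\mathbf{r})
        + g_{11}|\psi_1|^2 + g_{12}|\psi_2|^2 \Big] \psi_1 \;-\; \hbar\Omega\,\psi_2 ,

    i\hbar\,\partial_t \psi_2 = \Big[ -\frac{\hbar^2 \nabla^2}{2m} + V(\mathbf{r})
        + g_{22}|\psi_2|^2 + g_{12}|\psi_1|^2 \Big] \psi_2 \;-\; \hbar\Omega\,\psi_1 .

For Ω = 0 the mixture is miscible when g_{12}^2 < g_{11} g_{22} and immiscible otherwise, while the SU(2)-symmetric case mentioned above corresponds to g_{11} = g_{22} = g_{12}.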

A new prompt gamma spectroscopy-based approach for range verification in proton therapy

Cartechini, Giorgio, 14 February 2023
Proton therapy is a well-established technology in radiotherapy, whose benefits stem from both physical and biological properties. Ions deposit the maximum dose, i.e. the ratio between the energy absorbed by the tissue and its mass (Gy = J/kg), in a localized region close to the end of their range (the Bragg peak, BP). The combination of this favorable depth-dose profile with advanced delivery techniques translates into a high dose conformality in the tumor, as well as into a superior sparing of normal tissue compared to conventional radiotherapy with photons. Today, there are 107 proton therapy and 14 carbon ion centers operating worldwide, and many new ones are under construction. In Italy, the Trento proton therapy center and the proton and carbon ion center in Pavia (Centro Nazionale di Adroterapia Oncologica, CNAO) are already operational, while a third center is under construction at the Istituto Europeo Oncologico (IEO) in Milan and will be in operation in 2023. Although clinical results have been encouraging, numerous treatment uncertainties remain major obstacles to the full exploitation of proton therapy. One of the crucial challenges is monitoring the dose delivered during the treatment, both in terms of absolute value and of spatial distribution inside the body. Ideally, the actual beam range in the patient should be equal to the value prescribed by the Treatment Planning System (TPS). However, there are sizeable uncertainties at the time of irradiation, due to anatomical modifications, patient alignment, beam delivery, and dose calculation. Treatment plans are optimized to be conformal in terms of target coverage and healthy-tissue sparing, and robust towards uncertainties. For this reason, the irradiation target is defined as a geometrical volume (the Planning Target Volume, PTV) corresponding to the physical tumor volume, to which safety margins of a few millimeters are added isotropically. Range errors determine the selection of the safety margins applied to the tumor volume, whose values depend on clinical protocols as well as on the treated area. For example, the Massachusetts General Hospital (MGH) prescribes safety margins equal to 3.5% of the nominal range + 1 mm, while the University of Florida proton therapy center considers 2.5% of the nominal range + 1.5 mm. Decreasing the range uncertainties would reduce the safety margins, and hence the dose delivered to the normal tissue surrounding the tumor. In addition, a reduction of the proton range uncertainty could lead to novel beam arrangements making greater use of the distal beam edge, making it possible to maintain target coverage while reducing the dose to organs at risk (OARs) and healthy tissue. Monitoring the proton range in vivo is a key tool to achieve this goal, and thus to improve the overall treatment effectiveness. Several techniques have been proposed to address this fundamental issue, most of which exploit secondary particles that are produced by the interaction of protons with target nuclei and are detectable outside the patient. Using these techniques, pre-clinical and clinical tests have obtained promising results in terms of absolute proton range estimation. However, none of the investigated techniques is currently employed in the daily clinical workflow.
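To make the two margin recipes quoted above concrete, here is a minimal arithmetic sketch (the formulas are as stated in the text; the 20 cm range is just an example):

    def ptv_margin_mm(nominal_range_cm, frac, offset_mm):
        """Distal safety margin = frac * nominal range + fixed offset."""
        return frac * nominal_range_cm * 10.0 + offset_mm   # cm -> mm

    R = 20.0  # example nominal range in cm
    print(f"MGH:     {ptv_margin_mm(R, 0.035, 1.0):.1f} mm")   # 3.5% + 1.0 mm -> 8.0 mm
    print(f"Florida: {ptv_margin_mm(R, 0.025, 1.5):.1f} mm")   # 2.5% + 1.5 mm -> 6.5 mm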
A method already tested on patients is based on the detection of PET (Positron Emission Tomography) photons. The amount and emission distribution of PET photons depend on the target activity induced by the beam, as well as on the delivered dose. Although this method has been clinically tested on patients, it has several limitations. The yield of annihilation photons produced during treatment depends on several factors, including the activity produced by the beam, which is fairly limited (up to two orders of magnitude lower than in diagnostic PET), the metabolic biological washout, and the background due to prompt radiation originating from other reaction channels. These issues have been partly resolved by the use of in-beam PET scanners, which measure annihilation photons during the treatment. One of the most advanced versions is the INSIDE (INnovative Solution for In-beam Dosimetry in hadronthErapy) PET scanner installed at CNAO (Centro Nazionale di Adroterapia Oncologica) in Pavia, Italy. Currently it is part of a clinical trial and has acquired in-beam PET data during the treatment of various patients. Although encouraging results were obtained, some limitations to its clinical applicability remain: in-beam PET is designed to work with low-duty-cycle accelerators, and so far it has only been installed on a fixed beam line. The other promising approach for in vivo range monitoring is based on the detection of prompt gammas (PGs) from nuclear de-excitation due to beam interactions in the tissue. The adjective prompt reflects the fact that they are emitted just a few picoseconds to nanoseconds after the impact of the proton on the target nucleus. The PGs escaping the patient have energies up to approximately 8 MeV, and their production is spatially correlated with the proton range. The feasibility of in vivo prompt gamma-based range verification for proton therapy has been demonstrated by numerous experimental and Monte Carlo studies, as well as by its recent application in clinical practice for inter-fractional range variations. The current accuracy achieved on patients for retrieving the range of a single pencil beam is 2-3 mm. A major limitation identified by all studies, which prevents the full exploitation of any prompt-gamma-based approach for single-spot range verification, is the low statistics of the detected events. This issue is caused by: i) the short duration of a single spot delivery; ii) the immense gamma-ray production rate during delivery; iii) the finite rate capability of detectors; iv) electronic throughput limits; and v) the signal-to-background ratio. A particular PG range verification technique is prompt gamma spectroscopy (PGS). It relies on the analysis of the prompt gamma energy spectrum, which is characterized by specific energy lines corresponding to the reaction channels of the irradiated protons with the elements of the human body. The most common reactions are those with oxygen and carbon atoms, which become excited and eventually emit prompt gamma rays up to 8 MeV. Different studies on simplified geometries demonstrated that, using the PGS technique, it is possible to estimate not only proton range variations, but also differences in the elemental composition of tissues. In this study, we present a novel approach for in vivo range verification via prompt gamma spectroscopy, based on creating signature gammas that are emitted only when protons traverse the tumor, and whose yield is directly related to the beam range.
We propose to achieve this goal by loading the tumor with a drug-delivered stable element that emits characteristic de-excitation PGs following nuclear interactions with the primary protons. The use of tumor-marker elements is not new in the clinic: an example is diagnostic PET, which employs β+-emitting isotopes linked to a drug carrier that is taken up by the tumor, allowing its diagnosis via PET scans (e.g. 18-FDG). In our approach, the radioisotope is substituted by a stable element, which emits PGs only when a proton interacts with it. By detecting the signature gamma lines emitted by the tumor-marker element, it is possible to assess whether the beam has interacted with the tumor and to increase the accuracy of the proton range estimation.

Selection and characterization of candidate tumor markers. The first part of this work focused on the identification of potential candidate elements following three criteria: i) emission of signature gamma energy lines upon proton irradiation, different from the characteristic emission of 12-Carbon and 16-Oxygen; ii) no toxicity for the patient; iii) availability of a carrier that maximizes tumor selectivity. While (i) is a purely physical constraint, and was deeply investigated in this work, points (ii) and (iii) also depend on several biological parameters, such as the achievable element concentration in the tumor, the molecular carrier, tumor physiology, etc. To fulfill these criteria, we looked at elements that are already employed in medicine, either for diagnostic or therapeutic purposes, and for which a drug carrier already exists; this favors the applicability of our methodology in the clinic. Combining these criteria with simulations from the code TALYS, we identified three candidate tumor markers: 31-Phosphorus, 63-Copper and 89-Yttrium. We employed TALYS to characterize the elements in terms of energy spectrum and gamma production cross section, and compared the results to carbon and oxygen, the two most abundant elements in the body. TALYS indicates that the three candidate elements produce signature gamma lines between 1 and 2 MeV, while the carbon and oxygen signatures lie between 3 and 8 MeV. Furthermore, the gamma yield per incident proton generated by the labeling elements is on average one order of magnitude higher than that of carbon and oxygen. To verify the TALYS theoretical calculations, we designed an experimental campaign of prompt gamma spectroscopy measurements to characterize the emission of these elements when irradiated with a therapeutic proton beam. We irradiated two types of targets: solids made of 99.99% pure candidate elements, and water-based solutions containing the label elements. While the solids were used to characterize the PG energy spectrum emitted by the elements without background, the liquid targets were used to study the methodology in a setup closer to the clinical scenario, i.e. by investigating the gamma emission of a compound material with a well-defined concentration of the marker element. Furthermore, using water-based solutions we were able to characterize the PG spectrum emitted at different element concentrations (from 2 M to 0.1 M), and to evaluate the minimum value that provides a detectable signature. We characterized the elements by irradiating the different targets with monoenergetic proton beams at 25 MeV and 70 MeV.
Because of the target thickness, the beam loses all its energy inside the target; these energies can thus be taken as representative of a proton beam stopping in the first 5 mm of the tumor and beyond 4 cm depth, respectively. The 70 MeV proton beam was available at the experimental room of the Trento proton therapy center (Italy), while the Cyrcé cyclotron (Institut Pluridisciplinaire Hubert CURIEN, IPHC) in Strasbourg (France) accelerates protons up to 25 MeV. In the experiments performed in Trento and Strasbourg we employed a LaBr3:Ce gamma-ray detector, well suited to our measurements thanks to its fast detection response and high energy resolution. The data confirmed that all candidates emit signature PGs distinct from those of water (used here as a proxy for normal tissue), and that the gamma yield is directly proportional to the element concentration in the solution. We detected four specific gamma lines for 31P (1.14, 1.26, 1.78 and 2.23 MeV) and for 63Cu (0.96, 1.17, 1.24 and 1.326 MeV), while only one for 89Y (1.06 MeV). We compared all experiments with TOPAS MC, one of the leading toolkits for simulating particle interactions in matter for medical physics applications. The comparison between simulations and experiments suggested that TOPAS is able to predict the energy of all characteristic gammas detected in the experimental spectrum, while the yield is either underestimated or overestimated, depending on the gamma-ray energy and on the element. Previous works had already shown TOPAS's limited accuracy in reproducing nuclear de-excitation gammas, even for the most common materials such as 16-Oxygen and 12-Carbon, and suggested that this discrepancy stems from the nuclear reaction models implemented in the physics list. Our findings support the hypothesis that the nuclear reaction cross-section models available in TOPAS MC predict results with limited accuracy also for 31P, 63Cu and 89Y.

Prompt-gamma yield and proton range correlation. The findings of the first part of this work indicated that loading the tumor with 31P, 63Cu or 89Y generates a signature PG energy spectrum under irradiation with protons at therapeutic energies. In the second part of the project, we experimentally showed how the PG yield correlates with the proton range. We designed a multilayer phantom to mimic the irradiation of a deep-seated tumor. The phantom was composed of 15.5 cm of solid water (a proxy for normal tissue), followed by 5 cm of a liquid target filled with water-based solutions containing the marker element (the tumor region), and an additional 2 cm of solid water for protons stopping downstream of the tumor. We irradiated the phantom with protons of energies ranging from 154 MeV (16.3 cm range in water) to 184 MeV (22.5 cm range) in order to build an experimental curve of the PG yield of different gamma-ray lines versus the proton range. We also acquired a blind spectrum at an unknown proton energy and used the curve to predict the range. Using the de-excitation peaks at 6.12 MeV from 16O, 4.44 MeV from 12C and 1.26 MeV from 31P, we successfully predicted the proton range of the blind data to within 2 mm of the nominal value. The same test was repeated using a 63-Copper target, but due to the lower yield of its signature gammas we overestimated the proton range by 5 mm. As already observed for the liquid targets, large discrepancies were found between the experimental data and the simulation, confirming that TOPAS MC is not an accurate tool for predicting the PG yield.
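The blind-range retrieval described above amounts to fitting a calibration curve of line yield versus range and inverting it; a minimal sketch of that logic with synthetic numbers (not the thesis data, and assuming an approximately linear response) could be:

    import numpy as np

    # Hypothetical calibration scan standing in for the 154-184 MeV measurements
    ranges_cm = np.array([16.3, 17.8, 19.4, 21.0, 22.5])
    counts    = np.array([1200., 1650., 2100., 2580., 3020.])  # synthetic net counts

    slope, intercept = np.polyfit(ranges_cm, counts, 1)   # linear yield-vs-range fit

    def predict_range(blind_counts):
        """Invert the calibration to estimate the range of a blind irradiation."""
        return (blind_counts - intercept) / slope

    print(f"{predict_range(2350.0):.1f} cm")   # blind spectrum -> estimated range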
Toward the clinical application. In the last part of the thesis we discuss the applicability of the presented approach to patients. All experimental measurements were performed under conditions that are not clinically realistic, since they were meant to investigate the basic principles of the methodology and provide a proof of principle. Using the measurements acquired at 70 MeV with liquid targets, we evaluated the PGs expected during a proton therapy treatment if the tumor were irradiated with 10^9 protons, the element were loaded at a concentration of 0.4 mM (a possible value when a glucose-based carrier is used), and the detection system had a larger solid-angle acceptance (5 sr) than the one used in our experiments (0.13 sr). We also started a preliminary in-silico investigation of our methodology applied to a real patient geometry. All the experimental and simulated results presented so far were obtained by irradiating homogeneous phantoms, without taking into account patient heterogeneity and the complex elemental compositions of the different tissues. To reproduce the patient geometry we used a Computed Tomography (CT) image, i.e. a 3D map of the patient's anatomy and tissue densities. The tumor region was localized on the prostate, and its elemental composition was artificially modified to achieve a homogeneous 31-Phosphorus, 63-Copper or 89-Yttrium concentration at a 5% mass fraction, in order to reduce the computational time. TOPAS MC was used to simulate the irradiation of the tumor region with a 174 MeV proton beam, and we simulated beam position shifts from the nominal plan of 0.2, 0.4, 0.7, 1.0, 1.2, 1.4, 1.7 and 2 cm. Following the approach of the Massachusetts General Hospital group for prompt gamma spectroscopy range verification, we estimated the voxel-based gamma-ray yield from the elemental composition of the patient (CT scan) and from the gamma-ray production cross sections; TOPAS MC was used only for the calculation of the proton kinetic energy in each voxel. This analysis highlighted that the gammas generated by the label elements are strongly correlated with the elemental composition of the tissues traversed by the beam: when the beam partially misses the tumor region, the number of signature PGs emitted by the marker element decreases. Several aspects of the methodology still require further investigation and optimization from the physical, engineering and biological points of view. In vitro and in vivo toxicity studies must be conducted to determine the best carrier molecule maximizing the element concentration in the tumor. Furthermore, to increase the accuracy of the proton range estimation, a novel gamma spectroscopy detection system must be designed, fully integrated with the gantry treatment room. In conclusion, in this work we demonstrated that loading the tumor with a label element before proton treatment generates signature gammas that can be used to verify the beam range in vivo. We selected three candidate elements, already used in the clinic, as promising tumor markers, and successfully employed them to simulate a proton range verification methodology on a homogeneous phantom. We also showed that the current nuclear reaction models for prompt gamma spectroscopy applications are not accurate in predicting the PG yield of any of the elements investigated.
Further work is necessary to investigate the effect of non-homogeneous element uptake, due to tumor physiology, on the proton range accuracy, as well as the diffusion of the label element into the normal tissue surrounding the tumor.
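For intuition, the clinical extrapolation mentioned above is essentially a product of scaling factors applied to a measured yield; a back-of-the-envelope sketch (the baseline counts and experimental fluence are made-up placeholders, while the 10^9 protons, 0.4 mM, 5 sr and 0.13 sr figures are from the text):

    measured_counts = 1.0e4                      # hypothetical counts in one signature line
    protons_exp, protons_clin = 1.0e10, 1.0e9    # hypothetical exp. fluence vs clinical spot
    conc_exp_mM, conc_clin_mM = 2000.0, 0.4      # 2 M experimental vs 0.4 mM clinical
    omega_exp_sr, omega_clin_sr = 0.13, 5.0      # detector solid angles

    expected = (measured_counts
                * (protons_clin / protons_exp)
                * (conc_clin_mM / conc_exp_mM)
                * (omega_clin_sr / omega_exp_sr))
    print(f"expected signature counts per spot: {expected:.1f}")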

Light-induced engrams in in-vitro neuronal cultures

Zaccaria, Clara, 28 April 2022
This thesis describes the development of two platforms for optogenetically inducing single-neuron excitation and memory formation in biological in-vitro neuronal networks. The first platform is a photonic chip with aperiodic grating scatterers able to create a specific light distribution on the surface of the chip. The second platform is a digital light processor device integrated into a microscopy setup, able to project arbitrary patterns onto the sample plane with 3 μm resolution. By stimulating many neurons simultaneously, this system was shown to induce potentiation in the illuminated cells, and thus engram formation.

Development of Free Energy Calculation Methods for the Study of Monosaccharides Conformation in Computer Simulations

Autieri, Emmanuel, January 2011
This thesis is devoted to the study of the conformation of monosaccharides in their six-membered ring form. The main goal is to develop and apply new computational tools to investigate conformational properties and to improve the description of carbohydrates in the framework of molecular dynamics simulations. In the field of monosaccharides, modeling the system within the molecular dynamics framework presents troublesome aspects. The most important issue is that some force fields (e.g., the chosen GROMOS 45A4 parameter set) fail to reproduce the conformational preferences of the sugar constituents, with the appearance of unphysical conformations. This failure stems from the fact that the conformational behavior, dominated by a few structures, generates a severe bottleneck: the non-ergodicity of the system by any practical means. This aspect explains the interest in free energy calculations; methods exist, such as umbrella sampling or metadynamics, that accelerate the sampling of different conformations by adding bias forces. In general, accelerated sampling methods are based on the choice of collective variables (CVs), which is of particular importance for the proper reconstruction of free energy landscapes. In the field of conformational analysis, suitable CVs have to be considered to describe the non-planar, puckered conformations of cyclic structures. One of the main goals of this work is the enhancement of the GROMOS 45A4 force field for carbohydrates with respect to its ability to describe the ring conformation (that is, the puckering) of six-membered rings. To this end, the development of efficient computational tools for the investigation of the general puckering problem is presented. In particular, we indicate how to exploit the capabilities of the metadynamics algorithm applied to the investigation of puckered ring conformers, exploring also different parametrizations of puckered structures to assess their respective advantages as collective variables for metadynamics.
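For context, metadynamics fills the free-energy landscape by depositing Gaussians along a few collective variables s, and for six-membered rings a common (though not the only) choice of puckering CVs is the Cremer-Pople set; the following generic formulas are a sketch of that machinery, not the specific parametrizations compared in the thesis:

    V_{\mathrm{bias}}(s,t) = \sum_{t' = \tau, 2\tau, \dots < t} w\,
        \exp\!\left( -\frac{\lVert s - s(t') \rVert^2}{2\sigma^2} \right),

with, for out-of-plane displacements z_j of the six ring atoms (up to sign conventions),

    q_2 \cos\varphi = \sqrt{\tfrac{1}{3}} \sum_{j=1}^{6} z_j \cos\!\big(\tfrac{4\pi (j-1)}{6}\big), \qquad
    q_2 \sin\varphi = -\sqrt{\tfrac{1}{3}} \sum_{j=1}^{6} z_j \sin\!\big(\tfrac{4\pi (j-1)}{6}\big),

    q_3 = \sqrt{\tfrac{1}{6}} \sum_{j=1}^{6} (-1)^{j-1} z_j, \qquad
    Q = \sqrt{q_2^2 + q_3^2}, \quad \cos\theta = q_3 / Q,

so that (Q, θ, φ), i.e. amplitude, meridian and azimuth on the puckering sphere, can serve as CVs.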

Quantum Monte Carlo Methods applied to strongly correlated and highly inhomogeneous many-Fermion systems

Dandrea, Lucia, January 2009

The role of noise in brain dynamics processes

Vilardi, Andrea, January 2009
Noise is ubiquitous in nature, and for this reason it is a widely investigated topic. In experimental physics, whatever the measurement process under investigation, noise, defined as a random fluctuation of the measured signal, is usually considered detrimental. Many techniques have therefore been developed to reduce its impact: for example, the most common post-processing procedure to improve a noisy measurement consists in averaging the measured signal over repeated sessions. Rather surprisingly, the opposite, non-detrimental effect has also been observed: the response of a nonlinear system to a weak input signal is, under suitable conditions, optimized by the presence of a particular, non-vanishing noise level. In the last decades, the wide spectrum of such phenomena has been referred to as stochastic resonance. In neuroscience, noise can be considered ubiquitous in brain processes as well. Theoretical considerations, as well as robust experimental evidence, demonstrate its role in many phenomena occurring at different levels of the neural system: for example, the firing rate of neurons is not predictable by reason of their intrinsic variability [1, 5, 6]; measurements of brain activity with imaging techniques such as EEG, MEG or fMRI are always affected by inner, noisy mechanisms. Recently, neuroscientists have put forward the idea that stochastic fluctuations of neural activity have a functional role in brain dynamics. However, this functional role has thus far been observed in very few experimental situations. Regarding human behaviour, decision making (our main field of investigation) is also influenced by several noisy processes. Also in this case, random fluctuations of brain processes typically have a detrimental role, limiting, for example, the possibility of exactly predicting human behaviour not only in everyday life, but also under controlled conditions, such as psychophysical experiments. The aim of this work is to gain insight into how noise influences decisional mechanisms. In this framework, two different kinds of noise were investigated, namely endogenous and exogenous. Endogenous noise refers to stochastic fluctuations present within the neural system, whereas exogenous noise denotes some form of environmental noise external to the perceptual system. These two distinct types of noise define two independent research directions, both investigated here by means of behavioural experiments. In other words, we were interested in investigating how noise acts on human behaviour, in particular in the case of discrimination processes. To this purpose, psychophysical experiments were carried out addressing both the acoustic and the visual modality. To investigate how a proper amount of exogenous noise can act positively on human perception, improving performance in detection experiments (an effect that, as mentioned above, is interpreted as an occurrence of stochastic resonance), we carried out experiments in the acoustic modality. This is the topic of the first part of this work. In particular, we used a detection paradigm in which pure-tone stimuli were superimposed with different levels of noise and subjects were asked to signal the presence of the tone. Usually a sufficient noise level masks the signal. However, what we observed was a tiny, yet statistically significant, improvement of stimulus detection ability in correspondence with a specific noise level.
The experimental approach used, "Yes/No" experiments, is usually interpreted in terms of Signal Detection Theory (SDT). The two most important SDT parameters are the sensitivity d′ and the decisional criterion. Since any noise-driven improvement of detection ability is a tiny effect, all the ingredients combining to form a decision must lie under the experimenter's control. In particular, in addition to stimulus detectability, knowledge of the decisional strategy at any time is crucial in order to obtain reliable data. In other words, the demand for criterion stability, and more generally the problem of criterion dynamics, turned out to play a critical role and urged us to focus our attention on this specific topic. The scientific literature on the possibility of conditioning the subject's criterion and reconstructing its dynamics with the highest possible time resolution (the single trial) is extremely scant: the only two works on this subject do not provide robust methods to tackle the issue. The second part of the dissertation is completely focused on our theory of criterion-setting dynamics and the related experimental evidence. An ad-hoc experiment involving the visual perceptual modality allowed us to test a model of trial-by-trial criterion dynamics based on the theory of feedback. A feedback loop was implemented by informing the subject, after each trial, about his/her performance. When asked to maximize the rate of correct responses in an orientation discrimination experiment, subjects showed the ability to continuously change their internal criterion. In more detail, the optimal criterion position oscillated at a certain frequency, set a priori by the experimenter; we observed that subjects were able to modify their decisional criterion in order "to follow" the optimal position. Two main assumptions of our model are that the subject stores information coming from previous trials, and that he/she is willing to improve his/her performance. One of the most important assumptions of SDT is that the strategy adopted by an observer performing a task, i.e. the criterion positioning, is completely independent of his/her discrimination ability. We implemented this consideration in a model of criterion dynamics in which this parameter turns out to be completely independent of d′. The possibility of disentangling sensitivity and criterion allows the experimenter to force the subject's inclination to be more liberal or conservative, independently of his/her ability in performing the task, and to monitor at each trial the result of this conditioning. How the human neural system sets and maintains a decision criterion over time is still an open question. This problem has recently received particular attention within the more general context of the neural mechanisms underlying the decision process. Our approach, based on behavioural experiments, provides a novel investigation tool to tackle the issue.
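Since the analysis above hinges on the two SDT indices, it may help to recall that both are simple functions of the hit and false-alarm rates under the standard equal-variance Gaussian model; a minimal sketch (textbook formulas, not code from the dissertation):

    from statistics import NormalDist

    def sdt_indices(hit_rate, fa_rate):
        """Equal-variance Gaussian SDT:
        d' = z(H) - z(FA);  criterion c = -(z(H) + z(FA)) / 2."""
        z = NormalDist().inv_cdf
        d_prime = z(hit_rate) - z(fa_rate)
        criterion = -(z(hit_rate) + z(fa_rate)) / 2.0
        return d_prime, criterion

    dp, c = sdt_indices(0.80, 0.20)
    print(f"d' = {dp:.2f}, c = {c:.2f}")   # -> d' = 1.68, c = 0.00 (unbiased)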
