About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
441

Development of enhanced double-sided 3D radiation sensors for pixel detector upgrades at HL-LHC

Povoli, Marco January 2013
The upgrades of High Energy Physics (HEP) experiments at the Large Hadron Collider (LHC) will call for new radiation-hard technologies to be applied in the next generations of tracking devices, which will be required to withstand extremely high radiation doses. In this sense, one of the most promising approaches to silicon detectors is the so-called 3D technology. This technology realizes columnar electrodes penetrating vertically into the silicon bulk, thus decoupling the active volume from the inter-electrode distance. 3D detectors were first proposed by S. Parker and collaborators in the mid '90s as a new sensor geometry intended to mitigate the effects of radiation damage in silicon. 3D sensors are currently attracting growing interest in the field of High Energy Physics, despite their more complex and expensive fabrication, because of their much lower operating voltages and enhanced radiation hardness. 3D technology was also investigated in other laboratories, with the intent of reducing fabrication complexity and aiming at medium-volume sensor production in view of the first upgrades of the LHC experiments. This work describes all the efforts in design, fabrication and characterization of 3D detectors produced at FBK for the ATLAS Insertable B-Layer, in the framework of the ATLAS 3D sensor collaboration. In addition, the design and preliminary characterization of a new batch of 3D sensors is also described, together with new applications of 3D technology.
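A rough, textbook-level illustration of why decoupling the drift distance from the substrate thickness lowers the operating voltage is the standard planar full-depletion estimate; the relation below is general detector physics, not a formula taken from this thesis:
\( V_{FD} \approx \dfrac{q\,N_{\mathrm{eff}}\,d^{2}}{2\,\varepsilon_{0}\varepsilon_{r}} \)
In a planar sensor the relevant distance d is the wafer thickness, whereas in a 3D sensor it is the inter-column spacing, which can be several times smaller; the depletion voltage therefore drops roughly quadratically, and the shorter drift path also limits the effect of radiation-induced charge trapping.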
442

Forschungsinformationssysteme im Kontext von Open Science (Research information systems in the context of Open Science)

Nagel, Stefanie 28 July 2023
In view of steadily growing competition for students, academic staff and research funding, increasing reporting obligations, and transfer and transparency efforts, TU Bergakademie Freiberg plans to implement modern research information management. The Rectorate has therefore decided to introduce a research information system (FIS) that usefully brings together the currently decentralized information on the numerous research activities and results of Freiberg's researchers and creates the basis for transparent external communication. A first information event on the introduction project 'FIS@TUBAF', which is coordinated by the University Library, took place on 12 July 2023. On this occasion, we dedicate the August issue of the Open Science Snack series to the topic 'Research information systems in the context of Open Science'.
443

Sports venues’ effect on social welfare : Cost-Benefit analysis of infrastructure investments within Lugnet area in Falun

Biedrzycki, Remigiusz January 2016
Economic analysis and evaluation of sport events and sports infrastructure is a widely researched topic, especially when it comes to mega sports events. As many major and mega events require large amounts of resources, governments and municipalities worldwide have to make decisions regarding support for such events. To determine whether and to what extent events should be subsidised with public resources, a thorough analysis of the potential impacts of the event has to be conducted. Most studies within this field choose Economic Impact Analysis as a method, while many researchers point out the need for cost-benefit analysis, as only a comprehensive analysis of costs and benefits for society can justify public subsidies for sport events and sports infrastructure. This paper presents a cost-benefit approach to sports venue evaluation. A cost-benefit analysis made in this paper, on the case of the Swedish outdoor area of Lugnet, Falun, presents the possible effects of sports infrastructure investments on social welfare. The analysis was aimed at investments made prior to hosting the 2015 FIS Nordic World Ski Championships in Falun. Presenting results for three alternative scenarios, this study compares different effects on social welfare. This research paper highlights areas that need to be investigated to ensure better quality of results, and it can thus be beneficial for further studies on the topic. The results presented in this paper can also be useful for policy makers, as many of the potential welfare effects were described.
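The welfare criterion underlying such a cost-benefit analysis can be summarized by the usual net-present-value rule; the notation below is generic and is not taken from the paper:
\( NPV = \sum_{t=0}^{T} \dfrac{B_t - C_t}{(1+r)^t} \)
where \(B_t\) and \(C_t\) are the social benefits and costs in year \(t\) and \(r\) is the social discount rate; an investment scenario is judged welfare-improving when the NPV is positive.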
444

Addressing nonlinear systems with information-theoretical techniques

Castelluzzo, Michele 07 July 2023
The study of experimental recordings of dynamical systems often consists in the analysis of signals produced by those systems. Time series analysis comprises a wide range of methodologies ultimately aiming at characterizing the signals and, eventually, gaining insights into the underlying processes that govern the evolution of the system. A standard way to tackle this issue is spectrum analysis, which uses Fourier or Laplace transforms to convert time-domain data into a more useful frequency space. These analytical methods make it possible to highlight periodic patterns in the signal and to reveal essential characteristics of linear systems. Most experimental signals, however, exhibit strange and apparently unpredictable behavior, which requires more sophisticated analytical tools in order to gain insight into the nature of the underlying processes generating those signals. This is the case when nonlinearity enters the dynamics of a system. Nonlinearity gives rise to unexpected and fascinating behavior, among which is the emergence of deterministic chaos. In the last decades, chaos theory has become a thriving field of research for its potential to explain complex and seemingly inexplicable natural phenomena. The peculiarity of chaotic systems is that, despite being created by deterministic principles, their evolution shows unpredictable behavior and a lack of regularity. These characteristics make standard techniques, like spectrum analysis, ineffective when trying to study such systems. Furthermore, the irregular behavior gives the appearance of these signals being governed by stochastic processes, even more so when dealing with experimental signals that are inevitably affected by noise. Nonlinear time series analysis comprises a set of methods which aim at overcoming the strange and irregular evolution of these systems by measuring characteristic invariant quantities that describe the nature of the underlying dynamics. Among those quantities, the most notable are possibly the Lyapunov exponents, which quantify the unpredictability of the system, and measures of dimension, like the correlation dimension, which unravel the peculiar geometry of a chaotic system's state space. These are ultimately analytical techniques, which can often be exactly estimated in the case of simulated systems, where the differential equations governing the system's evolution are known, but can nonetheless prove difficult or even impossible to compute on experimental recordings. A different approach to signal analysis is provided by information theory. Despite being initially developed in the context of communication theory, in the seminal work of Claude Shannon in 1948, information theory has since become a multidisciplinary field, finding applications in biology and neuroscience, as well as in social sciences and economics. From the physical point of view, the most remarkable contribution of Shannon's work was to show that entropy is a measure of information and that computing the entropy of a sequence, or a signal, answers the question of how much information is contained in that sequence. Alternatively, considering the source, i.e. the system that generates the sequence, entropy gives an estimate of how much information the source is able to produce. Information theory comprises a set of techniques which can be applied to study, among others, dynamical systems, offering a framework complementary to standard signal analysis techniques.
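For orientation, the quantity referred to here is the Shannon entropy of a discrete source; the expression below is the standard definition rather than anything specific to this thesis:
\( H(X) = -\sum_{x} p(x)\,\log_2 p(x) \)
Measured in bits, it is maximal for a uniform distribution and zero for a deterministic one, which is exactly the sense in which it quantifies how much information the source produces.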
The concept of entropy, however, was not new in physics, since it had first been defined in the deeply physical context of heat exchange in thermodynamics in the 19th century. Half a century later, in the context of statistical mechanics, Boltzmann revealed the probabilistic nature of entropy, expressing it in terms of statistical properties of the particles' motion in a thermodynamic system. A first link between entropy and the dynamical evolution of a system was thus made. In the following years, after Shannon's work, the concept of entropy was further developed through the works of, to cite only a few, Von Neumann and Kolmogorov, being used as a tool for computer science and complexity theory. It is in particular in Kolmogorov's work that information theory and entropy are revisited from an algorithmic perspective: given an input sequence and a universal Turing machine, Kolmogorov found that the length of the shortest set of instructions, i.e. the program, that enables the machine to compute the input sequence is related to the sequence's entropy. This definition of the complexity of a sequence already hints at the difference between random and deterministic signals: a truly random sequence requires as many instructions as the sequence itself is long, since there is no other option than programming the machine to copy the sequence point by point, whereas a sequence generated by a deterministic system simply requires knowing the rules governing its evolution, for example the equations of motion in the case of a dynamical system. It is therefore through the work of Kolmogorov and, independently, of Sinai that entropy was directly applied to the study of dynamical systems and, in particular, deterministic chaos. The so-called Kolmogorov-Sinai entropy is, in fact, a well-established measure of how complex and unpredictable a dynamical system can be, based on the analysis of trajectories in its state space. In the last decades, the use of information theory in signal analysis has contributed to the elaboration of many entropy-based measures, such as sample entropy, transfer entropy, mutual information and permutation entropy, among others. These quantities make it possible to characterize not only single dynamical systems, but also to highlight the correlations between systems and even more complex interactions like synchronization and chaos transfer. The wide spectrum of applications of these methods, as well as the need for theoretical studies to provide them with a sound mathematical background, makes information theory still a thriving topic of research. In this thesis, I approach the use of information theory on dynamical systems starting from fundamental issues, such as estimating the uncertainty of Shannon entropy measures on a sequence of data in the case of an underlying memoryless stochastic process. This result, besides giving insights into sensitive and still-unsolved aspects of using entropy-based measures, provides a relation between the maximum uncertainty on Shannon entropy estimations and the size of the available sequences, thus serving as a practical rule for experiment design. Furthermore, I investigate the relation between entropy and some characteristic quantities of nonlinear time series analysis, namely the Lyapunov exponents. Some examples of this analysis on recordings of a nonlinear chaotic system are also provided.
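As a concrete illustration of one of the entropy-based measures listed above, here is a minimal sketch of the Bandt-Pompe permutation entropy of a time series; the function name, parameters and test signals are illustrative assumptions, not code from the thesis.

```python
import numpy as np
from math import factorial
from collections import Counter

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1D signal (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    n_patterns = len(x) - (order - 1) * delay
    # Map each window of `order` samples to its ordinal (ranking) pattern.
    counts = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n_patterns)
    )
    p = np.array(list(counts.values()), dtype=float) / n_patterns
    h = -np.sum(p * np.log2(p))
    return h / np.log2(factorial(order))  # normalize to the range [0, 1]

# A smooth periodic signal is more "ordered" than white noise.
t = np.arange(2000)
print(permutation_entropy(np.sin(0.1 * t) + 0.1 * np.random.randn(2000)))  # low value
print(permutation_entropy(np.random.randn(2000)))                          # close to 1
```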
Finally, I discuss other entropy-based measures, among them mutual information, and how they compare to analytical techniques aimed at characterizing nonlinear correlations between experimental recordings. In particular, the complementarity between information-theoretical tools and analytical ones is shown on experimental data from the field of neuroscience, namely magnetoencephalography and electroencephalography recordings, as well as on meteorological data.
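A minimal sketch of how the mutual information between two recordings can be estimated with a plain two-dimensional histogram; this simple binned estimator is shown only for orientation and is not the estimator used in the thesis.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Binned histogram estimate of I(X;Y) in bits for two equally long signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                       # joint probability table
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x (column vector)
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y (row vector)
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Independent signals give I close to 0; a shared component raises it.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(10_000), rng.standard_normal(10_000)
print(mutual_information(a, b))            # near zero (small positive binning bias)
print(mutual_information(a, a + 0.5 * b))  # clearly positive
```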
445

IN VIAGGIO ATTRAVERSO LA FISICA: Proposta di un curriculum di fisica per aree tematiche (A journey through physics: a proposal for a physics curriculum organized by thematic areas)

Perini, Marica 20 June 2023
The question asked by many students at school repeats itself year after year: "Prof, why do we have to study physics?" Explaining well, conveying a love for the discipline and using innovative methodologies are not enough. The Italian National Guidelines (Indicazioni Nazionali), being "indications" and not "prescriptions", encourage teachers to freely design and test learning paths in which the teacher is a guide who stimulates the creativity and autonomy of the students. The work presented in this doctoral thesis consists in a restructuring, by thematic areas, of the physics curriculum of the first two years of the scientific lyceum and in its trial in the classroom, which involved fifteen teachers and almost 380 students. The project originated as part of a renewal that had become necessary in a scientific lyceum of the Autonomous Province of Trento where, in recent years, an increase in the number of learning gaps and a growing disaffection towards the discipline had been observed. The school responded by increasing the hours dedicated to physics from two to three per week and, at the same time, by asking me to propose and coordinate a revision of the curriculum, in collaboration with the Laboratorio di Comunicazione delle Scienze Fisiche of the University of Trento. Physics Education Research (hereafter PER) has for over twenty years highlighted the effectiveness of teaching in context, and school regulations at the provincial, national and European level would allow it, since they invite experimentation with innovative paths aimed at the acquisition of competences. While in some foreign countries entire thematic curricula have been designed and successfully tested, in Italian schools teachers do not appear to succeed in integrating the context-based teaching-learning units proposed by PER into their planning in a permanent and effective way. It was therefore decided to investigate the reasons why this approach is not adopted systematically. First, a survey was administered to about 300 students attending the last three years of scientific lyceums in the Autonomous Province of Trento, focused on their experience of learning physics in the first and second year, in order to obtain information on their perception of the methodologies adopted and the contents covered in the first two years. Then, about twenty scientific lyceum teachers actively teaching the discipline and directly interested in the curriculum revision were involved in study and working groups. In these meetings the teachers, while considering learning in context through a thematic approach effective, highlighted some critical issues that are also reported in the literature: for example, the lack of time to think about and design meaningful paths, the need to choose themes that are relevant and stimulating for the class but on which the teacher feels prepared, and the need for consistency between the competences required by regulations and those achievable through thematic paths. In light of what emerged from this investigation, four thematic paths were devised, whose choice was the result of long and careful reflection.
It was in fact necessary to identify current themes that could interest most students and that would make it possible to address, through active learning methodologies, most of the contents required by the study plans. Moreover, the aim was to identify themes that would make it possible to bring into schools the results of PER and the work carried out over the years by the Laboratorio di Comunicazione delle Scienze Fisiche of the Physics Department of Trento. The four paths, designed with particular attention to misconceptions, to the use of low-cost laboratory materials, to the process of knowledge construction, to the role of communication and to that of models and experiments in the study of physical phenomena, whether natural or human-induced, are: 1. SARÀ VERO? (Could it be true?) A scientific approach to beliefs, fake news, viral videos and posts. 2. IL TELESCOPIO (The telescope). Building a telescope to understand the laws of optics. 3. PASSO DOPO PASSO (Step by step). The analysis of walking and running to address the basics of mechanics. 4. CON LA TESTA TRA LE NUVOLE (With your head in the clouds). Meteorological analysis to talk about measurements and thermal properties. Each path is introduced by two stories, accompanied by suitable laboratory activities, which teachers of comprehensive schools can use in their planning so as to favour the construction of vertical paths between the first and second cycle of education. The project was created for the scientific lyceum but, at the request of many teachers, it was also extended to other types of lyceums. In the 2021-2022 school year and in the first months of the 2022-2023 school year, ten scientific lyceum classes and as many classes from non-scientific lyceums joined and completed the trial. The trial was carried out by choosing to train the teachers so that they could work directly with their own students, without the mediation of the researcher. It took shape through constant contact with the teachers who proposed the paths in their classes and was monitored through questionnaires and discussion meetings addressed to teachers and students. In two cases learning was monitored by means of a control class but, although the teachers noticed that the average marks of the experimental classes were slightly higher than those of the control classes, it is considered premature to conclude that this approach significantly affected the learning process. Even though a large number of teachers and students was involved, the fact that the trial took place over a single school year does not allow definitive conclusions to be drawn. Teachers should have the time to internalize the proposed paths and methodologies; therefore, in order to evaluate the effectiveness of the project, the establishment of a multi-year action-research group should be considered. In any case, the comparison between the initial and final questionnaires administered to all classes and the reading of the students' comments show the effectiveness of this approach in terms of motivation and attitude towards the discipline.
Many students in fact expressed the wish that this project be extended to other topics as well, highlighting in particular some positive aspects: group work, discussions, the possibility of making hypotheses and expressing their own ideas, carrying out experiments with low-cost materials, building measuring instruments, moments of exchange with classmates and with the teacher, and the connection with reality. Since the project was created precisely for this purpose, it can be concluded that the available data confirm, to a certain extent, the validity of this approach. Finally, it is believed that this way of working in synergy with teachers has helped to strengthen the relationship between the school world and the research world, in line with what is envisaged by the University's Strategic Plan.
446

Entanglement certification in quantum many-body systems

Costa De Almeida, Ricardo 07 November 2022
Entanglement is a fundamental property of quantum systems and its characterization is a central problem for physics. Moreover, there is an increasing demand for scalable protocols that can certify the presence of entanglement. This is primarily due to the role of entanglement as a crucial resource for quantum technologies. However, systematic entanglement certification is highly challenging, and this is particularly the case for quantum many-body systems. In this dissertation, we tackle this challenge and introduce some techniques that allow the certification of multipartite entanglement in many-body systems. This is demonstrated with an application to a model of interacting fermions that shows the presence of resilient multipartite entanglement at finite temperatures. Moreover, we also discuss some subtleties concerning the definition of entanglement in systems of indistinguishable particles and provide a formal characterization of multipartite mode entanglement. This requires us to work with an abstract formalism that can be used to define entanglement in quantum many-body systems without reference to a specific structure of the states. To further showcase this technique, and also motivated by current quantum simulation efforts, we use it to extend the framework of entanglement witnesses to lattice gauge theories.
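For orientation, the textbook definition of an entanglement witness, the object being extended to lattice gauge theories here, is an observable \(W\) such that
\( \operatorname{Tr}(W\rho) \ge 0 \) for every separable state \( \rho \), while \( \operatorname{Tr}(W\rho_{e}) < 0 \) for at least one entangled state \( \rho_{e} \);
measuring a negative expectation value of \(W\) therefore certifies entanglement without requiring full state tomography. This is the standard definition, included only as background.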
447

Quantum algorithms for many-body structure and dynamics

Turro, Francesco 10 June 2022
Nuclei are objects made of nucleons: protons and neutrons. Several dynamical processes that occur in nuclei are of great interest to the scientific community and for possible applications. For example, nuclear fusion can help us produce a large amount of energy with a limited use of resources and limited environmental impact. Few-nucleon scattering is an essential ingredient to understand and describe the physics of the core of a star. The classical computational algorithms that aim to simulate microscopic quantum systems suffer from an exponential growth of the computational time as the number of particles is increased. Even using today's most powerful HPC devices, the simulation of many processes, such as nuclear scattering and fusion, is out of reach due to the excessive amount of computational time needed. In the 1980s, Feynman suggested that quantum computers might be more efficient than classical devices in simulating many-particle quantum systems. Following Feynman's idea of quantum computing, a complete change in computational devices and simulation protocols has been explored in recent years, moving towards quantum computation. Recently, the prospect of a realistic implementation of efficient quantum calculations was demonstrated both experimentally and theoretically. Nevertheless, we are not yet in an era of fully functional quantum devices, but rather in the so-called "Noisy Intermediate-Scale Quantum" (NISQ) era. As of today, quantum simulations still suffer from the limitations of imperfect gate implementations and from the quantum noise of the machine, which impair the performance of the device. In this NISQ era, studies of complex nuclear systems are out of reach. The evolution and improvement of quantum devices will hopefully help us solve hard quantum problems in the coming years. At present, quantum machines can be used to produce demonstrations or, at best, preliminary studies of the dynamics of few-nucleon systems (or other equally simple quantum systems). These systems are to be considered mostly toy models for developing prospective quantum algorithms. However, in the future, these algorithms may become efficient enough to allow simulating complex quantum systems on a quantum device, proving more efficient than classical devices, and eventually helping us study hard quantum systems. This is the main goal of this work: developing quantum algorithms, potentially useful in studying the quantum many-body problem, and attempting to implement such quantum algorithms in different, existing quantum devices. In particular, the simulations made use of the IBM QPUs, of the Advanced Quantum Testbed (AQT) at Lawrence Berkeley National Laboratory (LBNL), and of the quantum testbed recently established at Lawrence Livermore National Laboratory (LLNL) (or a device-level simulator of this machine). Our research aim is to develop quantum algorithms for general quantum processors. Therefore, the same quantum algorithms are implemented on different quantum processors to test their efficiency. Moreover, some uses of quantum processors were also conditioned by their availability during the time span of my PhD. The most common way to implement a quantum algorithm is to combine a discrete set of so-called elementary gates. A quantum operation is then realized in terms of a sequence of such gates. This approach suffers from the large number of gates (the depth of the quantum circuit) generally needed to describe the dynamics of a complex system.
An excessively large circuit depth is problematic, since the presence of quantum noise would effectively erase all the information during the simulation. It is still possible to use error-correction techniques, but they require a huge amount of extra quantum registers (ancilla qubits). An alternative technique that can be used to address these problems is the so-called "optimal control technique". Specifically, rather than employing a set of pre-packaged quantum gates, it is possible to optimize the external physical drive (for example, a suitably modulated electromagnetic pulse) that encodes a multi-level complex quantum gate. In this thesis, we start from the work of Holland et al., "Optimal control for the quantum simulation of nuclear dynamics", Physical Review A 101.6 (2020): 062307, where a quantum simulation of real-time neutron-neutron dynamics is proposed, in which the propagation of the system is enacted by a single dense multi-level gate derived from the nuclear spin interaction at leading order (LO) of chiral effective field theory (EFT) through an optimal control technique. Hence, we generalize the two-neutron spin simulations, re-including spatial degrees of freedom with a hybrid algorithm: the spin dynamics are implemented within the quantum processor, while the spatial dynamics are computed applying classical algorithms. We call this method classical-quantum coprocessing. The quantum simulations using optimal control methods and the discrete gate set approach will be presented. By applying the coprocessing scheme through optimal control, we face a possible bottleneck due to the classical computational time required to compute the microwave pulses. A solution to this problem will be presented. Furthermore, an investigation of an improved way to efficiently compile quantum circuits based on the Similarity Renormalization Group will be discussed. This method simplifies the compilation in terms of digital gates. The most important result contained in this thesis is the development of an algorithm for performing an imaginary time propagation on a quantum chip. It belongs to the class of methods for evaluating the ground state of a quantum system based on operating a Wick rotation of the real-time evolution operator. The resulting propagator is not unitary, implementing in some way a dissipation mechanism that naturally leads the system towards its lowest-energy state. Evolution in imaginary time is a well-known technique for finding the ground state of quantum many-body systems. It is at the heart of several numerical methods, including Quantum Monte Carlo techniques, which have been used with great success in quantum chemistry, condensed matter and nuclear physics. The classical implementations of imaginary time propagation suffer (with few exceptions) from an exponential increase in the computational cost with the dimension of the system. This fact calls for a generalization of the algorithm to quantum computers. The proposed algorithm is implemented by expanding the Hilbert space of the system under investigation by means of ancillary qubits. The projection is obtained by applying a series of unitary transformations having the effect of dissipating the components of the initial state along excited states of the Hamiltonian into the ancillary space. A measurement of the ancillary qubit(s) then removes such components, effectively implementing a "cooling" of the system.
The theory and testing of this method, along with some proposals for improvements, will be thoroughly discussed in the dedicated chapter.
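To make the imaginary-time idea summarized above concrete, here is a minimal classical sketch (plain matrix arithmetic, no quantum hardware or ancillary qubits) showing how repeated application of exp(-tau*H) with renormalization projects a state onto the ground state; the toy Hamiltonian and all names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-qubit Hamiltonian (XX + ZZ coupling); purely illustrative.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.kron(X, X) + np.kron(Z, Z)

tau = 0.2
propagator = expm(-tau * H)        # non-unitary Wick-rotated evolution exp(-tau*H)

rng = np.random.default_rng(1)
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)         # random start: generic overlap with the ground state

for _ in range(200):
    psi = propagator @ psi
    psi /= np.linalg.norm(psi)     # renormalize: excited components decay away

energy = float(np.real(psi.conj() @ H @ psi))
print(energy, float(np.min(np.linalg.eigvalsh(H))))  # both approach the lowest eigenvalue
```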
448

Modeling the interaction of light with photonic structures by direct numerical solution of Maxwell's equations

Vaccari, Alessandro January 2015
The present work analyzes and describes a method for the direct numerical solution of Maxwell's equations of classical electromagnetism. This is the FDTD (Finite-Difference Time-Domain) method, along with its implementation in an "in-house" computing code for large parallelized simulations. Both are then applied to the modeling of photonic and plasmonic structures interacting with light. These systems are often too complex, both geometrically and materially, to be mathematically tractable, and an exact analytic solution in closed form, or as a series expansion, cannot be obtained. The only way to gain insight into their physical behavior is thus to seek a numerically approximated, though convergent, solution. This is a current trend in modern physics because, apart from perturbative methods and asymptotic analysis, which represent, where applicable, the typical instruments for dealing with complex physico-mathematical problems, the only general way to approach such problems is the direct approximate numerical solution of the governing equations. Today this choice is made possible by the enormous and widespread computational capabilities offered by modern computers, in particular High Performance Computing (HPC) done on parallel machines with a large number of CPUs working concurrently. Computer simulations are now a sort of virtual laboratory, which can be set up rapidly and cheaply to investigate various physical phenomena; computational physics has thus become a sort of third way between the experimental and theoretical branches. The plasmonics application of the present work concerns the scattering and absorption analysis of single and arrayed metal nanoparticles, when surface plasmons are excited by an impinging beam of light, in order to study the radiation distribution inside a silicon substrate behind them. This has potential applications in improving the efficiency of photovoltaic cells. The photonics application of the present work concerns the analysis of the optical reflectance and transmittance properties of an opal crystal. This is a regular and ordered lattice of macroscopic particles which can stop light propagation in certain wavelength bands, and whose study has potential applications in the realization of low-threshold lasers, optical waveguides and sensors. For the latter, in fact, the crystal response is tuned to its structural parameters and symmetry, and varies by varying them. The present work on the FDTD method represents an enhancement of a previous one carried out for my MSc Degree Thesis in Physics, now also geared towards the visible and neighboring parts of the electromagnetic spectrum. It is organized in the following fashion. Part I provides an exposition of the basic concepts of electromagnetism which constitute the minimum, though partial, theoretical background needed to formulate the physics of the systems analyzed here or to be analyzed in possible further developments of the work. It summarizes Maxwell's equations in matter and the time-domain description of temporally dispersive media. It also addresses the plane-wave representation of an electromagnetic field distribution, mainly the far-field one. The Kirchhoff formula is described and derived, in order to calculate the angular radiation distribution around a scatterer. Gaussian beams in the paraxial approximation are also briefly treated, along with their focalization by means of an approximate diffraction formula useful for their numerical FDTD representation.
Finally, a thorough description of planarly multilayered media is included, which can play an important ancillary role in the homogenization procedure of a photonic crystal, as described in Part III, but also in other optical analyses. Part II properly concerns the description and implementation of the FDTD numerical method. Various aspects of the method are treated which globally contribute to a working and robust overall algorithm. Particular emphasis is given to those parts representing an enhancement of previous work. These are: the analysis, from the existing literature, of a new class of absorbing boundary conditions, the so-called Convolutional Perfectly Matched Layer, and their implementation; the analysis, from the existing literature, and implementation of the Auxiliary Differential Equation Method for the inclusion of media with frequency-dependent electric permittivity, according to various general polarization models; the description and implementation of a "plane wave injector" for representing impinging beams of light propagating in an arbitrary direction, which can be used to represent, by superposition, focalized beams; and the parallelization of the FDTD numerical method by means of the Message Passing Interface (MPI) which, by using the suitable user-defined MPI data structures proposed here, results in a robust and scalable code, running on massively parallel High Performance Computing machines like the IBM BlueGene/Q with a number of cores of order 2×10^5. Finally, Part III gives the details of the specific plasmonics and photonics applications made with the "in-house" developed FDTD algorithm, to demonstrate its effectiveness. After Chapter 10, devoted to the validation of the FDTD code implementation against a known solution, Chapter 11 is about plasmonics, with the analytical and numerical study of single and arrayed metal nanoparticles of different shapes and sizes, when surface plasmons are excited on them by a light beam. The presence of a passivating embedding silica layer and a silicon substrate is also included. Chapter 12 is about the FDTD modeling of a face-centered cubic (FCC) opal photonic crystal sample, with a comparison between the numerical and experimental transmittance/reflectance behavior. A homogenization procedure for the discontinuous lattice structure of the crystal is suggested, by means of an averaging procedure and a planarly multilayered media analysis, through which the reflecting characteristics of the crystal sample can be better understood. Finally, a procedure for the numerical reconstruction of the crystal's banded omega-k dispersion curve inside the first Brillouin zone is proposed. Three appendices providing details on specific topics dealt with during the exposition conclude the work.
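To make the core of the method concrete, here is a minimal one-dimensional FDTD sketch in vacuum, with the Yee staggering of the fields in space and time and normalized units; it is a didactic toy under stated assumptions, not the parallel "in-house" code described in the thesis.

```python
import numpy as np

# Minimal 1D FDTD in vacuum, normalized units, Courant number S = c*dt/dx = 0.5.
nx, nt, S = 400, 300, 0.5
Ez = np.zeros(nx)         # electric field on integer grid points (PEC ends stay zero)
Hy = np.zeros(nx - 1)     # magnetic field staggered half a cell between them

for n in range(nt):
    Hy += S * (Ez[1:] - Ez[:-1])                      # update H from the curl of E
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])                # update E from the curl of H
    Ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)    # soft Gaussian pulse source

# After 300 steps the pulse has split and travelled about 150 cells in each direction.
print(float(np.abs(Ez).max()))
```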
449

Synthesis, Characterization and Functionalization of Iron Oxide Magnetic Nanoparticles for Diagnostics and Therapy of Tumors

Dalbosco, Luca January 2012
In the last decade nanotechnologies have greatly developed in many research fields such as engineering, electronics, biology and many others. They offer several possibilities to design tools, to create new techniques or improve already existing ones, and to discover innovative applications. And nanotechnology research is just at the beginning. One of the most interesting aspects of this topic is the size of nanostructures. These materials are a thousand times smaller than a cell and have a size compatible with proteins, enzymes and many other biological molecules. For this reason many research groups specialized in biotechnology have started to invest people and resources in this new scientific possibility. Following this very promising trend, BIOtech, a research group for biotechnology at the University of Trento, has proposed the Nanosmart project. Developed together with several prestigious institutes all over the world, this project aims to exploit the possibilities of nanotechnology in biological research. The purpose of this challenge is the design, development and production of magnetic nanoparticles for use in the diagnostics and therapy of cancer. Magnetic nanoparticles (MNP) are spherical agglomerates of iron oxide, a few tens of nanometers in size, which can be exploited in many ways. Being magnetic, they can be used as contrast agents in magnetic resonance imaging (MRI). Having also a high absorption coefficient in the radio-frequency band, they can locally increase the temperature of the host tissues, which can be exploited for hyperthermia treatments. By entrapping drugs in one of their multilayers, MNP can be used as inert carriers for drug delivery: due to their small size they can enter biological tissues, cross the plasma membrane of cells and release the drug only at predetermined targets. My PhD started together with the project, so I had the possibility to follow this research from the beginning. In these years many problems have been handled, many errors have been made, many brilliant ideas have been shelved, but also new abilities have been acquired, important collaborations were born, and alternative structures have been conceived and, fortunately, realized. Trying to eliminate unnecessary details and focusing on the main purpose of this work, in this thesis I want to illustrate the long "fil rouge" that connects the idea of producing a nanoparticle that can cure tumors to the point of verifying its effectiveness.
450

Implementation of an all-optical setup for insect brain optogenetic stimulation and two-photon functional imaging

Zanon, Mirko 14 April 2020
The insect brain is a very complex but at the same time small, simplified and accessible model compared to the mammalian one. In neuroscience a huge number of works adopt Drosophila as an animal model, given its ease of maintenance and, above all, of genetic manipulation. With such a model one can investigate many behavioral tasks and at the same time have access to a whole brain in vivo, with improved specificity and cellular-resolution capabilities. Still, a remarkable goal would be to gain precise control over the neural network, in order to fully manipulate specific areas of the brain, acting directly on the network nodes of interest. This is possible thanks to optogenetics, a technique that exploits photosensitive molecules to modulate molecular events in living cells and neurons. At the same time, it is possible to perform a neuronal readout with light, exploiting calcium-based reporters; in this way, the investigation of neuronal responses gains in temporal and spatial resolution. This is an all-optical approach that brings many advantages to the study of neural networks and gives insight into the functional connectivity of the system under investigation. We present here a setup that combines a two-photon imaging microscope, capable of in vivo imaging with sub-cellular resolution and an excellent penetration depth down to hundreds of microns, with diode-laser optogenetic stimulation. With such a setup we investigate the Drosophila brain in vivo, stimulating single units of the primary olfactory system (the so-called glomeruli, about 20 μm in diameter). To our knowledge this is one of the first times a similar all-optical approach has been used in such an animal model: we confirm, in this way, the possibility to perform these experiments in vivo, with all the advantages coming from the improved accessibility of our model. Moreover, we present results using a sample co-expressing GCaMP6 and ChR2-XXL, an optimally performing sensor and actuator, largely exploited in the field for their high efficiency: these have rarely been used in combination, given their spectral overlap; nevertheless, we are able to show the feasibility of this combined approach, making it possible to take advantage of both these performing molecules. Finally, we show different approaches of data analysis to infer relevant information about the correlation and time response of different areas of the brain, which can give us hints in favor of some functional connectivity between olfactory subunits.
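As a sketch of the kind of correlation analysis mentioned at the end of the abstract, the following computes ΔF/F traces for two regions of interest and their normalized cross-correlation; the array names, baseline convention and synthetic responses are illustrative assumptions, not the analysis pipeline of the thesis.

```python
import numpy as np

def delta_f_over_f(trace, baseline_pct=10):
    """ΔF/F using a low percentile of the raw trace as the baseline F0."""
    f0 = np.percentile(trace, baseline_pct)
    return (trace - f0) / f0

def normalized_xcorr(a, b):
    """Cross-correlation of two z-scored traces; the lag of the peak gives the delay."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    c = np.correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(a))
    return lags, c

# Two synthetic glomerular responses, the second delayed by 15 frames.
t = np.arange(600)
roi1 = 100 + 20 * np.exp(-((t - 200) / 40.0) ** 2) + np.random.randn(600)
roi2 = 100 + 20 * np.exp(-((t - 215) / 40.0) ** 2) + np.random.randn(600)
lags, c = normalized_xcorr(delta_f_over_f(roi1), delta_f_over_f(roi2))
print(lags[np.argmax(c)])  # peak lag of about -15 frames: roi1 leads roi2
```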
