11

All purpose physics wheel class

Jones, Emil January 2008
This thesis describes the process of creating a general-purpose wheel class. The class is built on the Bullet physics engine and is intended for use as a component in Craft Animations' vehicle simulations. Three forms of wheel suspension are supported: independent wheels with collision bodies, linked wheels with collision bodies, and wheels with ray casting. The report shows how each of these suspensions is constructed. Finally, the different techniques are discussed and compared.
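As an illustration of the ray-cast suspension mentioned above (the approach Bullet's btRaycastVehicle also uses), a wheel can be modelled as a spring and damper acting along a downward ray from the chassis. A minimal Python sketch with hypothetical names and values, not the thesis code:

```python
def raycast_wheel_force(hit_distance, rest_length, max_travel,
                        spring_k, damper_c, compression_speed):
    """Suspension force for one ray-cast wheel.

    A ray is fired downward from the chassis mount; the spring compresses
    by (rest_length - hit_distance) and a damper opposes the compression
    velocity. Returns the upward force applied to the chassis in newtons.
    """
    compression = rest_length - hit_distance
    if compression <= 0.0:
        return 0.0                              # wheel in the air: no contact
    compression = min(compression, max_travel)  # clamp to suspension travel
    return spring_k * compression + damper_c * compression_speed

# Example: the ray hits the ground 0.25 m below a 0.30 m rest length,
# with the suspension compressing at 0.2 m/s.
force = raycast_wheel_force(hit_distance=0.25, rest_length=0.30,
                            max_travel=0.10, spring_k=35000.0,
                            damper_c=4500.0, compression_speed=0.2)
print(f"Suspension force: {force:.0f} N")
```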
12

Fysik och genus / Physics and gender

Gerstig, Madeleine January 2023
In Sweden, fewer women than men currently choose to pursue further education in physics, even though girls generally finish year 9 with higher final grades in the subject than boys. This leads to an uneven gender distribution in professions that require physics knowledge, and the physics profession thereby becomes male-dominated. The purpose of this quantitative study was to find out how large the differences are between boys and girls in year 9 in Sweden in their knowledge of, and attitudes towards, physics, and how social factors such as norms and family background affect their attitude to the subject. The study also examined what may cause these differences and how they might be reduced. This was investigated by having year 9 students take a knowledge test in Newtonian dynamics and answer a semi-structured questionnaire about attitudes towards science subjects, with a main focus on physics. The results are analysed and discussed primarily from a pragmatic perspective and to some extent from a hermeneutic one. The results showed that girls have a more negative attitude towards physics than boys and are less willing than boys to consider working as physicists in the future. However, girls showed a somewhat more positive attitude towards the school's physics lessons than boys, and to a greater extent preferred to learn through teacher-led instruction. Girls also had somewhat lower knowledge of Newtonian dynamics than boys. The results further showed a high correlation between students' attitudes towards the school's physics lessons and their interest in learning more physics, indicating that physics lessons play a significant role in students' attitudes to the subject. The correlation between students' experiences of physics outside school and their interest in learning more physics was moderate, indicating that social factors such as what students do in their spare time matter for their attitude towards physics. The number of informants was, however, too low and the margins of error too large to draw any general conclusions. Overall, the students had a negative attitude towards physics, and a large majority could not imagine working as physicists in the future. Many of the students explained this by finding physics a difficult subject, and wished that physics teaching were made more comprehensible to them, preferably with connections to their own everyday lives.
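The reported correlations are, in essence, correlation coefficients computed over questionnaire scores; one common choice is Pearson's r. A minimal sketch with invented data (the study's exact statistic is not specified here):

```python
from scipy.stats import pearsonr

# Invented Likert-scale scores (1-5) for eight students:
attitude_to_lessons = [4, 2, 5, 3, 1, 4, 2, 5]
interest_in_physics = [5, 2, 4, 3, 2, 4, 1, 5]

r, p_value = pearsonr(attitude_to_lessons, interest_in_physics)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")  # r near 1 = strong correlation
```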
13

Physics-based compact model of HEMTs for circuit simulation

Yigletu, Fetene Mulugeta 19 September 2014
This thesis targets the modeling of III-V HEMT devices. A physics-based compact model of AlGaN/GaN HEMTs for circuit simulation is presented, covering drain current, gate charge, and gate capacitances. The core drain-current and gate-charge models are derived from a simple charge-control model, developed from the solutions of Poisson's and Schrödinger's equations for the active operating area of the device. The models are simple, continuous, and applicable over the whole operating regime of the device. A separate model is also presented for the current-collapse effect, which is a serious issue in AlGaN/GaN HEMTs. The current-collapse model uses the core current model as a framework, resulting in a robust large-signal model that can be used with and without the presence of current collapse. In addition, a nonlinearity analysis and modeling of commercial AlGaAs/GaAs pHEMTs using the Volterra series is presented.
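For orientation, the charge-control idea can be reduced to a textbook sketch: the 2DEG sheet charge is taken as roughly proportional to the gate overdrive, and the drain current follows from the gradual-channel approximation. A minimal illustrative sketch, not the thesis model (whose expressions come from the coupled Poisson and Schrödinger solutions), with all parameter values assumed:

```python
def drain_current(vgs, vds, vth=-3.0, c_g=3.5e-3, mu=0.15, w=100e-6, l=0.25e-6):
    """Textbook charge-control drain current for a HEMT-like FET.

    Assumes a linear 2DEG charge control n_s ~ C_g (V_gs - V_th) / q and the
    gradual-channel approximation; all parameter values are illustrative only.
    c_g: gate capacitance per area [F/m^2], mu: mobility [m^2/Vs],
    w, l: gate width and length [m].
    """
    v_ov = vgs - vth                  # overdrive voltage
    if v_ov <= 0:
        return 0.0                    # below threshold: device off
    vds_eff = min(vds, v_ov)          # clamp at saturation (V_ds,sat = V_ov)
    return mu * c_g * (w / l) * (v_ov - vds_eff / 2.0) * vds_eff

print(f"Id = {drain_current(0.0, 5.0) * 1e3:.0f} mA")  # Vgs = 0 V, Vds = 5 V
```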
14

Distributed Computing Solutions for High Energy Physics Interactive Data Analysis

Padulano, Vincenzo Eduardo 04 May 2023
The scientific research in High Energy Physics (HEP) is characterised by complex computational challenges, which over the decades have been addressed by researching computing techniques in parallel with advances in the understanding of the physics. One of the main actors in the field, CERN, hosts both the Large Hadron Collider (LHC) and the thousands of researchers who each year collect and process the huge amounts of data generated by the particle accelerator. This has historically provided fertile ground for distributed computing techniques, leading to the creation of the Worldwide LHC Computing Grid (WLCG), a global network providing large computing power for all the experiments revolving around the LHC and the HEP field. The data generated by the LHC has already posed challenges for computing and storage, and this will only increase with future hardware upgrades of the accelerator, a scenario that will require large amounts of coordinated resources to run HEP analysis workflows. The main strategy for such complex computations is, to this day, to submit applications to batch queueing systems connected to the grid and wait for the final result to arrive. This has two great disadvantages from the user's perspective: no interactivity and unknown waiting times. In more recent years, other fields of research and industry have developed new techniques to analyse the ever increasing amounts of human-generated data (a trend commonly referred to as "Big Data"). Thus, new programming interfaces and models have arisen that showcase interactivity as a key feature while also allowing the use of large computational resources. In light of this scenario, this thesis aims at leveraging cutting-edge industry tools and architectures to speed up analysis workflows in High Energy Physics, while providing a programming interface that enables automatic parallelisation, both on a single machine and on a set of distributed resources. It focuses on modern programming models and on how to make the best use of the available hardware resources while providing a seamless user experience. The thesis also proposes a modern distributed computing solution for HEP data analysis, making use of the established ROOT software framework and in particular of its data analysis layer, implemented in the RDataFrame class. A few key research areas revolving around this proposal are explored. From the user's point of view, the result is a new interface to data analysis that can run on a laptop or on thousands of computing nodes with no change in the user application. This development opens the door to exploiting distributed resources via industry-standard execution engines that can scale to multiple nodes on HPC or HTC clusters, or even on serverless offerings of commercial clouds. Since data analysis in this field is often I/O bound, a good understanding of the possible caching mechanisms is needed; in this regard, a novel storage system based on object-store technology was researched as a caching target. In conclusion, the future of data analysis in High Energy Physics presents challenges from various perspectives, from the exploitation of distributed computing and storage resources to the design of ergonomic user interfaces. Software frameworks should aim at efficiency and ease of use, decoupling as much as possible the definition of the physics computations from the implementation details of their execution. This thesis is framed within the collective effort of the HEP community towards these goals, defining problems and possible solutions that can be adopted by future researchers. / Padulano, VE. (2023). Distributed Computing Solutions for High Energy Physics Interactive Data Analysis [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/193104
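For context, the distributed RDataFrame layer described above exposes the same declarative API as local RDataFrame while fanning execution out over a cluster. A minimal sketch, assuming a ROOT build with DistRDF (ROOT 6.24 or later), a reachable Dask scheduler at a hypothetical address, and an input file "data.root" with a tree "Events" whose column names are invented here:

```python
import ROOT
from dask.distributed import Client

# Connect to a (hypothetical) Dask scheduler; a LocalCluster works on a laptop.
client = Client("tcp://scheduler.example.org:8786")

# Distributed flavour of RDataFrame: same API, execution spread over the cluster.
RDataFrame = ROOT.RDF.Experimental.Distributed.Dask.RDataFrame
df = RDataFrame("Events", "data.root", daskclient=client)

# Declarative analysis: cuts, derived columns and histograms are lazily defined...
h = (df.Filter("nMuon == 2")
       .Define("m_inv", "sqrt(2*pt1*pt2*(cosh(eta1-eta2) - cos(phi1-phi2)))")
       .Histo1D(("m_inv", "Dimuon mass;m [GeV];events", 100, 0.0, 120.0), "m_inv"))

# ...and only materialised here, triggering the distributed event loop.
print("Entries in histogram:", h.GetEntries())
```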
15

Testing new physics in long baseline neutrino oscillation experiments

Díaz Desposorio, Félix Napoleón 10 January 2023
In this thesis, we focus on analyzing the different ways in which new physics scenarios, such as Violation of the Equivalence Principle (VEP) and Quantum Decoherence, can manifest themselves in the context of the neutrino oscillation phenomenon. Within the framework of the DUNE experiment, we examine several effects of VEP, such as the possibility of an incorrectly reconstructed neutrino oscillation parameter region caused by our ignorance of VEP in nature, as well as the impact on the DUNE sensitivity to CP violation and the mass hierarchy. Additionally, we set limits for the different textures of the gravitational matrix and the diverse scenarios of energy dependencies associated with Lorentz Violation. On the other hand, we demonstrate that the quantum decoherence phenomenon applied to the neutrino system leads to fascinating phenomenological scenarios. One of the scenarios analyzed, within the context of quantum decoherence, is the one that breaks the fundamental CPT symmetry. For the latter, we identify which textures, containing certain non-diagonal elements of the decoherence matrix, are necessary. In this line, we propose a way to measure CPT violation in the DUNE experiment using the muon neutrino and antineutrino channels for different energy dependencies. Another intriguing effect of considering the neutrino as an open quantum system is the possibility of discovering the neutrino's nature by measuring the Majorana phase at the DUNE experiment with competitive precision. As a consequence of the latter, we find that the crucial measurement of the CP violation phase (δCP), planned to be performed at the DUNE experiment, can be spoiled by the presence of decoherence and Majorana phases in nature. Thus, a signature of a non-null Majorana phase is a sizable distortion in the measurement of the Dirac CP violation phase δCP at DUNE when compared with the T2HK measurement. Subsequently, via simulation, we measured the Majorana phase for values of ϕ1/π = ±0.5 and decoherence parameter Γ = 4.5(5.5) × 10⁻²⁴ GeV, reaching a precision of 23 (21) %. This precision is consistent with that of the Dirac CP phase at the T2K experiment.
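In the simplest two-flavor limit (an illustration only; the thesis works in a full three-flavor open-quantum-system framework), decoherence damps the oscillatory interference term by a factor exp(-ΓL). A short sketch with standard oscillation parameters and the Γ value quoted above:

```python
import math

KM_TO_INV_GEV = 5.068e18  # 1 km expressed in GeV^-1 (hbar = c = 1 units)

def p_mumu(E_GeV, L_km, sin2_2theta=0.95, dm2_eV2=2.5e-3, gamma_GeV=4.5e-24):
    """Two-flavor nu_mu survival probability with a decoherence damping term.

    Textbook form: P = 1 - (1/2) sin^2(2 theta) [1 - exp(-Gamma L) cos(dm^2 L / 2E)],
    with dm^2 in eV^2, L in km, E in GeV (hence the 2.534 unit factor).
    """
    phase = 2.534 * dm2_eV2 * L_km / E_GeV            # dm^2 L / (2E) in radians
    damping = math.exp(-gamma_GeV * L_km * KM_TO_INV_GEV)
    return 1.0 - 0.5 * sin2_2theta * (1.0 - damping * math.cos(phase))

# DUNE-like baseline: L = 1300 km, E = 2.5 GeV
print(p_mumu(2.5, 1300.0))                   # with decoherence
print(p_mumu(2.5, 1300.0, gamma_GeV=0.0))    # standard oscillation, for comparison
```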
16

Mathematical methods in atomic physics = Métodos matemáticos en física atómica

Del Punta, Jessica A. 17 March 2017
Two- and three-body scattering problems are of crucial relevance in atomic physics, as they make it possible to describe different atomic collision processes. Nowadays, two-body cases can be solved to any desired degree of numerical accuracy. Scattering problems involving three charged particles are notoriously more difficult, but something similar, though to a lesser extent, can be stated. The aim of this work is to contribute to the understanding of three-body Coulomb scattering problems from an analytical point of view. This is not only of fundamental interest; it is also useful for better mastering the numerical approaches being developed within the atomic collision community. To achieve this aim, we propose to approximate scattering solutions with expansions on sets of appropriate functions having closed form. In doing so, we develop a number of related mathematical tools involving Coulomb functions, homogeneous and non-homogeneous second-order differential equations, and hypergeometric functions in one and two variables. First we deal with the two-body radial Coulomb wave functions and review their main properties. We extend known results to give, in closed form, the coefficients of the Laguerre-type expansions of the irregular Coulomb functions, and establish a new connection between the coefficients corresponding to the regular solution and the Meixner-Pollaczek polynomials. This relation allows us to obtain orthogonality and closure relations for these coefficients when the charge is considered as a variable. Then we explore two-variable hypergeometric functions. For some of them, such as the Appell and confluent Horn functions, we find closed forms for the derivatives with respect to their parameters. We also study a particular set of two-body Generalized Sturmian functions constructed with a Hulthén generating potential. Contrary to the usual case, in which Sturmian functions are constructed numerically, the Hulthén Sturmian functions can be given in closed form. Their mathematical properties can thus be studied analytically, providing a unique tool to understand and analyse scattering problems and their solutions. Next, we introduce a novel set of functions that we name Quasi-Sturmian functions. They constitute an alternative set of closed-form functions in which to expand the sought-after solution of two- and three-body scattering processes. Quasi-Sturmian functions are solutions of a non-homogeneous second-order Schrödinger-like differential equation and have, by construction, the appropriate asymptotic behavior for scattering problems. We present different analytic expressions and explore their mathematical properties, linking and justifying the mathematical tools developed earlier. Finally, we use the Hulthén Sturmian and Quasi-Sturmian functions to solve some particular two- and three-body scattering problems. The efficiency of these sets of functions is illustrated by comparing our results with those obtained by other methods.
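For readers who want to experiment with the functions discussed above, the regular and irregular Coulomb wave functions F_l(η, ρ) and G_l(η, ρ) are available to arbitrary precision in mpmath. A small sketch with illustrative parameter values, checked against the standard Wronskian F'G - FG' = 1:

```python
import mpmath

mpmath.mp.dps = 25   # work with 25 significant digits

l, eta = 0, -1.0     # angular momentum and Sommerfeld parameter (attractive case)
for rho in (1.0, 5.0, 10.0):
    F = mpmath.coulombf(l, eta, rho)   # regular Coulomb wave function
    G = mpmath.coulombg(l, eta, rho)   # irregular Coulomb wave function
    # Wronskian check F'G - FG' = 1, via numerical differentiation in rho:
    Fp = mpmath.diff(lambda r: mpmath.coulombf(l, eta, r), rho)
    Gp = mpmath.diff(lambda r: mpmath.coulombg(l, eta, r), rho)
    print(rho, mpmath.nstr(F, 8), mpmath.nstr(G, 8), mpmath.nstr(Fp*G - F*Gp, 8))
```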
17

Impact of translucent water-based acrylic paint on the thermal performance of a low cost house

Overen, Ochuko Kelvin January 2014
Insulation materials are selected based on their R-value, a measure of the thermal resistance of a material: the higher the R-value, the better the thermal insulation performance. There are two major groups of insulation materials, bulk and reflective (or combined bulk and reflective). Bulk insulation is designed to resist heat transfer by conduction and convection, while reflective insulation resists radiant heat flow due to its high reflectivity and low emissivity. Insulation is not restricted to these materials only; other materials of low thermal conductivity can be used as long as the primary aim of thermal insulation, increasing thermal resistance, is achieved. Hence, the aim of this project was to investigate the insulating ability of Translucent Water-based Acrylic Paint (TWAP) and its effect on the thermal performance of Low Cost Housing (LCH). To achieve this aim, the inner surfaces of the external walls of a low cost house were coated with TWAP. Before coating, the following techniques were used to characterise the paint: Scanning Electron Microscopy/Energy Dispersive X-ray spectroscopy (SEM/EDX), Fourier Transform Infra-Red (FTIR) spectroscopy, and IR thermography. SEM/EDX was used to view the surface morphology and to detect the elemental composition responsible for the thermal resistance of the TWAP. FTIR spectroscopy was used to determine the functional groups and organic molecular composition of the paint. The heat resistance of TWAP was analysed using the IR thermography technique. A low cost house located in the Golf Course settlement in Alice, Eastern Cape, South Africa, under the Nkonkobe Municipality, was used as a case study. The house faces geographical N16°E and comprises a bedroom, a toilet, and an open-plan living room and kitchen. It has a floor dimension of 7.20 m x 5.70 m, giving an approximate area of 41 m². The roof is made of galvanized corrugated iron sheets with no ceiling or any form of roof insulation, and the walls are made of M6 (0.39 m x 0.19 m x 0.14 m) hollow concrete blocks, with no plaster or insulation. The following meteorological parameters were measured: temperature, relative humidity, solar irradiance, wind speed, and wind direction. Eleven type-K thermocouples were used to measure the indoor temperature and the inner- and outer-surface temperatures of the building walls. Two HMP50 humidity sensors were used to measure the indoor and outdoor relative humidity as well as the ambient temperature. The indoor temperature and relative humidity were measured at a height of 1.80 m so as to capture indoor variation patterns not influenced by the roof temperature. The outdoor relative humidity sensor, together with a 03001 wind sentry anemometer/vane and a Li-Cor pyranometer, was installed 0.44 m above the roof of the building. Wind speed and direction were measured by the anemometer/vane, while solar radiation was measured by the pyranometer. The entire set of sensors was connected to a CR1000 data logger, on which data were stored and from which they were retrieved according to a setup program.
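The R-value reasoning in the opening sentences is simple arithmetic: for a homogeneous layer R = d/k, and the steady-state conductive heat flux is q = ΔT/R, so a higher R means a lower flux. A sketch with illustrative values (the conductivity and added coating resistance below are assumptions, not measurements from this study):

```python
def r_value(thickness_m, conductivity_W_mK):
    """Thermal resistance of a material layer, R = d / k  [m^2*K/W]."""
    return thickness_m / conductivity_W_mK

def heat_flux(delta_T_K, r_total):
    """Steady-state conductive heat flux through the layer, q = dT / R  [W/m^2]."""
    return delta_T_K / r_total

# Illustrative: 0.14 m hollow concrete block wall, assumed k ~ 1.0 W/m.K
r_wall = r_value(0.14, 1.0)
print(f"R_wall ~ {r_wall:.2f} m^2K/W, q at dT=10 K: {heat_flux(10, r_wall):.0f} W/m^2")

# Any layer that raises R lowers the flux for the same temperature difference:
r_with_coating = r_wall + 0.05   # hypothetical added resistance of a coating
print(f"With coating: q = {heat_flux(10, r_with_coating):.0f} W/m^2")
```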
18

Characterization and computer simulation of corn stover/coal blends for co-gasification in a downdraft gasifier

Mabizela, Polycarp Sbusiso January 2014
The need for sustainable alternative energy technology is becoming more urgent as the demand for a clean energy environment increases. For decades, electricity in South Africa has been derived mostly from coal, and concerns about the impact of fossil fuel utilization, particularly the emission of greenhouse gases, are growing annually. It is practically impossible at the moment to replace coal with biomass resources because of the low energy value of biomass. However, the conversion of coal has experienced some challenges, especially during gasification. These include, but are not limited to, a high reaction temperature exceeding 900°C, which most gasifiers cannot achieve; where it is achieved, combustion of the resulting syngas usually occurs, leading to low conversion efficiency and the risk of extremely high temperatures that may result in pressure build-up and explosion. Therefore, this study sought to investigate the possibility of co-gasifying corn stover with coal, with the ultimate aim of establishing, through computer simulation, the mixing ratio that gives optimum co-gasification efficiency. Proximate and ultimate analyses, including energy values, of corn stover, coal, and their blends were undertaken; the results showed significant differences between the two feedstocks and a narrow range of compositions among their blends in terms of properties and energy value. Corn stover showed a higher fraction of volatile matter and lower ash content than coal, whereas the blends varied considerably in physical properties. The chemical composition of corn stover also showed higher fractions of hydrogen and oxygen, and less carbon, than coal, while those of the blends varied according to the ratio of corn stover to coal. The thermal stability of corn stover, coal, and their blends was also established, and the maximum temperature reached for thermal degradation of the blends was 900°C, as depicted by TGA analysis. The SEM results revealed no changes in the morphology of the pure corn stover and coal samples, since no pre-treatment of the samples was undertaken, whereas the blends showed significant changes in morphology as a result of blending. Luminous and non-luminous features were noticed in the SEM images of the blends, with the 10% coal/90% corn stover blend having a higher percentage of luminosity, attributed to the coal in the blend. The energy density of the samples was also measured and found to be 16.1 MJ/kg and 22.8 MJ/kg for corn stover and coal, respectively; those of the blends varied from 16.9 to approximately 23.5 MJ/kg. These results were used in a computer simulation of the co-gasification process in order to establish the blend giving maximum co-gasification efficiency. The 90% corn stover/10% coal blend was found to be the most suitable for co-gasification, with an efficiency of approximately 58%, because its conversion was efficiently achieved at a temperature intermediate between those of coal and biomass independently. The simulation results were compared with experimental data from the literature and showed only slight variation.
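To first approximation the heating value of a blend is the mass-weighted average of its components; the sketch below applies that mixing rule (an assumption for illustration, not a claim from the thesis) to the measured values quoted above:

```python
E_CORN_STOVER = 16.1  # MJ/kg, measured value quoted in the abstract
E_COAL = 22.8         # MJ/kg

def blend_energy_density(corn_fraction):
    """Mass-weighted average heating value of a corn stover/coal blend."""
    return corn_fraction * E_CORN_STOVER + (1.0 - corn_fraction) * E_COAL

for corn in (0.9, 0.5, 0.1):
    print(f"{corn:.0%} corn stover / {1 - corn:.0%} coal: "
          f"{blend_energy_density(corn):.1f} MJ/kg")
# 90/10 blend -> 16.8 MJ/kg, close to the ~16.9 MJ/kg lower bound reported
```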
19

Selecting high-order modes in solid state laser resonators

Iheanetu, Kelachukwu January 2014
The first chapter considered the fundamental processes of laser operation: photon absorption, spontaneous emission, and stimulated emission. These processes are considered when designing a laser gain medium. A four-level laser scheme was also illustrated. Then, the basic components and operating principle of a simple laser system were presented using a diode end-pumped Nd:YAG solid state laser resonator. The second chapter considered laser light as light rays propagating in the resonator and discussed the oscillating field in laser resonators extensively. It examined the characteristics of the fundamental Gaussian mode, and the same theory was applied to higher-order modes. Chapter three started with an introduction to beam shaping and proceeded to review some intra-cavity beam shaping techniques: the use of graded phase mirrors and of diffractive elements (binary phase elements and spiral phase elements). A brief discussion was also given of the concepts of conventional holography and digital holography. The phase-only spatial light modulator (SLM) was presented, which by default is used to perform (only) phase modulation of optical fields, along with how it can also be used to perform amplitude modulation. Finally, a detailed discussion of the digital laser, which uses an intra-cavity SLM as a mode selection element, was presented, since this was the technique used in the experiment. The elegance of dynamic on-demand mode selection, requiring only a change of the grey-scale hologram on the SLM, was one quality exploited in using the digital laser. The next two chapters presented the experiments and results. The concept of the digital laser was first used in the experiment in chapter four to assemble a stable diode end-pumped Nd:YAG solid state laser resonator. The cavity was of hemispherical configuration, using an intra-cavity SLM (virtual concave mirror) as a back reflector and a flat mirror as output coupler. A virtual concave mirror was achieved on the SLM by using phase modulation to generate the hologram of a lens, which, when displayed on the SLM, made it mimic a concave mirror. The next phase used symmetric Laguerre-Gaussian mode functions of zero azimuthal order to generate digital holograms corresponding to amplitude-absorbing concentric rings. These holograms, combined with the hologram that mimics a concave mirror, were used on the SLM to perform high-order Laguerre-Gaussian mode selection in the cavity. The fifth chapter presented the results of the mode selection and considered the purity of the beam at the output coupler by comparing measured modal properties with theoretical predictions. The outcome confirmed that the modes were of high purity and quality, which further implied that the cavity was indeed selecting single pure high-order modes. The results also demonstrated that forcing the cavity to oscillate at a higher-order mode (p = 3) extracted 74% more power from the gain medium than the fundamental mode (p = 0), but this extra power is only accessible beyond a critical pump input power of 38.8 W. Laser brightness describes the potential of a laser beam to achieve high intensities while still maintaining a large Rayleigh range; it depends on beam power and beam quality factor. To achieve high brightness one needs to generate a beam that extracts maximum power from the gain medium with good beam quality. Building on the experiments demonstrated in this study, one can make the correct choices of output coupler reflectivity, gain medium length and doping concentration, and pump-mode overlap for a particular mode to further enhance energy extraction from the cavity, and then use well-known extra-cavity techniques to improve the output beam's quality factor by transforming the high-order mode back to the fundamental mode. This will effectively achieve higher laser brightness.
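The radial modes referred to above are the LG_p0 Laguerre-Gaussian modes, whose field goes as L_p(2r²/w²)·exp(-r²/w²); the p radial nodes give p dark rings in the intensity. A short sketch of the textbook profile (not code from the study):

```python
import numpy as np
from scipy.special import genlaguerre

def lg_p0_intensity(r, p, w=1.0):
    """Radial intensity of an LG_{p,0} mode (azimuthal index l = 0), beam radius w."""
    x = 2.0 * r**2 / w**2
    field = genlaguerre(p, 0)(x) * np.exp(-r**2 / w**2)
    return field**2

r = np.linspace(0.0, 3.0, 301)
for p in (0, 3):
    I = lg_p0_intensity(r, p)
    # p radial nodes -> p dark rings; count sign changes of the field as a check
    nodes = np.sum(np.diff(np.sign(genlaguerre(p, 0)(2 * r**2))) != 0)
    print(f"p = {p}: {nodes} radial node(s), on-axis intensity {I[0]:.2f}")
```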
20

Development of corona-based power supplies for remote repeater stations for overhead HVDC power transmission systems

Kaseke, R January 2012
More and more people worldwide are becoming "carbon conscious", meaning they are increasingly aware of the imminent adverse effects of global warming. Of late there has been an urgent drive for governments to be at the forefront of carbon mitigation initiatives. One such drive involves the United Nations Framework Convention on Climate Change, whose parties have met regularly under the banner of the Conference of Parties (COP) since 1995 to review progress in dealing with climate change. Also key to the deliberations at such meetings are better ways of developing cleaner, "carbon free" energy sources, commonly known as renewable energy sources. In essence, global energy trends are constantly moving towards the development of more renewable energy sources. It is undeniable that some viable renewable energy sources, especially those with bulk capacity, are located remotely from load centres; this reality necessitates the construction of long-distance bulk power transmission corridors to link generation sites with load centres. Due to its many inherent advantages over High Voltage Alternating Current (HVAC) for long-distance power transmission, High Voltage Direct Current (HVDC) is gradually winning the favour of many utilities. In fact, recent advances in HVDC technology have encouraged many utilities to explore harnessing remotely located renewable energy sources that would otherwise not have been viable with HVAC transmission. Through the unfortunate and inevitable phenomenon known as the corona effect, overhead HVDC conductors suffer real power losses to the air dielectric surrounding them: part of the energy carried on the transmission line is expended through ionization and movement of charges in the air dielectric. This study combined physics, mathematical, and engineering concepts to review the corona phenomenon around HVDC lines, with specific emphasis on space charge generation and motion within ionized DC fields, as well as the influence of temperature on corona discharge and power loss. Also, unlike HVAC, the performance of an HVDC system relies heavily on the availability of a reliable and robust telecommunication system, and one key way of ensuring its reliability is to make sure that reliable power supplies are in place to power remote repeater stations. A novel concept of a quasi-autonomous corona-based power supply (QC power supply for short) that works on the principle of magnetohydrodynamic (MHD) power generation was developed. A small-scale experiment was then designed to assess the feasibility of such power supplies. The experiment was conducted with a DC supply of a maximum rated voltage of 30 kVDC and generated up to 6 VDC at an optimum ambient temperature of 23°C. These results confirmed that, with further development, QC power supplies have the potential of providing reliable power to remotely located repeaters or any other small critical loads along the stretch of an HVDC transmission line. Practical HVDC transmission systems operate at voltages in excess of 500 kV; by linear extrapolation of the above results, one would expect to yield up to 100, 120, and 160 VDC from a 500, 600, and 800 kV HVDC system, respectively. Although the study succeeded in conceptualizing the CMHD idea upon which the novel QC power supply was developed, quite extensive and rigorous design, modelling, prototyping, and experimentation processes are still required before the first QC power supply can be commissioned on a practical HVDC line.
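The closing extrapolation is a plain proportionality, output voltage scaling linearly with line voltage (the abstract's stated assumption, not a verified law); as an arithmetic check:

```python
MEASURED_LINE_KV = 30.0   # experimental DC supply voltage [kV]
MEASURED_OUT_V = 6.0      # harvested output [VDC]

def extrapolated_output(line_kV):
    """Linear scaling of harvested voltage with line voltage (assumption)."""
    return MEASURED_OUT_V * line_kV / MEASURED_LINE_KV

for kv in (500, 600, 800):
    print(f"{kv} kV line -> ~{extrapolated_output(kv):.0f} VDC")
# 500 kV -> 100 VDC, 600 kV -> 120 VDC, 800 kV -> 160 VDC, matching the abstract
```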
