  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
351

Modellierung des Oberschwingungsverhaltens von Windparks mit probabilistischen Ansätzen / Modeling the harmonic behavior of wind farms using probabilistic approaches

Malekian Boroujeni, Kaveh, 22 April 2016
Harmonics, as one of the power quality criteria, are gaining increasing attention due to the growing grid integration of power-electronically controlled installations such as wind turbines, and of nonlinear loads. Current standards do not meet the future requirements of the power system and therefore need revision. In this work, the main factors influencing the harmonic behavior of wind farms are identified, described, and modelled, with the stochastic nature of harmonics captured using probabilistic approaches. Furthermore, a novel approach is developed to investigate the interaction between a wind farm and the upstream grid; it makes it possible to determine the change in harmonic voltage caused by the wind farm at the point of connection. This work contributes to improving the existing standards for the connection of wind farms.
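The probabilistic treatment of harmonic summation can be illustrated with a generic Monte Carlo sketch; all names and values here are hypothetical, and the thesis' wind farm models account for many more influence factors than random phase alone:

```python
import math
import random

def harmonic_sum_percentile(magnitudes, n_trials=20000, q=0.95, seed=1):
    """Monte Carlo aggregation of harmonic current phasors with independent,
    uniformly distributed phase angles -- a generic probabilistic treatment of
    harmonic summation (hypothetical setup, not the thesis' model)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        re = im = 0.0
        for m in magnitudes:
            phi = rng.uniform(0.0, 2.0 * math.pi)  # random phase per source
            re += m * math.cos(phi)
            im += m * math.sin(phi)
        totals.append(math.hypot(re, im))          # magnitude of the vector sum
    totals.sort()
    return totals[int(q * n_trials)]

# ten turbines injecting an (assumed) 1.0 A fifth-harmonic current each
turbine_h5 = [1.0] * 10
```

With identical 1 A injections from ten turbines, the 95th percentile of the random-phase sum stays well below the 10 A arithmetic worst case, which is the basic argument for probabilistic summation rules.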
352

Identifikation von Transport- und Rekombinationskanälen zur Optimierung (AlGaIn)N-basierter lichtemittierender Halbleiterdioden / Identification of transport and recombination channels for the optimization of (AlGaIn)N-based light-emitting semiconductor diodes

Binder, Michael, 19 January 2023
III-nitride based LEDs not only form the basis for efficient white light generation, but also play an increasingly important role in our lives through many new applications, such as vital sign monitoring in smart wearables or displays. The focus of this work is the identification of the physical effects governing the electro-optical characteristics of (AlGaIn)N-based LEDs, especially their efficiency, and the dependency of these effects on emission wavelength and operating current. A physical model describing the current-voltage characteristics of modern blue LEDs is developed. This model relates the LED voltage to the internal recombination dynamics and thus enables the prediction of the LED efficiency from purely electrical measurements. The physical root cause of the efficiency decrease of blue and green LEDs towards higher currents was intensively debated in the literature for many years. With a concept developed in this work to visualize Auger processes, it can be shown that this high-current efficiency decrease, also known as droop, is attributable to Auger recombination.
Building on this finding, a new concept to mitigate the droop is presented: by introducing a three-dimensional hole-injecting structure into the epitaxial layers, the carrier transport can be improved, reducing the loss channel at high current densities.
Contents: 1 Einleitung; 2 Grundlagen; 3 Exemplarische Herleitung grundlegender Kenngrößen einer typischen LED; 4 Effizienzuntersuchungen an SQWs unterschiedlicher Wellenlänge; 5 Untersuchung der Kleinstromeffizienz; 6 Der Droop – Untersuchung des Hochstromverlustkanals; 7 Verminderung des Droops – V-förmige Defekte zur Löcherinjektion; 8 Zusammenfassung
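The efficiency droop discussed here is commonly parameterized with the ABC recombination model, in which Shockley-Read-Hall (A), radiative (B) and Auger (C) terms compete; a sketch with illustrative textbook-order coefficients, not the thesis' fitted values:

```python
def internal_quantum_efficiency(n, a=1e7, b=1e-10, c=1e-29):
    """ABC model: IQE = B*n^2 / (A*n + B*n^2 + C*n^3), with n in cm^-3.
    a [1/s], b [cm^3/s], c [cm^6/s] are illustrative orders of magnitude
    for nitride quantum wells (assumed, not fitted values)."""
    radiative = b * n * n
    total = a * n + radiative + c * n ** 3
    return radiative / total

# the Auger (C*n^3) term dominates at high carrier density and causes the droop:
iqes = [internal_quantum_efficiency(n) for n in (1e16, 1e18, 1e20)]
```

The efficiency peaks near n = sqrt(A/C) and falls off on either side: SRH losses dominate at low density, the Cn³ Auger channel at high density — the behavior that the visualization technique in the thesis traces back to Auger recombination.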
353

La aplicación del control de búsqueda del extremo en la generación fotovoltaica / The application of extremum seeking control in photovoltaic generation

Zazo Jiménez, Héctor, 09 February 2016
This thesis studies the application of the Extremum Seeking Control (ESC) technique to extract the maximum power from photovoltaic generators. A detailed analysis of the technique is carried out in order to tune the algorithm parameters and achieve the desired performance. To improve this performance, several blocks are added to the ESC algorithm. The first is a Hessian estimator, which decouples the algorithm's performance from the curvature of the panel's V-P characteristic. The second improves performance through signal synchronization, by adding a phase processing unit. Finally, the transients of the dynamic elements of the photovoltaic generator are limited by a saturator added to the control loop. With these improvements and a correct tuning of the control parameters, settling times on the order of milliseconds and control efficiencies above 99.5% can be achieved.
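The basic ESC loop — inject a sinusoidal dither, demodulate the measured power against it to estimate the gradient, and integrate — can be sketched on a hypothetical quadratic V-P curve; the thesis' Hessian estimator, phase processing unit and saturator are omitted:

```python
import math

def pv_power(v, v_mpp=30.0, p_max=200.0, k=0.5):
    # Hypothetical quadratic approximation of a panel's V-P curve near the
    # maximum power point; real characteristics are only locally quadratic.
    return p_max - k * (v - v_mpp) ** 2

def esc_mppt(v0=20.0, a=0.5, gain=0.2, cycles=200, n=64):
    """One-parameter extremum seeking: dither the operating voltage,
    demodulate the measured power against the dither (synchronous
    detection) to estimate dP/dv, then integrate the estimate."""
    v_hat = v0
    for _ in range(cycles):
        acc = 0.0
        for i in range(n):
            theta = 2.0 * math.pi * i / n
            acc += pv_power(v_hat + a * math.sin(theta)) * math.sin(theta)
        grad = 2.0 * acc / (n * a)   # synchronous detection ~ dP/dv at v_hat
        v_hat += gain * grad         # integrator step toward the extremum
    return v_hat
```

On this quadratic characteristic the synchronous detection recovers dP/dv exactly, so the loop converges to the maximum power point; on a real panel the convergence speed depends on the curvature of the V-P curve, which is precisely what the Hessian estimator block is meant to remove.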
354

MHD evolution of magnetic null points to static equilibria

Fuentes Fernández, Jorge, January 2011
In magnetised plasmas, magnetic reconnection is the process of magnetic field merging and recombination through which considerable amounts of magnetic energy may be converted into other forms of energy. Reconnection is a key mechanism for solar flares and coronal mass ejections in the solar atmosphere; it is believed to be an important source of heating of the solar corona, and it plays a major role in the acceleration of particles in the Earth's magnetotail. For reconnection to occur, the magnetic field must, in localised regions, be able to diffuse through the plasma. Ideal locations for diffusion are electric current layers formed where the magnetic field changes rapidly over short length scales. In this thesis we consider the formation and nature of these current layers in magnetised plasmas. The study of current sheets and current layers in two, and more recently three, dimensions has been a key field of research in recent decades. However, many of these studies do not take plasma pressure effects into consideration, instead considering models of current sheets in which the magnetic forces sum to zero. More recently, others have begun to consider models in which the plasma beta is non-zero, but they focus only on the final equilibrium state involving a current layer and do not consider how such an equilibrium may be achieved physically; in particular, they do not allow energy conversion between magnetic and internal energy of the plasma on the way to the final equilibrium. In this thesis, we aim to describe the formation of equilibrium states involving current layers at both two- and three-dimensional magnetic null points, which are specific locations where the magnetic field vanishes. The different equilibria are obtained through the non-resistive dynamical evolution of perturbed hydromagnetic systems, which relax via viscous damping, resulting in viscous heating.
We have run a series of numerical experiments using LARE, a Lagrangian-remap code that solves the full magnetohydrodynamic (MHD) equations with user-controlled viscosity and resistivity. To allow strong current accumulations to build up in a static equilibrium, we set the resistivity to zero and hence reach our equilibria by solving the ideal MHD equations. We first consider the relaxation of simple homogeneous straight magnetic fields embedded in a plasma, and determine the role of the coupling between magnetic and plasma forces, both analytically and numerically. Then, we study the formation of current accumulations at 2D magnetic X-points and at 3D magnetic nulls with spine-aligned and fan-aligned current. At both 2D X-points and 3D nulls with fan-aligned current, the current density becomes singular at the location of the null. An exact singularity cannot be achieved numerically; instead, we find a gradual, continuous increase of the peak current over time, and small, highly localised forces acting to form the singularity. In the 2D case, we give a qualitative description of the field around the magnetic null using a singular function, which is found to vary between the different topological regions of the field; the final equilibrium also depends exponentially on the initial plasma pressure. In the 3D spine-aligned experiments, in contrast, the current density accumulates mainly along and about the spine, but not at the null, and we find that the plasma pressure does not play an important role in the final equilibrium. Our results show that current sheet formation (and presumably reconnection) around magnetic nulls is held back by non-zero plasma betas, although the value of the plasma pressure appears to be much less important for torsional reconnection.
In future studies, we may consider a broader family of 3D nulls, comparing the results with the analytical calculations in 2D, and the relaxation of more complex scenarios such as 3D magnetic separators.
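The linear 2D null points studied here can be written down directly; a minimal sketch (a standard textbook parameterization, not the thesis' relaxed equilibria) showing that distorting a potential X-point endows it with a uniform current density:

```python
def b_field(x, y, alpha=1.0):
    """Linear 2D null in dimensionless units: B = (y, alpha**2 * x).
    alpha = 1 is the current-free X-point with perpendicular separatrices;
    alpha != 1 closes up the separatrices and carries current."""
    return y, alpha ** 2 * x

def current_jz(x, y, alpha=1.0, h=1e-6):
    # out-of-plane current density j_z ~ dBy/dx - dBx/dy, by central differences
    dby_dx = (b_field(x + h, y, alpha)[1] - b_field(x - h, y, alpha)[1]) / (2 * h)
    dbx_dy = (b_field(x, y + h, alpha)[0] - b_field(x, y - h, alpha)[0]) / (2 * h)
    return dby_dx - dbx_dy
```

For this linear field j_z = alpha² − 1 everywhere, so any distortion of the potential X-point implies a volume current; in the zero-beta limit such a distorted null collapses toward a current sheet, and the thesis shows how a non-zero plasma pressure holds that collapse back.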
355

Évaluation de la perte du volume cérébral en IRM comme marqueur individuel de neurodégénérescence des patients atteints de sclérose en plaques. / Evaluation of brain volume loss on MRI as an individual marker of neurodegeneration in multiple sclerosis

Durand-Dubief, Françoise, 20 December 2011
Brain volume loss is currently an MRI marker of neurodegeneration in multiple sclerosis. The available algorithms quantify it either directly, measuring the volume change between two examinations, or indirectly, from the brain volume measured at each examination. Their reliability remains difficult to assess, especially since there is no gold standard technique. This work consisted, first, of a reproducibility study performed on nine patients' biannual MRI acquisitions (three time points), acquired on two different MRI systems and post-processed with seven algorithms: BBSI, FreeSurfer, Jacobian Integration, KNBSI, a segmentation/classification algorithm, SIENA and SIENAX. Second, a longitudinal and prospective study was performed in 90 MS patients. The study of inter-technique and inter-site variabilities showed that the indirect measurement techniques (segmentation/classification, FreeSurfer) and SIENAX provided heterogeneous atrophy percentages. In contrast, direct measurement techniques such as BBSI, KNBSI, Jacobian Integration and, to a lesser extent, SIENA obtained reproducible results. However, BBSI, KNBSI and Jacobian Integration yielded low percentages, suggesting a possible underestimation of atrophy. The evaluation of brain volume loss by Jacobian Integration showed, over two and a half years of follow-up, an atrophy of 1.21% for the 90 patients, and of 1.55%, 1.51%, 0.84% and 1.21% for the CIS, RR, SP and PP patients respectively, demonstrating its value for individual monitoring. In the future, assessing brain volume loss requires overcoming technical challenges to improve the reliability of the currently available algorithms.
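The "indirect" style of measurement amounts to differencing two independently estimated volumes; a trivial sketch of the percentage brain volume change and its annualization, with illustrative numbers rather than the study's data:

```python
def pbvc(v_baseline, v_followup):
    """Percentage brain volume change between two time points, computed
    'indirectly' by differencing two independently measured volumes (in ml).
    Direct methods such as BBSI or Jacobian Integration instead register the
    two scans and measure the change itself."""
    return 100.0 * (v_followup - v_baseline) / v_baseline

def annualized_rate(total_pbvc, years):
    # simple linear annualization over the follow-up interval
    return total_pbvc / years
```

Because the indirect estimate subtracts two large, independently noisy volumes, its error is roughly the sum of both segmentation errors, which is consistent with the heterogeneous results reported above for the indirect techniques.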
356

Supramolecular networks and on-surface polymerization studied by scanning tunneling microscopy / Réseaux supramoléculaires et polymérisation sur surface étudiés par microscopie à effet tunnel

Zhan, Gaolei, 09 November 2017
This work presents ultra-high-vacuum scanning tunneling microscopy (STM) studies of, on the one hand, the formation of supramolecular networks resulting from the self-assembly of organic precursors on Cu(111), Au(111), Si(111)-B and HOPG surfaces, and, on the other hand, on-surface chemical reactions on Cu(111), Au(111) and HOPG. The first chapter describes the state of the art of supramolecular networks and on-surface reactions. The second chapter presents the experimental setup and the underlying theory, as well as the preparation of the substrates and the tip and the method of molecular deposition. The third chapter presents the supramolecular networks formed by depositing molecules functionalized with bromine or nitrogen atoms on the Cu(111) and Si(111)-B surfaces. In all cases the surface plays a key role in the formation of the networks: on Cu(111), two linear networks are stabilized by metal-organic interactions between Cu adatoms and the organic molecules; on Si(111)-B, the networks are commensurate with the √3 × √3 reconstruction of the surface. Depending on the competition between intermolecular and molecule-surface interactions, the networks can be 2D or 1D. The fourth chapter presents the first example of on-surface radical polymerization. To this end, four arylalkyl ether molecules and two arylalkane molecules were synthesized and deposited on Cu(111), Au(111) and HOPG surfaces. The proposed mechanism suggests that the reaction is initiated by an inelastic electron tunneling (IET) process, which generates the free radicals that then polymerize on the surface.
357

Development and validation of a predictive model to ensure the long-term electromagnetic compatibility of embedded electronic systems / Développement et validation de modèle prédictif pour assurer la compatibilité électromagnétique à long terme des systèmes électroniques embarqués.

Ghfiri, Chaimae, 13 December 2017
With the technological evolution of integrated circuits (ICs) through transistor scaling, which multiplies the number of transistors within a chip, the requirements in terms of emission and immunity levels for circuits integrated in complex embedded systems are becoming more restrictive, mainly in the aeronautic, space and automotive industries. Moreover, since the Electromagnetic Compatibility (EMC) levels of electronic equipment must still meet the requirements after aging, the EMC margins defined by the manufacturers are often overestimated and the filtering systems designed by equipment manufacturers can be oversized. Therefore, for integrated circuits dedicated to embedded applications, it is necessary to study both EMC modeling and reliability modeling. In recent years, several standards have been proposed for the construction of predictive EMC models, such as ICEM-CE/RE (Integrated Circuit Emission Model for Conducted and Radiated Emission) and ICIM-CI (Integrated Circuit Immunity Model for Conducted Immunity). On the other hand, to integrate the effect of aging into EMC models, it is important to study the main intrinsic degradation mechanisms that accelerate the aging of ICs: HCI (Hot Carrier Injection), TDDB (Time Dependent Dielectric Breakdown), EM (Electromigration) and NBTI (Negative Bias Temperature Instability). Standardized reliability prediction models exist, such as the MIL-HDBK-217 and FIDES standards.
However, these models take into account only one degradation mechanism at a time, whereas the combination of several degradation mechanisms can be critical for IC performance and can contribute to the evolution of EMC levels. This thesis introduces these aspects of EMC and reliability modeling. It then deals with the construction of a conducted emission model of an FPGA and proposes new modeling methodologies. Finally, the reliability of the tested FPGA is described using a new predictive model that takes into account the activation of the different degradation mechanisms; this reliability model has been combined with the EMC model to predict long-term conducted emission levels.
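Treating the four degradation mechanisms as independent competing failure modes, their rates add and the survival probability is exponential in the combined rate; a sketch with hypothetical per-mechanism rates (real values come from the stress models in MIL-HDBK-217 or FIDES, and the thesis' model goes beyond this simple additive picture):

```python
import math

# Hypothetical constant failure rates per mechanism, in failures per hour;
# real values depend on temperature, voltage and technology stress models.
RATES = {"HCI": 2e-9, "TDDB": 5e-10, "EM": 1e-9, "NBTI": 3e-9}

def total_rate(rates):
    # competing-risk (series) assumption: mechanisms are independent and
    # the first one to fail takes the part down, so the rates add
    return sum(rates.values())

def reliability(t_hours, rates=RATES):
    """Survival probability at time t under a constant combined rate."""
    return math.exp(-total_rate(rates) * t_hours)
```

The limitation the thesis addresses is visible here: a single-mechanism standard would use only one entry of the table, underestimating the combined rate and hence the drift of EMC levels over the equipment's life.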
358

Sobre la difusión de la luz por nanopartículas con propiedades ópticas convencionales y no convencionales = On light scattering by nanoparticles with conventional and non-conventional optical properties

García Cámara, Braulio, 04 November 2010
Inspired by recent research in the fields of plasmonics and metamaterials, this work studies the scattering of light by small particles (nanoparticles in the visible range) with arbitrary optical properties, both conventional (dielectric and metallic) and non-conventional (magnetic permeability different from 1). The work focuses mainly on controlling the properties, principally the direction, of the light scattered by a particle through the manipulation of its optical constants. The main applications of this study lie in the fields of biosensors and optical communications, which is why both isolated particles and particle systems (mainly dimers) are considered.
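In the dipolar (Rayleigh) limit, the directional control mentioned here reduces to balancing the electric and magnetic dipole responses; a sketch of the first Kerker condition (zero backscattering when ε = μ) under the small-particle approximation — work on arbitrary sizes uses the full Mie coefficients instead:

```python
import math

def rayleigh_dipoles(eps, mu):
    """Relative electric/magnetic dipole strengths of a very small sphere:
    a ~ (eps-1)/(eps+2), b ~ (mu-1)/(mu+2) (Rayleigh limit, common
    prefactors dropped)."""
    return (eps - 1.0) / (eps + 2.0), (mu - 1.0) / (mu + 2.0)

def scattered_intensity(theta, eps, mu):
    # unpolarized far-field pattern of interfering electric + magnetic dipoles
    a, b = rayleigh_dipoles(eps, mu)
    s_perp = a + b * math.cos(theta)   # perpendicular polarization amplitude
    s_par = a * math.cos(theta) + b    # parallel polarization amplitude
    return 0.5 * (s_perp ** 2 + s_par ** 2)
```

With ε = μ the two dipoles interfere destructively in the backward direction (θ = π), the "zero-backward" condition exploited for directional scattering; a nonmagnetic particle (μ = 1) always backscatters.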
359

Estudio teórico y experimental de la guía dieléctrica en banda invertida / Theoretical and experimental study of the inverted band dielectric waveguide

Prieto Gala, Andrés, 01 October 1979
The inverted band dielectric waveguide is discussed as a modification capable of increasing the quality factor of the rectangular dielectric waveguide. In its open configuration it is studied with the effective dielectric constant method; the geometry is optimized to obtain the largest possible bandwidth, and the possibly spurious nature of some of the modes found is discussed. It is shown that the classical excitation system for dielectric waveguides is not valid for the inverted band guide, and a horn-type transition is optimized instead. The propagation constant in the guide is measured with a movable probe system, obtaining good agreement with the theoretical predictions. Finally, the guide enclosed in a box is studied by Schelkunoff's method, confirming the existence of EH modes.
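The effective dielectric constant method builds the guide solution from symmetric-slab sub-problems; a sketch of its basic building block, the fundamental even-TE slab mode found by bisection, with illustrative permittivity and thickness rather than the thesis' guide dimensions:

```python
import math

def slab_te0_neff(eps_r=2.25, d_over_lambda=0.3, iters=200):
    """Fundamental even-TE effective index of a symmetric dielectric slab in
    air, by bisection on the dispersion relation tan(u) = w/u, where
    u = (k0*d/2)*sqrt(eps_r - n^2) and w = (k0*d/2)*sqrt(n^2 - 1).
    Single-mode parameters are assumed so the fundamental branch has u < pi/2."""
    half = math.pi * d_over_lambda        # k0 * d / 2
    def f(n):
        u = half * math.sqrt(eps_r - n * n)
        w = half * math.sqrt(n * n - 1.0)
        return math.tan(u) - w / u
    lo, hi = 1.0 + 1e-9, math.sqrt(eps_r) - 1e-9   # n between cladding and core
    for _ in range(iters):                         # f decreases from + to -
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The full effective dielectric constant method then replaces each vertical slice of the guide cross-section by its slab effective permittivity and solves a second slab problem in the transverse direction.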
360

Aportaciones al estudio de las máquinas eléctricas de flujo axial mediante la aplicación del método de los elementos finitos / Contributions to the study of axial flux electrical machines through the application of the finite element method

Frias Valero, Eduardo, 12 November 2004
Industrial electrical machines have existed since the end of the nineteenth century, so the field is mature and, from an electromechanical point of view, difficult to improve, unless new materials or manufacturing technologies turn machine concepts with few applications into candidates for optimization. A look at the recent literature on non-conventional machines shows that the most interesting designs involve flux patterns that can no longer be simplified to two dimensions as in conventional motors: the magnetic field must be interpreted in all three dimensions. Of the proposed topologies, the axial flux machine is the one finding the largest number of applications. This thesis presents the state of the art of this machine topology, justifies its need alongside radial flux machines, presents its fundamental design parameters, and presents the traditional analytical model of the induction machine together with the analytical solution of the machine equations. Axial flux induction machines with a solid non-ferromagnetic rotor exhibit phenomena with no obvious explanation, such as no-load rotation at speeds far from synchronism, or difficulties in starting. Previous attempts have been made to impose preferential current paths in such machines in order to improve their performance, but no explanation has yet been found for some of these phenomena. Since the problem is recognized in the literature as three-dimensional, 3D models of axial flux machines were analysed using the Finite Element Method (FEM). The 3D simulation of electromagnetic fields is not new in itself, but the simulations reported in the literature involve static magnetic fields, mostly in 2D, and are never applied to moving bodies; in this work it was found that, for drawing conclusions, the graphical image of a simulation is as important as the numbers. The results were validated against prototypes developed in the Departament d'Enginyeria Elèctrica of the UPC, within the team directed by Ricard Bosch Tous. These prototypes share the axial geometry, the number of poles and similar inner and outer dimensions: machines with 20 pole pairs, formed by one or two inductors consisting of a ferromagnetic or non-ferromagnetic support carrying the inductor winding, the rotor always being an aluminium disc whose dimensions depend on the prototype. Iron and ironless inductors were tested, showing similar results and no-load speeds that agree with the simulations. The thesis concludes that slotted rotors are needed, because of the current vortices that form, in order to lengthen the current lines and hence their useful component; it also concludes that the working frequency must be increased above 50 Hz, with better operation expected at 500 and 1000 Hz.
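For a machine with 20 pole pairs, the frequencies discussed translate directly into synchronous speed via n_s = 60·f/p; a trivial sketch (the anomalous no-load speeds reported above are precisely deviations from this figure):

```python
def synchronous_speed_rpm(f_hz, pole_pairs):
    """Synchronous speed of the rotating field: n_s = 60 * f / p, in rpm."""
    return 60.0 * f_hz / pole_pairs

def slip(n_sync_rpm, n_rotor_rpm):
    # fractional slip of an induction machine
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

# the 20-pole-pair prototypes at the three frequencies considered:
speeds = {f: synchronous_speed_rpm(f, 20) for f in (50, 500, 1000)}
```

At 50 Hz a 20-pole-pair machine turns at only 150 rpm synchronous, which is one reason why raising the supply frequency to 500 Hz or 1 kHz (1500 and 3000 rpm) makes these prototypes more useful.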