321 |
Turbulent burning, flame acceleration, explosion triggering. Akkerman, V'yacheslav. January 2007.
The present thesis considers several important problems of combustion theory, which are closely related to each other: turbulent burning, flame interaction with walls in different geometries, flame acceleration and detonation triggering. The theory of turbulent burning is developed within the renormalization approach. The theory takes into account realistic thermal expansion of the burning matter. Unlike previous renormalization models of turbulent burning, the theory includes flame interaction with vortices aligned both perpendicular and parallel to the average direction of flame propagation. The perpendicular vortices distort the flame front due to kinematic drift; the parallel vortices modify the flame shape because of the centrifugal force. A corrugated flame front consumes more fuel mixture per unit time and propagates much faster. The Darrieus-Landau instability is also included in the theory. The instability becomes especially important when the characteristic length scale of the flow is large. Flame interaction with non-slip walls is another large-scale effect, which influences the flame shape and the turbulent burning rate. This interaction is investigated in the thesis in different geometries of tubes with open or closed ends. When the tube ends are open, flame interaction with non-slip walls leads to an oscillating regime of burning. Flame oscillations are investigated for different flame parameters and tube widths. The average increase in the burning rate due to the oscillations is found. Propagating from a closed tube end, a flame accelerates according to the Shelkin mechanism. In the thesis, an analytical theory of laminar flame acceleration is developed. The theory predicts the acceleration rate, the flame shape and the velocity profile in the flow pushed by the flame. The theory is validated by extensive numerical simulations.
An alternative mechanism of flame acceleration is also considered, which is possible at the initial stages of burning in tubes. The mechanism is investigated using the analytical theory and direct numerical simulations. The analytical and numerical results are in very good agreement with previous experiments on “tulip” flames. The analytical theory of explosion triggering by an accelerating flame is developed. The theory describes heating of the fuel mixture by a compression wave pushed by an accelerating flame. As a result, the fuel mixture may explode ahead of the flame front. The explosion time is calculated. The theory shows good agreement with previous numerical simulations on deflagration-to-detonation transition in laminar flows. Flame interaction with sound waves is studied in the geometry of a flame propagating to a closed tube end. It is demonstrated numerically that intrinsic flame oscillations coming into resonance with acoustic waves may lead to violent folding of the flame front with a drastic increase in the burning rate. The flame folding is related to the Rayleigh-Taylor instability developing at the flame front in the oscillating acceleration field of the acoustic wave.
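The flame folding described above is attributed to the Rayleigh-Taylor instability in an oscillating acceleration field. As a minimal numerical sketch, the classical growth rate for a perturbation of wavenumber k at a density interface is gamma = sqrt(A*g*k), with A the Atwood number; every numerical value below is an illustrative assumption, not a result from the thesis.

```python
import math

# Densities of unburnt (heavy) and burnt (light) gas; a thermal expansion
# factor of ~8 is typical for hydrocarbon flames (illustrative values).
rho_u = 1.2     # unburnt gas density, kg/m^3
rho_b = 0.15    # burnt gas density, kg/m^3
A = (rho_u - rho_b) / (rho_u + rho_b)   # Atwood number

g = 2.0e4                    # assumed instantaneous acceleration amplitude, m/s^2
wavelength = 0.01            # assumed perturbation wavelength, m
k = 2.0 * math.pi / wavelength

# Classical Rayleigh-Taylor growth rate of the interface perturbation
gamma = math.sqrt(A * g * k)            # 1/s
print(A, gamma)
```

In the thesis the acceleration oscillates with the acoustic field, so the interface is alternately stable and unstable; the static formula above only indicates the order of magnitude of the growth during the destabilizing half-period.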
|
322 |
Numerical study of flame dynamics. Petchenko, Arkady. January 2007.
Modern industrial society is based on combustion, with ever increasing standards on the efficiency of burning. One of the main combustion characteristics is the burning rate, which is influenced by intrinsic flame instabilities, external turbulence, and flame interaction with combustor walls and sound waves. In the present work we started with the problem of how to include combustion along the vortex axis in the general theory of turbulent burning. We demonstrated that the most representative geometry for this problem is a hypothetical "tube" of rotating gaseous mixture. We found that burning in a vortex is similar to bubble motion in an effective acceleration field created by the centrifugal force. If the intensity of the vortex is sufficiently high, then the flame speed is determined mostly by the velocity of the bubble. The results obtained complement the renormalization theory of turbulent burning. Using the results on flame propagation along a vortex, we calculated the turbulent flame velocity, compared it to experiments and found rather good agreement. All experiments on turbulent combustion in tubes inevitably involve flame interaction with walls. In the present thesis, flame propagation in the geometry of a tube with nonslip walls has been studied extensively, both numerically and analytically. We found that in the case of an open tube, flame interaction with nonslip walls leads to an oscillating regime of burning. The oscillations are accompanied by variations of the curved flame shape and the velocity of flame propagation. If a flame propagates from the closed tube end, then the flame front accelerates without limit until detonation is triggered. The above results represent substantial progress on one of the most difficult problems of combustion theory, the problem of deflagration-to-detonation transition. We developed an analytical theory of accelerating flames and found good agreement with the results of direct numerical simulations.
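The bubble analogy can be made concrete with a rough estimate. A sketch is below: the centrifugal field in a rotating gas is g_eff = Omega^2 * r, and the Davies-Taylor rise-speed formula U = (2/3)*sqrt(g_eff*R) is used here as a stand-in for the bubble velocity. Both the use of this particular formula and all numerical values are assumptions for illustration, not results from the thesis.

```python
import math

# Assumed vortex and flame-tip parameters (illustrative only)
Omega = 2.0e3      # vortex angular velocity, rad/s
r = 0.01           # radial position inside the vortex, m
R = 0.005          # effective bubble (flame tip) radius, m

# Effective acceleration created by the centrifugal force
g_eff = Omega**2 * r                          # m/s^2

# Davies-Taylor estimate of the rise speed of a large bubble
U_bubble = (2.0 / 3.0) * math.sqrt(g_eff * R)  # m/s
print(g_eff, U_bubble)
```

For these assumed values the bubble velocity is of order 10 m/s, far above a typical laminar flame speed (~0.4 m/s for methane-air), which is the sense in which a strong vortex can dominate the flame propagation speed.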
We also performed analytical and numerical studies of another mechanism of flame acceleration caused by initial conditions. The flame ignited at the axis of a tube acquires a "finger" shape and accelerates. Still, such acceleration takes place only for a rather short time, until the flame reaches the tube wall. In the case of a flame propagating from the open tube end to the closed one, the flame front oscillates and therefore generates acoustic waves. The acoustic waves reflected from the closed end distort the flame surface. When the frequency of the acoustic mode between the flame front and the tube end comes into resonance with the intrinsic flame oscillations, the burning rate increases considerably and the flame front becomes violently corrugated.
|
323 |
Alfvén Waves and Energy Transformation in Space Plasmas. Khotyaintsev, Yuri. January 2002.
This thesis is focused on the role of Alfvén waves in energy transformation and transport in the magnetosphere. Different aspects of Alfvén wave generation, propagation and dissipation are considered. The study involves analysis of experimental data from the Freja, Polar and Cluster spacecraft, as well as theoretical development. An overview of the linear theory of Alfvén waves is presented, including the effects of finite parallel electron inertia and finite ion gyroradius, and nonlinear theory is developed for large amplitude Alfvén solitons and structures. A methodology is presented for experimental identification of dispersive Alfvén waves in a frame moving with respect to the plasma, which facilitates the resolution of the space-time ambiguity in such measurements. Dispersive Alfvén waves are identified on field lines from the topside ionosphere up to the magnetopause, and it is suggested that they play an important role in magnetospheric physics. One of the processes where Alfvén waves are important is the establishment of the field-aligned current system, which transports energy from the reconnection regions at the magnetopause to the ionosphere, where a part of the energy is dissipated. The main mechanism for the dissipation in the topside ionosphere is related to wave-particle interactions leading to particle energization/heating. An observed signature of such a process is the presence of parallel energetic electron bursts associated with dispersive Alfvén waves. The accelerated electrons (electron beams) are unstable with respect to the generation of high-frequency plasma wave modes. Therefore this thesis also demonstrates an indirect coupling between low-frequency Alfvén waves and high-frequency oscillations.
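The effect of finite parallel electron inertia mentioned above can be sketched with the standard linear dispersion relation for an inertial Alfvén wave, omega = k_par * v_A / sqrt(1 + k_perp^2 * lambda_e^2), where lambda_e = c/omega_pe is the electron inertial length. The plasma parameters below are rough auroral-zone numbers assumed for illustration, not values from the thesis.

```python
import math

# Physical constants (SI)
c = 3.0e8
e = 1.602e-19
m_e = 9.109e-31
m_p = 1.673e-27
eps0 = 8.854e-12
mu0 = 4.0e-7 * math.pi

# Assumed plasma parameters, roughly auroral-zone values
n = 1.0e7          # electron density, m^-3
B = 5.0e-6         # magnetic field, T

v_A = B / math.sqrt(mu0 * n * m_p)             # Alfven speed
omega_pe = math.sqrt(n * e**2 / (eps0 * m_e))  # electron plasma frequency
lambda_e = c / omega_pe                        # electron inertial length, m

k_par = 1.0e-6                  # assumed parallel wavenumber, 1/m
k_perp = 1.0 / lambda_e         # perpendicular scale comparable to lambda_e

# Inertial Alfven wave: parallel electron inertia lowers the phase speed
omega = k_par * v_A / math.sqrt(1.0 + (k_perp * lambda_e)**2)
print(v_A, lambda_e, omega)
```

At perpendicular scales comparable to lambda_e the wave acquires a parallel electric field, which is what makes dispersive Alfvén waves efficient at accelerating electrons along the field line.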
|
324 |
Investigations of auroral electric fields and currents. Johansson, Tommy. January 2007.
The Cluster spacecraft have been used to investigate auroral electric fields and field-aligned currents (FACs) at geocentric distances between 4 and 7 Re. The electric fields have been measured by the EFW instrument, consisting of two pairs of spherical probes, and the FACs have been calculated from measurements of the magnetic field by the FGM fluxgate magnetometer. CIS ion and PEACE electron measurements have also been used. Event studies as well as statistical studies have been used to determine the characteristics of the auroral electric fields. In two events where regions of both spatial and temporal electric field variations could be identified, the quasi-static electric fields were found to be more intense than the Alfvén waves and to contribute more to the downward Poynting flux. With the use of the four Cluster spacecraft, the quasi-static electric field structures were found to be relatively stable on a time scale of at least half a minute. Quasi-static electric fields were found throughout the altitude range covered by Cluster in the auroral region. The electric field structures were found both in the upward and downward current regions. Bipolar and monopolar electric fields, corresponding to U- and S-shaped potential structures, have been found at different plasma boundaries, consistent with the view that the plasma conditions and the geometry of the current system are related to the shape of the electric field. The type of the bipolar electric field structures (convergent or divergent) was further found to be consistent with the FAC direction. The typical scale sizes of the electric field structures have been determined to be between 4 and 5 km when mapped to ionospheric altitude. The most intense FACs associated with intense electric fields were found for small FAC widths. The widths of upward and downward FACs were similar.
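Calculating a FAC from magnetometer data, as described above, reduces for a sheet-like current to a simple Ampère's-law estimate: the current density follows from the magnetic field jump across the sheet and the sheet width, j = dB / (mu0 * w). A minimal sketch is below; dB and w are assumed numbers for illustration, not Cluster measurements, and the thesis's actual multi-spacecraft technique is more elaborate.

```python
import math

mu0 = 4.0e-7 * math.pi   # vacuum permeability, H/m

# Assumed observation: magnetic perturbation across a current sheet
dB = 20.0e-9     # field jump across the sheet, T
w = 50.0e3       # sheet width along the spacecraft path, m

# Ampere's law for an infinite plane current sheet
j = dB / (mu0 * w)       # field-aligned current density, A/m^2
print(j * 1e6)           # in microamperes per square metre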
|
325 |
On the relationship between accelerometrically recorded body acceleration and heart rate in the horse. Kubus, Katrin. 14 June 2013.
There are various methods for determining the energy consumption of humans and animals. In 1780, Lavoisier used the quantity of melt water to calculate the energy loss of a guinea pig: the animal sat in a calorimeter surrounded by ice, and the heat it gave off melted the ice. At present, indirect calorimetry, which determines energy expenditure from the gas exchange of O2 and CO2 measured in respiration trials together with the amount of nitrogen excreted in the urine, and the isotope dilution method, which works with the different urinary excretion rates of labelled hydrogen (2H) and oxygen (18O) atoms, are the "gold standard" for determining energy consumption. The heart rate method has also been under discussion for some years; it uses the relationship between heart rate and oxygen consumption to estimate energy expenditure.
All of the above methods have advantages and disadvantages, especially with regard to simple and quick everyday use and long-term studies, so alternatives are being sought. This dissertation examines the relationship between accelerometrically measured three-dimensional body acceleration and heart rate in horses at different gaits. Heart rate is used as the comparison and reference quantity: it is the direct link between acceleration, oxygen consumption, and thus energy expenditure.
Three variants of trials were carried out. Horses were led by hand (HD), moved freely (MF) in an enclosed oval, or were ridden (R). The same four horses were used in the HD and MF trials, while five other horses performed the R trials. The trials followed different schemes with the gaits walk, trot and, in part, gallop.
In every trial, three-dimensional body acceleration (at a logging frequency of 32 Hz) and heart rate were measured simultaneously. The heart rate monitor stored values at its smallest possible interval of five seconds. After processing the raw data, the dynamic part of the three-dimensional acceleration was calculated in the form of five-second means. Regression analysis was then used to relate these acceleration data to the original heart rate data. The transitional phases between gaits were excluded, because there the two parameters behave very differently and with a time shift. The simple linear regression model (y = a + bx) worked well for analysing walk and trot; once the third gait, gallop, was added, the polynomial regression model (y = a + bx + cx²) proved more suitable. The correlation coefficient r indicated the strength of the association between the two parameters. Considering the trial variants and individual horses separately, r reached values from 0.86 to 0.94; pooling all horses of each trial variant yielded r values between 0.82 and 0.87, always with a significant correlation (p < 0.05).
Hence a significant and strong correlation exists between heart rate and acceleration, although the two are not proportional to each other.
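The regression procedure described above can be sketched in a few lines; the acceleration and heart rate values below are synthetic, invented for illustration only (they are not the thesis data).

```python
import numpy as np

# Hypothetical five-second means: dynamic body acceleration (arbitrary
# units) and heart rate (beats per minute) spanning walk, trot and gallop.
acc = np.array([0.5, 0.8, 1.1, 1.9, 2.3, 2.8, 3.6, 4.0, 4.5])
hr = np.array([55.0, 62.0, 70.0, 95.0, 105.0, 118.0, 150.0, 162.0, 175.0])

# Simple linear model y = a + b*x (adequate for walk and trot alone)
b_lin = np.polyfit(acc, hr, 1)

# Polynomial model y = a + b*x + c*x**2 (preferred once gallop is included)
b_poly = np.polyfit(acc, hr, 2)

# Strength of the association: Pearson correlation coefficient r
r = np.corrcoef(acc, hr)[0, 1]
print(b_lin, b_poly, round(r, 3))
```

With real data the transition phases between gaits would first be excluded, as described above, before fitting either model.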
In conclusion, for specific aims and under certain conditions accelerometry is an appropriate method for assessing energy expenditure in horses. It can be applied quickly and usually without disturbance and, in contrast to heart rate, is nearly independent of emotional influences. Furthermore, accelerometry offers the opportunity to combine the determination of energy expenditure with an analysis of behaviour. Preconditions for its use are a situation-specific and, where possible, individual calibration, since a drawback of accelerometry is that it does not account for the effects of, for example, ground conditions, environmental influences, or the carrying of a load on energy expenditure. Simultaneous measurement of heart rate and body acceleration can be used, for example, to analyse and monitor training progress. The combination of heart rate and acceleration measurement therefore brings clear advantages.
|
326 |
The transient motion of a solid sphere between parallel walls. Brooke, Warren Thomas. 20 October 2005.
This thesis describes an investigation of the velocity field in a fluid around a solid sphere undergoing transient motion parallel to, and midway between, two plane walls. Particle Image Velocimetry (PIV) was used to measure the velocity at many discrete locations in a plane that was perpendicular to the walls and included the centre of the sphere. The transient motion was achieved by releasing the sphere from rest and allowing it to accelerate to terminal velocity.

To avoid complex wake structures, the terminal Reynolds number was kept below 200. Using solutions of glycerol and water, two different fluids were tested. The first fluid was 100%wt glycerol, giving a terminal Reynolds number of 0.6 which represents creeping flow. The second solution was 80%wt glycerol yielding a terminal Reynolds number of 72. For each of these fluids, three wall spacings were examined giving wall spacing to sphere diameter ratios of h/d = 1.2, 1.5 and 6.0. Velocity field measurements were obtained at five locations along the transient in each case. Using Y to denote the distance the sphere has fallen from rest, velocity fields were obtained at Y/d = 0.105, 0.262, 0.524, 1.05, and 3.15.

It was observed that the proximity of the walls tends to retard the motion of the sphere. A simple empirical correlation was fit to the observed sphere velocities in each case. A wall correction factor was used on the quasi-steady drag term in order to make the predicted unbounded terminal velocity match the observed terminal velocity when the walls had an effect.
While it has been previously established that the velocity of a sphere is retarded by the proximity of walls, the current research examined the link between the motion of the sphere and the dynamics of the fluid that surrounds it. By examining the velocity profile between the surface of the sphere at the equator and the wall, it was noticed that the shear stresses acting on the sphere increase throughout the transient, and also increase as the wall spacing decreases. This is due to the walls blocking the diffusion of vorticity away from the sphere as it accelerates, leading to higher shear stresses.

In an unbounded fluid, the falling sphere will drag fluid along with it, and further from the sphere, fluid will move upward to compensate. It was found that there is a critical wall spacing that will completely prevent this recirculation in the gap between the sphere and the wall. In the 80%wt glycerol case, this critical wall spacing is between h/d = 1.2 and 1.5, and in the 100%wt glycerol case the critical wall spacing is between h/d = 1.5 and 6.0.
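The creeping-flow estimates behind the setup above can be sketched as follows: Stokes drag gives the unbounded terminal velocity, from which the terminal Reynolds number follows, and a wall correction factor K > 1 on the quasi-steady drag slows the sphere as h/d decreases. All material properties and the value of K below are illustrative assumptions, not the thesis's measured values.

```python
# Assumed sphere and fluid properties (illustrative, near 100%wt glycerol)
d = 0.01         # sphere diameter, m
rho_s = 2500.0   # sphere density, kg/m^3
rho_f = 1260.0   # fluid density, kg/m^3
mu = 1.4         # dynamic fluid viscosity, Pa*s
g = 9.81         # gravitational acceleration, m/s^2

# Stokes terminal velocity in an unbounded fluid (valid for Re << 1)
v_t = (rho_s - rho_f) * g * d**2 / (18.0 * mu)

# Terminal Reynolds number, to check the creeping-flow assumption
Re = rho_f * v_t * d / mu

# Hypothetical wall correction factor on the quasi-steady drag for a
# narrow gap (e.g. h/d = 1.2); the drag rises by K, so velocity drops by K.
K = 1.7
v_t_bounded = v_t / K
print(round(Re, 3), v_t, v_t_bounded)
```

The same structure applies at higher Reynolds number, except that the Stokes drag must be replaced by an empirical drag correlation.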
|
327 |
Precise GPS-based position, velocity and acceleration determination: algorithms and tools. Salazar Hernández, Dagoberto José. 29 April 2010.
This Ph.D. Thesis focuses on the development of algorithms and tools for precise GPS-based position, velocity and
acceleration determination very far from reference stations in post-process mode.
One of the goals of this thesis was to develop a set of state-of-the-art GNSS data processing tools and make them available to the research community. The software development effort was therefore carried out within the framework of a preexisting open-source project, the GPS Toolkit (GPSTk). Validation of the GPSTk pseudorange-based processing capabilities against a trusted GPS data processing tool (BRUS) was one of the initial tasks carried out in this work.
GNSS data management proved to be an important issue when trying to extend GPSTk capabilities to carrier phase-based data processing algorithms. In order to tackle this problem, the GNSS Data Structures (GDS) and their associated processing paradigm were developed. With this approach GNSS data processing becomes like an assembly line, providing an easy and straightforward way to write clean, simple-to-read software that speeds up development and reduces errors.
The extension of GPSTk capabilities to carrier phase-based data processing algorithms was carried out with the help
of the GDS, adding important accessory classes necessary for this kind of data processing and providing reference
implementations. The performance comparison of these relatively simple GDS-based source code examples with
other state-of-the-art Precise Point Positioning (PPP) suites demonstrated that their results are among the best.
Furthermore, given that the GDS design is based on data abstraction, it allows a very flexible handling of concepts
beyond mere data encapsulation, including programmable general solvers, among others.
The problem of post-process precise positioning of GPS receivers hundreds of kilometers away from the nearest reference station at arbitrary data rates was addressed, overcoming an important limitation of classical post-processing
strategies like PPP. The advantages of GDS data abstraction regarding solvers were used to implement a kinematic
PPP-like processing based on a network of stations. This procedure was named Precise Orbits Positioning (POP)
because it is independent of precise clock information and it only needs precise orbits to work. The results from this
approach were very similar (as expected) to the standard kinematic PPP processing strategy, but yielding a higher
positioning rate. Also, the network-based processing of POP seems to provide additional robustness to the results,
even for receivers outside the network area.
The last part of this thesis focused on implementing, improving and testing algorithms for the precise determination of
velocity and acceleration hundreds of kilometers away from the nearest reference station. Special emphasis was placed on
the Kennedy method because of its good performance. A reference implementation of Kennedy method was
developed, and several experiments were carried out. Experiments done with very short baselines showed a flaw in
the way satellite velocities were computed, introducing biases in the velocity solution. A relatively simple modification
was proposed, and it reduced the RMS of 5-min average velocity 3D errors by a factor of over 35.
Then, borrowing ideas from the Kennedy and POP methods, a new velocity and acceleration determination procedure named EVA was developed and implemented, greatly extending the effective range. An experiment using
a light aircraft flying over the Pyrenees showed that both the modified-Kennedy and EVA methods were able to cope
with the dynamics of this type of flight. Finally, both the modified-Kennedy and EVA methods were applied to a challenging scenario in equatorial South America, with baselines over 1770 km, where the EVA method showed a clear advantage in
both averages and standard deviations for all components of velocity and acceleration.
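The core numerical step shared by velocity and acceleration determination methods of this kind, differentiating a sampled time series, can be sketched with central differences. The positions below are synthetic (constant acceleration at an assumed 1 Hz rate), not real GPS output, and this sketch omits everything that makes the thesis's methods precise (carrier-phase observables, satellite velocity modeling, network processing).

```python
import numpy as np

dt = 1.0                       # assumed data interval, s (1 Hz)
t = np.arange(0.0, 10.0, dt)

# Synthetic 1-D position: x0 + v0*t + 0.5*a*t^2 with v0 = 2 m/s, a = 0.3 m/s^2
x = 5.0 + 2.0 * t + 0.5 * 0.3 * t**2

# Central differences in the interior, one-sided at the ends
v = np.gradient(x, dt)         # velocity estimate, m/s
a = np.gradient(v, dt)         # acceleration estimate, m/s^2
print(v[5], a[5])
```

For a quadratic trajectory the interior central differences are exact; with noisy carrier-phase-derived positions the differentiation amplifies noise, which is why methods like Kennedy's differentiate the phase observables directly rather than the position solutions.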
|
328 |
Ion acceleration mechanisms of helicon thrusters. Williams, Logan Todd. 08 April 2013.
A helicon plasma source is a device that can efficiently ionize a gas to create high density, low temperature plasma. There is growing interest in utilizing a helicon plasma source in propulsive applications, but it is not yet known if the helicon plasma source is able to function as both an ion source and ion accelerator, or whether an additional ion acceleration stage is required. In order to evaluate the capability of the helicon source to accelerate ions, the acceleration and ionization processes must be decoupled and examined individually. To accomplish this, a case study of two helicon thruster configurations is conducted. The first is an electrodeless design that consists of the helicon plasma source alone, and the second is a helicon ion engine that combines the helicon plasma source with electrostatic grids used in ion engines. The gridded configuration separates the ionization and ion acceleration mechanisms and allows for individual evaluation not only of ion acceleration, but also of the components of total power expenditure and the ion production cost.
In this study, both thruster configurations are fabricated and experimentally characterized. The metrics used to evaluate ion acceleration are ion energy, ion beam current, and the plume divergence half-angle, as these capture the magnitude of ion acceleration and the bulk trajectory of the accelerated ions. The electrodeless thruster is further studied by measuring the plasma potential, ion number density, and electron temperature inside the discharge chamber and in the plume up to 60 cm downstream and 45 cm radially outward. The two configurations are tested across several operating parameter ranges: 343-600 W RF power, 50-450 G magnetic field strength, 1.0-4.5 mg/s argon flow rate, and the gridded configuration is tested over a 100-600 V discharge voltage range.
Both configurations have thrust and efficiency below those of contemporary thrusters of similar power, but are distinct in terms of ion acceleration capability. The gridded configuration produces a 65-120 mA ion beam with energies in the hundreds of volts that is relatively collimated. The operating conditions also demonstrate clear control over the performance metrics. In contrast, the electrodeless configuration generally produces a beam current less than 20 mA at energies between 20-40 V in a very divergent plume. The ion energy is set by the change in plasma potential from inside the device to the plume. The divergent ion trajectories are caused by regions of high plasma potential that create radial electric fields. Furthermore, the operating conditions have limited control over the resulting performance metrics. The estimated ion production cost of the helicon ranged from 132 to 212 eV/ion for argon, the lower bound of which is comparable to the 157 eV/ion in contemporary DC discharges. The primary power expenditures are due to ion loss to the walls and high electron temperature leading to energy loss at the plasma sheaths.
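The figures of merit quoted above can be related by the standard gridded-ion-thruster formulas: ideal thrust of a monoenergetic beam, F = I_b * sqrt(2*m_i*V_b/e), and ion production cost as power per extracted ion (W/A is numerically eV per singly charged ion). The beam current and voltage below are taken from the ranges in the abstract, but the power split is an assumption chosen only so the result lands in the quoted 132-212 eV/ion range; this is an illustrative sketch, not the thesis's calculation.

```python
import math

e = 1.602e-19      # elementary charge, C
m_i = 6.63e-26     # argon ion mass, kg

# Gridded configuration, values within the ranges quoted in the abstract
I_b = 0.100        # ion beam current, A (65-120 mA range)
V_b = 300.0        # net accelerating voltage, V ("hundreds of volts")

# Ideal thrust of a monoenergetic, fully collimated ion beam
F = I_b * math.sqrt(2.0 * m_i * V_b / e)   # newtons

# Assumed power budget for the production-cost illustration:
P_abs = 25.0       # power spent creating ions, W (hypothetical)
I_prod = 0.150     # total ion production current incl. wall losses, A (hypothetical)
cost = P_abs / I_prod                      # eV per ion
print(F * 1e3, cost)                       # thrust in mN, cost in eV/ion
```

Plume divergence would reduce the delivered thrust by roughly the cosine-weighted average over the beam, which is why the divergence half-angle is tracked as a separate metric.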
The conclusion from this work is that the helicon plasma source is unsuitable as a single-stage thruster system. However, it is an efficient ion source and, if paired with an additional ion acceleration stage, can be integrated into an effective propulsion system.
|
329 |
Efficient laser-driven proton acceleration in the ultra-short pulse regimeZeil, Karl 10 July 2013 (has links) (PDF)
The work described in this thesis is concerned with the experimental investigation of the acceleration of high energy proton pulses generated by relativistic laser-plasma interaction and their application. Using the high intensity 150 TW Ti:sapphire based ultra-short pulse laser Draco, a laser-driven proton source was set up and characterized. In experiments based on the established target normal sheath acceleration (TNSA) process, proton energies of up to 20 MeV were obtained. The reliable performance of the proton source was demonstrated in the first direct and dose-controlled comparison of the radiobiological effectiveness of intense proton pulses with that of conventionally generated continuous proton beams for the irradiation of in vitro tumour cells. As a potential application, radiation therapy calls for proton energies exceeding 200 MeV. Therefore the scaling of the maximum proton energy with laser power was investigated and observed to be near-linear for the case of ultra-short laser pulses. This result is attributed to the efficient, predominantly quasi-static acceleration in the short acceleration period close to the target rear surface. This assumption is further confirmed by the observation of prominent non-target-normal emission of energetic protons, reflecting an asymmetry in the field distribution of promptly accelerated electrons generated by oblique laser incidence or angularly chirped laser pulses. Supported by numerical simulations, this novel diagnostic reveals the relevance of the initial prethermal phase of the acceleration process preceding the thermal plasma sheath expansion of TNSA. During the plasma expansion phase, the efficiency of the proton acceleration can be improved using so-called reduced mass targets (RMT).
Confining the lateral target size avoids the dilution of the expanding sheath and thus increases the strength of the accelerating sheath fields; a significant increase in both the proton energy and the proton yield was observed.
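The near-linear scaling of maximum proton energy with laser power reported above can be quantified by fitting a power law E_max = a·P^b on log-log axes, where an exponent b close to 1 indicates linear scaling. A minimal sketch of such a fit; the data points are synthetic, exactly-linear placeholders for illustration, not measurements from the thesis:

```python
import numpy as np

# Hedged sketch: fit E_max = a * P**b by linear regression in log space.
# The data below are synthetic placeholders (E = 0.13 * P, exactly
# linear), NOT the thesis's measured scaling data.
power_TW = np.array([25.0, 50.0, 75.0, 100.0, 150.0])
e_max_MeV = 0.13 * power_TW  # placeholder: exactly linear in power

# polyfit returns highest-degree coefficient first: [slope, intercept].
b, log_a = np.polyfit(np.log(power_TW), np.log(e_max_MeV), 1)
print(f"exponent b = {b:.3f}")  # b near 1 => near-linear scaling
```

On real data the fitted exponent would carry uncertainty from shot-to-shot fluctuations, so a confidence interval on b would normally accompany the point estimate.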
|
330 |
A Lightweight Processor Core for Application Specific AccelerationGrant, David January 2004 (has links)
Advances in configurable logic technology have permitted the development of low-cost, high-speed configurable devices, allowing one or more soft processor cores to be introduced into a configurable computing system. Soft processor cores offer logic-area savings and reduced configuration times when compared to the hardware-only implementations typically used for application specific acceleration. Programs for a soft processor core are small and simple compared to the design of a hardware core, but can leverage custom hardware within the processor core to provide greater acceleration for specific applications. This thesis presents several configurable system models, and implements one such model on a Nios Embedded Processor Development Board. A software programmable and hardware configurable lightweight processor core known as the FAST CPU is introduced. The configurable system implementation attaches several FAST CPUs to a standard Nios processor to create a system for experimentation with application specific acceleration. This system incorporating the FAST CPUs was tested for bus utilization behaviour, computing performance, and execution times for a minheap application. Experimental results are compared with the performance of a software-only solution and with previous research results. Experimental results verify that the theory and models used to predict bus utilization are correct. Performance testing shows that the FAST CPU is approximately 25% slower than a general purpose processor, which is expected. The FAST CPU, however, is 31% smaller in terms of logic area than the general purpose processor, and is 8% smaller than the design of a hardware-only implementation of a minheap for application specific acceleration. The results verify that it is possible to move functionality from a general purpose processor to a lightweight processor, and further, to realize an increase in performance when a task is parallelized across multiple FAST CPUs.
The experimentation uses a procedure by which a set of equations can be derived for predicting bus utilization and constructing a cost-benefit curve for a coprocessing entity. Although the equations are applied to a specific system in this research, the methods generalize to any coprocessing entity.
|