  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Distributed source coding: tools and applications to video compression

Toto-Zarasoa, Velotiaray 29 November 2010 (has links) (PDF)
Distributed source coding is a technique for compressing several correlated sources without any cooperation between the encoders, and with no rate loss provided the sources are decoded jointly. Building on this principle, distributed video coding exploits the correlation between successive frames of a video, simplifying the encoder as much as possible and leaving the decoder to exploit the correlation. Among the contributions of this thesis, the first part addresses the asymmetric coding of binary sources whose distribution is not uniform, and then the coding of hidden Markov sources. We first show that, for both types of sources, exploiting the distribution at the decoder improves the compression rate. For the binary symmetric channel modeling the correlation between the sources, we propose a tool, based on the EM algorithm, for estimating its parameter. We show that this tool yields a fast estimate of the parameter while achieving an accuracy close to the Cramér-Rao bound. In the second part, we develop tools for successfully decoding the sources studied earlier. To this end, we use syndrome-based Turbo and LDPC codes together with the EM algorithm. This part was also the occasion to develop new tools to reach the bounds of asymmetric and non-asymmetric coding. We also show that, for non-uniform sources, the roles of the correlated sources are not symmetric. Finally, we show that the proposed source models fit the bit-plane distributions of videos well, and we present results demonstrating the effectiveness of the developed tools. These tools noticeably improve the rate-distortion performance of a distributed video codec, although under certain additivity conditions on the correlation channel.
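The EM-based estimation of the correlation parameter can be illustrated with a toy sketch. This is not the thesis's decoder-coupled estimator: here the non-uniform source X ~ Bernoulli(q) with q assumed known is observed only through a BSC as Y = X xor Z, and EM alternates between computing posteriors on X and re-estimating the crossover probability. All numeric values (q, the true crossover p, the initial guess) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: X ~ Bernoulli(q) (non-uniform source), Y = X xor Z, Z ~ Bernoulli(p).
# Only Y is observed; q is assumed known. Because q != 0.5, the crossover
# probability p is identifiable from Y alone.
n, q, p_true = 100_000, 0.2, 0.08
x = (rng.random(n) < q).astype(int)
y = x ^ (rng.random(n) < p_true).astype(int)

p_hat = 0.25  # initial guess for the BSC crossover probability
for _ in range(50):
    # E-step: posterior P(x_i = 1 | y_i) under the current estimate p_hat
    like1 = q * np.where(y == 1, 1 - p_hat, p_hat)
    like0 = (1 - q) * np.where(y == 1, p_hat, 1 - p_hat)
    post1 = like1 / (like1 + like0)
    # M-step: expected fraction of positions where x_i differs from y_i
    p_hat = float(np.mean(np.where(y == 1, 1 - post1, post1)))

print(round(p_hat, 3))  # close to p_true
```

In the thesis the E-step posteriors come from the syndrome-based LDPC/turbo decoder rather than from the correlation model alone, but the M-step (expected flip fraction) has the same shape.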
252

Spectrally efficient communications over the fading channel

Lamy, Catherine 18 April 2000 (has links) (PDF)
Owing to the current explosion in telecommunications, operators are experiencing a growth crisis that forces them to install ever more relay stations and, in large cities, to split cells (the coverage area of a relay) into micro-cells, in order to cope with the ever-growing demand for communications. The designers of new transmission networks are therefore constantly searching for a more efficient use of the available resources.
253

On the pricing equations of some path-dependent options

Eriksson, Jonatan January 2006 (has links)
This thesis consists of four papers and a summary. The common topic of the included papers is the pricing equations of path-dependent options. Various properties of barrier options and American options are studied, such as convexity of option prices, the size of the continuation region in American option pricing and pricing formulas for turbo warrants. In Paper I we study the effect of model misspecification on barrier option pricing. It turns out that, as in the case of ordinary European and American options, this is closely related to convexity properties of the option prices. We show that barrier option prices are convex under certain conditions on the contract function and on the relation between the risk-free rate of return and the dividend rate. In Paper II a new condition is given to ensure that the early exercise feature in American option pricing has a positive value. We give necessary and sufficient conditions for the American option price to coincide with the corresponding European option price in at least one diffusion model. In Paper III we study parabolic obstacle problems related to American option pricing and in particular the size of the non-coincidence set. The main result is that if the boundary of the set of points where the obstacle is a strict subsolution to the differential equation is C^1-Dini in space and Lipschitz in time, there is a positive distance, which is uniform in space, between the boundary of this set and the boundary of the non-coincidence set. In Paper IV we derive explicit pricing formulas for turbo warrants under the classical Black-Scholes assumptions.
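The "classical Black-Scholes assumptions" mentioned for Paper IV can be made concrete with a small sketch. Turbo warrants behave like barrier options, so the code below prices a textbook down-and-out call via the standard reflection-principle formula (zero dividends, barrier below the strike) built on the vanilla Black-Scholes price. This is a generic illustration, not the explicit turbo-warrant formulas derived in the thesis; all parameter values are arbitrary examples.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    """Black-Scholes European call, no dividends."""
    d1 = (log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

def down_and_out_call(s, k, h, r, sigma, t):
    """Down-and-out call via the reflection principle (valid for h <= min(s, k))."""
    if s <= h:
        return 0.0  # already knocked out
    power = 2.0 * r / sigma**2 - 1.0
    return bs_call(s, k, r, sigma, t) - (h / s) ** power * bs_call(h * h / s, k, r, sigma, t)

vanilla = bs_call(100, 100, 0.03, 0.2, 1.0)
barrier = down_and_out_call(100, 100, 80, 0.03, 0.2, 1.0)
print(round(vanilla, 2), round(barrier, 2))
```

The barrier price is strictly below the vanilla price and converges to it as the barrier is pushed toward zero, which is a quick sanity check on the formula.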
255

Flexible Constraint Length Viterbi Decoders On Large Wire-area Interconnection Topologies

Garga, Ganesh 07 1900 (has links)
To achieve the goal of efficient "anytime, anywhere" communication, it is essential to develop mobile devices which can efficiently support multiple wireless communication standards. Also, in order to efficiently accommodate the further evolution of these standards, it should be possible to modify/upgrade the operation of the mobile devices without having to recall previously deployed devices. This is achievable if as much functionality of the mobile device as possible is provided through software. A mobile device which fits this description is called a Software Defined Radio (SDR). Reconfigurable hardware-based solutions are an attractive option for realizing SDRs as they can potentially provide a favourable combination of the flexibility of a DSP or a GPP and the efficiency of an ASIC. The work presented in this thesis discusses the development of efficient reconfigurable hardware for one of the most energy-intensive functionalities in the mobile device, namely, Forward Error Correction (FEC). FEC is required in order to achieve reliable transfer of information at minimal transmit power levels. FEC is achieved by encoding the information in a process called channel coding. Previous studies have shown that the FEC unit accounts for around 40% of the total energy consumption of the mobile unit. In addition, modern wireless standards also place the additional requirement of flexibility on the FEC unit. Thus, the FEC unit of the mobile device represents a considerable amount of computing ability that needs to be accommodated into a very small power, area and energy budget. Two channel coding techniques have found widespread use in most modern wireless standards, namely convolutional coding and turbo coding. The Viterbi algorithm is most widely used for decoding convolutionally encoded sequences. It is possible to use this algorithm iteratively in order to decode turbo codes.
Hence, this thesis specifically focusses on developing architectures for flexible Viterbi decoders. Chapter 2 provides a description of the Viterbi and turbo decoding techniques. The flexibility requirements placed on the Viterbi decoder by modern standards can be divided into two types: code rate flexibility and constraint length flexibility. The code rate dictates the number of received bits which are handled together as a symbol at the receiver. Hence, code rate flexibility needs to be built into the basic computing units which are used to implement the Viterbi algorithm. The constraint length dictates the number of computations required per received symbol as well as the manner of transfer of results between these computations. Hence, assuming that multiple processing units are used to perform the required computations, supporting constraint length flexibility necessitates changes in the interconnection network connecting the computing units. A constraint length K Viterbi decoder needs 2^(K-1) computations to be performed per received symbol. The results of the computations are exchanged among the computing units in order to prepare for the next received symbol. The communication pattern according to which these results are exchanged forms a graph called a de Bruijn graph, with 2^(K-1) nodes. This implies that providing constraint length flexibility requires being able to realize de Bruijn graphs of various sizes on the interconnection network connecting the processing units. This thesis focusses on providing constraint length flexibility in an efficient manner. Quite clearly, the topology employed for interconnecting the processing units has a huge effect on the efficiency with which multiple constraint lengths can be supported. This thesis aims to explore the usefulness of interconnection topologies similar to the de Bruijn graph, for building constraint length flexible Viterbi decoders.
Five different topologies have been considered in this thesis, which can be discussed under two different headings, as done below.

De Bruijn network-based architectures: The interconnection network that is of chief interest in this thesis is the de Bruijn interconnection network itself, as it is identical to the communication pattern for a Viterbi decoder of a given constraint length. The problem of realizing flexible constraint length Viterbi decoders using a de Bruijn network has been approached in two different ways. The first is an embedding-theoretic approach, where the problem of supporting multiple constraint lengths on a de Bruijn network is seen as a problem of embedding smaller de Bruijn graphs on a larger de Bruijn graph. Mathematical manipulations are presented to show that this embedding can generally be accomplished with a maximum dilation bounded in terms of N, the number of computing nodes in the physical network, while simultaneously avoiding any congestion of the physical links. In this case, however, the mapping of the decoder states onto the processing nodes is assumed fixed. Another scheme is derived based on a variable assignment of decoder states onto computing nodes, which turns out to be more efficient than the embedding-based approach. For this scheme, the maximum number of cycles per stage is found to be limited to 2 irrespective of the maximum constraint length to be supported. In addition, it is also found to be possible to execute multiple smaller decoders in parallel on the physical network, for smaller constraint lengths. Consequently, post logic-synthesis, this architecture is found to be more area-efficient than the architecture based on the embedding-theoretic approach. It is also a more efficiently scalable architecture.
Alternative architectures: There are several interconnection topologies which are closely connected to the de Bruijn graph, and hence could form attractive alternatives for realizing flexible constraint length Viterbi decoders. We consider two more topologies from this class, namely the shuffle-exchange network and the flattened butterfly network. The variable state assignment scheme developed for the de Bruijn network is found to be directly applicable to the shuffle-exchange network. The average number of clock cycles per stage is found to be limited to 4 in this case. This is again independent of the constraint length to be supported. On the flattened butterfly (which is actually identical to the hypercube), a state scheduling scheme similar to that of bitonic sorting is used. This architecture is found to offer the ideal throughput of one decoded bit every clock cycle, for any constraint length. For comparison with a more general purpose topology, we consider a flexible constraint length Viterbi decoder architecture based on a 2D-mesh, which is a popular choice for general purpose applications, as well as many signal processing applications. The state scheduling scheme used here is also similar to that used for bitonic sorting on a mesh. All the alternative architectures are capable of executing multiple smaller decoders in parallel on the larger interconnection network.

Inferences: Following logic synthesis and power estimation, it is found that the de Bruijn network-based architecture with the variable state assignment scheme yields the lowest (area)·(time) product, while the flattened butterfly network-based architecture yields the lowest (area)·(time)² product. This means that the de Bruijn network-based architecture is the best choice for moderate throughput applications, while the flattened butterfly network-based architecture is the best choice for high throughput applications.
However, as the flattened butterfly network is less scalable in terms of size compared to the de Bruijn network, it can be concluded that among the architectures considered in this thesis, the de Bruijn network-based architecture with the variable state assignment scheme is overall an attractive choice for realizing flexible constraint length Viterbi decoders.
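The de Bruijn structure of the Viterbi trellis described in this abstract is easy to verify in a few lines: interpreting each state as the (K-1)-bit shift-register contents, shifting in a new input bit gives exactly the edge rule of the binary de Bruijn graph on 2^(K-1) nodes. A minimal sketch (the constraint length and example state are arbitrary choices, not values from the thesis):

```python
def debruijn_successors(state: int, k: int):
    """Trellis successors of a Viterbi state for constraint length k.

    States are the (k-1)-bit shift-register contents; shifting in a new
    input bit b moves state s to ((s << 1) | b) mod 2**(k-1), which is
    exactly the edge rule of the binary de Bruijn graph on 2**(k-1) nodes.
    """
    n = 1 << (k - 1)
    return ((state << 1) % n, ((state << 1) | 1) % n)

k = 4                      # constraint length -> 8 trellis states
n = 1 << (k - 1)
edges = {s: debruijn_successors(s, k) for s in range(n)}
print(edges[5])            # state 0b101 -> states 0b010 and 0b011, i.e. (2, 3)
```

Every state has exactly two successors and two predecessors, which is what makes mapping the exchange pattern onto a physical de Bruijn (or shuffle-exchange) network so natural.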
256

Parallelized Architectures For Low Latency Turbo Structures

Gazi, Orhan 01 January 2007 (has links) (PDF)
In this thesis, we present low latency general concatenated code structures suitable for parallel processing. We propose parallel decodable serially concatenated codes (PDSCCs), a general structure from which many variants of serially concatenated codes can be constructed. Using this most general structure we derive parallel decodable serially concatenated convolutional codes (PDSCCCs). Convolutional product codes, which are instances of PDSCCCs, are studied in detail. PDSCCCs have much less decoding latency and show almost the same performance compared to classical serially concatenated convolutional codes. Using the same idea, we propose parallel decodable turbo codes (PDTCs), a general structure for constructing parallel concatenated codes. PDTCs have much less latency compared to classical turbo codes while achieving similar performance. We extend the approach proposed for the construction of parallel decodable concatenated codes to trellis coded modulation, turbo channel equalization, and space-time trellis codes, and show that low latency systems can be constructed using the same idea. Parallel decoding introduces new implementation problems. One such problem is memory collision, which occurs when multiple decoder units attempt to access the same memory device. We propose novel interleaver structures which prevent the memory collision problem while achieving performance close to other interleavers.
257

Non-iterative joint decoding and signal processing: universal coding approach for channels with memory

Nangare, Nitin Ashok 16 August 2006 (has links)
A non-iterative receiver is proposed to achieve near-capacity performance on intersymbol interference (ISI) channels. There are two main ingredients in the proposed design: i) the use of a novel BCJR-DFE equalizer which produces optimal soft estimates of the inputs to the ISI channel given all the observations from the channel and L past symbols exactly, where L is the memory of the ISI channel; ii) the use of an encoder structure that ensures that L past symbols can be used in the DFE in an error-free manner through the use of a capacity-achieving code for a memoryless channel. The computational complexity of the proposed receiver structure is less than that of one iteration of the turbo receiver. We also provide a proof showing that the proposed receiver achieves the i.i.d. capacity of any constrained-input ISI channel. This DFE-based receiver has several advantages over an iterative (turbo) receiver, such as low complexity, the fact that codes that are optimized for memoryless channels can be used with channels with memory, and finally that the channel does not need to be known at the transmitter. The proposed coding scheme is universal in the sense that a single code of rate r, optimized for a memoryless channel, provides small error probability uniformly across all AWGN-ISI channels of i.i.d. capacity less than r. This general principle of the proposed non-iterative receiver also applies to other signal processing functions, such as timing recovery, pattern-dependent noise whitening, joint demodulation and decoding, etc. This makes the proposed encoder and receiver structure a viable alternative to iterative signal processing. The results show significant complexity reduction and performance gain for the case of timing recovery and pattern-dependent noise whitening for magnetic recording channels.
258

High-performance computer system architectures for embedded computing

Lee, Dongwon 26 August 2011 (has links)
The main objective of this thesis is to propose new methods for designing high-performance embedded computer system architectures. To achieve this goal, three major components of multi-processor embedded systems are examined in turn: multi-core processing elements (PEs), DRAM main memory systems, and on/off-chip interconnection networks. The first section of this thesis presents architectural enhancements to graphics processing units (GPUs), one of the multi- or many-core PEs, for improving performance of embedded applications. An embedded application is first mapped onto GPUs to explore the design space, and then architectural enhancements to existing GPUs are proposed for improving throughput of the embedded application. The second section proposes high-performance buffer mapping methods, which exploit useful features of DRAM main memory systems, in DSP multi-processor systems. The memory wall problem becomes increasingly severe in multiprocessor environments because of communication and synchronization overheads. To alleviate the memory wall problem, this section exploits bank concurrency and page mode access of DRAM main memory systems to increase the performance of multiprocessor DSP systems. The final section presents a network-centric Turbo decoder and network-centric FFT processors. In the era of multi-processor systems, the interconnection network is another performance bottleneck. To handle heavy communication traffic, this section applies a crossbar switch, one of the indirect networks, to the parallel Turbo decoder, and applies a mesh topology to the parallel FFT processors. When designing the mesh FFT processors, a very different approach is taken to improve performance: an optical fiber is used as a new interconnection medium.
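The bank-concurrency idea in the second section can be sketched with a hypothetical address mapping: if communication buffers are placed so that their addresses differ in the bank bits, accesses by different processors land in different banks and can overlap. The field widths below are illustrative assumptions, not the configuration used in the thesis.

```python
# Hypothetical DRAM address split: 8 banks, 1 KiB columns per row.
BANK_BITS, COL_BITS = 3, 10

def dram_map(addr: int):
    """Split a byte address into (row, bank, column), with the bank bits
    placed just above the column bits so that buffers laid out at
    page-sized strides land in different banks."""
    col = addr & ((1 << COL_BITS) - 1)
    bank = (addr >> COL_BITS) & ((1 << BANK_BITS) - 1)
    row = addr >> (COL_BITS + BANK_BITS)
    return row, bank, col

# Two communication buffers one page apart map to different banks, so a
# producer and a consumer can keep both banks' rows open concurrently.
print(dram_map(0x0000), dram_map(0x0400))  # (0, 0, 0) (0, 1, 0)
```

With this placement, page-mode access keeps each buffer's row open while bank concurrency hides the activate/precharge latency of the other buffer, which is the effect the thesis exploits for multiprocessor DSP buffers.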
259

Turbocharging of passenger-car DI gasoline engines using exhaust-gas turbochargers with variable turbine geometry

Schmalzl, Hans-Peter 21 October 2006 (has links) (PDF)
The "downsizing" concept for gasoline and diesel engines, aimed at improving fuel consumption and pollutant emissions, has by now been confirmed beyond doubt by many practical examples and theoretical studies. Since downsizing is inseparable from boosting, however, the demand is growing for boosting technologies that overcome its main shortcoming: insufficient torque at low engine speed. With increasing specific power and thus higher boost levels, this problem becomes ever more prominent. Against this background, boosting with a VTG (variable turbine geometry) has become established for passenger-car diesel engines. For gasoline engines, the step from the simpler wastegate turbocharger to a VTG has not yet been taken, chiefly because of the higher thermal load due to higher exhaust-gas temperatures, and because of the wider air mass-flow range. On the other hand, there is now, especially for gasoline engines, a strong need to improve fuel consumption and driving dynamics in combination with turbocharging. Given the advances made in recent years in gasoline direct injection and boosting technology, the question increasingly arises whether using a VTG on a gasoline engine can deliver fuel savings and driveability improvements as large as those achieved on passenger-car diesel engines some years ago. In this work, the potential of a VTG on a direct-injection gasoline engine was investigated in depth through experiments and engine process simulation. Directly transferring today's customary diesel VTG technology to a gasoline engine, however, yields only marginal improvements in specific fuel consumption.
To serve the full speed range of the gasoline engine in its baseline calibration, the adjustment range of the VTG must be stretched to its limits, which brings efficiency penalties. Moving to a twin-scroll turbine housing combined with the VTG makes it possible to improve the engine's gas exchange, since the exhaust event of each cylinder is less disturbed by the other cylinders. The effect is, however, much weaker than with a twin-scroll wastegate turbocharger, where the scroll separation can be maintained until just upstream of the turbine wheel; in the VTG twin-scroll turbine, the separation necessarily ends upstream of the guide vanes. In the vaned annular channel, the two exhaust streams, separated up to that point, meet and influence each other again, although the negative effects are smaller than with a single-scroll turbine with no separation in the turbine housing at all. The better utilization of the kinetic energy of the blow-down pulse, which is normally possible with pulse charging and separately routed exhaust runners, cannot be achieved with a VTG turbine. Especially in the lower engine speed range, where the guide vanes are nearly closed, the pressure pulsations are strongly damped and thus contribute only a small share of the total enthalpy of the exhaust gas. As the investigations showed, this disadvantage of the VTG can nevertheless be overcompensated by the smaller turbine flow at small vane openings, which allows the torque at low engine speeds to be raised. A much better scroll separation can be achieved by using a VTG double-flow turbine: with two spiral channels routed separately around the turbine circumference, the cross-over areas can be reduced and the mutual interference of the exhaust streams thereby substantially decreased.
As far as the effectiveness of the scroll separation is concerned, the conditions in this configuration are comparable to those of wastegate twin-scroll turbines. The full potential of this optimized scroll separation can be exploited by changing the calibration of the camshaft phasers across the engine map: longer valve overlap can then be realized in the lower engine speed range, substantially increasing the scavenging air fraction in this region of the map. This measure has a very positive influence on the engine operating values owing to:
• reduced knock sensitivity through a lower residual gas fraction;
• a lower mean exhaust temperature upstream of the turbine, and thus the possibility of raising the combustion air-fuel ratio;
• a reduced flow-range requirement for compressor and turbine, and thus the possibility of operating the turbocharger at better efficiencies.
Because the double-flow arrangement limits the inflow cross-section over the turbine circumference (180° per turbine scroll), however, the maximum flow through the turbine is lower. The simulation results showed that the mean exhaust pressure upstream of the turbine therefore rises in the upper full-load speed range. To prevent this, the double-flow turbine can be fitted with a so-called constant-pressure/pulse changeover valve, which connects the two turbine scrolls at high engine speeds. With the changeover valve open, the exhaust gas can distribute itself over both scrolls, and the pulsation is further reduced. Both effects lower the turbine power and thus provide the desired limitation of the boost pressure. At the same time, the changeover valve can also be operated as an additional wastegate, which extends the flow range of the turbine even further.
In the investigations carried out, the combination of the measures described:
• VTG with double-flow turbine,
• constant-pressure/pulse changeover,
• increased valve overlap,
led to an increase of the steady-state full-load torque of 40 % at nM = 1500 rpm, together with an improvement of the scavenging pressure differential of about 400 mbar at the rated-power point, compared with the single-scroll wastegate baseline turbocharger. In transient operation, using a load step at nM = 1800 rpm as an example, the time to reach 90 % of rated torque was shortened by about 50 %. Although, based on the investigated variants, further development work will still be necessary regarding the aerodynamic design of the individual components, the controllability of the VTG and the mechanical durability, the very positive results indicate a great potential for turbocharging DI gasoline engines with variable turbine geometry.
260

Exhaust system energy management of internal combustion engines

Wijewardane, M. Anusha January 2012 (has links)
Today, the investigation of fuel economy improvements in internal combustion engines (ICEs) has become the most significant research interest among automobile manufacturers and researchers. The scarcity of natural resources, progressively increasing oil prices, carbon dioxide taxation and stringent emission regulations all make fuel economy research relevant and compelling. The enhancement of engine performance solely using in-cylinder techniques is proving increasingly difficult, and as a consequence the concept of exhaust energy recovery has emerged as an area of considerable interest. Three main energy recovery systems have been identified that are at various stages of investigation. Vapour power bottoming cycles and turbo-compounding devices have already been applied in commercially available marine engines and automobiles. Although the fuel economy benefits are substantial, system design implications have limited their adoption due to the additional components and the complexity of the resulting system. In this context, thermo-electric (TE) generation systems, though still in their infancy for vehicle applications, have been identified as attractive, promising and solid-state candidates of low complexity. The performance of these devices is limited by the relative infancy of materials investigations and module architectures; there is great potential to be explored. The initial modelling work reported in this study shows that with current materials and construction technology, thermo-electric devices could be produced to displace the alternator of light duty vehicles, providing fuel economy benefits of 3.9%-4.7% for passenger cars and 7.4% for passenger buses. More efficient thermo-electric materials could increase the fuel economy significantly, resulting in a substantially improved business case.
The dynamic behaviour of the thermo-electric generator (TEG), applied in both the main exhaust gas stream and the exhaust gas recirculation (EGR) path of light duty and heavy duty engines, was studied through a series of experimental and modelling programs. The analyses of the thermo-electric generation systems have highlighted the need for advanced heat exchanger designs as well as improved materials to enhance the performance of these systems. These research requirements led to the need for a systems evaluation technique, typified by the hardware-in-the-loop (HIL) testing method, to evaluate heat exchange and materials options. HIL methods have been used during this study to estimate both the output power and the exhaust back pressure created by the device. The work has established the feasibility of a new approach to heat exchange devices for thermo-electric systems. Based on design projections and the predicted performance of new materials, the potential to match the performance of established heat recovery methods has been demonstrated.
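As a rough illustration of the scale of thermo-electric recovery discussed above, the matched-load electrical output of a TE module follows from the Seebeck relation V = alpha * dT. All parameter values below are illustrative placeholders, not measurements from this work.

```python
def teg_matched_power(seebeck_v_per_k: float, delta_t: float, r_internal: float) -> float:
    """Peak electrical power of a thermo-electric module: with open-circuit
    voltage V = alpha * dT, a matched load (R_load = R_int) extracts
    V**2 / (4 * R_int)."""
    v_oc = seebeck_v_per_k * delta_t
    return v_oc ** 2 / (4.0 * r_internal)

# e.g. a module with an effective 0.05 V/K Seebeck coefficient, 200 K
# across it, and 2 ohm internal resistance:
print(teg_matched_power(0.05, 200.0, 2.0))  # 12.5 (watts)
```

Stacking a few dozen such modules along an exhaust-gas heat exchanger is what puts alternator-replacement power levels (a few hundred watts) within reach, which is the order of magnitude behind the fuel economy figures quoted above.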
