71

Ultra High Compression For Weather Radar Reflectivity Data

Makkapati, Vishnu Vardhan 17 November 2006 (has links)
Honeywell Technology Solutions Lab, India / Weather is a major contributing factor in aviation accidents, incidents and delays. Doppler weather radar has emerged as a potent tool to observe weather. Aircraft carry onboard radars, but their range and angular resolution are limited. Networks of ground-based weather radars provide extensive coverage of weather over large geographic regions, and it would be helpful if these data could be transmitted to the pilot. However, the data are highly voluminous, and the bandwidth of ground-air communication links is limited and expensive. Hence, the data have to be compressed to an extent where they are suitable for transmission over low-bandwidth links. Several methods have been developed to compress pictorial data, but general-purpose schemes do not take the nature of the data into account and hence do not yield high compression ratios. This thesis develops a scheme for extreme compression of weather radar data that does not significantly degrade the meteorological information contained in the data. The method is based on contour encoding: it approximates a contour by a set of systematically chosen 'control points' that preserve its fine structure up to a certain level. The contours may be obtained by thresholding at NWS or custom reflectivity levels. This process may yield region and hole contours, enclosing 'high' or 'low' areas, which may be nested; a tag bit is used to label region and hole contours. The control point extraction method first obtains a smoothed reference contour by averaging the original contour. The points on the original contour with maximum deviation from the smoothed contour between the crossings of the two contours are then designated as control points. If the segment between the crossing points on either side of a control point exceeds a certain length, additional control points are added midway between the control point and each crossing point.
The control points, referenced with respect to the top-left corner of each contour for compact quantification, are transmitted to the receiving end, where the contour is retrieved using spline interpolation. Region and hole contours are identified using the tag bit, and the pixels between the region and hole contours at a given threshold level are filled with the corresponding color. This procedure is repeated until all contours at a given threshold level are exhausted, and is then carried out for all other thresholds, resulting in a composite picture of the reconstructed field. Extensive studies have been conducted using metrics such as compression ratio, fidelity of reconstruction and visual perception. In particular, the effects of the smoothing factor, the degree of spline interpolation and the choice of thresholds are studied. A smoothing percentage of about 10% is shown to be optimal for most data, and spline interpolation of degree 2 is found to be best suited for smooth contour reconstruction. Augmenting the NWS thresholds improves visual perception, but at the expense of a lower compression ratio. Two enhancements to the basic method are proposed: adjustments to the control points to achieve better reconstruction, and bit manipulations on the control points to obtain higher compression. Spline interpolation inherently tends to move the reconstructed contour away from the control points; this is partly compensated by stretching the control points away from the smoothed reference contour, with the amount and direction of stretch optimized against actual data fields to yield better reconstruction. In the bit manipulation study, the effects of discarding the least significant bits of the control point addresses are analyzed in detail.
Simple bit truncation introduces a bias in the contour description and reconstruction, which is largely removed by a bias compensation mechanism. The results are compared with other methods devised for encoding weather radar contours.
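The control-point extraction step lends itself to a compact sketch. The following is a minimal illustration of the idea only, not the thesis's implementation: the closed contour is smoothed with a circular moving average, the signed side of each original vertex relative to the smoothed reference is tracked, and the maximum-deviation vertex between successive crossings is kept as a control point. All names and the window heuristic here are assumptions.

```python
import numpy as np

def control_points(contour, smooth_frac=0.10):
    """Illustrative control-point extraction for a closed (N, 2) contour:
    smooth with a circular moving average, then keep the vertex of maximum
    deviation between successive crossings of original and smoothed curves."""
    n = len(contour)
    w = max(3, int(smooth_frac * n) | 1)            # odd smoothing window
    kernel = np.ones(w) / w
    # circular moving average: pad with wrap-around context, then trim
    smoothed = np.stack(
        [np.convolve(np.r_[contour[-w:, k], contour[:, k], contour[:w, k]],
                     kernel, mode="same")[w:-w] for k in (0, 1)], axis=1)
    d = contour - smoothed                          # deviation vectors
    dev = np.linalg.norm(d, axis=1)                 # deviation magnitude
    t = np.gradient(smoothed, axis=0)               # tangent of reference
    side = t[:, 0] * d[:, 1] - t[:, 1] * d[:, 0]    # signed side (2-D cross)
    crossings = np.flatnonzero(np.diff(np.sign(side)) != 0)
    if crossings.size == 0:                         # no crossings: one point
        return contour[[int(np.argmax(dev))]]
    points = []
    for a, b in zip(crossings, np.r_[crossings[1:], crossings[0] + n]):
        seg = np.arange(a, b + 1) % n               # segment between crossings
        points.append(seg[np.argmax(dev[seg])])     # max-deviation vertex
    return contour[points]
```

On a test contour with a periodic perturbation, the returned points cluster near the perturbation extrema, which is exactly the fine structure the encoding aims to preserve.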
72

Performance of MIMO and non-orthogonal transmission in lossy forward relay networks

He, J. (Jiguang) 23 October 2018 (has links)
Abstract In the current LTE-Advanced system, decode-and-forward (DF) is used for cooperative relaying: erroneously decoded sequences are discarded at the relay, which wastes resources, since even an erroneously decoded sequence provides a certain amount of useful information about the source to the destination. We therefore develop a new relaying scheme, called lossy DF (also known as lossy forward, LF), in which the relay always forwards the decoded sequence to the destination. Thanks to this always-forward principle, LF relaying has been verified to outperform DF relaying in terms of outage probability, ε-outage achievable rate, frame error rate (FER), and communication coverage. Three exemplifying network scenarios are studied in this thesis: the one-way multiple-input multiple-output (MIMO) relay network, the multiple access relay channel (MARC), and the general multi-source multi-relay network. We derive the outage probability of the one-way MIMO relay network under the assumption that an orthogonal space-time block code (OSTBC) is implemented at the transmitter side for each individual transmission. Interestingly, we find that the diversity order of the OSTBC-based one-way MIMO relay network can be interpreted and formulated via the well-known max-flow min-cut theorem, which is widely used to calculate network capacity. For the MARC, non-orthogonal transmission is introduced to further improve the network throughput over its orthogonal counterpart. The region for lossless recovery of both sources is formulated by the theorem of the multiple access channel (MAC) with a helper, which combines the Slepian-Wolf rate region and the MAC capacity region. Since the region for lossless recovery is obtained via a sufficient condition, the derived outage probability can be regarded as a theoretical upper bound.
We also evaluate performance using different accumulator (ACC) aided turbo codes at the transmitter side, exclusive-or (XOR) based multi-user complete decoding at the relay, and iterative joint decoding (JD) at the destination. For the general multi-source multi-relay network, we focus on the investigation of the end-to-end outage probability. The improvement of LF over DF in terms of outage probability is verified through theoretical analyses and numerical results.
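As a rough illustration of the outage metric used throughout the thesis (and not of the LF analysis itself), the following Monte Carlo sketch estimates the outage probability of classic single-relay DF over i.i.d. Rayleigh fading, showing the diversity benefit a relay brings over direct transmission. All parameters and the channel model are textbook assumptions, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_df(snr_db, rate=1.0, trials=200_000):
    """Monte Carlo outage probability of a single-relay DF link over
    i.i.d. Rayleigh fading. The destination combines the S-D and R-D
    signals when the relay decodes; otherwise only S-D is available."""
    snr = 10 ** (snr_db / 10)
    g = rng.exponential(1.0, size=(trials, 3))   # |h|^2 for S-D, S-R, R-D
    c = np.log2(1 + snr * g)                     # per-link capacities
    c_sd, c_sr, c_rd = c.T
    relay_ok = c_sr >= rate                      # DF: relay decodes or stays silent
    c_end = np.where(relay_ok,
                     np.log2(1 + snr * (g[:, 0] + g[:, 2])),  # MRC of S-D + R-D
                     c_sd)
    return float(np.mean(c_end < rate))
```

LF would go further by also exploiting the relay's erroneously decoded sequences as side information at the destination, lowering the outage probability below the DF curve; that effect is deliberately not modeled in this sketch.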
73

Distributed Coding For Wireless Sensor Networks

Varshneya, Virendra K 11 1900 (has links) (PDF)
No description available.
74

Conception d'antennes pour le réseau BAN et modélisation du canal de propagation / BAN antenna design and propagation channel modelling

Alves, Thierry 01 April 2011 (has links)
The studies presented in this thesis constitute innovative work on antenna design for body area networks (BANs) and the modelling of the associated channels. The thesis is organized into four chapters. Two chapters are devoted to modelling propagation along the body, where it is shown that the analytical formulations for surface waves and creeping waves are applicable in this context. The effect of adipose tissue is also taken into account through a three-layer model (skin, fat, muscle), which accounts for the variability of the link budget from person to person. This type of model is the first to include body shape, the electrical characteristics of biological tissues, and the radiation characteristics of the antennas. A method based on channel autocorrelation is also presented to determine the coherence times of slow and fast fading. It is then shown how slow fading is extracted by FFT filtering as a function of the associated coherence time. The channel study concludes with a series of measurements in an anechoic chamber, which verified the validity of the analytical models. Indoor measurements led to the proposal of several statistical models based on a Nakagami-m distribution whose m parameter depends on the distance along the body. The other two chapters are devoted to the design of antennas operating close to biological tissues, intended for integration into biosensors or clothing. We focused in particular on inverted-F structures such as printed IFAs and PIFAs, and also realized short monopoles with magnetic-type behaviour. Simulations and measurements on a phantom show that only monopole- and PIFA-type antennas excite surface waves effectively.
We then show the influence of an antenna's quality factor on its efficiency, and conclude that an antenna must have a low quality factor to achieve good efficiency. Desensitizing an antenna to the body is also addressed: ferrite sheets help concentrate the reactive near field and thus limit the inevitable mismatches caused by the body. The quality factor also plays an important role in the antenna's behaviour with respect to the variability of biological tissues. Estimating efficiency is another difficult task when antennas are worn on the body; nevertheless, we propose a new method, which we verify by simulation. Finally, a diversity structure is proposed, drawing on the knowledge acquired throughout this research. A selection of the best antenna types from the channel and efficiency points of view is made; the chosen structure consists of a PIFA and a short monopole decoupled by λ/4 slots. In-situ indoor measurements give a maximum diversity gain of 8.1 dB for a selection-combining scheme.
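The Nakagami-m channel model mentioned in the abstract can be made concrete with a small sampling sketch. The m(d) law and every constant below are invented placeholders for illustration, not the fits reported in the thesis: the only grounded idea is that the fading amplitude follows a Nakagami-m law whose m parameter varies with the on-body distance.

```python
import numpy as np

rng = np.random.default_rng(1)

def onbody_fading(d_cm, n=100_000):
    """Sample n Nakagami-m fading amplitudes for an on-body link of
    length d_cm. m decreases with distance (hypothetical m(d) law),
    modelling the shift from line-of-sight-like to creeping-wave
    propagation; smaller m means deeper fades."""
    m = max(0.6, 3.0 - 0.05 * d_cm)     # invented m(d) placeholder
    omega = 1.0                         # normalized mean power
    # Nakagami-m amplitude = sqrt of a Gamma(shape=m, scale=omega/m) power
    return np.sqrt(rng.gamma(m, omega / m, size=n))
```

With this parameterization, longer on-body paths produce a smaller m and hence a larger spread of fading amplitudes, which is the qualitative behaviour such distance-dependent models capture.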
75

The steady-state analysis of the non-isolated and isolated type SEPIC PWM DC-DC converters for CCM

Dasari, Anuroop Reddy 15 December 2020 (has links)
No description available.
76

Získávání frekventovaných vzorů z proudu dat / Frequent Pattern Discovery in a Data Stream

Dvořák, Michal January 2012 (has links)
Frequent-pattern mining from databases has been widely studied. Unfortunately, the standard algorithms are not suitable for data stream processing. When mining frequent patterns from data streams, it is important to manage not only the itemsets themselves but also their history: not just the history of frequent itemsets, but also of potentially frequent itemsets that may become frequent later. This requires more memory and computational power. This thesis describes two algorithms, Lossy Counting and FP-stream. An effective implementation of these algorithms in C# is an integral part of this thesis, and the two algorithms have been compared.
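The thesis implements the algorithms in C#; as a language-agnostic illustration of the first of them, here is a minimal sketch of Lossy Counting for single items, following the standard Manku-Motwani formulation (parameter names are ours, and frequency counts are overestimated by at most ε·N):

```python
from math import ceil

class LossyCounter:
    """Lossy Counting sketch: approximate item frequencies over a stream
    using a bounded number of counters, with error at most epsilon * N."""

    def __init__(self, epsilon):
        self.epsilon = epsilon
        self.width = ceil(1 / epsilon)   # bucket width
        self.n = 0                       # stream length so far
        self.counts = {}                 # item -> (count, max_error)

    def add(self, item):
        self.n += 1
        bucket = ceil(self.n / self.width)
        if item in self.counts:
            count, err = self.counts[item]
            self.counts[item] = (count + 1, err)
        else:
            # a new item may have been seen and pruned in earlier buckets
            self.counts[item] = (1, bucket - 1)
        if self.n % self.width == 0:     # bucket boundary: prune rare items
            self.counts = {k: (c, e) for k, (c, e) in self.counts.items()
                           if c + e > bucket}

    def frequent(self, support):
        """Items whose true frequency may reach support * n."""
        return [k for k, (c, e) in self.counts.items()
                if c >= (support - self.epsilon) * self.n]
```

The same bookkeeping generalizes from single items to itemsets, which is the step FP-stream then builds on with a time-tilted window over pattern histories.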
77

Investigation of Negative Refractive Index in Isotropic Chiral Metamaterials Under First and Second-Order Material Dispersion With and Without Conductive Loss

Algadey, Tarig 17 May 2016 (has links)
No description available.
78

Energy Efficient RF for UDNs

Abdulkhaleq, Ahmed M., Sajedin, M., Al-Yasir, Yasir I.A., Mejillones, S.C., Ojaroudi Parchin, Naser, Rayit, A., Elfergani, Issa T., Rodriguez, J., Abd-Alhameed, Raed, Oldoni, M., D’Amico, M. 12 November 2021 (has links)
The multi-standard RF front-end is a critical part of legacy and emerging mobile architectures, where the size, efficiency, and integration of the RF front-end elements affect the network key performance indicators (KPIs). This chapter discusses power amplifier design for both handset and base station applications for 5G and beyond. The chapter also covers filter-antenna design for 5G applications, including a synthesis-based approach, a differentially driven reconfigurable planar filter-antenna, and an insensitive phased-array antenna with air-filled slot-loop resonators.
79

Error-robust coding and transformation of compressed layered hybrid video streams for packet-switched wireless networks

Halbach, Till January 2004 (has links)
This dissertation considers packet-switched wireless networks for transmission of variable-rate layered hybrid video streams. Target applications are video streaming and broadcasting services. The work can be divided into two main parts.

In the first part, a novel quality-scalable scheme based on coefficient refinement and encoder quality constraints is developed as a possible extension to the video coding standard H.264. After a technical introduction to the coding tools of H.264, with the main focus on error resilience features, various quality scalability schemes in previous research are reviewed. Based on this discussion, an encoder-decoder framework is designed for an arbitrary number of quality layers, hereby also enabling region-of-interest coding. The performance of the new system is then exhaustively tested, showing that the bit rate increase typically encountered with scalable hybrid coding schemes is, for certain coding parameters, only small to moderate. The double- and triple-layer constellations of the framework are shown to perform superior to other systems.

The second part considers layered code streams as generated by the scheme of the first part. Various error propagation issues in hybrid streams are discussed, which leads to the definition of a decoder quality constraint and a segmentation of the code stream to transmit. A packetization scheme based on successive source rate consumption is drafted, followed by the formulation of the channel code rate optimization problem for an optimum assignment of available codes to the channel packets. Proper MSE-based error metrics are derived, incorporating the properties of the source signal, a terminate-on-error decoding strategy, error concealment, inter-packet dependencies, and the channel conditions.

The Viterbi algorithm is presented as a low-complexity solution to the optimization problem, showing a great adaptivity of the joint source-channel coding scheme to the channel conditions. An almost constant image quality is achieved, also in mismatch situations, while the overall channel code rate decreases only as little as necessary as the channel quality deteriorates. It is further shown that the variance of code distributions is only small, and that the codes are assigned irregularly to all channel packets. A double-layer constellation of the framework clearly outperforms other schemes with a substantial margin.

Keywords — digital lossy video compression, visual communication, variable bit rate (VBR), SNR scalability, layered image processing, quality layer, hybrid code stream, predictive coding, progressive bit stream, joint source-channel coding, fidelity constraint, channel error robustness, resilience, concealment, packet-switched, mobile and wireless ATM, noisy transmission, packet loss, binary symmetric channel, streaming, broadcasting, satellite and radio links, H.264, MPEG-4 AVC, Viterbi, trellis, unequal error protection
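The channel code rate assignment described above can be illustrated by a small dynamic program over packets, which is what a trellis/Viterbi view amounts to for a progressive, terminate-on-error stream: packet i contributes quality only if every earlier packet also decoded. The packet values, the code set, and the cost model below are invented for illustration and are not those of the dissertation.

```python
from functools import lru_cache

# Hypothetical numbers: each packet contributes value[i] to quality,
# but only if all earlier packets decoded too (terminate-on-error).
values = [40.0, 25.0, 15.0, 8.0, 4.0]          # quality gain per packet
codes = [(1, 0.80), (2, 0.95), (3, 0.99)]      # (rate cost, P[decode])

def assign_codes(budget):
    """DP over packets: pick one channel code per packet (or stop) to
    maximize expected cumulative quality under a total rate budget."""
    @lru_cache(maxsize=None)
    def best(i, b):
        if i == len(values):
            return 0.0, ()
        result = (0.0, ())                     # option: stop the stream here
        for cost, p in codes:
            if cost <= b:
                tail, picks = best(i + 1, b - cost)
                cand = p * (values[i] + tail)  # survive packet i, then tail
                if cand > result[0]:
                    result = (cand, (cost,) + picks)
        return result
    return best(0, budget)
```

With these numbers the optimum spends more redundancy on early packets than late ones, i.e. the unequal error protection pattern the dissertation's optimization produces.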
80

Games and Probabilistic Infinite-State Systems

Sandberg, Sven January 2007 (has links)
Computer programs keep finding their way into new safety-critical applications, while at the same time growing more complex. This calls for new and better methods to verify the correctness of software. We focus on one approach to verifying systems, namely that of model checking. At first, we investigate two categories of problems related to model checking: games and stochastic infinite-state systems. In the end, we join these two lines of research by studying stochastic infinite-state games.

Game theory has been used in verification for a long time. We focus on finite-state 2-player parity and limit-average (mean payoff) games. These problems have applications in model checking for the μ-calculus, one of the most expressive logics for programs. We give a simplified proof of memoryless determinacy. The proof applies both to parity and limit-average games. Moreover, we suggest a strategy improvement algorithm for limit-average games. The algorithm is discrete and strongly subexponential.

We also consider probabilistic infinite-state systems (Markov chains) induced by three types of models. Lossy channel systems (LCS) have been used to model processes that communicate over an unreliable medium. Petri nets model systems with unboundedly many parallel processes. Noisy Turing machines can model computers where the memory may be corrupted in a stochastic manner. We introduce the notion of eagerness and prove that all these systems are eager. We give a scheme to approximate the value of a reward function defined on paths; eagerness allows us to prove that the scheme terminates. For probabilistic LCS, we also give an algorithm that approximates the limit-average reward, a quantity that describes the long-run behavior of the system.

Finally, we investigate Büchi games on probabilistic LCS. Such games can be used to model a malicious cracker trying to break a network protocol. We give an algorithm to solve these games.
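The reachability building block behind solving such games — the attractor construction for finite 2-player games — can be sketched as follows. This is the textbook construction, not the dissertation's LCS-specific algorithm; a Büchi game is then solved by repeated attractor computations.

```python
from collections import defaultdict

def attractor(edges, owner, target, player):
    """Set of vertices from which `player` can force the play into `target`
    in a finite 2-player game. edges: vertex -> list of successors;
    owner[v] in {0, 1} says who moves at v."""
    preds = defaultdict(set)
    out_deg = {}
    for v, succs in edges.items():
        out_deg[v] = len(succs)
        for w in succs:
            preds[w].add(v)
    attr = set(target)
    count = dict(out_deg)            # opponent's remaining escape edges
    frontier = list(target)
    while frontier:
        w = frontier.pop()
        for v in preds[w]:
            if v in attr:
                continue
            if owner[v] == player:
                attr.add(v)          # player has an edge into the attractor
                frontier.append(v)
            else:
                count[v] -= 1
                if count[v] == 0:    # opponent can no longer avoid it
                    attr.add(v)
                    frontier.append(v)
    return attr
```

The backward-propagating counters make this run in time linear in the number of edges, which is why attractors are the workhorse of reachability and Büchi game solvers.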
