301

Implementation of Low-Density Parity-Check codes for 5G NR shared channels

Wang, Lifang January 2021 (has links)
Channel coding plays a vital role in telecommunication. Low-Density Parity-Check (LDPC) codes are linear error-correcting codes. According to the 3rd Generation Partnership Project (3GPP) TS 38.212, LDPC coding is recommended for the Fifth-Generation (5G) New Radio (NR) shared channels because of its high throughput, low latency, low decoding complexity, and rate compatibility. The LDPC encoding chain is defined in 3GPP TS 38.212, but some of its details, such as the handling of filler bits during encoding and decoding, still need to be worked out in the MATLAB environment. Moreover, 3GPP TS 38.212 gives no information on the LDPC decoding process, the reverse of encoding, for 5G NR shared channels. In this thesis project, the LDPC encoding and decoding chains were carefully developed in MATLAB based on 3GPP TS 38.212. Several LDPC decoding algorithms were implemented and optimized, and their performance was evaluated in terms of block error rate (BLER) versus signal-to-noise ratio (SNR) and CPU time. Results show that the double-diagonal structure-based encoding method is an efficient LDPC encoding algorithm for 5G NR. The Layered Sum-Product Algorithm (LSPA) and Layered Min-Sum Algorithm (LMSA) are more efficient than the Sum-Product Algorithm (SPA) and Min-Sum Algorithm (MSA). The Layered Normalized Min-Sum Algorithm (LNMSA) with a proper normalization factor and the Layered Offset Min-Sum Algorithm (LOMSA) with a well-chosen offset factor can further improve on the LMSA. The performance of LNMSA and LOMSA decoding depends more on the code rate than on the transport block size.
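The normalization mentioned for the LNMSA applies at the check-node update of Min-Sum decoding. The following is a minimal sketch of that update step only, not the thesis's MATLAB implementation; the input LLR values and the normalization factor `alpha` are illustrative.

```python
import numpy as np

def check_node_update(llrs, alpha=1.0):
    """Normalized Min-Sum check-node update.

    For each edge j, the outgoing magnitude is alpha times the minimum
    input magnitude over all *other* edges, and the outgoing sign is the
    product of the signs of all other inputs.  alpha = 1.0 gives plain
    Min-Sum; alpha < 1 (e.g. 0.75) gives the normalized variant.
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.where(llrs >= 0, 1.0, -1.0)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
    total_sign = np.prod(signs)
    out = np.empty_like(llrs)
    for j in range(llrs.size):
        # excluding edge j: its own magnitude only matters if it was the minimum
        other_min = min2 if j == order[0] else min1
        out[j] = alpha * (total_sign * signs[j]) * other_min
    return out

print(check_node_update([2.0, -1.0, 4.0]))   # [-1.  2. -1.]
```

A layered (LMSA/LNMSA) decoder applies this update one block-row of the parity-check matrix at a time, refreshing the variable-node LLRs between layers, which is why it converges in fewer iterations than the flooding schedule.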
302

A remote sensing monitoring of Greenland Russell glacier dynamics and analysis of melting mechanism

Tsai, Ya Lun Unknown Date (has links)
Global warming has become a worldwide issue and is significantly increasing the icecap melting rate over the polar regions. Consequently, the sea level rises continuously and poses a fundamental threat to humanity. Since the mass loss of the Greenland Ice Sheet (GrIS) is highly correlated with glacier flow velocity, this study monitors the impact of global warming by tracking glacier displacement over the GrIS using remote sensing techniques. Because spaceborne images of various characteristics and techniques with different strengths and limitations are available, we propose a monitoring strategy that combines Synthetic Aperture Radar (SAR) and optical images with Differential Interferometric SAR (D-InSAR), Multi-Aperture Interferometric SAR (MAI), and Pixel-Offset (PO) tracking to estimate glacier movement vectors in different directions. The vectors are then merged by a 3D decomposition method to derive the three-dimensional surface deformation. Based on the resulting 3D deformation, a Numerical Ice Sheet Model (ISM) is run and integrated with a modeled subglacial drainage channel network, digitized surface crevasses and supraglacial lakes, meteorological observations, and glaciological theory, so that the dynamics and melting mechanism of the Russell glacier can be further understood.
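The 3D decomposition step combines several one-dimensional displacement measurements, each a projection of the true motion onto a known viewing direction, into one 3D vector. A minimal least-squares sketch under made-up viewing geometry (the projection rows and displacement values below are illustrative, not the actual satellite configuration):

```python
import numpy as np

# Each row is the unit vector along which one technique "sees" surface
# motion: e.g. D-InSAR line-of-sight from ascending and descending passes,
# plus MAI along-track and PO range directions (illustrative geometry).
P = np.array([
    [0.38,  0.08, 0.92],   # ascending LOS
    [-0.38, 0.08, 0.92],   # descending LOS
    [0.10,  0.99, 0.00],   # along-track (MAI)
    [0.99,  0.10, 0.00],   # range offset (PO)
])
d_true = np.array([12.0, -3.0, -7.0])   # east, north, up (cm), made up
obs = P @ d_true                         # what each technique would measure

# 3D decomposition: least-squares inversion of the stacked projections
d_est, *_ = np.linalg.lstsq(P, obs, rcond=None)
print(np.round(d_est, 6))
```

With at least three independent viewing directions the system is solvable; in practice the observations are noisy, so redundant measurements (four rows here) stabilize the inversion.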
303

Electrocoagulation and Adsorption Treatments of Effluents in Offset Printing Graphic Processes

Adamović, Savka 09 September 2016 (has links)
The topic of this doctoral dissertation is the removal of inorganic and organic pollutants from offset printing effluents (waste developer and waste fountain solution) in order to minimize their damaging influence on the environment. The pollutants were removed by an electrocoagulation/flotation (ECF) treatment, an adsorption (AD) treatment, and a combination of the two. The feasibility and efficacy of the treatments were analyzed by investigating the effect of characteristic operational variables within each process on the reduction in pollutant quantity. The mechanisms of the ECF and AD treatments were defined on the basis of theoretical mathematical-kinetic models. To solve the problem of disposing of the sludge produced by the ECF treatment, a solidification/stabilization treatment with suitable immobilization agents was applied. An efficient combined treatment model for offset printing effluents has been developed that enables the conversion of the effluents into products compatible with environmental principles and norms.
304

A study of straddle and strangle strategies: evidence from TAIEX options

Wang, Chi Kai Unknown Date (has links)
Straddles and strangles are common trading strategies introduced in many textbooks and widely used by option market participants. To our knowledge, however, little has been documented about how these trades should be designed, which trades are preferable, and how they are constructed in practice. We therefore apply and discuss straddles and strangles as trading strategies in the actual market. Our strategies rest on two ideas: capturing time value and identifying profitable trade designs. Acting as the sell side to earn the time value is our main goal; although this entails higher risk, time-value decay works in the seller's favor. The research examines straddles and strangles using historical data on TAIEX futures and options, taking the closing price and settlement price as trading prices over the period January 2005 to December 2010. We also compare two exit rules, holding positions to maturity and offsetting early. The findings show that the straddle strategies earn positive returns when positions are held to maturity, as do three of the four strangle strategies. A seller can indeed capture the time value, because time value decays quickly over the last seven days of an option contract. Under the early-offset rule, the profitability of the at-the-money straddle and the strangles deteriorates: the index futures price can fluctuate sharply for a few days and return to a normal level by the settlement date, so the seller incurs losses from selling low and buying high, and the trading performance is poor compared with positions held to the end. Key words: Straddle Strategy, Strangle Strategy, Time Value, Settlement, Early Offset, TAIEX Options, TAIEX Futures
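The settlement-date payoff logic behind selling these positions can be sketched as follows; the strikes, settlement level, and premiums below are hypothetical, not figures from the study.

```python
def short_straddle_pnl(s_T, strike, premium):
    """P&L at settlement of selling one call and one put at the same strike."""
    intrinsic = abs(s_T - strike)
    return premium - intrinsic

def short_strangle_pnl(s_T, put_strike, call_strike, premium):
    """P&L at settlement of selling an OTM put and an OTM call."""
    intrinsic = max(put_strike - s_T, 0.0) + max(s_T - call_strike, 0.0)
    return premium - intrinsic

# If the index settles near the strike(s), the seller keeps most of the
# premium, which at the time of sale is largely time value.
print(short_straddle_pnl(s_T=7010, strike=7000, premium=250))            # 240
print(short_strangle_pnl(s_T=7010, put_strike=6800,
                         call_strike=7200, premium=120))                 # 120.0
print(short_straddle_pnl(s_T=7300, strike=7000, premium=250))            # -50
```

The last line illustrates the seller's risk: a large move at settlement can swamp the collected premium, which is why the early-offset rule was examined as a way to limit exposure.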
305

Modelling the Dynamics of Mass Capture

Lahey, Timothy John January 2013 (has links)
This thesis presents an approach to modelling dynamic mass capture and applies it to a number of system models. The models range from a simple 2D Euler-Bernoulli beam with point masses for the end-effector and target to a 3D Timoshenko beam model (including torsion) with rigid bodies for the end-effector and target. In addition, new models for torsion, as well as software to derive the finite element equations from first principles, were developed to support the modelling. Results of the models are compared to a simple experiment performed by Ben Rhody. Offset capture is investigated by simulation to show why one would consider a 3D model that includes torsion. These problems are relevant both to terrestrial robots and to space-based robotic systems such as the manipulators on the International Space Station capturing payloads like the SpaceX Dragon capsule. Production in an industrial environment could be increased if robots could pick up items without first establishing zero relative velocity between the end-effector and the item. Acquiring a payload in this way would introduce system dynamics that could make it necessary to model a previously 'rigid' robot as flexible.
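Before any flexible-beam effects, the zeroth-order consequence of capturing a mass at nonzero relative velocity is a sudden velocity jump fixed by momentum conservation. This is only a rigid-body sanity check, not the thesis's finite-element model; the masses and velocities are made up.

```python
def post_capture_velocity(m_effector, v_effector, m_target, v_target=0.0):
    """Common velocity after a perfectly plastic (sticking) capture,
    from conservation of linear momentum:
        (m_e * v_e + m_t * v_t) = (m_e + m_t) * v_common
    """
    return (m_effector * v_effector + m_target * v_target) / (m_effector + m_target)

# End-effector of 2 kg moving at 3 m/s captures a stationary 1 kg target:
print(post_capture_velocity(2.0, 3.0, 1.0))   # 2.0
```

The instantaneous 1 m/s velocity change of the effector in this example is exactly the kind of impulsive loading that excites beam vibration modes, motivating the flexible 2D and 3D beam models studied in the thesis.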
306

Offset Surface Light Fields

Ang, Jason January 2003 (has links)
For producing realistic images, reflection is an important visual effect. Reflections of the environment matter not only for highly reflective objects, such as mirrors, but also for more common materials such as brushed metals and glossy plastics. Generating these reflections accurately at real-time rates for interactive applications, however, is a difficult problem. Previous work in this area has made assumptions that sacrifice accuracy in order to preserve interactivity. I present an algorithm that handles reflection accurately in the general case for real-time rendering. The algorithm uses a database of prerendered environment maps to render both the original object itself and an additional bidirectional reflection distribution function (BRDF). It performs image-based rendering in reflection space to achieve accurate results, and uses graphics processing unit (GPU) features to accelerate rendering.
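Rendering in reflection space starts from the mirror direction about the surface normal, the standard lookup direction for an environment map. A minimal sketch of that step (not the thesis's algorithm, just the underlying geometry):

```python
import numpy as np

def reflect(incident, normal):
    """Mirror the incident direction about the surface normal:
        r = i - 2 (i . n) n,  with n normalized.
    The reflected vector indexes the environment map for a perfect mirror;
    glossy BRDFs blur this lookup over nearby directions.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    i = np.asarray(incident, dtype=float)
    return i - 2.0 * np.dot(i, n) * n

# A ray travelling down-right bounces off a horizontal surface:
print(reflect([1.0, -1.0, 0.0], [0.0, 1.0, 0.0]))   # [1. 1. 0.]
```

Reflection preserves the vector's length, so the result can be used directly as a direction into a cube or sphere environment map.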
308

Area and energy efficient VLSI architectures for low-density parity-check decoders using an on-the-fly computation

Gunnam, Kiran Kumar 15 May 2009 (has links)
The VLSI implementation complexity of a low-density parity-check (LDPC) decoder is largely determined by the interconnect and the storage requirements. This dissertation presents decoder architectures for regular and irregular LDPC codes that provide substantial gains over existing academic and commercial implementations. Several structured properties of LDPC codes and decoding algorithms are identified and used to construct hardware implementations with reduced processing complexity. The proposed architectures use an on-the-fly computation paradigm that schedules the computations so that memory requirements and recomputation are reduced. Using this paradigm, run-time configurable and multi-rate VLSI architectures are designed for rate-compatible array LDPC codes and irregular block LDPC codes. Rate-compatible array codes are considered for DSL applications; irregular block LDPC codes are proposed for IEEE 802.16e, IEEE 802.11n, and IEEE 802.20. Compared with a recent implementation of an 802.11n LDPC decoder, the proposed decoder reduces the logic complexity by 6.45x and the memory complexity by 2x for a given data throughput. Compared to the latest reported multi-rate decoders, this design has an area efficiency of around 5.5x and an energy efficiency of 2.6x for a given data throughput. The numbers are normalized for a 180nm CMOS process. Properly designed array codes have low error floors and meet the requirements of magnetic-channel and other applications that need several Gbps of data throughput. A high-throughput, fixed-code architecture for array LDPC codes has also been designed. No modification to the code is performed, as that can raise the error floor. This parallel decoder architecture has no routing congestion and scales to longer block lengths.
Compared to the latest fixed-code parallel decoders in the literature, this design has an area efficiency of around 36x and an energy efficiency of 3x for a given data throughput, again normalized for a 180nm CMOS process. In summary, the design and analysis details of the proposed architectures are described in this dissertation, along with results from extensive simulation and VHDL verification on FPGA and ASIC design platforms.
309

Design and Implementation of Physical Layer Network Coding Protocols

Maduike, Dumezie K. August 2009 (has links)
There has recently been growing interest in using physical layer network coding techniques to facilitate information transfer in wireless relay networks. The physical layer network coding technique exploits the additive nature of wireless signals by allowing two terminals to transmit simultaneously to the relay node. This technique has several performance benefits, such as improving the utilization and throughput of wireless channels and reducing delay. In this thesis, we present an algorithm for jointly decoding two unsynchronized transmitters to the modulo-2 sum of their transmitted messages. We address the problems that arise when the boundaries of the signals do not align and when their phases differ. Our approach uses a state-based Viterbi decoding scheme that accounts for the timing offsets between the interfering signals. As future work, we plan to use software-defined radios (SDRs) as a testbed to show the practicality of our approach and to verify its performance. Our simulation studies show that the decoder performs well, with the only degrading factor being the noise level in the channel.
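The reason the relay only needs the modulo-2 sum, rather than both messages separately, is that each terminal already knows its own message and can XOR it out of the relay's broadcast. A minimal bit-level sketch of that exchange (the message bits are random illustrative data; the hard parts addressed by the thesis, symbol misalignment and phase offset, are abstracted away here):

```python
import numpy as np

rng = np.random.default_rng(0)
msg_a = rng.integers(0, 2, size=8)   # terminal A's bits
msg_b = rng.integers(0, 2, size=8)   # terminal B's bits

# The relay decodes the superimposed transmissions directly to the
# modulo-2 sum of the two messages and broadcasts it.
relay_msg = msg_a ^ msg_b

# Each terminal removes its own message to recover the other's.
recovered_b = relay_msg ^ msg_a
recovered_a = relay_msg ^ msg_b
assert np.array_equal(recovered_b, msg_b)
assert np.array_equal(recovered_a, msg_a)
print("both terminals recovered the other's message")
```

The exchange completes in two slots (both terminals to relay, then relay broadcast) instead of the four required by conventional store-and-forward relaying, which is the throughput benefit cited above.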
310

Diversity-Multiplexing Tradeoff of Asynchronous Cooperative Relay Networks and Diversity Embedded Coding Schemes

Naveen, N 07 1900 (has links)
This thesis consists of two parts addressing two different problems in fading channels. The first part deals with asynchronous cooperative relay communication. The assumption that nodes in a cooperative relay network operate synchronously is often unrealistic. In this work we consider two different models of asynchronous operation in cooperative-diversity networks experiencing slow fading and examine the corresponding Diversity-Multiplexing Tradeoffs (DMT). For both models, we propose protocols and distributed space-time codes that asymptotically achieve the transmit diversity bound on the DMT for all multiplexing gains and for numbers of relays N ≥ 2. The distributed space-time codes for all the protocols considered are based on Cyclic Division Algebras (CDA). The second part of the work addresses the DMT analysis of diversity embedded codes for MIMO channels. Diversity embedded codes are high-rate codes designed to contain an embedded high-diversity code. This allows a form of opportunistic communication depending on the channel conditions: the high-diversity code ensures that at least part of the information is received reliably, while the embedded high-rate code carries additional information when the channel is good. This can be thought of as coding the data into two streams, high priority and low priority, so that the high-priority stream obtains better reliability than the low-priority stream. We show that superposition-based diversity embedded codes in conjunction with naive single-stream decoding are suboptimal in terms of the DMT. We then construct explicit diversity embedded codes by superposing approximately universal space-time codes from CDAs. The relationship between broadcast channels and the diversity embedded setting is then used to derive an achievable Diversity Gain Region (DGR) for MIMO broadcast channels.
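The diversity and multiplexing gains traded off in the DMT are the standard high-SNR limits of Zheng and Tse; the abstract assumes them, so they are restated here for reference:

```latex
% Multiplexing gain r: rate growth relative to capacity slope.
% Diversity gain d: error-probability decay exponent.
r = \lim_{\mathrm{SNR}\to\infty} \frac{R(\mathrm{SNR})}{\log \mathrm{SNR}},
\qquad
d = -\lim_{\mathrm{SNR}\to\infty} \frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}} .
```

A scheme achieves the tradeoff curve d(r) if, transmitting at rate r log SNR, its error probability decays as SNR^{-d(r)}; the transmit diversity bound mentioned above is the best such curve attainable by any cooperative protocol with the given number of relays.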
