181

Renewable Energy Consumption and Foreign Direct Investment : Bangladesh's Case

Tasnim, Sumaya January 2020 (has links)
Foreign direct investment (FDI) is a vital factor in the economic growth of developing countries. Apart from acting as a catalyst for raising total output, FDI is a source of clean energy, technology transfer and energy efficiency. Studies on the impact of FDI on renewable energy consumption in the context of Bangladesh are very limited; to the best of my knowledge, no study has examined this relationship for Bangladesh using recent data. The aim of this paper is therefore to examine the relationship between FDI and renewable energy consumption in Bangladesh using annual data spanning 1980 to 2016. Johansen's cointegration test shows that the variables are cointegrated in the long run. A Vector Error Correction Model (VECM) shows that there is both short-run and long-run causality between FDI and renewable energy consumption, and that the causality is negative. A Granger causality test reveals that the direction of causality runs from FDI to renewable energy consumption. Policies to attract more sectoral FDI should be considered to improve the investment climate in the renewable energy sector.
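A minimal sketch of the empirical pipeline named in the abstract (Johansen cointegration test, VECM, Granger causality) using Python's statsmodels; the synthetic series below are placeholders for the actual FDI and renewable-energy data, and the lag orders are illustrative choices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM
from statsmodels.tsa.stattools import grangercausalitytests

# Placeholder data: two annual series (renewable energy consumption, FDI
# inflows) for 1980-2016.  Replace with the actual series.
rng = np.random.default_rng(0)
n = 37
fdi = np.cumsum(rng.normal(size=n))              # synthetic random walk
renew = 0.5 * fdi + np.cumsum(rng.normal(size=n))
data = pd.DataFrame({"renewable": renew, "fdi": fdi})

# 1. Johansen cointegration test: compare trace statistics with critical values.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace stats:", jres.lr1, "\n95% critical values:", jres.cvt[:, 1])

# 2. VECM: the error-correction term captures long-run adjustment,
#    the lagged-difference terms capture short-run dynamics.
vecm_res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print("adjustment coefficients (alpha):\n", vecm_res.alpha)

# 3. Granger causality: does FDI help predict renewable energy consumption?
grangercausalitytests(data[["renewable", "fdi"]], maxlag=2)
```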
182

A piRNA regulation landscape in C. elegans and a computational model to predict gene functions

Chen, Hao 28 October 2020 (has links)
Investigating the mechanisms that regulate genes, and the functions of those genes, is essential to understanding a biological system. This dissertation consists of two research projects under these aims: understanding the regulatory mechanism of piRNAs and predicting gene function computationally. The first project maps a piRNA regulation landscape in C. elegans. piRNAs (Piwi-interacting small RNAs) form a complex with Piwi Argonautes to maintain fertility and silence transposons in animal germlines. In C. elegans, previous studies have suggested that piRNAs tolerate mismatched pairing and in principle could target all transcripts. In this project, by computationally analyzing chimeric reads captured directly by cross-linking piRNAs and their targets in vivo, piRNAs are found to target all germline mRNAs with microRNA-like pairing rules. The number of targeting chimeric reads correlates better with binding energy than with piRNA abundance, suggesting that piRNA concentration does not limit targeting. Furthermore, in mRNAs silenced by piRNAs, secondary small RNAs are found to accumulate at the center and ends of piRNA binding sites, whereas in germline-expressed mRNAs, reduced piRNA binding density and suppression of piRNA-associated secondary small RNAs correlate with the presence of the CSR-1 Argonaute. These findings reveal physiologically important and nuanced regulation of piRNA targets and provide evidence for a comprehensive post-transcriptional regulatory step in germline gene expression. The second project elaborates a computational model to predict gene function. Predicting the genes involved in a biological function facilitates many kinds of research, such as prioritizing candidates in a screening project. Following the "guilt by association" principle, multiple datasets are treated as biological networks and integrated under a multi-label learning framework for predicting gene functions. Specifically, the functional labels are propagated and smoothed over the networks with a label propagation method and then integrated using an error-correcting-code multi-label learning framework, where a "codeword" encodes all the labels annotated to a specific gene. The model is trained by finding the optimal projections between the code matrix and the biological datasets using canonical correlation analysis. Its performance is benchmarked against a state-of-the-art algorithm and against results from a large-scale screen for piRNA pathway genes in D. melanogaster. Finally, the roles of piRNA targeting in epigenetics and physiology and its cross-talk with the CSR-1 pathway are discussed, together with a survey of additional biological datasets and a discussion of benchmarking methods for gene function prediction.
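The label-propagation step described above can be sketched as follows; this is a generic network-smoothing illustration (Zhou-style label spreading) on a toy gene network, not the dissertation's actual datasets, label sets, or error-correcting-code integration.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.8, n_iter=50):
    """Smooth a label matrix Y over a weighted network W.

    W : (n_genes, n_genes) symmetric adjacency (e.g. co-expression weights)
    Y : (n_genes, n_labels) initial annotations (1 = gene annotated to label)
    Returns the propagated score matrix F.
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt           # symmetric normalization
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y   # spread labels, keep priors
    return F

# Toy example: 4 genes, 2 functional labels, one small network.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0],
              [0, 0],
              [0, 1],
              [0, 0]], dtype=float)
print(propagate_labels(W, Y))
```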
183

Bridging the Gap: Integration, Evaluation and Optimization of Network Coding-based Forward Error Correction

Schütz, Bertram 18 October 2021 (has links)
The formal definition of network coding by Ahlswede et al. in 2000 has led to several breakthroughs in information theory, for example solving the bottleneck problem in butterfly networks and breaking the min-cut max-flow theorem for multicast communication. Especially promising is the use of network coding as a packet-level Forward Error Correction (FEC) scheme to increase the robustness of a data stream against packet loss, also known as intra-session coding. Yet, despite these benefits, network coding-based FEC is still rarely deployed in real-world networks. To bridge this gap between information theory and real-world usage, this cumulative thesis presents our contributions to the integration, evaluation, and optimization of network coding-based FEC. The first set of contributions introduces and evaluates efficient ways to integrate coding into UDP-based IoT protocols to speed up bulk data transfers in lossy scenarios. This includes a packet-level FEC extension for the Constrained Application Protocol (CoAP) [P1] and one for MQTT for Sensor Networks (MQTT-SN), which leverages the underlying publish-subscribe architecture [P2]. The second set of contributions addresses the development of novel evaluation tools and methods to better quantify possible coding gains. This includes link 'em, our award-winning link emulation bridge for reproducible networking research [P3], and SPQER, a word recognition-based metric to evaluate the impact of packet loss on the Quality of Experience of Voice over IP applications [P5]. Finally, we highlight the impact of padding overhead for applications with heterogeneous packet lengths [P6] and introduce a novel packet-preserving coding scheme that significantly reduces this problem [P4]. Because many of these contributions can be applied to other areas of network coding research as well, this thesis not only makes meaningful contributions to specific network coding challenges, but also paves the way for future work to further close the gap between information theory and real-world usage.
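As a self-contained illustration of packet-level FEC by coding over packets (a generic sketch, not the CoAP/MQTT-SN extensions or the padding-aware scheme from the publications), the snippet below adds XOR repair packets over GF(2) and recovers lost source packets by Gaussian elimination.

```python
import numpy as np

def gf2_solve(coeffs, payloads, k):
    """Recover k source packets by Gaussian elimination over GF(2)."""
    A = np.concatenate([coeffs, payloads], axis=1).astype(np.uint8) % 2
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None                          # not enough innovative packets
        A[[row, pivot]] = A[[pivot, row]]        # move pivot row up
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]                   # eliminate this column elsewhere
        row += 1
    return A[:k, k:]

# Four 8-bit source packets.
rng = np.random.default_rng(1)
sources = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)

# Systematic code: the sources themselves plus two XOR repair packets.
repair_coeffs = np.array([[1, 1, 1, 1],
                          [0, 1, 0, 1]], dtype=np.uint8)
coeffs = np.vstack([np.eye(4, dtype=np.uint8), repair_coeffs])
payloads = (coeffs @ sources) % 2

# Packets 1 and 2 are lost in transit; the survivors still decode.
survivors = [0, 3, 4, 5]
recovered = gf2_solve(coeffs[survivors], payloads[survivors], k=4)
print(np.array_equal(recovered, sources))        # True
```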
184

On feedback-based rateless codes for data collection in vehicular networks

Hashemi, Morteza 28 October 2015 (has links)
The ability to transfer data reliably and with low delay over an unreliable service is intrinsic to a number of emerging technologies, including digital video broadcasting, over-the-air software updates, public/private cloud storage, and, recently, wireless vehicular networks. In particular, modern vehicles incorporate tens of sensors that provide vital sensor information to electronic control units (ECUs). In the current architecture, vehicle sensors are connected to ECUs via physical wires, which increase the cost, weight and maintenance effort of the car, especially as the number of electronic components keeps increasing. To mitigate the issues with physical wires, wireless sensor networks (WSNs) have been contemplated for replacing the current wires with wireless links, making modern cars cheaper, lighter, and more efficient. However, the ability to communicate reliably with the ECUs is complicated by the dynamic channel properties the car experiences as it travels through areas with different radio interference patterns, such as urban versus highway driving, or even different road quality, which may physically perturb the wireless sensors. This thesis develops a suite of reliable and efficient communication schemes built upon feedback-based rateless codes, with vehicular networks as the target application. In particular, we first investigate the feasibility of multi-hop networking for intra-car WSNs and illustrate the potential gains of using the Collection Tree Protocol (CTP), the current state of the art in multi-hop data aggregation. Our results demonstrate, for example, that the packet delivery rate of a node using a single-hop topology protocol can be below 80% in practical scenarios, whereas CTP improves reliability beyond 95% across all nodes while simultaneously reducing radio energy consumption. Next, in order to migrate from a wired intra-car network to a wireless system, we consider an intermediate step: deploying a hybrid communication structure in which wired and wireless networks coexist. Towards this goal, we design a hybrid link scheduling algorithm that guarantees reliability and robustness under harsh vehicular environments. We further enhance the hybrid link scheduler with rateless codes such that information leakage to an eavesdropper is almost zero for finite block lengths. In addition to reliability, one key requirement for coded communication schemes is to achieve a fast decoding rate. This feature is vital in a wide spectrum of communication systems, including multimedia and streaming applications (possibly inside vehicles) with real-time playback requirements, and delay-sensitive services where the receiver needs to recover some data symbols before the recovery of the entire frame. To address this issue, we develop feedback-based rateless codes with dynamically adjusted nonuniform symbol selection distributions. Our simulation results, backed by analysis, show that feedback information paired with a nonuniform distribution significantly improves the decoding rate compared with state-of-the-art algorithms. We further demonstrate that the amount of feedback sent can be tuned to the specific transmission properties of a given feedback channel.
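A rough sketch of the rateless-coding idea with a nonuniform symbol-selection distribution and a peeling decoder; this is a generic LT-style illustration in which the bias toward certain symbols stands in for receiver feedback, and the degree distribution, weights, and symbol sizes are placeholder values rather than the schemes developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

def encode_symbol(source, weights, degree_dist):
    """One coded symbol: XOR of `degree` source symbols chosen nonuniformly."""
    k = len(source)
    degree = rng.choice(np.arange(1, len(degree_dist) + 1), p=degree_dist)
    idx = rng.choice(k, size=min(degree, k), replace=False, p=weights)
    value = 0
    for i in idx:
        value ^= source[i]
    return set(int(i) for i in idx), value

def peel(coded, k):
    """Peeling decoder: repeatedly resolve degree-one coded symbols."""
    eqs = [[set(idx), val] for idx, val in coded]
    decoded = {}
    changed = True
    while changed and len(decoded) < k:
        changed = False
        for eq in eqs:
            idx, val = eq
            for i in list(idx):                   # substitute known symbols
                if i in decoded:
                    idx.discard(i)
                    val ^= decoded[i]
            eq[1] = val
            if len(idx) == 1:                     # a new symbol is resolved
                decoded[idx.pop()] = val
                changed = True
    return [decoded.get(i) for i in range(k)]

# Ten 8-bit source symbols; selection is biased toward the last three,
# standing in for feedback about symbols the receiver has not yet recovered.
k = 10
source = [int(x) for x in rng.integers(0, 256, size=k)]
weights = np.ones(k); weights[-3:] = 3.0; weights /= weights.sum()
degree_dist = np.array([0.2, 0.3, 0.3, 0.2])      # degrees 1..4

# Rateless operation: keep sending coded symbols until the receiver decodes
# everything (in practice the receiver signals completion via feedback).
coded, decoded = [], [None] * k
while decoded != source:
    coded.append(encode_symbol(source, weights, degree_dist))
    decoded = peel(coded, k)
print(f"decoded all {k} symbols after {len(coded)} coded symbols")
```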
185

Diagnostika chyb v počítačových sítích založená na překlepech / Diagnosing Errors inside Computer Networks Based on the Typo Errors

Bohuš, Michal January 2020 (has links)
The goal of this diploma thesis is to create a system for network data diagnostics based on detecting and correcting spelling errors. The system is intended to serve network administrators as an additional diagnostic tool. In contrast to the usual application of spelling detection and correction to ordinary text, these methods are applied here to network data supplied by the user. The created system works with NetFlow data, pcap files, or log files. Context is modelled by a set of purpose-built data categories, and the correctness of words is verified against dictionaries, one per category. Searching for a correction by edit distance alone yields too many candidates, so a heuristic for scoring candidates was proposed to select the right one. The created system was tested in terms of functionality and performance.
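A minimal sketch of the dictionary-plus-edit-distance approach described above; the category ("hostnames"), the dictionary contents, and the frequency-based tie-breaking below are illustrative assumptions, not the system's actual categories or scoring heuristic.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def correct(token: str, dictionary: dict[str, int], max_dist: int = 2) -> str:
    """Return the best dictionary entry within max_dist edits of token.

    dictionary maps known values (e.g. hostnames seen in NetFlow records)
    to how often they occur; ties on distance are broken by frequency.
    """
    candidates = [(edit_distance(token, w), -freq, w)
                  for w, freq in dictionary.items()
                  if edit_distance(token, w) <= max_dist]
    return min(candidates)[2] if candidates else token

# Toy "hostname" category built from traffic the administrator trusts.
hostnames = {"api.example.com": 120, "db.example.com": 45, "mail.example.com": 7}
print(correct("ap.example.com", hostnames))   # -> api.example.com
```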
186

Posouzení vlivu dělícího poměru na pasivní optickou síť / Impact assessment of split ratios on passive optical network

Gallo, Martin January 2016 (has links)
This thesis deals with the most recent passive optical network standard, NG-PON2. It describes the sublayer model, including the error correction coding that protects against errors arising during propagation in optical fibres; assesses the impact of split ratios using a simulation environment built from the defined model; compares various scenarios; and discusses possible sources of discrepancy between the simulation model and real deployments.
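As a back-of-the-envelope illustration of why the split ratio matters for a PON power budget (not the thesis's NG-PON2 simulation model), the snippet below compares the ideal 1:N splitter loss, 10·log10(N), against an assumed link budget; all power and loss figures are illustrative assumptions.

```python
import math

# Assumed link parameters (illustrative values, not from the thesis).
tx_power_dbm = 7.0          # launch power
rx_sensitivity_dbm = -28.0  # receiver sensitivity
fibre_loss_db = 0.35 * 20   # 0.35 dB/km over a 20 km reach
margin_db = 3.0             # system margin for connectors, ageing, etc.

budget_db = tx_power_dbm - rx_sensitivity_dbm - fibre_loss_db - margin_db

for n in (32, 64, 128, 256):
    split_loss_db = 10 * math.log10(n)   # ideal 1:N splitter loss
    headroom = budget_db - split_loss_db
    print(f"1:{n:<3}  splitter loss {split_loss_db:5.1f} dB, "
          f"remaining headroom {headroom:5.1f} dB")
```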
187

Quantum error correction

Almlöf, Jonas January 2012 (has links)
This thesis intends to familiarise the reader with quantum error correction, and also show some relations to the well-known concept of information - and the lesser-known quantum information. Quantum information describes how information can be carried by quantum states, and how interaction with other systems gives rise to a full set of quantum phenomena, many of which have no correspondence in classical information theory. These phenomena include decoherence, as a consequence of entanglement. Decoherence can also be understood as "information leakage", i.e., knowledge of an event is transferred to the reservoir - an effect that in general destroys superpositions of pure states. It is possible to protect quantum states (e.g., qubits) from interaction with the environment - but not by amplification or duplication, due to the "no-cloning" theorem. Instead, this is done using coding, non-demolition measurements, and recovery operations. In a typical scenario, however, not all types of destructive events are likely to occur, but only those allowed by the information carrier, the type of interaction with the environment, and how the environment "picks up" information of the error events. These characteristics can be incorporated into a code, i.e., a channel-adapted quantum error-correcting code. Often, it is assumed that the environment's ability to distinguish between error events is small, and I will denote such environments "memory-less". This assumption is not always valid, since the ability to distinguish error events is related to the temperature of the environment, and in the particular case of information coded onto photons, $k_{\text{B}}T_{\text{R}} \ll \hbar\omega$ typically holds, and one must then assume that the environment has a "memory". In this thesis, I describe a short quantum error-correcting code (QECC), adapted for photons interacting with a cold environment, i.e., this code protects from an environment that continuously records which error occurred in the coded quantum state. Also, it is of interest to compare the performance of different QECCs - but which yardstick should one use? We compare two such figures of merit, namely the quantum mutual information and the quantum fidelity, and show that they cannot, in general, be simultaneously maximised in an error-correcting procedure. To show this, we have used a five-qubit perfect code, but assumed a channel that only causes bit-flip errors. It appears that quantum mutual information is the better suited yardstick of the two, however more tedious to calculate than quantum fidelity - which is more commonly used. / This thesis is an introduction to quantum error correction, in which I examine its kinship with classical information theory - and also with the lesser-known field of quantum information. Quantum information describes how information can be carried by quantum states, and how interaction with other systems gives rise to numerous types of errors and effects, many of which have no counterpart in classical information theory. Among these effects is decoherence - a consequence of so-called entanglement. Decoherence can also be understood as "information leakage", that is, knowledge of an event is transferred to the surroundings - an effect that in general destroys superpositions of pure quantum states. With quantum error correction it is possible to protect quantum states (e.g. qubits) from the influence of the environment; such states can, however, never be amplified or duplicated, because of the no-cloning theorem. The states are protected by introducing redundancy, after which they interact with the environment. Errors are identified by means of non-demolition measurements and are reversed using unitary gates and ancilla states. In reality, however, not every conceivable error will occur; the possible errors are constrained by which information carrier is used, what interaction arises with the environment, and how the environment "picks up" information about the error events. With knowledge of these characteristics one can build codes, so-called channel-adapted quantum error-correcting codes. Usually it is assumed that the environment's ability to distinguish between error events is small, in which case one may speak of a memoryless environment. This assumption does not always hold, since this ability is determined by the temperature of the reservoir, and in the particular case where photons are used as information carriers, $k_{\text{B}}T_{\text{R}} \ll \hbar\omega$ typically holds, and we must assume that the reservoir does in fact have a "memory". The thesis describes a short quantum error-correcting code adapted for photons interacting with a "cold" environment, i.e., a code that protects against an environment that continuously records which error occurred in the coded state. It is also of great interest to compare the performance of quantum error-correcting codes using some kind of yardstick - but which one? I compare two such measures, namely quantum mutual information and quantum fidelity, and show that in general they cannot be maximised simultaneously in an error-correcting procedure. To show this, a five-qubit code was used in a hypothetical channel in which only bit-flip errors occur, leaving room to detect the errors. Quantum mutual information appears to be the better measure, though it is considerably more laborious to compute than quantum fidelity - which is the most commonly used measure.
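The abstract above considers a channel that only causes bit-flip errors; as a hedged illustration of the basic idea (using the simple three-qubit repetition code rather than the five-qubit code discussed in the thesis), the snippet below compares the physical bit-flip probability with the logical error probability after majority-vote correction.

```python
# Three-qubit bit-flip code: a logical error survives correction only if
# two or more of the three qubits are flipped, so
#   p_logical = 3 p^2 (1 - p) + p^3 = 3 p^2 - 2 p^3.
def logical_error(p: float) -> float:
    return 3 * p**2 - 2 * p**3

for p in (0.01, 0.05, 0.10, 0.25):
    print(f"physical p = {p:4.2f}  ->  logical p = {logical_error(p):.4f}")
# Correction helps whenever p < 1/2; e.g. p = 0.01 gives roughly 0.0003.
```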
188

DETECTING INITIAL CORRELATIONS IN OPEN QUANTUM SYSTEMS

Mullaparambi Babu, Anjala Mullaparambil 01 December 2021 (has links)
In this thesis, we discuss correlations arising between a system and its environment that lead to errors in an open quantum system. Detecting those correlations would be valuable for avoiding and/or correcting those errors. Previous work showed that correlations can be detected by measuring the system alone, provided the form of the system-environment interaction is known, for example a dipole-dipole interaction for a spin-1/2-spin-1/2 interaction Hamiltonian. We investigate the unitary U associated with the exchange Hamiltonian and examine the ability to detect initial correlations between a system and its environment for a spin-1/2 (qubit) system interacting with a larger, higher-dimensional environment. We provide bounds for when, given experimental data, we can state with certainty that initial system-environment correlations are present.
189

Error-Floors of the 802.3an LDPC Code for Noise Assisted Decoding

Tithi, Tasnuva Tarannum 01 May 2019 (has links)
In digital communication, information is sent as bits, which are corrupted by noise present in the wired or wireless medium known as the channel. Low-Density Parity-Check (LDPC) codes are a family of error-correction codes used in communication systems to detect and correct erroneous data at the receiver. Data is encoded with error-correction coding at the transmitter and decoded at the receiver. The Noisy Gradient Descent BitFlip (NGDBF) decoding algorithm is a new algorithm with excellent decoding performance and relatively low implementation requirements. This dissertation aims to characterize the performance of the NGDBF algorithm. A simple improvement over NGDBF, called Re-decoded NGDBF (R-NGDBF), is proposed to enhance the performance of the NGDBF decoding algorithm. A general method to estimate the decoding parameters of NGDBF is presented. The estimated parameters are then verified in a hardware implementation of the decoder to validate the accuracy of the estimation technique.
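A simplified sketch of the kind of noise-assisted gradient descent bit-flip update that NGDBF belongs to, run on a toy parity-check matrix; the threshold, syndrome weight, and noise scale below are illustrative assumptions, not the tuned parameters studied in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

def ngdbf_decode(H, y, theta=-0.5, w=0.75, sigma=0.6, max_iter=100):
    """Noise-assisted gradient descent bit-flip decoding (simplified sketch).

    H : (m, n) parity-check matrix over GF(2)
    y : (n,) received channel values (bipolar, +1 for bit 0, -1 for bit 1)
    Flips every bit whose inversion metric falls below the threshold theta.
    """
    x = np.sign(y)                     # hard-decision starting point
    x[x == 0] = 1
    for _ in range(max_iter):
        # bipolar syndrome of each check: +1 satisfied, -1 unsatisfied
        s = np.prod(np.where(H == 1, x, 1), axis=1)
        if np.all(s == 1):
            return x, True             # all parity checks satisfied
        # metric: channel agreement + weighted check agreement + noise
        metric = x * y + w * (H.T @ s) + rng.normal(0, sigma, size=x.shape)
        x = np.where(metric < theta, -x, x)
    return x, False

# Toy (7,4) Hamming-style parity-check matrix, all-zero codeword sent.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.ones(7)                       # all-zero codeword in bipolar form
y = codeword + rng.normal(0, 0.8, size=7)   # AWGN channel
decoded, ok = ngdbf_decode(H, y)
print("checks satisfied:", ok, "| symbol errors:", int(np.sum(decoded != codeword)))
```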
190

Generalization of Signal Point Target Code

Billah, Md Munibun 01 August 2019 (has links)
Detecting and correcting errors that occur in data transmitted through a channel is a task of great importance in digital communication. In Error Correction Coding (ECC), redundant data is added to the original data before transmission. By exploiting the properties of the redundant data, errors introduced during transmission can be detected and corrected. In this thesis, a new coding algorithm named Signal Point Target Code (SPTC) has been studied and various properties of the proposed code have been extended. SPTC uses a predefined shape within a given signal constellation to generate a parity symbol. The relation between the employed shape and the performance of the proposed code has been studied, and an extension of the SPTC is presented. This research presents simulation results to compare the performances of the proposed codes. The results have been simulated using different programming languages, and a comparison between those languages is provided. The performance of the codes is analyzed and possible future research areas are indicated.
