261

Two Essays on Estimation and Inference of Affine Term Structure Models

Wang, Qian 09 May 2015 (has links)
Affine term structure models (ATSMs) are a popular class of models for yield curve modeling. Given that the models forecast yields based on the speed of mean reversion, under what circumstances can we distinguish one ATSM from another? The objective of my dissertation is to quantify the benefit of knowing the “true” model as well as the cost of being wrong when choosing between ATSMs. In particular, I detail the power of out-of-sample forecasts to statistically distinguish one ATSM from another, given only that the data are generated from an ATSM and are observed without errors. My study analyzes the power and size of ATSMs by evaluating their relative out-of-sample performance. Essay one focuses on one-factor ATSMs. I find that a model’s predictive ability is closely related to the bias of its mean reversion estimates no matter what the true model is: the smaller the bias in the estimated mean reversion speed, the better the out-of-sample forecasts. In addition, my findings show that when the data are simulated from a high mean reversion process with a large sample size and a high sampling frequency, the models’ forecasting accuracy improves but their power to distinguish between different ATSMs is reduced. In the second essay, I extend the question of interest to multi-factor ATSMs. My findings show that adding more factors to the ATSMs does not improve their predictive ability, but it does increase their power to distinguish between each other. Multi-factor ATSMs estimated with larger sample sizes and longer time spans have greater predictive ability and stronger power to differentiate between models.
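As an illustration of the mechanism this first essay centers on, the sketch below simulates a one-factor Vasicek-type affine process and estimates its mean reversion speed by OLS on the exact discretization. All parameter values are illustrative assumptions rather than values from the dissertation; the upward small-sample bias in the estimated speed is a well-documented feature of this setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_vasicek(kappa, theta, sigma, r0, dt, n):
    """Exact discretization of the one-factor model dr = kappa*(theta - r)dt + sigma*dW."""
    r = np.empty(n + 1)
    r[0] = r0
    phi = np.exp(-kappa * dt)
    cond_sd = sigma * np.sqrt((1.0 - phi ** 2) / (2.0 * kappa))
    for t in range(n):
        r[t + 1] = theta + (r[t] - theta) * phi + cond_sd * rng.standard_normal()
    return r

def estimate_kappa(r, dt):
    """OLS of r_{t+1} on r_t; the fitted slope estimates exp(-kappa*dt)."""
    slope = np.polyfit(r[:-1], r[1:], 1)[0]
    return -np.log(slope) / dt

kappa_true, dt = 0.2, 1.0 / 12.0   # slow mean reversion, monthly sampling (illustrative)
estimates = [estimate_kappa(simulate_vasicek(kappa_true, 0.05, 0.01, 0.05, dt, 600), dt)
             for _ in range(500)]
print(f"true kappa = {kappa_true}, mean estimate = {np.mean(estimates):.3f}")  # biased upward
```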
262

Energy Efficient Adaptive Reed-Solomon Decoding System

Allen, Jonathan D 01 January 2008 (has links) (PDF)
This work presents an energy efficient adaptive error correction system utilizing the Reed-Solomon errors-and-erasures algorithm, targeted to an Altera Stratix FPGA device. The system adapts to changing channel conditions by reconfiguring itself with different decoders, allowing the lowest energy consumption rate that the current channel conditions will permit. A series of energy saving optimizations were applied to a set of previous designs, reducing the energy required to decode a megabit of data by more than 70%. In addition, a new channel model was used to assess the effects of differing reconfiguration rates on codeword error rate, energy consumption, and decoding speed.
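A Reed-Solomon code with n symbols, k of them data, can correct E errors and S erasures whenever 2E + S ≤ n − k, which is what makes adapting the decoder to the channel attractive. The sketch below illustrates only the selection idea; the decoder table and energy figures are hypothetical placeholders, not values from this thesis.

```python
# Pick the weakest (and thus cheapest-to-run) RS decoder that still covers
# the channel's expected error/erasure load.

def decodable(n: int, k: int, errors: int, erasures: int) -> bool:
    """RS errors-and-erasures condition: 2E + S <= n - k."""
    return 2 * errors + erasures <= n - k

# (n, k, relative energy per decoded bit) -- illustrative placeholders only
DECODERS = [(255, 251, 1.0), (255, 239, 1.8), (255, 223, 2.9)]

def pick_decoder(expected_errors: int, expected_erasures: int):
    """Return the cheapest decoder configuration that covers the channel."""
    for n, k, energy in DECODERS:          # sorted weakest/cheapest first
        if decodable(n, k, expected_errors, expected_erasures):
            return (n, k, energy)
    return None                            # channel too noisy for any configuration

print(pick_decoder(expected_errors=4, expected_erasures=6))  # -> (255, 239, 1.8)
```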
263

Deep Learning of Model Correction and Discontinuity Detection

Zhou, Zixu 26 August 2022 (has links)
No description available.
264

Novel Algorithms and Hardware Architectures for Computational Subsystems Used in Cryptography and Error Correction Coding

Chakraborty, Anirban 08 1900 (has links)
A modified single error-correcting, double error-detecting Hamming code, hereafter referred to as the modified SEC-DED Hamming code, is proposed in this research. The code requires fewer logic gates to implement than the standard SEC-DED Hamming code. Also, unlike the popular Hsiao code, the proposed code can determine the error in the received word from its syndrome's location in the parity check matrix. A detailed analysis of the area and power utilization of the encoder and decoder circuits of the modified SEC-DED Hamming code is also presented. Results demonstrate that this code is an excellent alternative to Hsiao's code, as the area and power values are very similar, and the ability to locate the error in the received word from its syndrome is of particular interest. Primitive polynomials play a crucial role in hardware realizations of error-correcting codes. This research describes an implementation of a scalable primitive polynomial circuit with coefficients in GF(2). The standard cell area and power values for various degrees of the circuit are analyzed, and the physical design of a degree-6 primitive polynomial computation circuit is provided. In addition to the codes, background on the existing SPX GCD computation algorithm is provided. Its implementation revealed that the combinational implementation of the SPX algorithm uses significantly less area than Euclid's algorithm, while the FSMD implementation reduces both dynamic and leakage power consumption. The physical design of the GCD computation using the SPX algorithm is also provided.
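The modified code itself is not reproduced here, but the syndrome-based error location it builds on can be seen in the classic extended Hamming (8,4) SEC-DED scheme, sketched below: the syndrome of a single-bit error equals the flipped bit's (1-indexed) position, and an overall parity bit separates single errors (correctable) from double errors (detectable only).

```python
import numpy as np

def encode(d):                       # d = 4 data bits
    c = np.zeros(8, dtype=int)
    c[2], c[4], c[5], c[6] = d       # data at positions 3, 5, 6, 7 (1-indexed)
    c[0] = c[2] ^ c[4] ^ c[6]        # p1 covers positions 1, 3, 5, 7
    c[1] = c[2] ^ c[5] ^ c[6]        # p2 covers positions 2, 3, 6, 7
    c[3] = c[4] ^ c[5] ^ c[6]        # p4 covers positions 4, 5, 6, 7
    c[7] = c[:7].sum() % 2           # overall parity: the "DED" bit
    return c

def decode(r):
    # Syndrome bits recompute the three parity checks; s == error position.
    s = ((r[0] ^ r[2] ^ r[4] ^ r[6])
         | (r[1] ^ r[2] ^ r[5] ^ r[6]) << 1
         | (r[3] ^ r[4] ^ r[5] ^ r[6]) << 2)
    p = r.sum() % 2                  # parity of all 8 bits
    if s == 0 and p == 0:
        return "no error", r
    if p == 1:                       # odd overall parity: a single, correctable error
        fixed = r.copy()
        fixed[s - 1 if s else 7] ^= 1   # s == 0 means the parity bit itself flipped
        return "corrected", fixed
    return "double error detected", None   # even parity but nonzero syndrome

c = encode([1, 0, 1, 1])
r = c.copy(); r[5] ^= 1              # flip bit at position 6
print(decode(r))                     # syndrome 6 locates and repairs the flip
```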
265

Low-Complexity Erasure Decoding of Staircase Codes

Clelland, William Stewart 30 August 2023 (has links)
This thesis presents a new low-complexity erasure decoder for staircase codes in optical interconnects between data centers. We developed a parallel software simulation environment to measure the performance of the erasure decoding techniques at output error rates relevant to an optical link. Low-complexity erasure decoding demonstrated a 0.06 dB increase in coding gain compared to bounded distance decoding at an output error rate of 3 × 10⁻¹², and a log-linear extrapolation predicts a gain of 0.09 dB at 10⁻¹⁵. This performance improvement is achieved without increasing the maximum number of decoding iterations and while keeping power constant. In addition, we found the optimal position within the decoding window at which to apply erasure decoding to minimize iteration count and output error rates, as well as the erasure threshold that minimizes the iteration count subject to the constrained erasure decoding structure.
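The log-linear extrapolation mentioned above treats the coding-gain improvement as linear in log₁₀ of the output error rate. The sketch below fits such a line, but it uses only the two (rate, gain) pairs quoted in the abstract, so the fit simply passes through them; the thesis fits measured simulation points. The intermediate read-off is purely illustrative.

```python
import numpy as np

rates = np.array([3e-12, 1e-15])       # output error rates quoted above
gains = np.array([0.06, 0.09])         # dB gain over bounded distance decoding

# Fit gain as a linear function of log10(output error rate).
slope, intercept = np.polyfit(np.log10(rates), gains, 1)
print(f"predicted gain at 1e-13: {slope * -13.0 + intercept:.3f} dB")  # ~0.073 dB
```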
266

A Syllabus of Techniques for Correction of Speech Defects

Pugh, William O. 01 January 1946 (has links) (PDF)
The problem is to survey the significant reference materials in the field of speech correction in order to ascertain and compile in digest-form those explanations of corrective techniques that are most valid with respect to consistency, both intrinsic and comparative.
267

Improving the Accuracy of Density Functional Approximations: Self-Interaction Correction and Random Phase Approximation

Ruan, Shiqi January 2022 (has links)
Complexes containing a transition metal atom with a 3d⁴–3d⁷ electron configuration typically have two low-lying states, a high spin (HS) and a low spin (LS) state. The adiabatic energy difference between these states, known as the spin-crossover energy, is small enough to pose a challenge even for electronic structure methods that are well known for their accuracy and reliability. In this work we analyze the quality of electronic structure approximations for spin-crossover energies of iron complexes with four different ligands by comparing energies from self-consistent and post-self-consistent calculations for methods based on the random phase approximation and the Fermi-Löwdin self-interaction correction. Considering that Hartree-Fock densities were found by Song et al. [J. Chem. Theory Comput. 14, 2304 (2018)] to eliminate the density error to a large extent, and that the Hartree-Fock method and the Perdew-Zunger-type self-interaction correction share some physics, we compare the densities obtained with these methods to learn about their resemblance. We find that evaluating non-empirical exchange-correlation energy functionals on the corresponding self-interaction-corrected densities can mitigate the strong density errors and improves the accuracy of the adiabatic energy differences between HS and LS states. / Physics
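For reference, the adiabatic spin-crossover energy discussed above is conventionally defined as the difference between the two states' energies, each evaluated at its own relaxed geometry; this is the standard definition, not notation specific to this dissertation:

```latex
\Delta E_{\mathrm{HL}} = E_{\mathrm{HS}}\!\left(R_{\mathrm{HS}}\right) - E_{\mathrm{LS}}\!\left(R_{\mathrm{LS}}\right)
```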
268

The BMI: Measurement, Physician Costs and Distributional Decomposition

Ornek, Mustafa January 2016 (has links)
This thesis comprises three chapters involving the analysis of the body mass index (BMI) in health economics. The first chapter evaluates two correction models that aim to address measurement error in self-reported (SR) BMI in survey data. This chapter is an addition to the literature as it utilizes two separate Canadian datasets to evaluate the transportability of these correction equations both over time and across different datasets. Our results indicate that the older method remains competitive and that when BMI is used as an independent variable, correction may even be unnecessary. The second chapter measures the relationship between long-term physician costs and BMI. The results show that obesity is associated with higher long-term physician costs only at older ages for males, but at all ages for females. We find that accounting for existing health conditions that are often associated with obesity does not explain the increase in long-term physician costs as BMI increases. This indicates that there is an underlying relationship between the two that we could not account for in our econometric models. Finally, the third chapter decomposes the differences in BMI distributions of Canada and the US. The results show that the differences between BMI levels, both over time and across countries, are increasing with BMI; meaning the largest difference is observed at the right tail of the two distributions. In analysis comparing two points in time, these differences are solely due to differences in the returns from attributes and the omitted variables that we cannot account for in our models. In cross-country analysis, there is evidence that the differences observed below the mean can be explained by the differences in characteristics of the two populations. The differences observed above the mean are again due to those in returns and the omitted variables. / Dissertation / Doctor of Philosophy (PhD)
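The correction-equation approach evaluated in the first chapter typically works as sketched below: in a validation sample containing both measured and self-reported BMI, regress measured on self-reported values, then apply the fitted equation to surveys where only self-reported BMI is available. The data here are synthetic and the coefficients hypothetical; the thesis estimates its equations from Canadian survey data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic validation sample: respondents tend to under-report their BMI.
measured = rng.normal(27.0, 5.0, 2000)
self_rep = 0.95 * measured - 0.3 + rng.normal(0.0, 0.8, 2000)

# Fit the correction equation: measured = a + b * self-reported.
b, a = np.polyfit(self_rep, measured, 1)

# Apply it to a survey with only self-reported BMI.
survey_sr = rng.normal(25.5, 4.5, 500)
corrected = a + b * survey_sr
print(f"correction equation: measured ≈ {a:.2f} + {b:.2f} × self-reported")
```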
269

Development of Multi-model Ensembles for Climate Projection

Li, Xinyi January 2024 (has links)
Climate change is one of the most challenging and defining issues of our time, and it has resulted in substantial societal, economic, and environmental impacts across the world. To assess the potential climate change impact, climate projections are generated with General Circulation Models (GCMs). However, the climate change signals remain uncertain and GCMs have difficulty representing regional climate features. Therefore, comprehensive knowledge of climate change signals and reliable high-resolution climate projections are highly desired. This dissertation aims to address such challenges by developing climate projections with multi-model ensembles for climate impact assessment. This includes: i) developing multi-model ensembles to analyze global changes in all water components within the hydrological cycle and to quantify the uncertainties in GCM projections; ii) developing bias correction models for generating high-resolution daily maximum and minimum temperature projections with individual GCMs and multi-model ensemble means over Canada; and iii) proposing bias correction models with individual GCMs and multi-model ensemble means for high-resolution daily precipitation projections for Canada. The proposed models are capable of developing high-resolution climate projections at a regional scale and exploring the climate change signals. The reliable climate projections generated could provide valuable information for formulating appropriate climate change mitigation and adaptation strategies across the world. / Thesis / Doctor of Philosophy (PhD)
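One common form of the bias correction described in objectives ii) and iii) is empirical quantile mapping: each raw model value is mapped, via the quantile it occupies in the model's historical distribution, onto the observed historical distribution. The sketch below uses synthetic data and does not claim to match the dissertation's specific models.

```python
import numpy as np

rng = np.random.default_rng(2)

obs_hist = rng.normal(10.0, 4.0, 5000)   # observed daily Tmax, historical period
mod_hist = rng.normal(8.5, 5.0, 5000)    # GCM output, same period (biased)
mod_fut = rng.normal(11.0, 5.0, 5000)    # GCM output, future scenario

def quantile_map(x, model_ref, obs_ref):
    """Replace each value with the observed value at the same empirical quantile."""
    q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

corrected = quantile_map(mod_fut, mod_hist, obs_hist)
print(f"raw future mean {mod_fut.mean():.2f}, bias-corrected {corrected.mean():.2f}")
```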
270

Beyond Pulse Position Modulation : a Feasibility Study

Gustafsson, Danielle January 2023 (has links)
This thesis presents a feasibility study of the beyond pulse position modulation (BPPM) error-correction protocol. The BPPM protocol, invented at Ericsson AB, describes a modulation encoding that uses vertically and horizontally polarized single photons for optical transmission and error correction. The work is a mixture of experimental laboratory work and theoretical software simulations intended to mimic actual optical fiber transmission. One aspect of the project involves designing the optical communication system used to evaluate the probabilities of transmission errors in the form of false detections and losses of light. The BPPM protocol is implemented and used for software-simulated error generation and correction. With the available laboratory setup as the point of reference, error correction using the BPPM protocol is studied with pulses of light containing more than one photon. The results show that the BPPM protocol can be used to recover some of the information lost during optical fiber transmission. Factors such as the size of the codewords, the number of photons per pulse, and the detection efficiency of the single photon detector (SPD) used have a significant impact on the success of the transmission.
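One reason photons-per-pulse and detector efficiency matter, as the results indicate, is elementary: with n photons in a pulse and per-photon detection efficiency η, the probability that the pulse registers at all is 1 − (1 − η)ⁿ, assuming independent detections and ignoring detector dead time. The sketch below tabulates this quantity for illustrative efficiencies; it is not an implementation of the BPPM protocol itself, which is an Ericsson design.

```python
def pulse_detection_prob(n_photons: int, eta: float) -> float:
    """Probability that at least one photon in the pulse is detected."""
    return 1.0 - (1.0 - eta) ** n_photons

# Illustrative SPD efficiencies and pulse sizes, not values from the thesis.
for eta in (0.1, 0.25, 0.6):
    row = ", ".join(f"n={n}: {pulse_detection_prob(n, eta):.3f}" for n in (1, 5, 20))
    print(f"eta={eta}: {row}")
```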
