271 |
Stresses and deformations in involute spur gears by finite element method. Wei, Zeping, 29 October 2004
This thesis investigates the characteristics of an involute gear system, including the contact stresses, bending stresses, and transmission errors of gears in mesh. Gearing is one of the most critical components in mechanical power transmission systems, and transmission error is considered to be one of the main contributors to noise and vibration in a gear set. Transmission error measurement has become a popular area of gear research and is a possible method for quality control. To estimate transmission error in a gear system, the characteristics of involute spur gears were analyzed using the finite element method. The contact stresses were examined using 2-D FEM models, and the bending stresses in the tooth root were examined using a 3-D FEM model.
Current methods of calculating gear contact stresses use Hertz's equations, which were originally derived for contact between two cylinders. To investigate contact problems with FEM, the stiffness relationship between the two contact areas is usually established through a spring placed between the two contacting areas; this can be achieved by inserting a contact element between the two areas where contact occurs. The results of the two-dimensional FEM analyses from ANSYS are presented and compared with the theoretical values. The two sets of results agree very well, indicating that the FEM model is accurate.
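As a rough illustration of the Hertzian reference values against which the FEM contact stresses are compared, the following Python sketch evaluates the classical line-contact formulas for two parallel cylinders; the load, geometry, and material values are placeholders, not data from the thesis.

import math

def hertz_line_contact(force, length, r1, r2, e1, nu1, e2, nu2):
    """Peak contact pressure and contact half-width for two parallel
    cylinders pressed together (classical Hertz line-contact theory)."""
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)                      # effective radius
    e_eff = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)    # effective modulus
    b = math.sqrt(4.0 * force * r_eff / (math.pi * length * e_eff))  # contact half-width
    p_max = 2.0 * force / (math.pi * b * length)             # peak contact pressure
    return p_max, b

# Placeholder steel-on-steel example (illustrative values only)
p_max, b = hertz_line_contact(force=2500.0, length=0.02, r1=0.03, r2=0.05,
                              e1=207e9, nu1=0.3, e2=207e9, nu2=0.3)
print(f"peak contact pressure = {p_max / 1e6:.0f} MPa, half-width = {b * 1e6:.0f} um")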
This thesis also considers the variation of the whole gear body stiffness during gear rotation, arising from bending deflection, shearing displacement, and contact deformation. Many different positions within the meshing cycle were investigated.
|
272 |
Bandwidth-efficient communication systems based on finite-length low density parity check codes. Vu, Huy Gia, 31 October 2006
Low density parity check (LDPC) codes are linear block codes constructed from pseudo-random parity check matrices. These codes are powerful in terms of error performance and, especially, have low decoding complexity. While infinite-length LDPC codes approach the capacity of communication channels, finite-length LDPC codes also perform well and simultaneously meet the delay requirements of many communication applications such as voice and backbone transmissions. Therefore, finite-length LDPC codes are attractive for low-latency communication systems. This thesis focuses mainly on bandwidth-efficient communication systems using finite-length LDPC codes. Such bandwidth-efficient systems are realized by mapping a group of LDPC coded bits to a symbol of a high-order signal constellation. Depending on the systems' infrastructure and knowledge of the channel state information (CSI), the signal constellations in different coded modulation systems can be two-dimensional multilevel/multiphase constellations or multi-dimensional space-time constellations.
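To make the bit-to-symbol mapping concrete, here is a small Python sketch that groups coded bits and maps each group onto a Gray-labelled 16-QAM constellation; the constellation size and labelling are illustrative choices, not necessarily those studied in the thesis.

import numpy as np

def gray_16qam_map(coded_bits):
    """Map groups of 4 coded bits to Gray-labelled 16-QAM symbols
    with unit average energy (illustrative mapping only)."""
    assert len(coded_bits) % 4 == 0
    # Gray mapping of 2 bits to one PAM level in {-3, -1, +1, +3}
    pam = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
    bits = np.asarray(coded_bits).reshape(-1, 4)
    i = np.array([pam[(int(b[0]), int(b[1]))] for b in bits], dtype=float)
    q = np.array([pam[(int(b[2]), int(b[3]))] for b in bits], dtype=float)
    return (i + 1j * q) / np.sqrt(10.0)   # normalize average symbol energy to 1

# Example: map 8 LDPC-coded bits onto 2 symbols
print(gray_16qam_map([1, 0, 1, 1, 0, 0, 1, 0]))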
In the first part of the thesis, two basic bandwidth-efficient coded modulation systems, namely LDPC coded modulation and multilevel LDPC coded modulation, are investigated for both additive white Gaussian noise (AWGN) and frequency-flat Rayleigh fading channels. Bounds on the bit error rate (BER) performance are derived for these systems based on the maximum likelihood (ML) criterion. The derivation of these bounds relies on union bounding and combinatorial techniques. In particular, for LDPC coded modulation, the ML bound is computed from the Hamming distance spectrum of the LDPC code and the Euclidean distance profile of the two-dimensional constellation. For multilevel LDPC coded modulation, the bound of each decoding stage is obtained for a generalized multilevel coded modulation in which more than one coded bit is considered per level. For both systems, the bounds are confirmed by simulation results of ML decoding and/or the performance of ordered-statistic decoding (OSD) and sum-product decoding. It is demonstrated that these bounds can be used efficiently to evaluate the error performance and to select appropriate parameters (such as the code rate, constellation and mapping) for the two communication systems.
The second part of the thesis studies bandwidth-efficient LDPC coded systems that employ multiple transmit and multiple receive antennas, i.e., multiple-input multiple-output (MIMO) systems. Two scenarios of CSI availability are considered: (i) the CSI is unknown at both the transmitter and the receiver; (ii) the CSI is known at both the transmitter and the receiver. For the first scenario, LDPC coded unitary space-time modulation systems are most suitable, and the ML performance bound is derived for these non-coherent systems. To derive the bound, the summation of chordal distances is obtained and used instead of the Euclidean distances. For the second scenario, adaptive LDPC coded MIMO modulation systems are studied, where three adaptive schemes with antenna beamforming and/or antenna selection are investigated and compared in terms of bandwidth efficiency. For uncoded discrete-rate adaptive modulation, the computation of the bandwidth efficiency shows that the scheme with antenna selection at the transmitter and antenna combining at the receiver performs best when the number of antennas is small. For adaptive LDPC coded MIMO modulation systems, an achievable threshold of the bandwidth efficiency is also computed from the ML bound of LDPC coded modulation derived in the first part.
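As a simplified illustration of the union-bounding idea (stated here for BPSK over AWGN rather than the high-order constellations treated in the thesis), the following Python sketch evaluates a truncated union bound on the bit error rate from an assumed partial Hamming distance spectrum; the code parameters and spectrum values are placeholders.

import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_ber(distance_spectrum, n, rate, ebno_db):
    """Truncated union bound on the BER of a binary linear code with BPSK on
    AWGN: P_b <= sum_d (d/n) * A_d * Q(sqrt(2 * d * R * Eb/N0))."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum((d / n) * a_d * q_func(math.sqrt(2.0 * d * rate * ebno))
               for d, a_d in distance_spectrum.items())

# Placeholder spectrum {Hamming weight: multiplicity} for a hypothetical (204, 102) code
spectrum = {8: 3, 10: 12, 12: 55}
for snr_db in (2.0, 3.0, 4.0):
    print(f"Eb/N0 = {snr_db} dB -> BER bound = {union_bound_ber(spectrum, 204, 0.5, snr_db):.3e}")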
|
273 |
The Da Vinci Project: A theoretical approach to language learning. Almqvist, Sandra, January 2012
The English language could be considered a second language in Swedish society; it is present in more than just school, for example on television, in the world of computers, and on the radio. The general field of interest for this study is an exchange between two schools, one located in Sweden and one in Italy, called the Da Vinci Project. The aim of the study is to gain insight into the effects of the project on the language development of the participating students, with a focus on exposure and error-feedback. The information was gathered through interviews and questionnaires with students and teachers, and I had the opportunity to visit the students while they attended the school in Italy. Visiting the school in Italy made it easier to understand both the Italian and the Swedish students' experience of a school system different from the one they were accustomed to. The results were analyzed using a theoretical approach to second language acquisition, with Stephen Krashen's monitor model as an important component, and conclusions could be drawn about the students' language development as a result of meeting different teaching methods in the different school systems. The teachers' answers provided important information about these teaching methods. The results of the survey showed that the Da Vinci Project involves two very different school systems that use different methods with respect to error-feedback and exposure. A "gap" was found between the two schools, but the results also show that for some students their second language, English, improved through the exchange, while a few students in the Da Vinci Project believe that they did not develop in their second language.
|
274 |
Decentralized Coding in Unreliable Communication Networks. Lin, Yunfeng, 30 August 2010
Many modern communication networks suffer significantly from the unreliability of their nodes and links. Traditionally, centralized erasure codes have been used extensively to improve reliability by introducing data redundancy. In this thesis, we address several issues in implementing erasure codes in a decentralized way, such that coding operations are spread over multiple nodes. Our solutions are based on fountain codes and randomized network coding, because their simplicity and randomization properties make them amenable to decentralized implementation.
Our contributions consist of four parts. First, we propose a novel decentralized implementation of fountain codes utilizing random walks. Our solution does not require node location information and uses only a small local routing table whose size is proportional to the number of neighbors. Second, we introduce priority random linear codes to achieve partial data recovery by partitioning and encoding data into non-overlapping or overlapping subsets. Third, we present geometric random linear codes, which decrease the communication cost of decoding significantly by introducing modest data redundancy in a hierarchical fashion. Finally, we study the application of network coding in disruption tolerant networks. We show that network coding achieves shorter data transmission times than replication, especially when data buffers are limited. We also propose an efficient variant of a network coding based protocol that attains similar transmission delay, but much lower transmission costs, compared to a protocol based on epidemic routing.
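To illustrate the randomized coding primitive underlying these contributions, here is a small Python sketch of random linear coding over GF(2): each coded packet is a random XOR combination of the source packets, and any collection of coded packets whose coefficient vectors have full rank can be decoded by Gaussian elimination. This is a generic sketch, not the specific protocols developed in the thesis.

import random

def rlc_encode(source_packets):
    """Produce one random linear combination (over GF(2)) of the source
    packets, returning (coefficient_vector, coded_payload)."""
    k = len(source_packets)
    coeffs = [random.randint(0, 1) for _ in range(k)]
    if not any(coeffs):                      # avoid the useless all-zero combination
        coeffs[random.randrange(k)] = 1
    payload = bytes(len(source_packets[0]))  # start from an all-zero payload
    for c, pkt in zip(coeffs, source_packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, pkt))
    return coeffs, payload

# Example: encode three equal-length source packets into five coded packets
sources = [b"abcd", b"efgh", b"ijkl"]
for coeffs, payload in (rlc_encode(sources) for _ in range(5)):
    print(coeffs, payload)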
|
275 |
Automated Error Assessment in Spherical Near-Field Antenna Measurements. Pelland, Patrick, 27 May 2011
This thesis focuses on spherical near-field (SNF) antenna measurements and on methods developed or modified for this work to estimate the uncertainty in a particular far-field radiation pattern. We discuss the need for error assessment in SNF antenna measurements and propose a procedure that, in an automated fashion, determines the overall uncertainty in the measured far-field radiation pattern of a particular antenna. This overall uncertainty results from a combination of several known sources of error common to SNF measurements. The procedure consists of several standard SNF measurements, some newly developed tests, and several stages of post-processing of the measured data. The automated procedure was tested on four antennas of various operating frequencies and directivities to verify its functionality. Finally, total uncertainty data are presented in several formats.
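One common way to combine independent error contributions into an overall pattern uncertainty is a root-sum-of-squares (RSS) of the individual terms; the Python sketch below shows that combination for a placeholder error budget expressed in dB relative to the pattern peak. This is a generic illustration under that RSS assumption, not the specific budget or combination method used in the thesis.

import math

def rss_uncertainty_db(error_terms_db, signal_level_db=0.0):
    """Combine individual error-to-signal levels (dB) by root-sum-of-squares and
    return the resulting upper-bound uncertainty (dB) on a signal at signal_level_db."""
    linear_errors = [10.0 ** (e / 20.0) for e in error_terms_db]   # dB -> linear ratios
    combined = math.sqrt(sum(err ** 2 for err in linear_errors))   # RSS combination
    signal = 10.0 ** (signal_level_db / 20.0)
    return 20.0 * math.log10(1.0 + combined / signal)

# Hypothetical budget of error-to-signal levels (dB below the pattern peak)
budget = [-45.0, -50.0, -48.0, -55.0]
print(f"combined uncertainty at the peak: {rss_uncertainty_db(budget):.3f} dB")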
|
276 |
Statistical Analysis of Hartmann-Shack Images of a Pre-school Population. Thapa, Damber, 01 1900
Uncoordinated growth of the optical components of the eye may induce different levels of monochromatic aberrations in the growing eyes of children. This thesis aimed to examine the impact of age, visual acuity, and refractive error on higher-order aberrations, and to determine the relationships between them.
Hartmann-Shack images taken with the Welch Allyn® SureSight Autorefractor were calibrated in order to determine the Zernike coefficients up to the 8th order for a pupil diameter of 5 mm. The MATLAB code proposed by Thibos et al., which follows the standard for reporting the optical aberrations of the eye, was the basis of the code written for this study; modifications were required to suit the specific needs of the Welch Allyn® SureSight Autorefractor. After calibration, the lower-order aberrations could be compared with the results of cycloplegic retinoscopy. RMS values of the aberrations and Strehl ratios were computed to examine the optical performance of the eye.
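As an illustration of these two metrics, the Python sketch below computes the higher-order RMS wavefront error from a vector of Zernike coefficients (which, under the ANSI/OSA standard normalization, add in quadrature) and estimates the Strehl ratio with the Marechal approximation. The coefficient values and wavelength are placeholders, and the thesis may compute the Strehl ratio differently (for example, from the point spread function).

import math

def higher_order_rms(zernike_coeffs_um, first_ho_index=6):
    """RMS wavefront error (microns) of the higher-order terms, assuming
    ANSI/OSA single-index ordering where indices >= 6 are 3rd order and above."""
    ho = zernike_coeffs_um[first_ho_index:]
    return math.sqrt(sum(c * c for c in ho))

def strehl_marechal(rms_um, wavelength_um=0.555):
    """Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2)."""
    return math.exp(-(2.0 * math.pi * rms_um / wavelength_um) ** 2)

# Placeholder coefficients c_0..c_14 (microns) over a 5 mm pupil
coeffs = [0.0, 0.0, 0.0, 0.12, -0.45, 0.08, 0.05, -0.03, 0.02, 0.04,
          0.01, -0.02, 0.01, 0.00, 0.01]
rms = higher_order_rms(coeffs)
print(f"higher-order RMS = {rms:.3f} um, Strehl (Marechal) = {strehl_marechal(rms):.3f}")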
A total of 834 Hartmann-Shack images of 436 children (mean age 3.94 ± 0.94 years, range 3 to 6 years) were examined in this study (right eyes: 436; left eyes: 398). The sample had a mean (± SD) spherical equivalent of 1.19 ± 0.59 D, a mean with-the-rule astigmatism (J0) of 0.055 ± 0.22 D, and a mean oblique astigmatism (J45) of 0.01 ± 0.14 D. Visual acuity varied from 6/6 to 6/18.
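For reference, the spherical equivalent and the astigmatic components J0 and J45 reported above are typically obtained from a sphere/cylinder/axis refraction via the standard power-vector transformation of Thibos et al.; the sketch below shows that conversion for an assumed example refraction, not data from the study.

import math

def power_vectors(sphere_d, cyl_d, axis_deg):
    """Convert sphere/cylinder/axis to power vectors: M (spherical equivalent),
    J0 (with/against-the-rule astigmatism), J45 (oblique astigmatism)."""
    axis = math.radians(axis_deg)
    m = sphere_d + cyl_d / 2.0
    j0 = -(cyl_d / 2.0) * math.cos(2.0 * axis)
    j45 = -(cyl_d / 2.0) * math.sin(2.0 * axis)
    return m, j0, j45

# Assumed example refraction: +1.50 DS / -0.75 DC x 180
m, j0, j45 = power_vectors(1.50, -0.75, 180)
print(f"M = {m:.2f} D, J0 = {j0:.2f} D, J45 = {j45:.2f} D")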
Moderate mirror symmetry was found between the eyes. Like refractive error, higher-order aberrations declined with age in this sample. Higher-order aberrations also had an impact on refractive error: significantly higher ocular aberrations were found in the high hyperopic group (SE > +2.0 D) compared to the emmetropic (-0.5 D < SE < +0.5 D) and low hyperopic (+0.5 D < SE < +2.0 D) groups. The Strehl ratio was significantly lower in the high hyperopic group. Higher Strehl ratios were observed for the better acuity groups, but the differences in average Strehl ratio among the visual acuity groups were not statistically significant.
In conclusion, age had an impact on the ocular aberrations; a wider age range, from birth to adolescence, is required for further investigation. This effect could be indirectly influenced by age-related changes in refractive error, as the correlation between refractive error and the higher-order aberrations was significant. The findings also show that the Strehl ratio alone cannot fully describe the visual acuity of the eye; other metrics, such as the neural transfer function and neural noise, are needed to describe the resultant visual performance of the eye.
|
278 |
Soft Error Resistant Design of the AES Cipher Using SRAM-based FPGA. Ghaznavi, Solmaz, January 2011
This thesis presents a new architecture for the reliable implementation of the symmetric-key algorithm Advanced Encryption Standard (AES) in Field Programmable Gate Arrays (FPGAs). Since FPGAs are prone to soft errors caused by radiation, and AES is highly sensitive to errors, reliable architectures are of significant concern. Energetic particles hitting a device can flip bits in the FPGA SRAM cells controlling all aspects of the implementation. Unlike previous research, heterogeneous error detection techniques based on properties of the circuit and its functionality are used to provide adequate reliability at the lowest possible cost. The use of dual-ported block memory for SubBytes, duplication for the control circuitry, and a new enhanced parity technique for MixColumns is proposed. Previous parity techniques cover single errors in datapath registers; however, soft errors can also occur in the control circuitry as well as in the SRAM cells forming the combinational logic and routing. In this research, the propagation of single errors is investigated in the routed netlist, weaknesses of the previous parity techniques are identified, and architectural redesign at the register-transfer level is introduced to resolve undetected single errors in both the routing and the combinational logic.
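As a software-level illustration of concurrent error detection applied to an AES round operation (using simple duplicate-and-compare here, rather than the enhanced parity prediction developed in the thesis), the following Python sketch applies MixColumns to one state column twice and flags any mismatch as a detected error. The FIPS-197 column transformation itself is standard; the checking wrapper is only a conceptual analogue of hardware redundancy.

def xtime(b):
    """Multiply a byte by x (i.e., by 2) in AES's GF(2^8)."""
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def mix_single_column(col):
    """AES MixColumns applied to one 4-byte column (a0..a3)."""
    a0, a1, a2, a3 = col
    return [
        xtime(a0) ^ (xtime(a1) ^ a1) ^ a2 ^ a3,
        a0 ^ xtime(a1) ^ (xtime(a2) ^ a2) ^ a3,
        a0 ^ a1 ^ xtime(a2) ^ (xtime(a3) ^ a3),
        (xtime(a0) ^ a0) ^ a1 ^ a2 ^ xtime(a3),
    ]

def mix_column_with_check(col):
    """Duplicate-and-compare error detection: compute the column twice
    and raise if the results disagree (a detected error)."""
    first = mix_single_column(col)
    second = mix_single_column(col)   # redundant copy of the computation
    if first != second:
        raise RuntimeError("soft error detected in MixColumns")
    return first

# Known MixColumns test column: db 13 53 45 -> 8e 4d a1 bc
print([hex(b) for b in mix_column_with_check([0xDB, 0x13, 0x53, 0x45])])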
Reliability of the AES implementation is a critical issue not only in large-scale FPGA-based systems but also at higher altitudes and in space applications, where energetic particles are more numerous. Thus, this research is important for providing efficient soft-error resistant designs in many current and future secure applications.
|
279 |
A study of the robustness of magic state distillation against Clifford gate faults. Jochym-O'Connor, Tomas Raphael, January 2012
Quantum error correction and fault-tolerance are at the heart of any scalable quantum computation architecture. Developing a set of tools that satisfy the requirements of fault-tolerant schemes is thus of prime importance for future quantum information processing implementations. The Clifford gate set has the desired fault-tolerant properties, preventing bad propagation of errors within encoded qubits, for many quantum error correcting codes, yet it does not provide full universal quantum computation. Preparation of magic states can enable universal quantum computation in conjunction with Clifford operations; however, magic states prepared experimentally will be imperfect due to implementation errors. Thankfully, there exists a scheme to distill pure magic states from noisy prepared magic states using only operations from the Clifford group and measurement in the Z-basis; such a scheme is called magic state distillation [1]. This work investigates the robustness of magic state distillation to faults in state preparation and in the application of the Clifford gates in the protocol. We establish that the distillation scheme is robust to perturbations in the initial state preparation and characterize the set of states in the Bloch sphere that converge to the T-type magic state in different fidelity regimes. Additionally, we show that magic state distillation is robust to low levels of gate noise and that performing the distillation scheme using noisy Clifford gates is more efficient than using encoded fault-tolerant gates, due to the large overhead of fault-tolerant quantum computing architectures.
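As a rough numerical illustration of why distillation with imperfect inputs can still converge, the Python sketch below iterates the leading-order output-error formula commonly quoted for 15-to-1 T-type magic state distillation, eps_out ~ 35 * eps_in^3, assuming ideal Clifford operations; the thesis's analysis with noisy Cliffords modifies this picture, so the recursion here is only an idealized baseline.

def distill_error(eps_in, rounds):
    """Iterate the leading-order 15-to-1 distillation map eps -> 35 * eps**3
    (valid for small eps and perfect Clifford gates)."""
    eps = eps_in
    history = [eps]
    for _ in range(rounds):
        eps = 35.0 * eps ** 3
        history.append(eps)
    return history

# Below the fixed point 1/sqrt(35) ~ 0.169 the error is driven toward zero;
# above it, the iteration grows and distillation fails.
for eps0 in (0.10, 0.169, 0.20):
    print(eps0, "->", ["%.2e" % e for e in distill_error(eps0, rounds=3)])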
|