1

The Computational Problem of Motor Control

Poggio, Tomaso, Rosser, B.L. 01 May 1983
We review some computational aspects of motor control. The problem of trajectory control is phrased in terms of an efficient representation of the operator connecting joint angles to joint torques. Efficient look-up table solutions of the inverse dynamics are related to some results on the decomposition of functions of many variables. From a biological perspective, we emphasize the importance of the constraints imposed by the properties of the biological hardware in determining the solution to the inverse dynamics problem.
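The look-up-table idea can be illustrated in a few lines. The sketch below is a hypothetical single-joint example, not anything from the memo: it tabulates a toy inverse-dynamics map from joint angle and acceleration to torque on a grid, then answers run-time queries by interpolation. The dynamics, grid sizes, and constants are all assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical inverse dynamics for one joint: torque from angle and
# angular acceleration (an inertia term plus a gravity term).
def inverse_dynamics(theta, accel, m=1.0, l=0.5, g=9.81):
    return m * l**2 * accel + m * g * l * np.sin(theta)

# Precompute the torque table over a grid of joint states.
thetas = np.linspace(-np.pi, np.pi, 64)
accels = np.linspace(-10.0, 10.0, 64)
T, A = np.meshgrid(thetas, accels, indexing="ij")
table = inverse_dynamics(T, A)

# At run time, the analytic model is replaced by a table interpolation.
lookup = RegularGridInterpolator((thetas, accels), table)
print(lookup([(0.3, 2.0)])[0], inverse_dynamics(0.3, 2.0))
```

For a full arm the table would be indexed by all joint angles, velocities, and accelerations, which is exactly why the decomposition of functions of many variables matters.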
2

"They're All Sort of Fake, Not Real": An Exploratory Study of Who Young Girls Look Up To

Wright, Carole Ann January 2008
The purpose of this qualitative study was to explore the phenomenon of role models for young girls. Girls aged 5 to 12 years were asked whom they chose to look up to, how significant their role models were to them, why they had chosen them, and whether they thought they could match their chosen models' achievements. A socio-cultural framework provides a useful perspective for understanding the significance of role models, as they act as powerful transmitters and reinforcers of the tenets of socialization. Social Cognitive Theory claims that children largely learn through modelling, observing and imitating significant others. Interview and task sessions, including a field-mapping activity and the sorting of peer-generated photographs, were conducted with 12 girls aged from 5 to 12 years from one urban school. Analysis of the interview data found that family members or family substitutes were the most significant people these girls chose and that, despite the alleged pressure from popular culture, the young girls in this study were able to make discerning judgements about the 'hollowness' of characters of popular culture. They identified skills or attributes that their role models demonstrated rather than their physical attractiveness, their popularity or the amount of money their fame had brought them. This study is a valid representation of what mattered to a group of young girls at one specific point in time and could indicate the value of further investigation into how to maximize the benefits of role models for young girls.
3

FPGA Implementation of a Pseudo-Random Aggregate Spectrum Generator for RF Hardware Test and Evaluation

Baweja, Randeep Singh 09 October 2020
Test and evaluation (T&E) is a critically important step before in-the-field deployment of radio-frequency (RF) hardware in order to assure that the hardware meets its design requirements and specifications. Typically, T&E is performed either in a lab setting utilizing a software simulation environment or through real-world field testing. While the former approach is typically limited by the accuracy of the simulation models (particularly of the anticipated hardware effects) and by non-real-time data rates, the latter can be extremely costly in terms of time, money, and manpower. To build upon the strengths of these approaches and to mitigate their weaknesses, this work presents the development of an FPGA-based T&E tool that allows for real-time pseudo-random aggregate signal generation for testing RF receiver hardware (such as communication receivers, spectrum sensors, etc.). In particular, a framework is developed for an FPGA-based implementation of a test signal emulator that generates randomized aggregate spectral environments containing signals with random parameters such as center frequencies, bandwidths, start times, and durations, as well as receiver and channel effects such as additive white Gaussian noise (AWGN). To test the accuracy of the developed spectrum generation framework, the randomization properties of the framework are analyzed to assure correct probability distributions and independence. Additionally, FPGA implementation decisions, such as bit precision versus accuracy of the generated signal and the impact on the FPGA's hardware footprint, are analyzed. This analysis allows the test signal engineer to make informed decisions while designing a hardware-based RF test system. This framework is easily extensible to other signal types and channel models, and can be used to test a variety of signal-based applications. / Master of Science / Test and evaluation (T&E) is a critically important step before in-the-field deployment of radio-frequency signal hardware in order to assure that the hardware meets its design requirements and specifications. Typically, T&E is performed either in a lab setting utilizing a software simulation or through real-world field testing. While the former approach is typically limited by the accuracy of the simulation models and by slower data rates, the latter can be extremely costly in terms of time, money, and manpower. To address these issues, a hardware-based signal generation approach that takes the best of both methods mentioned above is developed in this thesis. This approach allows the user to accurately model a radio-frequency system without requiring expensive equipment. This work presents the development of a hardware-based T&E tool that allows for real-time random signal generation for testing radio-frequency receiver hardware (such as communication receivers). In particular, a framework is developed for an implementation of a test signal emulator that allows for user-defined randomization of test signal parameters such as frequencies, signal bandwidths, start times, and durations, as well as communications receiver effects. To test the accuracy of the developed emulation framework, the randomization properties of the framework are analyzed to assure correct probability distributions and independence. Additionally, hardware implementation decisions such as bit precision versus quality of the generated signal and the impact on the hardware footprint are analyzed. Ultimately, it is shown that this framework is easily extensible to other signal types and communication channel models.
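As a rough software analogue of what such an emulator produces (not the FPGA implementation itself), the sketch below draws a random number of bursts with random center frequencies, start times, and durations, sums them into one aggregate capture, and adds complex AWGN. All parameter ranges here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
fs = 1e6                             # sample rate in Hz (assumed)
n = 2**16                            # length of the test capture
t = np.arange(n) / fs
aggregate = np.zeros(n, dtype=complex)

# Draw a random number of bursts, each with random parameters.
for _ in range(rng.integers(3, 8)):
    fc = rng.uniform(-fs / 2, fs / 2)        # random center frequency
    start = int(rng.integers(0, n // 2))     # random start time
    dur = int(rng.integers(n // 8, n // 2))  # random duration
    burst = np.exp(2j * np.pi * fc * t[:dur])
    end = min(start + dur, n)
    aggregate[start:end] += burst[:end - start]

# Receiver/channel effect: complex additive white Gaussian noise.
noise_power = 0.01
aggregate += rng.normal(0, np.sqrt(noise_power / 2), n) \
    + 1j * rng.normal(0, np.sqrt(noise_power / 2), n)
```

Verifying that the drawn frequencies, start times, and durations follow the intended distributions and are mutually independent mirrors the randomization tests described above.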
4

Matematické metody zabezpečení přenosu digitálních dat / Mathematical security methods in digital data transfer

Bartušek, Petr January 2014
This master's thesis deals with the analysis of digital data security using cyclic redundancy check (CRC) codes. The thesis describes the principles of coding theory, in particular CRC-based security, explaining the mathematical principle behind CRC encoding and decoding, its software implementation, and the most frequently used generator polynomials. The main aim of the thesis is to test for undetected errors and to count them; this count is then used to compute the probability with which undetected errors can occur. The thesis is supplemented with several programs written in Matlab.
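To make the CRC mechanism concrete, here is a minimal bitwise sketch (in Python rather than the thesis's Matlab), using the CRC-8 generator polynomial x^8 + x^2 + x + 1 purely as an example. Any single-bit error changes the checksum, while some multi-bit patterns can go undetected, which is exactly what the thesis counts.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 with generator x^8 + x^2 + x + 1 (illustrative)."""
    reg = 0
    for byte in data:
        reg ^= byte                  # bring the next byte into the register
        for _ in range(8):
            if reg & 0x80:           # top bit set: reduce by the generator
                reg = ((reg << 1) ^ poly) & 0xFF
            else:
                reg = (reg << 1) & 0xFF
    return reg

msg = b"digital data"
checksum = crc8(msg)

# A single-bit error is always detected, because the generator
# polynomial has more than one term.
corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]
assert crc8(corrupted) != checksum
```

Enumerating the error patterns that leave the checksum unchanged, as the thesis does, yields the probability of an undetected error.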
5

[en] FAST DECODING PREFIX CODES / [pt] CÓDIGOS DE PREFIXO DE RÁPIDA DECODIFICAÇÃO

LORENZA LEAO OLIVEIRA MORENO 12 November 2003
[pt] Mesmo com a evolução dos dispositivos de armazenamento e comunicação, mantém-se crescente a demanda por mecanismos de compressão de dados mais eficientes. Entre os compressores baseados na freqüência de símbolos, destacam-se os códigos livres de prefixo, que são executados por vários métodos compostos de diferentes algoritmos e também apresentam bom desempenho em uso isolado. Muitas pesquisas trouxeram maior eficiência aos códigos de prefixo, centradas, sobretudo, na redução do espaço de memória necessário e tempo gasto durante a descompressão. O presente trabalho abrange códigos de prefixos e respectivas técnicas de descompressão visando propor um novo codificador, o compressor LTL, que utiliza códigos com restrição de comprimento para reduzir o espaço de memória da tabela Look-up, eficiente método de decodificação. Devido ao uso de códigos restritos, é admitido um pequeno decréscimo nas taxas de compressão para possibilitar uma decodificação mais rápida. Os resultados obtidos indicam perda de compressão inferior a 11 por cento para um modelo baseado em caracteres, com velocidade média de decodificação cinco vezes maior que a de um decodificador canônico. Embora, para um modelo de palavras, o ganho médio de velocidade seja de 3,5, constata-se que, quando o número de símbolos é muito grande, o tamanho da tabela look-up impossibilita uma utilização eficiente da memória cache. Assim, o LTL é indicado para substituir quaisquer códigos de prefixo baseados em caracteres cuja aplicação requer agilidade no processo de descompressão. / [en] Even with the evolution of communication and storage devices, the use of complex data structures, like video and hypermedia documents, keeps increasing the demand for efficient data compression mechanisms. Prefix codes are among the best-known compressors, since they are executed by several compression methods that combine different algorithms, besides presenting good performance when used separately. Many approaches have been tried to improve the decoding speed of these codes. One major reason is that files are compressed and updated only a few times, whereas they have to be decompressed each time they are accessed. This work presents prefix codes and their decoding techniques in order to introduce a new coding scheme, the LTL compressor. In this scheme, length-restricted codes are used to control the space requirements of the look-up table, an efficient and fast prefix-code decoding method. Since restricted codewords are used, a small loss of compression efficiency is admitted. Empirical experiments indicate that this loss in the coded text is smaller than 11 percent if a character-based model is used, and the observed average decoding speed is five times faster than that of canonical codes. For a word-based model, the average decoding speed is 3.5 times faster than a canonical decoder, but it decreases when a large number of symbols is used. Hence, this method is very suitable for applications where a character-based model is used and extremely fast decoding is mandatory.
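A sketch of the look-up-table decoding idea the LTL compressor builds on: once codeword lengths are restricted to at most L bits, a table with 2^L entries maps every possible L-bit window of the input directly to a (symbol, codeword length) pair, so decoding needs no bit-by-bit tree walk. The toy code and names below are illustrative, not the thesis's.

```python
# Look-up-table decoding for a length-limited prefix code (L = 4).
L = 4
codebook = {"a": "0", "b": "10", "c": "110", "d": "111"}  # toy prefix code

table = [None] * (1 << L)
for sym, cw in codebook.items():
    # Every L-bit string that starts with `cw` decodes to `sym`.
    for pad in range(1 << (L - len(cw))):
        index = (int(cw, 2) << (L - len(cw))) | pad
        table[index] = (sym, len(cw))

def decode(bits: str) -> str:
    out, pos = [], 0
    while pos < len(bits):
        window = bits[pos:pos + L].ljust(L, "0")  # pad the final window
        sym, length = table[int(window, 2)]
        out.append(sym)
        pos += length                 # consume only the codeword's bits
    return "".join(out)

print(decode("010110111"))  # -> "abcd"
```

The length restriction matters because the table has 2^L entries; unrestricted codes over large word-based alphabets would make it too big for the cache, which is the effect reported above.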
6

Characterization and Correction of Analog-to-Digital Converters

Lundin, Henrik January 2005
Denna avhandling behandlar analog-digitalomvandling. I synnerhet behandlas postkorrektion av analog-digitalomvandlare (A/D-omvandlare). A/D-omvandlare är i praktiken behäftade med vissa fel som i sin tur ger upphov till distorsion i omvandlarens utsignal. Om felen har ett systematiskt samband med utsignalen kan de avhjälpas genom att korrigera utsignalen i efterhand. Detta verk behandlar den form av postkorrektion som implementeras med hjälp av en tabell ur vilken korrektionsvärden hämtas. Innan en A/D-omvandlare kan korrigeras måste felen i den mätas upp. Detta görs genom att estimera omvandlarens överföringsfunktion. I detta arbete behandlas speciellt problemet att skatta kvantiseringsintervallens mittpunkter. Det antas härvid att en referenssignal finns tillgänglig som grund för skattningen. En skattare som baseras på sorterade data visas vara bättre än den vanligtvis använda skattaren baserad på sampelmedelvärde. Nästa huvudbidrag visar hur resultatet efter korrigering av en A/D-omvandlare kan predikteras. Omvandlaren antas här ha en viss differentiell olinjäritet och insignalen antas påverkad av ett slumpmässigt brus. Ett postkorrektionssystem, implementerat med begränsad precision, korrigerar utsignalen från A/D-omvandlaren. Ett uttryck härleds som beskriver signal-brusförhållandet efter postkorrektion. Förhållandet visar sig bero på den differentiella olinjäritetens varians, det slumpmässiga brusets varians, omvandlarens upplösning samt precisionen med vilken korrektionstermerna beskrivs. Till sist behandlas indexering av korrektionstabeller. Valet av metod för att indexera en korrektionstabell påverkar såväl tabellens storlek som förmågan att beskriva och korrigera dynamiska fel. I avhandlingen behandlas i synnerhet tillståndsmodellbaserade metoder, det vill säga metoder där tabellindex bildas som en funktion utav flera på varandra följande sampel. Allmänt gäller att ju fler sampel som används för att bilda ett tabellindex, desto större blir tabellen, samtidigt som förmågan att beskriva dynamiska fel ökar. En indexeringsmetod som endast använder en delmängd av bitarna i varje sampel föreslås här. Vidare så påvisas hur valet av indexeringsbitar kan göras optimalt, och experimentella utvärderingar åskådliggör att tabellstorleken kan reduceras avsevärt utan att fördenskull minska prestanda mer än marginellt. De teorier och resultat som framförs här har utvärderats med experimentella A/D-omvandlardata eller genom datorsimuleringar. / Analog-to-digital conversion and quantization constitute the topic of this thesis. Post-correction of analog-to-digital converters (ADCs) is considered in particular. ADCs usually exhibit non-ideal behavior in practice. These non-idealities spawn distortions in the converter's output. Whenever the errors are systematic, it is possible to mitigate them by mapping the output into a corrected value. The work herein is focused on problems associated with post-correction using look-up tables. All results presented are supported by experiments or simulations. The first problem considered is characterization of the ADC. This is in fact an estimation problem, where the transfer function of the converter should be determined. This thesis deals with estimation of quantization region midpoints, aided by a reference signal. A novel estimator based on order statistics is proposed, and is shown to have superior performance compared with the sample mean traditionally used. The second major area deals with predicting the performance of an ADC after post-correction. A converter with static differential nonlinearities and random input noise is considered. A post-correction is applied, but with limited (fixed-point) resolution in the corrected values. An expression for the signal-to-noise and distortion ratio after post-correction is provided. It is shown that the performance depends on the variance of the differential nonlinearity, the variance of the random noise, the resolution of the converter, and the precision of the correction values. Finally, the problem of addressing, or indexing, the correction look-up table is dealt with. The indexing method determines both the memory requirements of the table and the ability to describe and correct dynamically dependent error effects. The work here is devoted to state-space-type indexing schemes, which determine the index from a number of consecutive samples. There is a tradeoff between table size and dynamics: more samples used for indexing give a higher dependence on dynamics, but also a larger table. An indexing scheme that uses only a subset of the bits in each sample is proposed. It is shown how the selection of bits can be optimized, and the exemplary results show that a substantial reduction in memory size is possible with only marginal reduction of performance.
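The sketch below illustrates both table ideas from the thesis in simplified form: a static correction table indexed by the raw output code, and a state-space-type variant whose index is built from a subset of the bits of the current and previous samples. The table contents, bit splits, and sizes are placeholders; real correction values would come from the characterization step.

```python
import numpy as np

BITS = 8                           # ADC resolution (illustrative)
rng = np.random.default_rng(0)

# Static correction table: one additive correction per output code.
# In practice these come from the estimated transfer function; here
# they are random placeholders.
static_corr = rng.normal(0, 0.3, size=1 << BITS)

def correct(samples: np.ndarray) -> np.ndarray:
    """Static look-up-table post-correction, indexed by the raw code."""
    return samples + static_corr[samples]

# State-space-type indexing: combine the top 4 bits of the current
# sample with the top 2 bits of the previous one, so the table stays
# small while capturing some dynamic dependence.
dyn_corr = rng.normal(0, 0.3, size=1 << 6)

def correct_dynamic(samples: np.ndarray) -> np.ndarray:
    prev = np.roll(samples, 1)     # previous sample (wraps at index 0)
    index = ((samples >> 4) << 2) | (prev >> 6)
    return samples + dyn_corr[index]

codes = rng.integers(0, 1 << BITS, size=10)
print(correct(codes), correct_dynamic(codes))
```

Using 4 + 2 index bits instead of two full 8-bit samples shrinks the table from 2^16 to 2^6 entries, the kind of memory saving the optimized bit selection targets.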
7

Multi-precision Function Interpolator for Multimedia Applications

Cheng, Chien-Kang 25 July 2012
A multi-precision function interpolator, compliant with the IEEE-754 single-precision floating-point standard, is proposed in this thesis. It provides logarithm, exponential, reciprocal and reciprocal square root operations, and each operation can dynamically select among four precision modes on demand. The hardware architecture is fully pipelined in order to comply with the hardware architectures of general digital signal processors (DSPs) and graphics processors (GPUs). To keep every precision mode useful, the design minimizes the error across the modes as far as possible. From the highest precision to the lowest, the function interpolator provides 23, 18, 13 and 8 bits of accuracy respectively, despite rounding effects. The interpolator is designed around the look-up table method: it approximates the target function by evaluating a quadratic polynomial whose coefficients are obtained by piecewise minimax approximation. Before implementing the hardware, we use the Maple algebra software to generate the quadratic polynomial coefficients for the four operations and to verify that these coefficients can meet the IEEE-754 single-precision standard. In addition, we exhaustively check the results produced by our implementation to make sure that it meets the requirements of all operations and precision modes. When one of the four operations is performed, only that operation's tables are used to obtain the quadratic polynomial coefficients, so tri-state buffers can act as switches to cut the dynamic power consumed by the tables of the other three operations. Moreover, in the lower precision modes, part of the hardware that evaluates the quadratic polynomial can be turned off to save power more effectively. By providing multi-precision hardware, we hope that users and developers of battery-powered devices can choose a lower precision mode, within the permissible error range, to extend battery life.
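The table-plus-quadratic evaluation can be sketched as follows: the input interval is split into segments, each segment stores coefficients (c0, c1, c2), and the result is c0 + c1·x + c2·x² evaluated in Horner form on the offset into the segment. Here a least-squares fit for 1/x on [1, 2) stands in for the minimax coefficients the thesis generates with Maple; the segment count and target function are illustrative.

```python
import numpy as np

SEGMENTS = 64                        # table entries (illustrative)
edges = np.linspace(1.0, 2.0, SEGMENTS + 1)

# Per-segment quadratic coefficients for f(x) = 1/x. A least-squares
# fit stands in for the minimax fit used in the thesis.
coeffs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    xs = np.linspace(lo, hi, 32)
    c2, c1, c0 = np.polyfit(xs - lo, 1.0 / xs, 2)
    coeffs.append((c0, c1, c2))

def reciprocal(x: float) -> float:
    """Table-based quadratic approximation of 1/x on [1, 2)."""
    seg = min(int((x - 1.0) * SEGMENTS), SEGMENTS - 1)
    c0, c1, c2 = coeffs[seg]
    dx = x - edges[seg]
    return c0 + dx * (c1 + dx * c2)  # Horner evaluation

x = 1.37
print(reciprocal(x), 1.0 / x)        # the two values agree closely
```

In hardware, the segment index would come from the leading mantissa bits, and the lower precision modes can shut down part of the polynomial datapath, as described above.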
8

The effects of multimedia annotations on L2 vocabulary immediate recall and reading comprehension: A comparative study of text-picture and audio-picture annotations under incidental and intentional learning conditions

Chen, Zhaohui 01 June 2006
This dissertation investigated the effects of multimedia annotations on L2 vocabulary learning and reading comprehension. The overarching objective was to compare the effects of text-picture and audio-picture annotations on L2 vocabulary immediate recall and reading comprehension, and to examine how these effects differ under incidental and intentional learning conditions. The participants were 78 intermediate adult ESL learners from three universities in the northwestern U.S. The participants read an Internet-based English text in which twenty target words, annotated with either text-picture or audio-picture annotations, were embedded; they accessed the annotations by clicking on the highlighted target words. Two instruments measured vocabulary immediate recall: the Vocabulary Knowledge Scale and a Word Recognition Test. Two measurements assessed reading comprehension: multiple-choice Reading Comprehension Questions and an L1 Written Recall. In terms of annotation type, the results indicated that the audio-picture annotation group did significantly better than the text-picture group on L2 vocabulary immediate recall; however, the two annotation types did not differ significantly in their effect on L2 reading comprehension. In terms of learning condition, the intentional learning condition resulted in significantly better performance in L2 vocabulary immediate recall than the incidental learning condition. However, the incidental learning condition resulted in significantly better L2 reading comprehension than the intentional learning condition only on the Written Recall measure, not on the multiple-choice Reading Comprehension Test. In terms of interaction between annotation type and learning condition, there was no interaction on L2 vocabulary immediate recall, and the interaction on L2 reading comprehension was not significant for the multiple-choice Reading Comprehension Test. However, the interaction was significant for Written Recall: in the incidental learning condition, the difference between text-picture and audio-picture annotations was not significant, while in the intentional learning condition, participants in the text-picture group did significantly better than those in the audio-picture group.
9

A CONTROL MECHANISM TO THE ANYWHERE PIXEL ROUTER

Krishnan, Subhasri 01 January 2007
Traditionally, large-format displays have been achieved using software. A new technique using hardware-based anywhere pixel routing is explored in this thesis. Information stored in a Look-Up Table (LUT) in the hardware can be used to tile two image streams to produce a seamless image display. This thesis develops a one-input-image, one-output-image system that implements arbitrary image warping based on a LUT stored in memory. The system's control mechanism is first validated using simulation results. It is then validated via implementation on a Field Programmable Gate Array (FPGA) based hardware prototype and appropriate experimental testing: the contents of the LUT were changed, and the resulting changes to the pixel mapping were observed to be correct in every case.
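A software model of the routing idea, with invented dimensions and an arbitrary warp: the LUT holds, for every output pixel, the source coordinates to read, so changing the mapping means rewriting table entries rather than redesigning the datapath.

```python
import numpy as np

H, W = 4, 6   # tiny output image for illustration

# The LUT maps each output pixel to a source pixel (row, col). A real
# system would load this from memory; here we build a horizontal flip.
lut = np.empty((H, W, 2), dtype=np.int32)
for r in range(H):
    for c in range(W):
        lut[r, c] = (r, W - 1 - c)

def route(src: np.ndarray) -> np.ndarray:
    """Warp `src` with one LUT read per output pixel."""
    rows, cols = lut[..., 0], lut[..., 1]
    return src[rows, cols]

src = np.arange(H * W).reshape(H, W)
print(route(src))   # each row reversed: the flip encoded in the LUT
```

Re-running route after writing a different mapping into lut is the software analogue of the LUT-rewriting validation described above.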
