51

Low Power Register Exchange Viterbi Decoder for Wireless Applications

El-Dib, Dalia January 2004
Since the invention of wireless telegraphy by Marconi in 1897, wireless technology has not only been enhanced, but has also become an integral part of our everyday lives. The first wireless mobile phone appeared around 1980. It was based on first-generation analog technology that used Frequency Division Multiple Access (FDMA) techniques. Ten years later, second generation (2G) mobiles relied on Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA) techniques. Nowadays, third generation (3G) mobile systems depend on CDMA techniques to satisfy the need for faster and higher-capacity data transmission in mobile wireless networks. Wideband CDMA (WCDMA) has become the major 3G air interface in the world. WCDMA employs convolutional encoding to encode voice and MPEG4 applications in the baseband transmitter at a maximum rate of 2 Mbps. To decode convolutional codes, Andrew Viterbi invented the Viterbi Decoder (VD) in 1967. In 2G mobile terminals, the VD accounts for approximately one third of the power consumption of a baseband mobile transceiver. Thus, in 3G mobile systems, it is essential to reduce the power consumption of the VD.

Conceptually, the Register Exchange (RE) method is simpler and faster than the Trace Back (TB) method for implementing the VD. However, in the RE method, each bit in the memory must be read and rewritten for each bit of information that is decoded. Therefore, the RE method is not appropriate for decoders with long constraint lengths. Although researchers have focused on implementing and optimizing the TB method, this thesis focuses on the RE method and modifies it to reduce its power consumption. This thesis proposes a novel modified RE method that adopts a pointer concept for implementing the survivor memory unit (SMU) of the VD. A pointer is assigned to each register or memory location; the pointer that points to one register is altered to point to a second register, instead of copying the contents of the first register to the second. When the pointer concept is applied to the RE SMU implementation (modified RE), there is no need to copy the contents of the SMU and rewrite them, but one row of memory is still needed for each state of the VD; thus, the VDs in CDMA systems require 256 rows of memory. Applying the pointer concept reduces the VD's power consumption by 20 percent, as estimated by the VHDL synthesis tool and by the new power reduction estimation method introduced in this work. The coding gain of the modified RE method is 2.6 dB at an SNR of approximately 10⁻³.

Furthermore, a novel zero-memory implementation of the modified RE method is proposed. If the initial state of the convolutional encoder is known, the entire SMU of the modified RE VD is reduced to only one row, and because the decoded data is generated in the required order, even this row of memory is dispensable. The zero-memory architecture is called the MemoryLess Viterbi Decoder (MLVD), and it reduces the power consumption by approximately 50 percent. A prototype of the MLVD with a one-third convolutional code rate and a constraint length of nine is mapped onto a Xilinx 2V6000 chip, operating at 25 MHz with a decoding throughput of more than 3 Mbps and a latency of two data bits. The other problem of the VD addressed in this thesis is the Add Compare Select Unit (ACSU), which is composed of 128 butterfly ACS modules.
The ACSU's high degree of parallelism has previously been addressed by using a bit-serial implementation. The 8-bit First-In First-Out (FIFO) register needed to store each path metric (PM) is at the heart of the single bit-serial ACS butterfly module. A new, simply controlled shift register is designed at the circuit level and integrated into the ACS module, and a chip for the new module is also fabricated.
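The copy-and-rewrite cost of the register-exchange survivor memory discussed above is easiest to see in code. Below is a minimal hard-decision Viterbi decoder sketch in Python for a toy rate-1/2, constraint-length-3 code (generators 7 and 5 in octal), in which every survivor register is copied in full at every step; this is exactly the cost the thesis's pointer-based and memoryless SMU variants are designed to remove. The toy code, the hard-decision metric, and all names here are illustrative assumptions, not the thesis's rate-1/3, constraint-length-9 design.

```python
# Minimal hard-decision Viterbi decoder using register-exchange (RE) survivor
# memory: each state keeps a full copy of its decoded-bit history, and every
# copy is rewritten on every step.
# Toy code: rate 1/2, constraint length K = 3, generators (7, 5) octal.

def conv_encode(bits, K=3, gens=(0b111, 0b101)):
    state = 0
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state                 # newest bit in the MSB
        out.extend((bin(reg & g).count("1") & 1) for g in gens)
        state = reg >> 1
    return out

def viterbi_re_decode(received, K=3, gens=(0b111, 0b101)):
    n_states = 1 << (K - 1)
    INF = float("inf")
    pm = [0.0] + [INF] * (n_states - 1)              # path metrics, start in state 0
    surv = [[] for _ in range(n_states)]             # RE survivor registers
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_pm = [INF] * n_states
        new_surv = [None] * n_states
        for s in range(n_states):
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                nxt = reg >> 1
                expected = [(bin(reg & g).count("1") & 1) for g in gens]
                metric = pm[s] + sum(x != y for x, y in zip(expected, r))
                if metric < new_pm[nxt]:
                    new_pm[nxt] = metric
                    new_surv[nxt] = surv[s] + [b]    # full register copy each step
        pm, surv = new_pm, new_surv
    return surv[pm.index(min(pm))]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]                 # trailing zeros flush the encoder
assert viterbi_re_decode(conv_encode(msg)) == msg
```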
52

Calcium/Calmodulin-Dependent Protein Kinase II Serves as a Biochemical Integrator of Calcium Signals for the Induction of Synaptic Plasticity

Chang, Jui-Yun January 2016
Repetitive Ca2+ transients in dendritic spines induce various forms of synaptic plasticity by transmitting information encoded in their frequency and amplitude. CaMKII plays a critical role in decoding these Ca2+ signals to initiate long-lasting synaptic plasticity. However, the properties of CaMKII that mediate Ca2+ decoding in spines remain elusive. Here, I measured CaMKII activity in spines using fast-framing two-photon fluorescence lifetime imaging. Following each repetitive Ca2+ elevation, CaMKII activity increased in a stepwise manner. This signal integration, on the time scale of seconds, depended critically on Thr286 phosphorylation. In the absence of Thr286 phosphorylation, high peak CaMKII activity or plasticity could be induced only by increasing the frequency of the repetitive Ca2+ elevations. In addition, I measured the association between CaMKII and Ca2+/CaM during the induction of spine plasticity. Unlike CaMKII activity, the association of Ca2+/CaM with CaMKII plateaued at the first Ca2+ elevation event. This result indicates that integration of Ca2+ signals is initiated by the binding of Ca2+/CaM and amplified by subsequent increases in the Thr286-phosphorylated form of CaMKII. Together, these findings demonstrate that CaMKII functions as a leaky integrator of repetitive Ca2+ signals during the induction of synaptic plasticity, and that Thr286 phosphorylation is critical for defining the frequencies over which such integration occurs.
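As a rough illustration of the "leaky integrator" behavior described above, the sketch below models CaMKII activity as a quantity that jumps by a fixed step at each Ca2+ pulse and decays exponentially between pulses, so closer pulse spacing (higher frequency) leaves less time for decay and yields a higher peak. The increment and time constant are illustrative assumptions, not measurements from the dissertation.

```python
# Toy "leaky integrator" picture of CaMKII activation: each Ca2+ pulse adds a
# fixed step of activity, which decays exponentially until the next pulse.
# The increment and time constant are illustrative, not fitted values.
import math

def peak_activity(n_pulses, interval_s, increment=1.0, tau_s=10.0):
    """Activity immediately after the last of n_pulses spaced interval_s apart."""
    activity = 0.0
    for i in range(n_pulses):
        if i > 0:
            activity *= math.exp(-interval_s / tau_s)  # leak since the previous pulse
        activity += increment                          # stepwise jump on each pulse
    return activity

# The same number of pulses integrates to a higher peak at higher frequency.
print(peak_activity(30, interval_s=1.0))    # ~10.0
print(peak_activity(30, interval_s=10.0))   # ~1.6
```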
53

Is my reading better or is my memory good? : Primary school children's word decoding ability as a result of repeated test administration and retest-effect prevalence in LäSt

Hatic, Amer, Popovici Valenzuela, Mikaela January 2019
Approximately 20% of the population meet the criteria for reading and writing difficulties. In order to support students with such difficulties, reliable and valid reading tests are needed. LäSt is a standardised Swedish reading test in wide national use for measuring children's word decoding ability. Decoding is the mapping of single letters and words to their correct speech sounds, and therefore constitutes a central technical aspect of reading. There is evidence that some common psychological tests, including reading tests, display a retest effect, whereby individuals' results improve as a result of repeated test administration. The aim of the current study is to investigate whether such a retest effect appears when LäSt is repeatedly administered to a sample of students in years 4 and 6.

In total, 92 students from 4 schools in Sweden took part in the study. They completed LäSt at three separate times during an interval of 8 weeks: testing 1 in the first week, testing 2 in the second week, and testing 3 in the eighth week. A significant retest effect was found, indicating that students' results on LäSt improved with each administration. There was no difference in retest effect between boys and girls, between students in year 4 and year 6, or between monolingual and multilingual students.
A significant difference did emerge between students with very low word decoding ability (poor decoders) and those with very high word decoding ability (good decoders): poor decoders showed a stronger retest effect on the sub-test Words than good decoders, while no such effect was found for the sub-test Non-words. These results indicate that repeated testing with LäSt can yield improved scores over time due to familiarity with the test material. This has implications for how students' score development under repeated testing should be interpreted, and for how often LäSt can appropriately be administered to the same student. The findings and their implications are discussed in relation to previous research.
54

An Assessment of Available Software Defined Radio Platforms Utilizing Iterative Algorithms

Ferreira, Nathan 04 May 2015
As the demands on communication systems have become more complex and varied, software defined radios (SDRs) have become increasingly popular. With behavior that can be modified in software, SDRs provide a highly flexible and configurable development environment. Despite this programmable behavior, the maximum performance of an SDR is still rooted in its hardware. This limitation, and the desire to use SDRs in different applications, has led to a variety of hardware platforms for SDR. These platforms differ in aspects such as their performance limits, implementation details, and cost, so the choice of SDR platform is not solely a question of hardware cost and should be examined closely before a final decision is made. This thesis examines the SDR platform families available on the market today and compares the advantages and disadvantages of each during development. As many different types of hardware can be used to implement an SDR, this thesis focuses specifically on general-purpose processor, system-on-chip, and field-programmable gate array implementations. The Freescale BSC9131 is chosen to represent the system-on-chip implementation, while the Nutaq PicoSDR 2x2 Embedded with Virtex6 SX315 is used for the remaining two options. To test each of these platforms, a Viterbi decoding algorithm is implemented on each and its performance measured. This measurement considers both how quickly the platform performs the decoding and its bit error rate, in order to ascertain each implementation's accuracy. Other factors considered when comparing the platforms are their flexibility and the range of options available for development. After testing, the details of each implementation are discussed and guidelines for choosing a platform are suggested.
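The benchmark described above boils down to two measurements per platform: decoding speed and bit error rate. The sketch below shows that kind of measurement harness in Python with a placeholder decoder and a simple bit-flip channel; the real targets would be the Viterbi implementations on the DSP, SoC, and FPGA platforms, and the frame size and flip probability here are illustrative assumptions.

```python
# Sketch of the benchmark: feed a decoder noisy frames, then report decode
# throughput and bit error rate.  The decoder below is a placeholder; on a
# real platform it would be the Viterbi implementation under test.
import random, time

def bsc_flip(bits, flip_prob):
    """Binary-symmetric-channel stand-in for noise: flip each bit with flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def benchmark(decode, n_frames=100, frame_bits=1024, flip_prob=0.01):
    errors, total, t0 = 0, 0, time.perf_counter()
    for _ in range(n_frames):
        tx = [random.getrandbits(1) for _ in range(frame_bits)]
        rx = bsc_flip(tx, flip_prob)          # encoding omitted in this placeholder
        decoded = decode(rx)
        errors += sum(a != b for a, b in zip(tx, decoded))
        total += frame_bits
    elapsed = time.perf_counter() - t0
    return {"ber": errors / total, "throughput_bps": total / elapsed}

# With an identity "decoder" the measured BER is simply the channel flip rate.
print(benchmark(lambda rx: rx))
```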
55

Voice Codec for Floating Point Processor

Ross, Johan, Engström, Hans January 2008
As part of an ongoing project at the Department of Electrical Engineering (ISY) at Linköping University, a voice decoder using floating-point formats has been the focus of this master thesis. Previous work has been done developing an mp3 decoder using the same floating-point formats, and everything is expected to be implemented on a single DSP.

The ever-present desire to make things smaller, more efficient, and less power-consuming is the main reason for this thesis to consider a floating-point format instead of the traditional integer format in a GSM codec. The idea behind the low-precision floating-point format is to reduce the size of the memory, which in turn reduces the total chip area needed and decreases the power consumption. One main question is whether this can be done without losing too much speech quality. With the integer format, every value in the range can be represented, depending on how many bits are used. With a floating-point format, larger values can be represented with fewer bits than in the integer format, but some values can no longer be represented exactly and must be rounded.

From the tests made with the decoder during this thesis, the audible difference between the two formats is very small and can hardly be heard, if at all. The rounding appears to have very little effect on the sound quality, and the implementation of the codec succeeds in reproducing sound quality similar to that of the GSM standard decoder.
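The trade-off described above, wide dynamic range versus exact representation, can be sketched with a toy low-precision float. The example below rounds values to 5 fraction bits with an implicit leading 1 (an illustrative width, not the format used in the thesis) and shows which values survive exactly and which must be rounded; exponent-range handling is omitted.

```python
# Illustration of the trade-off: a small floating-point format covers a wide
# dynamic range but must round most values, while an integer of the same
# width represents every value in its (much smaller) range exactly.
import math

def quantize_minifloat(x, man_bits=5):
    """Round x to the nearest value with man_bits fraction bits.
    Exponent-range handling (overflow, subnormals) is omitted here."""
    if x == 0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    e = math.floor(math.log2(abs(x)))
    mantissa = round(abs(x) / 2.0 ** e * (1 << man_bits)) / (1 << man_bits)
    return sign * mantissa * 2.0 ** e

for v in (3.0, 100.0, 517.0, 13000.0):
    print(v, "->", quantize_minifloat(v))
# 3.0 and 100.0 are exact; 517.0 rounds to 512.0 and 13000.0 to 13056.0.
# This rounding error is what the listening tests weigh against the memory
# savings of the narrower format.
```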
56

Design of Single Scalar DSP based H.264/AVC Decoder

Tiejun Hu, Di Wu January 2005
H.264/AVC is a new video compression standard designed for future broadband networks. Compared with earlier video coding standards such as MPEG-2 and MPEG-4 Part 2, it saves up to 40% in bit rate and provides important features such as error resilience and stream switching. However, the improvement in performance also brings an increase in computational complexity, which requires more powerful hardware. At the same time, several image and video coding standards such as JPEG and MPEG-4 are currently in use. Although an ASIC design meets the performance requirement, it lacks flexibility across heterogeneous standards. A reconfigurable DSP processor is therefore more suitable for media processing, since it provides both real-time performance and flexibility.

There are currently several single scalar DSP processors on the market. Compared to media processors, which are generally SIMD or VLIW machines, a single scalar DSP is cheaper and smaller in area, while its performance for video processing is limited. In this thesis, a method to improve the performance of a single scalar DSP by attaching hardware accelerators is proposed. The bottleneck for this performance improvement is investigated, and the upper limit of acceleration of a given single scalar DSP for H.264/AVC decoding is presented.

As a first step, a behavioral model of the H.264/AVC decoder was realized in pure software. Although real-time performance cannot be achieved with a pure software implementation, the computational complexity of the different parts was investigated and the critical path in decoding was exposed by analyzing this first software design. Both functional acceleration and addressing acceleration were then investigated and designed to achieve real-time decoding at a clock frequency within 200 MHz.
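The "upper limit of acceleration" mentioned above is essentially an Amdahl's-law bound: whatever fraction of the decoding cycles stays on the scalar DSP caps the overall speed-up, no matter how fast the attached accelerators are. The sketch below works this out with illustrative cycle fractions, not profiling data from the thesis.

```python
# Amdahl's-law style estimate of the upper limit of acceleration: if only a
# fraction of the decoding cycles can be offloaded to accelerators, the
# remaining scalar code bounds the overall speed-up.
def overall_speedup(accelerated_fraction, accelerator_speedup):
    serial = 1.0 - accelerated_fraction
    return 1.0 / (serial + accelerated_fraction / accelerator_speedup)

# Even with infinitely fast accelerators, 20% un-accelerated code caps the
# speed-up at 1 / 0.2 = 5x.
for s in (2, 10, 100, float("inf")):
    print(s, "->", round(overall_speedup(0.8, s), 2))
```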
57

Development of a low power hand-held device in a low budget manner

Kagerin, Anders, Karlsson, Michael January 2006
The market for portable digital audio players (DAPs) has literally exploded over the last couple of years. Other markets have grown as well: PDAs, GPS receivers, mobile phones, and so on. This has resulted in more advanced ICs and SoCs becoming publicly available, eliminating the need for in-house ASICs and thus enabling smaller actors to enter these markets.

This thesis explores the possibilities of developing a low-power, hand-held device on a very limited budget and a strict time scale.

The thesis report also covers all the steps taken in the development procedure.
58

Behavioral Model of an Instruction Decoder of Motorola DSP56000 Processor

Krishna Kumar, Guda January 2006
This thesis is part of an effort to make a scalable behavioral model of a central processing unit with an instruction set compatible with the DSP56000 processor. The goal of the design is to reduce the critical path, silicon area, and power consumption of the instruction decoder.

The instruction decoder performs three different types of operations: instruction fetching, decoding, and execution. Using these three steps, an efficient model is to be designed that achieves the shortest critical path, the smallest silicon area, and low power consumption.
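For readers unfamiliar with the three operations named above, the sketch below shows a minimal behavioral-model-style fetch/decode/execute loop. The two toy opcodes and the 16-bit instruction word are illustrative assumptions, not DSP56000 instructions.

```python
# Minimal behavioral-model style fetch/decode/execute loop, in the spirit of
# the three stages named above.  Toy opcodes only, not DSP56000 instructions.
def run(program, memory):
    pc, acc = 0, 0
    while pc < len(program):
        word = program[pc]                           # fetch
        opcode, operand = word >> 8, word & 0xFF     # decode: 8-bit opcode, 8-bit operand
        pc += 1
        if opcode == 0x01:                           # execute: LOAD acc from memory
            acc = memory[operand]
        elif opcode == 0x02:                         # execute: ADD memory to acc
            acc += memory[operand]
        else:
            raise ValueError(f"unknown opcode {opcode:#x}")
    return acc

# acc = mem[0] + mem[1] = 7
print(run([0x0100, 0x0201], memory=[3, 4]))
```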
59

Parallel-Node Low-Density Parity-Check Convolutional Code Encoder and Decoder Architectures

Brandon, Tyler 06 1900
We present novel architectures for parallel-node low-density parity-check convolutional code (PN-LDPC-CC) encoders and decoders. Based on a recently introduced implementation-aware class of LDPC-CCs, these encoders and decoders take advantage of increased node parallelization to simultaneously decrease the energy per bit and increase the decoded information throughput. A series of progressively improved encoder and decoder designs is presented and characterized using synthesis results with respect to power, area, and throughput. The best of the encoder and decoder designs significantly advance the state of the art in terms of both the energy-per-bit and throughput/area metrics. One of the presented decoders, at an Eb/N0 of 2.5 dB, achieves a bit error rate of 10⁻⁶, occupies 4.5 mm² in a 90-nm CMOS process, and delivers an energy per decoded information bit of 65 pJ at a decoded information throughput of 4.8 Gbit/s. We also implement an earlier non-parallel-node LDPC-CC encoder, decoder, and channel emulator in silicon. Via two sets of tables, readers can look up our decoder hardware metrics, across four different process technologies, for over 1000 variations of our PN-LDPC-CC decoders. By imposing practical decoder implementation constraints on power or area, which in turn drive trade-offs between code size and the number of decoder processors, we compare the code BER performance. An extensive comparison to known LDPC-BC/CC decoder implementations is provided.
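The two headline figures quoted above are directly related: energy per decoded information bit multiplied by decoded information throughput gives the decoder's power, roughly 0.31 W for the numbers reported. A one-line check:

```python
# Consistency check of the figures quoted above:
# power = (energy per decoded information bit) x (decoded information throughput).
energy_per_bit_j = 65e-12      # 65 pJ per decoded information bit
throughput_bps = 4.8e9         # 4.8 Gbit/s decoded information throughput
print(f"decoder power ~= {energy_per_bit_j * throughput_bps:.3f} W")   # ~0.312 W
```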
60

A novel high-speed trellis-coded modulation encoder/decoder ASIC design

Hu, Xiao 03 September 2003
Trellis-coded Modulation (TCM) is used in bandlimited communication systems. TCM improves coding gain by combining modulation and forward error correction coding in one process. In TCM, no bandwidth expansion is required because the same symbol rate and power spectrum are used; the differences are the introduction of a redundancy bit and the use of a constellation with twice as many points.

In this thesis, a novel TCM encoder/decoder ASIC implementation is presented. This ASIC codec not only increases decoding speed but also reduces hardware complexity. The algorithm and technique are presented for a 16-state convolutional code used in standard 256-QAM wireless systems. In the decoder, the Hamming distance is used as the cost function to determine the output of the maximum-likelihood Viterbi decoder. Using the relationship between the delay states and the path state in the trellis of the code, pre-calculated Hamming distances are stored in a look-up table. In addition, an output look-up table, built from the two relative delay states of the code, is generated to determine the decoder output. The thesis provides details of the algorithm and the structure of the TCM codec chip. Besides parallel processing, the ASIC implementation also uses pipelining to further increase decoding speed.

The codec was implemented as an ASIC in standard 0.18 µm CMOS technology; the ASIC core occupies a silicon area of 1.1 mm². All register-transfer-level code of the codec was simulated and synthesized. The chip layout was generated and the final chip was fabricated by Taiwan Semiconductor Manufacturing Company through the Canadian Microelectronics Corporation. Functional testing of the fabricated codec was partially successful; timing testing could not be fully completed because the chip was not always stable.
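The look-up-table idea described above replaces per-branch distance computation with a table read. The sketch below pre-computes Hamming distances between all pairs of short symbols; the 4-bit symbol width is an illustrative assumption, not the table organization of the fabricated 16-state / 256-QAM codec.

```python
# Pre-computed Hamming-distance look-up table: distances between all pairs of
# short symbols are tabulated once, so the decoder replaces per-branch
# XOR/popcount logic with a table read.
SYMBOL_BITS = 4   # illustrative symbol width

HAMMING_LUT = [
    [bin(a ^ b).count("1") for b in range(1 << SYMBOL_BITS)]
    for a in range(1 << SYMBOL_BITS)
]

def branch_metric(received_symbol, expected_symbol):
    return HAMMING_LUT[received_symbol][expected_symbol]

print(branch_metric(0b1011, 0b0010))   # the symbols differ in 2 bit positions -> 2
```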
