
Development and Analysis of Synchronization Process Control Algorithms in a Dual Clutch Transmission

Gustavsson, Andreas January 2009 (has links)
The Dual Clutch Transmission (DCT) is a relatively new kind of transmission which offers increased efficiency and comfort compared to manual transmissions. Its construction is much like two parallel manual transmissions, where the gear-shifts are controlled automatically. The gear-shift of a manual transmission involves a synchronization process, which synchronizes and locks the input shaft to the output shaft via the desired gear ratio. This process, which amounts to moving a synchronizer sleeve, is performed by moving the gear-shift lever, which is connected to the sleeve. In a DCT there is no mechanical connection between the gear-shift lever and the sleeve. Hence an actuator system, governed by a control system, must be used. This report covers modelling, control system design and simulation of a DCT synchronization process. The thesis work was performed at GM Powertrain (GMPT) in Trollhättan. At the time of this thesis, GM produces no DCT, and the results and conclusions therefore rely on simulations. Most of the system parameters used are reasonable values obtained from employees at GMPT and from manual transmission literature. The focus of the control design is to achieve a smooth, rather than fast, movement of the synchronizer sleeve. Simulations show that a synchronization process can be performed in less than 400 ms under normal conditions. The biggest problems in controlling the sleeve position occur when a large drag torque acts on the input shaft. Delays also degrade performance considerably. An attempt is made to predict the synchronizer sleeve position, and simulations show the advantages of doing so. Some further work is needed before the developed control software can be used on a real DCT. The most important open issues are robustness against sensor noise and the impact of dogging forces. Additional functionality for handling special conditions is also needed.
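The abstract mentions that delays degrade control performance and that predicting the sleeve position helps. A minimal sketch of one such predictor, linear extrapolation of the measured position across a known dead time, is shown below; the function name, sample values and the assumption of a constant-velocity model are all illustrative, not taken from the thesis.

```python
def predict_position(pos_history, dt, delay):
    """Extrapolate the synchronizer sleeve position across a known
    measurement/actuation dead time, using the latest finite-difference
    velocity estimate (assumes roughly constant velocity over the delay).

    pos_history: two most recent position samples [previous, current]
    dt:          sampling interval in seconds
    delay:       total dead time to compensate, in seconds
    """
    prev, curr = pos_history
    velocity = (curr - prev) / dt    # finite-difference velocity estimate
    return curr + velocity * delay   # linear extrapolation over the delay

# Sleeve moving 2 mm per 10 ms sample, 20 ms dead time:
p = predict_position([10.0, 12.0], dt=0.010, delay=0.020)  # ~16 mm
```

For constant-velocity motion the prediction is exact; during hard acceleration a real controller would need a higher-order model or a state observer.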

DSP Platform Benchmarking

Xinyuan, Luo January 2009 (has links)
Benchmarking of DSP kernel algorithms was conducted on a DSP processor used for teaching in the course TESA26 at the Department of Electrical Engineering, covering both cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and to suggest corresponding improvements to the instruction set architecture. The scope is limited to benchmarking the processor with assembly code only; the quality of the compiler is not examined. The benchmarking method follows BDTI (Berkeley Design Technology, Inc.), the methodology generally used in the worldwide DSP industry. The proposed assembly instruction set improvements include enhancements for FFT and DCT. The cycle cost of the new FFT benchmark based on the proposal was XX% lower, indicating that the proposal is sound. The results also show that the proposal improves the cycle-cost score for matrix computations, especially matrix multiplication. The benchmark results were compared with general scores for single-MAC DSP processors published by BDTI.
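Cycle-count benchmarking of a single-MAC DSP often comes down to simple models like the one sketched below: one MAC per tap per output, plus whatever loop overhead the ISA cannot hide. This is a generic illustration of how an ISA feature (here, zero-overhead hardware loops) changes a benchmark score; the numbers and the model are assumptions, not BDTI's actual methodology or the thesis's figures.

```python
def fir_cycles(n_taps, n_outputs, loop_overhead):
    """Rough cycle count for an N-tap FIR on a single-MAC DSP:
    one MAC per tap per output, plus per-output loop overhead.
    loop_overhead is 0 when the ISA provides zero-overhead loops."""
    return n_outputs * (n_taps + loop_overhead)

# 16-tap FIR over a 256-sample block:
plain = fir_cycles(16, 256, loop_overhead=3)  # software loop: 3 cycles/output
zol = fir_cycles(16, 256, loop_overhead=0)    # zero-overhead loop hardware
saving = 100.0 * (plain - zol) / plain        # percentage cycle-cost reduction
```

The same pattern (count the inner-loop operations, then account for ISA-dependent overhead) is how proposed instruction-set enhancements are typically evaluated before implementation.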

Benchmarking of Sleipnir DSP Processor, ePUMA Platform

Murugesan, Somasekar January 2011 (has links)
Choosing the right processor for an embedded application, designing a new processor, or selling a processor all require knowing how it stacks up against the competition and communicating its performance credibly to customers, which makes benchmarking very important. Benchmarks are recognized worldwide by processor vendors and customers alike as the fact-based way to evaluate and communicate embedded processor performance. This thesis describes in detail the benchmarking of the ePUMA multiprocessor developed by the Division of Computer Engineering, ISY, Linköping University, Sweden. A number of typical digital signal processing algorithms are chosen as benchmarks. These benchmarks have been implemented in assembly code, and their performance is measured in terms of clock cycles and in terms of root mean square error relative to results computed in double precision. The ePUMA multiprocessor platform, which comprises the Sleipnir DSP processor and the Senior DSP processor, was used to implement the DSP algorithms. MATLAB built-in models were used as references against which the assembly implementations were compared to derive the root mean square error of each algorithm. The execution times of the DSP algorithms range from 51 to 6148 clock cycles, and the root mean square error varies between 0.0003 and 0.11.
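The RMS-error metric used here compares a fixed-point implementation's output against a double-precision reference. A minimal sketch of that comparison, using Q15 quantization as a stand-in for the DSP's arithmetic (the Q15 format and the test signal are illustrative assumptions, not ePUMA specifics):

```python
import math

def rmse(ref, approx):
    """Root mean square error between a double-precision reference
    and a reduced-precision implementation's output."""
    return math.sqrt(sum((r - a) ** 2 for r, a in zip(ref, approx)) / len(ref))

def to_q15(x):
    """Quantize to Q15 fixed point (16-bit signed fraction), back to float."""
    return max(-32768, min(32767, round(x * 32768))) / 32768.0

ref = [math.sin(2 * math.pi * k / 64) for k in range(64)]  # reference signal
fixed = [to_q15(x) for x in ref]                           # fixed-point version
err = rmse(ref, fixed)   # on the order of 1e-5 for Q15 quantization
```

In practice the `approx` vector would be the cycle-accurate simulator's output for each benchmark kernel rather than a direct quantization of the reference.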

Cosine Modulated Filter Banks / Cosinus-modulerade filterbankar

Nord, Magnus January 2003 (has links)
The initial goal of this report was to implement and compare cosine modulated filter banks. Because of time limitations, the focus shifted towards the implementation. Filter banks and multirate systems are important in a vast range of signal processing systems. When implementing a design, several considerations must be taken into account, for example word length, number systems and type of components. The filter banks were implemented using custom-made software, specifically designed to generate configurable gate-level code. The generated code was then synthesized and the results were compared. Some of the results were a bit curious. For example, considerable effort was put into implementing graph multipliers, as these were expected to be smaller and faster than their CSDC (Canonic Signed Digit Code) counterparts. However, with one exception, they turned out to generate larger designs. Another conclusion drawn is that the choice of FPGA is important. Several things are left to investigate, though. For example, a more thorough comparison between CSDC and graph multipliers should be carried out, and other DCT (Discrete Cosine Transform) implementations should be investigated.
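The CSDC representation mentioned above recodes a multiplier constant with digits in {-1, 0, 1} so that no two adjacent digits are nonzero, which minimizes the number of adders needed for a constant multiplication. A small sketch of the standard recoding (a generic textbook algorithm, not the thesis's generator software):

```python
def csd(n):
    """Canonic Signed Digit recoding of a positive integer: digits in
    {-1, 0, 1}, LSB first, with no two adjacent nonzero digits.
    A constant multiplier then needs roughly (nonzero digits - 1) adders."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)  # emit +1 for ...01, -1 for ...11 (carry follows)
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

d = csd(7)                             # 7 = 8 - 1  ->  [-1, 0, 0, 1]
adders = sum(1 for x in d if x) - 1    # 1 adder, versus 2 for binary 111
```

This adder-count estimate is the baseline that graph multipliers were expected to beat by sharing subexpressions across digits.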

Focus controlled image coding based on angular and depth perception / Fokusstyrd bildkodning baserad på vinkel och djup perception

Grangert, Oskar January 2003 (has links)
In normal image coding the image quality is the same in all parts of the image. When it is known where in the image a single viewer is focusing it is possible to lower the image quality in other parts of the image without lowering the perceived image quality. This master's thesis introduces a coding scheme based on depth perception where the quality of the parts of the image that correspond to out-of-focus scene objects is lowered to obtain data reduction. To obtain further data reduction the method is combined with angular perception coding where the quality is lowered in parts of the image corresponding to the peripheral visual field. It is concluded that depth perception coding can be done without lowering the perceived image quality and that the coding gain increases as the two methods are combined.
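The combined scheme assigns each image region a quality that falls off both with angular eccentricity from the gaze point and with distance from the focal plane. A toy sketch of such a quality map follows; the functional forms and falloff constants are illustrative assumptions, not the models used in the thesis.

```python
import math

def quality(ecc_deg, defocus_diopters,
            angular_falloff=0.1, depth_falloff=0.5):
    """Relative coding quality in (0, 1] for an image block, combining
    angular eccentricity from the gaze point (degrees) with how far the
    block's scene depth lies from the focal plane (diopters).
    The falloff constants are illustrative, not taken from the thesis."""
    angular = 1.0 / (1.0 + angular_falloff * ecc_deg)   # peripheral falloff
    depth = math.exp(-depth_falloff * abs(defocus_diopters))  # defocus falloff
    return angular * depth

q_focus = quality(0.0, 0.0)   # gazed-at, in-focus block: full quality
q_far = quality(20.0, 2.0)    # peripheral, out-of-focus block: heavily reduced
```

A coder would map this quality value to, e.g., a per-block quantizer scale, spending bits only where the viewer can perceive the detail.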

Inverse Discrete Cosine Transform by Bit Parallel Implementation and Power Comparison

Bhardwaj, Divya Anshu January 2003 (has links)
The goal of this project was to implement and compare the Inverse Discrete Cosine Transform using three methods: bit-parallel, digit-serial and bit-serial. This report describes a one-dimensional inverse Discrete Cosine Transform implemented with the bit-parallel method in a 0.35 µm technology. When implementing the design, several considerations, such as word length, were taken into account. The code was written in VHDL and some of the calculations were done in MATLAB. The VHDL code was then synthesized using Synopsys Design Analyzer; power was calculated and the results were compared.
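Before committing an IDCT to hardware, a floating-point reference is useful for checking word-length choices against, much as the abstract's MATLAB calculations were used. A naive sketch of the orthonormal 1-D IDCT (DCT-III), straight from the textbook formula rather than from the thesis's VHDL:

```python
import math

def idct(X):
    """Naive orthonormal 1-D inverse DCT-II (i.e. DCT-III), computed
    directly from the formula; a floating-point reference for checking
    fixed-point word-length choices."""
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        for k in range(1, N):
            s += math.sqrt(2.0 / N) * X[k] * math.cos(
                math.pi * k * (2 * n + 1) / (2 * N))
        out.append(s)
    return out

def dct(x):
    """Matching forward orthonormal DCT-II, used to verify the round trip."""
    N = len(x)
    return [(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
            * sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                  for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
rt = idct(dct(x))   # round trip recovers x to machine precision
```

A bit-parallel hardware version replaces the inner products with fixed-point constant multipliers, and this reference bounds the error those word lengths introduce.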

Implementering av 1D-DCT

Zilic, Edmin January 2006 (has links)
IDCT (Inverse Discrete Cosine Transform) is a common algorithm used in image and sound decompression. The algorithm is a Fourier-related transform which exists in many variants, such as one-dimensional, two-dimensional and three-dimensional. The goal of this thesis is to create a fast and low-power version of the two-dimensional IDCT algorithm, using techniques such as multiple-constant multiplication and subexpression sharing together with bit-serial and bit-parallel arithmetic. The result is a hardware implementation with a power consumption of 19.56 mW.

Adaptive Constrained DCT-LMS Time Delay Estimation Algorithm

Jian, Jiun-Je 27 June 2000 (has links)
In the problem of time delay estimation (TDE), the desired source signals of interest are correlated and have a specific spectral distribution. In such cases, the convergence of the conventional approaches, viz. the time-domain adaptive constrained and unconstrained LMS TDE algorithms, becomes slow and the performance of TDE degrades dramatically. In fact, the convergence rate depends highly on the spectral density distribution of the desired signal sources. The performance of TDE is also affected by background noise. To circumvent these problems, this thesis devises a transform-domain adaptive constrained filtering scheme for TDE, referred to as the constrained adaptive DCT-LMS algorithm. We show that this newly proposed constrained algorithm, together with the so-called direct delay estimation formula for non-integer TDE, performs better than the conventional time-domain adaptive constrained and unconstrained LMS TDE algorithms and the unconstrained adaptive DCT-LMS TDE algorithm. Finally, to further reduce the eigenvalue spread in the unconstrained adaptive DCT-LMS algorithm, a Gram-Schmidt orthogonalizer realized by the adaptive escalator structure is investigated. It is shown that a bias in the TDE occurs when the weight-vector constraint is not used; that is, this approach cannot be used to alleviate the effect of background noise.
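The core of any DCT-LMS scheme is a transform-domain LMS update: the input regressor is passed through a DCT, and each transform bin gets a step size normalized by its own running power estimate, which reduces the effect of eigenvalue spread. A minimal unconstrained sketch on a system-identification toy problem is shown below (parameters, filter length and the identified system are illustrative; the thesis's constrained variant and delay-estimation formula are not reproduced here).

```python
import math, random

N = 4  # adaptive filter length

# Orthonormal DCT-II matrix, used to decorrelate the input regressor.
C = [[(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
      * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
      for n in range(N)] for k in range(N)]

def dct_vec(x):
    return [sum(C[k][n] * x[n] for n in range(N)) for k in range(N)]

random.seed(1)
h = [0.5, -0.3, 0.2, 0.1]   # unknown system to identify
w = [0.0] * N               # adaptive weights in the DCT domain
p = [1.0] * N               # per-bin power estimates
mu, beta = 0.1, 0.99        # step size and power-smoothing factor

buf = [0.0] * N
for _ in range(2000):
    buf = [random.gauss(0, 1)] + buf[:-1]        # shift in a new input sample
    d = sum(hi * xi for hi, xi in zip(h, buf))   # desired (system output)
    u = dct_vec(buf)                             # transform-domain regressor
    y = sum(wi * ui for wi, ui in zip(w, u))
    e = d - y                                    # a-priori error
    for k in range(N):
        p[k] = beta * p[k] + (1 - beta) * u[k] ** 2   # track bin power
        w[k] += mu * e * u[k] / (p[k] + 1e-8)         # power-normalized update
```

The per-bin normalization is what equalizes the convergence modes; for TDE the same machinery is applied to delayed sensor signals, with the delay read off from the converged weights.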

IC Design and Implementation of 32-Bit 1.25 GHz Tree-Structured CLA Adder and Discrete Cosine Transform

Lee, Rong-Chin 14 June 2001 (has links)
The thesis comprises three parts. Part 1 is the design and implementation of a high-speed pipelined carry lookahead adder (CLA); Part 2 describes how to build a 0.35 µm basic cell library in the Cadence 97 environment and execute a cell-based design flow with the self-built cells; Part 3 is the design and implementation of a low-power discrete cosine transform (DCT) processor. Part 1 presents a 32-bit tree-structured pipelined CLA constructed with a modified all-N-transistor (ANT) design. The CLA not only has a low transistor count but also occupies a small chip area. Moreover, post-layout simulation results from TimeMill show that the clock of the 32-bit CLA can run at up to 1.25 GHz. The proposed architecture can easily be expanded for longer data additions. Part 2 describes in detail the procedure for building the cell library and explains how to correctly carry out a cell-based design flow using it. Part 3 is the implementation of a DCT processor: we carefully observed the operating behavior of the multiply-accumulator (MAC) and improved its power consumption.
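The carry-lookahead principle behind Part 1 is that every carry can be computed from per-bit generate and propagate signals via the recurrence c[i+1] = g[i] OR (p[i] AND c[i]), so the carries need not ripple. A behavioral sketch follows, unrolled bit by bit for clarity; the thesis's tree-structured ANT circuit evaluates the same recurrence in O(log n) logic levels.

```python
def cla_add(a, b, width=32):
    """Carry-lookahead addition: derive all carries from the generate (g)
    and propagate (p) signals, c[i+1] = g[i] | (p[i] & c[i]), instead of
    rippling through full adders. Unrolled sequentially here; a
    tree-structured CLA computes the same carries in O(log n) levels."""
    abits = [(a >> i) & 1 for i in range(width)]
    bbits = [(b >> i) & 1 for i in range(width)]
    g = [x & y for x, y in zip(abits, bbits)]   # generate: both inputs 1
    p = [x | y for x, y in zip(abits, bbits)]   # propagate: either input 1
    c, result = 0, 0
    for i in range(width):
        result |= (abits[i] ^ bbits[i] ^ c) << i   # sum bit
        c = g[i] | (p[i] & c)                      # lookahead carry
    return result & ((1 << width) - 1)

total = cla_add(123456789, 987654321)   # matches ordinary addition mod 2**32
```

The hardware win is that grouping g/p pairs hierarchically lets all 32 carries resolve in a few gate delays, which is what enables the reported 1.25 GHz clock.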

Efficient Memory Arrangement Methods and VLSI Implementations for Discrete Fourier and Cosine Transforms

Hsu, Fang-Chii 24 July 2001 (has links)
This thesis proposes efficient memory arrangement methods for the implementation of the radix-r multi-dimensional Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT). By using memory instead of registers to buffer and reorder data, hardware complexity is significantly reduced. We use a recursive architecture that requires only one arithmetic processing element to compute the entire DFT/DCT operation. The algorithm is based on efficient coefficient-matrix factorization and data allocation. By exploiting the Kronecker-product representation in the fast algorithm, the multi-dimensional DFT/DCT operation is converted into its corresponding 1-D problem, and the intermediate data is stored in several memory units. In addition to the smaller area, we also propose a method to reduce the power consumption of the DFT/DCT processors.
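A simple instance of memory-based data reordering for fast transforms is the bit-reversal permutation of a radix-2 DFT: each element is written to the address obtained by reversing the bits of its index, so a plain memory replaces a register shuffle network. The sketch below illustrates that generic addressing idea only; the thesis's radix-r, multi-dimensional arrangements are more elaborate.

```python
def bit_reverse_reorder(data):
    """Memory-based reordering for a radix-2 DIT FFT: write each element
    to the address given by bit-reversing its index. Using an addressed
    memory for this step avoids a register-based shuffle network.
    Assumes len(data) is a power of two."""
    n = len(data)
    bits = n.bit_length() - 1
    out = [0] * n
    for i, x in enumerate(data):
        r = int(format(i, f'0{bits}b')[::-1], 2)  # bit-reversed address
        out[r] = x
    return out

order = bit_reverse_reorder([0, 1, 2, 3, 4, 5, 6, 7])  # [0, 4, 2, 6, 1, 5, 3, 7]
```

Radix-r and multi-dimensional variants generalize this to digit-reversal and to per-dimension index permutations derived from the Kronecker-product factorization.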
