191

Compiler Support for Vector Processing on OpenGL ES 2.0 Programs

Huang, Kuo-An 02 September 2010 (has links)
This thesis describes the development of a compiler for OpenGL ES 2.0 programs targeting a novel GPU. This work is part of a larger project to develop a low-power GPU for embedded systems. Our compiler is built on the LLVM compiler infrastructure. The present thesis focuses on three areas of the compiler: 1) correcting and improving an existing graphics shading language parser, 2) augmenting LLVM's bit-code format to carry the new information from the shading language, and 3) modifying LLVM's backend to support this augmented bit-code. Much of this work concerns supporting the matrix and vector primitive data types found in OpenGL's GLSL shading language. In conjunction with several other theses, as listed in the text, this work achieves a working basic compiler for GLSL code on our new GPU. Continuing work by future researchers is needed to make the compiler more robust and better optimized.
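The vector and matrix primitives mentioned above are the crux of lowering GLSL to LLVM. A minimal Python sketch (illustrative only, not the thesis's compiler code; all names here are ours) of the component-wise semantics a backend must map onto LLVM's `<4 x float>` vector type:

```python
# Illustrative sketch of GLSL vec4/mat4 component-wise semantics that a
# compiler backend must lower to LLVM vector instructions.
# Names are hypothetical, not from the thesis.

class Vec4:
    def __init__(self, x, y, z, w):
        self.v = [float(x), float(y), float(z), float(w)]

    def __add__(self, other):          # vec4 + vec4 -> component-wise add
        return Vec4(*[a + b for a, b in zip(self.v, other.v)])

    def __mul__(self, s):              # vec4 * scalar -> component-wise scale
        return Vec4(*[a * s for a in self.v])

    def dot(self, other):              # dot(vec4, vec4) -> scalar
        return sum(a * b for a, b in zip(self.v, other.v))

def mat4_mul_vec4(m, x):
    # mat4 * vec4: each result component is a row dot product, the pattern
    # a GLSL compiler typically lowers to vector multiply-adds.
    return Vec4(*[sum(m[i][j] * x.v[j] for j in range(4)) for i in range(4)])
```

The point of the sketch is that every GLSL vector operation expands into a fixed, data-independent pattern of lane-wise arithmetic, which is what makes a direct mapping onto LLVM's first-class vector types attractive.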
192

Implementation of Vectorization-Based VLIW DSP with Compact Instructions

Lee, Chun-Hsien 23 August 2005 (has links)
The main goal of this thesis is to design and implement a high-performance processor core for the digital signal processing algorithms used in DVB-T systems. The DSP must keep up with the signal flow in real time; completing an 8192-point FFT within the available time is the critical requirement. To meet the FFT timing demand while keeping the DSP clock frequency as low as possible, we increase the degree of instruction-level parallelism (ILP). This thesis designs a VLIW processor core, called the DVB-T DSP, which provides enough execution units to exploit instruction parallelism, and applies software pipelining to schedule the loop for the highest achievable ILP when executing FFT butterfly operations. Furthermore, to feed the pipeline a smooth data stream, we design an improved modulo addressing mechanism, called extended modulo addressing, which collects discrete vectors into one continuous vector. A well-known drawback of VLIW architectures is that program size is larger than on other processor architectures. To address this, the thesis proposes an instruction compression mechanism that roughly doubles program density without affecting execution efficiency. Simulation results show that the DVB-T DSP meets the FFT timing requirement at 133 MHz and also performs well on other digital signal processing algorithms.
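The extended modulo addressing described above can be pictured with a short sketch (a simplified stand-in, not the thesis's hardware design; parameter names are ours): walk a circular buffer at a fixed stride, wrapping modulo the buffer length, so elements scattered at stride intervals come out as one contiguous vector for the pipeline.

```python
# Simplified sketch of modulo (circular) addressing with a stride: elements
# scattered through a circular buffer are gathered into one contiguous
# vector, the kind of access pattern FFT butterflies generate.

def gather_modulo(buf, start, stride, count):
    n = len(buf)
    out = []
    addr = start
    for _ in range(count):
        out.append(buf[addr])
        addr = (addr + stride) % n   # modulo addressing: wrap at buffer end
    return out
```

In hardware this wrap is done by the address generation unit for free each cycle; the sketch only shows the addressing arithmetic the unit implements.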
193

Video Scene Change Detection Using Support Vector Clustering

Kao, Chih-pang 13 October 2005 (has links)
As the digitisation era arrives, large numbers of multimedia items (images, video, etc.) are stored in databases in digital form, and retrieval systems for them are increasingly important. Video contains a huge number of frames, so to search it effectively and quickly, the first step is to detect the places where the scene changes, cut the video into scenes, extract a key frame from each scene, and use the key frames as an index for retrieval. Scene changes divide into abrupt cuts and gradual transitions. However, even within a single scene, violent motion or camera movement often occurs and can be confused with a gradual transition. This thesis therefore extracts the principal components of every frame using principal component analysis (PCA) to reduce noise interference, and classifies the resulting feature points with support vector clustering, so that nearby feature points are assigned to the same cluster. When feature points lie between two different clusters, this indicates that the scene is changing gradually, and a scene change is detected accordingly.
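A greatly simplified stand-in for this pipeline (real frames would be reduced with PCA and grouped by support vector clustering; here each frame is already a small feature vector, and only abrupt cuts are shown) illustrates the core signal: a jump in feature-space distance between consecutive frames.

```python
# Toy illustration of cut detection in feature space: a scene change is
# declared wherever the distance between consecutive frame features jumps
# past a threshold. This replaces the thesis's PCA + support vector
# clustering with the simplest possible distance test.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def detect_cuts(frames, threshold):
    # Return indices i where a change occurs between frame i-1 and frame i.
    return [i for i in range(1, len(frames))
            if euclidean(frames[i - 1], frames[i]) > threshold]
```

The thesis's clustering approach exists precisely because this simple thresholding cannot separate gradual transitions from camera motion; the sketch shows only the feature-distance signal both methods start from.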
194

Flow Rate Based Detection Method for Apneas And Hypopneas

Chen, Yu-Chou 16 July 2007 (has links)
Sleep apnea syndrome (SAS) has become an increasingly important public-health problem in recent years. It can adversely affect neurocognitive, cardiovascular, and respiratory health and can also cause behavioral disorders. Since up to 90% of these cases are obstructive sleep apnea (OSA), the study of how to diagnose, detect, and treat OSA is a significant issue, both academically and medically. Polysomnography (PSG) can monitor OSA with relatively few invasive techniques. However, PSG-based sleep studies are expensive and time-consuming because they require overnight evaluation in sleep laboratories with dedicated systems and attending personnel. This work develops a flow-rate-based detection method for apneas. In particular, using signal processing, feature extraction, and a neural network, this thesis introduces a flow-rate-based detection system. The goal is to detect OSA in less time and at reduced financial cost.
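To make the flow-rate idea concrete, here is a rule-of-thumb sketch (the thresholds and event logic are our assumptions based on common clinical rules of thumb, not the thesis's trained neural network): relative to a baseline flow, an apnea is roughly a ≥90% drop and a hypopnea roughly a ≥30% drop, sustained over a minimum run of samples.

```python
# Hedged sketch of flow-rate-based event scoring. Thresholds (90% / 30%
# drop, minimum run length) are illustrative assumptions, not the thesis's
# learned detector.

def score_events(flow, baseline, min_len=10):
    events = []
    start, kind = None, None
    for i, f in enumerate(flow + [baseline]):     # sentinel flushes last run
        drop = 1.0 - f / baseline
        label = "apnea" if drop >= 0.9 else "hypopnea" if drop >= 0.3 else None
        if label != kind:
            if kind is not None and i - start >= min_len:
                events.append((kind, start, i))   # (type, first, one-past-last)
            start, kind = (i, label) if label else (None, None)
    return events
```

The thesis replaces fixed thresholds like these with features fed to a neural network, which is what lets it cope with noisy, patient-dependent flow signals.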
195

Fault-Tolerant Routing on the Star Graph Using Safety Vectors

Yeh, Sheng-I 27 July 2000 (has links)
As the number of nodes in a network increases, so does the chance that nodes or links fail, making fault-tolerant routing important for maintaining system performance. In the hypercube, safety levels and safety vectors provide fault-distribution information used to guide fault-tolerant routing; the safety vector describes the fault distribution more precisely than the safety level. The concept of safety levels has been applied to the star graph by other researchers. In this thesis, we apply the hypercube's safety-vector concept to the star graph and define three different safety vectors: the undirected safety vector, the directed safety vector, and the statistical safety vector. We first show the capability of the undirected safety vector. We then extend the idea to the directed safety vector and show that it decides routing paths better than the safety level does for the star graph. We also show why the directed safety vector cannot be used directly for derouting. In previous work on the hypercube, a small change makes the directed safety vector usable for derouting; for the star graph, a slight modification lets us perform derouting using only neighbor information. Finally, we grade routing ability into levels using the statistical safety vector, so that it carries more information about the fault distribution.
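For readers unfamiliar with safety vectors, a sketch of the hypercube version helps. The formulation below is one found in the hypercube literature and is an assumption on our part, not necessarily the thesis's star-graph definition: a faulty node gets the all-zero vector; bit 1 of a healthy node is 1 iff it has no faulty neighbor; and bit k is 1 iff at least n−k+1 neighbors have bit k−1 set. A node with bit k set can safely source a message to any destination k hops away.

```python
# Sketch of safety-vector computation on an n-dimensional hypercube
# (assumed formulation, labeled as such in the lead-in). Nodes are the
# integers 0..2^n-1; neighbors differ in exactly one bit.

def safety_vectors(n, faulty):
    nodes = range(2 ** n)
    neigh = {u: [u ^ (1 << d) for d in range(n)] for u in nodes}
    vec = {u: [0] * n for u in nodes}          # faulty nodes stay all-zero
    for u in nodes:                            # bit 1: no faulty neighbor
        if u not in faulty:
            vec[u][0] = int(not any(v in faulty for v in neigh[u]))
    for k in range(1, n):                      # bit k+1 from neighbors' bit k
        for u in nodes:
            if u not in faulty:
                vec[u][k] = int(sum(vec[v][k - 1] for v in neigh[u]) >= n - k)
    return vec
```

Note the recursion only consults immediate neighbors, which is what makes the scheme cheap to maintain in a distributed system; the information it encodes is a sufficient (not necessary) condition for a minimal fault-free route to exist.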
196

Nil

Liu, Tse-Tseng 27 July 2000 (has links)
Nil
197

Designing the Nearest Neighbor Classifiers via the VQ Method

Chiang, Hsin-Kuan 19 July 2001 (has links)
Designing the Nearest Neighbor Classifiers via the VQ Method
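No abstract is available for this record beyond the title, so the following is a hedged illustration of the technique the title names, with all parameters and names ours: build a small per-class codebook with a basic Lloyd (k-means-style) vector quantizer, then classify a sample by the class of its globally nearest codeword.

```python
# Hedged sketch of a VQ-designed nearest-neighbor classifier: VQ compresses
# each class's training set into a few codewords, and classification is
# 1-NN over codewords instead of over all training samples.

def nearest(point, codewords):
    return min(range(len(codewords)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(point, codewords[i])))

def vq_codebook(points, k, iters=10):
    book = [list(p) for p in points[:k]]          # seed with first k points
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for p in points:
            cells[nearest(p, book)].append(p)
        for i, cell in enumerate(cells):          # move codeword to centroid
            if cell:
                book[i] = [sum(c) / len(cell) for c in zip(*cell)]
    return book

def vq_classify(sample, books):
    # books: {label: codebook}; return label of the globally nearest codeword.
    best = None
    for label, book in books.items():
        i = nearest(sample, book)
        d = sum((a - b) ** 2 for a, b in zip(sample, book[i]))
        if best is None or d < best[0]:
            best = (d, label)
    return best[1]
```

The design trade-off is classic: fewer codewords mean faster classification and smaller memory than full 1-NN, at the cost of some boundary accuracy.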
198

Improving ILP with the Vectorized Computing Mechanism in VLIW DSP Architecture

Yang, Te-Shin 25 June 2003 (has links)
To improve performance for real-time applications, current digital signal processors use VLIW architectures to increase the degree of instruction-level parallelism (ILP). Two factors limit ILP: having enough hardware resources for all parallel instructions, and the dependence relations between instructions. This thesis designs a VLIW processor core, called the DVBTDSP, modeled on the FFT algorithm, and uses software pipelining to schedule the loop for the highest ILP when executing FFT butterfly operations. Furthermore, to provide a smooth data stream for pipeline operations, we design an improved modulo addressing mechanism that collects discrete vectors into one continuous vector. Simulation results show that the DVBTDSP achieves double the performance of the C6200 on FFT processing and also performs well on FIR, IIR, and DCT computations.
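The loop body being software-pipelined here is the radix-2 FFT butterfly: one twiddle multiply, one add, one subtract per iteration. A pure-Python reference for the arithmetic only (not the DSP implementation):

```python
# Radix-2 decimation-in-time FFT built from the butterfly operation that a
# VLIW schedule would pipeline: t = w*b; outputs a+t and a-t.
import cmath

def butterfly(a, b, w):
    t = w * b
    return a + t, a - t

def fft(x):
    # Recursive reference FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)     # twiddle factor
        out[k], out[k + n // 2] = butterfly(even[k], odd[k], w)
    return out
```

Because every iteration of the butterfly loop has the same short dependence chain, consecutive iterations overlap cleanly, which is exactly what makes this loop a good software-pipelining target.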
199

The Quantitative Verifying Framework for Balanced Scorecard

Yen, Wen-Jen 08 July 2003 (has links)
The theme of this thesis is a verifying framework for the balanced scorecard, or for multi-dimensional managerial indicators. The framework consists of seven major modules together with TETRAD and LISREL. The seven modules are the mission-strategy module, factor-analysis module, dimension-indicator module, principal-component-analysis module, canonical-correlation module, game-theory module, and performance-vector module. The framework draws on vector analysis, multivariate statistical analysis, game theory, fuzzy sets, and multiobjective decision making. The thesis aims to offer a more precise quantitative verifying framework for the balanced scorecard and for multi-dimensional managerial indicators.
200

An Effective Feature Selection for Protein Fold Recognition

Lin, Jyun-syong 11 October 2007 (has links)
The protein fold recognition problem is one of the important topics in biophysics. It is believed that the primary structure of a protein is helpful in determining its three-dimensional (3D) structure. Given a target protein (a sequence of amino acids), the protein fold recognition problem is to decide which fold group of a protein structure database the target protein belongs to. Since more than two fold groups are involved, this is a multi-class classification problem. Recently, many researchers have attacked this problem with popular machine learning tools such as neural networks (NN) and support vector machines (SVM). In this thesis, we use the SVM tool. Our strategy is to find effective features that can serve as an efficient guide for classification. We build a feature preference table to help us find effective feature combinations quickly. We take 27 well-known fold groups from SCOP (Structural Classification of Proteins) as our data set. Our experimental results show that our method achieves an overall prediction accuracy of 61.4%, which is better than the previous method's 56.5%. With the same feature combinations, our prediction accuracy is also higher than previous results. These results show that our method is indeed effective for the fold recognition problem.
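The feature-combination search can be sketched as greedy forward selection. In this illustration (our construction, not the thesis's preference-table algorithm) the SVM is replaced by a trivial leave-one-out nearest-centroid classifier so the sketch stays self-contained, and the data are synthetic:

```python
# Hedged sketch of greedy feature-combination selection. The scoring
# classifier is a stand-in for the SVM used in the thesis: leave-one-out
# nearest-centroid accuracy over the currently selected feature indices.

def centroid_accuracy(samples, labels, feats):
    def dist(a, b):
        return sum((a[i] - b[i]) ** 2 for i in feats)
    hits = 0
    for i, s in enumerate(samples):
        cents = {}
        for j, t in enumerate(samples):      # centroids exclude sample i
            if j != i:
                cents.setdefault(labels[j], []).append(t)
        pred = min(cents, key=lambda c: dist(
            s, [sum(p[k] for p in cents[c]) / len(cents[c])
                for k in range(len(s))]))
        hits += pred == labels[i]
    return hits / len(samples)

def greedy_select(samples, labels, n_feats):
    # Add one feature at a time, keeping it only if accuracy improves.
    chosen, best = [], 0.0
    while len(chosen) < n_feats:
        acc, f = max((centroid_accuracy(samples, labels, chosen + [f]), f)
                     for f in range(n_feats) if f not in chosen)
        if acc <= best:
            break
        chosen.append(f)
        best = acc
    return chosen, best
```

A preference table like the thesis's serves the same purpose as the `max` step here: it ranks candidate features so that good combinations are tried first instead of exhaustively enumerating all subsets.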
