701

Storage management for large scale systems

Wang, Wenguang 15 December 2004 (has links)
Because of the slow access time of disk storage, storage management is crucial to the performance of many large-scale computer systems. This thesis studies performance issues in buffer cache management and disk layout management, two important components of storage management.

The buffer cache stores popular disk pages in memory to speed up access to them. Buffer cache management algorithms used in real systems often have many parameters that require careful hand-tuning for good performance. A self-tuning algorithm is proposed that automatically tunes the page-cleaning activity of the buffer cache management algorithm by monitoring the I/O activities of the buffer cache. This algorithm achieves performance comparable to the best manually tuned system.

The global data structure used by the buffer cache management algorithm is protected by a lock. Contention on this lock can significantly reduce system throughput in multi-processor systems. Current solutions that eliminate lock contention decrease the hit ratio of the buffer cache, which causes poor performance when the system is I/O-bound. A new approach, called the multi-region cache, is proposed. It eliminates lock contention, maintains the hit ratio of the buffer cache, incurs little overhead, and can be applied to most buffer cache management algorithms.

Disk layout management arranges the layout of pages on disks to improve disk I/O efficiency. The typical disk layout approach, called Overwrite, is optimized for sequential I/Os from a single file. Interleaved writes from multiple users can significantly decrease system throughput in large-scale systems using Overwrite. Although the Log-structured File System (LFS) is optimized for such workloads, its garbage-collection overhead can be expensive. Because disk transfer bandwidth has improved much faster than disk positioning time, LFS performs much better than Overwrite on most workloads in modern and future disks, unless the disk is close to full. A new disk layout approach, called HyLog, is proposed. HyLog achieves performance comparable to the best of the existing disk layout approaches in most cases.
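To make the lock-partitioning idea behind the multi-region cache concrete, here is a minimal Python sketch. The region count, hashing scheme, and per-region LRU replacement are illustrative assumptions, not the thesis's actual design.

```python
import threading
from collections import OrderedDict

class MultiRegionCache:
    """Sketch of the multi-region idea: the buffer cache is split into
    independent regions, each with its own lock and its own LRU list,
    so concurrent lookups rarely contend on a single global lock."""

    def __init__(self, capacity, regions=8):
        self.regions = [
            {"lock": threading.Lock(),
             "lru": OrderedDict(),              # page_id -> page data
             "capacity": capacity // regions}
            for _ in range(regions)
        ]

    def _region(self, page_id):
        # A page always maps to the same region, so per-region LRU state
        # stays consistent without any global coordination.
        return self.regions[hash(page_id) % len(self.regions)]

    def get(self, page_id, read_from_disk):
        r = self._region(page_id)
        with r["lock"]:                          # contention limited to 1/N of traffic
            if page_id in r["lru"]:
                r["lru"].move_to_end(page_id)    # refresh LRU position on hit
                return r["lru"][page_id]
        data = read_from_disk(page_id)           # do the slow I/O outside the lock
        with r["lock"]:
            r["lru"][page_id] = data             # new keys are appended at MRU end
            if len(r["lru"]) > r["capacity"]:
                r["lru"].popitem(last=False)     # evict least-recently-used page
            return data
```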
702

Hardware implementation of Daubechies wavelet transforms using folded AIQ mapping

Islam, Md Ashraful 22 September 2010 (has links)
The Discrete Wavelet Transform (DWT) is a popular tool in image and video compression applications. Because of its multi-resolution representation capability, the DWT has been used effectively in applications such as transient signal analysis, computer vision, texture analysis, cell detection, and image compression. Daubechies wavelets are among the most popular transforms in the wavelet family; Daubechies filters provide excellent spatial and spectral locality, properties that make them useful in image compression.

In this thesis, we present an efficient implementation of a shared hardware core to compute two 8-point Daubechies wavelet transforms. The architecture is based on a new two-level folded mapping technique, an improved version of Algebraic Integer Quantization (AIQ). The scheme is built on a factorization and decomposition of the transform coefficients that exploits the symmetrical and wrapping structure of the matrices. The proposed architecture is parallel, pipelined, and multiplexed. Compared to existing designs, the proposed scheme significantly reduces hardware cost, critical path delay, and power consumption while achieving a higher throughput rate.

Finally, we briefly present a new mapping scheme that computes the Daubechies 8-tap wavelet transform without error; this transform follows the Daubechies 6-tap transform in the Daubechies wavelet series. The multidimensional technique maps the irrational transform basis coefficients to integers, yielding a considerable reduction in hardware and power consumption and a significant improvement in image reconstruction quality.
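For reference, here is a minimal floating-point Daubechies transform in Python; it uses the shorter 4-tap (D4) filter rather than the 6- or 8-tap transforms of the thesis, and shows the periodic "wrapping" boundary structure that the folded matrix factorization exploits. The thesis's AIQ scheme instead maps these irrational coefficients to integers.

```python
import numpy as np

def daub4_forward(x):
    """One level of the Daubechies-4 wavelet transform with periodic
    (wrapping) boundary handling. Floating-point reference version."""
    s3 = np.sqrt(3.0)
    # D4 low-pass (scaling) filter; the values are irrational, involving sqrt(3)
    h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
    # High-pass (wavelet) filter from the quadrature-mirror relation
    g = np.array([h[3], -h[2], h[1], -h[0]])
    n = len(x)
    approx = np.empty(n // 2)
    detail = np.empty(n // 2)
    for i in range(n // 2):
        idx = [(2 * i + k) % n for k in range(4)]  # periodic wrap at the edges
        approx[i] = np.dot(h, x[idx])
        detail[i] = np.dot(g, x[idx])
    return approx, detail

signal = np.sin(np.linspace(0, 2 * np.pi, 16))
a, d = daub4_forward(signal)
```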
703

A comparably robust approach to estimate the left-censored data of trace elements in Swedish groundwater

Li, Cong January 2012 (has links)
The groundwater data in this thesis, taken from the database of Sveriges Geologiska Undersökning, characterize the chemical and quantitative status of groundwater in Sweden. When measured values fall below certain thresholds, the data are recorded only as quantification limits. This thesis therefore addresses the handling of such left-censored data. It applies the EM algorithm to obtain maximum likelihood estimates, and estimation of the distributions of censored trace-element data is expounded. Related simulations show that the estimates are acceptable.
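As a sketch of the approach, the following Python code runs EM for maximum-likelihood estimation with left-censoring at known quantification limits. A normal model is assumed here (trace-element concentrations would typically be log-transformed first); the thesis's exact distributional choices may differ.

```python
import numpy as np
from scipy.stats import norm

def em_left_censored(obs, limits, tol=1e-8, max_iter=500):
    """EM for N(mu, sigma^2) data where some records are known only to
    lie below their quantification limit.
    obs    : fully observed values
    limits : quantification limits of the censored records"""
    obs, limits = np.asarray(obs, float), np.asarray(limits, float)
    n = len(obs) + len(limits)
    mu, sigma = obs.mean(), obs.std() + 1e-6        # crude starting values
    for _ in range(max_iter):
        a = (limits - mu) / sigma
        lam = norm.pdf(a) / np.maximum(norm.cdf(a), 1e-300)
        # E-step: moments of a normal truncated above each limit
        ex = mu - sigma * lam                        # E[X | X < limit]
        var = sigma**2 * (1 - a * lam - lam**2)      # Var[X | X < limit]
        ex2 = var + ex**2                            # E[X^2 | X < limit]
        # M-step: closed-form normal MLE from expected sufficient statistics
        mu_new = (obs.sum() + ex.sum()) / n
        sigma_new = np.sqrt((np.sum(obs**2) + ex2.sum()) / n - mu_new**2)
        if abs(mu_new - mu) + abs(sigma_new - sigma) < tol:
            break
        mu, sigma = mu_new, sigma_new
    return mu, sigma

mu_hat, sigma_hat = em_left_censored(
    obs=[1.2, 0.9, 1.5, 1.1, 0.8], limits=[0.5, 0.5, 0.7])
```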
704

PSTD Method for Thermoacoustic Tomography (TAT) and Related Experimental Investigation

Ye, Gang January 2009 (has links)
In this work, the simulation (forward problem) and reconstruction (inverse problem) in Thermoacoustic Tomography (TAT) are studied using a pseudospectral time-domain (PSTD) method with 4th-order time integration.

The objective of the TAT simulation is to solve for the thermoacoustic pressure field in an inhomogeneous medium. Using the PSTD method, the spatial derivatives of the pressure field and particle velocity are obtained with the fast Fourier transform (FFT). Since the Fourier transforms used to represent the spatial derivatives of smooth functions are exact, only 2 points per wavelength are needed in the spatial discretization. The time integration uses a 4th-order method to effectively reduce computational time. The results of the algorithm are validated against analytical solutions. Perfectly Matched Layers (PMLs) are applied to absorb the outgoing waves and avoid the "wraparound" effect. The maximum attenuation coefficient of the PMLs has an optimum value that minimizes the reflections due to discretization and the wraparound effect for 2D and 3D problems. Different PML profiles are also compared; a quadratic profile is chosen because it minimizes the overall reflection. Spatial smoothing is needed in PSTD to avoid the Gibbs phenomenon when modeling a point source, and the effect of the smoothing function is studied.

In the TAT reconstruction problem, the PSTD method is used to reconstruct the thermoacoustic sources by solving the thermoacoustic wave equations in reversed temporal order within the framework of time-reversal imaging. The back-propagated pressure waves then refocus at the spatial locations of the original sources. Most other TAT reconstruction algorithms assume the tissue medium is acoustically homogeneous; in practice, however, even mild tissue inhomogeneity causes large phase errors that spatially misplace and distort the sources. The proposed PSTD method uses a two-step process to solve this problem. In the first step, a homogeneous time-reversal reconstruction is performed. Since an inhomogeneity is usually itself a source because of its spatially dependent electrical conductivity (and thus microwave absorption), the spatial location and shape of the inhomogeneity can be estimated. In the second step, the updated acoustic property map is loaded, followed by an inhomogeneous reconstruction. Numerical results show that this method greatly improves the reconstruction, and images with improved quality are reconstructed from experimental data.

A 3D PSTD algorithm is developed and validated. Numerical results show that the PSTD algorithm with 4th-order time integration can simulate large 3D acoustic problems accurately and efficiently. A 3D breast phantom model is used to study inhomogeneous reconstruction in 3D, and improved results over the homogeneous method are observed.

A preliminary study of TAT using continuous-wave (CW) modulated microwaves is summarized, covering the theoretical background, system configuration, experiment setup, and measurement results.
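The core PSTD operation — computing a spatial derivative by multiplying the field's FFT by ik — can be shown in a few lines. This 1D NumPy sketch is illustrative only; the thesis's solver is 2D/3D with PMLs and 4th-order time integration.

```python
import numpy as np

def spectral_derivative(f, dx):
    """Pseudospectral spatial derivative: FFT, multiply by i*k, inverse FFT.
    Exact for band-limited smooth periodic fields, which is why only
    ~2 points per wavelength are needed in the discretization."""
    n = len(f)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)    # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
df = spectral_derivative(np.sin(3 * x), x[1] - x[0])   # approximately 3*cos(3x)
```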
705

Structural Breaks and GARCH Models of Exchange Rate Return Volatility: An Empirical Study of Asia & Pacific Countries

Zeng, Han-jun 25 June 2010 (has links)
Since the collapse of the Bretton Woods system, the volatility of exchange rate returns has been an important concern in finance. The purpose of this paper is to investigate the empirical relevance of structural breaks for the volatility of exchange rate returns, using both in-sample and out-of-sample tests. The GARCH(1,1) model is considered the representative quantitative method for analyzing the volatility of asset returns, so we pick GARCH(1,1) as the natural benchmark in this article. In addition, we account for structural breaks, using the ICSS (Iterated Cumulative Sums of Squares) algorithm to locate the break points. The empirical analysis shows significant evidence of structural breaks in the unconditional variance for six of eight US exchange rate return series, implying unstable GARCH processes for these exchange rates. We also find that competing models that accommodate structural breaks have higher predictive ability. Pooling forecasts from different models that allow for structural breaks in volatility appears to offer a reliable method for improving volatility forecast accuracy, given the uncertainty surrounding the timing and size of the structural breaks.
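The test statistic at the heart of the ICSS algorithm is simple to compute. The Python sketch below performs a single pass only; the full algorithm iterates over sub-samples until no further break is found. The 1.358 threshold is the standard 95% asymptotic critical value of the Inclan-Tiao statistic.

```python
import numpy as np

def icss_statistic(returns):
    """One pass of the ICSS variance-break test on a return series.
    Returns the candidate break index, the test statistic, and whether
    it exceeds the 95% critical value."""
    a2 = np.asarray(returns, float) ** 2
    T = len(a2)
    Ck = np.cumsum(a2)                              # cumulative sum of squares
    Dk = Ck / Ck[-1] - np.arange(1, T + 1) / T      # centered cumulative statistic
    k_star = int(np.argmax(np.abs(Dk)))             # most likely break point
    stat = np.sqrt(T / 2.0) * np.abs(Dk[k_star])
    return k_star, stat, stat > 1.358

rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 2, 500)])
print(icss_statistic(r))                            # break near index 500
```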
706

The matching mechanism under the online job banks

Tsai, Ya-chi 07 July 2010 (has links)
The aim of this paper is to examine how online job banks forward job seekers' resumes to businesses. Owing to the rapid development of information technology in recent years, most businesses and job seekers have chosen online job banks as their channel for recruitment and job hunting. The ways businesses find employees and job seekers find jobs through online job banks fall into two kinds: active application by the job seeker, and matching by the online job bank. Online job banks forward resumes to businesses through both channels, and how they do so affects the outcome. This paper therefore examines the resume-forwarding method originally used by online job banks, uses the Gale-Shapley algorithm to devise an alternative method that job banks might adopt in the future, and, by comparing the two, analyzes which method an online job bank should adopt under different situations.
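Since the paper builds on the Gale-Shapley algorithm, here is the textbook deferred-acceptance procedure in Python. This is a one-to-one sketch with complete preference lists; a real job bank would need firm capacities greater than one.

```python
def gale_shapley(seeker_prefs, firm_prefs):
    """Seeker-proposing Gale-Shapley stable matching.
    seeker_prefs: {seeker: [firms in preference order]}
    firm_prefs:   {firm:   [seekers in preference order]}"""
    rank = {f: {s: i for i, s in enumerate(p)} for f, p in firm_prefs.items()}
    next_choice = {s: 0 for s in seeker_prefs}   # next firm index to propose to
    engaged = {}                                 # firm -> seeker currently held
    free = list(seeker_prefs)
    while free:
        s = free.pop()
        f = seeker_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if f not in engaged:
            engaged[f] = s                       # firm holds its first proposal
        elif rank[f][s] < rank[f][engaged[f]]:
            free.append(engaged[f])              # firm trades up; old seeker freed
            engaged[f] = s
        else:
            free.append(s)                       # rejected; s proposes to next firm
    return {s: f for f, s in engaged.items()}

match = gale_shapley(
    {"ann": ["acme", "byte"], "bob": ["acme", "byte"]},
    {"acme": ["bob", "ann"], "byte": ["ann", "bob"]})
print(match)   # {'bob': 'acme', 'ann': 'byte'}
```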
707

GAGS: A Novel Microarray Gene Selection Algorithm for Gene Expression Classification

Wu, Kuo-yi 30 July 2010 (has links)
In this thesis, we propose a novel microarray gene selection algorithm consisting of five processes for solving the gene expression classification problem. A normalization process is first used to remove differences among the scales of different genes. Second, an efficient gene-ranking process is proposed to filter out unrelated genes. Then, a genetic algorithm is adopted to find informative gene subsets for each class. These per-class informative gene subsets are used to classify the testing dataset separately. Finally, the separate classification results are fused into one final result. In the first experiment, 4 microarray datasets are used to verify the performance of the proposed algorithm, using leave-one-out cross-validation (LOOCV) resampling. We compare the proposed algorithm with twenty-one existing methods; it wins on three of the four datasets, and the accuracies on three datasets all reach 100%. In the second experiment, 9 microarray datasets are used with a 50%/50% train-test resampling method. Our proposed algorithm wins on eight of the nine datasets against all competing methods.
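To illustrate the filter-style ranking step, here is a Python sketch for a two-class problem. The abstract does not name the scoring function, so the common signal-to-noise ratio (in the style of Golub et al.) is assumed here purely for illustration.

```python
import numpy as np

def rank_genes_snr(X, y, keep=50):
    """Score each gene and keep the top `keep` before running the GA.
    X: (samples, genes) expression matrix; y: 0/1 class labels."""
    X, y = np.asarray(X, float), np.asarray(y)
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    sd0, sd1 = X[y == 0].std(axis=0), X[y == 1].std(axis=0)
    snr = np.abs(mu0 - mu1) / (sd0 + sd1 + 1e-12)   # per-gene separation score
    return np.argsort(snr)[::-1][:keep]             # indices of top-ranked genes

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 1000))                     # toy expression data
y = rng.integers(0, 2, 40)
top = rank_genes_snr(X, y, keep=10)
```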
708

Particle Swarm Optimization Algorithm for Multiuser Detection in DS-CDMA System

Fang, Ping-hau 31 July 2010 (has links)
In direct-sequence code division multiple access (DS-CDMA) systems, heuristic optimization algorithms for multiuser detection (MUD) include genetic algorithms (GA) and the simulated annealing (SA) algorithm. In this thesis, we use particle swarm optimization (PSO) algorithms to solve the MUD optimization problem. PSO has several advantages, such as fast convergence, low computational complexity, and good performance in searching for the optimum solution. To enhance performance and reduce the number of parameters, we propose two modified PSO algorithms: inertia-weight-controlled PSO (W-PSO) and reduced-parameter PSO (R-PSO). Simulation results show that the performance of the proposed algorithms approaches that of the optimal solution; furthermore, they converge faster and have lower complexity than other conventional algorithms.
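For context, here is the baseline inertia-weighted PSO loop that variants like W-PSO and R-PSO modify. This continuous-valued Python sketch shows only the shared velocity/position update; for MUD, positions would be mapped to ±1 bit vectors and the fitness to the detection metric, details the abstract does not spell out.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Generic PSO minimizer with inertia weight w and acceleration
    coefficients c1 (cognitive) and c2 (social)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_val = np.apply_along_axis(fitness, 1, x)
    g = pbest[np.argmin(pbest_val)]                  # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(fitness, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

best, best_val = pso(lambda z: np.sum(z**2), dim=5)  # toy objective
```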
709

High-performance Low-power Configurable Montgomery Multiplier for RSA Cryptosystems

Chang, Kai-cheng 03 August 2010 (has links)
Communication technology changes rapidly, and the internet plays a very important role in our lives. Through specific protocols, people transform data into 0's and 1's as digital signals and transfer them from sender to receiver via the network. Unfortunately, data transferred through the internet is open to the public, and too much exposure of private data is a serious risk. To avoid this, we can encrypt the data before transmission to guarantee confidentiality and privacy. The RSA encryption system is a simple and highly secure public-key cryptosystem, but its encryption and decryption require many exponentiation and division operations. To ensure the security of the encrypted data, the operands are usually larger than 512 bits. If encryption and decryption are performed in software, real-time applications cannot be satisfied, because software is too slow; for this reason, RSA must be implemented in hardware, and many methods of improving the effectiveness of RSA encryption and decryption hardware have been developed. This research proposes a new modular multiplier architecture, similar to the original Montgomery modular multiplier used in RSA encryption systems, composed of simple adders, shift registers, and multiplexers. We also propose new techniques, Quotient Lookahead and Superfluous Operation Elimination, to further enhance performance. Test results show that our design reduces the total cycle count by 19% and also saves overall energy. Owing to its high performance and energy efficiency, the proposed design is suitable for portable devices with low power requirements.
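The underlying operation is classical radix-2 Montgomery multiplication, which uses only shifts and adds — the structure that maps naturally onto adders, shift registers, and multiplexers. The plain Python sketch below shows the base algorithm only; the thesis's quotient-lookahead and superfluous-operation-elimination refinements are not modeled.

```python
def montgomery_multiply(a, b, n, n_bits):
    """Bit-serial Montgomery multiplication: returns a*b*R^{-1} mod n,
    where R = 2^n_bits, assuming n odd and a, b < n."""
    assert n % 2 == 1, "modulus must be odd"
    t = 0
    for i in range(n_bits):
        t += ((a >> i) & 1) * b        # conditionally add b for this bit of a
        if t & 1:                      # make t even so it stays exact when halved
            t += n                     # adding n does not change t mod n
        t >>= 1                        # divide by 2 (i.e., multiply by 2^{-1} mod n)
    if t >= n:                         # single final correction suffices
        t -= n
    return t

# Check against the definition a*b*R^{-1} mod n
a, b, n, k = 11, 17, 23, 8
R_inv = pow(1 << k, -1, n)             # modular inverse of R = 2^k (Python 3.8+)
assert montgomery_multiply(a, b, n, k) == (a * b * R_inv) % n
```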
710

CUDA-Based Modified Genetic Algorithms for Solving Fuzzy Flow Shop Scheduling Problems

Huang, Yi-chen 23 August 2010 (has links)
The flow shop scheduling problem with fuzzy processing times and fuzzy due dates is investigated in this paper. The concepts of earliness and tardiness are interpreted using the possibility and necessity measures developed in fuzzy set theory, and the objective function is formulated through different combinations of these measures. A genetic algorithm is invoked to tackle these objective functions. A new idea based on the longest common substring is introduced at the best-keeping step; it reduces the number of generations needed to reach the stopping criterion. We also implement the algorithm on CUDA. Numerical experiments show that the performance of the CUDA program on the GPU compares favorably with that of traditional programs on the CPU.
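As a sketch of the building block used at the best-keeping step, here is a dynamic-programming longest common substring of two job sequences in Python. How the shared block is then used to bias the next generation is not detailed in the abstract.

```python
def longest_common_substring(a, b):
    """Longest contiguous run of jobs shared by two schedules a and b
    (tuples of job ids), via the standard O(len(a)*len(b)) DP."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)          # prev[j+1] = common suffix length so far
    for i, x in enumerate(a):
        cur = [0] * (len(b) + 1)
        for j, yj in enumerate(b):
            if x == yj:
                cur[j + 1] = prev[j] + 1           # extend the matching run
                if cur[j + 1] > best_len:
                    best_len, best_end = cur[j + 1], i + 1
        prev = cur
    return a[best_end - best_len:best_end]

# Two parent schedules sharing the ordered block (3, 1, 4)
print(longest_common_substring((2, 3, 1, 4, 5), (3, 1, 4, 2, 5)))  # (3, 1, 4)
```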
