  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

USING SHORT-BLOCK TURBO CODES FOR TELEMETRY AND COMMAND

Wang, Charles C.; Nguyen, Tien M. (October 1999)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The turbo code is a block code, even though a convolutional encoder is used to construct its codewords, and its performance depends on the codeword length. Since the invention of the turbo code in 1993, most bit error rate (BER) evaluations have been performed with large block sizes, i.e., sizes greater than 1,000 or even 10,000 bits. However, for telemetry and command, a relatively short message (<500 bits) may be used. This paper investigates turbo-coded BER performance for short packets. A fading channel is also considered, and biased channel side information is adopted to improve performance.
2

Emulating Variable Block Size Caches

Muthulaxmi, S.
No description available.
3

Evaluation of Idempotency & Block Size of Data on the Performance of Normalized Compression Distance Algorithm

Mandhapati, Venkata Srikanth; Bajwa, Kamran Ali (January 2012)
Normalized compression distance (NCD) is a similarity distance metric used to analyze the type of file fragments. The performance of NCD depends on the underlying compression algorithm. We studied three compressors: bzip2, gzip, and ppmd. In terms of compression ratio, ppmd is better than bzip2 and bzip2 is better than gzip, but we also evaluated which of the three is best from the viewpoint of idempotency. We then applied NCD, together with k-nearest-neighbour classification, to randomly selected public corpus data at different block sizes (512, 1024, 1536, and 2048 bytes). The performance of the two compressors bzip2 and gzip is also compared for the NCD algorithm from the perspective of idempotency. Objectives: In this study we investigated the combined effect of two parameters, compression ratio versus idempotency and varying data block size, on the performance of NCD. The objective is to determine whether, to obtain better NCD performance, a compressor should be selected on the basis of better compression ratio or better idempotency. The purpose of using different block sizes was to evaluate whether the performance of NCD improves when the block size of the data used to build the datasets is varied. Methods: Experiments were performed to test the hypotheses and to evaluate the effect of compression ratio versus idempotency and of data block size on the performance of NCD.
Results: The null hypotheses of the main experiment were retained, indicating no statistically significant difference in NCD performance when the data block size is varied, and no statistically significant difference when a compressor is selected for NCD on the basis of better compression ratio rather than better idempotency. Conclusions: Since the experiments were unable to reject the null hypotheses of the main experiment, no effect of the independent variables on the dependent variable could be established; that is, there is no statistically significant effect of compression ratio versus idempotency, or of varying data block size, on the performance of NCD.
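The two quantities this entry compares can be sketched in a few lines. Below is a minimal illustration of the standard NCD formula, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), and of an idempotency gap in the sense C(xx) ≈ C(x), using gzip as the compressor; the sample blocks and the gap measure are illustrative assumptions, not data from the thesis.

```python
import gzip

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance with gzip as the compressor C."""
    cx = len(gzip.compress(x))
    cy = len(gzip.compress(y))
    cxy = len(gzip.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def idempotency_gap(x: bytes) -> float:
    """Relative deviation from ideal idempotency, where C(xx) = C(x)."""
    cx = len(gzip.compress(x))
    cxx = len(gzip.compress(x + x))
    return (cxx - cx) / cx

# Illustrative fragments (assumptions, not the thesis corpus):
block = b"header" * 200          # a repetitive ~1200-byte fragment
other = bytes(range(256)) * 5    # a structurally different fragment

# Similar fragments give a distance near 0; dissimilar ones near 1.
print(ncd(block, block), ncd(block, other), idempotency_gap(block))
```

A compressor with a smaller idempotency gap keeps NCD(x, x) closer to 0, which is why the thesis treats idempotency, and not only compression ratio, as a candidate selection criterion.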
4

Hadoop Read Performance During Datanode Crashes / Hadoops läsprestanda vid datanodkrascher

Johannsen, Fabian; Hellsing, Mattias (January 2016)
This bachelor thesis evaluates the impact of datanode crashes on the performance of read operations in the Hadoop Distributed File System (HDFS). The goal is to better understand how datanode crashes, as well as certain parameters, affect read performance by measuring the execution time of the get command. The parameters used are the number of crashed nodes, the block size and the file size. Data was collected by setting up a Linux test environment of ten virtual machines with Hadoop installed and running tests on it. From this data the average execution time and standard deviation of the get command were calculated; the network activity during the tests was also measured. The results showed that neither the number of crashed nodes nor the block size had any significant effect on the execution time. They also showed that the execution time of the get command is not directly proportional to the size of the fetched file: fetching a four times larger file sometimes took more than four times as long, with execution times up to 4.5 times as long. However, the consequences of a datanode crash while fetching a small file appear to be much greater than with a large file: the average execution time increased by up to 36% when a large file was fetched, but by as much as 85% when fetching a small file.
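The core measurement described here, repeatedly timing a fetch command and computing the mean and standard deviation, can be sketched as below. The thesis's actual harness is not shown in the abstract; the function, paths, and the stand-in command are assumptions. In the HDFS setting, cmd would be something like ["hdfs", "dfs", "-get", "/bench/file", "/tmp/out"].

```python
import statistics
import subprocess
import sys
import time

def time_get(cmd: list[str], runs: int = 5) -> tuple[float, float]:
    """Run cmd `runs` times; return (mean, stdev) of wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Stand-in no-op command so the sketch is runnable without a Hadoop cluster.
mean_s, stdev_s = time_get([sys.executable, "-c", "pass"])
print(f"mean={mean_s:.3f}s stdev={stdev_s:.3f}s")
```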
5

Analysis, Implementation and Evaluation of Direction Finding Algorithms using GPU Computing / Analys, implementering och utvärdering av riktningsbestämningsalgoritmer på GPU

Andersdotter, Regina (January 2022)
Direction Finding (DF) algorithms are used by the Swedish Defence Research Agency (FOI) in the context of electronic warfare against radio. Parallelizing these algorithms using a Graphics Processing Unit (GPU) might improve performance, and thereby increase military support capabilities. This thesis selects the DF algorithms Correlative Interferometer (CORR), Multiple Signal Classification (MUSIC) and Weighted Subspace Fitting (WSF), and examines to what extent a GPU implementation of these algorithms is suitable by analysing, implementing and evaluating them. First, six general criteria for GPU suitability are formulated. The three algorithms are then analyzed against these criteria, showing that MUSIC and WSF are both 58% suitable, closely followed by CORR at 50%. MUSIC is selected for implementation, and an open source implementation is extended into three versions: a multicore CPU version, a GPU version (with the Eigenvalue Decomposition (EVD) and pseudospectrum calculation performed on the GPU), and a MIXED version (with only the pseudospectrum calculation on the GPU). These versions are evaluated for angle resolutions between 1° and 0.025° and CUDA block sizes between 8 and 1024. The GPU version is found to be faster than the CPU version for angle resolutions above 0.1°, with the largest measured speedup being 1.4 times; the block size has no large impact on the total runtime. In conclusion, the overall results indicate that implementing MUSIC using GPU computing is not entirely suitable, yet somewhat beneficial for large angle resolutions.
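The two stages this entry names, the eigenvalue decomposition of the sample covariance and the pseudospectrum scan, are the textbook MUSIC pipeline. Below is a minimal CPU sketch with NumPy for a uniform linear array; the array geometry, half-wavelength spacing, SNR, and grid are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array.

    X: (n_antennas, n_snapshots) complex snapshot matrix.
    d: element spacing in wavelengths (half-wavelength assumed).
    """
    n_ant = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance (EVD stage)
    _, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = eigvecs[:, : n_ant - n_sources]     # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):     # pseudospectrum scan stage
        a = np.exp(2j * np.pi * d * np.arange(n_ant) * np.sin(theta))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Illustrative scenario: one strong source at 30 deg, 8 elements, 200 snapshots.
rng = np.random.default_rng(0)
n_ant, snaps, true_angle = 8, 200, 30.0
a = np.exp(2j * np.pi * 0.5 * np.arange(n_ant) * np.sin(np.deg2rad(true_angle)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.1 * (rng.standard_normal((n_ant, snaps))
               + 1j * rng.standard_normal((n_ant, snaps)))
X = np.outer(a, s) + noise

grid = np.arange(-90.0, 90.0, 1.0)           # 1 deg resolution, coarsest in the thesis
peak = grid[np.argmax(music_spectrum(X, n_sources=1, angles_deg=grid))]
print(peak)
```

The per-angle steering-vector projections are independent, which is what makes the pseudospectrum scan the natural candidate for GPU parallelization in the MIXED version described above.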
