
Performance Analysis of JavaScript

Smedberg, Fredrik, January 2010
In the last decade, web browsers have seen a remarkable increase in performance, especially in their JavaScript engines. JavaScript has gone from being a slow and rather limited language to being feature-rich and fast. Its speed can be roughly equal to, or half of, that of comparable code written in C++, but this speed depends directly on the choice of web browser, and the best performance is seen in browsers using JIT compilation techniques.

Even though the language has seen a dramatic increase in performance, there are still major problems regarding memory usage. JavaScript applications typically consume 3-4 times more memory than similar applications written in C++. Many browser vendors, like Opera Software, acknowledge this and are currently trying to optimize their memory usage. This issue will hopefully disappear in the near future.

Because the majority of scientific papers written about JavaScript only compare performance using the industry benchmarks SunSpider and V8, this thesis widens the scope. Those benchmarks give no information about how JavaScript compares to C#, C++ and other popular languages. To make that comparison, I have implemented a GIF decoder, an XML parser and various elementary tests in both JavaScript and C++ to measure how far apart the languages are in terms of speed, memory usage and responsiveness.
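The abstract does not reproduce the test code. As a rough illustration of the kind of elementary timing test it describes, a minimal C++ harness might look like the sketch below; the workload, iteration count and names are placeholders, not taken from the thesis.

```cpp
#include <chrono>
#include <iostream>
#include <vector>

// Hypothetical "elementary test": summing a large vector, similar in spirit to
// the micro-benchmarks the thesis runs in both JavaScript and C++.
static long long sum_vector(const std::vector<int>& v) {
    long long total = 0;
    for (int x : v) total += x;
    return total;
}

int main() {
    std::vector<int> data(10'000'000, 1);          // placeholder workload size

    auto start = std::chrono::steady_clock::now();
    long long result = sum_vector(data);
    auto stop = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    std::cout << "result=" << result << " time=" << ms << " ms\n";
    return 0;
}
```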

On the design of fast and efficient wavelet image coders with reduced memory usage

Oliver Gil, José Salvador, 06 May 2008
Image compression is of great importance in multimedia systems and applications because it drastically reduces bandwidth requirements for transmission and memory requirements for storage. Although earlier standards for image compression were based on the Discrete Cosine Transform (DCT), a more recently developed mathematical technique, the Discrete Wavelet Transform (DWT), has been found to be more efficient for image coding. Despite improvements in compression efficiency, wavelet image coders significantly increase memory usage and complexity when compared with DCT-based coders. A major reason for the high memory requirements is that the usual algorithm to compute the wavelet transform requires the entire image to be in memory. Although some proposals reduce the memory usage, they present problems that hinder their implementation. In addition, some wavelet image coders, like SPIHT (which has become a benchmark for wavelet coding), always need to hold the entire image in memory. Regarding the complexity of the coders, SPIHT can be considered quite complex because it performs bit-plane coding with multiple image scans. The wavelet-based JPEG 2000 standard is still more complex because it improves coding efficiency through time-consuming methods, such as an iterative optimization algorithm based on the Lagrange multiplier method, and high-order context modeling. In this thesis, we aim to reduce memory usage and complexity in wavelet-based image coding, while preserving compression efficiency. To this end, a run-length encoder and a tree-based wavelet encoder are proposed. In addition, a new algorithm to efficiently compute the wavelet transform is presented. This algorithm achieves low memory consumption using line-by-line processing, and it employs recursion to automatically determine the order in which the wavelet transform is computed, solving some synchronization problems that have not been tackled by previous proposals. The proposed encode… / Oliver Gil, JS. (2006). On the design of fast and efficient wavelet image coders with reduced memory usage [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1826
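The thesis' line-based transform and its recursive ordering are not detailed in the abstract. Purely as an illustration of the building block involved, the sketch below shows one 1-D level of the reversible LeGall 5/3 lifting wavelet of the kind a line-based DWT applies to each image row; the boundary handling and names are assumptions, not the thesis' algorithm.

```cpp
#include <cstdio>
#include <vector>

// One 1-D level of the reversible LeGall 5/3 lifting transform, the kind of
// row transform a line-based DWT applies as each image line becomes available.
// Simple symmetric extension at the boundaries; even-length, non-negative
// pixel values assumed.
void dwt53_line(const std::vector<int>& x, std::vector<int>& low, std::vector<int>& high) {
    const int n = static_cast<int>(x.size());
    const int half = n / 2;
    high.resize(half);
    low.resize(half);

    // Predict step: detail (high-pass) coefficients.
    for (int i = 0; i < half; ++i) {
        int left = x[2 * i];
        int right = (2 * i + 2 < n) ? x[2 * i + 2] : x[2 * i];  // symmetric extension
        high[i] = x[2 * i + 1] - ((left + right) >> 1);
    }
    // Update step: approximation (low-pass) coefficients.
    for (int i = 0; i < half; ++i) {
        int dprev = (i > 0) ? high[i - 1] : high[i];            // symmetric extension
        low[i] = x[2 * i] + ((dprev + high[i] + 2) >> 2);
    }
}

int main() {
    std::vector<int> line{10, 12, 14, 13, 11, 9, 8, 8};
    std::vector<int> low, high;
    dwt53_line(line, low, high);
    for (int v : low)  std::printf("L=%d ", v);
    for (int v : high) std::printf("H=%d ", v);
    std::printf("\n");
    return 0;
}
```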

Efficient Minimum Cycle Mean Algorithms And Their Applications

Supriyo Maji (9158723), 23 July 2020
Minimum cycle mean (MCM) is an important concept in directed graphs. From clock period optimization and timing analysis to layout optimization, minimum cycle mean algorithms have found widespread use in VLSI system design optimization. With transistor sizes scaling to 10nm and below, the complexity and size of these systems have grown rapidly over the last decade. Scalability of the algorithms, both in terms of runtime and memory usage, is therefore important.

Among the few classical MCM algorithms, the algorithm by Young, Tarjan, and Orlin (YTO) has been particularly popular. When implemented with a binary heap, the YTO algorithm has the best runtime performance, although it has higher asymptotic time complexity than Karp's algorithm. However, as an efficient implementation of YTO relies on data redundancy, its memory usage is higher and can be a prohibitive factor in large problems. On the other hand, a typical implementation of Karp's algorithm can also be memory hungry. An early-termination technique from Hartmann and Orlin (HO) can be directly applied to Karp's algorithm to improve its runtime performance and memory usage. Although not as efficient as YTO in runtime, the HO algorithm uses much less memory than YTO. We propose several improvements to the HO algorithm. The proposed algorithm has runtime performance comparable to YTO for circuit graphs and dense random graphs, while using less memory than the HO algorithm.

Minimum balancing of a directed graph is an application of the minimum cycle mean algorithm. Minimum balance algorithms have been used to optimally distribute slack for mitigating process-variation-induced timing violations in clock networks. In a conventional minimum balance algorithm, the principal subroutine is finding the MCM of a graph. In particular, the minimum balance algorithm iteratively finds the minimum cycle mean and the corresponding minimum-mean cycle, and uses the mean and cycle to update the graph by changing edge weights and reducing the graph size. The iterations terminate when the updated graph is a single node. Studies have shown that the bottleneck of the iterative process is the graph update operation, as previous approaches involved updating the entire graph. We propose an improvement to the minimum balance algorithm by performing fewer changes to the edge weights in each iteration, resulting in better efficiency.

We also apply the minimum cycle mean algorithm to latency-insensitive system design. Timing violations can occur in high-performance communication links in systems-on-chip (SoCs) in the late stages of the physical design process. To address these issues, latency-insensitive systems (LISs) employ pipelining in the communication channels through insertion of relay stations. Although the functionality of a LIS is robust with respect to communication latencies, such insertion can degrade system throughput. Earlier studies have shown that proper sizing of the buffer queues after relay station insertion can eliminate this performance loss. However, solving the problem of maximum-performance buffer queue sizing requires mixed integer linear programming (MILP), whose runtime is not scalable. We formulate the problem as a parameterized graph optimization problem where, for every communication channel, there is a parameterized edge with the buffer count as the edge weight. We then use the minimum cycle mean algorithm to determine from which edges buffers can be removed safely without creating negative cycles. This is done iteratively, in a similar style to the minimum balance algorithm. Experimental results suggest that the proposed approach is scalable. Moreover, the quality of the solution is observed to be as good as that of the MILP-based approach.
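Karp's algorithm, which the Hartmann-Orlin early-termination technique builds on, can be sketched in a few lines. The following is a minimal, illustrative C++ version (fixed source, edge-list input; not the improved algorithm proposed in the thesis), shown mainly because its O(n·m) time and O(n²) table make the memory concerns above concrete.

```cpp
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

struct Edge { int from, to; double w; };

// Karp's minimum cycle mean. Assumes every vertex is reachable from vertex 0
// (e.g., a strongly connected graph). Returns +infinity if no cycle exists.
// O(n*m) time and O(n^2) memory -- the memory cost the abstract contrasts
// with the Hartmann-Orlin early-termination variant.
double karp_mcm(int n, const std::vector<Edge>& edges) {
    const double INF = std::numeric_limits<double>::infinity();
    // d[k][v] = minimum weight of a walk with exactly k edges from vertex 0 to v.
    std::vector<std::vector<double>> d(n + 1, std::vector<double>(n, INF));
    d[0][0] = 0.0;

    for (int k = 1; k <= n; ++k)
        for (const Edge& e : edges)
            if (d[k - 1][e.from] < INF)
                d[k][e.to] = std::min(d[k][e.to], d[k - 1][e.from] + e.w);

    // mu* = min over v of max over k of (d[n][v] - d[k][v]) / (n - k).
    double mcm = INF;
    for (int v = 0; v < n; ++v) {
        if (d[n][v] == INF) continue;
        double worst = -INF;
        for (int k = 0; k < n; ++k)
            if (d[k][v] < INF)
                worst = std::max(worst, (d[n][v] - d[k][v]) / (n - k));
        mcm = std::min(mcm, worst);
    }
    return mcm;
}

int main() {
    // Small 3-cycle with mean (1 + 2 + 3) / 3 = 2.
    std::vector<Edge> g{{0, 1, 1.0}, {1, 2, 2.0}, {2, 0, 3.0}};
    std::cout << karp_mcm(3, g) << "\n";
    return 0;
}
```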

Elliptic Curve Cryptography for Lightweight Applications.

Hitchcock, Yvonne Roslyn, January 2003
Elliptic curves were first proposed as a basis for public key cryptography in the mid-1980s. They provide public key cryptosystems based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), so called because of its similarity to the discrete logarithm problem (DLP) over the integers modulo a large prime. One benefit of elliptic curve cryptosystems (ECCs) is that they can use a much shorter key length than other public key cryptosystems to provide an equivalent level of security. For example, 160-bit ECCs are believed to provide about the same level of security as 1024-bit RSA. Also, the level of security provided by an ECC increases faster with key size than for integer-based discrete logarithm (DL) or RSA cryptosystems. ECCs can also provide a faster implementation than RSA or DL systems, and use less bandwidth and power. These issues can be crucial in lightweight applications such as smart cards. In the last few years, ECCs have been included or proposed for inclusion in internationally recognized standards. Thus elliptic curve cryptography is set to become an integral part of lightweight applications in the immediate future.

This thesis presents an analysis of several important issues for ECCs on lightweight devices. It begins with an introduction to elliptic curves and the algorithms required to implement an ECC. It then gives an analysis of the speed, code size and memory usage of various possible implementation options. Enough detail is presented to enable an implementer to choose the algorithms that give the greatest speed whilst conforming to the code size and RAM restrictions of a particular lightweight device. Recommendations are made for new functions to be included on coprocessors for lightweight devices to support ECC implementations.

Another issue of concern for implementers is the side-channel attacks that have recently been proposed. They obtain information about the cryptosystem by measuring side-channel information, such as power consumption and processing time, and this information is then used to break implementations that have not incorporated appropriate defences. A new method of defence to protect an implementation from the simple power analysis (SPA) method of attack is presented in this thesis. It requires 44% fewer additions and 11% more doublings than the commonly recommended defence of performing a point addition in every loop of the binary scalar multiplication algorithm. The algorithm is a contribution to the current range of possible SPA defences, offering good speed and low memory usage.

Another topic of paramount importance to ECCs for lightweight applications is whether the security of fixed curves is equivalent to that of random curves. Because lightweight devices cannot generate secure random curves, fixed curves are used in such devices. These curves provide the additional advantage of requiring less bandwidth, code size and processing time. However, it is intuitively obvious that a large precomputation to aid in breaking the elliptic curve discrete logarithm problem (ECDLP) can be made for a fixed curve, which would be unavailable for a random curve. Therefore, it would appear that fixed curves are less secure than random curves, but quantifying the loss of security is much more difficult. The thesis examines fixed-curve security taking this observation into account, and includes a definition of equivalent security and an analysis of a variation of Pollard's rho method in which computations from solutions of previous ECDLPs can be used to solve subsequent ECDLPs on the same curve. A lower bound on the expected time to solve such ECDLPs using this method is presented, as well as an approximation of the expected time remaining to solve an ECDLP when a given size of precomputation is available. It is concluded that adding a total of 11 bits to the size of a fixed curve provides an equivalent level of security compared to random curves.

The final part of the thesis deals with proofs of security of key exchange protocols in the Canetti-Krawczyk proof model. This model has been used because it offers the advantage of a modular proof with reusable components. Firstly, a password-based authentication mechanism and its security proof are discussed, followed by an analysis of the use of the authentication mechanism in key exchange protocols. The Canetti-Krawczyk model is then used to examine secure tripartite (three-party) key exchange protocols. Tripartite key exchange protocols are particularly suited to ECCs because of the availability of bilinear mappings on elliptic curves, which allow more efficient tripartite key exchange protocols.
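The "point addition in every loop" baseline that the abstract refers to is the classic double-and-add-always countermeasure. Purely as an illustrative sketch (toy curve over a tiny prime field with made-up parameters; not the thesis' new defence and not production-grade constant-time code), it looks roughly like this in C++:

```cpp
#include <cstdint>
#include <iostream>

// Toy illustration of the "add-always" SPA countermeasure: binary scalar
// multiplication performing one doubling and one addition in every iteration,
// so the operation sequence does not depend on the key bits. The curve
// y^2 = x^3 + 2x + 3 over GF(97) is purely illustrative; real ECC uses
// standardized curves and constant-time field arithmetic.
constexpr int64_t P = 97, A = 2;

int64_t mod(int64_t x) { return ((x % P) + P) % P; }

int64_t inv_mod(int64_t x) {                       // Fermat inverse: x^(P-2) mod P
    int64_t r = 1, base = mod(x), e = P - 2;
    while (e) { if (e & 1) r = mod(r * base); base = mod(base * base); e >>= 1; }
    return r;
}

struct Point { int64_t x, y; bool inf; };          // inf == true: point at infinity
const Point O{0, 0, true};

Point add(const Point& p, const Point& q) {
    if (p.inf) return q;
    if (q.inf) return p;
    if (p.x == q.x && mod(p.y + q.y) == 0) return O;             // p + (-p) = O
    int64_t lam = (p.x == q.x)
        ? mod(mod(3 * p.x * p.x + A) * inv_mod(mod(2 * p.y)))    // doubling
        : mod(mod(q.y - p.y) * inv_mod(mod(q.x - p.x)));         // addition
    int64_t x3 = mod(lam * lam - p.x - q.x);
    return {x3, mod(lam * (p.x - x3) - p.y), false};
}

// Double-and-add-always: the addition is always performed, but its result is
// kept only when the current key bit is 1.
Point scalar_mul_add_always(int64_t k, Point p) {
    Point r0 = O;
    for (int i = 62; i >= 0; --i) {
        r0 = add(r0, r0);                // always double
        Point r1 = add(r0, p);           // always add
        if ((k >> i) & 1) r0 = r1;       // conditional move of the result
    }
    return r0;
}

int main() {
    Point g{3, 6, false};                // (3, 6) lies on y^2 = x^3 + 2x + 3 mod 97
    Point q = scalar_mul_add_always(13, g);
    std::cout << "13*G = (" << q.x << ", " << q.y << ")\n";
    return 0;
}
```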
