711

High-performance Radix-4 Montgomery Modular Multiplier for RSA Cryptosystem

Hsu, Hong-Yi 30 August 2011 (has links)
Thanks to the development of the Internet in recent years, e-commerce applications have become increasingly common, and personal information must be protected from leaking during transactions. Research on network security has therefore become increasingly important. It is well known that encryption can be applied to strengthen network security. The RSA algorithm is an asymmetric (public-key) cryptosystem commonly used on the network; it derives its two keys from two large prime numbers and uses them to encrypt and decrypt. These keys are called the public key and the private key, and the key length is at least 512 bits. Since RSA is a public-key scheme, the only way to decrypt is with the private key, and as long as the private key is not revealed, it is very difficult to recover it from the public key, even by reverse engineering. RSA can therefore be regarded as a very secure encryption and decryption algorithm. However, because the key length must exceed 512 bits to ensure security, executing RSA encryption and decryption in software is slow and may not satisfy real-time requirements, so RSA must be implemented as a hardware circuit to meet real-time demands on the network.

Modular exponentiation (i.e., M^E mod N) in the RSA cryptosystem is usually achieved by repeated modular multiplications on large integers. A famous approach to implementing modular multiplication in hardware is the Montgomery modular multiplication algorithm, which replaces trial division by the modulus with a series of addition and shift operations. However, a large number of clock cycles is still required to complete one modular multiplication: for 512-bit operands, the Montgomery algorithm takes 512 clock cycles to complete one A·B mod N, so one modular exponentiation M^E mod N in the RSA cryptosystem needs on the order of 512·512 clock cycles. To counter this disadvantage, we employ a radix-4 algorithm that reduces the clock cycle count of each A·B mod N by 50%. In addition, we modify the conventional architecture implementing the radix-4 algorithm to reduce its critical path delay, improving performance further. Experimental results show that the proposed 1024-bit radix-4 modular multiplier (Our-Booth-Radix-4), before pipelining, is 70% faster than the radix-2 multiplier with 24% area overhead, and 20% faster than the traditional radix-4 modular multiplier with 12% less area. Its area-time (AT) product is therefore smaller than those of previous architectures.
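The 512-cycle figure above comes from the bit-serial (radix-2) form of Montgomery's algorithm, which retires one multiplier bit per clock. Below is a minimal software sketch of that radix-2 recurrence, with a toy 8-bit usage example; this is a generic illustration of the algorithm, not the thesis's radix-4 hardware design, and the parameter values are illustrative only.

```python
def montgomery_multiply(a, b, n, k):
    """Radix-2 Montgomery multiplication: returns a*b*2^(-k) mod n.
    Requires n odd and 0 <= a, b < n < 2^k. One loop iteration models
    one clock cycle of the bit-serial hardware (k cycles for k-bit operands)."""
    result = 0
    for i in range(k):
        if (a >> i) & 1:      # add b when bit i of the multiplier is set
            result += b
        if result & 1:        # add n to make the sum even (n is odd)
            result += n
        result >>= 1          # exact division by 2: the hardware shift
    return result - n if result >= n else result


# Computing A*B mod N via the Montgomery domain (toy example; real RSA uses k >= 512):
n, k = 239, 8
a, b, r = 77, 123, 1 << k
a_m, b_m = (a * r) % n, (b * r) % n          # map into the Montgomery domain
p_m = montgomery_multiply(a_m, b_m, n, k)    # = a*b*r mod n
assert montgomery_multiply(p_m, 1, n, k) == (a * b) % n  # map back out
```

A radix-4 design retires two multiplier bits (one base-4 digit, possibly Booth-recoded) per cycle, which is where the 50% cycle-count reduction cited above comes from.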
712

The Optimization Analysis on Dual Input Transmission Mechanisms of Wind Turbines

Yang, Chung-hsuan 18 July 2012 (has links)
The dynamic power flow in a dual-input parallel planetary gear train system is simulated in this study. Different wind powers from small wind turbines are merged into a single synchronous generator to simplify the system and reduce its cost. Nonlinear equations of motion of the gears in the planetary system are derived, and the fourth-order Runge-Kutta method is employed to calculate the time-varying torque, root stress, and Hertz contact stress between engaged gears. A genetic optimization method is also applied to derive optimized tooth form factors, e.g., the module and the tooth face width.

The dynamic power flow patterns in this dual-input system under various input conditions, e.g., two equal or unequal input powers, or only a single available input power, have been simulated and illustrated, and the corresponding dynamic stress and safety factor variations have been explored. Numerical results reveal that the proposed dual-input planetary gear system is feasible. To improve the efficiency of this wind power generation system, a variable-inertia flywheel has been added at the output end to store or release kinetic energy at higher or lower wind speeds. A variable-magnetic-density synchronous generator has also been studied in this work to investigate possible efficiency improvements. Numerical results indicate that the variable-inertia flywheel and variable-magnetic-density generator may offer advantages in power generation.
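The thesis does not reproduce its solver; the following is a generic sketch of the named fourth-order Runge-Kutta step, applied to a placeholder state vector of gear angle and angular velocity. The `gear_dynamics` function merely stands in for the thesis's derived nonlinear equations of motion.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Placeholder model: a single damped torsional mode, y = [theta, omega].
def gear_dynamics(t, y):
    theta, omega = y
    return np.array([omega, -50.0 * theta - 0.1 * omega])

y = np.array([0.01, 0.0])
for step in range(1000):                     # integrate one second at dt = 1 ms
    y = rk4_step(gear_dynamics, step * 1e-3, y, 1e-3)
```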
713

Observation on the local structural transformation of amorphous zinc oxide during the heating process by molecular dynamics

Tsai, Jen-Yu 15 August 2012 (has links)
In this study, we employ molecular statics to construct amorphous zinc oxide structures. First, we use the Basin-Hopping algorithm to locate a number of the higher-energy structures among all locally stable structures, which correspond to different crystalline/amorphous ratios of zinc oxide, and then classify each structure by its radial distribution function. In addition, we use the coordination number to analyse the interatomic bond lengths and bond angles in the structures. Furthermore, we employ molecular dynamics to heat the amorphous zinc oxide structures and use the distributions of coordination number, bond length, and bond angle between zinc and oxygen atoms to analyse how the local structure of amorphous zinc oxide changes during the heating process.
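As an illustration of the classification step, a radial distribution function g(r) can be histogrammed from atomic positions roughly as below. This is a sketch assuming a cubic periodic box, not the authors' code; sharp peaks at all ranges indicate crystalline order, while an amorphous structure retains only short-range peaks.

```python
import numpy as np

def radial_distribution(positions, box, dr=0.05):
    """Sketch of g(r) for N atoms in a cubic periodic box of side `box`.
    `positions` is an (N, 3) array of coordinates."""
    n = len(positions)
    edges = np.arange(0.0, box / 2 + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)              # minimum-image convention
        counts += np.histogram(np.linalg.norm(d, axis=1), bins=edges)[0]
    r_mid = edges[:-1] + dr / 2
    shell = 4 * np.pi * r_mid**2 * dr             # ideal shell volume at r_mid
    rho = n / box**3                              # number density
    return r_mid, 2 * counts / (n * rho * shell)  # 2x: each pair counted once
```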
714

Algorithms for VLSI Circuit Optimization and GPU-Based Parallelization

Liu, Yifang 2010 May 1900 (has links)
This research addresses several critical challenges in VLSI design automation: sophisticated solution search on DAG topologies, simultaneous multi-stage design optimization, optimization of multi-scenario and multi-core designs, and GPU-based parallel computing for runtime acceleration.

Discrete optimization for VLSI design automation problems is often quite complex, due to the inconsistency and interference between solutions on reconvergent paths in a directed acyclic graph (DAG). This research proposes a systematic solution search guided by a global view of the solution space. The key idea is joint relaxation and restriction (JRR), which is similar in spirit to mathematical relaxation techniques such as Lagrangian relaxation: relaxation and restriction together provide a global view and iteratively improve the solution.

Traditionally, circuit optimization is carried out in a sequence of separate optimization stages. The problem with sequential optimization is that the best solution at one stage may be worse for another. To overcome this difficulty, we perform multiple optimization techniques simultaneously: by searching the combined solution space of multiple techniques, a broader view of the problem leads to a better overall result. This research takes this approach on two problems, namely simultaneous technology mapping and cell placement, and simultaneous gate sizing and threshold voltage assignment.

Modern processors have multiple working modes that trade off power consumption against performance, or maintain a certain performance level in a power-efficient way. As a result, a circuit design needs to accommodate different scenarios, such as different supply voltage settings. This research handles this multi-scenario optimization problem with the Lagrangian relaxation technique: multiple scenarios are balanced simultaneously through Lagrangian multipliers, and multiple objectives and constraints are likewise handled by the relaxation. This research proposes a new method to calculate the subgradients of the Lagrangian function and to solve the Lagrangian dual problem more effectively.

Multi-core architectures also pose new problems and challenges for design automation. For example, multiple cores on the same chip may share an identical design in some parts while differing in the rest. In the case of buffer insertion, the identical parts have to be carefully optimized for all cores under different environmental parameters, a problem of much higher complexity than buffer insertion on a single core. This research proposes an algorithm that optimizes the buffering solution for multiple cores simultaneously, based on critical component analysis.

Under intensifying time-to-market pressure, circuit optimization not only needs to find high-quality solutions but also has to produce results quickly. Recent advances in general-purpose graphics processing unit (GPGPU) technology provide massive parallel computing power. This research turns the complex computation of circuit optimization into many subtasks processed by parallel threads; the proposed task partitioning and scheduling methods exploit the GPU's computing power to achieve significant speedup without sacrificing solution quality.
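For context, the Lagrangian dual machinery referenced above is conventionally solved by projected subgradient ascent; a generic sketch of that loop follows (this is the standard scheme, not the dissertation's improved subgradient method, and the callback names are placeholders).

```python
def lagrangian_dual_ascent(solve_relaxed, constraint_slack, lam0,
                           steps=100, step0=1.0):
    """Generic projected subgradient ascent on a Lagrangian dual.
    solve_relaxed(lam)  -> primal x minimizing f(x) + lam . g(x)
    constraint_slack(x) -> the vector g(x), a subgradient of the dual at lam"""
    lam = list(lam0)
    for t in range(1, steps + 1):
        x = solve_relaxed(lam)
        g = constraint_slack(x)                  # subgradient direction
        lam = [max(0.0, l + (step0 / t) * gi)    # diminishing step size,
               for l, gi in zip(lam, g)]         # projected onto lam >= 0
    return lam
```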
715

Capacity Proportional Unstructured Peer-to-Peer Networks

Reddy, Chandan Rama 2009 August 1900 (has links)
Existing methods to utilize capacity-heterogeneity in a P2P system either rely on constructing special overlays with capacity-proportional node degree or use topology adaptation to match a node's capacity with that of its neighbors. In existing P2P networks, which are often characterized by diverse node capacities and high churn, these methods may require large node degree or continuous topology adaptation, potentially making them infeasible due to their high overhead. In this thesis, we propose an unstructured P2P system that attempts to address these issues. We first prove that the overall throughput of search queries in a heterogeneous network is maximized if and only if traffic load through each node is proportional to its capacity. Our proposed system achieves this traffic distribution by biasing search walks using the Metropolis-Hastings algorithm, without requiring any special underlying topology. We then define two saturation metrics for measuring the performance of overlay networks: one for quantifying their ability to support random walks and the second for measuring their potential to handle the overhead caused by churn. Using simulations, we finally compare our proposed method with Gia, an existing system which uses topology adaptation, and find that the former performs better under all studied conditions, both saturation metrics, and such end-to-end parameters as query success rate, latency, and query-hits for various file replication schemes.
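A sketch of how a Metropolis-Hastings bias gives a search walk a capacity-proportional stationary distribution on an arbitrary overlay is shown below. It assumes uniform-neighbor proposals and is an illustration of the named technique, not the thesis's full protocol.

```python
import random

def capacity_biased_walk(neighbors, capacity, start, steps):
    """Metropolis-Hastings random walk whose stationary distribution is
    proportional to node capacity. `neighbors` maps node -> neighbor list;
    `capacity` maps node -> positive capacity."""
    node = start
    for _ in range(steps):
        cand = random.choice(neighbors[node])    # propose a uniform neighbor
        # Acceptance ratio for target pi(v) ~ capacity[v] with
        # uniform-neighbor proposals: min(1, [c(j)/deg(j)] / [c(i)/deg(i)]).
        ratio = (capacity[cand] * len(neighbors[node])) / \
                (capacity[node] * len(neighbors[cand]))
        if random.random() < min(1.0, ratio):
            node = cand                          # accept; otherwise stay put
    return node
```

By detailed balance, a node's visit frequency (and hence its traffic load) converges to a share proportional to its capacity, regardless of the underlying topology.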
716

An experimental comparison of wireless position locating algorithms based on received signal strength

Gutierrez, Felix 2008 December 1900 (has links)
This thesis presents and discusses research on locating wireless devices. Several algorithms have been developed to determine the physical location of a wireless device, and a subset of these algorithms rely only on received signal strength (RSS). Two of the most promising RSS-based algorithms are the LC and dwMDS algorithms; however, each has only been tested via computer simulations with different environmental parameters. To determine which algorithm performs better (i.e., produces estimates closer to the true location of the wireless device), a fair comparison needs to be made using the same data set. The goal of this research is to compare the performance of these two algorithms using not only the same data set, but data collected from the field. An extensive measurement campaign in different environments provided a vast amount of input data for these algorithms. Both algorithms are evaluated in one-dimensional (straight-line) and two-dimensional (grid) settings. In total, six environments were used to test the algorithms, three for each setting. The results show that, on average, the LC algorithm outperforms dwMDS in most of the environments. Since the same data was used as input for each algorithm, the comparison is fair and gives no unfair advantage to either algorithm. In addition, since the data was taken directly from the field rather than from computer simulations, the results provide a better degree of confidence for a successful real-world implementation.
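RSS-based localization typically starts by mapping a signal-strength reading to a range estimate via the log-distance path-loss model, sketched below. The reference RSS and path-loss exponent here are illustrative placeholders; in practice (as in this thesis's measurement campaign) they would be fitted per environment from measured data.

```python
def rss_to_distance(rss_dbm, rss_ref_dbm=-40.0, path_loss_exp=2.5, d_ref=1.0):
    """Invert the log-distance path-loss model
        RSS(d) = RSS(d_ref) - 10 * n * log10(d / d_ref)
    to estimate range (in the units of d_ref) from a reading in dBm.
    rss_ref_dbm and path_loss_exp are environment-dependent fits."""
    return d_ref * 10 ** ((rss_ref_dbm - rss_dbm) / (10 * path_loss_exp))

# Example: a -65 dBm reading maps to roughly 10 m under these parameters.
print(rss_to_distance(-65.0))
```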
717

Development of Algorithms to Estimate Post-Disaster Population Dislocation--A Research-Based Approach

Lin, Yi-Sz 2009 August 1900 (has links)
This study uses an empirical approach to develop algorithms that estimate population dislocation following a natural disaster. It starts with an empirical reexamination of the South Dade Population Impact Survey data, integrated with the Miami-Dade County tax appraisal data and 1990 block-group census data, to investigate the effects of household and neighborhood socioeconomic characteristics on household dislocation. The empirical analyses found evidence suggesting that households with higher socioeconomic status have a greater tendency to leave their homes following a natural disaster. One of the statistical models from the empirical analysis is then integrated into an algorithm that estimates the probability of household dislocation based on structural damage, housing type, and the percentages of Black and Hispanic population in block groups. This study also develops a population dislocation algorithm using a modified HAZUS (Hazards U.S.) approach that combines the damage-state probabilities proposed by Bai, Hueste, and Gardoni in 2007 with the dislocation factors described in HAZUS to produce structure-level estimates. These algorithms were integrated into MAEviz, the Mid-America Earthquake Center's seismic loss assessment system, to produce post-disaster dislocation estimates at either the structure or block-group level, whichever is appropriate for the user's planning purposes. A sensitivity analysis follows to examine the differences among the estimates produced by the two newly developed algorithms and the HAZUS population dislocation algorithm.
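In form, a dislocation-probability model over damage and block-group composition can be expressed as a logistic regression, sketched below. This is a hypothetical illustration of the model shape only: the coefficient values are placeholders, housing type is omitted for brevity, and the study's actual fitted model may differ.

```python
import math

def dislocation_probability(damage, pct_black, pct_hispanic, coef):
    """P(household dislocates) from a logistic model over structural damage
    (0..1) and block-group composition. All `coef` values are hypothetical."""
    z = (coef["intercept"]
         + coef["damage"] * damage
         + coef["black"] * pct_black
         + coef["hispanic"] * pct_hispanic)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients only, not fitted values from the study:
coef = {"intercept": -2.0, "damage": 4.0, "black": 0.8, "hispanic": 0.6}
p = dislocation_probability(damage=0.5, pct_black=0.3, pct_hispanic=0.2, coef=coef)
```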
718

Measure-Driven Algorithm Design and Analysis: A New Approach for Solving NP-hard Problems

Liu, Yang 2009 August 1900 (has links)
NP-hard problems have numerous applications in fields such as networks, computer systems, and circuit design. However, no efficient algorithms have been found for NP-hard problems, and it is commonly believed that none exist, i.e., that P ≠ NP. Recently, it has been observed that many real-world instances of NP-hard problems contain parameters much smaller than the input size. Over the last twenty years, researchers have therefore been interested in developing efficient algorithms, i.e., fixed-parameter tractable algorithms, for instances with small parameters. Fixed-parameter tractable algorithms can, in practice, find exact solutions to problem instances with small parameters, even though those problems are considered intractable in traditional computational theory.

In this dissertation, we propose a new approach to algorithm design and analysis: discovering better measures for problems. In particular, we use two measures, instead of the traditional single measure (the input size), to design algorithms and analyze their time complexity. For several classical NP-hard problems, we present improved algorithms designed and analyzed with this new approach. First, we show that the new approach is extremely powerful for designing fixed-parameter tractable algorithms by presenting improved fixed-parameter tractable algorithms for the 3D-matching and 3D-packing problems, the multiway cut problem, the feedback vertex set problems on both directed and undirected graphs, and the max-leaf problems on both directed and undirected graphs. Most of our algorithms are practical for problem instances with small parameters. Moreover, we show that this new approach is also effective for designing exact algorithms (with no parameters) for NP-hard problems by presenting an improved exact algorithm for the well-known satisfiability problem. Our results demonstrate the power of this new approach to algorithm design and analysis for NP-hard problems. Finally, we discuss possible future directions for this and other approaches to algorithm design and analysis.
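To make "fixed-parameter tractable" concrete, below is the textbook bounded-search-tree algorithm for k-Vertex Cover, which runs in O(2^k · m) time, exponential only in the small parameter k rather than the input size. This classic example is for illustration and is not one of the dissertation's algorithms.

```python
def vertex_cover_fpt(edges, k):
    """Return a vertex cover of size <= k as a set, or None if none exists.
    Branching rule: any cover must contain an endpoint of any uncovered
    edge, so try both endpoints and recurse with budget k - 1."""
    if not edges:
        return set()
    if k == 0:
        return None                    # budget exhausted but edges remain
    u, v = edges[0]
    for pick in (u, v):
        rest = [e for e in edges if pick not in e]
        sub = vertex_cover_fpt(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

# Example: the path a-b-c-d has a cover of size 2 but none of size 1.
assert vertex_cover_fpt([("a", "b"), ("b", "c"), ("c", "d")], 2) is not None
assert vertex_cover_fpt([("a", "b"), ("b", "c"), ("c", "d")], 1) is None
```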
719

Adaptive Control of Third Harmonic Generation via Genetic Algorithm

Hua, Xia 2010 August 1900 (has links)
A genetic algorithm is often used to find the global optimum in a multi-dimensional search problem. Inspired by the process of natural evolution, the algorithm employs three reproduction strategies -- cloning, crossover, and mutation -- combined with selection, to improve the population as the evolution progresses from generation to generation. Femtosecond laser pulse tailoring with a pulse shaper has become an important technology enabling applications in femtochemistry, micromachining and surgery, nonlinear microscopy, and telecommunications. Since a particular pulse shape corresponds to a point in a high-dimensional parameter space, the genetic algorithm is a popular technique for optimal pulse shape control in femtosecond laser experiments. We use a genetic algorithm to optimize third harmonic generation (THG) and investigate various pulse shaper options. We test our setup by running the experiment with varied initial conditions and study factors that affect convergence of the algorithm to the optimal pulse shape. Our next step is to use the same setup to control coherent anti-Stokes Raman scattering. The results show that the THG signal has been enhanced.
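A minimal sketch of the three operators named above (cloning via elitism, one-point crossover, pointwise mutation) on real-valued genomes follows. It assumes a population of at least four genomes; the fitness callback stands in for the measured THG signal and is a placeholder, not the experiment's feedback loop.

```python
import random

def evolve(population, fitness, generations=50, elite_frac=0.1, mut_rate=0.05):
    """Toy genetic algorithm. `population` is a list of equal-length lists
    of floats in [0, 1] (e.g., phase-mask pixel values on a pulse shaper);
    `fitness` scores a genome (higher is better)."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        n_elite = max(1, int(elite_frac * len(ranked)))
        next_gen = [g[:] for g in ranked[:n_elite]]       # cloning: keep the best
        while len(next_gen) < len(population):
            p1, p2 = random.sample(ranked[:len(ranked) // 2], 2)
            cut = random.randrange(1, len(p1))            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [random.random() if random.random() < mut_rate else g
                     for g in child]                      # pointwise mutation
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)
```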
720

Adequacy Assessment in Power Systems Using Genetic Algorithm and Dynamic Programming

Zhao, Dongbo 2010 December 1900 (has links)
In power system reliability analysis, state space pruning has been investigated to improve the efficiency of conventional Monte Carlo simulation (MCS). New algorithms have been proposed to prune the state space so that the Monte Carlo simulation samples a residual state space with a higher density of failure states. This thesis presents a modified genetic algorithm (GA) as a state-space pruning tool, with higher efficiency, a controllable stopping criterion, and better parameter selection. The method is tested on the IEEE Reliability Test System (RTS 79 and MRTS) and compared with the original GA-MCS method; the modified GA shows better efficiency than the previous methods, and its parameters are easier to select. This thesis also presents a dynamic programming (DP) algorithm as an alternative state-space pruning tool. It is likewise tested on the IEEE Reliability Test System and shows much better efficiency than Monte Carlo simulation alone.
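For context, the conventional MCS baseline that such pruning accelerates can be sketched as a crude sampling estimate of loss-of-load probability (illustrative only, with forced outage rates as the sole uncertainty; the thesis's adequacy model is richer).

```python
import random

def mcs_lolp(unit_mw, unit_for, load_mw, samples=100_000):
    """Crude Monte Carlo estimate of loss-of-load probability: draw each
    generating unit up/down from its forced outage rate (FOR) and count
    samples whose available capacity falls short of the load. State-space
    pruning aims to concentrate these draws on likely failure states."""
    short = 0
    for _ in range(samples):
        available = sum(c for c, q in zip(unit_mw, unit_for)
                        if random.random() > q)   # unit up with prob 1 - FOR
        if available < load_mw:
            short += 1
    return short / samples

# Toy system: five 100 MW units with 5% FOR serving a 350 MW load.
print(mcs_lolp([100] * 5, [0.05] * 5, 350))
```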
