About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

Water Management in Mongolia

Ochirkhuyag, Myagmersuren January 2011
The world is experiencing large-scale ecosystem degradation in every part of the planet, in rich and poor regions alike. Unstable economic conditions together with weak law enforcement expose low-income countries to more severe forms of environmental destruction. This draws attention to the need to design economic policies that are environmentally sound while at the same time ensuring the well-being of inhabitants in economic, social and natural settings. A number of countries in Central and Eastern Europe and Central Asia have experienced a unique historical period of transition from communist regimes to free democratic societies. This transition had numerous effects on their financial situations: economic hardship caused by the collapse of economies long sustained by assistance from the Soviet Union and the mutual-aid committees of socialist countries, alongside new opportunities such as private ownership and market liberalization. Not all countries succeeded in liberalizing their economic structures and reforming their economic and political environments. Simultaneously, the natural environment underwent various changes, both positive and negative, after the Iron Curtain fell and exposed the destructive effects of the command-and-control economy. Mongolia has experienced all the hard aspects of the transition and has started to climb the income ladder, moving from the World Bank's low-income to its lower-middle-income list, but it has also seen many of the negative environmental costs of development. Water resources have been severely degraded in recent years due to anthropogenic impact. However, reforms taking place in water sector institutions have recently attracted wide attention nationwide. This thesis gives a detailed picture of the current state of water resources in the country and the system that coordinates them. The Environmental Kuznets Curve (EKC) is used as an approach to highlight the relationship between water resource quality and income per capita in Mongolia. This is followed by a detailed discussion of the development of water institutions and the coordinating mechanisms badly needed among the sectors involved. The research suggests that collaborative action is important if sustainable water management is to be achieved. More generally, I recommend further research on this topic, as this thesis is one of the first discussions coupling the EKC and institutional theory.
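
The EKC analysis invoked above is commonly estimated with a reduced-form quadratic specification. The following is the standard form from the EKC literature, shown here as an illustration and not necessarily the thesis's exact model:

```latex
% Standard reduced-form EKC specification: degradation indicator E versus
% per-capita income y, for unit (country or region) i at time t.
E_{it} = \beta_0 + \beta_1 \ln y_{it} + \beta_2 (\ln y_{it})^2 + \varepsilon_{it}
```

An inverted-U (Kuznets) relationship corresponds to \(\beta_1 > 0\) and \(\beta_2 < 0\), with turning-point income \(y^* = \exp(-\beta_1 / (2\beta_2))\); the thesis asks where Mongolia's water-quality indicators sit on such a curve.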
552

Bit Serial Systolic Architectures for Multiplicative Inversion and Division over GF(2^m)

Daneshbeh, Amir January 2005
Systolic architectures are capable of achieving high throughput by maximizing pipelining and by eliminating global data interconnects. Recursive algorithms with regular data flows are suitable for systolization. The computation of multiplicative inversion using algorithms based on the EEA (Extended Euclidean Algorithm) is particularly suitable for systolization. Implementations based on the EEA present a high degree of parallelism and pipelinability at the bit level, which can be easily optimized to achieve local data flow and to eliminate the global interconnects that represent the most important bottleneck in today's sub-micron design processes. The net result is a high clock rate and high performance based on efficient systolic architectures. This thesis examines high-performance yet scalable implementations of multiplicative inversion and field division over Galois fields GF(2^m) in the specific case of cryptographic applications, where the field dimension m may be very large (greater than 400) and either m or the defining irreducible polynomial may vary. For this purpose, many inversion schemes with different basis representations are studied and, most importantly, variants of the EEA and binary (Stein's) GCD computation implementations are reviewed. A set of common as well as contrasting characteristics of these variants is discussed. As a result, a generalized and optimized variant of the EEA is proposed which can compute division, and multiplicative inversion as its subset, with the divisor in either polynomial or triangular basis representation. Further results regarding Hankel matrix formation for double-basis inversion are provided. The validity of using the same architecture to compute field division with polynomial or triangular basis representation is proved. Next, a scalable unidirectional bit-serial systolic array implementation of this proposed variant of the EEA is presented. Its complexity measures are defined and compared against the best known architectures. It is shown that, under the requirements specified above, the proposed architecture may achieve a higher clock rate than other designs while being more flexible, more reliable and requiring a minimum number of inter-cell interconnects. The main contribution at the system architecture level is the substitution of all counter or adder/subtractor elements with a simpler distributed structure free of carry-propagation delays. Further, a novel restoring mechanism for the result sequences of the EEA is proposed using a double delay-element implementation. Finally, using this systolic architecture, a CMD (Combined Multiplier Divider) datapath is designed and used as the core of a novel systolic elliptic curve processor. This EC processor uses affine coordinates to compute scalar point multiplication, which results in a very small control unit that is negligible with respect to the datapath for all practical values of m. The throughput of this EC processor, based on the bit-serial systolic architecture, is comparable with that of previously reported designs many times larger than itself.
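
For readers unfamiliar with EEA-based inversion, here is a minimal software sketch of the algorithmic core that architectures like the one above systolize. Field elements are bit-packed GF(2) polynomials; the hardware version replaces the degree comparisons and long shifts with the distributed, carry-free structures described in the abstract:

```python
def gf2_inverse(a, f):
    """Extended Euclidean inversion in GF(2^m) = GF(2)[x]/(f).

    a and f are integers whose bits are polynomial coefficients; a must be
    nonzero and f irreducible. The loop maintains the invariants
    g1*a = u (mod f) and g2*a = v (mod f), so g1 = a^(-1) when u = 1.
    """
    u, v = a, f
    g1, g2 = 1, 0
    while u != 1:
        j = u.bit_length() - v.bit_length()
        if j < 0:                      # keep deg(u) >= deg(v)
            u, v = v, u
            g1, g2 = g2, g1
            j = -j
        u ^= v << j                    # cancel the leading term of u
        g1 ^= g2 << j
    return g1

# Example in GF(2^3) with f(x) = x^3 + x + 1: the inverse of x is x^2 + 1.
assert gf2_inverse(0b010, 0b1011) == 0b101
```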
553

High Performance Elliptic Curve Cryptographic Co-processor

Lutz, Jonathan January 2003
In FIPS 186-2, NIST recommends several finite fields to be used in the elliptic curve digital signature algorithm (ECDSA). Of the ten recommended finite fields, five are binary extension fields with degrees ranging from 163 to 571. The fundamental building block of the ECDSA, like any ECC-based protocol, is elliptic curve scalar multiplication. This operation is also the most computationally intensive. In many situations it may be desirable to accelerate the elliptic curve scalar multiplication with specialized hardware. In this thesis a high-performance elliptic curve processor is developed which is optimized for the NIST binary fields. The architecture is built from the bottom up, starting with the field arithmetic units. The architecture uses a field multiplier capable of performing a field multiplication over the extension field with degree 163 in 0.060 microseconds. Architectures for squaring and inversion are also presented. The co-processor uses Lopez and Dahab's projective coordinate system and is optimized specifically for Koblitz curves. A prototype of the processor has been implemented for the binary extension field with degree 163 on a Xilinx XCV2000E FPGA. The prototype runs at 66 MHz and performs an elliptic curve scalar multiplication in 0.233 ms on a generic curve and 0.075 ms on a Koblitz curve.
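
As a point of reference for the field multiplier discussed above, the following is a minimal software model of multiplication in the degree-163 NIST binary field, using the FIPS 186-2 reduction polynomial f(x) = x^163 + x^7 + x^6 + x^3 + 1. Hardware multipliers compute the same shift-and-add recurrence but process many coefficients per clock:

```python
M = 163
F = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1  # NIST B-163/K-163 polynomial

def gf2m_mul(a, b, f=F, m=M):
    """Shift-and-add multiplication in GF(2^m): carry-free XOR accumulation
    with an interleaved reduction whenever deg(a) reaches m."""
    r = 0
    for i in range(m):
        if (b >> i) & 1:
            r ^= a                 # conditionally accumulate a * x^i
        a <<= 1
        if a & (1 << m):
            a ^= f                 # reduce: x^m = x^7 + x^6 + x^3 + 1
    return r

# Sanity check: x * x = x^2.
assert gf2m_mul(1 << 1, 1 << 1) == 1 << 2
```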
554

Finite Field Multiplier Architectures for Cryptographic Applications

El-Gebaly, Mohamed January 2000
Security issues have started to play an important role in wireless communication and computer networks due to the migration of commerce practices to the electronic medium. The deployment of security procedures requires the implementation of cryptographic algorithms. Performance has always been one of the most critical issues of a cryptographic function, determining its effectiveness. Among those cryptographic algorithms are the elliptic curve cryptosystems, which use the arithmetic of finite fields. Furthermore, fields of characteristic two are preferred since they provide carry-free arithmetic and, at the same time, a simple way to represent field elements on current processor architectures. Multiplication is a crucial operation in finite field computations. In this contribution, we compare most of the multiplier architectures found in the literature to clarify the issue of choosing a suitable architecture for a specific application. The importance of measuring energy consumption, in addition to the conventional measures, is also emphasized for energy-critical applications. A new parallel-in serial-out multiplier based on all-one polynomials (AOP) using the shifted polynomial basis representation is presented. The proposed multiplier is area-efficient for hardware realization. Low hardware complexity is advantageous for implementation in constrained environments such as smart cards. An architecture for an elliptic curve coprocessor has been developed using the proposed multiplier. The instruction set architecture has also been designed. The coprocessor has been simulated using VHDL to verify the functionality. The coprocessor is capable of performing the scalar multiplication operation over elliptic curves. Point doubling and addition procedures are hardwired inside the coprocessor to allow for faster operation.
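
To make the serial multiplier style concrete, here is a hedged software model of a generic MSB-first bit-serial GF(2^m) multiplier, in which each loop iteration corresponds to one clock cycle of a serial datapath. It is a generic illustration, not the AOP/shifted-polynomial-basis design proposed in the thesis:

```python
def gf2m_mul_serial(a, b, f, m):
    """MSB-first bit-serial GF(2^m) multiplication: each iteration models
    one clock cycle, computing r <- r*x + b_i * a (mod f)."""
    r = 0
    for i in reversed(range(m)):
        r <<= 1                    # r <- r * x
        if r & (1 << m):
            r ^= f                 # modular reduction by f
        if (b >> i) & 1:
            r ^= a                 # inject the next serial coefficient of b
    return r

# Cross-check in the small field GF(2^3) with f(x) = x^3 + x + 1:
assert gf2m_mul_serial(0b010, 0b101, 0b1011, 3) == 1   # x * (x^2 + 1) = 1
```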
555

On Fault-based Attacks and Countermeasures for Elliptic Curve Cryptosystems

Dominguez Oviedo, Agustin January 2008
For some applications, elliptic curve cryptography (ECC) is an attractive choice because it achieves the same level of security with a much smaller key size in comparison with other schemes such as those based on integer factorization or the discrete logarithm. Unfortunately, cryptosystems, including those based on elliptic curves, have been subject to attacks. For example, fault-based attacks have been shown to be a real threat in today's cryptographic implementations. In this thesis, we consider fault-based attacks and countermeasures for ECC. We propose a new fault-based attack against the Montgomery ladder elliptic curve scalar multiplication (ECSM) algorithm. For security reasons, especially to provide resistance against fault-based attacks, it is very important to verify the correctness of computations in ECC applications. We deal with protections against fault attacks on ECSM at two levels: module and algorithm. For protections at the module level, where the underlying scalar multiplication algorithm is not changed, a number of schemes and hardware structures are presented based on re-computation or parallel computation. It is shown that these structures can be used to detect errors with a very high probability during the computation of ECSM. For protections at the algorithm level, we use the concepts of point verification (PV) and coherency check (CC). We investigate the error detection coverage of PV and CC for the Montgomery ladder ECSM algorithm. Additionally, we propose two algorithms based on the double-and-add-always method that are resistant to the safe-error (SE) attack. We demonstrate that one of these algorithms also resists the sign change fault (SCF) attack.
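
The point-verification (PV) idea is easy to illustrate: check that the computed point still satisfies the curve equation, so a fault that knocks an intermediate value off the curve is caught before the result is released. Below is a toy Python sketch over a small, hypothetical prime-field curve (illustrative parameters only, not from the thesis):

```python
p, A, B = 97, 2, 3          # toy curve y^2 = x^3 + 2x + 3 over F_97
INF = None                  # point at infinity

def add(P, Q):
    """Affine point addition/doubling with textbook formulas."""
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def on_curve(P):
    if P is INF: return True
    x, y = P
    return (y * y - x * x * x - A * x - B) % p == 0

def ladder_with_pv(k, P):
    """Montgomery ladder ECSM with a point-verification check at the end."""
    R0, R1 = INF, P
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            R0, R1 = add(R0, R1), add(R1, R1)
        else:
            R1, R0 = add(R0, R1), add(R0, R0)
    if not on_curve(R0):           # PV: a fault moved the point off the curve
        raise RuntimeError("fault detected")
    return R0

# 5P computed by the ladder matches P + 4P computed by repeated addition.
P = (3, 6)
Q2 = add(P, P)
assert ladder_with_pv(5, P) == add(P, add(Q2, Q2))
```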
556

Prediction Performance of Survival Models

Yuan, Yan January 2008
Statistical models are often used for the prediction of future random variables. There are two types of prediction: point prediction and probabilistic prediction. Prediction accuracy is quantified by performance measures, which are typically based on loss functions. We study the estimators of these performance measures, the prediction error and performance scores, for point and probabilistic predictors, respectively. The focus of this thesis is to assess the prediction performance of survival models that analyze censored survival times. To accommodate censoring, we extend the inverse probability censoring weighting (IPCW) method so that arbitrary loss functions can be handled. We also develop confidence interval procedures for these performance measures. We compare model-based, apparent-loss-based and cross-validation estimators of prediction error under model misspecification and variable selection, for absolute relative error loss (in chapter 3) and misclassification error loss (in chapter 4). Simulation results indicate that cross-validation procedures typically produce reliable point estimates and confidence intervals, whereas model-based estimates are often sensitive to model misspecification. The methods are illustrated for two medical contexts in chapter 5. The apparent-loss-based and cross-validation estimators of performance scores for probabilistic predictors are discussed and illustrated with an example in chapter 6. We also draw connections between these performance measures.
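
A compact illustration of the IPCW idea: censored subjects get weight zero, and uncensored subjects are up-weighted by the inverse of the estimated censoring survival function, so an arbitrary loss can be averaged as if there were no censoring. The following is a minimal NumPy sketch under simplifying assumptions (a tie-free Kaplan-Meier estimate; not the thesis's code):

```python
import numpy as np

def censoring_survival(times, events):
    """Kaplan-Meier estimate of G(t) = P(C > t), treating censorings
    (event == 0) as the 'events'. Simplified: assumes no tied times."""
    order = np.argsort(times)
    t, d = times[order], 1.0 - events[order]
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(1.0 - d / at_risk)
    def G(s):  # left-continuous version G(s-), as IPCW requires
        i = np.searchsorted(t, s, side="left") - 1
        return 1.0 if i < 0 else max(surv[i], 1e-8)
    return G

def ipcw_error(times, events, preds, loss):
    """IPCW estimate of E[loss(T, prediction)] under right censoring."""
    G = censoring_survival(times, events)
    w = events / np.array([G(s) for s in times])
    return float(np.mean(w * loss(times, preds)))

# Example: absolute error loss for a constant median prediction; any
# loss function of (observed time, prediction) can be plugged in.
rng = np.random.default_rng(0)
T, C = rng.exponential(1.0, 500), rng.exponential(1.5, 500)
times, events = np.minimum(T, C), (T <= C).astype(float)
err = ipcw_error(times, events, np.log(2.0), lambda t, p: np.abs(t - p))
```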
557

A Stochastic Programming Model for a Day-Ahead Electricity Market: a Heuristic Methodology and Pricing

Zhang, Jichen January 2009
This thesis presents a multi-stage linear stochastic mixed integer programming (SMIP) model for planning power generation in a pool-type day-ahead electricity market. The model integrates a reserve demand curve and shares most of the features of a stochastic unit commitment (UC) problem, which is known to be NP-hard. We capture the stochastic nature of the problem through scenarios, resulting in a large-scale mixed integer programming (MIP) problem that is computationally challenging to solve. Given that an independent system operator (ISO) has to solve such a problem within a time requirement of an hour or so, in order to release operating schedules for the next day's real-time market, the problem has to be solved efficiently. For that purpose, we use some approximations to maintain the linearity of the model, parsimoniously select a subset of scenarios, and invoke realistic assumptions to keep the size of the problem reasonable. Even with these measures, realistic-size SMIP models with binary variables in each stage are still hard to solve with exact methods. We therefore propose a scenario-rolling heuristic to solve the SMIP problem, as sketched below. In each iteration, the heuristic solves a subset of the scenarios, and uses part of the obtained solution to solve another group in the subsequent iterations until all scenarios are solved. Two numerical examples are provided to test the performance of the scenario-rolling heuristic, and to highlight the difference between the operating schedules of a deterministic model and those of the SMIP model. Motivated by previous studies on pricing MIP problems and their applications to pricing electric power, we investigate pricing issues and compensation schemes using MIP formulations in the second part of the thesis. We show that some ideas from the literature can be applied to pricing energy/reserves for a relatively realistic model with binary variables, but others are found to be impractical in the real world. We propose two compensation schemes based on the SMIP that can be easily implemented in practice. We show that the compensation schemes with make-whole payments ensure that generators have non-negative profits. We also prove that under some assumptions, one of the compensation schemes has the interesting theoretical property of reducing the variance of the profit of generators to zero. Theoretical and numerical results for these compensation schemes are presented and discussed.
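
The structure of the scenario-rolling heuristic can be summarized as follows. This is a hedged structural sketch: `solve_group` is a stand-in for the MIP subproblem solver (e.g., a call into a commercial solver), and the rule for which binaries to fix between iterations is a design detail of the thesis not reproduced here:

```python
from typing import Callable, Dict, List, Sequence

Solution = Dict[str, int]  # unit-commitment binaries, e.g. {"g1_t7": 1}

def scenario_rolling(
    scenarios: Sequence[object],
    group_size: int,
    solve_group: Callable[[List[object], Solution], Solution],
) -> Solution:
    """Solve the SMIP one scenario group at a time: commitments fixed in
    earlier iterations are passed to (and honored by) later subproblems,
    which return values for the not-yet-fixed binary variables."""
    fixed: Solution = {}
    for start in range(0, len(scenarios), group_size):
        group = list(scenarios[start:start + group_size])
        fixed.update(solve_group(group, dict(fixed)))
    return fixed
```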
558

Nondestructive Evaluation of Asphalt Pavement Joints Using LWD and MASW Tests

du Tertre, Antonin January 2010
Longitudinal joints are one of the critical factors that cause premature pavement failure. Poor-quality joints are characterized by low density and high permeability, which generate surface distresses such as ravelling or longitudinal cracking. Density has traditionally been considered the primary performance indicator of joint construction. Density measurements consist of taking cores in the field and determining their density in the laboratory. Although this technique provides the most accurate measure of joint density, it is destructive and time consuming. Nuclear and non-nuclear gauges have been used to evaluate the condition of longitudinal joints non-destructively, but did not show good correlation with core density tests. Consequently, agencies are searching for other non-destructive testing (NDT) options for longitudinal joint evaluation. NDT methods for the evaluation of pavement structural capacity have advanced significantly during the past decade. These methods are based either on deflection or on wave velocity measurements. The lightweight deflectometer (LWD) is increasingly being used in quality control/quality assurance to provide a rapid determination of the surface modulus. Corresponding backcalculation programs are able to determine the moduli of the different pavement layers; these moduli are input parameters for mechanistic-empirical pavement design. In addition, ultrasonic wave-based methods have been studied for pavement condition evaluation but not developed to the point of practical implementation. The multi-channel analysis of surface waves (MASW) consists of using ultrasonic transducers to measure surface wave velocities in pavements and inverting for the moduli of the different layers. In this study, both LWD and MASW were used in the laboratory and in the field to assess the condition of longitudinal joints. LWD tests were performed in the field at different distances from the centreline in order to identify variations of the surface modulus. MASW measurements were conducted across the joint to evaluate its effect on wave velocities, frequency content and attenuation parameters. Improved signal processing techniques, such as the Fourier transform, windowing and the discrete wavelet transform, were used to analyze the data. Dispersion curves were computed to determine surface wave velocities and identify the nature of the wave modes propagating through the asphalt pavement. Parameters such as the peak-to-peak amplitude or the area of the frequency spectrum were used to compute attenuation curves. A self-calibrating technique, called the Fourier transmission coefficient (FTC) technique, was used to assess the condition of longitudinal joints while eliminating the variability introduced by the source, the receivers and the coupling system. A critical component of this project consisted of preparing an asphalt slab with a joint in the middle to be used for laboratory testing. The compaction method was calibrated by preparing fourteen asphalt samples. An exponential correlation was determined between the air void content and the compaction effort applied to the mixture. Using this relationship, an asphalt slab was prepared in two stages to create a joint of medium quality. Nuclear density measurements were performed at different locations on the slab and showed good agreement with the predicted density gradient across the joint. MASW tests were performed on the asphalt slabs using different coupling systems and receivers.
The FTC coefficients showed good consistency from one configuration to another. This result indicates that the undesired variability due to the receivers and the coupling system was reduced by the FTC technique; the coefficients were therefore representative of the hot mix asphalt (HMA) condition. A comparison of theoretical and experimental dispersion curves indicated that mainly Lamb waves were generated in the asphalt layer. This new result contradicts the common assumption that the response is governed by surface waves, and it is of critical importance for the analysis of the data, since MASW analyses have focused on Rayleigh waves. Deflection measurements in the field with the LWD showed that the surface modulus was mostly affected by the base and subgrade moduli, and could not be used to evaluate the condition of the surface course that contains the longitudinal joints. The LWDmod software should be used to differentiate the pavement layers and backcalculate the modulus of the asphalt layer. Testing should be performed using different plate sizes and drop heights in order to generate different stress levels at the pavement surface and optimize the accuracy of the backcalculation. Finally, master curves were computed using a predictive equation based on mix design specifications. Moduli measured at different frequencies of excitation with the two NDT techniques were shifted to a design frequency of 25 Hz. Design moduli measured in the field and in the laboratory with the seismic method were in good agreement (less than 0.2% difference). Moreover, a relatively good agreement was found between the moduli measured with the LWD and the MASW method after shifting to the design frequency. In conclusion, LWD and MASW measurements were representative of the HMA condition. However, the condition assessment of medium- to good-quality joints requires better control of the critical parameters, such as the measurement depth for the LWD, or the frequency content generated by the ultrasonic source and the coupling between the receivers and the asphalt surface for the MASW method.
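
The dispersion-curve computation mentioned above rests on a simple two-receiver relation: at each frequency, the unwrapped phase of the cross-power spectrum between two receivers a distance d apart gives the travel time, and hence the phase velocity. A hedged NumPy sketch of this building block follows (the multi-channel MASW processing generalizes it to many receivers, and the synthetic pulse below is illustrative only):

```python
import numpy as np

def phase_velocity(x1, x2, d, fs):
    """Two-receiver phase-velocity estimate: v(f) = 2*pi*f*d / phi(f),
    where phi is the unwrapped cross-power-spectrum phase lag."""
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    f = np.fft.rfftfreq(len(x1), 1.0 / fs)
    phi = np.unwrap(np.angle(X1 * np.conj(X2)))   # phase of x1 relative to x2
    with np.errstate(divide="ignore", invalid="ignore"):
        v = 2.0 * np.pi * f * d / phi             # undefined where phi = 0
    return f, v

# Synthetic check: a pulse delayed by 1 ms over d = 0.2 m gives ~200 m/s.
fs, n = 50_000, 4096
t = np.arange(n) / fs
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 2e-4) ** 2)
f, v = phase_velocity(pulse(0.010), pulse(0.011), d=0.2, fs=fs)
```

In practice the estimate is only meaningful at frequencies where the source actually injects energy, which is one reason the abstract emphasizes the frequency content of the source and the coupling of the receivers.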
559

High-Speed Elliptic Curve and Pairing-Based Cryptography

Longa, Patrick 05 April 2011
Elliptic Curve Cryptography (ECC), independently proposed by Miller [Mil86] and Koblitz [Kob87] in the mid-1980s, is gaining momentum to consolidate its status as the public-key system of choice in a wide range of applications and to further expand this position to settings traditionally occupied by RSA and DL-based systems. The non-existence of known subexponential attacks on this cryptosystem directly translates to shorter key lengths for a given security level and, consequently, has led to implementations with better bandwidth usage, reduced power and memory requirements, and higher speeds. Moreover, the dramatic entry of pairing-based cryptosystems defined on elliptic curves at the beginning of the new millennium has opened the possibility of a plethora of innovative applications, solving in some cases longstanding problems in cryptography. Nevertheless, public-key cryptography (PKC) is still relatively expensive in comparison with its symmetric-key counterpart, and it remains an open challenge to further reduce the computing cost of the most time-consuming PKC primitives to guarantee their adoption for secure communication in commercial and Internet-based applications. The latter is especially true for pairing computations. Thus, it is of paramount importance to research methods that permit the efficient realization of elliptic curve and pairing-based cryptography on the many new platforms and applications. This thesis deals with efficient methods and explicit formulas for computing elliptic curve scalar multiplication and pairings over fields of large prime characteristic, with the objective of enabling software implementations at very high speeds. To achieve this main goal in the case of elliptic curves, we accomplish the following tasks: identify the elliptic curve settings with the fastest arithmetic; accelerate the precomputation stage in the scalar multiplication; study number representations and scalar multiplication algorithms for speeding up the evaluation stage; identify the most efficient field arithmetic algorithms and optimize them; analyze the architecture of the targeted platforms for maximizing the performance of ECC operations; identify the most efficient coordinate systems and optimize explicit formulas; and realize implementations on x86-64 processors with an optimal algorithmic selection among all studied cases. In the case of pairings, the following tasks are accomplished: accelerate tower and curve arithmetic; identify the most efficient tower and field arithmetic algorithms and optimize them; identify the curve setting with the fastest arithmetic and optimize it; identify state-of-the-art techniques for the Miller loop and final exponentiation; and realize an implementation on x86-64 processors with an optimal algorithmic selection. The most outstanding contributions achieved with the methodologies above can be summarized as follows:

• Two novel precomputation schemes are introduced and shown to achieve the lowest costs in the literature for different curve forms and scalar multiplication primitives. Detailed cost formulas of the schemes are derived for the most relevant scenarios.

• A new methodology based on the operation cost per bit is proposed to devise highly optimized and compact multibase algorithms. Derived multibase chains using bases {2,3} and {2,3,5} are shown to achieve the lowest theoretical costs for scalar multiplication on certain curve forms, for scenarios with and without precomputations. In addition, the zero and nonzero density formulas of the original (width-w) multibase NAF method are derived using Markov chains. The application of "fractional" windows to the multibase method is described, together with the derivation of the corresponding density formulas.

• Incomplete reduction and branchless arithmetic techniques are optimally combined to devise high-performance field arithmetic. Efficient algorithms for "small" modular operations using suitably chosen pseudo-Mersenne primes are carefully analyzed and optimized for incomplete reduction.

• Data dependencies between contiguous field operations are discovered to be a source of performance degradation on x86-64 processors. Three techniques for reducing the number of potential pipeline stalls due to these dependencies are proposed: field arithmetic scheduling, merging of point operations and merging of field operations.

• Explicit formulas for two relevant cases, namely Weierstrass and Twisted Edwards curves over fields of large prime characteristic, are carefully optimized employing incomplete reduction, a minimal number of operations and a reduced number of data dependencies between contiguous field operations.

• The best algorithms for the field, point and scalar arithmetic, studied or proposed in this thesis, are brought together to realize four high-speed implementations on x86-64 processors at the 128-bit security level. The presented results set new speed records for elliptic curve scalar multiplication and achieve up to a 34% cost reduction in comparison with the best previous results in the literature.

• A generalized lazy reduction technique that enables the elimination of up to 32% of the modular reductions in the pairing computation is proposed. Further, a methodology is introduced that keeps intermediate results under Montgomery reduction boundaries, maximizing the number of operations without carry checks. Optimized formulas for the popular tower construction are explicitly stated, and a detailed operation count is carried out to determine the theoretical cost improvement attainable with the proposed method for an optimal ate pairing on a Barreto-Naehrig (BN) curve at the 128-bit security level.

• The best algorithms for the different stages of the pairing computation, including the proposed techniques and optimizations, are brought together to realize a high-speed implementation at the 128-bit security level. The presented results on x86-64 processors set new speed records for pairings, achieving up to a 34% cost reduction in comparison with the best published result.

From a general viewpoint, the proposed methods and optimized formulas have a practical impact on the performance of cryptographic protocols based on elliptic curves and pairings in a wide range of applications. In particular, the introduced implementations represent a direct and significant improvement that may be exploited in performance-dominated applications such as high-demand Web servers in which millions of secure transactions need to be generated.
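
As background for the multibase NAF density analysis above, the classical width-w NAF recoding (the base-2 special case that multibase chains generalize to bases {2,3} and {2,3,5}) is sketched below; its average nonzero-digit density is 1/(w+1), which is the kind of quantity the Markov-chain analyses extend to the multibase variants:

```python
def wnaf(k, w):
    """Width-w non-adjacent form of k > 0: odd digits in (-2^(w-1), 2^(w-1)),
    with at most one nonzero digit in any w consecutive positions."""
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)
            if d >= (1 << (w - 1)):
                d -= (1 << w)          # choose the negative representative
            k -= d                     # now k is divisible by 2^w
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits                      # least-significant digit first

# Recoding is exact: sum(d * 2^i) reconstructs k.
k = 0b110110101110101
assert sum(d << i for i, d in enumerate(wnaf(k, 4))) == k
```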
560

On Error Detection and Recovery in Elliptic Curve Cryptosystems

Alkhoraidly, Abdulaziz Mohammad January 2011
Fault analysis attacks represent a serious threat to a wide range of cryptosystems including those based on elliptic curves. With the variety and demonstrated practicality of these attacks, it is essential for cryptographic implementations to handle different types of errors properly and securely. In this work, we address some aspects of error detection and recovery in elliptic curve cryptosystems. In particular, we discuss the problem of wasteful computations performed between the occurrence of an error and its detection and propose solutions based on frequent validation to reduce that waste. We begin by presenting ways to select the validation frequency in order to minimize various performance criteria including the average and worst-case costs and the reliability threshold. We also provide solutions to reduce the sensitivity of the validation frequency to variations in the statistical error model and its parameters. Then, we present and discuss adaptive error recovery and illustrate its advantages in terms of low sensitivity to the error model and reduced variance of the resulting overhead especially in the presence of burst errors. Moreover, we use statistical inference to evaluate and fine-tune the selection of the adaptive policy. We also address the issue of validation testing cost and present a collection of coherency-based, cost-effective tests. We evaluate variations of these tests in terms of cost and error detection effectiveness and provide infective and reduced-cost, repeated-validation variants. Moreover, we use coherency-based tests to construct a combined-curve countermeasure that avoids the weaknesses of earlier related proposals and provides a flexible trade-off between cost and effectiveness.
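
To see why the validation frequency must be tuned rather than simply maximized, consider a back-of-the-envelope model (an illustrative simplification, not the thesis's exact cost criteria): with v equally spaced validations over an n-iteration ECSM, each validation costing c_v, a per-run error probability p, and an average of n/(2v) iterations of cost c_i wasted between an error and the next checkpoint, the expected overhead and its minimizer are

```latex
% Illustrative overhead model: validation cost grows linearly in v,
% expected wasted work shrinks as 1/v; minimize over v.
\mathbb{E}[\mathrm{overhead}(v)] = v\,c_v + p\,\frac{n}{2v}\,c_i,
\qquad
v^{*} = \sqrt{\frac{p\,n\,c_i}{2\,c_v}}
```

so the optimal number of checkpoints grows with the error rate and the per-iteration cost, and shrinks as validation itself becomes more expensive; the thesis optimizes this trade-off under richer error models, including burst errors.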
