  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
541

Circular motion for robotized metal deposition : verification and implementation

Denys, Kristof January 2013 (has links)
Metal deposition is an additive, layered manufacturing process that deposits molten metal droplets on a substrate; by repeating this process layer by layer, a complex-shaped 3D geometry can be manufactured. In this thesis, the metal deposition process is performed by a robot with a wire feeder tool and a laser as the energy source to melt the metal wire. The robot programming for the robotized metal deposition process can be completely automated by computer-aided robotics software. University West is currently developing an add-in application for the computer-aided robotics software Process Simulate that is capable of programming the robotized metal deposition process. The first goal of this thesis was to verify the software developed so far and the process from CAD drawing down to robot code. Another goal was to find and implement an algorithm that reduces the number of locations on a circular arc to three. The algorithm must be able to convert paths of arbitrary curvature into linear and circular-arc motions, which are easy to translate into robot code, and the user should be able to set the fitting precision of the approximated motion path with respect to the original path. A real robot cell setup is modelled in Process Simulate, which lets Process Simulate generate the correct robot code for that specific cell. Since each robot cell has its own unique setup, a custom script is developed that converts the universal robot code generated by Process Simulate into the custom robot code required by this specific robot cell. The software has been improved and tested from CAD drawing down to robot code, but it still needs further debugging and the implementation of some missing features.
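The arc-reduction step can be illustrated with a short sketch: fit a circle through the first, middle and last location of a densely sampled path segment and accept the three-point arc only if every intermediate location lies within the user-chosen fitting precision. This is a minimal illustration of the idea, not the add-in's actual algorithm; the point data and tolerance below are hypothetical.

```python
import math

def circle_from_points(p1, p2, p3):
    """Return (center, radius) of the circle through three non-collinear 2-D points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are (nearly) collinear; use a linear motion instead")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1) + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3) + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

def arc_fits(points, tol):
    """Check whether a dense point sequence can be replaced by a single circular-arc
    motion defined by its first, middle and last point, within tolerance `tol`."""
    start, mid, end = points[0], points[len(points) // 2], points[-1]
    center, radius = circle_from_points(start, mid, end)
    # every intermediate point must lie within `tol` of the fitted circle
    return all(abs(math.hypot(x - center[0], y - center[1]) - radius) <= tol
               for x, y in points)

# Example: 20 points sampled on a quarter circle of radius 50 mm
pts = [(50 * math.cos(a), 50 * math.sin(a))
       for a in [i * (math.pi / 2) / 19 for i in range(20)]]
print(arc_fits(pts, tol=0.01))   # True -> three locations suffice for a circular motion
```

If the check fails, the segment can be split and the test repeated, which is the usual way such a user-defined fitting tolerance is enforced.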
542

Bit Serial Systolic Architectures for Multiplicative Inversion and Division over GF(2<sup>m</sup>)

Daneshbeh, Amir January 2005 (has links)
Systolic architectures are capable of achieving high throughput by maximizing pipelining and by eliminating global data interconnects. Recursive algorithms with regular data flows are suitable for systolization. The computation of multiplicative inversion using algorithms based on the EEA (Extended Euclidean Algorithm) is particularly suitable for systolization. Implementations based on the EEA present a high degree of parallelism and pipelinability at the bit level, which can easily be optimized to achieve local data flow and to eliminate the global interconnects that represent the most important bottleneck in today's sub-micron design processes. The net result is a high clock rate and high performance based on efficient systolic architectures. This thesis examines high-performance yet scalable implementations of multiplicative inversion or field division over Galois fields <i>GF</i>(2<i><sup>m</sup></i>) for cryptographic applications, where the field dimension <i>m</i> may be very large (greater than 400) and either <i>m</i> or the defining irreducible polynomial may vary. For this purpose, many inversion schemes with different basis representations are studied and, most importantly, variants of the EEA and binary (Stein's) GCD computation implementations are reviewed. A set of common as well as contrasting characteristics of these variants is discussed. As a result, a generalized and optimized variant of the EEA is proposed which can compute division, and multiplicative inversion as a special case, with the divisor in either <i>polynomial</i> or <i>triangular</i> basis representation. Further results regarding Hankel matrix formation for double-basis inversion are provided. The validity of using the same architecture to compute field division with polynomial or triangular basis representation is proved. Next, a scalable unidirectional bit-serial systolic array implementation of this proposed EEA variant is presented. Its complexity measures are defined and compared against the best known architectures. It is shown that, under the requirements specified above, the proposed architecture can achieve a higher clock rate than other designs while being more flexible and reliable and requiring a minimum number of inter-cell interconnects. The main contribution at the system architecture level is the substitution of all counter or adder/subtractor elements with a simpler distributed structure that is free of carry-propagation delays. Furthermore, a novel restoring mechanism for the result sequences of the EEA is proposed, using a double delay-element implementation. Finally, using this systolic architecture, a CMD (Combined Multiplier Divider) datapath is designed and used as the core of a novel systolic elliptic curve processor. This EC processor uses affine coordinates to compute scalar point multiplication, which results in a very small control unit that is negligible with respect to the datapath for all practical values of <i>m</i>. The throughput of this EC processor, based on the bit-serial systolic architecture, is comparable with previously reported designs many times its size.
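For concreteness, a software sketch of EEA-based inversion over GF(2^m) in a polynomial-basis representation is given below. It shows only the arithmetic that the thesis systolizes, not the bit-serial architecture itself; the field element chosen and the use of the NIST B-163 reduction polynomial are assumptions made for the sake of the example.

```python
def poly_mul(a, b):
    """Carry-free multiplication of binary polynomials encoded as integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def poly_divmod(a, b):
    """Quotient and remainder of binary-polynomial division."""
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        shift = a.bit_length() - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2m_inverse(a, f):
    """Multiplicative inverse of a(x) modulo the irreducible polynomial f(x),
    via the extended Euclidean algorithm over GF(2)[x]."""
    # invariants: u = s*a (mod f), v = t*a (mod f)
    u, v = a, f
    s, t = 1, 0
    while u != 1:
        q, r = poly_divmod(v, u)
        v, u = u, r
        s, t = t ^ poly_mul(q, s), s
    return s

# GF(2^163) with the NIST B-163 reduction polynomial x^163 + x^7 + x^6 + x^3 + 1
F163 = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1
a = 0x123456789ABCDEF          # arbitrary nonzero field element (illustrative)
inv = gf2m_inverse(a, F163)
_, check = poly_divmod(poly_mul(a, inv), F163)
print(check == 1)              # True: a * a^(-1) = 1 in GF(2^163)
```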
543

High Performance Elliptic Curve Cryptographic Co-processor

Lutz, Jonathan January 2003 (has links)
In FIPS 186-2, NIST recommends several finite fields to be used in the elliptic curve digital signature algorithm (ECDSA). Of the ten recommended finite fields, five are binary extension fields with degrees ranging from 163 to 571. The fundamental building block of the ECDSA, like any ECC-based protocol, is elliptic curve scalar multiplication. This operation is also the most computationally intensive. In many situations it may be desirable to accelerate the elliptic curve scalar multiplication with specialized hardware. In this thesis a high performance elliptic curve processor is developed which is optimized for the NIST binary fields. The architecture is built from the bottom up starting with the field arithmetic units. The architecture uses a field multiplier capable of performing a field multiplication over the extension field with degree 163 in 0.060 microseconds. Architectures for squaring and inversion are also presented. The co-processor uses Lopez and Dahab's projective coordinate system and is optimized specifically for Koblitz curves. A prototype of the processor has been implemented for the binary extension field with degree 163 on a Xilinx XCV2000E FPGA. The prototype runs at 66 MHz and performs an elliptic curve scalar multiplication in 0.233 msec on a generic curve and 0.075 msec on a Koblitz curve.
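Scalar multiplication, the operation this co-processor accelerates, can be sketched as the classic left-to-right double-and-add loop. The toy example below uses a tiny prime-field curve in affine coordinates purely to keep the sketch self-contained; the thesis itself works over the binary field of degree 163 with Lopez-Dahab projective coordinates, and the curve, base point and scalar here are illustrative assumptions.

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17), a standard textbook example.
P_MOD, A = 17, 2
INF = None          # point at infinity

def point_add(P, Q):
    """Affine point addition/doubling on the toy curve."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    """Left-to-right double-and-add: one doubling per key bit, one addition per set bit."""
    R = INF
    for bit in bin(k)[2:]:
        R = point_add(R, R)          # double
        if bit == '1':
            R = point_add(R, P)      # add
    return R

G = (5, 1)                           # a point on the toy curve
print(scalar_mult(9, G))
```

The loop performs one doubling per key bit plus an addition for each set bit; on Koblitz curves the doublings can be replaced by cheap Frobenius maps, which is where the reported speed-up on Koblitz curves comes from.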
544

Finite Field Multiplier Architectures for Cryptographic Applications

El-Gebaly, Mohamed January 2000 (has links)
Security issues have started to play an important role in wireless communication and computer networks due to the migration of commerce practices to the electronic medium. The deployment of security procedures requires the implementation of cryptographic algorithms. Performance has always been one of the most critical issues of a cryptographic function, as it determines its effectiveness. Among those cryptographic algorithms are the elliptic curve cryptosystems, which use the arithmetic of finite fields. Furthermore, fields of characteristic two are preferred since they provide carry-free arithmetic and, at the same time, a simple way to represent field elements on current processor architectures. Multiplication is a crucial operation in finite field computations. In this contribution, we compare most of the multiplier architectures found in the literature to clarify the issue of choosing a suitable architecture for a specific application. The importance of measuring energy consumption, in addition to the conventional measures, for energy-critical applications is also emphasized. A new parallel-in serial-out multiplier based on all-one polynomials (AOP), using the shifted polynomial basis representation, is presented. The proposed multiplier is area-efficient for hardware realization. Low hardware complexity is advantageous for implementation in constrained environments such as smart cards. An elliptic curve coprocessor architecture has been developed using the proposed multiplier, and its instruction set architecture has also been designed. The coprocessor has been simulated in VHDL to verify the functionality. The coprocessor is capable of performing the scalar multiplication operation over elliptic curves. Point doubling and addition procedures are hardwired inside the coprocessor to allow for faster operation.
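To make the bit-level nature of such multipliers concrete, here is a minimal software model of a shift-and-add GF(2^m) multiplication with the modular reduction interleaved, which is the behaviour a bit-serial multiplier datapath realizes one bit per clock cycle. It models the generic structure only, not the proposed AOP/shifted-polynomial-basis design, and the GF(2^8) field and test vector are assumptions chosen because the expected product is well known.

```python
def gf2m_mult_bitserial(a, b, f, m):
    """MSB-first bit-serial multiplication in GF(2^m): one iteration per bit of b,
    with the modular reduction interleaved, mirroring a shift-and-add datapath."""
    acc = 0
    for i in reversed(range(m)):
        acc <<= 1
        if acc & (1 << m):          # overflow past x^(m-1): reduce by the field polynomial
            acc ^= f
        if (b >> i) & 1:
            acc ^= a
    return acc

# GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (illustration only;
# the thesis targets larger fields and an AOP-based shifted polynomial basis)
F = 0x11B
print(hex(gf2m_mult_bitserial(0x57, 0x83, F, 8)))  # 0xc1, the well-known AES test vector
```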
545

On Fault-based Attacks and Countermeasures for Elliptic Curve Cryptosystems

Dominguez Oviedo, Agustin January 2008 (has links)
For some applications, elliptic curve cryptography (ECC) is an attractive choice because it achieves the same level of security with a much smaller key size in comparison with other schemes such as those based on integer factorization or the discrete logarithm. Unfortunately, cryptosystems, including those based on elliptic curves, have been subject to attacks. For example, fault-based attacks have been shown to be a real threat in today’s cryptographic implementations. In this thesis, we consider fault-based attacks and countermeasures for ECC. We propose a new fault-based attack against the Montgomery ladder elliptic curve scalar multiplication (ECSM) algorithm. For security reasons, especially to provide resistance against fault-based attacks, it is very important to verify the correctness of computations in ECC applications. We deal with protection against fault attacks on ECSM at two levels: module and algorithm. For protection at the module level, where the underlying scalar multiplication algorithm is not changed, a number of schemes and hardware structures are presented based on re-computation or parallel computation. It is shown that these structures can be used to detect errors with a very high probability during the computation of ECSM. For protection at the algorithm level, we use the concepts of point verification (PV) and coherency check (CC). We investigate the error detection coverage of PV and CC for the Montgomery ladder ECSM algorithm. Additionally, we propose two algorithms based on the double-and-add-always method that are resistant to the safe-error (SE) attack. We demonstrate that one of these algorithms also resists the sign change fault (SCF) attack.
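As a concrete picture of the point-verification idea, the sketch below re-checks the curve equation on the output of a scalar multiplication and rejects any result a fault has pushed off the curve. It is illustrated on a toy prime-field curve with made-up coordinates, not the specific PV and CC procedures analysed in the thesis; a faulty result that still lies on the curve would need further checks, such as a coherency check between the ladder's intermediate variables.

```python
# Minimal sketch of the point-verification (PV) countermeasure: after a scalar
# multiplication, release the output only if it still satisfies the curve equation.
# Toy prime-field curve y^2 = x^3 + 2x + 2 mod 17 (illustrative assumption).
P_MOD, A, B = 17, 2, 2

def on_curve(Q):
    if Q is None:            # point at infinity is always valid
        return True
    x, y = Q
    return (y * y - (x ** 3 + A * x + B)) % P_MOD == 0

good = (5, 1)        # a legitimate curve point
faulty = (5, 2)      # the same point with a single corrupted coordinate
print(on_curve(good), on_curve(faulty))   # True False -> the fault is detected
```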
546

Prediction Performance of Survival Models

Yuan, Yan January 2008 (has links)
Statistical models are often used for the prediction of future random variables. There are two types of prediction: point prediction and probabilistic prediction. Prediction accuracy is quantified by performance measures, which are typically based on loss functions. We study the estimators of these performance measures, the prediction error and performance scores, for point and probabilistic predictors, respectively. The focus of this thesis is to assess the prediction performance of survival models that analyze censored survival times. To accommodate censoring, we extend the inverse probability censoring weighting (IPCW) method so that arbitrary loss functions can be handled. We also develop confidence interval procedures for these performance measures. We compare model-based, apparent-loss-based and cross-validation estimators of prediction error under model misspecification and variable selection, for absolute relative error loss (in chapter 3) and misclassification error loss (in chapter 4). Simulation results indicate that cross-validation procedures typically produce reliable point estimates and confidence intervals, whereas model-based estimates are often sensitive to model misspecification. The methods are illustrated for two medical contexts in chapter 5. The apparent-loss-based and cross-validation estimators of performance scores for probabilistic predictors are discussed and illustrated with an example in chapter 6. We also make connections for performance.
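A minimal sketch of the IPCW idea described above: observed events are re-weighted by the inverse of the Kaplan-Meier estimate of the probability of remaining uncensored, so that an arbitrary loss (here the absolute relative error of chapter 3) can be averaged despite censoring. The data are made-up toy values and the code assumes no tied observation times; it illustrates only the weighting scheme, not the thesis's estimators or confidence interval procedures.

```python
import numpy as np

def km_censoring_survival(time, event):
    """Kaplan-Meier estimate of the censoring survival function G(t) = P(C > t)
    (censoring treated as the 'event'); assumes no tied times."""
    order = np.argsort(time)
    t, d = time[order], 1 - event[order]      # d = 1 means censored here
    at_risk = len(t) - np.arange(len(t))
    return t, np.cumprod(1.0 - d / at_risk)

def G_minus(t_grid, surv, x):
    """Left-continuous evaluation G(x-) of the step function (t_grid, surv)."""
    idx = np.searchsorted(t_grid, x, side='left') - 1
    return 1.0 if idx < 0 else surv[idx]

def ipcw_prediction_error(time, event, pred, loss):
    """IPCW estimate of E[loss(T, prediction)]: observed events are up-weighted
    by the inverse probability of remaining uncensored."""
    t_grid, surv = km_censoring_survival(time, event)
    total = 0.0
    for Ti, di, yi in zip(time, event, pred):
        if di == 1:                            # event observed
            total += loss(Ti, yi) / G_minus(t_grid, surv, Ti)
    return total / len(time)

# toy data: observed times, event indicator (1 = event, 0 = censored), model predictions
time = np.array([2.0, 5.0, 3.5, 7.0, 1.0, 6.0])
event = np.array([1, 0, 1, 1, 0, 1])
pred = np.array([2.5, 4.0, 3.0, 8.0, 1.5, 5.0])
are = lambda t, y: abs(t - y) / t              # absolute relative error loss
print(ipcw_prediction_error(time, event, pred, are))
```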
547

A Stochastic Programming Model for a Day-Ahead Electricity Market: a Heuristic Methodology and Pricing

Zhang, Jichen January 2009 (has links)
This thesis presents a multi-stage linear stochastic mixed integer programming (SMIP) model for planning power generation in a pool-type day-ahead electricity market. The model integrates a reserve demand curve and shares most of the features of a stochastic unit commitment (UC) problem, which is known to be NP-hard. We capture the stochastic nature of the problem through scenarios, resulting in a large-scale mixed integer programming (MIP) problem that is computationally challenging to solve. Given that an independent system operator (ISO) has to solve such a problem within a time requirement of an hour or so, in order to release operating schedules for the next-day real-time market, the problem has to be solved efficiently. For that purpose, we use some approximations to maintain the linearity of the model, parsimoniously select a subset of scenarios, and invoke realistic assumptions to keep the size of the problem reasonable. Even with these measures, realistic-size SMIP models with binary variables in each stage are still hard to solve with exact methods. We therefore propose a scenario-rolling heuristic to solve the SMIP problem. In each iteration, the heuristic solves a subset of the scenarios and uses part of the obtained solution to solve another group in subsequent iterations, until all scenarios are solved. Two numerical examples are provided to test the performance of the scenario-rolling heuristic and to highlight the difference between the operating schedules of a deterministic model and the SMIP model. Motivated by previous studies on pricing MIP problems and their applications to pricing electric power, we investigate pricing issues and compensation schemes using MIP formulations in the second part of the thesis. We show that some ideas from the literature can be applied to pricing energy/reserves for a relatively realistic model with binary variables, but some are found to be impractical in the real world. We propose two compensation schemes based on the SMIP that can be easily implemented in practice. We show that the compensation schemes with make-whole payments ensure that generators have non-negative profits. We also prove that, under some assumptions, one of the compensation schemes has the interesting theoretical property of reducing the variance of generators' profits to zero. Theoretical and numerical results of these compensation schemes are presented and discussed.
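The scenario-rolling idea can be sketched as follows: solve a small group of scenarios, carry part of the resulting commitment decisions forward as fixed, and repeat with the next group until every scenario has been visited. In the toy sketch below a brute-force search over a three-unit commitment stands in for the MIP solver, and all unit data, demand scenarios, the penalty cost and the rule for which part of the solution is carried forward are hypothetical choices for illustration only.

```python
from itertools import product

# Hypothetical data: 3 generating units and 6 equally likely demand scenarios.
CAP   = [50, 80, 120]      # MW per unit if committed
FIXED = [100, 150, 200]    # commitment cost per unit
VARC  = [30, 20, 10]       # energy cost per MW
PENALTY = 1000             # cost per MW of unserved demand
SCENARIOS = [140, 180, 90, 210, 160, 120]

def scenario_cost(commit, demand):
    """Cheapest dispatch of the committed units for one demand scenario."""
    cost, remaining = 0.0, demand
    for u in sorted(range(len(CAP)), key=lambda u: VARC[u]):   # merit order
        if commit[u]:
            gen = min(CAP[u], remaining)
            cost += FIXED[u] + VARC[u] * gen
            remaining -= gen
    return cost + PENALTY * remaining        # penalise unserved energy

def solve_subset(subset, fixed_commit=None):
    """Brute-force the best commitment for a scenario subset, keeping units that
    were fixed in earlier iterations committed (the 'rolling' coupling)."""
    best, best_cost = None, float('inf')
    for commit in product([0, 1], repeat=len(CAP)):
        if fixed_commit and any(f and not c for f, c in zip(fixed_commit, commit)):
            continue                         # do not switch off units fixed earlier
        expected = sum(scenario_cost(commit, d) for d in subset) / len(subset)
        if expected < best_cost:
            best, best_cost = commit, expected
    return best

# Scenario-rolling loop: solve scenario groups one at a time and carry the
# commitment decisions forward instead of solving all scenarios at once.
group_size, fixed = 2, None
for start in range(0, len(SCENARIOS), group_size):
    fixed = solve_subset(SCENARIOS[start:start + group_size], fixed)
print("final commitment:", fixed)
```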
548

New Non-Parametric Confidence Interval for the Youden Index

Zhou, Haochuan 18 July 2008 (has links)
The Youden index, a main summary index for the receiver operating characteristic (ROC) curve, is a comprehensive measure of the effectiveness of a diagnostic test. For a continuous-scale diagnostic test, the optimal cut-point for declaring disease positive is the cut-point that maximizes the sum of sensitivity and specificity. Finding the Youden index of the test is therefore equivalent to maximizing the sum of sensitivity and specificity over all possible values of the cut-point. In this thesis, we propose a new non-parametric confidence interval for the Youden index. Extensive simulation studies are conducted to compare the performance of the new interval with the existing intervals for the index. Our simulation results indicate that the newly developed non-parametric method performs as well as the existing parametric method and has better finite-sample performance than the existing non-parametric methods. The new method is flexible and easy to implement in practice. A real example is also used to illustrate the application of the proposed interval.
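The empirical (non-parametric) Youden index itself is straightforward to compute: scan the observed test values as candidate cut-points and keep the one that maximizes sensitivity plus specificity minus one. The sketch below does this on simulated marker values and adds a naive percentile-bootstrap interval purely for comparison; the data are made up, and this is not the new interval proposed in the thesis.

```python
import numpy as np

def youden_index(diseased, healthy):
    """Empirical Youden index J = max_c {sens(c) + spec(c) - 1} for a
    continuous-scale test where larger values indicate disease."""
    cuts = np.unique(np.concatenate([diseased, healthy]))
    best_j, best_c = -1.0, None
    for c in cuts:
        sens = np.mean(diseased >= c)     # true positive rate at cut-point c
        spec = np.mean(healthy < c)       # true negative rate at cut-point c
        if sens + spec - 1.0 > best_j:
            best_j, best_c = sens + spec - 1.0, c
    return best_j, best_c

# toy marker values (hypothetical)
rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 200)
diseased = rng.normal(1.2, 1.0, 150)
J, cut = youden_index(diseased, healthy)

# naive percentile-bootstrap interval, shown only as a baseline for comparison
boot = [youden_index(rng.choice(diseased, diseased.size, replace=True),
                     rng.choice(healthy, healthy.size, replace=True))[0]
        for _ in range(500)]
print(J, cut, np.percentile(boot, [2.5, 97.5]))
```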
549

Statistical Evaluation of Continuous-Scale Diagnostic Tests with Missing Data

Wang, Binhuan 12 June 2012 (has links)
The receiver operating characteristic (ROC) curve methodology is the statistical methodology for assessing the accuracy of diagnostic tests or biomarkers. Currently, the most widely used statistical methods for inference on ROC curves are complete-data-based parametric, semi-parametric or nonparametric methods. However, these methods cannot be used in diagnostic applications with missing data. In practical situations, missing diagnostic data occur commonly for various reasons, such as medical tests being too expensive, too time-consuming or too invasive. This dissertation aims to develop new nonparametric statistical methods for evaluating the accuracy of diagnostic tests or biomarkers in the presence of missing data. Specifically, novel nonparametric statistical methods will be developed, for different types of missing data, for (i) inference on the area under the ROC curve (AUC, a summary index for the diagnostic accuracy of the test) and (ii) joint inference on the sensitivity and the specificity of a continuous-scale diagnostic test. In this dissertation, we will provide a general framework that combines empirical likelihood and general estimating equations with nuisance parameters for the joint inference of sensitivity and specificity with missing diagnostic data. The proposed methods will have sound theoretical properties. The theoretical development is challenging because the proposed profile log-empirical likelihood ratio statistics are not the standard sum of independent random variables. The new methods combine the power of likelihood-based approaches and the jackknife method in ROC studies. Therefore, they are expected to be more robust, more accurate and less computationally intensive than existing methods in the evaluation of competing diagnostic tests.
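For reference, the standard complete-data nonparametric quantities the dissertation builds on are simple to compute: the Mann-Whitney form of the AUC and the empirical sensitivity and specificity at a cut-point. The sketch below uses simulated marker values and, being complete-data, does not implement the missing-data or empirical-likelihood machinery that is the dissertation's contribution.

```python
import numpy as np

def auc_mann_whitney(diseased, nondiseased):
    """Nonparametric (Mann-Whitney) AUC: the probability that a diseased subject
    scores higher than a non-diseased one, with ties counted as 1/2."""
    gt = (diseased[:, None] > nondiseased[None, :]).mean()
    eq = (diseased[:, None] == nondiseased[None, :]).mean()
    return gt + 0.5 * eq

def sens_spec(diseased, nondiseased, cut):
    """Empirical sensitivity and specificity of the test at cut-point `cut`."""
    return np.mean(diseased >= cut), np.mean(nondiseased < cut)

# hypothetical complete-data example; with missing disease labels these naive
# estimators would need weighting or empirical-likelihood corrections
rng = np.random.default_rng(0)
dis = rng.normal(1.0, 1.0, 100)
non = rng.normal(0.0, 1.0, 120)
print(auc_mann_whitney(dis, non), sens_spec(dis, non, cut=0.5))
```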
550

How Effective is the Kyoto Protocol in Impelling Emission Reduction

Yang, Haoyuan, Zhang, Qian January 2011 (has links)
The Kyoto Protocol is one of the most important international climate change treaties aimed at fighting global warming. The protocol entered into force in 2005, with its first commitment period covering 2008-2012. However, its effectiveness in reducing CO2 emissions has long been debated. The purpose of this thesis is to empirically assess the impact of the Kyoto Protocol on carbon dioxide reduction across countries, and whether the protocol led to a significant difference after entering into force in 2005. The data used in this thesis cover 37 Annex B countries and 148 non-Annex B countries from 1990 to 2007. The models are constructed on the basis of the various factors contributing to CO2 emissions and the Environmental Kuznets Curve model. The main finding is contrary to the expected result: the insignificant dummy variable does not indicate a “structural break” in CO2 emissions after the Kyoto Protocol was implemented. The conclusion is that political agreements such as the Kyoto Protocol do not show a critical effect on reducing carbon dioxide. The main underlying driving factors of CO2 emissions are energy use, electricity from coal sources and fossil fuel burning, in other words industrialization, and technological development cannot keep pace with finding new energy sources and effectively controlling CO2 emissions in the short run.
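The kind of test described above can be sketched as a regression of (log) emissions on income, income squared (the Kuznets-curve terms) and a post-2005 dummy whose t-statistic speaks to a structural break. The sketch below runs this on simulated placeholder data rather than the thesis's country panel, with the true dummy effect set to zero so that the expected outcome mirrors the insignificance reported above.

```python
import numpy as np

# simulated placeholder data (not the thesis's WDI panel)
rng = np.random.default_rng(42)
n = 500
log_gdp = rng.normal(9.0, 1.0, n)                      # log GDP per capita
post2005 = rng.integers(0, 2, n)                       # 1 if observation year is after 2005
log_co2 = (-4.0 + 1.5 * log_gdp - 0.06 * log_gdp**2
           + 0.0 * post2005 + rng.normal(0, 0.3, n))   # true Kyoto effect set to zero

# pooled OLS with the EKC terms and the Kyoto dummy
X = np.column_stack([np.ones(n), log_gdp, log_gdp**2, post2005])
beta, *_ = np.linalg.lstsq(X, log_co2, rcond=None)

# classical OLS standard errors and the t statistic for the Kyoto dummy
resid = log_co2 - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print("Kyoto dummy coefficient:", beta[3], "t =", beta[3] / se[3])
```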
