571
A Formulation of Multidimensional Growth Models for the Assessment and Forecast of Technology Attributes
Danner, Travis W. 05 July 2006
229 Pages
Directed by Dr. Dimitri Mavris
This research proposes multidimensional growth models as an approach to simulating the advancement of multi-objective technologies toward their upper limits. These models are formulated by recognizing and exploiting the correspondence between technology growth models and technology frontiers: both are, in fact, frontiers. The technology growth curve is a frontier between the capability level of a single attribute and time, while a technology frontier separates the attainable capability levels of two or more attributes. Multidimensional growth models exploit the mathematical significance of this correspondence. The result is a model that captures both the interaction between multiple system attributes and their expected rates of improvement over time. The fundamental nature of technology development is preserved, and interdependent growth curves are generated for each system metric with minimal data requirements. Because the model is grounded in the basic nature of technology advancement relative to physical limits, the room for further improvement in a single metric can be determined relative to other system measures of merit. A byproduct of this modeling approach is a single n-dimensional technology frontier linking all n system attributes with time, providing an environment capable of forecasting future system capability in the form of advancing technology frontiers.
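As a rough illustration of the idea (a minimal sketch under assumed logistic growth, with hypothetical limits and rates, not the formulation developed in the thesis):

```python
# Each attribute follows an S-shaped growth curve toward its physical limit,
# and all attributes share the same time axis, so the pairs they trace out
# form a frontier parameterized by time.
import numpy as np

def logistic(t, limit, rate, midpoint):
    """Classic S-shaped technology growth curve approaching a physical limit."""
    return limit / (1.0 + np.exp(-rate * (t - midpoint)))

# Hypothetical two-attribute technology: different rates, different limits.
t = np.linspace(1990, 2030, 81)
attr1 = logistic(t, limit=100.0, rate=0.25, midpoint=2005)  # e.g., efficiency
attr2 = logistic(t, limit=40.0, rate=0.15, midpoint=2010)   # e.g., range

# The implied technology frontier: (attr1, attr2) pairs traced out over time.
frontier = np.column_stack([attr1, attr2])

# Remaining room for improvement in attribute 1 relative to its limit --
# the kind of question the model is meant to answer.
remaining = 1.0 - attr1[-1] / 100.0
print(f"Unrealized potential in attribute 1: {remaining:.1%}")
```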
In addition to formulating the multidimensional growth model, this research provides a systematic procedure for applying it to specific technology architectures. Researchers and decision-makers are able to investigate the potential for additional improvement within that technology architecture and estimate the expected cost of each incremental improvement relative to the cost of past improvements. In this manner, multidimensional growth models provide the necessary information to set reasonable program goals for the further development of a particular technological approach or to establish the need for new technological approaches in light of the constraining limits of conventional approaches.
572
Extension of the master sintering curve for constant heating rate modeling
McCoy, Tammy Michelle 15 January 2008
The purpose of this work is to extend the functionality of the Master Sintering Curve (MSC) so that it can be used as a practical tool for predicting sintering schemes that combine a constant heating rate with an isothermal hold. Rather than predicting only a final density for the object of interest, the extended MSC can model a sintering run from start to finish. Because the Johnson model does not incorporate this capability, the work presented here extends what the literature has already shown to be a valuable resource in many sintering situations. A predicted sintering curve that combines a constant heating rate with an isothermal hold is more representative of real-life sintering operations. This research offers the possibility of predicting the sintering schedule for a material, thereby providing advance information about the extent of sintering, the time schedule for sintering, and the sintering temperature with a high degree of accuracy and repeatability.
The research conducted in this thesis focuses on the development of a working model for predicting the sintering schedules of several stabilized zirconia powders having the compositions YSZ (HSY8), 10Sc1CeSZ, 10Sc1YSZ, and 11ScSZ1A. The compositions of the four powders are first verified using x-ray diffraction (XRD), and the particle size and surface area are verified using a particle size analyzer and BET analysis, respectively. The sintering studies are conducted on powder compacts using a double push-rod dilatometer. Density measurements are obtained both geometrically and by the Archimedes method.
Each of the four powders is pressed into 1/4-inch-diameter pellets using a manual press with no additives, such as a binder or lubricant. Using the double push-rod dilatometer, shrinkage data for the pellets are obtained over several different heating rates. The shrinkage data are then converted to the change in relative density of the pellets based on the green density and the theoretical density of each composition. The Master Sintering Curve (MSC) model is then used to generate data for predicting the final density of the respective powder over a range of heating rates.
The Elton Master Sintering Curve Extension (EMSCE) is developed to extend the functionality of the MSC tool. The parameters generated from the original MSC are used in tandem with the solution to a specific closed integral (discussed in the document) over a set range of temperatures. The EMSCE is used to generate a set of sintering curves having both constant-heating-rate and isothermal-hold portions, extending the usefulness of the MSC by allowing the generation of a complete sintering schedule rather than only the final relative density of a given material. The EMSCE is verified by generating a set of curves having both a constant heating rate and an isothermal hold for the heat treatment. The modeled curves are verified experimentally, and a comparison of the model and experimental results is given for a selected composition.
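For concreteness, the work-of-sintering integral at the heart of the MSC can be evaluated over such a combined schedule as follows (a minimal sketch assuming the standard Su-Johnson form; the activation energy and schedule below are hypothetical):

```python
# Theta(t) = integral of (1/T) * exp(-Q/(R*T)) dt, accumulated over a
# constant-heating-rate ramp followed by an isothermal hold.
import numpy as np

R = 8.314   # gas constant, J/(mol*K)
Q = 600e3   # apparent activation energy, J/mol (hypothetical value)

def theta(times, temps):
    """Numerically accumulate the work-of-sintering integral over a schedule."""
    integrand = np.exp(-Q / (R * temps)) / temps
    return np.trapz(integrand, times)

# Constant heating rate: 10 K/min from 300 K to 1700 K...
ramp_t = np.linspace(0, (1700 - 300) / 10 * 60, 2000)   # seconds
ramp_T = 300 + 10 / 60 * ramp_t
# ...followed by a 2-hour isothermal hold at 1700 K.
hold_t = ramp_t[-1] + np.linspace(0, 2 * 3600, 500)
hold_T = np.full_like(hold_t, 1700.0)

total_theta = theta(ramp_t, ramp_T) + theta(hold_t, hold_T)
# Relative density is then read off the master curve rho(log Theta),
# a sigmoid fitted once per powder from dilatometry at several heating rates.
print(f"log10(Theta) = {np.log10(total_theta):.2f}")
```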
Porosity within the final product can prevent a part from sintering to full density. Some of the compositions studied did not sinter to full density because of large pores that could not be eliminated in a reasonable amount of time. A statistical analysis of the volume fraction of porosity is completed to show the significance of its presence in the final product. This matters for the MSC because the model does not take the presence of porosity into account and assumes that the samples sinter to full density; when this does not happen, the model under-predicts the final density of the material.
573
A study on low voltage ride-through capability improvement for doubly fed induction generator
Lin, Xiao-Chiu 02 September 2010
Large-scale unscheduled tripping of wind power generation can lead to power system stability problems, so network interconnection regulations become more rigid once wind power penetration reaches a non-negligible portion of total generation. This thesis presents a comparison of five low voltage ride-through (LVRT) capability enhancement technologies: additional rotor resistance, a DC bus chopper, a crowbar on the rotor, a combination of the above schemes, and grid voltage support through control of the grid-side converter. System simulations are performed in the DIgSILENT environment with the models and control blocks provided by the package, and additional models are developed to implement the LVRT enhancement schemes studied. A doubly-fed induction generator (DFIG) with pitch control is used to simulate system fault scenarios with different voltage sag magnitudes and durations. Simulation results indicate that the enhancement schemes provide various levels of relief for DC bus overvoltage, rotor winding overcurrent, and overspeed problems, and that the method combining all tested schemes appears to give the best result.
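The protection logic behind two of the compared schemes is simple to picture (an illustrative sketch, not the thesis's DIgSILENT models; the per-unit thresholds are hypothetical and real settings depend on converter ratings and grid codes):

```python
def lvrt_protection(v_dc: float, i_rotor: float) -> dict:
    """Decide chopper/crowbar action for one simulation step of a DFIG."""
    V_DC_MAX = 1.15    # p.u. DC-link voltage that triggers the chopper
    I_ROTOR_MAX = 1.8  # p.u. rotor current that triggers the crowbar

    return {
        # DC bus chopper: dissipate excess DC-link energy in a braking
        # resistor to relieve the overvoltage.
        "chopper_on": v_dc > V_DC_MAX,
        # Crowbar: short the rotor through resistors so the rotor-side
        # converter is bypassed and survives the overcurrent.
        "crowbar_on": i_rotor > I_ROTOR_MAX,
    }

print(lvrt_protection(v_dc=1.2, i_rotor=2.1))
```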
574
Efficient Spatial Access Methods for Spatial Queries in Spatio-Temporal Databases
Chen, Hue-Ling 20 May 2011
With the large number of spatial queries over spatial data objects that change with time in many applications, e.g., location-based services and geographic information systems, spatio-temporal databases have been developed to manage such objects. Here we focus on queries over stationary and moving objects in the spatial database. However, there is no total ordering for the large volume of complicated objects that may change their geometries with time. A spatial access method based on a spatial index structure attempts to preserve spatial proximity as much as possible, so that the number of disk accesses, which dominates the response time, is reduced during query processing.

Therefore, in this dissertation, based on the NA-tree, we first propose the NA-tree join method over stationary objects. Our NA-tree join simply uses a correlation table to directly obtain candidate leaf nodes from two NA-trees that have non-empty overlaps; it accesses objects once from those candidate leaf nodes and returns pairs of objects with non-empty overlaps. Second, we propose the NABP method for continuous range queries over moving objects. Our NABP method uses the bit-patterns of regions in the NA-tree to check the relation between range queries and moving objects, and it searches only one path in the NA-tree per range query, instead of more than one path as in the R*-tree-based method, which suffers from the overlapping problem. When the number of range queries increases with time, our NABP method incrementally updates the affected range queries by bit-pattern checking, instead of rebuilding the index as the cell-based method does. The experimental results show that our NABP method needs less time than the cell-based method for range-query update and less time than the R*-tree-based method for moving-object update.

Based on the Hilbert curve with its good clustering property, we propose the ANHC method to answer all-nearest-neighbors queries by way of our ONHC method, which answers one-nearest-neighbor queries over stationary objects. We generate direction sequences to store the orientations of the query block in Hilbert curves of different orders. By using quaternary numbers and direction sequences of the query block, we obtain the relative locations of the neighboring blocks and compute their quaternary numbers. We then directly access the neighboring blocks by their sequence numbers, obtained by converting the quaternary numbers from base four to base ten. The nearest neighbor is found by distance comparisons within these blocks. The experimental results show that our ONHC and ANHC methods need less time than the CCSF method for one-nearest-neighbor queries and less time than the method based on R*-trees for all-nearest-neighbors queries, respectively.
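The direct access by sequence number rests on the standard mapping between a cell's coordinates and its position along the Hilbert curve; a common form of that conversion is sketched below (illustrative only, not the dissertation's ONHC implementation):

```python
def hilbert_xy_to_d(n: int, x: int, y: int) -> int:
    """Map cell (x, y) on an n x n grid (n a power of two) to its
    Hilbert curve sequence number."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the sub-curve keeps a consistent
        # orientation at the next finer order.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Nearby cells receive nearby sequence numbers -- the clustering property
# that lets Hilbert-based methods touch few disk blocks per query.
print([hilbert_xy_to_d(8, x, 0) for x in range(8)])
```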
575
High Speed Scalar Multiplication Architecture for Elliptic Curve Cryptosystem
Hsu, Wei-Chiang 28 July 2011
An important advantage of the Elliptic Curve Cryptosystem (ECC) is the shorter key length among public key cryptographic systems: it provides adequate security once the bit length exceeds 160 bits. It has therefore become a popular system in recent years. Scalar multiplication, also called point multiplication, is the core operation in ECC. In this thesis, we propose ECC architectures for two different irreducible polynomials: a trinomial in GF(2^167) and a pentanomial in GF(2^163). These architectures are based on Montgomery point multiplication with projective coordinates. We use polynomial-basis representation for the finite field arithmetic. All adopted multiplication, squaring, and addition operations over the binary field complete within one clock cycle, and the critical path lies in the multiplication. In addition, we use the Itoh-Tsujii algorithm, combined with an addition chain, to compute inversion in the binary field through iterated squarings and multiplications.
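The regular structure that makes Montgomery point multiplication attractive for such hardware is sketched below (a toy prime-field curve in affine coordinates stands in for the thesis's binary-field projective arithmetic; the curve and point are hypothetical):

```python
P = 97       # toy field prime (demo only)
A, B = 2, 3  # curve y^2 = x^3 + 2x + 3 over GF(97)

def add(p1, p2):
    """Affine point addition (None is the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def montgomery_ladder(k, pt):
    """Process key bits MSB-first; every bit costs exactly one add and one
    double, giving the fixed per-iteration schedule that hardware (e.g. the
    Minus Cycle and Pipeline versions) can reschedule or pipeline."""
    r0, r1 = None, pt
    for bit in bin(k)[2:]:
        if bit == '0':
            r0, r1 = add(r0, r0), add(r0, r1)
        else:
            r0, r1 = add(r0, r1), add(r1, r1)
    return r0

G = (3, 6)   # on the toy curve: 6^2 = 36 and 27 + 6 + 3 = 36 (mod 97)
print(montgomery_ladder(13, G))
```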
Because the double and add operations in point multiplication run for many iterations, the overall execution time decreases if this portion is improved. We propose two ways to improve the performance of point multiplication. The first is the Minus Cycle Version, in which we reschedule the double and add operations of the point multiplication algorithm; when the clock cycle time (i.e., critical path) of multiplication is longer than that of addition and squaring, this method improves performance. The second is the Pipeline Version, which executes the multiplication operations in a pipeline, leading to a shorter clock cycle time.
For the hardware implementation, the TSMC 0.13 μm library is employed and all modules are organized in a hierarchical structure. The implementation results show that the proposed 167-bit Minus Cycle Version requires 156.4K gates, executes a point multiplication in 2.34 μs, and reaches a maximum speed of 591.7 MHz. Moreover, we compare the Area × Time (AT) value of the proposed architectures with related work. The results show that the proposed 167-bit Minus Cycle Version is the best, saving up to 38% in AT value compared with the traditional design.
576
A Hilbert Curve-Based Algorithm for Order-Sensitive Moving KNN Queries
Feng, Fei-Chung 11 July 2012
Because wireless communication technologies, positioning technologies, and mobile computing have developed quickly, mobile services have become practical and important for the management of large spatio-temporal databases. Mobile service users move within a spatial space, e.g., a country, and often issue the k-nearest-neighbor (kNN) query to obtain data objects reachable through the spatial database. The challenge for mobile services is how to efficiently deliver the data objects of interest to the corresponding mobile users. One type of kNN query problem is the order-sensitive moving kNN (order-sensitive MkNN) query problem, in which the query point is dynamic and unpredictable, and the kNN answers should be returned in real time, sorted by distance in ascending order. Therefore, how to produce the kNN answers efficiently, incrementally, and correctly is an important issue.

Nutanong et al. proposed the V*-kNN algorithm to process the order-sensitive MkNN query. The V*-kNN algorithm uses their V*-diagram algorithm to generate the safe region, and the Incremental Rank Updates (IRU) algorithm to handle events when the query point crosses bisectors or the boundary of the safe region. However, V*-kNN retrieves NNs with the BF-kNN algorithm, which is non-incremental, so the search time increases as object density increases. Moreover, it considers neither the situation where multiple objects share the same order nor the situation where multiple events happen in a single step; these situations may make the kNN answers incorrect.

Therefore, in this thesis, we propose the Hilbert curve-based kNN (HC-kNN) algorithm to process the order-sensitive MkNN query. The HC-kNN algorithm handles the situation where multiple events happen in a single step, and we propose a new data structure for the kNN answers. Next, we propose the Intersection of Perpendicular Bisectors (IPB) algorithm to handle order-update events of the kNN answers, including the situation where multiple objects share the same order. Finally, based on the Hilbert curve index, we propose the ONHC-kNN algorithm to retrieve NNs incrementally and to generate the safe region. The safe region is not affected as object density increases, and it is larger than that of the V*-kNN algorithm. Our simulation results show that the HC-kNN algorithm provides better performance than the V*-kNN algorithm.
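The rank-update events that the IPB algorithm handles can be pictured with a simple sign test (an illustrative sketch, not the thesis's implementation): two answers swap ranks exactly when the moving query point crosses their perpendicular bisector, and a zero value marks the tie case where several objects share one order.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def side_of_bisector(query, a, b):
    """Negative: query is nearer to a; positive: nearer to b; zero: query
    lies on the bisector of a and b (objects then share the same order)."""
    return dist(query, a) - dist(query, b)

a, b = (2.0, 0.0), (0.0, 3.0)
for q in [(0.0, 0.0), (0.0, 2.5)]:   # query positions before/after a crossing
    s = side_of_bisector(q, a, b)
    order = [a, b] if s < 0 else [b, a] if s > 0 else "tied"
    print(q, "->", order)
```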
577
Energy-Efficient Scalable Serial-Parallel Multiplication Architecture for Elliptic Curve Cryptosystem
Su, Chuan-Shen 25 July 2012
In asymmetric cryptosystems, an important advantage of the Elliptic Curve Cryptosystem (ECC) is its shorter key lengths compared with other cryptosystems: it provides an adequate level of security once the bit length exceeds 160 bits. It has therefore become a popular public key cryptographic system in recent years.
The multiplier runs many times in scalar multiplication and plays an essential role in ECC. Since the registers in the multiplier are shifted every iteration, they consume considerable power during computation. In this thesis, we therefore propose five methods to reduce the multiplication's energy consumption, based on a scalable serial-parallel algorithm [1]. The first method is a low-power shift register, obtained by modifying shift register B to reduce how often the registers shift. The second method uses a frequency-divider circuit: by modifying the RA units, registers access a value every two clock cycles. The third method introduces gated clocks, so the clock signal of a register is disabled when its value does not change. The fourth method skips redundant operations, decreasing the number of clock cycles needed to complete a multiplication. The last method raises the multiplier's throughput by modifying the RA units. The first three methods focus on low-power design, while the last two emphasize performance; reducing power consumption and improving performance together save multiplication energy. Finally, we propose a Half Cycles schedule to raise the performance of scalar multiplication, based on the Montgomery scalar multiplication algorithm with projective coordinates [22][26].
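The fourth method's effect is easy to picture in software (a hypothetical model, not the thesis's RTL): a bit-serial polynomial-basis multiplier spends one cycle per bit of the second operand, and cycles after its last one-bit contribute nothing and can be skipped.

```python
def serial_gf2m_mult(a: int, b: int, poly: int, m: int) -> tuple[int, int]:
    """Bit-serial polynomial-basis multiplication a*b mod poly over GF(2^m).
    Returns (product, cycles_used) to show the cycle savings."""
    acc, cycles = 0, 0
    while b:                      # stop early once no one-bits remain in b
        cycles += 1
        if b & 1:
            acc ^= a              # conditionally add (XOR) the multiplicand
        a <<= 1
        if a & (1 << m):          # reduce modulo the irreducible polynomial
            a ^= poly
        b >>= 1
    return acc, cycles

# Toy field GF(2^4) with irreducible polynomial x^4 + x + 1 (0b10011):
# (x^3 + x)(x + 1) = x^3 + x^2 + 1, finished in 2 cycles instead of 4.
prod, cycles = serial_gf2m_mult(0b1010, 0b0011, 0b10011, 4)
print(f"product = {prod:04b}, cycles = {cycles} (vs. 4 without skipping)")
```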
For the hardware implementation, the TSMC 0.13 μm library is employed and all modules are organized in a hierarchical structure. The implementation results show that the proposed multipliers consume less energy than the traditional multiplier, achieving 5% to 24% energy savings. For Montgomery scalar multiplication, energy consumption is reduced by 12% to 47%, making the design suitable for portable electronic products because of its low area complexity and low energy.
578
The Determination of Mechanical Properties of Biomedical Materials
Chien, Hui-Lung 29 August 2012
The mechanical properties of biomedical materials were determined and discussed in this study. Extension and tensile tests for the aorta and coronary artery were carried out using a tensile testing machine, and, based on the incompressibility of biological soft tissue, the stress-stretch curves of the arteries were obtained. This study proposed a nonlinear Ogden material model for the numerical simulation of coronary artery extension during stent implantation; the corresponding Ogden model parameters were derived from the stress-stretch curves obtained in the tensile tests. For validation, the proposed model was applied to a Palmaz-type stent implantation process. The simulated stent deformation was found to be reasonable and correlated well with the measured results.
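To make the fitting step concrete, the sketch below fits a one-term incompressible Ogden model to hypothetical uniaxial stress-stretch data (the thesis's actual parameter values and model order are not reproduced here):

```python
# For uniaxial tension of an incompressible one-term Ogden material, the
# Cauchy stress is sigma = mu * (lam**alpha - lam**(-alpha/2)).
import numpy as np
from scipy.optimize import curve_fit

def ogden_uniaxial(lam, mu, alpha):
    return mu * (lam**alpha - lam**(-alpha / 2.0))

# Hypothetical artery-like data: strong stiffening at large stretch (kPa).
lam_data = np.array([1.0, 1.1, 1.2, 1.3, 1.4, 1.5])
sig_data = np.array([0.0, 6.0, 16.0, 33.0, 62.0, 110.0])

(mu, alpha), _ = curve_fit(ogden_uniaxial, lam_data, sig_data, p0=[10.0, 5.0])
print(f"fitted mu = {mu:.2f} kPa, alpha = {alpha:.2f}")
```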
Microindentation experiments were used to measure the mechanical properties of the enamel and dentine of human teeth. To reveal the relation between the experimental parameters and the measured mechanical properties, Young's moduli were investigated while varying the experimental parameters. Among them, the maximum indentation load most significantly influences the measured values, although Young's modulus varies only slightly for maximum indentation loads between 10 and 100 mN. Young's modulus is not sensitive to the portion of unloading data used or to tooth age.
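A minimal sketch of the standard Oliver-Pharr reduction commonly used to extract Young's modulus from microindentation unloading data (the inputs below are hypothetical, not the thesis's measurements):

```python
import math

def youngs_modulus(S, A, E_i=1141e9, nu_i=0.07, nu_s=0.30, beta=1.0):
    """S: unloading stiffness dP/dh (N/m); A: projected contact area (m^2).
    E_i, nu_i: diamond indenter constants; nu_s: specimen Poisson ratio."""
    E_r = math.sqrt(math.pi) * S / (2.0 * beta * math.sqrt(A))  # reduced modulus
    inv_Es = 1.0 / E_r - (1.0 - nu_i**2) / E_i                  # remove indenter part
    return (1.0 - nu_s**2) / inv_Es

# Hypothetical enamel-like indent: S = 0.5 MN/m, contact area 25 um^2.
print(f"E = {youngs_modulus(0.5e6, 25e-12) / 1e9:.1f} GPa")
```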
A combination of finite element analysis and curve fitting is proposed to determine the mechanical properties of a thin film deposited on a substrate. The film's mechanical properties, i.e., Young's modulus, yield strength, and strain-hardening exponent, were extracted by applying an iterative curve-fitting scheme to the experimental and simulated force-indentation depth curves during microindentation loading and unloading. The variation of the mechanical properties of TiN thin films with thicknesses ranging from 0.2 to 1.4 μm was extracted. The results show a film-thickness effect: the Young's modulus of TiN thin films decreases with decreasing film thickness, particularly at thicknesses less than 0.8 μm. It can therefore be inferred that a film thickness of 0.8 μm possibly represents the upper bound when employing macroscopic mechanics with bulk material properties.
579
Essays on Interest Rate Analysis with GovPX Data
Song, Bong Ju August 2009
U.S. Treasury securities are crucially important in many areas of finance. However, zero-coupon yields are not observable in the market, and even where published zero-coupon yields exist, they are sometimes unavailable for certain research topics or at high frequency. High-frequency data analysis has recently become popular, and the GovPX database is a good source of tick data for U.S. Treasury securities from which zero-coupon yield curves can be constructed. We therefore fit zero-coupon yield curves to low-frequency and high-frequency GovPX data by three different methods: the Nelson-Siegel method, the Svensson method, and the cubic spline method. We then retest the expectations hypothesis (EH) with the new zero-coupon yields constructed from GovPX data using the Campbell-Shiller regression, the Fama-Bliss regression, and the Cochrane-Piazzesi regression. Regardless of the method used (Nelson-Siegel, Svensson, or cubic spline), the expectations hypothesis cannot be rejected over the period from June 1991 to December 2006 for most maturities in many cases. We suggest a possible explanation for this result: based on the overreaction hypothesis, the degree of overreaction of the spread falls over time, so our result supports the view that the evidence for rejecting the EH has weakened over time. We also introduce a new estimation method for the stochastic volatility model of short-term interest rates and compare it with the existing method. The results suggest that our new method works well for the stochastic volatility model of short-term interest rates.
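For reference, the Nelson-Siegel form used in the first of these methods is simple to write down (a minimal sketch with hypothetical parameters; in the essays the parameters are fitted to GovPX-based prices):

```python
# Nelson-Siegel zero-coupon yield: a level/slope/curvature combination.
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Zero-coupon yield at maturity tau (years)."""
    x = tau / lam
    slope = (1 - np.exp(-x)) / x       # loads the short end
    curvature = slope - np.exp(-x)     # humps at medium maturities
    return beta0 + beta1 * slope + beta2 * curvature

maturities = np.array([0.25, 1, 2, 5, 10, 30])
yields = nelson_siegel(maturities, beta0=0.055, beta1=-0.02, beta2=0.01, lam=1.5)
print(np.round(yields * 100, 2))  # percent
```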
580
Oblivious Handshakes and Sharing of Secrets of Privacy-Preserving Matching and Authentication Protocols
Duan, Pu May 2011
This research focuses on two of the most important privacy-preserving techniques: privacy-preserving element matching protocols and privacy-preserving credential authentication protocols, where an element represents information generated by users themselves and a credential represents a group membership assigned by an independent central authority (CA). The former is also known as a private set intersection (PSI) protocol and the latter as a secret handshake (SH) protocol. In this dissertation, I present a general framework for the design of efficient and secure PSI and SH protocols based on similar message exchange and computing procedures that confirm the "commonality" of the exchanged information, while protecting that information from the other party when the commonality test fails. I propose to use a homomorphic randomization function (HRF) to meet the privacy-preserving requirements: a common element or credential can be computed efficiently thanks to the homomorphism of the function, while an uncommon element or credential is difficult to derive because of the randomization of the same function.
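The flavor of such a function can be illustrated with group exponentiation, one classic instantiation of this kind of homomorphic randomization (a toy sketch under assumed Diffie-Hellman-style hardness; it is not the dissertation's construction and is not secure as written):

```python
# H(x)^a can be re-randomized by the other party to (H(x)^a)^b = H(x)^(a*b):
# matching elements collide, non-matching ones look random to each side.
import hashlib, secrets

P = 2**127 - 1   # toy prime modulus (illustrative only, not secure)

def H(element: str) -> int:
    """Hash an element into the group."""
    return int.from_bytes(hashlib.sha256(element.encode()).digest(), "big") % P

def randomize(value: int, key: int) -> int:
    """The homomorphic randomization step: value^key mod P."""
    return pow(value, key, P)

a_key, b_key = secrets.randbelow(P - 2) + 1, secrets.randbelow(P - 2) + 1
alice = {"apple", "pear", "plum"}
bob = {"pear", "kiwi"}

# Each side randomizes its own hashed elements, exchanges them, and the peer
# applies its key on top; H(x)^(a*b) matches on both sides iff x is common.
alice_out = {randomize(randomize(H(x), a_key), b_key) for x in alice}
bob_out = {randomize(randomize(H(x), b_key), a_key) for x in bob}
print(len(alice_out & bob_out), "common element(s)")  # expect 1 ("pear")
```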
Based on the general framework, two new PSI protocols with linear computing and communication cost are proposed. The first uses a full homomorphic randomization function as its cryptographic basis and the second a partial homomorphic randomization function; both achieve element confidentiality and private set intersection. A new SH protocol is also designed on the framework, achieving unlinkability with a reusable pair of credential and pseudonym and the least number of bilinear mapping operations. I further propose interlocking the proposed PSI and SH protocols to obtain new protocols with new security properties. When a PSI protocol is executed first and the matched elements are associated with the credentials in a following SH protocol, authenticity is guaranteed for the matched elements. When an SH protocol is executed first and the verified credentials are used in a following PSI protocol, detection resistance and impersonation-attack resistance are guaranteed for the matching elements.
The proposed PSI and SH protocols are implemented to provide a privacy-preserving inquiry matching service (PPIM) for social networking applications and a privacy-preserving correlation service (PAC) for network security alerts. PPIM allows online social consumers to find partners with matched inquiries and verified group memberships without exposing any information to unmatched parties. PAC allows independent network alert sources to find common alerts without unveiling their local network information to each other.