  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Numerical Methods for Optimal Trade Execution

Tse, Shu Tong January 2012 (has links)
Optimal trade execution aims at balancing price impact and timing risk. With respect to the mathematical formulation of the optimization problem, we primarily focus on Mean Variance (MV) optimization, in which the two conflicting objectives are maximizing expected revenue (the flip side of price impact) and minimizing variance of revenue (a measure of timing risk). We also consider the use of expected quadratic variation of the portfolio value process as an alternative measure of timing risk, which leads to Mean Quadratic Variation (MQV) optimization. We demonstrate that MV-optimal strategies differ from MQV-optimal strategies in many respects, in stark contrast to the common belief that MQV-optimal strategies are similar to, or even the same as, MV-optimal strategies. These differences should be of interest to practitioners, since we prove that the classic Almgren-Chriss strategies (the industry standard) are MQV-optimal, contrary to the common belief that they are MV-optimal. From a computational point of view, we extend theoretical results in the literature to prove that the mean-variance efficient frontier computed using our method is indeed the complete Pareto-efficient frontier. First, we generalize the result in Li (2000) on the embedding technique and develop a post-processing algorithm that guarantees Pareto-optimality of the numerically computed efficient frontier. Second, we extend the convergence result in Barles (1990) to viscosity solutions of a system of nonlinear Hamilton-Jacobi-Bellman partial differential equations (HJB PDEs). On the numerical side, we combine similarity reduction, non-standard interpolation, and careful grid construction to significantly improve the efficiency of our methods for solving nonlinear HJB PDEs.
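For readers unfamiliar with the Almgren-Chriss strategies mentioned above, the sketch below evaluates their well-known closed-form liquidation trajectory, x(t) = X sinh(κ(T−t))/sinh(κT) with κ = sqrt(λσ²/η). This is background illustration only, not the thesis's numerical method, and all parameter values are made-up assumptions.

```python
import math

def almgren_chriss_holdings(X, T, N, lam, sigma, eta):
    """Closed-form Almgren-Chriss holdings trajectory: shares remaining
    at each of N+1 equally spaced times in [0, T].

    X: initial shares, T: horizon, lam: risk aversion,
    sigma: volatility, eta: temporary impact coefficient."""
    kappa = math.sqrt(lam * sigma ** 2 / eta)  # urgency parameter
    times = [j * T / N for j in range(N + 1)]
    return [X * math.sinh(kappa * (T - t)) / math.sinh(kappa * T) for t in times]

# illustrative parameter values (assumptions, not taken from the thesis)
traj = almgren_chriss_holdings(X=1e6, T=1.0, N=10, lam=2e-6, sigma=0.95, eta=2.5e-6)
```

The trajectory starts at the full position, decays monotonically, and ends at zero; larger risk aversion λ produces a more front-loaded schedule.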
132

Multiple antenna systems in a mobile-to-mobile environment

Kang, Heewon 20 November 2006 (has links)
The objective of this dissertation is to design new architectures for multiple antenna wireless communication systems operating in a mobile-to-mobile environment and to develop a theoretical framework according to which these systems can be analyzed. Recent results in information theory have demonstrated that the wireless channel can support enormous capacity if the multipath is properly exploited using multiple antennas. Future communication systems will likely evolve into a variety of combinations encompassing mobile-to-mobile and mobile-to-fixed-station communications. Therefore, we explore the use of multiple antennas for mobile-to-mobile communications. Based on the characteristics of mobile-to-mobile radio channels, we propose new architectures that deploy directional antennas for multiple antenna systems operating in a mobile-to-mobile environment. The first architecture consists of multiple-input multiple-output (MIMO) systems with directional antennas, which have good spatial correlation properties and provide higher capacities than conventional systems without requiring a rich scattering environment. The second consists of single-input multiple-output (SIMO) systems with directional antennas, which improve the signal-to-interference-plus-noise ratio (SINR) over conventional systems. We also propose a new combining scheme that selects the outputs of optimal combining (SOOC) in this architecture. Optimal combining (OC) is the key technique by which multiple antenna systems suppress interference and mitigate fading effects. Based on complex random matrix theory, we develop an analytical framework for the performance analysis of OC. We derive several important closed-form solutions, such as the moment generating function (MGF) and the joint eigenvalue distributions of the SINR with arbitrary-power interferers and thermal noise. We also analyze the effects of spatial correlation on MIMO OC systems with arbitrary-power interferers in an interference environment.
Our novel multiple antenna architectures and the theoretical framework according to which they can be analyzed would provide other researchers with useful tools to analyze and develop future MIMO systems.
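As background on the optimal combining (OC) technique analyzed above, here is a minimal numerical sketch of the standard result that OC weights w ∝ R⁻¹h maximize the output SINR, which equals hᴴR⁻¹h; maximal-ratio combining (MRC), which ignores the interferer's spatial signature, serves as the baseline. The single-interferer channel model and power values are illustrative assumptions, not the dissertation's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4  # receive antennas

# flat-fading channel vectors for the desired user and one interferer
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
P_i, N0 = 0.5, 0.1  # interferer power and noise power (illustrative)

# interference-plus-noise covariance matrix
R = P_i * np.outer(g, g.conj()) + N0 * np.eye(M)

# optimal combining: w = R^{-1} h, output SINR = h^H R^{-1} h
sinr_oc = np.real(h.conj() @ np.linalg.solve(R, h))

# maximal-ratio combining uses w = h, so interference is not suppressed
w_mrc = h
sinr_mrc = np.abs(w_mrc.conj() @ h) ** 2 / np.real(w_mrc.conj() @ R @ w_mrc)
```

Since OC maximizes SINR over all linear combiners, `sinr_oc` is never below `sinr_mrc`; the gap widens as the interferer power grows.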
133

A New Cooperative Strategy Using Parley Algorithm for Cooperative Communications.

Wu, Wei-Chia 19 July 2010 (has links)
This thesis proposes an alternative cooperation strategy for cooperative communications based on the parley algorithm. When the parley algorithm is employed, the relay nodes and the destination node must disseminate and agree on a common decision throughout the cooperation network via a consensus flooding procedure. This thesis proposes a heuristic approach for improving the performance of the parley algorithm: a power allocation method applied during each iteration of the consensus flooding protocol. Specifically, when distributing power to the nodes in the cooperative network, the allocation maximizes the capacity of the broadcast channel used for the consensus flooding procedure. Simulation results show that the proposed power allocation improves the bit error rate compared with the parley algorithm under uniform power allocation, confirming the usefulness of the proposed idea.
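The capacity-maximizing power distribution idea can be illustrated with classic water-filling across parallel channels: stronger links receive more power until a common "water level" is reached. This is a generic sketch under the assumption of independent links with known gains; the thesis's actual broadcast-channel criterion for the flooding procedure may differ.

```python
import numpy as np

def water_filling(gains, P_total):
    """Allocate P_total across parallel channels with power gains g_k
    to maximize sum_k log2(1 + g_k * p_k)  (classic water-filling)."""
    g = np.asarray(gains, dtype=float)
    order = np.argsort(-g)           # strongest channels first
    inv = 1.0 / g[order]             # inverse gains (the "sea floor")
    p = np.zeros_like(inv)
    for k in range(len(inv), 0, -1):
        mu = (P_total + inv[:k].sum()) / k   # water level over k channels
        alloc = mu - inv[:k]
        if alloc[-1] >= 0:                    # all k allocations nonnegative
            p[:k] = alloc
            break
    out = np.zeros_like(p)
    out[order] = p                   # undo the sort
    return out

# illustrative gains and power budget (assumptions)
p = water_filling([2.0, 1.0, 0.25], P_total=3.0)
```

Here the weakest channel falls below the water level and receives no power at all, while the remaining budget is split in favor of the strongest link.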
134

Durable Goods, Price Indexes, and Monetary Policy

Han, Kyoung Soo 15 May 2009 (has links)
The dissertation studies the relationship among durable goods, price indexes, and monetary policy in two sticky-price models with durable goods: a one-sector model with only durable goods and a two-sector model with durable and non-durable goods. In models with durable goods, the COLI (Cost of Living Index) is distinguished from the PPI (Producer Price Index), which is identical to the CPI (Consumer Price Index) measured by the acquisitions approach, and the COLI/PPI ratio plays an important role in monetary policy transmission. The welfare function based on household utility can be represented by a quadratic function of the quasi-differenced durables-stock gaps and the PPI inflation rates. In the one-sector model, the welfare-maximizing policy is to keep the (acquisition) price and the output gap constant, and this policy does not depend on the durability of consumption goods. In the two-sector model with sticky prices, the central bank has only one policy instrument, so it cannot offset distortions in both sectors. Simulation results show that the PPI is an adequate price index for monetary policy and that targeting a core inflation measure constructed by putting more weight on prices in the sector producing the more durable goods is near optimal.
135

A Weighted Residual Framework for Formulation and Analysis of Direct Transcription Methods for Optimal Control

Singh, Baljeet December 2010 (has links)
In the past three decades, numerous methods have been proposed to transcribe optimal control problems (OCPs) into nonlinear programming problems (NLPs). In this dissertation, a unifying weighted residual framework is developed under which most existing transcription methods can be derived by judiciously choosing test and trial functions. This greatly simplifies the derivation of optimality conditions and costate estimation results for direct transcription methods. Under the same framework, three new transcription methods are devised that are particularly suitable for implementation in an adaptive refinement setting: the method of Hilbert space projection, the least-squares method for optimal control, and the generalized moment method for optimal control. Their optimality conditions are derived, and it is shown that, under a set of equivalence conditions, costates can be estimated from the Lagrange multipliers of the associated NLP for all three methods. Numerical implementation of these methods is described using B-splines and global interpolating polynomials as approximating functions. It is shown that existing pseudospectral methods for optimal control can be formulated and analyzed under the proposed weighted residual framework, and the performance of the Legendre, Gauss, and Radau pseudospectral methods is compared with that of the methods proposed in this research. Based on a variational analysis of the first-order optimality conditions of the optimal control problem, an a posteriori error estimation procedure is developed. Using these error estimates, an h-adaptive scheme is outlined for implementing the least-squares method adaptively. A time-scaling technique is described to handle problems with discontinuous control or multiple phases. Several real-life examples are solved to show the efficacy of the h-adaptive and time-scaling algorithms.
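To make the idea of direct transcription concrete, the sketch below transcribes a minimum-energy double-integrator OCP (move from rest at x = 0 to rest at x = 1 in unit time, minimizing the integral of u²) into an NLP via trapezoidal collocation, one of the simplest members of the family unified by a weighted residual framework. The problem data are illustrative, and this is not one of the three new methods proposed in the dissertation; the continuous-time optimum for this problem has cost 12.

```python
import numpy as np
from scipy.optimize import minimize

N = 20                       # collocation intervals
T = 1.0
h = T / N

# decision vector z = [x_0..x_N, v_0..v_N, u_0..u_N]
def split(z):
    return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]

def cost(z):                 # control energy, trapezoidal quadrature
    _, _, u = split(z)
    return h * (u[0] ** 2 / 2 + (u[1:-1] ** 2).sum() + u[-1] ** 2 / 2)

def defects(z):              # trapezoidal collocation of x' = v, v' = u
    x, v, u = split(z)
    dx = x[1:] - x[:-1] - h * (v[:-1] + v[1:]) / 2
    dv = v[1:] - v[:-1] - h * (u[:-1] + u[1:]) / 2
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]   # rest-to-rest boundary conditions
    return np.concatenate([dx, dv, bc])

z0 = np.zeros(3 * (N + 1))
res = minimize(cost, z0, constraints={"type": "eq", "fun": defects},
               method="SLSQP", options={"maxiter": 500, "ftol": 1e-9})
x, v, u = split(res.x)
```

The NLP solver recovers a cost close to the analytic minimum of 12, with the collocation defects enforcing the dynamics at every interval.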
136

Design of Model Reference Adaptive Tracking Controllers for Mismatched Uncertain Dynamic Systems

Chang, Chao-Chin 17 July 2002 (has links)
Based on the Lyapunov stability theorem, an optimal model reference adaptive control (OMRAC) scheme with perturbation estimation is presented in this thesis to solve robust tracking problems. The plant considered belongs to a class of MIMO perturbed dynamic systems with input nonlinearity and time-varying delay in the state. The proposed control scheme contains three parts. The first is a linear feedback controller, which is optimal when there is no perturbation. The second is an adaptive controller, which adapts the unknown upper bound of the perturbation estimation error. The third is the perturbation estimation mechanism. Uniform ultimate boundedness is proved under the proposed control scheme, and the effect of each design parameter on the dynamic performance is analyzed. Two numerical examples are given to demonstrate the feasibility of the proposed methodology.
137

Tax policies, vintage capital, and exit and entry of plants

Chang, Shao-Jung 12 April 2006 (has links)
Following Chamley, Lucas, Laitner, and Aiyagari, this dissertation continues to explore the question of zero capital taxation by discussing how taxes on capital income, labor income, and property affect the economy in a vintage capital model where embodied technology grows exogenously. The government maximizes social welfare by finding the optimal combination of the three tax rates in the steady state, and welfare gains and losses are examined over and after the transitions caused by different types of shocks. The simulation method used is linear approximation. My results show that in the steady-state economy, given a fixed level of government expenditure and a zero property tax rate, the capital-income tax rate that maximizes steady-state utility may be negative, zero, or positive depending on the level of government expenditure. I also find that, for many values of government spending, the highest level of steady-state utility occurs with a subsidy to capital income and a tax on labor income. Finally, I find that when taxes on capital income, labor income, and property are all available, capital-income taxes are generally the last resort for financing government expenditures. In the transitional economy, when tax rates are permanently changed and government expenditure is near zero, the loss of utility over the transition from no taxes to capital subsidies is so large that the change is not utility-enhancing. Secondly, I find that when government expenditure is low and a positive technology shock occurs, social welfare in the economy without capital-income taxes may be higher in the early phase of the transition but lower in the later phase than in the economy without property taxes; the ranking reverses as government expenditure increases. In addition, when only one tax is allowed to change, a change in the labor-income tax may bring more utility over the transition than a change in either of the other two taxes. Finally, when government expenditure is unexpectedly reduced, I find that using property taxes rather than capital-income taxes stimulates consumption and employment more, given a higher initial level of government expenditure.
138

Study on the production process of the recombinant his-tag streptavidin

Huang, Chi-tien 14 February 2008 (has links)
In this study, we used E. coli strain BL21 (DE3) to express the recombinant protein his-tag streptavidin. To find the optimal production conditions, we studied the culture conditions, medium composition, induction conditions, and timing of induction. In the purification process, we compared a hydrophobic column with an affinity column, and we also tested the effect of heat treatment of the crude extract on the recombinant protein yield. The results showed that when cultured in LB medium, the optimal culture conditions for recombinant protein expression are 37°C and pH 6.0 to 7.0, with an induction temperature of 37°C. The best induction time is late log phase or early stationary phase, when the OD600 value has reached the range of 1.1 to 1.8. The inducer IPTG concentration is 0.1 mM, which can also be replaced with 2 mM lactose. The best production medium is TB medium. When cultured in a 5-liter fermentor under the optimal culture and induction conditions, the highest recombinant protein yield was 81.1 mg/L. To improve the purification process, we used affinity chromatography. The purified, highly homogeneous recombinant protein had a biotin-binding activity of up to 14 U/mg, and the recovery yield was as high as 97%, compared with only 51% for the hydrophobic column. When we treated the crude extract at 75°C for 10 min, the biotin-binding activity was 14.1 U/mg, but the recovery rate decreased to 64%.
139

Optimal Capital Structure and Industry Dynamic in Taiwan High-Technology Industries

Wu, Pei-hen 24 June 2008 (has links)
This paper studies the relation between optimal capital structure and industry dynamics. First, we formulate a dynamic adjustment model and specify and estimate the unobservable optimal capital structure using observable determinants. Secondly, we apply a dynamic factor demand model that assumes each firm derives an optimal plan such that the expected present value of current and future cost streams is minimized. In the variable setting, capital inputs are divided into debt capital and equity capital. The empirical work is based on firm-level data for Taiwan high-technology industries during 2003 to 2007. The empirical results show that (1) the capital structure of high-technology firms adjusts dynamically, and (2) the contribution of debt in high-technology industries is negative.
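A dynamic (partial) adjustment model of the kind described is often written as lev_t = lev_{t-1} + λ(target − lev_{t-1}) + ε_t, where λ is the speed of adjustment toward the optimal capital structure. The sketch below simulates one such series and recovers λ by least squares; the single-series setup and all parameter values are assumptions for illustration, not the paper's firm-level specification.

```python
import numpy as np

rng = np.random.default_rng(42)
target, lam_true, n = 0.4, 0.35, 2000   # optimal leverage, adjustment speed

# simulate a leverage series under partial adjustment toward the target
lev = np.empty(n)
lev[0] = 0.1
for t in range(1, n):
    lev[t] = lev[t - 1] + lam_true * (target - lev[t - 1]) \
             + 0.01 * rng.standard_normal()

# regress the leverage change on the gap to the target:
# (lev_t - lev_{t-1}) = lambda * (target - lev_{t-1}) + noise
y = lev[1:] - lev[:-1]
x = target - lev[:-1]
lam_hat = (x @ y) / (x @ x)             # OLS slope = estimated adjustment speed
```

With enough observations the OLS slope recovers the true adjustment speed; an estimate near 1 would mean immediate adjustment, near 0 a highly sticky capital structure.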
140

Algorithms for Near-optimal Alignment Problems on Biosequences

Tseng, Kuo-Tsung 26 August 2008 (has links)
With the improvement of biological techniques, the amount of biosequence data, such as DNA, RNA, and protein sequences, is growing explosively. It is almost impossible to handle such a huge amount of data purely by manpower, so great computing power is essential. There are several ways to treat biosequence data: finding identical biosequences, searching for similar biosequences, or mining the signatures of biosequences. All of these rest on the same problem, biosequence alignment. In this dissertation, we study biosequence alignment problems with the aim of raising the biological meaning of optimal and near-optimal alignments, since biologists and computer scientists sometimes dispute the biological meaning of the mathematically optimal alignment obtained under a given scoring function. We first study methods for improving the optimal alignment of two given biosequences. Since the optimal alignment is usually not unique, there should be a best one among the optimal alignments, and we try to extract it by defining additional criteria to judge the goodness of alignments when the traditional methods cannot decide which is better. Two algorithms are proposed for the newly defined problems: the smoothest optimal alignment problem and the most conserved optimal alignment problem. Other criteria are also discussed, since most of them can be solved in a similar way. We then note that the most biologically meaningful alignment may not be the optimal one, since there is no perfect scoring matrix. We look for candidates among the near-optimal alignments and present a tracing marking function to obtain all near-optimal alignments, filtered by the "most conserved" criterion; we name this the near-optimal block alignment (NBA) problem.
Finally, since existing scoring matrices are far from perfect, we try to determine how to choose the winner when multiple scoring matrices are applied, and we define some reasonable schemes for deciding the winner alignment. In this dissertation, we solve and discuss algorithms for near-optimal alignment problems on biosequences. In the future, we would like to conduct experiments to support or reject these concepts.
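The non-uniqueness of optimal alignments that motivates this work can be made concrete with a standard Needleman-Wunsch dynamic program extended to count co-optimal alignments: when the count exceeds one, secondary criteria such as "smoothest" or "most conserved" are needed to pick among them. The scoring values below are illustrative assumptions, not the dissertation's scoring functions.

```python
def count_optimal_alignments(a, b, match=1, mismatch=-1, gap=-2):
    """Global (Needleman-Wunsch) DP returning the optimal score and the
    number of distinct optimal alignments of strings a and b."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]   # optimal scores
    C = [[0] * (m + 1) for _ in range(n + 1)]   # counts of optimal paths
    for i in range(n + 1):
        S[i][0], C[i][0] = i * gap, 1
    for j in range(m + 1):
        S[0][j], C[0][j] = j * gap, 1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up, left = S[i - 1][j] + gap, S[i][j - 1] + gap
            best = max(diag, up, left)
            S[i][j] = best
            # sum the path counts over every predecessor achieving the best score
            C[i][j] = ((diag == best) * C[i - 1][j - 1]
                       + (up == best) * C[i - 1][j]
                       + (left == best) * C[i][j - 1])
    return S[n][m], C[n][m]

score, n_opt = count_optimal_alignments("GATTACA", "GCATGCU")
```

For identical strings the optimum is unique, but as soon as a gap can be placed on either sequence at equal cost, several alignments tie for the optimal score, which is exactly the situation the smoothest and most conserved criteria are designed to resolve.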
