41

Economic Analysis of Northwest Pacific Saury

Ho, Tzung-ying 20 July 2009 (has links)
This research uses catch statistics for Northwest Pacific saury from the Overseas Fisheries Development Council of the Republic of China for 2004 to 2007 to conduct a resource assessment of Pacific saury. First, the equilibrium levels of an open-access fishery and a net-present-value-maximizing fishery are calculated and compared. Next, catch statistics from the Food and Agriculture Organization of the United Nations for 2005 to 2007 are used in a simulation analysis of the saury resource, and the simulated results are compared with the equilibrium values for the two fishing grounds. The results show that the resource is not at risk of depletion. Finally, a sensitivity analysis varies the parameter values within reasonable ranges to understand how such changes affect the stock and fishing effort. The collated results are expected to serve as a reference for the management of the Northwest Pacific saury fishery.
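For orientation, the following is the textbook Gordon–Schaefer bioeconomic model on which open-access versus present-value-maximizing comparisons of this kind are usually built (standard notation, assumed here rather than taken from the thesis): logistic stock growth with catch proportional to effort,

```latex
\[
\dot{x} = r x\Bigl(1 - \frac{x}{K}\Bigr) - qEx, \qquad \pi(E) = pqEx - cE .
\]
```

Under open access, rent is dissipated ($\pi = 0$), giving the equilibrium stock $x_{OA} = c/(pq)$; the statically rent-maximizing stock is $x^{*} = (K + x_{OA})/2$, midway between the open-access stock and the carrying capacity, and discounting shifts $x^{*}$ back toward $x_{OA}$.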
42

Mining complex databases using the EM algorithm

Ordoñez, Carlos January 2000 (has links)
No description available.
43

Market with transaction costs: optimal shadow state-price densities and exponential utility maximization

Nakatsu, Hitoshi Unknown Date
No description available.
44

Utility maximization in incomplete markets with random endowment

Cvitanic, Jaksa, Schachermayer, Walter, Wang, Hui January 2000 (has links) (PDF)
This paper solves a long-standing open problem in mathematical finance: to find a solution to the problem of maximizing utility from terminal wealth of an agent with a random endowment process, in the general, semimartingale model for incomplete markets, and to characterize it via the associated dual problem. We show that this is indeed possible if the dual problem and its domain are carefully defined. More precisely, we show that the optimal terminal wealth is equal to the inverse of marginal utility evaluated at the solution to the dual problem, which is in the form of the regular part of an element of (L∞)* (the dual space of L∞). (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
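Schematically, in the standard notation of this duality literature (our notation, not the paper's; $I$ denotes the inverse of marginal utility and $\hat Y$ the dual optimizer), the characterization described above reads:

```latex
\[
V(y) := \sup_{x}\bigl(U(x) - xy\bigr), \qquad
\hat{X}_T = I\bigl(\hat{y}\,\hat{Y}_T^{\mathrm{r}}\bigr), \quad I := (U')^{-1},
\]
```

where $\hat{Y}^{\mathrm{r}}$ is the regular part of the dual optimizer in $(L^\infty)^*$ and $\hat{y} > 0$ is chosen to match the primal budget constraint.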
45

How potential investments may change the optimal portfolio for the exponential utility

Schachermayer, Walter January 2002 (has links) (PDF)
We show that, for a utility function U: R → R having reasonable asymptotic elasticity, the optimal investment process H · S is a supermartingale under each equivalent martingale measure Q such that E[V(dQ/dP)] < ∞, where V is conjugate to U. Similar results for the special case of the exponential utility were recently obtained by Delbaen, Grandits, Rheinländer, Samperi, Schweizer, Stricker as well as Kabanov, Stricker. This result gives rise to a rather delicate analysis of the "good definition" of "allowed" trading strategies H for the financial market S. One offspring of these considerations leads to the subsequent - at first glance paradoxical - example. There is a financial market consisting of a deterministic bond and two risky financial assets (S_t^1, S_t^2)_{0 ≤ t ≤ T} such that, for an agent whose preferences are modeled by expected exponential utility at time T, it is optimal to constantly hold one unit of asset S^1. However, if we pass to the market consisting only of the bond and the first risky asset S^1, leaving the information structure unchanged, this trading strategy is no longer optimal: in this smaller market it is optimal to invest the initial endowment into the bond. (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
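For orientation, the conjugate of the exponential utility can be computed explicitly (a standard calculation, not quoted from the paper): with $U(x) = -e^{-\gamma x}$,

```latex
\[
V(y) = \sup_{x \in \mathbb{R}}\bigl(U(x) - xy\bigr)
     = \frac{y}{\gamma}\Bigl(\log\frac{y}{\gamma} - 1\Bigr), \qquad y > 0,
\]
```

so $E[V(dQ/dP)] < \infty$ holds precisely when Q has finite relative entropy with respect to P, i.e. $E[(dQ/dP)\log(dQ/dP)] < \infty$.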
46

EMPIRICAL ANALYSIS OF THE RELATIONSHIP BETWEEN THE TAX BASE AND GOVERNMENT SPENDING: EVIDENCE FROM STATE PANEL DATA, 1977-1992

Boardman, Barry Wayne 01 January 2002 (has links)
Essentially, there are two competing propositions on tax base choices. Optimal tax theory asserts that the broader the tax base, the better the tax. On the other hand, some public choice proponents have argued that, at the constitutional level, we should choose to restrict the power to tax and thus limit the available base. These theories assert fundamentally different views of the state and its citizens. Within the traditional optimal tax framework, governments maximize residents' utility, and broadening the tax base lowers the tax rate, so the response is revenue-neutral. When, however, governments do not maximize residents' utility, increases in the tax base can affect government revenues and spending. To determine whether tax bases influence government spending, data on forty-eight states were compiled for the years 1977 through 1992, and a system of equations for state finances was developed. Using three-stage least squares estimation in a fixed-effects econometric model, the relationship between the broadness of a tax base and state government spending was estimated, with the state sales tax base as the base under study. The estimation found that states with broader sales tax bases had higher spending, all else equal. This result suggests that governments do not act as if they maximize residents' utility when making tax base and rate decisions; otherwise, base broadness would have no impact on spending. An additional result of the empirical analysis is that tax bases and rates are inversely related, but the relationship does not lead to revenue-neutral adjustments.
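As a sketch of the estimation machinery, here is a generic three-stage least squares routine in numpy; it is not the dissertation's code, the variable names are hypothetical, and state and year fixed effects are assumed to have been removed by within-demeaning the data before the call:

```python
import numpy as np

def three_sls(ys, Xs, Z):
    """Minimal 3SLS for a system of equations sharing one instrument matrix Z.

    ys: list of (n,) dependent variables (e.g. spending, tax base)
    Xs: list of (n, k_i) regressor matrices, possibly with endogenous columns
    Z:  (n, m) matrix of instruments, m >= max k_i
    """
    n, G = len(ys[0]), len(ys)
    P = Z @ np.linalg.solve(Z.T @ Z, Z.T)           # projection onto instrument space
    # Stages 1-2: equation-by-equation 2SLS
    b = [np.linalg.solve(X.T @ P @ X, X.T @ P @ y) for X, y in zip(Xs, ys)]
    U = np.column_stack([y - X @ bi for y, X, bi in zip(ys, Xs, b)])
    S_inv = np.linalg.inv(U.T @ U / n)              # inverse cross-equation error covariance
    # Stage 3: GLS on the stacked system
    A = np.block([[S_inv[i, j] * Xs[i].T @ P @ Xs[j] for j in range(G)]
                  for i in range(G)])
    c = np.concatenate([sum(S_inv[i, j] * Xs[i].T @ P @ ys[j] for j in range(G))
                        for i in range(G)])
    return np.linalg.solve(A, c)                    # stacked 3SLS coefficients
```

The third stage exploits the correlation of errors across the spending and tax-base equations, which is what distinguishes 3SLS from running 2SLS equation by equation.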
47

Network Traffic Control Based on Modern Control Techniques: Fuzzy Logic and Network Utility Maximization

Liu, Jungang 30 April 2014 (has links)
This thesis presents two modern control methods that address Internet traffic congestion. Both fit a distributed traffic management framework for fast-growing Internet traffic in which routers are deployed with intelligent or optimal data rate controllers to manage the traffic load. The first is the IntelRate (Intelligent Rate) controller, based on fuzzy logic theory. Unlike other explicit traffic control protocols that must estimate network parameters (e.g., link latency, bottleneck bandwidth, packet loss rate, or the number of flows), our fuzzy-logic-based explicit controller measures the router queue size directly. It therefore avoids the potential performance problems of parameter estimation while substantially reducing computation and memory consumption in the routers. Communication QoS (Quality of Service) is assured by the scheme's good performance, including max-min fairness, low queueing delay and robustness to network dynamics. Using Lyapunov's direct method, this controller is proved to be globally asymptotically stable. The second is the OFEX (Optimal and Fully EXplicit) controller, based on convex optimization. This new scheme provides not only optimal bandwidth allocation but also a fully explicit congestion signal to sources. It uses the congestion signal from the most congested link, instead of the cumulative signal along a flow's path. In this way it overcomes the drawback of relatively explicit controllers that bias against multi-bottlenecked users, and it significantly improves their convergence speed and throughput. Furthermore, the OFEX controller design considers a dynamic model, proposing a remedial measure against unpredictable bandwidth changes in contention-based multi-access networks (such as shared Ethernet or IEEE 802.11). Compared with earlier controllers, this remedy also effectively reduces the instantaneous queue size in a router, significantly improving queueing delay and packet loss performance. Finally, the application of both controllers to wireless local area networks is investigated, and design guidelines and limits are provided based on our experience.
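For readers new to Network Utility Maximization, the following is a minimal dual-decomposition sketch in Python of the classic rate-allocation problem (maximize the sum of log utilities subject to link capacities); it illustrates the NUM framework only, not the IntelRate or OFEX controllers, and the routing matrix and capacities are toy values:

```python
import numpy as np

R = np.array([[1, 1, 0],      # routing matrix: rows are links, columns are flows
              [0, 1, 1]])
c = np.array([1.0, 2.0])      # link capacities
lam = np.ones(2)              # link prices (dual variables)
step = 0.05

for _ in range(2000):
    q = R.T @ lam             # path price seen by each flow
    x = 1.0 / q               # log utility: source rate solves U'(x) = q, i.e. x = 1/q
    lam = np.maximum(lam + step * (R @ x - c), 1e-9)   # dual gradient price update

print(np.round(x, 3))         # near-optimal rates
```

Each link raises its price while demand exceeds its capacity, and each source sets its rate where marginal utility equals its path price; for log utility this gives x = 1/q.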
48

Oblivious and Non-oblivious Local Search for Combinatorial Optimization

Ward, Justin 07 January 2013 (has links)
Standard local search algorithms for combinatorial optimization problems repeatedly apply small changes to a current solution to improve the problem's given objective function. In contrast, non-oblivious local search algorithms are guided by an auxiliary potential function, which is distinct from the problem's objective. In this thesis, we compare the standard and non-oblivious approaches for a variety of problems, and derive new, improved non-oblivious local search algorithms for several problems in the area of constrained linear and monotone submodular maximization. First, we give a new, randomized approximation algorithm for maximizing a monotone submodular function subject to a matroid constraint. Our algorithm's approximation ratio matches both the known hardness-of-approximation bounds for the problem and the performance of the recent "continuous greedy" algorithm. Unlike the continuous greedy algorithm, our algorithm is straightforward and combinatorial. In the case that the monotone submodular function is a coverage function, we can obtain a further simplified, deterministic algorithm with improved running time. Moving beyond the case of single matroid constraints, we then consider general classes of set systems that capture problems that can be approximated well. While previous such classes have focused primarily on greedy algorithms, we give a new class that captures problems amenable to optimization by local search algorithms. We show that several combinatorial optimization problems can be placed in this class, and give a non-oblivious local search algorithm that delivers improved approximations for a variety of specific problems. In contrast, we show that standard local search algorithms give no improvement over known approximation results for these problems, even when allowed to search larger neighborhoods than their non-oblivious counterparts. Finally, we expand on these results by considering standard local search algorithms for constraint satisfaction problems. We develop conditions under which the approximation ratio of standard local search remains limited even for super-polynomial or exponential local neighborhoods. In the special case of MaxCut, we further show that a variety of techniques including random or greedy initialization, large neighborhoods, and best-improvement pivot rules cannot improve the approximation performance of standard local search.
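As a point of reference for the "standard" algorithm the thesis contrasts against, here is a minimal oblivious local-search MaxCut routine in Python (a textbook 1/2-approximation; illustrative only, not code from the thesis):

```python
import random

def local_search_maxcut(n, edges, seed=0):
    """Oblivious local search for MaxCut: flip any vertex whose move
    increases the cut; any local optimum is a 1/2-approximation."""
    random.seed(seed)
    side = [random.randint(0, 1) for _ in range(n)]

    def gain(v):
        g = 0
        for a, b in edges:
            if v in (a, b):
                u = b if a == v else a
                g += 1 if side[u] == side[v] else -1  # uncut edge becomes cut, or vice versa
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] ^= 1          # oblivious pivot: guided by the objective itself
                improved = True
    return side

print(local_search_maxcut(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```

A non-oblivious variant would replace `gain` with an auxiliary potential function distinct from the raw cut value, which is exactly the design space the thesis studies.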
49

Market with transaction costs: optimal shadow state-price densities and exponential utility maximization

Nakatsu, Hitoshi 11 1900 (has links)
This thesis discusses the financial market model with proportional transaction costs considered in Cvitanic and Karatzas (1996) (hereafter CK (1996)). For a modified dual problem introduced by Choulli (2009), I discuss solutions under weaker conditions than those of CK (1996); the solutions obtained generalize the examples treated in CK (1996). I then consider the exponential utility, which does not belong to the family of utilities considered by CK (1996) because it fails the Inada condition. Finally, I establish the analogues of the CK (1996) results for the exponential utility and derive further results that exploit the specific structure of the exponential utility function. These lead to a different method/approach than that of CK (1996) for the utility maximization problem, as well as a different notion of admissibility for financial strategies. / Mathematical Finance
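To spell out the Inada point (a standard fact, stated here for context): the exponential utility

```latex
\[
U(x) = -e^{-\gamma x} \ (\gamma > 0), \qquad U'(x) = \gamma e^{-\gamma x},
\]
```

is finite on all of $\mathbb{R}$ with $U'(0) = \gamma < \infty$, so it violates the Inada condition $U'(0+) = \infty$ satisfied by the utilities on $(0, \infty)$ treated in CK (1996).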
50

GMMEDA: A demonstration of probabilistic modeling in continuous metaheuristic optimization using mixture models

Naveen Kumar Unknown Date (has links)
Optimization problems are common throughout science, engineering and commerce. The desire to continually improve solutions and to solve larger, more complex problems has kept this field prominent for several decades and has led to optimization algorithms for many different classes of problems. Estimation of Distribution Algorithms (EDAs) are a relatively recent class of metaheuristic optimization algorithms that use probabilistic modeling techniques to control the search process. Within the general EDA framework, a number of different probabilistic models have been proposed for both discrete and continuous optimization problems. This thesis focuses on GMMEDAs: continuous EDAs based on Gaussian Mixture Models (GMMs) with parameter estimation performed using the Expectation Maximization (EM) algorithm. To date, this type of model has received only limited attention in the literature: there are few experimental studies of the algorithms, and a number of implementation details of continuous iterated density estimation algorithms based on Gaussian mixture models have not previously been documented. This thesis provides a clear description of the GMMEDAs, discusses the implementation decisions and details, and presents an experimental study evaluating the performance of the algorithms. The effectiveness of GMMEDAs of varying model complexity (structure of the covariance matrices and number of components) was tested on five benchmark functions (Sphere, Rastrigin, Griewank, Ackley and Rosenbrock) of varying dimensionality (2-, 10- and 30-D). The effect of the selection pressure parameter was also studied. The 2-D results show that a variant of moderate complexity (Diagonal GMMEDA) was able to optimize both unimodal and multimodal functions. Analysis of the 10- and 30-D results indicates that the simplest variant (Spherical GMMEDA) was the most effective of the three, although greater consistency was achieved with the most complex variant (Full GMMEDA). Comparison of the results for four test functions (Sphere, Griewank, Ackley and Rosenbrock) showed that the GMMEDA variants optimized most of the complex functions better than existing continuous EDAs, because the GMM components were able to model the functions effectively. The analysis also showed that the number of components and the selection pressure affect the optimum value found. Convergence of the GMMEDA variants to a function's best local optimum was driven mainly by the complexity of the GMM: complexity grows both with the number of components and with the structure of the covariance matrices, but when optimizing complex functions the complexity due to the covariance structure dominates the complexity due to the number of components. Additionally, for most functions the effect of the number of components on convergence diminishes as the selection pressure increases; these effects appear in the results as differences in the stability of the outcomes.
Other factors that affect convergence to local optima are the initialization of the GMM parameters, the number of EM iterations, and the reset condition. Although not directly visible in the 10-D optimization, different initializations of the GMM parameters in 2-D changed the optimum values found. Since initialization of the population in evolutionary algorithms is known to affect convergence to the global optimum, the similar effect of GMM parameter initialization on the 2-D functions suggests that convergence in 10-D, and hence the optimum values found, could be affected as well. The estimated covariance and mean values over the EM iterations in 2-D indicated that some functions required more EM iterations to reach their optimum value; too few EM iterations could therefore impair the fit of the components to the selected population in 10-D, and a poor fit can prevent effective modeling of functions of varying complexity. Finally, the reset condition was observed to reset the covariance and the best individual fitness value in each generation in 2-D, which is certain to affect convergence of the GMMEDA variants to a function's best local optimum: frequent resets return the component covariances to their initial values and so degrade the fit of the model to the selected population. Taking all of these effects into account, the results indicate that a simple Spherical GMMEDA with a small number of components and a small selected fraction of the population modeled most functions of varying complexity best.
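A minimal GMM-EDA sketch in Python, using scikit-learn's EM-based GaussianMixture (whose covariance_type options 'spherical', 'diag' and 'full' mirror the three variants studied); this is an illustrative reconstruction, not the thesis implementation, and the population size, selection fraction and component count are arbitrary:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_eda(f, dim, pop=200, tau=0.3, comps=3, cov='diag', gens=100, seed=0):
    """Fit a GMM to the best tau fraction of the population, then resample."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(pop, dim))            # initial population
    for _ in range(gens):
        order = np.argsort([f(x) for x in X])          # minimization
        elite = X[order[: int(tau * pop)]]             # truncation selection
        gmm = GaussianMixture(n_components=comps, covariance_type=cov,
                              reg_covar=1e-6).fit(elite)   # EM parameter estimation
        X, _ = gmm.sample(pop)                         # next generation
    best = min(X, key=f)                               # for brevity: best of final generation
    return best, f(best)

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = gmm_eda(sphere, dim=10)
print(f_best)
```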
