1 |
Inexact subgradient methods. Au, Kelly Thurston. January 1992
In solving a mathematical program, the exact evaluation of the objective function and its subgradients can be computationally burdensome. For example, in a stochastic program, the objective function is typically defined through a multi-dimensional integration. Solution procedures for stochastic programs are usually based on functional approximation techniques or statistical applications of subgradient methods. In this dissertation, we explore algorithms that combine functional approximation techniques with subgradient optimization methods. This class of algorithms is referred to as "inexact subgradient methods". First, we develop a basic inexact subgradient method and identify conditions under which this approach leads to an optimal solution. We also offer an inexact subgradient algorithm that adaptively defines the steplengths via estimated bounds on the deviation from optimality. Second, we explore approaches in which functional approximation techniques can be combined with a primal-dual subgradient method. In these algorithms, the steplengths are defined via the primal and dual information, so progress toward optimality is reflected in the steplengths as the iterations proceed. We also incorporate space dilation operations, which stabilize the moving directions, within our basic inexact subgradient method. As an example of the applicability of these methods, we use statistically defined approximations, similar to those derived in Stochastic Decomposition, in some of our algorithms for the solution of stochastic programs.
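The first part of the abstract rests on a simple template: take projected subgradient steps even when the function value and the subgradient are only approximations. The following Python fragment is a minimal sketch of that template under assumed interfaces (the names inexact_subgradient_method, oracle, and project are illustrative and not taken from the dissertation); it is not the specific algorithm developed in the thesis.

```python
import numpy as np

def inexact_subgradient_method(oracle, project, x0, steps):
    """Projected subgradient descent driven by an inexact oracle.

    oracle(x)  -> (f_approx, g_approx): approximate value and epsilon-subgradient
    project(x) -> projection of x onto the feasible set
    steps      -> iterable of steplengths, e.g. a diminishing sequence
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), np.inf
    for t_k in steps:
        f_k, g_k = oracle(x)
        if f_k < best_f:                      # keep the incumbent; iterates need not be monotone
            best_x, best_f = x.copy(), f_k
        x = project(x - t_k * g_k)            # move along the negative (inexact) subgradient
    return best_x, best_f

# Example: minimize E|a*x - b| over the box [-1, 1], with the expectation
# estimated from a small Monte Carlo sample (hence an inexact subgradient).
rng = np.random.default_rng(0)
def oracle(x, n=50):
    a, b = rng.normal(1.0, 0.1, n), rng.normal(0.3, 0.1, n)
    r = a * x[0] - b
    return np.mean(np.abs(r)), np.array([np.mean(np.sign(r) * a)])

project = lambda x: np.clip(x, -1.0, 1.0)
x_star, f_star = inexact_subgradient_method(oracle, project, x0=[0.0],
                                            steps=(1.0 / (k + 1) for k in range(200)))
```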
2 |
Analyzing Multi-Objective Linear and Mixed Integer Programs by Lagrange Multipliers. Ramakrishnan, V. S.; Shapiro, Jeremy F., 1939-. 08 1900
A new method for multi-objective optimization of linear and mixed integer programs, based on Lagrange multiplier methods, is developed. The method resembles, but is distinct from, objective function weighting and goal programming methods. A subgradient optimization algorithm for selecting the multipliers is presented and analyzed. The method is illustrated by applying it to a model for determining the weekly redistribution of railroad cars from excess-supply areas to excess-demand areas, and to a dynamic lot size model that balances cost minimization against order completion requirements.
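For readers unfamiliar with how subgradients arise in multiplier selection, the following relations sketch the standard Lagrangian-dual mechanism for a linear program min{c^T x : Ax >= b, x in X}; this is textbook material, not the specific weighting scheme of the paper, and the symbols are generic.

$$
L(u) \;=\; \min_{x \in X} \bigl\{\, c^{\top}x + u^{\top}(b - Ax) \,\bigr\}, \qquad u \ge 0,
$$
$$
b - A\,x(u) \in \partial L(u), \qquad u^{k+1} \;=\; \bigl[\, u^{k} + t_k \bigl(b - A\,x(u^{k})\bigr) \bigr]_{+},
$$

where x(u) is any minimizer of the inner problem, b - A x(u) is a subgradient of the concave dual function L (in the sign convention common in this literature), and t_k > 0 is a steplength.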
3 |
Comparison of Routing and Network Coding in Group Communications. Xu, Yangyang. 24 March 2009
In traditional communication networks, information is delivered as a sequence of packets from source to destination by routing through intermediate nodes, which only store and forward those packets. Recent research shows that routing alone is not sufficient to achieve the maximum information transmission rate across a communication network [1]. Network coding is an actively researched topic in information theory that allows nodes to generate output data by encoding their received data. Thus, nodes may mix the input packets together and send them out as fewer packets. The potential throughput benefit was the initial motivation for research in network coding.
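As a concrete illustration of what it means for a node to "mix" packets, the classic butterfly example can be reproduced in a few lines: the bottleneck node forwards the bitwise XOR of two packets, and each receiver decodes the packet it is missing. This short Python sketch is added for illustration only and is not taken from the thesis.

```python
# Butterfly-network intuition: two packets a and b must both reach two
# receivers, but the middle link can carry only one packet per time slot.
# Network coding sends a XOR b over that bottleneck; each receiver already
# holds one original packet from a side link and recovers the other by XOR.
a = b"\x01\x02\x03\x04"                          # packet known to receiver 1
b = b"\x0a\x0b\x0c\x0d"                          # packet known to receiver 2
coded = bytes(x ^ y for x, y in zip(a, b))       # sent over the bottleneck link

recovered_b = bytes(x ^ y for x, y in zip(a, coded))   # decoded at receiver 1
recovered_a = bytes(x ^ y for x, y in zip(b, coded))   # decoded at receiver 2
assert recovered_a == a and recovered_b == b
```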
Group communications refers to many-to-many communication sessions in which multiple sources multicast independent data to the same group of receivers. Researchers commonly reduce group communications to a single-source problem by adding a super source connected to all the sources through unbounded-capacity links. However, this construction cannot control fairness between the different sources, and it may be incorrect in some scenarios. In this research, we present an example that illustrates this and analyze the reason for it.
The maximum multicast throughput problem using routing only is NP-complete. Wu et al. introduced a greedy tree-packing algorithm based on Prim's algorithm as an alternative sub-optimal solution [2]. This algorithm is modified in this work for the group communications problem with routing in undirected networks. The throughput benefit of network coding has been shown in directed networks. However, in undirected networks, researchers have only investigated the multiple unicast sessions problem and the single multicast session problem; in most cases, network coding does not seem to yield any throughput benefit [3] [4]. Li et al. introduced a c-flow algorithm using linear programming to find the maximum throughput of one multicast session using network coding [3]. We adapted this algorithm for group communications with network coding in undirected networks to overcome the disadvantage of the traditional method. Both algorithms were simulated using MATLAB and their results were compared. Further, it is demonstrated that network coding does not have a constant throughput benefit in undirected networks.
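The benchmark both approaches are usually measured against is the cut bound: for a single multicast session, the rate achievable with network coding in a directed network equals the smallest source-to-receiver max-flow, and the same quantity is an upper bound in undirected networks. A quick way to compute it is sketched below in Python with networkx; this is an illustrative sketch, not the MATLAB code used in the thesis, and the function name multicast_cut_bound is assumed.

```python
import networkx as nx

def multicast_cut_bound(G, source, receivers, capacity="capacity"):
    """Smallest max-flow value from `source` to any receiver.

    For one multicast session this equals the network-coding rate in directed
    graphs (Ahlswede et al.) and upper-bounds the achievable rate in
    undirected graphs.
    """
    return min(nx.maximum_flow_value(G, source, t, capacity=capacity)
               for t in receivers)

# Toy example: the directed butterfly network with unit capacities.
edges = [("s", "u"), ("s", "v"), ("u", "m"), ("v", "m"), ("m", "w"),
         ("u", "t1"), ("v", "t2"), ("w", "t1"), ("w", "t2")]
G = nx.DiGraph()
G.add_edges_from(edges, capacity=1)
print(multicast_cut_bound(G, "s", ["t1", "t2"]))   # -> 2
```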
4 |
Computational Study of Mean-Risk Stochastic Programs. Cotton, Tanisha Green. 03 October 2013
Mean-risk stochastic programs model uncertainty by including risk measures in the objective function. This allows risk aversion to be modeled for many problems in science and engineering. This dissertation addresses gaps in the literature on stochastic programs with mean-risk objectives, including the need for a computational study of the few available algorithms for this class of problems. The study implements and empirically investigates decomposition algorithms for stochastic linear programs with absolute semideviation (ASD) and quantile deviation (QDEV) as mean-risk measures. Specifically, the goals of the study were to analyze, for specific instances, how the algorithms perform across different levels of risk, to investigate the effect of using ASD and QDEV as risk measures, and to understand when it is appropriate to use the risk-averse approach over the risk-neutral one.
We derive two new subgradient-based algorithms for the ASD and QDEV models, respectively. These algorithms are based on decomposing the stochastic program stage-wise and using a single (aggregated) cut in the master program to approximate the mean and deviation terms of the mean-risk objective function. We also consider a variant of each algorithm from the literature in which the mean-risk objective function is approximated by separate optimality cuts, one for the mean and one for the deviation term. These algorithms are implemented and applied to standard stochastic programming test instances to study their comparative performance. The aggregated-cut and separate-cut algorithms have comparable computational performance for ASD, while the separate-cut algorithm outperforms its aggregated counterpart for QDEV. The computational study also reveals several insights on mean-risk stochastic linear programs. For example, the results show that for most standard test instances the risk-neutral approach is still appropriate; we show that this is the case because the test instances have random variables with uniform marginal distributions. In contrast, when these distributions are changed to be non-uniform, the risk-averse approach is preferred. The results also show that the QDEV mean-risk measure offers broader flexibility than ASD in modeling risk.
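For concreteness, the mean-risk objectives referred to above have the general form shown below; the ASD definition is the standard one, while the QDEV expression is given in one common parameterization whose constants may differ from those used in the dissertation (lambda, alpha, and the random cost Y = f(x, omega) are generic symbols).

$$
\min_{x \in X}\; \mathbb{E}\bigl[f(x,\tilde{\omega})\bigr] \;+\; \lambda\, \mathbb{D}\bigl[f(x,\tilde{\omega})\bigr], \qquad \lambda \ge 0,
$$
$$
\mathbb{D}_{\mathrm{ASD}}[Y] \;=\; \mathbb{E}\bigl[(\,Y - \mathbb{E}[Y]\,)_{+}\bigr], \qquad
\mathbb{D}_{\mathrm{QDEV}}[Y] \;=\; \mathbb{E}\Bigl[\max\bigl\{(1-\alpha)\,(\eta_{\alpha} - Y),\; \alpha\,(Y - \eta_{\alpha})\bigr\}\Bigr],
$$

where $\eta_{\alpha}$ denotes an $\alpha$-quantile of $Y$ and, for ASD, the trade-off weight is usually restricted to $\lambda \in [0,1]$.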
5 |
Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems. Madabushi, Ananth R. 17 February 1997
This research effort focuses on large-scale linear programming problems that arise in the context of solving various problems, such as discrete linear or polynomial programming problems and continuous nonlinear, nonconvex programming problems, using linearization and branch-and-cut algorithms for the discrete case and polyhedral outer-approximation methods for the continuous case. These problems arise in various applications in production planning, location-allocation, game theory, economics, and many engineering and systems design problems. During the solution process of discrete or continuous nonconvex problems using polyhedral approaches, one has to contend with repeatedly solving large-scale linear programming (LP) relaxations, so it becomes imperative to employ an efficient method for solving these problems. It has been amply demonstrated that solving LP relaxations using a simplex-based algorithm, or even an interior-point procedure, can be unacceptably slow (especially in the presence of complicating constraints, dense coefficient matrices, and ill-conditioning) in comparison with a Lagrangian relaxation approach. With this motivation, we present a practical primal-dual subgradient algorithm that incorporates a dual ascent, a primal recovery, and a penalty function approach to recover a near-optimal and feasible pair of primal and dual solutions.
The proposed primal-dual approach comprises three stages. Stage I deals with solving the Lagrangian dual problem by using various subgradient deflection strategies such as the Modified Gradient Technique (MGT), the Average Direction Strategy (ADS), and a new direction strategy called the Modified Average Direction Strategy (M-ADS). In the latter, the deflection parameter is determined by projecting the unknown optimal direction onto the space spanned by the current subgradient direction and the previous direction; this projected direction approximates the desired optimal direction as closely as possible using the conjugate subgradient concept. The step-length rules implemented in this regard are the Quadratic Fit Line Search Method and a new line search method called the Directional Derivative Line Search Method, in which we start with a prescribed step-length and then ascertain whether to increase or decrease it based on the right-hand and left-hand derivative information available at each iteration. In the second stage of the algorithm (Stage II), a sequence of updated primal solutions is generated using convex combinations of the Lagrangian subproblem solutions. Alternatively, a starting primal optimal solution can be obtained using the complementary slackness conditions. Depending on the extent of feasibility and optimality attained, Stage III applies a penalty function method to improve the obtained primal solution toward a near-feasible and near-optimal solution.
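The deflection strategies named above share one template: the search direction blends the current subgradient with the previous direction. The sketch below renders that template in Python using the forms of ADS and MGT as they are commonly stated in the literature; the function names and the step-length handling are illustrative, and the thesis's M-ADS rule and line searches are not reproduced here.

```python
import numpy as np

def deflected_direction(g, d_prev, strategy="ADS", gamma=1.5):
    """Blend the current subgradient g with the previous direction d_prev.

    ADS (average direction strategy): d = g + (||g|| / ||d_prev||) * d_prev
    MGT (modified gradient technique): d = g + phi * d_prev, where
        phi = -gamma * <g, d_prev> / ||d_prev||^2  if <g, d_prev> < 0, else 0.
    """
    if d_prev is None or np.linalg.norm(d_prev) == 0.0:
        return g
    if strategy == "ADS":
        return g + (np.linalg.norm(g) / np.linalg.norm(d_prev)) * d_prev
    dot = float(g @ d_prev)
    phi = -gamma * dot / float(d_prev @ d_prev) if dot < 0.0 else 0.0
    return g + phi * d_prev

def dual_ascent(dual_subgradient, u0, steps, strategy="ADS"):
    """Projected dual ascent on u >= 0 with deflected subgradient directions."""
    u, d = np.asarray(u0, dtype=float), None
    for t_k in steps:
        g = dual_subgradient(u)              # subgradient of the concave dual at u
        d = deflected_direction(g, d, strategy)
        u = np.maximum(0.0, u + t_k * d)     # ascent step, projected onto u >= 0
    return u
```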
We present computational experience using a set of randomly generated, structured linear programming problems of the type that might typically arise in the context of discrete optimization. / Master of Science
6 |
Limited Memory Space Dilation and Reduction Algorithms. Ansari, Zafar A. 11 August 1998
In this thesis, we present variants of Shor and Zhurbenko's r-algorithm, motivated by the memoryless and limited memory updates for differentiable quasi-Newton methods. This well-known r-algorithm, which employs a space dilation strategy in the direction of the difference between two successive subgradients, is recognized as one of the most effective procedures for solving nondifferentiable optimization problems. However, the method needs to store the space dilation matrix and update it at every iteration, resulting in a substantial computational burden for large-sized problems. To circumvent this difficulty, we first develop a memoryless update scheme. In the space transformation sense, the new update scheme can be viewed as a combination of space dilation and reduction operations. We prove convergence of this new algorithm and demonstrate how it can be used in conjunction with a variable target value method that allows a practical, convergent implementation. For performance comparisons we examine other memoryless and limited memory variants, and also propose a modification of a related algorithm due to Polyak that employs a projection onto a pair of Kelley's cutting planes. These variants are tested along with Shor's r-algorithm on a set of standard test problems from the literature, as well as on randomly generated dual transportation and assignment problems. Our computational experiments reveal that the proposed memoryless space dilation and reduction algorithm (VT-MSDR) and the proposed modification of the Polyak-Kelley cutting plane method (VT-PKC) provide an overall competitive performance relative to the other methods tested with respect to solution quality and computational effort. The r-algorithm becomes increasingly expensive with an increase in problem size, while not providing any gain in solution quality. The fixed dilation (with no reduction) strategy (VT-MSD) provides a comparable, though second-choice, alternative to VT-MSDR. Employing a two-step limited memory extension over VT-MSD sometimes helps in improving the solution quality, although it adds to the computational effort and is not as robust a procedure. / Master of Science
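For orientation, the space dilation machinery works in a transformed space: a matrix B_k is multiplied by the dilation operator R_beta(xi) = I + (beta - 1) xi xi^T, applied in the direction of the normalized difference of successive transformed subgradients, and the step is taken along the transformed subgradient. The following Python fragment is a schematic rendering of one such step as the r-algorithm is usually described; the function name and parameter choices are illustrative, and it is not the VT-MSDR implementation of the thesis.

```python
import numpy as np

def r_algorithm_step(x, g, g_prev, B, h, beta=0.5):
    """One schematic step of Shor's r-algorithm in space dilation form.

    x, g, g_prev : current point, current and previous subgradients
    B            : space transformation matrix (initially the identity)
    h            : steplength in the transformed space
    beta         : dilation coefficient, 0 < beta < 1
    """
    r = B.T @ (g - g_prev)
    nr = np.linalg.norm(r)
    if nr > 0.0:
        xi = r / nr
        # dilate the transformed space in direction xi: B <- B (I + (beta - 1) xi xi^T)
        B = B + (beta - 1.0) * np.outer(B @ xi, xi)
    gt = B.T @ g
    x_next = x - h * (B @ gt) / np.linalg.norm(gt)   # move along the transformed subgradient
    return x_next, B
```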
7 |
Vector-Valued Markov Games / Vektorwertige Markov-Spiele. Piskuric, Mojca. 16 April 2001
The subject of the thesis is vector-valued Markov games. Chapter 1 presents the idea that led to the development of the theory of general stochastic games. The work of Lloyd S. Shapley is outlined, and the most important authors and bibliography are stated. Also, the motivation behind the research of vector-valued game-theoretic problems is presented. Chapter 2 develops a rigorous mathematical model of vector-valued N-person Markov games. The corresponding definitions are stated, and the notation, as well as the notion of a strategy, is explained in detail. On the basis of these definitions a probability measure is constructed, in an appropriate probability space, which controls the stochastic game process. Furthermore, as in all models of stochastic control, a payoff is specified, in our case the expected discounted payoff. The principles of vector optimization are stated in Chapter 3, and the concept of optimality with respect to some convex cone is developed. This leads to the generalization of Nash equilibria from scalar- to vector-valued games, the so-called D-equilibria. Examples are provided to show that this definition really is a generalization of the existing definitions for scalar-valued games. For a given convex cone D, necessary and sufficient conditions are found for when a strategy is a D-equilibrium. Furthermore, it is shown that a D-equilibrium in stationary strategies exists, as one would expect from the known results of the theory of scalar-valued stochastic games. The main result of this chapter is a generalization of an existing result for 2-person vector-valued Markov games to N-person Markov games, namely that a D-equilibrium of an N-person Markov game is a subgradient of specially constructed support functions of the original payoff functions. To be able to develop solution procedures in the simplest case, the 2-person zero-sum case, Chapter 4 introduces the Denardo dynamic programming formalism. In the space of all p-dimensional functions we define a dynamic programming operator H_pi to describe the solutions of Markov games. The first of the two main results in this chapter is the following: the expected overall payoff to player 1, f(pi), for a fixed stationary strategy pi, is the fixed point of the operator H_pi. The second theorem then shows that the latter result is exactly the vector-valued generalization of the famous Shapley result. These theorems are fundamental for the subsequent development of two algorithms, successive approximations and the Hoffman-Karp algorithm. A numerical example for both algorithms is presented. Chapter 4 finishes with a discussion of other significant results and an outline of further research. The Appendix presents the main results from general game theory, most of which were used in developing both the theoretical and algorithmic parts of this thesis.
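To make the Denardo-style operator concrete, the scalar two-person zero-sum analogue behind Shapley's result has the following familiar form; it is stated here only for orientation, since the thesis works with the p-dimensional, vector-valued extension.

$$
(H\Phi)(s) \;=\; \operatorname{val}_{a \in A(s),\, b \in B(s)} \Bigl[\, r(s,a,b) \;+\; \beta \sum_{s'} p(s' \mid s,a,b)\, \Phi(s') \,\Bigr],
$$

where $\operatorname{val}[\cdot]$ denotes the value of the resulting matrix game, $\beta \in (0,1)$ is the discount factor, and the value of the discounted game is the unique fixed point $\Phi = H\Phi$.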
8 |
Vector-Valued Markov Games. Piskuric, Mojca. 23 April 2001
9 |
Cutting plane methods and dual problems. Gladin, Egor. 28 August 2024
The present thesis studies cutting plane methods, which are a group of iterative algorithms for minimizing a (possibly nonsmooth) convex function over a compact convex set. We consider two prominent examples, namely, the ellipsoid method and Vaidya's method, and show that their convergence rate is preserved even when an inexact oracle is used. Furthermore, we demonstrate that it is possible to use these methods in the context of stochastic optimization efficiently.
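As a reminder of what a cutting plane method looks like in its simplest form, here is a compact sketch of the central-cut ellipsoid method driven by a (possibly inexact) subgradient oracle. It is a textbook rendering added for illustration, not the thesis's implementation; the names ellipsoid_method and subgrad are assumed, and the sketch requires dimension n >= 2.

```python
import numpy as np

def ellipsoid_method(subgrad, center, radius, iters):
    """Central-cut ellipsoid method for minimizing a convex f over a ball.

    subgrad(x) returns a (possibly inexact) subgradient of f at x.
    The ellipsoid {x : (x - c)^T P^{-1} (x - c) <= 1} starts as the ball
    of the given radius around `center`.
    """
    n = len(center)
    c = np.asarray(center, dtype=float)
    P = (radius ** 2) * np.eye(n)
    for _ in range(iters):
        g = subgrad(c)
        gPg = float(g @ P @ g)
        if gPg <= 0.0:
            break
        gt = (P @ g) / np.sqrt(gPg)           # cut direction in the ellipsoid metric
        c = c - gt / (n + 1)                  # shift the center away from the cut
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(gt, gt))
    # in practice one returns the visited point with the best (approximate) value
    return c
```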
Another direction where cutting plane methods can be useful is Lagrange dual problems. Commonly, the objective and its derivatives can only be computed approximately in such problems. Thus, the methods' insensitivity to error in subgradients comes in handy. As an application example, we propose a linearly converging dual method for a constrained Markov decision process (CMDP) based on Vaidya's algorithm. We demonstrate the performance of the proposed method in a simple RL environment.
The work also investigates the concept of accuracy certificates for convex minimization problems. Certificates allow for online verification of the accuracy of approximate solutions. In this thesis, we generalize the notion of accuracy certificates to the setting of an inexact first-order oracle. Furthermore, we propose an explicit way to construct accuracy certificates for a large class of cutting plane methods. As a by-product, we show that the considered methods can be efficiently used with a noisy oracle even though they were originally designed to be equipped with an exact oracle. Finally, we illustrate the behavior of the proposed certificates in numerical experiments, highlighting that they provide a tight upper bound on the objective residual.
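For context, an accuracy certificate in the sense used here can be stated as follows for an exact first-order oracle (the thesis generalizes the notion to inexact oracles): after a method has queried points x_1, ..., x_N of the feasible set Q and received subgradients g_1, ..., g_N, any convex weights lambda_i certify a bound on the optimality gap,

$$
\hat{x} \;=\; \sum_{i=1}^{N} \lambda_i x_i, \qquad
f(\hat{x}) - \min_{x \in Q} f(x) \;\le\; \varepsilon(\lambda) \;:=\; \max_{x \in Q} \sum_{i=1}^{N} \lambda_i \,\langle g_i,\, x_i - x \rangle,
$$

which follows from the convexity of $f$ together with the subgradient inequality $f(x) \ge f(x_i) + \langle g_i, x - x_i \rangle$.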
10 |
An Iterative MPEG Super-Resolution with an Outer Approximation of Framewise Quantization Constraint. SAKANIWA, Kohichi; YAMADA, Isao; ONO, Toshiyuki; HASEGAWA, Hiroshi. 01 September 2005
No description available.