  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Study of the Dumping Predictive Information Systems on Coated Steel Products – A Case Study of S Company

Chuang, Kuo-Hsin 24 January 2005 (has links)
After Taiwan joined the World Trade Organization (WTO), its steel import tariffs were reduced almost to nil (duty-free) from 2004. As a result of the heavy inflow of imported steel products and local manufacturers' continued investment in enlarging output capacity, the domestic steel market has tended toward a surplus of supply over demand. However, the strong demand for steel from mainland China in recent years has not only helped defer the anti-dumping trade-remedy measures applied worldwide since 2000, but has also given the major domestic steel manufacturers time to adjust and improve their market structure and profitability. This has enabled local steel producers to gradually enlarge their export volumes while aggressively expanding output in mainland China. While the steel industry booms, international steel prices rise and steel trading expands, I am confident that only steel enterprises that pay close attention to the anti-dumping measures formulated by the WTO, and take appropriate action in time, will be able to raise their competitive edge. Therefore, promptly setting up an information system for dumping prediction and anti-dumping knowledge is a prerequisite for enhancing an enterprise's competitiveness. The results of this research apply dumping prediction to coated/galvanized sheet steel products in two areas:
(1) In line with WTO regulations and US anti-dumping law, I collected information on legal provisions, dumping cases, documents and related materials, and combined them into one consolidated concept to construct an up-to-date dumping early-warning system. (2) A further study was made of the early-warning issues in price negotiation and price finalization within S Company's marketing department, between order reception and goods issue. I found that it is critically important for the enterprise to systematically set and appropriately structure its sales prices in advance, and to heed early-warning alarms linked with the marketing model team, before facing an international anti-dumping complaint or accusation.
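The early-warning idea above rests on the WTO-style dumping margin, the gap between normal (home-market) value and export price expressed as a fraction of the export price. The following minimal sketch is my own illustration, not the thesis system; the threshold is the WTO de minimis level, while the function names and example prices are assumptions:

```python
# Hypothetical dumping early-warning check. The 2% de minimis threshold
# follows the WTO Anti-Dumping Agreement; all prices are illustrative.

def dumping_margin(normal_value: float, export_price: float) -> float:
    """Dumping margin as a fraction of the export price."""
    return (normal_value - export_price) / export_price

def warn_level(margin: float, de_minimis: float = 0.02) -> str:
    """Classify a transaction relative to the de minimis threshold."""
    if margin <= 0:
        return "no dumping"
    if margin < de_minimis:
        return "de minimis"
    return "alert"

# Example: home-market value 500 USD/t, export price 450 USD/t
m = dumping_margin(500.0, 450.0)
print(round(m, 3), warn_level(m))  # margin ~0.111 -> "alert"
```

Run over each pending sales contract, such a check could raise the pre-warning alarm before prices are finalized rather than after a complaint is filed.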
2

Sufficient encoding of dynamical systems

Creutzig, Felix 04 July 2008 (has links)
This thesis consists of two parts. In the first part, I investigate the coding of communication signals in a bursting interneuron in the auditory system of the grasshopper Chorthippus biguttulus. The intra-burst spike count encodes one temporal feature of the communication signal, the pause duration. I show that this code can be understood by a model of parallel fast excitation and slow inhibition. Furthermore, temporal integration of the spike train of this bursting interneuron yields a desirable time-scale-invariant read-out of the communication signal. This mechanism can be integrated into a more comprehensive model that explains the behavioural response of grasshoppers. In the second part of this thesis, I combine concepts from information theory and linear system theory to operationalize the notion of "predictive information". In the simple case of predicting the next time step of a signal in an information-theoretically optimal sense, I obtain a description by eigenvectors identical to those of another established algorithm, the so-called Slow Feature Analysis. In the general case, I optimize a dynamical system such that the predictive information in the input past about the output future is optimally compressed into the state space. Thereby, I obtain an information-theoretically optimal characterization of the reduced system, based on the eigenvectors of the conditional covariance matrix between input past and output future.
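The conditional-covariance construction described above can be sketched numerically. The following toy is my own illustration, not code from the thesis: it estimates, for a single time series, the covariance of a future window conditioned on a past window, whose small eigenvalues mark future directions that the past predicts well. The window lengths and the AR(2) test signal are assumptions:

```python
# Toy sketch: predictive directions as eigenvectors of the sample
# conditional covariance of the future given the past.
import numpy as np

rng = np.random.default_rng(0)

# Stable AR(2) signal with temporal structure (illustrative choice)
T = 20000
x = np.zeros(T)
for t in range(2, T):
    x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + rng.standard_normal()

p = f = 5  # past / future window lengths (assumed)
past = np.stack([x[t - p:t] for t in range(p, T - f)])
fut = np.stack([x[t:t + f] for t in range(p, T - f)])

# Joint covariance blocks of (past, future)
C = np.cov(np.hstack([past, fut]).T)
Cpp, Cpf = C[:p, :p], C[:p, p:]
Cfp, Cff = C[p:, :p], C[p:, p:]

# Conditional covariance of the future given the past (Schur complement);
# its eigendecomposition orders future directions by residual uncertainty.
cond = Cff - Cfp @ np.linalg.solve(Cpp, Cpf)
eigvals, eigvecs = np.linalg.eigh(cond)
print("residual eigenvalues:", eigvals.round(3))
```

In the thesis the analogous matrix relates the input past to the output future of a controlled dynamical system; this scalar toy only shows the linear-algebraic skeleton.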
3

Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: Stochastic Aspects and Gradient-Based Optimal Control

Bernigau, Holger 27 April 2015 (has links) (PDF)
Motivation and background: The enormous range of capabilities that every human learns throughout his life is probably among the most remarkable and fascinating aspects of life. Learning has therefore drawn much interest from scientists working in very different fields like philosophy, biology, sociology, educational sciences, computer science and mathematics. This thesis focuses on the information-theoretic and mathematical aspects of learning. We are interested in the learning process of an agent (which can be, for example, a human, an animal, a robot, an economic institution or a state) that interacts with its environment. Common models for this interaction are Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Learning is then considered to be the maximization of the expectation of a predefined reward function. In order to formulate general principles (like a formal definition of curiosity-driven learning or avoidance of unpleasant situations) in a rigorous way, it might be desirable to have a theoretical framework for the optimization of more complex functionals of the underlying process law. This might include the entropy of certain sensor values or their mutual information. An optimization of the latter quantity (also known as predictive information) has been investigated intensively, both theoretically and experimentally using computer simulations, by N. Ay, R. Der, K. Zahedi and G. Martius. In this thesis, we develop a mathematical theory for learning in the sensorimotor loop beyond expected reward maximization. Approaches and results: This thesis covers four different topics related to the theory of learning in the sensorimotor loop. First of all, we need to specify the model of an agent interacting with the environment, either with learning or without learning. This interaction naturally results in complex causal dependencies.
Since we are interested in asymptotic properties of learning algorithms, it is necessary to consider infinite time horizons. It turns out that the well-understood theory of causal networks known from the machine learning literature is not powerful enough for our purpose. Therefore we extend important theorems on causal networks to infinite graphs and general state spaces using analytical methods from measure theoretic probability theory and the theory of discrete time stochastic processes. Furthermore, we prove a generalization of the strong Markov property from Markov processes to infinite causal networks. Secondly, we develop a new idea for a projected stochastic constraint optimization algorithm. Generally a discrete gradient ascent algorithm can be used to generate an iterative sequence that converges to the stationary points of a given optimization problem. Whenever the optimization takes place over a compact subset of a vector space, it is possible that the iterative sequence leaves the constraint set. One possibility to cope with this problem is to project all points to the constraint set using Euclidean best-approximation. The latter is sometimes difficult to calculate. A concrete example is an optimization over the unit ball in a matrix space equipped with operator norm. Our idea consists of a back-projection using quasi-projectors different from the Euclidean best-approximation. In the matrix example, there is another canonical way to force the iterative sequence to stay in the constraint set: Whenever a point leaves the unit ball, it is divided by its norm. For a given target function, this procedure might introduce spurious stationary points on the boundary. We show that this problem can be circumvented by using a gradient that is tailored to the quasi-projector used for back-projection. 
We state a general technical compatibility condition between a quasi-projector and a metric used for gradient ascent, prove convergence of stochastic iterative sequences and provide an appropriate metric for the unit-ball example. Thirdly, a class of learning problems in the sensorimotor loop is defined and motivated. This class of problems is more general than the usual expected reward maximization and is illustrated by numerous examples (like expected reward maximization, maximization of the predictive information, maximization of the entropy and minimization of the variance of a given reward function). We also provide stationarity conditions together with appropriate gradient formulas. Last but not least, we prove convergence of a stochastic optimization algorithm (as considered in the second topic) applied to a general learning problem (as considered in the third topic). It is shown that the learning algorithm converges to the set of stationary points. Among others, the proof covers the convergence of an improved version of an algorithm for the maximization of the predictive information as proposed by N. Ay, R. Der and K. Zahedi. We also investigate an application to a linear Gaussian dynamic, where the policies are encoded by the unit-ball in a space of matrices equipped with operator norm.
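The back-projection idea from the abstract, renormalizing an iterate by its operator norm whenever it leaves the unit ball, can be sketched on a toy problem. This is my own illustration, not the thesis algorithm: the quadratic target, step-size schedule and noise level are all assumptions, and the sketch shows only the projection mechanism, not the compatible-metric correction the thesis develops:

```python
# Toy projected stochastic gradient ascent over the unit ball of
# matrices in operator (spectral) norm, with renormalization as the
# quasi-projection. Target and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

def target(K):
    """Toy concave objective: alignment with A minus a quadratic penalty."""
    return np.sum(A * K) - 0.5 * np.sum(K * K)

def grad(K):
    return A - K

def project_opnorm(K):
    """Quasi-projection: divide by the spectral norm if outside the ball."""
    s = np.linalg.norm(K, ord=2)  # largest singular value
    return K / s if s > 1.0 else K

K = np.zeros((3, 3))
for step in range(500):
    lr = 1.0 / (step + 10)                      # diminishing step sizes
    noise = 0.01 * rng.standard_normal(K.shape)  # stochastic gradient
    K = project_opnorm(K + lr * (grad(K) + noise))

print("spectral norm:", round(np.linalg.norm(K, ord=2), 3))
```

As the abstract notes, such renormalization can introduce spurious stationary points on the boundary for a naive gradient; the thesis's remedy is a gradient metric tailored to the quasi-projector, which this sketch omits.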
4

Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: General Stochastic Aspects and Gradient Methods for Optimal Control

Bernigau, Holger 04 July 2015 (has links)
Motivation and background: The enormous range of capabilities that every human learns throughout his life is probably among the most remarkable and fascinating aspects of life. Learning has therefore drawn much interest from scientists working in very different fields like philosophy, biology, sociology, educational sciences, computer science and mathematics. This thesis focuses on the information-theoretic and mathematical aspects of learning. We are interested in the learning process of an agent (which can be, for example, a human, an animal, a robot, an economic institution or a state) that interacts with its environment. Common models for this interaction are Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Learning is then considered to be the maximization of the expectation of a predefined reward function. In order to formulate general principles (like a formal definition of curiosity-driven learning or avoidance of unpleasant situations) in a rigorous way, it might be desirable to have a theoretical framework for the optimization of more complex functionals of the underlying process law. This might include the entropy of certain sensor values or their mutual information. An optimization of the latter quantity (also known as predictive information) has been investigated intensively, both theoretically and experimentally using computer simulations, by N. Ay, R. Der, K. Zahedi and G. Martius. In this thesis, we develop a mathematical theory for learning in the sensorimotor loop beyond expected reward maximization. Approaches and results: This thesis covers four different topics related to the theory of learning in the sensorimotor loop. First of all, we need to specify the model of an agent interacting with the environment, either with learning or without learning. This interaction naturally results in complex causal dependencies.
Since we are interested in asymptotic properties of learning algorithms, it is necessary to consider infinite time horizons. It turns out that the well-understood theory of causal networks known from the machine learning literature is not powerful enough for our purpose. Therefore we extend important theorems on causal networks to infinite graphs and general state spaces using analytical methods from measure theoretic probability theory and the theory of discrete time stochastic processes. Furthermore, we prove a generalization of the strong Markov property from Markov processes to infinite causal networks. Secondly, we develop a new idea for a projected stochastic constraint optimization algorithm. Generally a discrete gradient ascent algorithm can be used to generate an iterative sequence that converges to the stationary points of a given optimization problem. Whenever the optimization takes place over a compact subset of a vector space, it is possible that the iterative sequence leaves the constraint set. One possibility to cope with this problem is to project all points to the constraint set using Euclidean best-approximation. The latter is sometimes difficult to calculate. A concrete example is an optimization over the unit ball in a matrix space equipped with operator norm. Our idea consists of a back-projection using quasi-projectors different from the Euclidean best-approximation. In the matrix example, there is another canonical way to force the iterative sequence to stay in the constraint set: Whenever a point leaves the unit ball, it is divided by its norm. For a given target function, this procedure might introduce spurious stationary points on the boundary. We show that this problem can be circumvented by using a gradient that is tailored to the quasi-projector used for back-projection. 
We state a general technical compatibility condition between a quasi-projector and a metric used for gradient ascent, prove convergence of stochastic iterative sequences and provide an appropriate metric for the unit-ball example. Thirdly, a class of learning problems in the sensorimotor loop is defined and motivated. This class of problems is more general than the usual expected reward maximization and is illustrated by numerous examples (like expected reward maximization, maximization of the predictive information, maximization of the entropy and minimization of the variance of a given reward function). We also provide stationarity conditions together with appropriate gradient formulas. Last but not least, we prove convergence of a stochastic optimization algorithm (as considered in the second topic) applied to a general learning problem (as considered in the third topic). It is shown that the learning algorithm converges to the set of stationary points. Among others, the proof covers the convergence of an improved version of an algorithm for the maximization of the predictive information as proposed by N. Ay, R. Der and K. Zahedi. We also investigate an application to a linear Gaussian dynamic, where the policies are encoded by the unit-ball in a space of matrices equipped with operator norm.
