1

Méthodes d'analyse fonctionnelle pour des systèmes de dimension infinie issus de la dynamique de populations / Functional analysis methods for infinite dimensional systems coming from population dynamics

Hegoburu, Nicolas 07 May 2019 (has links)
This thesis studies the controllability, through migration terms, of partial differential equations modeling the dynamics of an age-structured population. The population equations considered are essentially those described by Lotka and McKendrick, with or without spatial diffusion of the individuals, together with their nonlinear counterpart described by the Gurtin and MacCamy equations. The first part studies the internal controllability properties of the linear Lotka-McKendrick equations (without diffusion) when the control acts only on the young individuals of the population. Null controllability and controllability towards the stationary solutions of the considered system are established, using the properties of the semigroup associated with the population operator originally studied by Song (the control theorist reputedly responsible for the radicalization of China's one-child policy). In addition, the preservation in time of the nonnegativity of the controlled population density is studied. The next two parts establish, respectively, null controllability and time optimal control results for the Lotka-McKendrick equation with spatial diffusion of the individuals (here the control acts at every age, but only on a subdomain of the considered spatial domain). The methods employed are adaptations of those originally developed for the control of parabolic equations, in particular the Lebeau-Robbiano method for the null controllability of the heat equation and its generalization developed by Wang for the time optimal control of the heat equation. A last part studies the controllability properties of the nonlinear Gurtin-MacCamy equations (without diffusion) when the control acts only on a given age range. In this case, the use of comparison principles in age-structured population dynamics yields the null controllability of the considered equations.
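For reference, a minimal statement of the controlled linear Lotka-McKendrick system studied in the first part, in standard notation assumed here rather than taken from the thesis, with a control localized on young ages:

\[
\begin{aligned}
&\partial_t p(t,a) + \partial_a p(t,a) + \mu(a)\,p(t,a) = \chi_{(0,a_0)}(a)\,u(t,a), && t>0,\ a\in(0,a_\dagger),\\
&p(t,0) = \int_0^{a_\dagger} \beta(a)\,p(t,a)\,\mathrm{d}a, && t>0,\\
&p(0,a) = p_0(a), && a\in(0,a_\dagger),
\end{aligned}
\]

where \(p\) is the population density, \(\mu\) and \(\beta\) the mortality and fertility rates, \(a_\dagger\) the maximal age, and the control \(u\) acts only on ages below some \(a_0\) (the young individuals). Null controllability then means steering \(p(T,\cdot)\) to zero (or to a stationary solution) at some time \(T\).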
2

On the significance of borders

Kubin, Ingrid, Gardini, Laura 08 1900 (has links) (PDF)
We propose a prototype model of market dynamics in which all functional relationships are linear. We take into account three borders, defined by linear functions, which are intrinsic to the economic reasoning: non-negativity of prices, downward rigidity of capacity (depreciation), and a capacity constraint on the production decision. Given the linear specification, the borders are the only source of cyclical and more complex dynamics. In particular, we discuss centre bifurcations, border collision bifurcations and degenerate flip bifurcations: dynamic phenomena whose occurrence is intimately related to the existence of borders. / Series: Department of Economics Working Paper Series
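A hedged illustration of the mechanism described above (this is not the paper's specification; the model form, parameter values and variable names are invented for this sketch): a linear cobweb-style market in which the only nonlinearities are the three borders. With the illustrative parameters chosen here the unconstrained linear dynamics are unstable, so any bounded fluctuations are produced solely by the borders clipping the linear laws.

```python
import numpy as np

def simulate(T=200, alpha=1.3, beta=1.0, delta=0.1, k0=1.0, q0=0.5):
    """Piecewise-linear market sketch: linear demand and supply, with
    three borders, namely a zero price floor, a capacity ceiling on
    output, and capacity that can shrink only through depreciation."""
    q, k = q0, k0
    path = []
    for _ in range(T):
        p = max(0.0, 1.0 - alpha * q)        # linear inverse demand, price floored at zero
        q_plan = beta * p                     # linear supply response to the last price
        q = min(q_plan, k)                    # production limited by current capacity
        k = max(q_plan, (1.0 - delta) * k)    # capacity expands to plans, otherwise depreciates
        path.append((p, q, k))
    return np.array(path)

print(simulate()[-5:])  # last few (price, output, capacity) triples
```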
3

The impact of Brexit on trade patterns and industry location: a NEG analysis

Commendatore, Pasquale, Kubin, Ingrid, Sushko, Iryna 08 1900 (has links) (PDF)
We explore the effects of Brexit on trade patterns and on the spatial distribution of industry between the United Kingdom and the European Union and within the EU. Our study adopts a new economic geography (NEG) perspective, developing a linear model with three regions: the UK and two separate regions composing the EU. The 3-region framework and linear demands allow for different trade patterns. Two ante-Brexit situations are possible, depending on the interplay between local market size, local competition and trade costs: industrial agglomeration or dispersion. Under a soft or a hard Brexit scenario, this ante-Brexit situation is altered substantially, depending on which scenario prevails. UK firms could move to the larger EU market, even to its peripheral region, reacting to the higher trade barriers, with relocation acting as a substitute for trade. Alternatively, some EU firms could move into the more isolated UK market, finding shelter from the competition inside the EU. We also consider the post-Brexit scenario of deeper EU integration, leading to a weakening of trade links between the EU and the UK. Our analysis also reveals a highly complex bifurcation sequence leading to many instances of multistability, intricate basins of attraction, and cyclical and chaotic dynamics. / Series: Department of Economics Working Paper Series
4

HYPER-RECTANGLE COVER THEORY AND ITS APPLICATIONS

Chu, Xiaoxuan January 2022 (has links)
In this thesis, we propose a novel hyper-rectangle cover theory which provides a new approach to analyzing mathematical problems with nonnegativity constraints on variables. In this theory, two fundamental concepts, cover order and cover length, are introduced and studied in detail. In the same manner as determining the rank of a matrix, we construct a specific échelon form of the matrix to obtain the cover order of a given matrix efficiently and effectively. We discuss various structures of the échelon form for some special cases in detail. Based on the structure and properties of the constructed échelon form, the concepts of non-negative linear independence and non-negative linear dependence are developed. Using the properties of the cover order, we obtain necessary and sufficient conditions for the existence and uniqueness of solutions to systems of linear equations with nonnegativity constraints on variables, in both the homogeneous and non-homogeneous cases. In addition, we apply the cover theory to analyze some typical problems in linear algebra and optimization with nonnegativity constraints on variables, including linear programming problems and non-negative least squares (NNLS) problems. For the linear programming problem, we study its three possible solution behaviors through hyper-rectangle cover theory, and show that a series of feasible solutions can be obtained for problems with the zero-cover échelon form structure. On the other hand, we develop a method to obtain the cover length of a covered variable. In the process, we discover the relationship between the cover length determination problem and the NNLS problem. This enables us to obtain an analytical optimal value for the NNLS problem. / Thesis / Doctor of Philosophy (PhD)
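For concreteness, the NNLS problem referred to above is the minimization of \(\|Ax-b\|_2\) subject to \(x\ge 0\). The short sketch below only fixes this formulation numerically, using SciPy's standard solver on random illustrative data; the thesis itself treats the problem analytically through hyper-rectangle cover theory.

```python
import numpy as np
from scipy.optimize import nnls

# Non-negative least squares: minimize ||A x - b||_2 subject to x >= 0.
# The matrix A and vector b are random and purely illustrative.
rng = np.random.default_rng(0)
A = rng.random((6, 4))
b = rng.random(6)

x, residual_norm = nnls(A, b)   # classical active-set NNLS solver
print("nonnegative solution:", x)
print("residual norm:", residual_norm)
```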
5

A Hierarchical Bayesian Model for the Unmixing Analysis of Compositional Data subject to Unit-sum Constraints

Yu, Shiyong 15 May 2015 (has links)
Modeling of compositional data is emerging as an active area in statistics. It is assumed that compositional data represent the convex linear mixing of a definite number of independent sources, usually referred to as end members. A generic problem in practice is to appropriately separate the end members and quantify their fractions from compositional data subject to nonnegativity and unit-sum constraints. A number of methods, essentially related to polytope expansion, have been proposed. However, these deterministic methods have some potential problems. In this study, a hierarchical Bayesian model was formulated, and the algorithms were coded in MATLAB®. A test run using both a synthetic and a real-world dataset yields scientifically sound and mathematically optimal outputs broadly consistent with those of other, non-Bayesian methods. The sensitivity of this model to the choice of priors and to the structure of the error covariance matrix is also discussed.
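In symbols (assumed generic notation, not the thesis's), the unmixing model described above writes each observed composition as a constrained mixture of end members:

\[
\mathbf{x}_i \;=\; \sum_{k=1}^{K} f_{ik}\,\mathbf{e}_k \;+\; \boldsymbol{\varepsilon}_i,
\qquad f_{ik}\ge 0,\qquad \sum_{k=1}^{K} f_{ik}=1,
\]

where \(\mathbf{x}_i\) is the \(i\)-th observed composition, \(\mathbf{e}_k\) are the \(K\) end members and \(f_{ik}\) their fractions; in a hierarchical Bayesian formulation the fractions can, for example, be given a Dirichlet prior, which enforces both the nonnegativity and the unit-sum constraints automatically.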
6

Algorithmes de diagonalisation conjointe par similitude pour la décomposition canonique polyadique de tenseurs : applications en séparation de sources / Joint diagonalization by similarity algorithms for the canonical polyadic decomposition of tensors : Applications in blind source separation

André, Rémi 07 September 2018 (has links)
This thesis introduces new joint diagonalization by similarity (joint eigenvalue decomposition) algorithms. Among other things, these algorithms make it possible to solve the canonical polyadic decomposition problem for tensors. This decomposition is widely used in blind source separation problems. Using the joint eigenvalue decomposition to solve the canonical polyadic decomposition problem avoids some of the difficulties from which the other families of canonical polyadic decomposition methods generally suffer, such as a slow convergence rate, sensitivity to an overestimated number of factors and sensitivity to correlated factors. Existing joint diagonalization by similarity algorithms dealing with complex data either give good results when the noise level is low, or are more robust to noise but have a high computational cost. We therefore first propose joint diagonalization by similarity algorithms that handle real and complex data in the same way. Moreover, in several applications the factor matrices of the canonical polyadic decomposition contain only nonnegative entries. Taking this nonnegativity constraint into account makes canonical polyadic decomposition algorithms more robust to an overestimated number of factors or to highly correlated factors. We therefore also propose joint diagonalization by similarity algorithms exploiting this constraint. The numerical simulations presented show that the first family of algorithms improves the estimation of the unknown parameters and reduces the computational cost. They also show that the nonnegativity-constrained algorithms improve the estimation of the factor matrices when their columns are highly correlated. Finally, our results are validated through two source separation applications, in digital telecommunications and in fluorescence spectroscopy.
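One standard way to connect the two problems mentioned above, stated here in assumed generic notation for square invertible slices (in practice a compression step usually precedes it): if a third-order tensor admits a rank-\(R\) canonical polyadic decomposition, its frontal slices can be written as

\[
\mathbf{T}_k \;=\; \mathbf{A}\,\mathrm{diag}(\mathbf{C}_{k,:})\,\mathbf{B}^{\mathsf T},\qquad k=1,\dots,K,
\]

so that the matrices \(\mathbf{M}_k = \mathbf{T}_k\mathbf{T}_1^{-1} = \mathbf{A}\,\mathbf{D}_k\,\mathbf{A}^{-1}\) share the common eigenvector matrix \(\mathbf{A}\). Estimating \(\mathbf{A}\) therefore amounts to a joint diagonalization by similarity (joint eigenvalue decomposition) of the set \(\{\mathbf{M}_k\}\).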
7

Identification aveugle de mélanges et décomposition canonique de tenseurs : application à l'analyse de l'eau / Blind identification of mixtures and canonical tensor decomposition : application to water analysis

Royer, Jean-Philip 04 October 2013 (has links)
In this thesis, we focus on the minimal polyadic decomposition of third-order tensors, a problem generally referred to under different names: "Canonical Polyadic" (CP), "CanDecomp", or "Parafac". This decomposition is useful in a very wide range of applications. Here, however, we concentrate on fluorescence spectroscopy applied to particular environmental data, namely water samples that may have been collected at different locations or at different times. These samples contain a mixture of several organic molecules, and the goal of the numerical processing is to separate and re-estimate the compounds present in the studied samples. Moreover, in several applications such as hyperspectral unmixing or, precisely, chemometrics, it is useful to constrain the sought loading matrices to be real and nonnegative, because they represent nonnegative physical quantities (spectra, abundance fractions, concentrations, etc.). This is why all the algorithms developed during this thesis take this constraint into account (the major advantage of the constraint being that it makes the considered approximation problem well posed). Some of these algorithms rely on methods close to barrier functions; other approaches directly parameterize the sought loading matrices by squares. Several optimization algorithms were considered: gradient approaches, nonlinear conjugate gradient (well suited to large problems), quasi-Newton (BFGS and DFP) and finally Levenberg-Marquardt. Two versions of these algorithms have been considered: an "Enhanced Line Search" version (ELS, enabling escape from local minima) and a "backtracking" version (alternating with ELS).
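The square parameterization mentioned at the end of the abstract can be sketched as follows, in generic notation assumed rather than taken from the thesis. The nonnegativity-constrained fit

\[
\min_{\mathbf{A},\mathbf{B},\mathbf{C}\,\ge\,0}\;\Big\|\mathcal{T}-\sum_{r=1}^{R}\mathbf{a}_r\circ\mathbf{b}_r\circ\mathbf{c}_r\Big\|^2
\]

is replaced by the unconstrained problem obtained by writing each loading matrix as an elementwise square, \(\mathbf{A}=\mathbf{A}'\odot\mathbf{A}'\), \(\mathbf{B}=\mathbf{B}'\odot\mathbf{B}'\), \(\mathbf{C}=\mathbf{C}'\odot\mathbf{C}'\): nonnegativity then holds by construction, and standard unconstrained optimizers such as those listed above (gradient, conjugate gradient, quasi-Newton, Levenberg-Marquardt) can be applied directly to \(\mathbf{A}'\), \(\mathbf{B}'\), \(\mathbf{C}'\).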
8

Nonnegative matrix and tensor factorizations, least squares problems, and applications

Kim, Jingu 14 November 2011 (has links)
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of the data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite this success, NMF and NTF have been actively developed only in the last decade, and algorithmic strategies for computing them have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms for NMF and NTF are investigated based on a connection between the NMF and NTF problems and nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in NMF and NTF computation and to design a fast algorithm that exploits this structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for NLS problems, it is not applicable to rank-deficient cases. We show that the active-set method with a proper starting vector can in fact solve rank-deficient NLS problems without ever running into rank-deficient least squares problems during its iterations. Going beyond NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression problem, also known as the Lasso, has become very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and a variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items belonging to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented, along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
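A minimal sketch of the alternating NLS (ANLS) framework that underlies the first part of the abstract. SciPy's nnls routine (a classical active-set solver) is used here purely as a stand-in for the accelerated block principal pivoting method proposed in the thesis; the function name and problem sizes are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(X, r, iters=20, seed=0):
    """Alternating nonnegativity-constrained least squares for NMF:
    fix W and solve an NLS problem for H, then fix H and solve for W."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, r))
    for _ in range(iters):
        # min_{H >= 0} ||W H - X||_F, solved column by column
        H = np.column_stack([nnls(W, X[:, j])[0] for j in range(n)])
        # min_{W >= 0} ||W H - X||_F, i.e. ||H^T W^T - X^T||_F, solved row by row
        W = np.column_stack([nnls(H.T, X[i, :])[0] for i in range(m)]).T
    return W, H

X = np.abs(np.random.default_rng(1).standard_normal((30, 20)))  # toy nonnegative data
W, H = nmf_anls(X, r=5)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```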
