51 |
Décompositions spatio-temporelles pour l'étude des textures dynamiques : contribution à l'indexation vidéo / Spatio-temporal decompositions for the study of dynamic textures: contribution to video indexing. Dubois, Sloven, 19 November 2010
This thesis focuses on the study and characterization of Dynamic Textures (DTs), with video indexing in large databases as the target application. As this research topic is new and emerging, we propose a definition of DTs, a taxonomy, and a state of the art. The most representative DT class is described by a formal model that considers DTs as the superposition of several wavefronts and local oscillating phenomena. The design of spatio-temporal analysis tools adapted to DTs is our main contribution. We first show that the 2D+T curvelet transform is relevant for representing wavefronts. In order to analyse and better understand DTs, we then propose to adapt the Morphological Component Analysis approach using new thresholding strategies. These methods are tested on several applications: decomposition of DTs, spatio-temporal segmentation, global motion estimation of a DT, etc. We have shown that Morphological Component Analysis and multi-scale approaches yield significant results for content-based retrieval and dynamic texture indexing on the DynTex database. This thesis constitutes a first step towards the automatic indexing of DTs in image sequences and opens the way for many new developments on this topic. Moreover, the proposed approaches are generic and could be applied in a broader context, for instance the processing of 3D data.
|
52 |
Centers and isochronicity of some polynomial differential systems / Centros e isocronicidade de alguns sistemas diferenciais polinomiais. Fernandes, Wilker Thiago Resende, 20 June 2017
The center-focus and isochronicity problems are two classic problems in the qualitative theory of ordinary differential equations (ODEs). Although these problems have been studied for more than a hundred years, a complete understanding of them is far from being reached. Recently, computational algebra tools have contributed significantly to progress on these problems. The aim of this thesis is to contribute to the study of the center-focus and isochronicity problems. Using computational algebra tools, we find conditions for the existence of two simultaneous centers for a family of quintic systems possessing symmetry; the study of the simultaneous existence of two centers in differential systems is known as the bi-center problem. We investigate conditions for the isochronicity of centers for families of cubic and quintic systems and study their global behaviour in the Poincaré disk. Finally, we study the existence of invariant surfaces and first integrals for a family of 3-dimensional systems. This family, known as the asymmetric May-Leonard system, appears in modelling, for instance as a model for the competition of three species.
|
53 |
Jeux de poursuite-évasion, décompositions et convexité dans les graphes / Pursuit-evasion, decompositions and convexity on graphs. Pardo Soares, Ronan, 08 November 2013
This thesis focuses on the study of structural properties of graphs whose understanding enables the design of efficient algorithms for solving optimization problems. We are particularly interested in decomposition methods, pursuit-evasion games and the notion of convexity. The Process game has been defined as a model for the routing reconfiguration problem in WDM networks. Such games, where a team of searchers has to clear an undirected graph, are often closely related to graph decompositions. In digraphs, we show that the Process game is monotone and we define a new equivalent digraph decomposition. We then investigate graph decompositions further, proposing a unified FPT algorithm to compute several graph width parameters. This algorithm turns out to be the first FPT algorithm for the special and the q-branched tree-width of a graph. Next, we study another pursuit-evasion game which models prefetching problems, introducing the more realistic online variant of the Surveillance game. We investigate the gap between the classical Surveillance game and its connected and online versions by providing new bounds. We then define a general framework for studying pursuit-evasion games, based on linear programming techniques, which allows us to give the first approximation results for some of these games. Finally, we study another parameter related to graph convexity and to the spreading of infection in networks, namely the hull number. We provide several complexity results depending on graph structure, making use of graph decompositions. Some of these results answer open questions from the literature.
|
54 |
Asymptotic staticity and tensor decompositions with fast decay conditions. Avila, Gastón, January 2011
Corvino, Corvino and Schoen, and Chruściel and Delay have shown the existence of a large class of asymptotically flat vacuum initial data for Einstein's field equations which are static or stationary in a neighborhood of space-like infinity, yet quite general in the interior. The proof relies on abstract, non-constructive arguments which make it difficult to calculate such data numerically using similar arguments.
We present a quasilinear elliptic system of equations which we expect can be used to construct vacuum initial data that are asymptotically flat, time-reflection symmetric, and asymptotic to static data up to a prescribed order at space-like infinity. A perturbation argument is used to show the existence of solutions; it is valid when the order at which the solutions approach staticity is restricted to a certain range.
Difficulties appear when trying to improve this result to show the existence of solutions that are asymptotically static at higher order. The problems arise from the lack of surjectivity of a certain operator.
Some tensor decompositions in asymptotically flat manifolds exhibit some of the difficulties encountered above. The Helmholtz decomposition, which plays a role in the preparation of initial data for the Maxwell equations, is discussed as a model problem. A method to circumvent the difficulties that arise when fast decay rates are required is discussed. This is done in a way that opens the possibility to perform numerical computations.
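To illustrate the Helmholtz decomposition used as a model problem here, the following sketch splits a 2D vector field into a gradient (curl-free) part and a divergence-free part in Fourier space. The periodic setting, grid size, and synthetic test field are illustrative assumptions for the sketch; the thesis treats the much harder asymptotically flat, fast-decay setting.

```python
import numpy as np

# Helmholtz decomposition on a periodic grid: v = grad(phi) + w, div(w) = 0.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Synthetic periodic field mixing a gradient part and a divergence-free part
vx = np.cos(X) * np.sin(Y) - np.sin(Y)
vy = np.sin(X) * np.cos(Y) + np.sin(X)

k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                            # mean mode carries no gradient; avoid 0/0

Vx, Vy = np.fft.fft2(vx), np.fft.fft2(vy)
div_hat = 1j * KX * Vx + 1j * KY * Vy     # Fourier transform of div(v)
phi_hat = -div_hat / K2                   # solve Laplace(phi) = div(v)
gx_hat, gy_hat = 1j * KX * phi_hat, 1j * KY * phi_hat   # grad(phi)
wx_hat, wy_hat = Vx - gx_hat, Vy - gy_hat               # divergence-free remainder

# Residual divergence of w in Fourier space (vanishes up to rounding)
res = np.max(np.abs(1j * KX * wx_hat + 1j * KY * wy_hat))

grad_x = np.real(np.fft.ifft2(gx_hat))    # gradient part, physical space
wx = np.real(np.fft.ifft2(wx_hat))        # solenoidal part, physical space
```

The fast decay rates discussed in the text have no analogue in this compact periodic toy problem; the sketch only shows the algebraic structure of the splitting.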
The insights from the analysis of the Helmholtz decomposition are applied to the York decomposition, which is related to the part of the quasilinear system that gives rise to the difficulties. For this decomposition, analogous results are obtained. It turns out, however, that in this case the presence of symmetries of the underlying metric leads to certain complications. The question of whether the results obtained so far can be used again to show, by a perturbation argument, the existence of vacuum initial data which approach static solutions at infinity at any given order thus remains open. The answer requires further analysis and perhaps new methods.
|
55 |
Polynomial Matrix Decompositions: Evaluation of Algorithms with an Application to Wideband MIMO Communications. Brandt, Rasmus, January 2010
The interest in wireless communications among consumers has exploded since the introduction of the "3G" cell phone standards. One reason for their success is the increasingly high data rates achievable through the networks. A further increase in data rates is possible through the use of multiple antennas at either or both sides of the wireless links. Precoding and receive filtering using matrices obtained from a singular value decomposition (SVD) of the channel matrix is a transmission strategy for achieving the channel capacity of a deterministic narrowband multiple-input multiple-output (MIMO) communications channel. When signalling over wideband channels using orthogonal frequency-division multiplexing (OFDM), an SVD must be performed for every sub-carrier. As the number of sub-carriers in this traditional approach grows large, so does the computational load. It is therefore interesting to study alternative means of obtaining the decomposition. A wideband MIMO channel can be modeled as a matrix filter with a finite impulse response, represented by a polynomial matrix. This thesis investigates algorithms which decompose the polynomial channel matrix directly. The resulting decomposition factors can then be used to obtain the sub-carrier based precoding and receive filtering matrices. Existing approximative polynomial matrix QR and singular value decomposition algorithms were modified and studied in terms of decomposition quality and computational complexity. The decomposition algorithms were shown to give decompositions of good quality, but if the goal is to obtain precoding and receive filtering matrices, the computational load is prohibitive for channels with long impulse responses. Two algorithms for performing exact rational decompositions (QRD/SVD) of polynomial matrices were proposed and analyzed.
Although they resulted in excellent decompositions for simple cases, issues with the numerical stability of a spectral factorization step render the algorithms in their current form impractical. For a MIMO channel with an exponentially decaying power-delay profile, the sum rates achieved by employing the filters given by the approximative polynomial SVD algorithm were compared to the channel capacity. It was shown that if the symbol streams were decoded independently, as done in the traditional approach, the sum rates were sensitive to errors in the decomposition. A receiver with a spatially joint detector achieved sum rates close to the channel capacity, but with such a receiver the low-complexity detector set-up of the traditional approach is lost. Summarizing, this thesis has shown that a wideband MIMO channel can be diagonalized in space and frequency using OFDM in conjunction with an approximative polynomial SVD algorithm; however, in order to reach sum rates close to the capacity of a simple channel, the computational load becomes prohibitive compared to the traditional approach for channels with long impulse responses.
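The traditional per-subcarrier approach described above can be sketched as follows: for each OFDM sub-carrier, an SVD of the narrowband channel matrix yields precoding and receive filtering matrices that diagonalize the channel into parallel streams. This is a minimal sketch; the antenna counts, sub-carrier count, and i.i.d. Gaussian channel model are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_rx, n_tx = 8, 4, 4   # illustrative sizes (assumptions)

# One random narrowband channel matrix per sub-carrier (i.i.d. Rayleigh model)
H = (rng.standard_normal((n_sub, n_rx, n_tx))
     + 1j * rng.standard_normal((n_sub, n_rx, n_tx))) / np.sqrt(2)

# One SVD per sub-carrier: H_k = U_k diag(s_k) V_k^H
U, s, Vh = np.linalg.svd(H)

# Precode with V_k and receive-filter with U_k^H: the effective channel
# U_k^H H_k V_k is diagonal, so the streams can be decoded independently.
for k in range(n_sub):
    eff = U[k].conj().T @ H[k] @ Vh[k].conj().T
    assert np.allclose(eff, np.diag(s[k]), atol=1e-10)
```

The polynomial-matrix algorithms studied in the thesis aim to replace this per-subcarrier loop, whose cost grows with the number of sub-carriers, with a single decomposition of the polynomial channel matrix.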
|
56 |
Investigation of probabilistic principal component analysis compared to proper orthogonal decomposition methods for basis extraction and missing data estimation. Lee, Kyunghoon, 21 May 2010
The identification of flow characteristics and the reduction of high-dimensional simulation data have capitalized on an orthogonal basis achieved by proper orthogonal decomposition (POD), also known as principal component analysis (PCA) or the Karhunen-Loeve transform (KLT). In the realm of aerospace engineering, an orthogonal basis is versatile for diverse applications, especially those associated with reduced-order modeling (ROM), such as low-dimensional turbulence models, unsteady aerodynamic models for aeroelasticity and flow control, and steady aerodynamic models for airfoil shape design. When a given data set lacks part of its data, POD must adopt a least-squares formulation, leading to gappy POD, which uses a gappy norm: a variant of the L2 norm that deals with only the known data. Although gappy POD was originally devised to restore marred images, its application has spread to aerospace engineering because various engineering problems can be reformulated as missing data estimation problems that exploit gappy POD. Similar to POD, gappy POD has a broad range of applications such as optimal flow sensor placement, experimental and numerical flow data assimilation, and impaired particle image velocimetry (PIV) data restoration.
Apart from POD and gappy POD, both of which are deterministic formulations, probabilistic principal component analysis (PPCA), a probabilistic generalization of PCA, has been used in the pattern recognition field for speech recognition and in the oceanography area for empirical orthogonal functions in the presence of missing data. In formulation, PPCA presumes a linear latent variable model relating an observed variable with a latent variable that is inferred only from an observed variable through a linear mapping called factor-loading. To evaluate the maximum likelihood estimates (MLEs) of PPCA parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). By virtue of the EM algorithm, the EM-PCA is capable of not only extracting a basis but also restoring missing data through iterations whether the given data are intact or not. Therefore, the EM-PCA can potentially substitute for both POD and gappy POD inasmuch as its accuracy and efficiency are comparable to those of POD and gappy POD. In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data.
On the qualitative side, the theoretical relationship between POD and PPCA is transparent: the factor-loading MLE of PPCA, evaluated by the EM-PCA, corresponds to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they approximate missing data in distinct ways, owing to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. The unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that ultimately characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. Ultimately, a norm reflecting a curve-fitting method is found to affect estimation error reduction more significantly than a basis, for two example test data sets: one missing data only at a single snapshot and the other missing data across all the snapshots.
From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data-missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots.
Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data.
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability.
Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit very good agreement with those obtained directly from NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost-effective than gappy POD at repairing spurious PIV measurements obtained from acoustically-excited, bluff-body jet flow experiments: the EM-PCA reduces the computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set with missing data scattered over the entire data set.
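The two deterministic building blocks compared throughout this abstract, POD basis extraction via the SVD and gappy POD restoration via a least-squares fit over only the known entries, can be sketched briefly. This is a minimal sketch on synthetic low-rank data; the problem sizes, rank, and missing-data mask are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: columns are snapshots of a synthetic rank-2 "flow" field
# (sizes and rank are illustrative assumptions)
n_dof, n_snap, r = 200, 30, 2
A = rng.standard_normal((n_dof, r)) @ rng.standard_normal((r, n_snap))

# POD basis = leading left singular vectors of the snapshot matrix
U, _, _ = np.linalg.svd(A, full_matrices=False)
basis = U[:, :r]

# Gappy POD: restore one snapshot with missing entries by solving a
# least-squares problem restricted to the known rows (the gappy norm)
snap = A[:, 0].copy()
mask = rng.random(n_dof) < 0.7                 # True = known entry
coeff, *_ = np.linalg.lstsq(basis[mask], snap[mask], rcond=None)
restored = basis @ coeff                       # fills in the unknown entries

err = np.max(np.abs(restored - A[:, 0]))
```

The EM-PCA discussed above instead iterates expectation and maximization steps to estimate the basis and the missing data jointly, which is what makes its cost per iteration independent of the number of data-missing snapshots.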
|
57 |
Distribución del ingreso en América Latina: caracterización de las diferencias entre países. Haimovich, Francisco, January 2008 (PDF)
This paper explores the differences between the income distributions of the urban areas of Latin America through microsimulation exercises. The main inputs for these exercises are the microdata from the household surveys of 16 countries in the region. The results indicate that cross-country differences in the returns to formal education and to unobservable factors, in terms of hourly wages, account for a large part of the differences in poverty and inequality between the economies of the region. Differences in the sectoral structure of employment, hours worked, employment, fertility, age structure, wage gaps by gender and age, and even educational structure appear, on average, to play a somewhat smaller role.
|
58 |
Reduced order modeling, nonlinear analysis and control methods for flow control problems. Kasnakoglu, Cosku, January 2007
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 135-144).
|
59 |
Neighbour-distinguishing decompositions of graphs / Décompositions de graphes voisins-distinguantes. Senhaji, Mohammed, 14 September 2018
In this thesis we explore graph decompositions under different constraints. The title is due to the fact that most of these decompositions are neighbour-distinguishing; that is, we can extract from each such decomposition a proper vertex colouring. Moreover, most of the considered decompositions are edge partitions, and can therefore be seen as edge-colourings. The main question presented in this thesis was introduced by Karoński, Łuczak and Thomason in [KLT04]: Can we weight the edges of a graph G with weights 1, 2, and 3, such that any two adjacent vertices of G are distinguished by the sums of their incident weights? This question later became the famous 1-2-3 Conjecture. In this thesis we explore several variants of the 1-2-3 Conjecture and their links with locally irregular decompositions, with an interest in both optimisation results and algorithmic problems. We first introduce an equitable version of neighbour-sum-distinguishing edge-weightings, a variant where every edge weight must be used the same number of times, up to a difference of 1. We then explore an injective variant, where each edge is assigned a different weight, which necessarily yields an equitable weighting; this gives us first general upper bounds for the equitable version. Moreover, the injective variant is also a local version of the well-known antimagic labelling. After that, we explore how neighbour-sum-distinguishing weightings behave if we require the sums of neighbouring vertices to differ by at least 2. Namely, we present results on the smallest maximal weight needed to construct such weightings for some classes of graphs, and study some algorithmic aspects of this problem. Due to the links between neighbour-sum-distinguishing edge weightings and locally irregular decompositions, we also explore the locally irregular index of subcubic graphs, along with other variants of the locally irregular decomposition problem. Finally, we present a more general work towards a theory unifying neighbour-sum-distinguishing edge-weightings and locally irregular decompositions. We also present a 2-player game version of neighbour-sum-distinguishing edge-weightings and exhibit sufficient conditions for each player to win the game.
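For concreteness, a weighting of the kind the 1-2-3 Conjecture asks for can be verified mechanically. The sketch below brute-forces a neighbour-sum-distinguishing {1, 2, 3}-weighting of a small cycle; the 5-cycle is a hypothetical example graph chosen for illustration, not one studied in the thesis.

```python
from itertools import product

# Edges of the 5-cycle C5 (hypothetical example graph)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

def is_nsd(weighting):
    """True if adjacent vertices get distinct sums of incident edge weights."""
    sums = [0] * n
    for (u, v), w in zip(edges, weighting):
        sums[u] += w
        sums[v] += w
    return all(sums[u] != sums[v] for u, v in edges)

# Search all weightings with the three weights the conjecture allows
solution = next(w for w in product((1, 2, 3), repeat=len(edges)) if is_nsd(w))
print(solution)
```

The variants studied in the thesis tighten this check: the equitable version also constrains how often each weight appears, and the 2-distinguishing version requires adjacent sums to differ by at least 2 rather than merely be distinct.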
|
60 |
Poincaré duality in equivariant intersection theory. Gonzales Vilcarromero, Richard Paul, 25 September 2017
We study the Poincaré duality map from equivariant Chow cohomology to equivariant Chow groups in the case of torus actions on complete, possibly singular, varieties with isolated fixed points. Our main results yield criteria for the Poincaré duality map to become an isomorphism in this setting. The methods rely on the localization theorem for equivariant Chow cohomology and the notion of algebraic rational cell. We apply our results to complete spherical varieties and their generalizations.
|