121 |
Robust control strategies for mean-field collective dynamics. Segala, Chiara. 30 June 2022 (has links)
The main topic of the thesis is the synthesis of control laws for interacting agent-based dynamics and their mean-field limit. In particular, after a general introduction, in the second chapter a linearization-based approach is used for the computation of sub-optimal feedback laws obtained from the solution of differential matrix Riccati equations. Quantification of the dynamic performance of such control laws leads to theoretical estimates on suitable linearization points of the nonlinear dynamics. Subsequently, the feedback laws are embedded into a nonlinear model predictive control framework where the control is updated adaptively in time according to dynamic information on moments of linear mean-field dynamics. The performance and robustness of the proposed methodology are assessed through different numerical experiments in collective dynamics. The remaining chapters present related projects: robustness of systems with uncertainties, a proximal gradient approach for sparse control, and an application in crowd evacuation dynamics.
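As a rough illustration of the linearization-based idea (a minimal sketch with made-up scalar values, not the thesis's differential matrix Riccati setting), the stabilizing feedback gain can be computed from the algebraic Riccati equation in the scalar, time-invariant case:

```python
import math

# Scalar linearized dynamics dx/dt = a*x + b*u with cost integral of q*x^2 + r*u^2.
# All parameter values below are illustrative, not taken from the thesis.
a, b, q, r = 0.5, 1.0, 1.0, 1.0

# Scalar algebraic Riccati equation: 2*a*P - (b**2/r)*P**2 + q = 0.
# The positive root is the stabilizing solution.
c = b**2 / r
P = (2*a + math.sqrt(4*a**2 + 4*c*q)) / (2*c)

K = b * P / r              # feedback gain, control law u = -K*x
closed_loop = a - b * K    # closed-loop rate, must be negative (stable)
print(closed_loop < 0)     # True
```

In the matrix-valued, time-varying setting of the thesis, the same structure appears with a differential matrix Riccati equation integrated backward in time around the chosen linearization point.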
|
122 |
From optimization to listing: theoretical advances in some enumeration problems. Raffaele, Alice. 30 March 2022 (has links)
The main aim of this thesis is to investigate some problems relevant in enumeration and optimization, for which I present new theoretical results. First, I focus on a classical enumeration problem in graph theory with several applications, such as network reliability. Given an undirected graph, the objective is to list all its bonds, i.e., its minimal cuts. I provide two new algorithms, the former having the same time complexity as the state of the art by [Tsukiyama et al., 1980], whereas the latter offers an improvement. Indeed, by refining the branching strategy of [Tsukiyama et al., 1980] and relying on some dynamic data structures by [Holm et al., 2001], it is possible to define an Õ(n)-delay algorithm to output each bond of the graph as a bipartition of the n vertices. Disregarding the polylogarithmic factors hidden in the Õ notation, this is the first algorithm to list bonds in time linear in the number of vertices. Then, I move to studying two well-known problems in theoretical computer science, namely checking the duality of two monotone Boolean functions and computing the dual of a monotone Boolean function. These problems, too, are relevant in many fields, such as linear programming. [Fredman and Khachiyan, 1996] developed the first quasi-polynomial time algorithm to solve the decision problem, thus proving that it is not coNP-complete. However, no polynomial-time algorithm has been discovered yet. Here, by focusing on the symmetry of the two input objects and exploiting the full covers introduced by [Boros and Makino, 2009], I define an alternative decomposition approach. This offers a strong bound which, however, in the worst case, is still the same as [Fredman and Khachiyan, 1996]. Nevertheless, I also show how to adapt it to obtain a polynomial-space algorithm for the dualization problem. Finally, as extra content, this thesis contains an appendix about the topic of communicating operations research.
By starting from two side projects not related to enumeration, and by comparing some relevant considerations and opinions by researchers
and practitioners, I discuss the problem of properly promoting, fostering, and communicating findings in this research area to laypeople.
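The notion of bond used in the first part of the abstract can be illustrated with a brute-force reference listing (exponential, unlike the Õ(n)-delay algorithm of the thesis): in a connected graph, the cut between a bipartition (S, V \ S) is a bond exactly when both sides induce connected subgraphs.

```python
from itertools import combinations

def is_connected(vertices, edges):
    """DFS connectivity check on the subgraph induced by `vertices`."""
    vertices = set(vertices)
    if not vertices:
        return False
    adj = {v: [] for v in vertices}
    for u, v in edges:
        if u in vertices and v in vertices:
            adj[u].append(v)
            adj[v].append(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == vertices

def bonds(n, edges):
    """List all bonds of a connected graph on vertices 0..n-1 as bipartitions."""
    out = []
    verts = list(range(n))
    # Fix vertex 0 on one side so each bipartition is enumerated once.
    for k in range(1, n):
        for rest in combinations(verts[1:], k - 1):
            side = {0, *rest}
            other = set(verts) - side
            # The cut between two connected halves of a connected graph is a bond.
            if is_connected(side, edges) and is_connected(other, edges):
                out.append((frozenset(side), frozenset(other)))
    return out

# 4-cycle: every bond is a pair of cycle edges, so there are C(4,2) of them.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(bonds(4, cycle)))  # → 6
```

This checks the definition directly; the thesis's contribution is listing the same objects with only Õ(n) delay between consecutive outputs.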
|
123 |
Human Behavior in Epidemic Modelling. Poletti, Piero. January 2010 (has links)
Mathematical models represent a powerful tool for investigating the dynamics of human infectious diseases, providing useful predictions about the spread of a disease and the effectiveness of possible control measures.
One of the central aspects in understanding the dynamics of human infections is the heterogeneity of behavioral patterns adopted by the host population. Beyond control measures imposed by public authorities, human behavioral changes can be triggered by uncoordinated responses driven by the diffusion of fear in the general population or by risk perception.
In order to assess how and when behavioral changes can affect the spread of an epidemic, spontaneous social distancing - e.g. produced by avoiding crowded environments, using face masks or limiting travel - is investigated. Moreover, in order to assess whether vaccine-preventable diseases can be eliminated through non-compulsory vaccination programs, vaccination choices are investigated as well.
The proposed models are based on an evolutionary game theory framework. Considering dynamical games allows explicitly modeling the coupled dynamics of disease transmission and human behavioral changes. Specifically, the information diffusion is modeled through an imitation process in which the convenience of different behaviors depends on the perceived risk of infection and vaccine side effects. The proposed models allow the investigation of the effects of misperception of risks induced by partial, delayed or incorrect information (either concerning the state of the epidemic or vaccine side effects) as well.
The performed investigation highlights that a small reduction in the number of potentially infectious contacts in response to an epidemic, and an initial misperception of the risk of infection, can remarkably affect the spread of infection. On the other hand, the analysis of vaccination choices shows that concerns about proclaimed risks of vaccine side effects can result in widespread refusal of vaccination, which in turn leads to drops in vaccine uptake and suboptimal vaccination coverage.
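A minimal sketch of the kind of coupled model described above (SIR transmission joined to imitation dynamics for the fraction of the population adopting social distancing), integrated with forward Euler and purely illustrative parameter values:

```python
# Illustrative parameters (not from the thesis). Distancers reduce their
# transmission by a factor eta; the imitation term rewards distancing when
# the perceived infection risk (proportional to prevalence i) is high.
beta, gamma, eta = 0.5, 0.2, 0.5   # transmission, recovery, contact reduction
kappa, cost = 10.0, 0.1            # imitation speed, cost of distancing
s, i, x = 0.99, 0.01, 0.01         # susceptible, infected, distancing fraction
dt = 0.01

for _ in range(20000):             # simulate 200 time units
    beta_eff = beta * (1 - eta * x)       # population-average transmission rate
    payoff_gain = kappa * i - cost        # perceived benefit of distancing
    ds = -beta_eff * s * i
    di = beta_eff * s * i - gamma * i
    dx = x * (1 - x) * payoff_gain        # imitation (replicator-type) dynamics
    s += dt * ds; i += dt * di; x += dt * dx

print(0 <= s <= 1 and 0 <= i <= 1 and 0 <= x <= 1)  # True
```

The x(1-x) factor keeps the behavioral fraction in [0, 1], and the coupling runs both ways: prevalence drives behavior through the payoff, and behavior feeds back into the effective transmission rate.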
|
124 |
MCDM methods based on pairwise comparison matrices and their fuzzy extension. Krejčí, Jana. January 2017 (has links)
Methods based on pairwise comparison matrices (PCMs) form a significant part of multi-criteria decision making (MCDM) methods. These methods are based on structuring pairwise comparisons (PCs) of objects from a finite set of objects into a PCM and deriving priorities of objects that represent the relative importance of each object with respect to all other objects in the set. However, the crisp PCMs are not able to capture uncertainty stemming from subjectivity of human thinking and from incompleteness of information about the problem that are often closely related to MCDM problems. That is why the fuzzy extension of methods based on PCMs has been of great interest. In order to derive fuzzy priorities of objects from a fuzzy PCM (FPCM), standard fuzzy arithmetic is usually applied to the fuzzy extension of the methods originally developed for crisp PCMs.
However, such an approach fails to properly handle the uncertainty of the preference information contained in the FPCM. Namely, reciprocity of the related PCs of objects in an FPCM and invariance of the given method under permutation of objects are violated when standard fuzzy arithmetic is applied to the fuzzy extension. This leads to distortion of the preference information contained in the FPCM and, consequently, to false results.
Thus, the first research question of the thesis is:
"Based on an FPCM of objects, how should fuzzy priorities of these objects be determined so that they properly reflect all preference information available in the FPCM?"
This research question is answered by introducing an appropriate fuzzy extension of the methods originally developed for crisp PCMs, that is, a fuzzy extension that does not violate reciprocity of the related PCs or invariance under permutation of objects, and that does not lead to a redundant increase in the uncertainty of the resulting fuzzy priorities of objects. Fuzzy extensions of three different types of PCMs are examined in this thesis - multiplicative PCMs, additive PCMs with additive representation, and additive PCMs with multiplicative representation. In particular, the construction of PCMs, the verification of consistency, and the derivation of priorities of objects from PCMs are studied in detail for each of these types.
First, the best-known and in practice most often applied methods based on crisp PCMs are reviewed.
Afterwards, fuzzy extensions of these methods proposed in the literature are reviewed in detail and their drawbacks regarding the violation of reciprocity of the related PCs and of invariance under permutation of objects are pointed out. It is shown that these drawbacks can be overcome by properly applying constrained fuzzy arithmetic instead of standard fuzzy arithmetic to the computations.
In particular, we always have to look at an FPCM as a set of PCMs with different degrees of membership to the FPCM, i.e. we always have to consider only PCs that are mutually reciprocal. Constrained fuzzy arithmetic allows us to impose the reciprocity of the related PCs as a constraint on arithmetic operations with fuzzy numbers, and its appropriate application also guarantees invariance of the methods under permutation of objects.
Finally, new fuzzy extensions of the methods are proposed based on constrained fuzzy arithmetic and it is proved that these methods do not violate the reciprocity of the related PCs and are invariant under permutation of objects.
Because of these desirable properties, fuzzy priorities of objects obtained by the methods proposed in this thesis reflect the preference information contained in fuzzy PCMs better than the fuzzy priorities obtained by the methods based on standard fuzzy arithmetic.
Besides the inability to capture uncertainty, methods based on PCMs are also not able to cope with situations where it is not possible or reasonable to obtain complete preference information from decision makers (DMs). This problem occurs especially in situations involving large-dimensional PCMs.
When dealing with incomplete large-dimensional PCMs, compromise between reducing the number of PCs required from the DM and obtaining reasonable priorities of objects is of paramount importance.
This leads to the second research question:
"How can the amount of preference information required from the DM in a large-dimensional PCM be reduced while still obtaining comparable priorities of objects?"
This research question is answered by introducing an efficient two-phase method. Specifically, in the first phase, an interactive algorithm based on weak-consistency condition is introduced for partially filling an incomplete PCM. This algorithm is designed in such a way that minimizes the number of PCs required from the DM and provides sufficient amount of preference information at the same time. The weak-consistency condition allows for providing ranges of possible intensities of preference for every missing PC in the incomplete PCM. Thus, at the end of the first phase, a PCM containing intervals for all PCs that were not provided by the DM is obtained.
Afterward, in the second phase, the methods for obtaining fuzzy priorities of objects from fuzzy PCMs proposed in this thesis within the answer to the first research question are applied to derive interval priorities of objects from this incomplete PCM. The obtained interval priorities cover all weakly consistent completions of the incomplete PCM and are very narrow. The performance of the method is illustrated by a real-life case study and by simulations that demonstrate the ability of the algorithm to reduce the number of PCs required from the DM in PCMs of dimension 15 and greater by more than 60% on average, while obtaining interval priorities comparable with the priorities obtainable from the hypothetical complete PCMs.
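For the crisp case, one standard way to derive priorities from a multiplicative PCM is the geometric-mean method; a small sketch with an illustrative 3x3 matrix (not from the thesis) is given below. The fuzzy extensions discussed above replace such crisp computations with constrained fuzzy arithmetic.

```python
import math

def gm_priorities(pcm):
    """Normalized geometric-mean priorities for a crisp multiplicative PCM."""
    n = len(pcm)
    gm = [math.prod(row) ** (1.0 / n) for row in pcm]
    total = sum(gm)
    return [g / total for g in gm]

# Reciprocal PCM on 3 objects: a_ij expresses how strongly object i is
# preferred to object j, with a_ji = 1 / a_ij (reciprocity) and a_ii = 1.
pcm = [[1,   3,   5],
       [1/3, 1,   2],
       [1/5, 1/2, 1]]

w = gm_priorities(pcm)
print([round(v, 3) for v in w])  # priorities sum to 1, in decreasing order here
```

In the fuzzy setting, each entry becomes a fuzzy number and reciprocity must be imposed as a constraint on the arithmetic, which is exactly what standard fuzzy arithmetic fails to do.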
|
125 |
Arbitrary high order discontinuous Galerkin methods for the shallow water and incompressible Navier-Stokes equations on unstructured staggered meshes. Tavelli, Maurizio. January 2016 (has links)
In this work we present a new class of well-balanced, arbitrary high order accurate semi-implicit discontinuous Galerkin methods for the solution of the shallow water and incompressible Navier-Stokes equations on staggered unstructured curved meshes. Isoparametric finite elements are used to take into account curved domain boundaries. Regarding the two-dimensional shallow water equations, the discrete free surface elevation is defined on a primal triangular grid, while the discrete total height and the discrete velocity field are defined on an edge-based staggered dual grid. Similarly, for the two-dimensional incompressible Navier-Stokes case, the discrete pressure is defined on the main triangular grid and the velocity field is defined on the edge-based staggered grid. While staggered meshes are state of the art in classical finite difference approximations of the incompressible Navier-Stokes equations, their use in the context of high order DG schemes is novel and still quite rare. High order (better than second order) accuracy in time can be achieved by using a space-time finite element framework, where the basis and test functions are piecewise polynomials in both space and time. Formal substitution of the discrete momentum equation on the dual grid into the discrete continuity equation on the primal grid yields a very sparse system for the scalar pressure involving only the direct neighbor elements, so that it becomes a block four-point system in 2D and a block five-point system for 3D tetrahedral meshes. The resulting linear system is conveniently solved with a matrix-free GMRES algorithm. Note that the same space-time DG scheme on a collocated grid would lead to ten non-zero blocks per element in 2D and seventeen non-zero blocks in 3D, since substituting the discrete velocity into the discrete continuity equation on a collocated mesh would also involve neighbors of neighbors.
From numerical experiments we find that our linear system is well-behaved and that the GMRES method converges quickly even without the use of any preconditioner, which is a unique feature in the context of high order implicit DG schemes. A very simple and efficient Picard iteration is then used in order to derive a space-time pressure correction algorithm that also achieves high order of accuracy in time, which is in general a non-trivial task in the context of high order discretizations for the incompressible Navier-Stokes equations. The special case of high order in space, low order in time allows us to recover further structure in the main linear system for the pressure, such as symmetry and positive semi-definiteness in the general case. This allows us to use a very fast linear solver such as the conjugate gradient (CG) method. The flexibility and accuracy of high order space-time DG methods on curved unstructured meshes makes it possible to discretize even complex physical domains with very coarse grids in both space and time. We further extend the previous method to the three-dimensional incompressible Navier-Stokes system using a tetrahedral main grid and a corresponding face-based hexahedral dual grid. The resulting dual mesh consists of non-standard 5-vertex hexahedral elements that cannot be represented using tensor products of one-dimensional basis functions. Hence, a modal polynomial basis is used for the dual mesh. This new family of numerical schemes is verified by solving a series of typical numerical test problems and by comparing the obtained numerical results with available exact analytical solutions or other numerical reference data. Furthermore, a comparison with available experimental results is presented for the incompressible Navier-Stokes equations.
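The symmetric positive (semi-)definite special case mentioned above lends itself to a matrix-free Krylov solver. A toy sketch of matrix-free conjugate gradient on a 1D discrete Laplacian (a stand-in for the pressure system, chosen for illustration) shows the key point: the matrix is applied only through a matvec and never assembled.

```python
def apply_laplacian(p):
    """Matvec for the 1D discrete Laplacian with Dirichlet ends (SPD)."""
    n = len(p)
    out = [0.0] * n
    for i in range(n):
        out[i] = 2.0 * p[i]
        if i > 0:
            out[i] -= p[i - 1]
        if i < n - 1:
            out[i] -= p[i + 1]
    return out

def cg(matvec, b, tol=1e-10, maxit=1000):
    """Conjugate gradient for SPD systems, given only the matvec."""
    x = [0.0] * len(b)
    r = b[:]                      # residual, since x0 = 0
    d = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ad = matvec(d)
        alpha = rs / sum(di * Adi for di, Adi in zip(d, Ad))
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * Adi for ri, Adi in zip(r, Ad)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol ** 2:
            break
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return x

b = [1.0] * 8
p = cg(apply_laplacian, b)
residual = max(abs(ai - bi) for ai, bi in zip(apply_laplacian(p), b))
print(residual < 1e-8)  # True
```

The DG pressure system of the thesis is block-sparse rather than tridiagonal, but the solver interaction is the same: GMRES or CG only ever asks for products of the system matrix with a vector.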
|
126 |
Intrinsic Differentiability and Intrinsic Regular Surfaces in Carnot groups. Di Donato, Daniela. January 2017 (has links)
The main object of our research is the notion of "intrinsic regular surfaces" introduced and studied by Franchi, Serapioni, Serra Cassano in a Carnot group G. More precisely, an intrinsic regular hypersurface (i.e. a topological codimension 1 surface) S is a subset of G which is locally defined as a non-critical level set of a C^1 intrinsic function. In a similar way, a k-codimensional intrinsic regular surface is locally defined as a non-critical level set of a C^1 intrinsic vector function. Through the Implicit Function Theorem, S can be locally represented as an intrinsic graph by a function phi. Here the intrinsic graph is defined as follows: let V and W be complementary subgroups of G; then the intrinsic graph of phi defined from W to V is the set { A · phi(A) : A belongs to W }, where · indicates the group operation in G. A fine characterization of intrinsic regular surfaces in Heisenberg groups (examples of Carnot groups) as suitable 1-codimensional intrinsic graphs has been established in [1]. We extend this result to a general Carnot group by introducing an appropriate notion of differentiability, called uniform intrinsic differentiability, for maps acting between complementary subgroups of G. Finally, we provide a characterization of intrinsic regular surfaces in terms of existence and continuity of suitable "derivatives" of phi introduced by Serra Cassano et al. in the context of Heisenberg groups. All the results have been obtained in collaboration with Serapioni. [1] L. Ambrosio, F. Serra Cassano, D. Vittone, Intrinsic regular hypersurfaces in Heisenberg groups, J. Geom. Anal. 16 (2006), 187-232.
|
127 |
On the Necessity of Complex Numbers in Quantum Mechanics. Oppio, Marco. January 2018 (has links)
In principle, the lattice of elementary propositions of a generic quantum system admits a representation in real, complex or quaternionic Hilbert spaces, as established by Solèr's theorem (1995), closing a long-standing problem that can be traced back to von Neumann's mathematical formulation of quantum mechanics. However, up to now there are no examples of quantum systems described in Hilbert spaces whose scalar field is different from the set of complex numbers. We show that elementary relativistic systems cannot be described by irreducible strongly-continuous unitary representations of SL(2, C) on real or quaternionic Hilbert spaces, as a consequence of some peculiarity of the generators related to the theory of polar decomposition of operators. Indeed, such a "naive" attempt leads necessarily to an equivalent formulation on a complex Hilbert space. Although this conclusion seems to give a definitive answer to the real/quaternionic-quantum-mechanics issue, it lacks consistency since it does not derive from more general physical hypotheses as the complex one does. Trying a more solid approach, in both situations we end up with three possibilities: an equivalent description in terms of a Wigner unitary representation in a real, complex or quaternionic Hilbert space. At this point the "naive" result turns out to be a definitely important technical lemma, for it forbids the two extreme possibilities. In conclusion, the real/quaternionic theory is actually complex. This improved approach is based upon the concept of von Neumann algebra of observables. Unfortunately, while there exists a thorough literature about these algebras on real and complex Hilbert spaces, an analysis of the notion of von Neumann algebra over a quaternionic Hilbert space is, to our knowledge, completely absent. There are several issues in trying to define such a mathematical object, first of all the inability to construct linear combinations of operators with quaternionic coefficients.
Restricting ourselves to unital real *-algebras of operators, we are able to prove the von Neumann Double Commutant Theorem also on quaternionic Hilbert spaces. Clearly, this property turns out to be crucial.
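The obstruction mentioned above, that linear combinations of operators with quaternionic coefficients are problematic, stems from the non-commutativity of the quaternions, visible already at the level of the units. A sketch using the standard 2x2 complex-matrix representation of the quaternions (chosen here purely for illustration):

```python
def matmul(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[r][t] * b[t][c] for t in range(2)) for c in range(2)]
            for r in range(2)]

# Standard representation of the quaternion units i and j as 2x2 complex matrices.
I = [[1j, 0], [0, -1j]]   # quaternion unit i
J = [[0, 1], [-1, 0]]     # quaternion unit j

IJ = matmul(I, J)
JI = matmul(J, I)

# i*j = -j*i (= k): quaternionic scalars do not commute, so "lambda * A" for a
# quaternionic lambda depends on the side of multiplication.
print(IJ == [[-z for z in row] for row in JI])  # True
```

Since scalar multiplication is side-dependent, the usual complex-linear-algebra construction of operator combinations breaks down, which is one reason a quaternionic von Neumann algebra theory has to be built differently.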
|
128 |
Unilateral Commitments. Briata, Federica. January 2010 (has links)
The research done in this thesis lies within non-cooperative game theory and concerns the following three topics:
1) Binary symmetric games.
2) Quality unilateral commitments.
3) Essentializing equilibrium concepts.
|
129 |
The algebraic representation of OWA functions in the binomial decomposition framework and its applications in large-scale problems. Nguyen, Hong Thuy. January 2019 (has links)
In the context of multicriteria decision making, the ordered weighted averaging (OWA) functions play a crucial role in aggregating multiple criteria evaluations into an overall assessment to support decision makers in reaching a decision. The determination of OWA weights is, therefore, an important task in this process. Solving real-life problems with a large number of OWA weights, however, can be very challenging and time-consuming. In this research we recall that OWA functions correspond to the Choquet integrals associated with symmetric capacities. The problem of defining all Choquet capacities on a set of n criteria requires 2^n real coefficients. Grabisch introduced the k-additive framework to reduce the exponential computational burden. We review the binomial decomposition framework with a constraint on k-additivity whereby OWA functions can be expressed as linear combinations of the first k binomial OWA functions and the associated coefficients of the binomial decomposition framework. In particular, we investigate the role of k-additivity in two particular cases of the binomial decomposition of OWA functions, the 2-additive and 3-additive cases. We identify the relationship between OWA weights and the associated coefficients of the binomial decomposition of OWA functions. Analogously, this relationship is also studied for two well-known parametric families of OWA functions, namely the S-Gini and Lorenzen welfare functions. Finally, we propose a new approach to determine OWA weights in large-scale problems by using the binomial decomposition of OWA functions with natural constraints on k-additivity to control the complexity of the OWA weight distributions.
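The OWA aggregation itself is simple to state: the weights are applied to the criteria evaluations after sorting them in decreasing order, so the weight vector interpolates between min, mean, and max. A small sketch with illustrative weights and scores:

```python
def owa(weights, values):
    """Ordered weighted average: weights applied to values sorted descending."""
    assert abs(sum(weights) - 1.0) < 1e-12  # weights must sum to 1
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

scores = [0.6, 0.9, 0.3]                       # illustrative criteria evaluations
print(owa([1.0, 0.0, 0.0], scores))            # all weight on largest -> 0.9 (max)
print(owa([0.0, 0.0, 1.0], scores))            # all weight on smallest -> 0.3 (min)
print(round(owa([1/3, 1/3, 1/3], scores), 2))  # uniform weights -> 0.6 (mean)
```

The thesis's contribution concerns how such weight vectors can be represented and determined at scale via the binomial decomposition, rather than the aggregation step shown here.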
|
130 |
Intrinsic Lipschitz graphs in Heisenberg groups and non linear sub-elliptic PDEs. Pinamonti, Andrea. January 2011 (has links)
In this thesis we study intrinsic Lipschitz functions. In particular, we provide a regular approximation result and a Poincaré-type inequality for this class of functions. Moreover, we study the obstacle problem in the Heisenberg group and we prove a geometric Poincaré inequality for a class of semilinear equations in the Engel group.
|