31 |
Decomposition Methods and Network Design Problems
Parriani, Tiziano <1984> 03 April 2014 (has links)
Decomposition-based approaches are recalled from both the primal and the dual point of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the classical aggregated-versus-disaggregated dichotomy to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. Allowing only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required to solve each of them. This trade-off is explored on several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation.
An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit.
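For intuition, the Frank-Wolfe iteration can be sketched on a toy route-assignment instance with parallel routes and affine travel-time functions. The function name, the instance, and the exact line-search step below are illustrative assumptions, not taken from the thesis:

```python
def frank_wolfe_assignment(demand, routes, n_iter=50):
    """Toy Frank-Wolfe route assignment.

    routes: list of (a, b) pairs; route i has travel time a + b * x
    when carrying flow x.  Minimizes the Beckmann objective
    sum_i (a_i * x_i + b_i * x_i**2 / 2), whose optimum is the user
    equilibrium.  The initial all-or-nothing flow plays the role of
    the first feasible solution (provided by a min-cost flow solve
    in the thesis).
    """
    n = len(routes)
    x = [0.0] * n
    x[0] = demand                      # initial feasible flow
    for _ in range(n_iter):
        grad = [a + b * xi for (a, b), xi in zip(routes, x)]
        best = min(range(n), key=lambda i: grad[i])
        y = [0.0] * n                  # all-or-nothing solution of
        y[best] = demand               # the linearized subproblem
        d = [yi - xi for yi, xi in zip(y, x)]
        # Exact line search for the quadratic Beckmann objective.
        num = -sum(g * di for g, di in zip(grad, d))
        den = sum(b * di * di for (_, b), di in zip(routes, d))
        t = 0.0 if den == 0 else max(0.0, min(1.0, num / den))
        x = [xi + t * di for xi, di in zip(x, d)]
    return x
```

For two routes with travel times 1 + x and 2 + x and a demand of 10, the method equalizes the route times at 6.5, splitting the flow as 5.5 and 4.5.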
A single-commodity robust network design problem is then addressed. Here, an undirected graph with edge costs is given together with a discrete set of balance matrices, representing different supply/demand scenarios. The goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible in every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm.
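The per-scenario feasibility test underlying such a model reduces, for a single commodity, to a max-flow computation from a super-source to a super-sink. The following is a hedged sketch (the function name and the Edmonds-Karp implementation are illustrative choices, not the thesis's algorithm):

```python
from collections import deque

def feasible_for_scenario(n, edges, balance):
    """Check whether installed edge capacities admit a flow meeting one
    supply/demand scenario.  edges: {(u, v): capacity}, each undirected
    edge usable in both directions up to its capacity; balance[v] > 0
    marks supply, < 0 demand, summing to zero over all vertices.
    Reduces to max-flow via Edmonds-Karp (BFS augmenting paths)."""
    S, T = n, n + 1
    adj = {v: {} for v in range(n + 2)}   # residual capacities

    def add(u, v, c):
        adj[u][v] = adj[u].get(v, 0) + c
        adj[v].setdefault(u, 0)

    for (u, v), c in edges.items():
        add(u, v, c)                      # undirected edge: capacity
        add(v, u, c)                      # available in both directions
    supply = 0
    for v, b in enumerate(balance):
        if b > 0:
            add(S, v, b)
            supply += b
        elif b < 0:
            add(v, T, -b)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            u = queue.popleft()
            for v, c in adj[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if T not in parent:
            break
        # Bottleneck along the path, then update residual capacities.
        path, v = [], T
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(adj[u][w] for u, w in path)
        for u, w in path:
            adj[u][w] -= push
            adj[w][u] += push
        flow += push
    return flow == supply
```

A robust design heuristic would call such a check once per scenario for a candidate capacity installation; the thesis's actual method is not reproduced here.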
Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
|
32 |
Networks, Uncertainty, Applications and a Crusade for Optimality
Alvarez Miranda, Eduardo Andre <1986> 03 April 2014 (has links)
In this thesis we address a collection of network design problems strongly motivated by applications from telecommunications, logistics and bioinformatics. In most cases we justify the need to take uncertainty in some of the problem parameters into account, and different robust optimization models are used to hedge against it. Mixed-integer linear programming formulations, along with sophisticated algorithmic frameworks, are designed, implemented and rigorously assessed for the majority of the studied problems.
The obtained results yield the following observations: (i) relevant real problems can be effectively represented as (discrete) optimization problems within the framework of network design; (ii) uncertainty can be appropriately incorporated into the decision process if a suitable robust optimization model is considered; (iii) optimal, or nearly optimal, solutions can be obtained for large instances if a tailored algorithm that exploits the structure of the problem is designed; (iv) a systematic and rigorous experimental analysis makes it possible to understand both the characteristics of the obtained (robust) solutions and the behavior of the proposed algorithms.
|
33 |
Shelter location and evacuation of people for flood events on the Peruvian coast using multi-criteria optimization methods
Velasquez Lopez, Kengy Adrian 07 August 2018 (has links)
This research arises from an analysis of disasters worldwide and in Latin America, which reveals a considerable increase in disasters caused by flooding. In Peru, the coastline of Lima and Callao concentrates a large population exposed to flooding caused by a tsunami; moreover, Lima and Callao show conditions of social vulnerability. These two factors point to the possibility of a natural disaster triggered by a tsunami. On the institutional side, INDECI has emergency prevention plans whose decision-making procedures can be improved through the use of optimization techniques.
For a possible tsunami, a study carried out by the Dirección de Hidrografía y Navegación (DHN) of the Peruvian Navy, in collaboration with the Centro Peruano Japonés de Mitigación e Investigación de Desastres (CISMID), simulated the effects of an earthquake in Metropolitan Lima. Based on that study and on information from the Sistema de Información sobre Recursos para Atención de Desastres (SIRAD), the number of affected people, the shelters for evacuation, and the warehouses from which humanitarian aid is dispatched were identified. The problem to be solved is to minimize the travel time of the affected population to the shelters and to minimize the cost of transporting humanitarian aid from the warehouses to the shelters where the affected people will be housed.
This thesis proposes multi-criteria optimization techniques to solve the mathematical model, and optimal solutions on the Pareto frontier were obtained. Then, for each proposed scenario, one optimal solution was chosen using the ELECTRE method. From the solutions found, the evacuation routes and the humanitarian-aid supply network were determined for the affected population, which totals 89,972 and 234,232 people in the first and second scenarios, respectively.
Finally, it is concluded that for an 8.5 Mw earthquake the average evacuation time is 58.59 minutes, the transport cost is S/.8,207, and the volume of humanitarian aid is 5,046 m³. For a 9.0 Mw earthquake, the average evacuation time is 128.81 minutes, the transport cost is S/.95,483, and the volume of humanitarian aid is 13,137 m³. The districts of Callao, Lurín, Chorrillos and Ventanilla do not have enough shelters to serve the affected population. / Tesis
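The bi-objective structure of this problem (evacuation time versus transport cost) rests on extracting non-dominated solutions before a method such as ELECTRE picks one per scenario. A minimal Pareto-filter sketch (function name and data are illustrative, not from the thesis):

```python
def pareto_front(solutions):
    """Return the non-dominated solutions of a minimization problem.

    Each solution is (label, objectives), where objectives is a tuple
    such as (evacuation_time, transport_cost).  Solution u dominates v
    if it is no worse in every objective and strictly better in at
    least one.
    """
    def dominates(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))

    return [(label, obj) for label, obj in solutions
            if not any(dominates(other, obj) for _, other in solutions)]
```

A hypothetical usage: among candidates ("A", (58.6, 8207)), ("B", (60.0, 8000)) and ("C", (70.0, 9000)), C is dominated by A, while A and B trade off time against cost and both remain on the front.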
|
34 |
Spatio-temporal statistical models in perimetry
Ibáñez Gual, Ma. Victoria 24 September 2003 (has links)
The development of many of the statistical models and methods we know and use today has been tied to the study of specific applications in various scientific fields. Our motivation for working with perimetric data was to build a spatio-temporal model of the evolution, both spatial (across the retina) and temporal, of the lesions that glaucoma produces in a patient's retina. Our aim throughout the thesis has been to solve several problems related to the study of glaucoma, a widespread ocular disease characterized by a gradual loss of vision that can lead to blindness. To diagnose and assess glaucoma, ophthalmologists rely mainly on the analysis of visual fields: numerical maps reporting the patient's visual sensitivity at a set of retinal locations. We have worked with databases of visual fields of a set of patients, collected in ophthalmological practice.
In this work we analyze visual fields from two different perspectives. In the first part of the thesis, we use geostatistical methodology to model the spatio-temporal distribution of visual fields of healthy patients and of patients with glaucoma. The goal of the modeling, besides describing the process, is to be able to carry out simulations and predictions from it.
In the second part of the thesis, we work with classical methods for multivariate time series. After a theoretical review of the methods to be used, we address, under a Bayesian approach, the problem of estimating whether each position of an observed visual field is healthy or damaged, and a second problem in which we build a joint spatio-temporal model to characterize the spatio-temporal distribution of visual fields of glaucoma patients and use the resulting model to make predictions.
|
35 |
Multivariate Comparisons of Random Vectors with Applications
Mulero González, Julio 25 July 2012 (has links)
The outcome of an experiment is random and is modeled by random variables and vectors. Comparisons between these random quantities are commonly based on some associated measures, but these are often not very informative. In such cases, stochastic orders provide a more complete comparison. Sometimes we are interested in studying properties of a random variable or vector, that is, its aging properties. Stochastic orders are also useful tools for characterizing and studying these aging notions. Throughout this thesis, we propose and study new multivariate orders and new aging notions to compare and classify random vectors. We also establish conditions for the comparison of random vectors in the usual stochastic order, the most important stochastic order, for the class of TTE models, which contains, for example, the family of frailty models, together with examples and applications. / The outcome of an experiment or game is random and is modeled by random variables and vectors. The comparisons between these random values are mainly based on the comparison of some measures associated with the random quantities. Stochastic orders and aging notions are a growing field of research in applied probability and statistics. In this thesis we present the main definitions and properties used throughout this memoir.
|
36 |
From optimization to listing: theoretical advances in some enumeration problems
Raffaele, Alice 30 March 2022 (has links)
The main aim of this thesis is to investigate some problems relevant in enumeration and optimization, for which I present new theoretical results. First, I focus on a classical enumeration problem in graph theory with several applications, such as network reliability. Given an undirected graph, the objective is to list all its bonds, i.e., its minimal cuts. I provide two new algorithms, the former having the same time complexity as the state of the art by [Tsukiyama et al., 1980], whereas the latter offers an improvement. Indeed, by refining the branching strategy of [Tsukiyama et al., 1980] and relying on some dynamic data structures by [Holm et al., 2001], it is possible to define an Õ(n)-delay algorithm to output each bond of the graph as a bipartition of the n vertices. Disregarding the polylogarithmic factors hidden in the Õ notation, this is the first algorithm to list bonds in time linear in the number of vertices.
Then, I move to studying two well-known problems in theoretical computer science: checking the duality of two monotone Boolean functions, and computing the dual of a monotone Boolean function. These are also relevant in many fields, such as linear programming. [Fredman and Khachiyan, 1996] developed the first quasi-polynomial-time algorithm to solve the decision problem, thus showing that it is unlikely to be coNP-complete. However, no polynomial-time algorithm has been discovered yet. Here, by focusing on the symmetry of the two input objects and exploiting the full covers introduced by [Boros and Makino, 2009], I define an alternative decomposition approach. This offers a strong bound which, however, in the worst case is still the same as [Fredman and Khachiyan, 1996]. I also show how to adapt it to obtain a polynomial-space algorithm for the dualization problem.
Finally, as extra content, this thesis contains an appendix about the topic of communicating operations research.
By starting from two side projects not related to enumeration, and by comparing some relevant considerations and opinions by researchers
and practitioners, I discuss the problem of properly promoting, fostering, and communicating findings in this research area to laypeople.
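For intuition about the listing problem above: a cut of a connected graph is a bond (minimal cut) exactly when both sides of the defining bipartition induce connected subgraphs. The following brute-force sketch is exponential in the number of vertices and purely illustrative of the definition; the thesis's algorithms achieve polynomial delay:

```python
from itertools import combinations

def list_bonds(n, edges):
    """List the bonds of a connected undirected graph on vertices
    0..n-1 by enumerating bipartitions (S, V \\ S) and keeping those
    whose two sides both induce connected subgraphs.  Exponential in
    n: an illustration only, not an efficient enumeration algorithm."""
    def connected(verts):
        verts = set(verts)
        if not verts:
            return False
        seen, stack = set(), [next(iter(verts))]
        while stack:  # DFS restricted to the induced subgraph
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            for a, b in edges:
                if a == v and b in verts and b not in seen:
                    stack.append(b)
                if b == v and a in verts and a not in seen:
                    stack.append(a)
        return seen == verts

    bonds = []
    vs = list(range(n))
    for size in range(1, n // 2 + 1):
        for side in combinations(vs, size):
            # When the sides have equal size, keep each bipartition once.
            if size == n - size and 0 not in side:
                continue
            other = [v for v in vs if v not in side]
            if connected(side) and connected(other):
                cut = frozenset(e for e in edges
                                if (e[0] in side) != (e[1] in side))
                if cut:
                    bonds.append(cut)
    return bonds
```

On a triangle, for example, every single vertex against the other two yields a bond of two edges, and on a 4-cycle every pair of cycle edges forms a bond.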
|
37 |
MCDM methods based on pairwise comparison matrices and their fuzzy extension
Krejčí, Jana January 2017 (has links)
Methods based on pairwise comparison matrices (PCMs) form a significant part of multi-criteria decision making (MCDM) methods. These methods are based on structuring pairwise comparisons (PCs) of objects from a finite set into a PCM and deriving priorities of objects that represent the relative importance of each object with respect to all other objects in the set. However, crisp PCMs are not able to capture the uncertainty that stems from the subjectivity of human thinking and from the incompleteness of information about the problem, both of which are often inherent to MCDM problems. That is why the fuzzy extension of methods based on PCMs has been of great interest. In order to derive fuzzy priorities of objects from a fuzzy PCM (FPCM), standard fuzzy arithmetic is usually applied to the fuzzy extension of the methods originally developed for crisp PCMs.
However, such an approach fails to properly handle the uncertainty of the preference information contained in the FPCM. Namely, reciprocity of the related PCs of objects in an FPCM and invariance of the given method under permutation of objects are violated when standard fuzzy arithmetic is applied to the fuzzy extension. This leads to distortion of the preference information contained in the FPCM and consequently to false results.
Thus, the first research question of the thesis is:
"Based on an FPCM of objects, how should fuzzy priorities of these objects be determined so that they properly reflect all preference information available in the FPCM?"
This research question is answered by introducing an appropriate fuzzy extension of methods originally developed for crisp PCMs, that is, a fuzzy extension that does not violate reciprocity of the related PCs or invariance under permutation of objects, and that does not lead to a redundant increase of uncertainty in the resulting fuzzy priorities of objects. Fuzzy extensions of three different types of PCMs are examined in this thesis: multiplicative PCMs, additive PCMs with additive representation, and additive PCMs with multiplicative representation. In particular, construction of PCMs, verification of consistency, and derivation of priorities of objects from PCMs are studied in detail for each of these types.
First, well-known and in practice most often applied methods based on crisp PCMs are reviewed.
Afterwards, fuzzy extensions of these methods proposed in the literature are reviewed in detail and their drawbacks regarding the violation of reciprocity of the related PCs and of invariance under permutation of objects are pointed out. It is shown that these drawbacks can be overcome by properly applying constrained fuzzy arithmetic instead of standard fuzzy arithmetic to the computations.
In particular, we always have to look at a FPCM as a set of PCMs with different degrees of membership to the FPCM, i.e. we always have to consider only PCs that are mutually reciprocal. Constrained fuzzy arithmetic allows us to impose the reciprocity of the related PCs as a constraint on arithmetic operations with fuzzy numbers, and its appropriate application also guarantees invariance of the methods under permutation of objects.
Finally, new fuzzy extensions of the methods are proposed based on constrained fuzzy arithmetic and it is proved that these methods do not violate the reciprocity of the related PCs and are invariant under permutation of objects.
Because of these desirable properties, fuzzy priorities of objects obtained by the methods proposed in this thesis reflect the preference information contained in fuzzy PCMs better in comparison to the fuzzy priorities obtained by the methods based on standard fuzzy arithmetic.
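For reference, one standard crisp prioritization method that such fuzzy extensions build upon is the row geometric mean for multiplicative PCMs. A minimal sketch of the crisp case only (the thesis's actual contribution, the fuzzy extension via constrained fuzzy arithmetic, is not attempted here; the function name is illustrative):

```python
import math

def geometric_mean_priorities(pcm):
    """Priorities of objects from a crisp multiplicative PCM via the
    row geometric mean, normalized to sum to 1.  pcm[i][j] expresses
    how many times object i is preferred to object j; reciprocity
    means pcm[j][i] == 1 / pcm[i][j]."""
    n = len(pcm)
    gm = [math.prod(row) ** (1.0 / n) for row in pcm]
    total = sum(gm)
    return [g / total for g in gm]
```

For a perfectly consistent PCM built from underlying weights (pcm[i][j] = w_i / w_j), the method recovers the weights exactly, which makes it a convenient sanity check.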
Besides the inability to capture uncertainty, methods based on PCMs are also unable to cope with situations where it is not possible or reasonable to obtain complete preference information from decision makers (DMs). This problem occurs especially in situations involving large-dimensional PCMs.
When dealing with incomplete large-dimensional PCMs, a compromise between reducing the number of PCs required from the DM and obtaining reasonable priorities of objects is of paramount importance.
This leads to the second research question:
"How can the amount of preference information required from the DM in a large-dimensional PCM be reduced while still obtaining comparable priorities of objects?"
This research question is answered by introducing an efficient two-phase method. In the first phase, an interactive algorithm based on a weak-consistency condition is introduced for partially filling an incomplete PCM. This algorithm is designed to minimize the number of PCs required from the DM while providing a sufficient amount of preference information. The weak-consistency condition makes it possible to provide ranges of possible preference intensities for every missing PC in the incomplete PCM. Thus, at the end of the first phase, a PCM containing intervals for all PCs that were not provided by the DM is obtained.
Afterwards, in the second phase, the methods for obtaining fuzzy priorities of objects from fuzzy PCMs, proposed in this thesis within the answer to the first research question, are applied to derive interval priorities of objects from this incomplete PCM. The obtained interval priorities cover all weakly consistent completions of the incomplete PCM and are very narrow. The performance of the method is illustrated by a real-life case study and by simulations demonstrating that, for PCMs of dimension 15 and greater, the algorithm reduces the number of PCs required from the DM by more than 60% on average while obtaining interval priorities comparable with those obtainable from the hypothetical complete PCMs.
|
38 |
Unilateral Commitments
Briata, Federica January 2010 (has links)
The research done in this thesis is within non-cooperative games, on the following three topics:
1) Binary symmetric games.
2) Quality unilateral commitments.
3) Essentializing equilibrium concepts.
|
39 |
The algebraic representation of OWA functions in the binomial decomposition framework and its applications in large-scale problems
Nguyen, Hong Thuy January 2019 (has links)
In the context of multi-criteria decision making, ordered weighted averaging (OWA) functions play a crucial role in aggregating multiple criteria evaluations into an overall assessment that supports decision makers in reaching a decision. The determination of OWA weights is therefore an important task in this process. Solving real-life problems with a large number of OWA weights, however, can be very challenging and time-consuming. In this research we recall that OWA functions correspond to the Choquet integrals associated with symmetric capacities. Defining a general Choquet capacity on a set of n criteria requires 2^n real coefficients; Grabisch introduced the k-additive framework to reduce this exponential computational burden. We review the binomial decomposition framework with a constraint on k-additivity, whereby OWA functions can be expressed as linear combinations of the first k binomial OWA functions and the associated coefficients. In particular, we investigate the role of k-additivity in two particular cases of the binomial decomposition of OWA functions, the 2-additive and 3-additive cases, and we identify the relationship between OWA weights and the associated coefficients of the binomial decomposition. Analogously, this relationship is also studied for two well-known parametric families of OWA functions, namely the S-Gini and Lorenzen welfare functions. Finally, we propose a new approach to determining OWA weights in large-scale problems by using the binomial decomposition of OWA functions with natural constraints on k-additivity to control the complexity of the OWA weight distributions.
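The basic OWA aggregation step referred to above can be sketched directly; the function name is illustrative, and the binomial decomposition itself is not reproduced here:

```python
def owa(weights, values):
    """Ordered weighted average: sort the criterion evaluations in
    non-increasing order, then take their dot product with the weight
    vector (non-negative weights summing to 1).  Unlike a weighted
    mean, weights attach to ranks, not to specific criteria."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

Special cases make the behavior clear: weights (1, 0, ..., 0) give the maximum, (0, ..., 0, 1) the minimum, and uniform weights the arithmetic mean.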
|
40 |
A radio communication system as an alternative to satellite localization for the monitoring and localization of taxi companies' mobile units
Vilcapoma Villalba, Jhon Andrés 14 June 2019 (has links)
The purpose of this research focused on the localization systems currently available at the central bases of taxi companies, and on how to develop an alternative to the GPS system for localization and monitoring. Nowadays our city of Huancayo experiences widespread insecurity in transportation: taxi drivers are victims of crime, and criminals use various methods to infiltrate taxi companies, for example posing as drivers in order to commit offenses. Systems and informatics engineering, telecommunications engineering, and electronic engineering constitute a fundamental and strategic factor in the development of security; for this reason, a radiolocalization system was developed for the mobile units of taxi companies, addressing a fundamental need.
|