1

From optimization to listing: theoretical advances in some enumeration problems

Raffaele, Alice 30 March 2022 (has links)
The main aim of this thesis is to investigate some problems relevant to enumeration and optimization, for which I present new theoretical results. First, I focus on a classical enumeration problem in graph theory with several applications, such as network reliability. Given an undirected graph, the objective is to list all its bonds, i.e., its minimal cuts. I provide two new algorithms, the former having the same time complexity as the state of the art by [Tsukiyama et al., 1980], whereas the latter offers an improvement. Indeed, by refining the branching strategy of [Tsukiyama et al., 1980] and relying on the dynamic data structures of [Holm et al., 2001], it is possible to define an Õ(n)-delay algorithm that outputs each bond of the graph as a bipartition of the n vertices. Disregarding the polylogarithmic factors hidden in the Õ notation, this is the first algorithm to list bonds in time linear in the number of vertices. Then, I move to two well-known problems in theoretical computer science: checking the duality of two monotone Boolean functions, and computing the dual of a monotone Boolean function. These problems are also relevant in many fields, such as linear programming. [Fredman and Khachiyan, 1996] developed the first quasi-polynomial-time algorithm to solve the decision problem, thus showing that it is unlikely to be coNP-complete. However, no polynomial-time algorithm has been discovered yet. Here, by focusing on the symmetry of the two input objects and exploiting the full covers introduced by [Boros and Makino, 2009], I define an alternative decomposition approach. This offers a strong bound which, in the worst case, still matches that of [Fredman and Khachiyan, 1996]. I also show how to adapt it to obtain a polynomial-space algorithm for the dualization problem. Finally, as extra content, this thesis contains an appendix on communicating operations research. Starting from two side projects not related to enumeration, and comparing relevant considerations and opinions of researchers and practitioners, I discuss the problem of properly promoting, fostering, and communicating findings in this research area to laypeople.
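For intuition about the objects being listed (a sketch, not the thesis's Õ(n)-delay algorithm): in a connected undirected graph, a bipartition (S, V\S) induces a bond exactly when both sides induce connected subgraphs. A brute-force Python enumeration over bipartitions, exponential and for illustration only:

```python
from itertools import combinations

def is_connected(vertices, edges):
    """DFS connectivity check on the subgraph induced by `vertices`."""
    vertices = set(vertices)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b in vertices and b not in seen:
                seen.add(b); stack.append(b)
            elif b == u and a in vertices and a not in seen:
                seen.add(a); stack.append(a)
    return seen == vertices

def list_bonds(n, edges):
    """List all bonds of a connected graph on vertices 0..n-1 as
    bipartitions (S, V\\S): the cut E(S, V\\S) is minimal exactly when
    both sides induce connected subgraphs."""
    rest = list(range(1, n))      # fix vertex 0 on one side so that each
    for r in range(len(rest)):    # bipartition is produced exactly once
        for extra in combinations(rest, r):
            s = {0, *extra}
            t = set(range(n)) - s
            if is_connected(s, edges) and is_connected(t, edges):
                yield sorted(s), sorted(t)

# A 4-cycle has 6 bonds: 4 single-vertex cuts and 2 adjacent-pair cuts.
for s, t in list_bonds(4, [(0, 1), (1, 2), (2, 3), (3, 0)]):
    print(s, t)
```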
2

MCDM methods based on pairwise comparison matrices and their fuzzy extension

Krejčí, Jana January 2017 (has links)
Methods based on pairwise comparison matrices (PCMs) form a significant part of multi-criteria decision making (MCDM) methods. These methods are based on structuring pairwise comparisons (PCs) of objects from a finite set into a PCM and deriving priorities of objects that represent the relative importance of each object with respect to all other objects in the set. However, crisp PCMs are not able to capture the uncertainty stemming from the subjectivity of human thinking and from the incompleteness of information about the problem, both of which are often closely related to MCDM problems. That is why the fuzzy extension of methods based on PCMs has been of great interest. In order to derive fuzzy priorities of objects from a fuzzy PCM (FPCM), standard fuzzy arithmetic is usually applied to the fuzzy extension of the methods originally developed for crisp PCMs. However, such an approach fails to properly handle the uncertainty of the preference information contained in the FPCM. Namely, the reciprocity of the related PCs of objects in an FPCM and the invariance of the given method under permutation of objects are violated when standard fuzzy arithmetic is applied to the fuzzy extension. This leads to distortion of the preference information contained in the FPCM and consequently to false results. Thus, the first research question of the thesis is: "Based on an FPCM of objects, how should fuzzy priorities of these objects be determined so that they properly reflect all preference information available in the FPCM?" This research question is answered by introducing an appropriate fuzzy extension of methods originally developed for crisp PCMs, that is, a fuzzy extension that does not violate reciprocity of the related PCs and invariance under permutation of objects, and that does not lead to a redundant increase in the uncertainty of the resulting fuzzy priorities of objects. The fuzzy extension of three different types of PCMs is examined in this thesis: multiplicative PCMs, additive PCMs with additive representation, and additive PCMs with multiplicative representation. In particular, the construction of PCMs, the verification of consistency, and the derivation of priorities of objects from PCMs are studied in detail for each of these types. First, well-known methods based on crisp PCMs that are most often applied in practice are reviewed. Afterwards, the fuzzy extensions of these methods proposed in the literature are reviewed in detail, and their drawbacks regarding the violation of reciprocity of the related PCs and of invariance under permutation of objects are pointed out. It is shown that these drawbacks can be overcome by properly applying constrained fuzzy arithmetic instead of standard fuzzy arithmetic in the computations. In particular, we always have to look at an FPCM as a set of PCMs with different degrees of membership to the FPCM, i.e., we always have to consider only PCs that are mutually reciprocal. Constrained fuzzy arithmetic allows us to impose the reciprocity of the related PCs as a constraint on the arithmetic operations with fuzzy numbers, and its appropriate application also guarantees the invariance of the methods under permutation of objects.
Finally, new fuzzy extensions of the methods are proposed based on constrained fuzzy arithmetic, and it is proved that these methods do not violate the reciprocity of the related PCs and are invariant under permutation of objects. Because of these desirable properties, the fuzzy priorities of objects obtained by the methods proposed in this thesis reflect the preference information contained in fuzzy PCMs better than the fuzzy priorities obtained by methods based on standard fuzzy arithmetic. Besides the inability to capture uncertainty, methods based on PCMs are also unable to cope with situations where it is not possible or reasonable to obtain complete preference information from decision makers (DMs). This problem occurs especially in situations involving large-dimensional PCMs. When dealing with incomplete large-dimensional PCMs, a compromise between reducing the number of PCs required from the DM and obtaining reasonable priorities of objects is of paramount importance. This leads to the second research question: "How can the amount of preference information required from the DM in a large-dimensional PCM be reduced while still obtaining comparable priorities of objects?" This research question is answered by introducing an efficient two-phase method. Specifically, in the first phase, an interactive algorithm based on a weak-consistency condition is introduced for partially filling an incomplete PCM. This algorithm is designed so as to minimize the number of PCs required from the DM while providing a sufficient amount of preference information. The weak-consistency condition allows ranges of possible intensities of preference to be provided for every missing PC in the incomplete PCM. Thus, at the end of the first phase, a PCM containing intervals for all PCs that were not provided by the DM is obtained. Afterwards, in the second phase, the methods for obtaining fuzzy priorities of objects from fuzzy PCMs, proposed in this thesis in answer to the first research question, are applied to derive interval priorities of objects from this incomplete PCM. The obtained interval priorities cover all weakly consistent completions of the incomplete PCM and are very narrow. The performance of the method is illustrated by a real-life case study and by simulations that demonstrate the ability of the algorithm to reduce the number of PCs required from the DM in PCMs of dimension 15 and greater by more than 60% on average, while obtaining interval priorities comparable with the priorities obtainable from the hypothetical complete PCMs.
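The role of the reciprocity constraint can be sketched on a single alpha-cut, where fuzzy PCs reduce to intervals. A minimal Monte-Carlo illustration, assuming a multiplicative interval PCM and geometric-mean priorities (the bounding scheme is illustrative, not the method proposed in the thesis): only completions with a_ji = 1/a_ij are sampled, which is precisely the constraint that naive interval arithmetic ignores.

```python
import numpy as np

def gm_priorities(a):
    """Geometric-mean (row) priorities of a crisp multiplicative PCM."""
    gm = np.prod(a, axis=1) ** (1.0 / len(a))
    return gm / gm.sum()

def interval_priorities(lo, up, samples=5000, seed=0):
    """Bound the priorities over *reciprocal* completions of an interval
    PCM. Only the upper triangles of lo/up are read; the lower triangle
    is forced to a_ji = 1/a_ij (the reciprocity constraint)."""
    rng = np.random.default_rng(seed)
    n = lo.shape[0]
    iu = np.triu_indices(n, k=1)
    pmin, pmax = np.full(n, np.inf), np.full(n, -np.inf)
    for _ in range(samples):
        a = np.ones((n, n))
        vals = rng.uniform(lo[iu], up[iu])
        a[iu] = vals
        a[iu[1], iu[0]] = 1.0 / vals      # impose reciprocity
        p = gm_priorities(a)
        pmin, pmax = np.minimum(pmin, p), np.maximum(pmax, p)
    return pmin, pmax

# 3 objects; e.g. object 0 is 2-to-3 times as important as object 1.
lo = np.array([[1.0, 2.0, 4.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
up = np.array([[1.0, 3.0, 6.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])
print(interval_priorities(lo, up))
```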
3

Unilateral Commitments

Briata, Federica January 2010 (has links)
The research done in this thesis is within non-cooperative game theory, on the following three topics: 1) Binary symmetric games. 2) Quality unilateral commitments. 3) Essentializing equilibrium concepts.
4

The algebraic representation of OWA functions in the binomial decomposition framework and its applications in large-scale problems

Nguyen, Hong Thuy January 2019 (has links)
In the context of multicriteria decision making, ordered weighted averaging (OWA) functions play a crucial role in aggregating multiple criteria evaluations into an overall assessment that supports decision makers in reaching a decision. The determination of OWA weights is, therefore, an important task in this process. Solving real-life problems with a large number of OWA weights, however, can be very challenging and time-consuming. In this research we recall that OWA functions correspond to the Choquet integrals associated with symmetric capacities. Defining a general Choquet capacity on a set of n criteria requires 2^n real coefficients; Grabisch introduced the k-additive framework to reduce this exponential computational burden. We review the binomial decomposition framework with a constraint on k-additivity, whereby OWA functions can be expressed as linear combinations of the first k binomial OWA functions with associated coefficients. In particular, we investigate the role of k-additivity in two cases of the binomial decomposition of OWA functions, the 2-additive and 3-additive cases, and we identify the relationship between the OWA weights and the associated coefficients of the binomial decomposition. Analogously, this relationship is also studied for two well-known parametric families of OWA functions, namely the S-Gini and Lorenzen welfare functions. Finally, we propose a new approach to determining OWA weights in large-scale problems by using the binomial decomposition of OWA functions with natural constraints on k-additivity to control the complexity of the OWA weight distributions.
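A brief sketch of the two facts this approach rests on, under illustrative weights (the basis vectors below are stand-ins, not the binomial OWA functions themselves): an OWA function applies its weights to the sorted inputs, and it is linear in its weight vector, so fixing k basis OWA functions leaves only k coefficients to determine instead of all n weights.

```python
import numpy as np

def owa(x, w):
    """OWA aggregation: the weights apply to the inputs sorted in
    decreasing order, not to the inputs themselves."""
    xs = np.sort(np.asarray(x, float))[::-1]
    return float(np.dot(w, xs))

# OWA is linear in its weight vector (the sorting is shared), so a convex
# combination of fixed OWA functions is again an OWA function: k basis
# coefficients replace n free weights.
w1 = np.array([0.25, 0.25, 0.25, 0.25])   # arithmetic mean
w2 = np.array([1.0, 0.0, 0.0, 0.0])       # maximum
c1, c2 = 0.7, 0.3                         # example coefficients, c1 + c2 = 1
x = [0.2, 0.9, 0.5, 0.4]
combined = c1 * w1 + c2 * w2
assert abs(owa(x, combined) - (c1 * owa(x, w1) + c2 * owa(x, w2))) < 1e-12
print(owa(x, combined))                   # 0.62
```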
5

Complexity in Infinite Games on Graphs and Temporal Constraint Networks

Comin, Carlo January 2017 (has links)
This dissertation deals with a number of algorithmic problems motivated by automated temporal planning and by the formal verification of reactive and finite-state systems. In particular, we focus on game-theoretic methods in order to obtain improved complexity bounds and faster algorithms for the following models: Hyper Temporal Networks, Conditional Simple/Hyper Temporal Networks, Conditional Simple Temporal Networks with Instantaneous Reaction Time, Update Games, Explicit McNaughton-Muller Games, and Mean Payoff Games.
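The base model that the conditional and hyper variants generalize, the Simple Temporal Network, admits a compact consistency check: constraints t_j - t_i <= w form a weighted graph, and the network is consistent iff that graph has no negative cycle. A minimal sketch (standard textbook material, not the dissertation's algorithms):

```python
def stn_consistent(num_vars, constraints):
    """Check consistency of a Simple Temporal Network.
    constraints: list of (i, j, w) meaning t_j - t_i <= w.
    Consistent iff the distance graph has no negative cycle, detected
    with Bellman-Ford from a virtual source node."""
    INF = float("inf")
    # Virtual source (index num_vars) with 0-weight edges to every variable.
    edges = [(num_vars, v, 0) for v in range(num_vars)]
    edges += [(i, j, w) for i, j, w in constraints]
    dist = [INF] * (num_vars + 1)
    dist[num_vars] = 0
    for _ in range(num_vars):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more relaxation pass: any improvement implies a negative cycle.
    return all(dist[u] + w >= dist[v] for u, v, w in edges if dist[u] < INF)

# t1 - t0 <= 10 and t1 - t0 >= 3: consistent.
print(stn_consistent(2, [(0, 1, 10), (1, 0, -3)]))   # True
# Contradictory: t1 - t0 <= 2 and t1 - t0 >= 5.
print(stn_consistent(2, [(0, 1, 2), (1, 0, -5)]))    # False
```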
6

A Reactive Search Optimization approach to interactive decision making

Campigotto, Paolo January 2011 (has links)
Reactive Search Optimization (RSO) advocates the integration of learning techniques into search heuristics for solving complex optimization problems. In recent years, RSO has mostly been employed to self-adapt a local search method based on the previous history of the search. The learning signals consist of data about structural characteristics of the instance collected while the algorithm is running: for example, the sizes of basins of attraction, the entrapment of trajectories, and repetitions of previously visited configurations. In this context, the algorithm learns by interacting with a previously unknown environment given by an existing (and fixed) problem definition. This thesis considers a second interesting online learning loop, where the source of learning signals is the decision maker, who is fine-tuning her preferences (formalized as a utility function) based on a learning process triggered by the presentation of tentative solutions. The objective function and, more generally, the problem definition are not fully stated at the beginning and need to be refined during the search for a satisfying solution. In practice, this lack of complete knowledge may occur for different reasons: insufficient or costly knowledge elicitation, soft constraints which are in the mind of the decision maker, revision of preferences after becoming aware of some possible solutions, etc. The work developed in the thesis can be classified within the well-known paradigm of Interactive Decision Making (IDM). In particular, it considers interactive optimization from a machine learning perspective, where IDM is seen as a joint learning process involving the optimization component and the DM herself. During the interactive process, on one hand, the decision maker improves her knowledge about the problem in question and, on the other hand, the preference model learnt by the optimization component evolves in response to the additional information provided by the user. We believe that understanding the interplay between these two learning processes is essential to improve the design of interactive decision making systems. This thesis goes in this direction, 1) by considering a final user who may change her preferences as a result of deeper knowledge of the problem and who may occasionally provide inconsistent feedback during the interactive process, and 2) by introducing two IDM techniques that can learn an arbitrary preference model under these changing and noisy conditions. The investigation is performed within two different problem settings: traditional multi-objective optimization and a constraint-based formulation of the DM's preferences. In both cases, the ultimate goal of the IDM algorithm developed is the identification of the solution preferred by the final user. This task is accomplished by alternating a learning phase, which generates an approximate model of the user's preferences, with an optimization stage, which identifies the optimizers of the current model. The current tentative solutions are then evaluated by the final user, providing additional training data. However, the cognitive limitations of the user while analyzing the tentative solutions demand minimizing the amount of elicited information. This requires a shift of paradigm with respect to standard machine learning strategies, in order to model the relevant areas of the optimization surface rather than reconstruct it entirely.
In our approach this shift is obtained both by applying well-known active learning principles during the learning phase and by a suitable trade-off between diversification and intensification of the search during the optimization stage.
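A minimal sketch of the alternating learn/optimize loop described above, assuming a linear surrogate utility fitted by least squares and noisy scores standing in for the DM's feedback; all names are illustrative, and a real system would add an active-learning criterion for choosing queries:

```python
import numpy as np

def interactive_loop(candidates, user_score, rounds=6, noise=0.05, seed=0):
    """Alternate (1) fitting a linear surrogate of the user's utility on
    the solutions evaluated so far with (2) querying the optimizer of the
    current surrogate. Noisy scores model occasional inconsistent feedback."""
    rng = np.random.default_rng(seed)
    asked, scores = [int(rng.integers(len(candidates)))], []
    for _ in range(rounds):
        x = candidates[asked[-1]]
        scores.append(user_score(x) + rng.normal(0.0, noise))  # learning phase
        w, *_ = np.linalg.lstsq(candidates[asked], np.array(scores), rcond=None)
        asked.append(int(np.argmax(candidates @ w)))           # optimization stage
    return candidates[asked[-1]]

# Toy run: 3 features, hidden linear utility the surrogate must recover.
cands = np.random.default_rng(1).uniform(size=(200, 3))
best = interactive_loop(cands, lambda x: x @ np.array([0.2, 0.5, 0.3]))
print(best)
```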
7

Development of innovative tools for multi-objective optimization of energy systems

Mahbub, Md Shahriar January 2017 (has links)
From the industrial revolution to the present day, fossil fuels have been the main sources for ensuring energy supply. Fossil fuel usage has negative effects on the environment, highlighted by several local and international policy initiatives in support of the big energy transition. These effects urge energy planners to integrate renewable energies into the corresponding energy systems. However, large-scale incorporation of renewable energies into these systems is difficult because of intermittent behavior, limited availability, and economic barriers. It requires intricate balancing among different energy-producing resources and synergies among all the major energy sectors. Although it is possible to evaluate a given energy scenario (a complete set of parameters describing a system) by using a simulation model, identifying optimal energy scenarios with respect to multiple objectives is very difficult to accomplish. In addition, no generalized optimization framework is available that can handle all major sectors of an energy system. In this regard, we propose a complete generalized framework for identifying scenarios with respect to multiple objectives. The framework is developed by coupling a multi-objective evolutionary algorithm and EnergyPLAN. The results show that the tool has the capability to handle multiple energy sectors together; moreover, a number of optimized trade-off scenarios are identified. Furthermore, several improvements are proposed to the framework for finding better-optimized scenarios in a computationally efficient way. The framework is applied to two different real-world energy system optimization problems. The results show that the framework is capable of identifying optimized scenarios both for recent demands and for projected demands. The proposed framework and the corresponding improvements provide a complete tool for policy makers for designing optimized energy scenarios. The tool can handle all major energy sectors and can be applied in short- and long-term energy planning.
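The coupling pattern can be sketched as: generate candidate scenarios, evaluate each with the simulator, and keep the non-dominated trade-offs. Here the simulate function is only a stand-in for an EnergyPLAN run, with an invented cost/CO2 model, and random sampling stands in for the evolutionary algorithm's variation step:

```python
import numpy as np

def simulate(scenario):
    """Placeholder for a simulator call (EnergyPLAN in the thesis); a toy
    model returning (annual cost, CO2 emissions) for illustration only."""
    wind, pv = scenario
    cost = 1.2 * wind + 1.0 * pv + 5.0 / (0.1 + wind + pv)  # capacity + backup
    co2 = 10.0 / (1.0 + wind + pv)                          # displaced fossil
    return np.array([cost, co2])

def pareto_front(points):
    """Indices of non-dominated points (minimization in every objective)."""
    keep = []
    for i, p in enumerate(points):
        if not any(np.all(q <= p) and np.any(q < p)
                   for j, q in enumerate(points) if j != i):
            keep.append(i)
    return keep

rng = np.random.default_rng(1)
scenarios = rng.uniform(0, 5, size=(200, 2))   # random (wind, pv) capacities
objs = np.array([simulate(s) for s in scenarios])
front = pareto_front(objs)
print(f"{len(front)} non-dominated scenarios out of {len(scenarios)}")
```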
8

Knowledge and Artifact Representation in the Scientific Lifecycle

Chenu-Abente Acosta, Ronald January 2012 (has links)
This thesis introduces SKOs (Scientific Knowledge Objects), a specification for capturing the knowledge and artifacts that are produced by scientific research processes. Aiming to address the current limitations of scientific production, the specification focuses on reducing the work overhead of scientific creation, being composable and reusable, allowing continuous evolution, and facilitating collaboration and discovery among researchers. To do so, it introduces four layers that capture different aspects of scientific knowledge: content, meaning, ordering, and visualization.
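A hypothetical sketch of how the four layers might be carried by a single object; field names and types are illustrative assumptions, not the thesis's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SKO:
    """Illustrative container for the four layers named above."""
    content: dict = field(default_factory=dict)        # papers, datasets, files
    meaning: dict = field(default_factory=dict)        # semantic annotations
    ordering: list = field(default_factory=list)       # discourse/reading order
    visualization: dict = field(default_factory=dict)  # presentation templates

sko = SKO(content={"paper.pdf": b""}, ordering=["intro", "method", "results"])
print(sko.ordering)
```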
9

Assessing solar radiation components over the alpine region: Advanced modeling techniques for environmental and technological applications

Castelli, Mariapina January 2015 (has links)
This thesis examines various methods for estimating the spatial distribution of solar radiation, and in particular its diffuse and direct components, in mountainous regions. The study area is the Province of Bolzano (Italy). The motivation behind this work is that radiation components are an essential input for a series of applications, such as modeling various natural processes, assessing the effect of atmospheric pollutants on Earth's climate, and planning technological applications that convert solar energy into electric power. The main mechanisms that should be considered when estimating solar radiation are absorption and scattering by clouds and aerosols, and shading, reflections, and sky obstructions by terrain. Ground-based measurements capture all these effects, but are unevenly distributed and poorly available in the Italian Alps. Consequently, they are inadequate for assessing spatially distributed incoming radiation through interpolation. Furthermore, conventional weather stations generally do not measure radiation components. As an alternative, decomposition methods can be applied to split global irradiance into its direct and diffuse components. In this study a logistic function was developed from data measured at three alpine sites in Italy and Switzerland. The validation of this model gave MAB = 51 Wm^-2 and MBD = -17 Wm^-2 for the hourly averages of diffuse radiation. In addition, artificial intelligence methods, such as artificial neural networks (ANN), can be applied to reproduce the functional relationship between radiation components and meteorological and geometrical factors. Here a multilayer perceptron ANN model was implemented which derives diffuse irradiance from global irradiance and other predictors. Results show good accuracy (MAB in [32,43] Wm^-2 and MBD in [-25,-7] Wm^-2), suggesting that ANNs are an interesting tool for decomposing solar radiation into its direct and diffuse components, reaching low error and high generality. On the other hand, radiative transfer models (RTM) can accurately describe the effect of aerosols and clouds. Indeed, in this study the RTM libRadtran was exploited to calculate vertical profiles of direct aerosol radiative forcing, atmospheric absorption, and heating rate from measurements of black carbon, aerosol number size distribution, and chemical composition. This allowed modeling the effect of aerosols on radiation and climate. However, despite their flexibility in including as much information as is available on the atmosphere, RTMs are computationally expensive, so their operational application requires optimization strategies. Algorithms based on satellite data can overcome these limitations. They exploit RTM-based look-up tables for modeling clear-sky radiation, and derive the radiative effect of clouds from remote observations of reflected radiation. However, results strongly depend on the spatial resolution of the satellite data and on the accuracy of the external input. In this thesis the algorithm HelioMont, developed by MeteoSwiss, was validated at three alpine locations. This algorithm exploits METEOSAT satellite data with high temporal resolution and a spatial resolution of 1 km at nadir. Results indicate that the algorithm is able to provide monthly climatologies of both global irradiance and its components over complex terrain with an error of 10 Wm^-2. However, the estimation of the diffuse and direct components of irradiance on daily and hourly time scales is associated with an error exceeding 50 Wm^-2, especially under clear-sky conditions.
This problem is attributable to the low spatial and temporal resolution of the aerosol distribution in the atmosphere used in the clear-sky scheme. To quantify the potential improvement, daily averages of accurate aerosol and water vapor data were exploited at the AERONET stations of Bolzano and Davos. Clear-sky radiation was simulated with the RTM libRadtran, and low bias values were found between the RTM simulations and ground measurements. This confirmed that HelioMont's performance would benefit from more accurate local-scale aerosol boundary conditions. In summary, the analysis of the different methods demonstrates that algorithms based on geostationary satellite data are a suitable tool for reproducing both the temporal and the spatial variability of surface radiation at the regional scale. However, better performance is achievable with a more detailed characterization of local-scale clear-sky atmospheric conditions. In contrast, for plot-scale applications, either the logistic function or an ANN can be used to retrieve the solar radiation components.
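A minimal sketch of the logistic decomposition idea described above, estimating the diffuse fraction of global irradiance from the clearness index; the coefficients are illustrative placeholders, not the values fitted in the thesis:

```python
import numpy as np

def diffuse_fraction(kt, b0=5.0, b1=8.6):
    """Logistic model of the diffuse fraction d as a function of the
    clearness index kt (= global / extraterrestrial horizontal irradiance).
    b0, b1 are placeholder coefficients, not the thesis's fitted values."""
    return 1.0 / (1.0 + np.exp(-b0 + b1 * np.asarray(kt, float)))

ghi = 600.0    # measured global horizontal irradiance, W m^-2
kt = 0.65      # clearness index for that hour
d = diffuse_fraction(kt)
diffuse, direct = d * ghi, (1.0 - d) * ghi   # split into components
print(f"diffuse = {diffuse:.0f} W m^-2, direct = {direct:.0f} W m^-2")
```

High kt (clear sky) drives the logistic toward a small diffuse fraction, low kt (overcast) toward a fraction near one, which is the qualitative behavior a decomposition model must reproduce.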
10

Essays on Boundedly Rational Decision-making: Theory, Applications, and Experiments

Papi, Mauro January 2011 (has links)
We investigate individual decision-making by adopting Simon's view of bounded rationality.
