1

Three essays in macroeconomics and financial economics

Oduncu, Arif 19 August 2010
In the first chapter, I analyze whether the elasticity of intertemporal substitution or risk aversion is the more important determinant of precautionary savings. This question matters because, according to several studies, a significant fraction of capital accumulation is due to precautionary savings; identifying its key determinant therefore helps explain the capital accumulation mechanism. I examine the effects of the elasticity of intertemporal substitution and of risk aversion on precautionary savings separately, performing simulations to obtain numerical results. I find that the elasticity of intertemporal substitution is a more important determinant than risk aversion. In the second chapter, I study the impact of the introduction of futures trading on the volatility of the underlying spot market for the Istanbul Stock Exchange (ISE) in Turkey. The economic literature has long debated whether futures trading raises or lowers stock market volatility. Although there are empirical studies for different countries, with mixed results, most focus on developed countries; empirical research on emerging markets is scarce. Analyzing the data yields the following results for the ISE. First, the introduction of futures trading has decreased the volatility of the ISE. Second, futures trading increases the speed at which information is impounded into spot market prices. Third, the asymmetric response of volatility to the arrival of news has increased after the introduction of futures trading. In the third chapter, I investigate the presence of calendar anomalies in the ISE using GARCH models. The persistence of calendar anomalies since their first discovery remains a puzzle, although some claim that such anomalies became much less pronounced once they were known to the public. Most studies have examined developed financial markets; it is important, however, to test calendar effects on data sets different from those in which they were originally discovered, and the ISE is thus a good case for testing calendar effects in a developing country.
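As an illustration of the kind of calendar-effect test the third chapter describes (not taken from the thesis), the sketch below regresses simulated daily returns on day-of-week dummies. Plain OLS is used here as a simpler stand-in for the GARCH specifications employed in the thesis, and the data, seed, and baseline choice are hypothetical.

```python
# Minimal sketch: test for day-of-week effects in daily returns.
# OLS with weekday dummies stands in for the thesis's GARCH models.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
dates = pd.bdate_range("2000-01-03", periods=1000)            # business days
returns = pd.Series(rng.normal(0.0005, 0.02, len(dates)), index=dates)

dummies = pd.get_dummies(returns.index.dayofweek, drop_first=True)  # Monday = baseline
X = sm.add_constant(dummies.to_numpy().astype(float))
model = sm.OLS(returns.to_numpy(), X).fit()
print(model.summary())  # jointly significant dummies would suggest a calendar anomaly
```

A GARCH variant would add the same dummies to the conditional mean or conditional variance equation, which is what allows calendar effects in volatility, not just in returns, to be detected.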
2

Informativeness and the Computational Metrology of Collaborative Adaptive Sensor Systems

Hopf, Anthony P 13 May 2011
Complex engineered systems evolve, with a tendency toward self-organization, which can, paradoxically, frustrate the aims of those seeking to develop them. The systems engineer, seeking to guide development in the context of changing and uncertain requirements, is challenged by conceptual gaps that emerge within engineering projects, particularly as they scale up, and that inhibit communication among the various stakeholders. Overall optimization, involving multiple criteria, is often expressed in the language of the individual parties, increasing the complexity of the overall situation, subsuming the participants within the evolution of the complex engineered system, and mixing the objective and the subjective in counterproductive or inefficient ways that can arrest healthy development. The conventional pragmatic systems engineering response to such situations is to introduce architectural discipline by way of separation of concerns. In complex engineered systems projects, the crucial interface, at any level of abstraction, is between the technical domain experts and higher-level decision makers. Bridging the ensuing conceptual gap requires models and methods that provide communication tools promoting convergence of the conversation between these parties on a common sense of the underlying reality of the evolving engineered system. In the interest of conceptual clarity, we confine our investigation to a restricted but important general class of evolving engineered systems: information gathering and utilizing systems. Such systems naturally reduce the underlying domain-specific measures into common, plausible information measures aimed at an overall sense of informativeness. For concreteness, we further restrict the investigation and the demonstration to a species that is well documented in the open literature, weather radar networks, and in particular to the currently emerging system referred to as CASA. The multiobjective problem of objectively exploring the high-dimensional decision space is addressed using multiobjective genetic algorithms (MOGA), specifically the John Eddy genetic algorithms (JEGA), resulting in well-formed Pareto fronts and sets containing Pareto optimal points within 20% of the ideal point. A visualization technique ensures a clear separation of the subjective criteria provided by the decision makers by superficially adding preferences to the objective optimal solutions. To identify the integrative objective functions and test patterns used in the MOGA analysis, explorations of networked weather radar technologies and configurations are completed. These explorations identify trends within and between network topologies and capture both the robustness and the fragility of network-based measurements. The information-oriented measures of fusion accuracy and precision are used to evaluate pairs of networked weather radars against a standardized low-order vortex test pattern, resulting in metrics for characterizing the performance of dual-Doppler weather radar pairs. To define integrative measures, information-oriented measures abstracting over the sensor estimators and parameters used to estimate radial velocity and returned signal from distributed targets, specifically precipitation, are shown to capture single-radar predicted performance against standardized test patterns.
The methodology, based on plausible information-oriented measures, standardized with test patterns, and objectively applied to a concrete high-dimensional case, bridges the conceptual gap and allows the conversation among the systems engineer, decision makers, and domain experts to converge. The method is an informative, objective process that can be generalized to enable expansion within the technology and to other information gathering and utilizing systems and sensor technologies.
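As an aside (our illustration, not the thesis's), the core of post-processing any MOGA run is extracting the non-dominated points; the sketch below shows that filter for a minimization problem. The candidate objective values are hypothetical, and JEGA itself is not reproduced.

```python
# Minimal sketch: extract the Pareto (non-dominated) set from candidate
# designs, assuming every objective is to be minimized. Values are made up.
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (coverage error, cost) pairs for radar-network configurations
candidates = [(0.30, 5.0), (0.25, 6.0), (0.40, 4.0), (0.25, 7.0), (0.35, 4.5)]
print(pareto_set(candidates))  # (0.25, 7.0) drops out: dominated by (0.25, 6.0)
```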
3

Optimization-based Approximate Dynamic Programming

Petrik, Marek 01 September 2010
Reinforcement learning algorithms hold promise in many complex domains, such as resource management and planning under uncertainty. Most reinforcement learning algorithms are iterative - they successively approximate the solution based on a set of samples and features. Although these iterative algorithms can achieve impressive results in some domains, they are not sufficiently reliable for wide applicability; they often require extensive parameter tweaking to work well and provide only weak guarantees of solution quality. Some of the most interesting reinforcement learning algorithms are based on approximate dynamic programming (ADP). ADP, also known as value function approximation, approximates the value of being in each state. This thesis presents new reliable algorithms for ADP that use optimization instead of iterative improvement. Because these optimization-based algorithms explicitly seek solutions with favorable properties, they are easy to analyze, offer much stronger guarantees than iterative algorithms, and have few or no parameters to tweak. In particular, we improve on approximate linear programming - an existing method - and derive approximate bilinear programming - a new robust approximate method. The strong guarantees of optimization-based algorithms not only increase confidence in the solution quality, but also make it easier to combine the algorithms with other ADP components. The other components of ADP are samples and features used to approximate the value function. Relying on the simplified analysis of optimization-based methods, we derive new bounds on the error due to missing samples. These bounds are simpler, tighter, and more practical than the existing bounds for iterative algorithms and can be used to evaluate solution quality in practical settings. Finally, we propose homotopy methods that use the sampling bounds to automatically select good approximation features for optimization-based algorithms. Automatic feature selection significantly increases the flexibility and applicability of the proposed ADP methods. The methods presented in this thesis can potentially be used in many practical applications in artificial intelligence, operations research, and engineering. Our experimental results show that optimization-based methods may perform well on resource-management problems and standard benchmark problems and therefore represent an attractive alternative to traditional iterative methods.
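As a concrete illustration of the approximate linear programming idea mentioned above (a sketch under our own assumptions, not the thesis's bilinear or homotopy methods), the program below computes value-function weights for a tiny known MDP with a fixed feature matrix Phi: minimize c'(Phi w) subject to Phi w >= r_a + gamma P_a Phi w for every action a.

```python
# Minimal approximate linear programming (ALP) sketch for a hypothetical
# 3-state, 2-action MDP: the value function is approximated as V = Phi @ w.
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
P = np.array([  # P[a, s, s']: transition probabilities
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]],
])
r = np.array([[1.0, 0.5], [0.0, 1.0], [2.0, 0.0]])    # r[s, a]
Phi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # constant + ramp features
c = np.ones(3) / 3                                    # state-relevance weights

# Phi @ w >= r[:, a] + gamma * P[a] @ Phi @ w  rearranges to
# (gamma * P[a] - I) @ Phi @ w <= -r[:, a]
A_ub = np.vstack([(gamma * P[a] - np.eye(3)) @ Phi for a in range(2)])
b_ub = np.concatenate([-r[:, a] for a in range(2)])
res = linprog(c @ Phi, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
print("weights:", res.x, "approximate values:", Phi @ res.x)
```

Because the constraints force Phi w to upper-bound the optimal value function at every state, the objective has a finite optimum and there is no step size or iteration count to tune, which is the reliability argument made above.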
4

Whitehead's Decision Problems for Automorphisms of Free Groups

Mishra, Subhajit January 2020
Let F be a free group of finite rank. Given words u, v ∈ F, J.H.C. Whitehead solved the decision problem of determining whether there is an automorphism φ ∈ Aut(F) carrying u to v. He used topological methods to produce an algorithm. Higgins and Lyndon gave a very concise solution of the problem based on the work of Rapaport. We provide a detailed account of Higgins and Lyndon's proof of the peak reduction lemma and of the restricted version of Whitehead's theorem, for cyclic words as well as for sets of cyclic words, with a full explanation of each step. Then we give an inductive proof of Whitehead's minimization theorem and describe Whitehead's decision algorithm. Since Higgins and Lyndon's work is limited to cyclic words, we extend their proofs to ordinary words and sets of ordinary words. In the last chapter, we mention an example given by Whitehead to show that the decision problem for finitely generated subgroups is more difficult, and we outline an approach due to Gersten for overcoming this difficulty. We also give an extensive literature survey of Whitehead's algorithm. / Thesis / Master of Science (MSc)
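For orientation, here is a standard statement of the two results at the heart of the algorithm, in our phrasing rather than the thesis's; W(F) denotes the finite set of Whitehead automorphisms and |w| the length of a cyclic word w.

```latex
% Minimization: a non-minimal cyclic word is shortened by some
% Whitehead automorphism.
|w| > \min_{\psi \in \mathrm{Aut}(F)} |\psi(w)|
  \;\Longrightarrow\; \exists\, \tau \in W(F) :\; |\tau(w)| < |w| .

% Restricted Whitehead theorem (via peak reduction): if phi(u) = v with
% |u| = |v| minimal, phi factors through Whitehead automorphisms without
% ever increasing the length.
\varphi(u) = v \;\Longrightarrow\;
  \varphi = \tau_k \cdots \tau_1 , \quad \tau_i \in W(F), \quad
  |\tau_i \cdots \tau_1(u)| = |u| \ \text{ for } 1 \le i \le k .
```

Together these yield the decision procedure: shorten u and v to minimal length, then search the finite graph whose vertices are cyclic words of that length and whose edges are Whitehead automorphisms.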
5

Approximately Solving Set Equations

Baader, Franz, Marantidis, Pavlos, Okhotin, Alexander 20 June 2022
Unification with constants modulo the theory ACUI of an associative (A), commutative (C) and idempotent (I) binary function symbol with a unit (U) corresponds to solving a very simple type of set equations. It is well-known that solvability of systems of such equations can be decided in polynomial time by reducing it to satisfiability of propositional Horn formulae. Here we introduce a modified version of this problem by no longer requiring all equations to be completely solved, but allowing for a certain number of violations of the equations. We introduce three different ways of counting the number of violations, and investigate the complexity of the respective decision problem, i.e., the problem of deciding whether there is an assignment that solves the system with at most l violations for a given threshold value l. / Submitted to 30th International Workshop on Unification
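As a toy illustration of the modified problem (our construction, not the paper's), the sketch below brute-forces whether a system of set equations over a two-element universe of constants admits an assignment with at most l violations, counting a violation as one unsatisfied equation, which is one of several measures one could adopt.

```python
# Minimal sketch: set equations over a finite universe of constants, solved
# with at most l violations (violation = one unsatisfied equation). The
# brute-force search is exponential and purely illustrative.
from itertools import combinations, product

universe = ("a", "b")
subsets = [frozenset(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

# Equation (lhs_vars, lhs_consts, rhs_vars, rhs_consts) is read as
# union(lhs_vars) | lhs_consts == union(rhs_vars) | rhs_consts.
equations = [
    (("X",), frozenset({"a"}), ("Y",), frozenset()),  # X u {a} = Y
    (("Y",), frozenset(), (), frozenset({"b"})),      # Y = {b}
]
variables = ("X", "Y")

def violations(assignment):
    def side(vs, consts):
        return set(consts).union(*(assignment[v] for v in vs))
    return sum(side(lv, lc) != side(rv, rc) for lv, lc, rv, rc in equations)

l = 1  # the two equations are jointly unsolvable, but satisfying one suffices
ok = any(violations(dict(zip(variables, choice))) <= l
         for choice in product(subsets, repeat=len(variables)))
print("solvable with at most", l, "violations:", ok)  # -> True
```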
6

A Repeatable Multi-Criteria Decision Model for Social Housing Asset Intervention Decisions

Lundgren, Rebecka January 2019
This report describes a case study in which a multi-criteria decision model is used to make asset intervention decisions for four social housing complexes at Christchurch City Council, similar in terms of issues and possible interventions. The value judgements from the decision makers and their advisors that were needed to build the decision model were elicited through three workshops: selecting aspects, weighting and rating, and lastly reviewing the output. An analysis shows that the decision model is logically consistent and does not suffer from the rank reversal phenomenon. Validation of the model also included creating four individual decision models, one for each social housing complex, comparing the results of applying the joint model and the individual models, and revisiting and reconsidering the value judgements made in the different models when discrepancies were found. This included utility difference analysis and asking trade-off questions of the decision makers. Part of the validation was also to obtain the social housing team's acceptance of the joint model's output. Applying the decision model to the four social housing complexes and receiving an output accepted by the social housing team suggests that the aggregated model can be used for future decision problems of the same kind, provided they lie within the set level ranges of the aspects. Since the decision model is transparent about which values and priorities have been applied and which prerequisites must be met in order to apply it to future decisions, it can be used as a 'live model', with adjustments made when required.
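As a generic illustration of the additive value model that typically underlies such tools (assumed here; the workshop-elicited aspects, weights, and ratings of the case study are not reproduced), see the sketch below.

```python
# Minimal sketch of an additive multi-criteria value model: each intervention
# gets a value score in [0, 1] per aspect, and a weighted sum gives the
# overall score. Aspects, weights, and ratings are hypothetical.
weights = {"condition": 0.40, "tenant_impact": 0.35, "cost": 0.25}

ratings = {  # value scores; higher is better (cost already inverted)
    "refurbish": {"condition": 0.8, "tenant_impact": 0.6, "cost": 0.5},
    "rebuild":   {"condition": 1.0, "tenant_impact": 0.3, "cost": 0.1},
    "maintain":  {"condition": 0.4, "tenant_impact": 0.9, "cost": 0.9},
}

def overall(option: str) -> float:
    return sum(weights[a] * ratings[option][a] for a in weights)

for option in sorted(ratings, key=overall, reverse=True):
    print(f"{option}: {overall(option):.3f}")  # maintain 0.700, refurbish 0.655, ...
```

With fixed value scales, an additive model of this shape scores each option independently of the others, which is why adding or removing an alternative cannot reorder the remaining ones, the rank reversal property checked in the report.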
7

Analysis of Opportunities for the Effective Development of Media Relations at adidas ČR, s.r.o.

Mikulová, Eva January 2007
The thesis focuses on evaluating two projects intended to improve cooperation between adidas and journalists. The first project is a showroom opened for journalists; the second is a new set of web pages. The main goal of the thesis is to analyse the projects using decision-making methods. A second, also very important, goal is to find other ways to improve cooperation with journalists and achieve greater positive publicity.
8

Partial recursive functions and partially Turing-computable functions: a proof of equivalence

Melo, Gustavo Cavalcanti 24 October 2016
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / In the thirties of the last century, several formal versions of the intuitive notion of algorithmic function were offered, among them the version of the recursive functions and the version of the Turing-computable functions. Later, these versions were extended to cover the partial algorithmic functions as well, giving rise to the version of the partial recursive functions and the version of the partially Turing-computable functions. In this context, this research, located within the domain of computability theory and built on the theoretical work of Davis (1982), Mendelson (2009), Dias & Weber (2010), Rogers (1987), Soare (1987), and Cooper (2004), among others, reconstructs the proof that these formal versions of the intuitive notion of partial algorithmic function, despite being conceptually distinct, are extensionally equivalent in the sense that they determine the same set of number-theoretic functions. As part of this reconstruction, we prove, in a new way, using quintuples, that every partial recursive function is partially Turing-computable. In the literature, this theorem is proved by means of a set of quadruples; by defining a set of lower cardinality built from quintuples, however, it is possible to prove it within a smaller time bound, which represents a gain from the computational point of view. Besides presenting this alternative proof, given by the Church-Turing thesis that the set of partial recursive functions includes all partial algorithmic functions, we investigate whether this set itself and its infinitely many subsets are algorithmic. In this investigation, we demonstrate in arithmetical terms, with the aid of Rice's theorem, that although the set of partial recursive functions is algorithmic, all of its nonempty proper subsets are not, among them the set of recursive functions and the set of primitive recursive functions.
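To illustrate the quintuple formulation on which the alternative proof rests: in a quintuple machine each instruction (state, symbol) → (symbol, move, state) writes and moves in a single step, whereas a quadruple machine needs separate instructions for writing and for moving, hence the smaller instruction set and shorter runtimes. The sketch below is a hypothetical example of ours, not taken from the dissertation.

```python
# Minimal quintuple Turing machine: delta maps (state, read) to
# (write, move, next_state); writing and moving happen in one step.
def run(delta, tape, state, halt, max_steps=10_000):
    cells, head = dict(enumerate(tape)), 0
    while state != halt and max_steps > 0:
        read = cells.get(head, "B")                 # "B" is the blank symbol
        write, move, state = delta[(state, read)]   # one quintuple per step
        cells[head] = write
        head += 1 if move == "R" else -1
        max_steps -= 1
    return "".join(cells[i] for i in sorted(cells)).strip("B")

successor = {  # append one '1' to a unary numeral, then halt
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "B"): ("1", "R", "halt"),
}
print(run(successor, "111", "q0", "halt"))  # -> 1111
```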
9

Development and application of a multi-criteria decision-support framework for planning rural energy supply interventions in low-income households in South Africa

Dzenga, Bruce 25 August 2022
Problems in public policy decision-making environments are typically complex and continuously evolving. In a resource-constrained environment, several alternatives, criteria, and conflicting objectives must be considered, so solutions to these problems cannot be modelled solely with single-criterion techniques. Most techniques used to shape energy policy and planning have been observed either to produce sub-optimal solutions or to rely on strong assumptions about the preferences of the decision-maker(s). This difficulty creates a compelling need for novel techniques that can handle several alternatives, multiple criteria and conflicting objectives in support of public sector decision-making processes. First, the study presents a novel scenario-based multi-objective optimisation framework, based on the augmented Chebychev goal programming (GP) technique linked to a value function, for analysing the decision environment underlying energy choice among low-income households in isolated rural areas and informal urban settlements in South Africa. The framework includes a multi-objective optimisation technique that produces an approximation of a Pareto front, linked to an a priori aggregation function and a value function for selecting the best alternatives. Second, the study uses this model to demonstrate the benefits of applying the framework to a problem not previously treated in public policy: a dynamic multi-technology decision problem under uncertainty involving multiple stakeholders and conflicting objectives. The results suggest that while it is cost-optimal to pursue electrification in conjunction with other short-term augmentation solutions to meet South Africa's universal electrification target, sustainable energy access rates among low-income households can be achieved by increasing the share of clean energy generation technologies in the energy mix. This study therefore challenges the South African government's position on pro-poor energy policies and its emphasis on grid-based electrification to increase energy access; instead, the study calls for a portfolio-based intervention. The study advances interventions based on micro-grid electrification made up of solar photovoltaic (PV), solar-with-storage, combined cycle gas turbine (CCGT) and wind technologies, combined with either bioethanol fuel or liquefied petroleum gas (LPG). The study demonstrates that the framework can benefit public sector decision-makers by balancing technical, financial, social, environmental, public health, political and economic aspects in the decision-making process for planning energy supply interventions for low-income households. The framework can be adapted to a wide range of energy access combinatorial problems and to countries grappling with similar energy access challenges.
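For orientation, a generic form of the augmented Chebychev achievement function minimized in such goal programs is shown below, in our notation rather than the thesis's: the f_i are the criteria, the z_i* the goal (ideal) values, the w_i > 0 weights, and rho > 0 a small augmentation constant that rules out merely weakly Pareto optimal solutions.

```latex
\min_{x \in X} \; \max_{i = 1, \dots, k} \, w_i \bigl( f_i(x) - z_i^{*} \bigr)
  \;+\; \rho \sum_{i = 1}^{k} \bigl( f_i(x) - z_i^{*} \bigr)
```

In linear-programming form, the max term becomes an auxiliary variable lambda with one constraint w_i (f_i(x) - z_i*) <= lambda per criterion i.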
10

Development of Elicitation Methods for Managerial Decision Support

Riabacke, Ari January 2007
Decision-makers in organisations and businesses make numerous decisions every day, and these decisions are expected to be based on facts and carried out in a rational manner. However, most decisions are not based on precise information or careful analysis, for several reasons: people are unable to behave fully rationally as a result of, e.g., their experiences and socialisation, and humans possess fairly limited capacities for processing information in an objective manner. To circumvent this human incapacity to handle decision situations rationally, especially those involving risk and uncertainty, a widespread suggestion, at least in managerial decision making, is to take advantage of support in the form of decision support systems. One possibility involves decision analytical tools, but these are, almost without exception, not efficiently employed in organisations and businesses. One reason appears to be the high demands the tools place on the decision-maker in a variety of ways, e.g., by presupposing that reliable input data is obtainable by an exogenous process. Even though the reliability of current decision analytic tools depends heavily on the quality of the input data, they rarely contain methods for eliciting data from the users. The problem addressed in this thesis is the unavailability and inefficiency of methods for eliciting decision information from users. The aim is to identify problem areas in the elicitation of decision data in real decision-making processes, and to propose elicitation methods that take people's natural choice strategies and natural behaviour into account. In this effort, we have identified a conceptual gap between the decision-makers, the decision models, and the decision analytical tools, consisting of seven gap components in three main categories (of which elicitation is one). In order to study elicitation problems, a number of empirical studies, involving more than 400 subjects in total, were carried out in Sweden and Brazil. An iterative research approach was adopted, combining quantitative and qualitative methods. The findings of this thesis include the fact that decision-makers face serious problems in many decision situations: first, they lack access to accurate and relevant data, and second, they lack the means for retrieving such data in a proper manner, i.e. elicitation methods for this purpose. Employing traditional elicitation methods in this realm yields results that reveal an inertia gap, i.e. an intrinsic inertia in people's natural behaviour when shifting between differently framed prospects, with different groups of decision-makers displaying different choice patterns. Since existing elicitation methods are unable to deal with this inertia, we propose a class of methods that takes advantage of this natural behaviour, and we suggest a representation for the elicited information. An important element of the proposed class of methods is that methods and measuring instruments must be tunable to fit different types of decision situations, user groups, and choice behaviours.
