21

Análise de desempenho de fundos comportamentais (Performance analysis of behavioral funds)

Reis, Robson Costa January 2015
This work analyzed the performance of 31 behavioral mutual funds operating in the USA, Europe and Japan, as described in Santoni and Kelshiker (2010). The performance of the funds and their respective benchmarks was assessed with four measures: the Sharpe index, the Sortino index, the Omega measure and the Behavioral Performance Measure. The analysis covered a 10-year period (Jan 2004 to Dec 2014) split into intervals of 6, 12, 36, 60 and 120 months. Based on the consolidation of the performance measures, the funds were ranked and classified into three performance categories: upper, intermediate and lower. In the 120-month interval there was, on average, no significant difference in performance (at 5%) between the funds and the benchmarks. The analysis by interval showed that the funds' performance relative to the benchmarks worsens as the investment horizon increases. In the shorter intervals (6 and 12 months) there was, on average, no significant difference in performance, while in the longer intervals (36 and 60 months) the funds' average performance was significantly lower than the benchmarks'. Averaged over all intervals, the funds' performance was significantly lower than the benchmarks'. Among the measures used, the Sortino index showed the highest correlation with the funds' overall performance. / Dissertation (Master's) - Pontifícia Universidade Católica do Rio de Janeiro, Rio de Janeiro, 2015 / Bibliography: p. [77]-81
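A minimal sketch (not code from the dissertation) of how the Sharpe index, Sortino index and Omega measure cited above can be computed from a monthly return series; the risk-free rate, target return and synthetic data are assumed for illustration, and the Behavioral Performance Measure is not reproduced:

```python
import numpy as np

def sharpe(returns, rf=0.0):
    """Sharpe index: mean excess return over its standard deviation."""
    excess = returns - rf
    return excess.mean() / excess.std(ddof=1)

def sortino(returns, target=0.0):
    """Sortino index: mean excess return over downside deviation only."""
    excess = returns - target
    downside = np.minimum(excess, 0.0)
    return excess.mean() / np.sqrt((downside ** 2).mean())

def omega(returns, threshold=0.0):
    """Omega measure: sum of gains over sum of losses around a threshold."""
    gains = np.maximum(returns - threshold, 0.0).sum()
    losses = np.maximum(threshold - returns, 0.0).sum()
    return gains / losses

# Hypothetical monthly returns for a fund and a benchmark over 120 months
rng = np.random.default_rng(0)
fund = rng.normal(0.006, 0.040, 120)
bench = rng.normal(0.007, 0.035, 120)
for name, r in [("fund", fund), ("benchmark", bench)]:
    print(name, round(sharpe(r), 3), round(sortino(r), 3), round(omega(r), 3))
```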
22

Essays on Dynamic Optimization for Markets and Networks

Gan, Yuanling January 2023
We study dynamic decision-making problems in networks and markets under uncertainty about future payoffs. This problem is difficult in general since 1) although the current decision (potentially) affects future decisions, the decision-maker does not have exact information on the future payoffs when committing to the current decision; 2) the decision made at one part of the network usually interacts with the decisions made at other parts of the network, which makes the computation scale rapidly with the network size and brings computational challenges in practice. In this thesis, we propose computationally efficient methods to solve dynamic optimization problems on markets and networks, specify a general set of conditions under which the proposed methods give theoretical guarantees on global near-optimality, and further provide numerical studies to verify the performance empirically. The proposed methods and algorithms share a common theme of "local algorithms", meaning that the decision at each node or agent on the network uses only partial information on the network. In the first part of this thesis, we consider a network model with stochastic uncertainty about future payoffs. The network has bounded degree, and each node takes a discrete decision at each period, leading to a per-period payoff which is a sum of three parts: node rewards for individual node decisions, temporal interactions between individual node decisions from the current and previous periods, and spatial interactions between decisions from pairs of neighboring nodes. The objective is to maximize the expected total payoff over a finite horizon. We study a natural decentralized algorithm (whose computational requirement is linear in the network size and planning horizon) and prove that our decentralized algorithm achieves global near-optimality when temporal and spatial interactions are not dominant compared to the randomness in node rewards. Decentralized algorithms are parameterized by the locality parameter L: an L-local algorithm makes its decision at each node v based on current and (simulated) future payoffs only up to L periods ahead, and only in an L-radius neighborhood around v. Given any permitted error ε > 0, we show that our proposed L-local algorithm with L = O(log(1/ε)) has an average per-node-per-period optimality gap bounded above by ε in networks where temporal and spatial interactions are not dominant. This constitutes the first theoretical result establishing the global near-optimality of a local algorithm for network dynamic optimization. In the second part of this thesis, we consider the previous three types of payoff functions under adversarial uncertainty about the future. In general, there are no performance guarantees for arbitrary payoff functions. We consider an additional convexity structure in the individual node payoffs and interaction functions, which helps us leverage the tools of the broad Online Convex Optimization literature. In this work, we study the setting where there is a trade-off between developing future predictions for a longer lookahead horizon, denoted k, versus increasing the spatial radius for decentralized computation, denoted r. When deciding individual node decisions at each time, each node has access to predictions of local cost functions for the next k time steps in an r-hop neighborhood. Our work proposes a novel online algorithm, Localized Predictive Control (LPC), which generalizes predictive control to multi-agent systems.
We show that LPC achieves a competitive ratio approaching 1 exponentially fast in ρT and ρS in an adversarial setting, where ρT and ρS are constants in (0, 1) that increase with the relative strength of temporal and spatial interaction costs, respectively. This is the first competitive ratio bound on decentralized predictive control for networked online convex optimization. Further, we show that the dependence on k and r in our results is near-optimal by lower bounding the competitive ratio of any decentralized online algorithm. In the third part of this work, we consider a general dynamic matching model for online competitive gaming platforms. Players arrive stochastically with a skill attribute, the Elo rating. The distribution of Elo is known and i.i.d. across players; however, an individual's rating is only observed upon arrival. Matching two players with different skills incurs a match cost. The goal is to minimize a weighted combination of waiting costs and matching costs in the system. We investigate a popular heuristic used in industry to trade off between these two costs, the Bubble algorithm. The algorithm places arriving players on the Elo line with a growing bubble around them. When two bubbles touch, the two players get matched. We show that, with the optimal bubble expansion rate, the Bubble algorithm achieves a constant-factor ratio against the offline optimal cost when the match cost (resp. waiting cost) is a power of the Elo difference (resp. waiting time). We use players' activity-log data from a gaming start-up to validate our approach and further provide guidance on how to tune the bubble expansion rate in practice.
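A toy sketch (not the thesis' implementation) of the bubble-matching heuristic described above, assuming Bernoulli arrivals, a normal Elo distribution, linear bubble growth and illustrative cost parameters:

```python
import random

def simulate_bubble(T=5000, arrival_prob=0.3, growth=2.0,
                    match_power=2, wait_cost=1.0, seed=0):
    """Toy bubble matching: a waiting player occupies [elo - g*wait, elo + g*wait];
    two players are matched as soon as their intervals touch (at most one match
    per time step, for simplicity). Returns total matching + waiting cost."""
    rng = random.Random(seed)
    waiting, total_cost = [], 0.0          # waiting: list of (elo, arrival_time)
    for t in range(T):
        if rng.random() < arrival_prob:
            waiting.append((rng.gauss(1500, 200), t))
        match = None
        for i in range(len(waiting)):
            for j in range(i + 1, len(waiting)):
                (e1, t1), (e2, t2) = waiting[i], waiting[j]
                if abs(e1 - e2) <= growth * (t - t1) + growth * (t - t2):
                    total_cost += abs(e1 - e2) ** match_power
                    total_cost += wait_cost * ((t - t1) + (t - t2))
                    match = (i, j)
                    break
            if match:
                break
        if match:
            waiting = [p for k, p in enumerate(waiting) if k not in match]
    return total_cost

print(simulate_bubble())
```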
23

A computer-based DSS for funds management in a large state university environment

Tyagi, Rajesh January 1986
The comprehensive computerized decision support system developed in this research employs two techniques, computer modeling and goal programming, to assist top university financial officers in assessing the current status of funds sources and uses. The purpose of the DSS is to aid in reaching decisions concerning proposed projects, and to allocate funds from sources to uses on an aggregate basis according to a rational set of prescribed procedures. The computer model provides fast and easy access to the database and permits the administrator to update the database as new information is received. Goal programming is used for modeling the allocation process since it provides a framework for the inclusion of multiple goals that may be conflicting and incommensurable. The goal programming model allocates funds from sources to uses based on a priority structure associated with the goals. The DSS, which runs interactively, performs a number of tasks that include selecting model parameters, formulating the goals and priority structure, and solving the GP model. It also provides on-line access to the database so that it may be updated as necessary. In addition, the DSS generates reports regarding funds allocation and goal achievements to allow analysis of the model results. The decision support system also provides a framework for experimentation with various goal and priority structures, thus facilitating what-if analyses. The user can also perform a sensitivity analysis by observing the effect of assigning different relative importance to a goal or set of goals. / Ph. D.
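A minimal sketch of a weighted goal program of the kind described above, allocating funds from sources to uses with under-achievement deviation variables; the sources, uses, targets and priority weights are hypothetical, not taken from the dissertation:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: two funding sources and three uses (projects), with target
# funding levels and priority weights standing in for the GP priority structure.
supply = np.array([100.0, 60.0])         # funds available per source
target = np.array([70.0, 50.0, 40.0])    # desired funding per use
weight = np.array([100.0, 10.0, 1.0])    # larger weight ~ higher-priority goal

n_s, n_u = len(supply), len(target)
n_x = n_s * n_u                          # allocation variables x[i, j], flattened
# decision vector: [x_00, ..., x_ij, ..., d_0, ..., d_j], d_j = under-achievement of goal j
c = np.concatenate([np.zeros(n_x), weight])

A_ub, b_ub = [], []
for i in range(n_s):                     # capacity: sum_j x[i, j] <= supply[i]
    row = np.zeros(n_x + n_u)
    row[i * n_u:(i + 1) * n_u] = 1.0
    A_ub.append(row); b_ub.append(supply[i])
for j in range(n_u):                     # goal: sum_i x[i, j] + d_j >= target[j]
    row = np.zeros(n_x + n_u)
    for i in range(n_s):
        row[i * n_u + j] = -1.0
    row[n_x + j] = -1.0
    A_ub.append(row); b_ub.append(-target[j])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
alloc = res.x[:n_x].reshape(n_s, n_u)
print("allocation (sources x uses):\n", alloc.round(1))
print("unmet goals:", res.x[n_x:].round(1))
```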
24

The Relationship of Wealth, Financial Literacy and Relative Financial Well-Being to Self-Assessed Risk Tolerance: A Secondary Analysis

Hui, Roslyn Yuk-Bo January 2024
This paper explores factors related to self-assessed risk tolerance, focusing on its relationship to wealth, financial literacy, financial well-being relative to parents’ financial well-being at the respective age, and financial well-being relative to one’s historical self. Additional predictors included age and education. The analyses were conducted using data from the Federal Reserve Board’s 2019 Survey of Household Economics and Decision-making (SHED). The measure of financial literacy was constructed from several survey items assessing knowledge of investing and interest rates. A multinomial logistic regression model confirmed that all of the abovementioned variables are indeed significant contributors to the prediction of self-assessed risk tolerance. Wealth is positively related to self-assessed risk tolerance, as predicted by Bernoullian utility theory. Age exhibits a non-linear relationship with risk tolerance. Both financial well-being relative to parents’ financial well-being at the same age and financial well-being relative to one’s historical self exhibit a positive relationship with risk tolerance. Lastly, those with higher financial literacy scores tend to have higher risk tolerance, as did those with more education. Some implications of the findings are discussed.
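A hedged sketch of a multinomial logistic regression of a three-level risk-tolerance outcome on predictors like those listed above; the synthetic data, variable names and codings are assumptions rather than the actual SHED items or the paper's specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data: the variable names, codings and sample size are
# illustrative assumptions, not the SHED survey's actual items.
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "log_wealth": rng.normal(11, 1, n),
    "fin_literacy": rng.integers(0, 6, n),          # 0-5 quiz-style score
    "better_than_parents": rng.integers(0, 2, n),   # 1 = better off than parents were
    "better_than_past": rng.integers(0, 2, n),      # 1 = better off than in the past
    "age": rng.integers(18, 80, n),
    "education": rng.integers(1, 5, n),
})
# latent index -> three risk-tolerance categories (0 = low, 1 = medium, 2 = high)
z = (0.3 * df["log_wealth"] + 0.4 * df["fin_literacy"]
     + 0.5 * df["better_than_parents"] + 0.5 * df["better_than_past"]
     - 0.02 * df["age"] + 0.2 * df["education"] + rng.normal(0, 1, n))
df["risk_tol"] = pd.cut(z, bins=3, labels=[0, 1, 2]).astype(int)

X = sm.add_constant(df.drop(columns="risk_tol"))
fit = sm.MNLogit(df["risk_tol"], X).fit(disp=False)
print(fit.summary())
```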
25

Differences in School Districts' Decision-Making Processes Before and After Tax Limitation Elections: A Case Study

Travis, Rosemary Fechner 05 1900
Using a case study approach, this investigation focused on the decision-making processes involved in developing budgets in two Texas school districts following a tax limitation, or rollback, election. Factors influencing the decision-making processes included the rollback election's outcome in each district, the participants, the perceptions participants held of themselves, the perceptions participants held of others in the district and community, the decisions made, and the factors influencing participants' decisions. Two Texas school districts were selected as subjects of this study, which used qualitative data collection methods. In one school district, the rollback election passed. In the other, it failed. Data collection included observations of school board meetings and budget workshops. Structured interviews of school board members and administrators, pro- and anti-rollback proponents, and newspaper editors were conducted. Questions focused on the budgetary decision-making processes before and after the rollback elections. They also solicited information from subjects regarding rollback elections, the factors precipitating the rollback elections, and the impact of the rollback election campaign upon each school district. Document analyses were triangulated with the observations and interviews to identify the factors influencing the budgetary decision-making process. Following the rollback elections, school officials in both districts adopted a conservative approach to budgetary decision-making. In both districts, school board members and administrators listened more carefully to citizens' concerns. Citizen finance committees were formed in both districts following the rollback elections to receive community input into the 1989-90 budgets. The decision-making processes in both districts were influenced by school board members' and administrators' personal philosophies, the presence or absence of long-range district goals, and pressures to finance unfunded and underfunded state mandates. The budget documents produced in both districts following the rollback elections reflected a commitment to funding curricular rather than extracurricular programs. School officials protected teachers' and support staffers' salaries, recognizing the importance of maintaining employee morale.
26

A study of genetic fuzzy trading modeling, intraday prediction and modeling. / CUHK electronic theses & dissertations collection

January 2010
This thesis consists of three parts: a genetic fuzzy trading model for stock trading, incremental intraday information for financial time series forecasting, and intraday effects in conditional variance estimation. Part A investigates a genetic fuzzy trading model for stock trading. This part contributes by using a fuzzy trading model to eliminate undesirable discontinuities, incorporating vague trading rules into the trading model, and using a genetic algorithm to select an optimal trading ruleset. Technical indicators are used to monitor the stock price movement and assist practitioners in setting up trading rules to make buy-sell decisions. Although some trading rules have a clear buy-sell signal, the signals are always detected with 'hard' logic. This triggers undesirable discontinuities due to jumps in the Boolean variables that may occur for small changes in the technical indicator. Some trading rules are vague and conflicting; they are difficult to incorporate into the trading system even though they carry significant market information. Various performance comparisons such as total return, maximum drawdown and profit-loss ratios among different trading strategies were examined. The genetic fuzzy trading model consistently gave moderate performance. Part B studies and contributes to the literature on forecasting daily financial time series using intraday information. Conventional daily forecasting focuses on the use of lagged daily information up to the last market close while neglecting intraday information from the last market close to the current time. Such intraday information is referred to as incremental intraday information. It can improve prediction accuracy not only at a particular instant but also over intraday time when an appropriate predictor is derived from it. This is demonstrated in two forecasting examples, predictions of the daily high and of range-based volatility, using linear regression and neural network forecasters. The neural network forecaster exhibits a stronger causal effect of incremental intraday information on the predictand. Predictability can be estimated by a correlation without conducting any forecast. Part C explores intraday effects in conditional variance estimation. This contributes to the literature on conditional variance estimation with intraday effects. Conventional GARCH volatility is formulated with an additive-error mean equation for the daily return and an autoregressive moving-average specification for its conditional variance. However, intraday information is not included in the conditional variance even though it should have implications for the daily variance. Using Engle's multiplicative-error model formulation, range-based volatility is proposed as an intraday proxy for several GARCH frameworks. The impact of significant changes in intraday data is reflected in the MEM-GARCH variance. For some frameworks, it is possible to use lagged values of range-based volatility to delay the intraday effects in the conditional variance equation. / Ng, Hoi Shing Raymond. / Adviser: Kai-Pui Lam. / Source: Dissertation Abstracts International, Volume: 72-01, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 107-114). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. 
Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
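A minimal sketch of the discontinuity point made in Part A: a hard-threshold indicator rule versus a fuzzy-membership version of the same rule; the RSI-style indicator, thresholds and membership shapes are assumed for illustration and are not the thesis' rule set:

```python
import numpy as np

def hard_signal(rsi):
    """Hard-threshold rule: buy (+1) below 30, sell (-1) above 70, else hold (0).
    A tiny move in the indicator near a threshold flips the decision entirely."""
    return np.where(rsi < 30, 1.0, np.where(rsi > 70, -1.0, 0.0))

def fuzzy_signal(rsi):
    """Fuzzy rule: graded 'oversold' and 'overbought' memberships give a signal
    in [-1, 1], removing the discontinuity at the hard thresholds."""
    oversold = np.clip((40 - rsi) / 20, 0, 1)    # 1 below 20, fades to 0 by 40
    overbought = np.clip((rsi - 60) / 20, 0, 1)  # 0 below 60, reaches 1 at 80
    return oversold - overbought

rsi = np.linspace(0, 100, 11)
print(np.column_stack([rsi, hard_signal(rsi), fuzzy_signal(rsi)]))
```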
27

A methodology to improve third sector investment strategies: the development and application of a Western Cape based financial resource allocation decision making model

Smit, Andre de Villiers 12 1900
Thesis (DPhil (Social Work))--University of Stellenbosch, 2005. / South Africa has high levels of social need which are steadily growing. While the third sector is large and contributes substantially to social service provision, it, together with government, is seemingly incapable of providing adequate social services, particularly in the more poverty-stricken rural areas of the country. Among other factors, at the root of the third sector's inability to adequately serve the needy is a lack of funding caused by poor funding policies and uninformed financial resource allocation decision making. As such, this study set out to develop an automated financial resource allocation decision-making model that would provide extensive organised data to better inform the allocation decision-making process, the first component of the study. It was also intended to provide a range of otherwise lacking empirical data on the third sector to determine service and staffing norms, identify patterns of funding and assess the financial management of this sector. In so doing, the Community Chest of the Western Cape was selected to serve as the locus of the study: its existing manual allocation approach was analysed and a new, more sophisticated automated model was developed. Data generated by the model pointed to a further research need, that of a better understanding of the funding and financial management practices of the third sector. This gave rise to the third component of the study, a survey of 232 beneficiary organisations of the Community Chest. An analysis of the data generated by the model and collected from the survey highlighted yet another need, the poor financial management acumen of the sector. In order to address this need, and hence the efficacy of the model, a survey of 207 University of Cape Town management accounting students was conducted to determine the feasibility of using their financial management knowledge and skills to support financially and IT-illiterate organisations, the fourth component of the study. The study primarily adopted a quantitative research paradigm; the research design was exploratory-descriptive and used a primary data design with limited secondary data analysis. Data was captured in MS Access and analysed using Statistica and MS Excel. Results indicated that the country's funding policies were wanting and that state and state-controlled funding agency resources were not being allocated in concert with adopted policy. In almost all cases the poorer rural areas had and received fewer resources. Most organisations surveyed were not financially secure and their ability to fundraise was very limited. Their financial management ability was not good. Fortunately, a substantial number of accounting students indicated a willingness to improve the financial management ability of such needy organisations. The study concludes by recommending further development of the model and the utilisation of accounting students, and calls for a major assessment of third sector needs, funding and financial management. It also recommends the formulation of new funding policies.
28

Determinants of the use of debt and leasing in UK corporate financing decisions

Dzolkarnaini, Mohd Nazam January 2009
This thesis investigates the determinants of the use of debt and leasing in the UK using a comprehensive measure of debt and leases, in recognition of the link between lease and debt-type financing decisions, based on financial contracting theory and the tax advantage hypothesis. The design of the study takes account of three lacunae in our current understanding of this topic. Firstly, despite the fact that the capital structure literature is voluminous, it is perhaps surprising that relatively little research has been carried out on lease finance, given its significant role as a major source of finance for many firms. Secondly, the role of tax in the capital structure decision is unclear. Empirically testing for tax effects is challenging because spurious relationships may exist between the financing decision and many commonly used tax proxies. More importantly, our understanding of the impact of taxes on UK financing decisions is far from complete, especially since several major corporate tax reforms have taken place in the last decade. Thirdly, empirical evidence on capital structure determinants is also voluminous but far from conclusive. Notably, contradictory signs and significance levels are commonly observed. Using the standard regression approach invariably involves identification of the average behaviour of firms, and therefore does not measure diversity across firms. In response to these three major issues, this study employs empirical research methods, namely cross-sectional pooled regression, static and dynamic panel data regression, and quantile regression, to analyse a large sample of 361 non-financial firms drawn from the FTSE 350 and FTSE All-Small indices over the tax years 1995 through 2003. The operating lease data are estimated using the constructive capitalisation method, while the simulated before-financing marginal tax rate is used to proxy for the firms' tax status. The endogeneity of corporate tax status is evident, since the use of a simple tax proxy, the effective tax rate, leads to a spurious negative relation between debt usage and tax rates. The problem was avoided with a better measure of the tax variable, the simulated before-financing marginal tax rate, with which the empirical relationships between the tax factor and debt and leasing are consistent with theoretical predictions. Furthermore, there is a clear distinction between the effect of taxes on debt and leasing: the firm's marginal tax status is only relevant when managers make decisions on debt financing. The use of the quantile regression method in the present study represents a novel approach to investigating the determinants of the use of debt and leasing. The results reveal that the determinants of debt and leasing are heterogeneous across the whole distribution of firms, consistent with the notion of heterogeneity as promoted by Beattie et al. (2006), but contradicting their claim that the large-scale regression approach cannot measure firms' diversity. This finding implies that average model results (e.g., from OLS or panel data models) may not apply to the tails of debt and leasing levels, and hence assuming that the determinants of debt and leasing decisions are the same for all firms in the economy is clearly unrealistic. Using the dynamic panel data model, this thesis confirms that debt and leasing are substitutes rather than complements, and that the degree of substitutability is more pronounced among smaller firms, where the degree of information asymmetry is greater. 
More importantly, the use of a joint specification for debt and leasing improves our understanding of the determinants of the two fixed-claim financing instruments. There is also significant evidence to support the view that firm characteristics affect contracting costs which in turn impact on the choice between alternative forms of finance, namely equity, debt and leasing.
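A small sketch of constructive capitalisation in the spirit described above: discounting a disclosed schedule of future minimum operating-lease payments at an assumed incremental borrowing rate; the payment schedule, rate and treatment of the 'thereafter' amount are illustrative, not the thesis' exact procedure:

```python
def capitalise_operating_leases(payments, rate):
    """Present value of future minimum lease payments discounted at an assumed
    incremental borrowing rate; a rough constructive capitalisation of
    off-balance-sheet operating leases."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments, start=1))

# Hypothetical disclosed schedule: years 1-5 plus a 'thereafter' amount spread
# evenly over years 6-8, discounted at an assumed 7% borrowing rate.
schedule = [100.0, 95.0, 90.0, 85.0, 80.0] + [60.0] * 3
print(round(capitalise_operating_leases(schedule, 0.07), 1))
```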
29

Optimization and Decision-Making in Decentralized Finance, Scheduling, and Graphical Game Theory

Patange, Utkarsh January 2024
We consider the problem of optimization and decision-making in various settings involving complex systems. In particular, we consider specific problems in decentralized finance, which we address employing insights from mathematical finance; in course-mode selection, which we solve by applying mixed-integer programming; and in social networks, which we approach using tools from graphical game theory. In the first part of the thesis, we model and analyze fixed spread liquidation lending in DeFi as implemented by popular pooled lending protocols such as AAVE, JustLend, and Compound. Empirically, we observe that over 70% of liquidations occur in the absence of any downward price jumps. Then, assuming the borrowers monitor their loans with exponentially distributed horizons, we compute the expected liquidation cost incurred by the borrowers in closed form as a function of the monitoring frequency. We compare this cost against liquidation data obtained from AAVE protocol V2, and observe a match with our model assuming the borrowers monitor their loans five to six times more often than they interact with the pool. Such borrowers must balance the financing cost against the likelihood of liquidation. We compute the optimal health factor in this situation assuming a financing rate for the collateral. Empirically, we observe that borrowers are often more conservative compared to model predictions, though on average, model predictions match empirical observations. In the second part of the thesis, we consider the problem of hybrid scheduling that was faced by Columbia Business School during the Covid-19 pandemic and describe the system that we implemented to address it. The system allows some students to attend in-person classes with social distancing, while their peers attend online, and schedules vary by day. We consider two variations of this problem: one where students have unique, individualized class enrollments, and one where they are grouped in teams that are enrolled in identical classes. We formulate both problems as mixed-integer programs. In the first setting, students who are scheduled to attend all classes in person on a given day may, at times, be required to attend a particular class on that day online due to social distancing constraints. We count these instances as "excess." We minimize excess and related objectives, and analyze and solve the relaxed linear program. In the second setting, we schedule the teams so that each team's in-person attendance is balanced over days of the week and spread out over the entire term. Our objective is to maximize interaction between different teams. Our program was used to schedule over 2,500 students in student-level scheduling and about 790 students in team-level scheduling from the Fall 2020 through Summer 2021 terms at Columbia Business School. In the third part of the thesis, we consider a social network where individuals choose actions that optimize a utility which is a function of their neighbors' actions. We assume that a central authority aiming to maximize social welfare at equilibrium can intervene by paying some cost to shift individual incentives, and that the cost is upper bounded by a budget. The intervention that maximizes the social welfare can be computed using the spectral decomposition of the adjacency matrix of the graph, yet this is infeasible in practice if the adjacency matrix is unknown. 
We study the question of designing intervention strategies for graphs where the adjacency matrix is unknown and is drawn from some distribution. For several commonly studied random graph models, we show that the competitive ratio of an intervention proportional to the first eigenvector of the expected adjacency matrix approaches 1 in probability as the graph size increases. We also provide several efficient sampling-based approaches for approximately recovering the first eigenvector when we do not know the distribution. On the whole, our analysis compares three categories of interventions: those which use no data about the network, those which use some data (such as distributional knowledge or queries to the graph), and those which are fully optimal. We evaluate these intervention strategies on synthetic and real-world network data, and our results suggest that analysis of random graph models can be useful for determining when certain heuristics may perform well in practice.
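A brief sketch of an intervention proportional to the first eigenvector of an expected adjacency matrix, estimated by power iteration; the random-graph model and the budget-scaling rule are assumptions for illustration, not the thesis' exact scheme:

```python
import numpy as np

def leading_eigenvector(A, iters=200):
    """Power iteration for the leading eigenvector of a nonnegative symmetric matrix."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

def eigenvector_intervention(A, budget):
    """Spread the budget over nodes proportionally to the squared entries of the
    first eigenvector (so the allocation sums to the budget)."""
    v = leading_eigenvector(A)
    return budget * v ** 2

# Hypothetical expected adjacency matrix of a two-block stochastic block model.
n, p_in, p_out = 10, 0.6, 0.1
blocks = np.repeat([0, 1], n // 2)
P = np.where(blocks[:, None] == blocks[None, :], p_in, p_out).astype(float)
np.fill_diagonal(P, 0.0)
print(eigenvector_intervention(P, budget=100.0).round(2))
```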
