361

Structural Review and Performance Evaluation of Real Estate Tokens / Strukturell Granskning och Prestandautvärdering av Fastighet Tokens

Bayhoca, Berke January 2023 (has links)
This thesis includes quantitative and qualitative research on real estate tokens, one of the leading types of security token. Security tokens, which are based on blockchain technology, are rapidly becoming widespread as new-era investment products, and real estate tokens have long stood out as among the most popular of them. The underlying reason is that the real estate industry is associated with low liquidity and long, expensive transaction processes. Tokenization platforms and real estate market experts believe that tokenization will solve many problems in the traditional market. The products that emerge through the tokenization of real estate assets can increase liquidity by removing high entry barriers and can create a secondary market in which intermediaries are minimized. In theory, this technology, which can provide secure access to a wide market in a short time, can also serve as a new platform for raising both debt and equity capital. Within the scope of this study, the structure of real estate tokens as financial products was examined, and their similarities to and differences from traditional products were discussed. Moreover, an empirical analysis was conducted by comparing the financial performance of real estate tokens that are actively traded in the secondary market with selected reference indices. The empirical results indicate that, in terms of risk-adjusted returns, token indices performed favorably relative to both crypto-market and traditional market indices.
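The risk-adjusted comparison described in this abstract can be illustrated with a short sketch. The return series, annualization factor, and benchmark names below are purely hypothetical placeholders, not the thesis's actual data or indices; the sketch only shows how a Sharpe-style comparison between a token index and reference indices might be set up.

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of a series of periodic returns."""
    excess = np.asarray(returns) - rf / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Synthetic daily return series standing in for a real estate token index
# and two reference indices (crypto market and a traditional REIT index).
rng = np.random.default_rng(42)
series = {
    "real estate token index": rng.normal(0.0008, 0.030, 500),
    "crypto market index":     rng.normal(0.0010, 0.045, 500),
    "REIT benchmark index":    rng.normal(0.0003, 0.012, 500),
}
for name, r in series.items():
    print(f"{name}: annualized Sharpe = {sharpe_ratio(r):.2f}")
```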
362

A semantic Bayesian network for automated share evaluation on the JSE

Drake, Rachel 26 July 2021 (has links)
Advances in information technology have presented the potential to automate investment decision-making processes. This would alleviate the need for manual analysis and reduce the subjective nature of investment decision making. However, there are different approaches to and perspectives on investing, which makes acquiring and representing expert knowledge for share evaluation challenging. Current decision models often do not reflect the real investment decision-making process used by the broader investment community or may not be well grounded in established investment theory. This research investigates the efficacy of using ontologies and Bayesian networks for automating share evaluation on the JSE. The knowledge acquired from an analysis of the investment domain and the decision-making process for a value investing approach was represented in an ontology. A Bayesian network was constructed, based on the concepts outlined in the ontology, for automatic share evaluation. The Bayesian network allows decision makers to predict future share performance and provides an investment recommendation for a specific share. The decision model was designed, refined and evaluated through an analysis of the literature on value investing theory and consultation with expert investment professionals. The performance of the decision model was validated through backtesting and measured using return and risk-adjusted return measures. The model was found to provide superior returns and risk-adjusted returns for the evaluation period from 2012 to 2018 when compared to selected benchmark indices of the JSE. The result is a concrete share evaluation model, grounded in investing theory and validated by investment experts, that may be employed, with small modifications, in the field of value investing to identify shares with a higher probability of positive risk-adjusted returns.
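As a rough illustration of the kind of model this abstract describes, the sketch below builds a toy discrete Bayesian network with the pgmpy library. The node names, states, and probability tables are invented for illustration and do not come from the thesis's ontology or expert elicitation; the point is only to show how a share-outlook node can be queried given evidence on value-investing indicators.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical structure: two value-investing indicators drive the outlook node.
model = BayesianNetwork([("Valuation", "Outlook"), ("Earnings", "Outlook")])

cpd_val = TabularCPD("Valuation", 2, [[0.6], [0.4]])   # 0 = cheap, 1 = expensive
cpd_earn = TabularCPD("Earnings", 2, [[0.5], [0.5]])   # 0 = growing, 1 = declining
cpd_out = TabularCPD(
    "Outlook", 2,
    # P(Outlook | Valuation, Earnings); columns follow the evidence state ordering.
    [[0.85, 0.55, 0.50, 0.15],   # 0 = outperform
     [0.15, 0.45, 0.50, 0.85]],  # 1 = underperform
    evidence=["Valuation", "Earnings"], evidence_card=[2, 2],
)
model.add_cpds(cpd_val, cpd_earn, cpd_out)
assert model.check_model()

# Query the outlook for a cheap share with growing earnings.
posterior = VariableElimination(model).query(["Outlook"],
                                             evidence={"Valuation": 0, "Earnings": 0})
print(posterior)
```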
363

The Black-Litterman Model : mathematical and behavioral finance approaches towards its use in practice

Mankert, Charlotta January 2006 (has links)
The financial portfolio model often referred to as the Black-Litterman model is analyzed using two approaches: a mathematical approach and a behavioral finance approach. After a detailed description of its framework, the Black-Litterman model is derived mathematically using a sampling-theoretical approach. This approach generates a new interpretation of the model and gives an interpretable formula for the mystical parameter τ, the weight-on-views. Secondly, implications are drawn from research results within behavioral finance. One of the most interesting features of the Black-Litterman model is that the benchmark portfolio, against which the performance of the portfolio manager is evaluated, functions as the point of reference. According to behavioral finance, the actual utility function of the investor is reference-based, and investors estimate losses and gains in relation to this benchmark. Implications drawn from research results within behavioral finance indicate and explain why the portfolio output given by the Black-Litterman model appears more intuitive to fund managers than portfolios generated by the Markowitz model. Another feature of the Black-Litterman model is that the user assigns levels of confidence to each asset view in the form of confidence intervals. Research results within behavioral finance have, however, shown that people tend to be badly calibrated when estimating their levels of confidence, and that people are overconfident in financial decision making, particularly when stating confidence intervals. This is problematic. For a deeper understanding of how the Black-Litterman model is used in practice, it seems that we should turn to those financial fields in which social and organizational contexts and issues are taken into consideration. / QC 20101119
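A minimal numerical sketch of the Black-Litterman posterior expected returns is given below. It uses the standard textbook combination of equilibrium returns and investor views; the risk-aversion value, τ, and the diagonal Ω heuristic are common illustrative choices and are not taken from the thesis, which derives its own interpretation of τ.

```python
import numpy as np

def black_litterman_returns(Sigma, w_mkt, P, Q, delta=2.5, tau=0.05, Omega=None):
    """Posterior expected excess returns from the standard Black-Litterman combination."""
    Pi = delta * Sigma @ w_mkt                       # implied equilibrium excess returns
    if Omega is None:
        # Common heuristic: view uncertainty proportional to the variance of each view portfolio.
        Omega = np.diag(np.diag(P @ (tau * Sigma) @ P.T))
    tS_inv = np.linalg.inv(tau * Sigma)
    O_inv = np.linalg.inv(Omega)
    A = tS_inv + P.T @ O_inv @ P
    b = tS_inv @ Pi + P.T @ O_inv @ Q
    return np.linalg.solve(A, b)

# Toy example: three assets, one relative view ("asset 0 outperforms asset 2 by 2%").
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w_mkt = np.array([0.5, 0.3, 0.2])
P = np.array([[1.0, 0.0, -1.0]])
Q = np.array([0.02])
print(black_litterman_returns(Sigma, w_mkt, P, Q))
```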
364

Strategic Improvement: A Systems Approach Using The Balanced Scorecard Methodology To Increase Federally Financed Research At The University Of Central Florida

Walters, Joseph 01 January 2013 (has links)
The University of Central Florida has many successful measures to reflect on as it celebrates its 50th year in 2013. It is the university with the second-largest student population in the U.S., and its overall ranking in the U.S. News & World Report has improved four years in a row. However, with respect to research, federally funded research and development at the University of Central Florida (UCF) has remained flat. In addition, when compared to other schools, its portion of those federal research dollars is small. This thesis lays the groundwork for developing a model for improving federally financed academic research and development. A systems approach using the balanced scorecard methodology was used to develop causal loop relationships between the many factors that influence the federal funding process. Measures are proposed that link back to the objectives and mission of the university. One particular measure found in the literature was refined to improve its integration into this model. The resulting work provides a framework with specific measures that can be incorporated at the university to improve its share of federally financed research and development. Although developed for UCF, this work could be applied to any university that wishes to improve its standing in the federally financed academic research and development market.
365

Robust optimization for portfolio risk : a revisit of worst-case risk management procedures after Basel III award.

Özün, Alper January 2012 (has links)
The main purpose of this thesis is to develop methodological and practical improvements to robust portfolio optimization procedures. Firstly, the thesis discusses the drawbacks of classical mean-variance optimization models and examines robust portfolio optimization procedures with CVaR and worst-case CVaR risk models, providing a clear presentation of the derivation of robust optimization models from a basic VaR model. For practical purposes, the thesis introduces an open-source software interface called “RobustRisk”, which is developed to produce empirical evidence for the robust portfolio optimization models. The software, which performs Monte Carlo simulation and out-of-sample performance analysis for portfolio optimization, is introduced using hypothetical portfolio data from selected emerging markets. In addition, the performance of robust portfolio optimization procedures is discussed by providing empirical evidence from advanced markets in the crisis period. The empirical results show that robust optimization with the worst-case CVaR model outperforms the nominal CVaR model in the crisis period. These results encourage the construction of a forward-looking stress test procedure based on robust portfolio optimization under regime switches. For this purpose, a Markov chain process is embedded into the robust optimization procedure in order to stress the regime transition matrix. In addition, asset returns, volatilities, the correlation matrix and the covariance matrix can be stressed under pre-defined scenario expectations. An application is provided with a hypothetical, internationally diversified portfolio. The CVaR efficient frontier and the corresponding optimized portfolio weights are obtained under regime-switch scenarios. The research suggests that stressed-CVaR optimization provides a robust and forward-looking stress test procedure to comply with the regulatory requirements stated in Basel II and CRD regulations.
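For context, the worst-case CVaR models discussed above build on the basic scenario-based CVaR optimization of Rockafellar and Uryasev. The sketch below is a minimal version of that nominal CVaR problem using cvxpy; the confidence level, the optional minimum-return constraint, and the randomly generated scenario matrix are illustrative assumptions, not the thesis's "RobustRisk" implementation or its data.

```python
import numpy as np
import cvxpy as cp

def min_cvar_weights(scenarios, alpha=0.95, min_mean_return=None):
    """Long-only portfolio minimizing scenario CVaR at level alpha (Rockafellar-Uryasev LP)."""
    T, n = scenarios.shape
    w = cp.Variable(n)          # portfolio weights
    zeta = cp.Variable()        # VaR-like auxiliary variable
    u = cp.Variable(T)          # scenario losses in excess of zeta
    losses = -scenarios @ w
    constraints = [u >= 0, u >= losses - zeta, cp.sum(w) == 1, w >= 0]
    if min_mean_return is not None:
        constraints.append(scenarios.mean(axis=0) @ w >= min_mean_return)
    cvar = zeta + cp.sum(u) / ((1 - alpha) * T)
    cp.Problem(cp.Minimize(cvar), constraints).solve()
    return w.value, cvar.value

# Toy scenario matrix: 1000 return scenarios for 4 assets.
rng = np.random.default_rng(7)
scen = rng.multivariate_normal(
    mean=[0.0006, 0.0004, 0.0008, 0.0002],
    cov=np.diag([0.02, 0.01, 0.03, 0.005]) ** 2,
    size=1000,
)
weights, cvar = min_cvar_weights(scen)
print(np.round(weights, 3), round(float(cvar), 4))
```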
366

Bond portfolio immunization with imperfect correlation of forward rates across maturities : risk minimization /

An, Jae-Wook January 1987 (has links)
No description available.
367

Decision making in innovation : understanding selection and prioritization of development projects

Gutiérrez, Ernesto January 2008 (has links)
This thesis has its origin in empirical evidence. Some Swedish companies claimed that, despite having plenty of proposals for developing new products, they experienced problems when choosing from all those alternatives. Their problem was how to select which new ideas to develop and which to reject, how many projects to run according to their capacity, when to start a development project and when to stop one, and how to decide which of the ongoing projects were the most important. The companies’ problem was decision making in the context of innovation. According to the literature, a deeper understanding is needed of the decision-making process in innovation, taking into account its organizational and procedural complexities. The purpose of this thesis is to achieve such an understanding. The thesis is based on an explorative study, with interviews carried out in three companies that have new product development as a core competitive factor. The empirical study focuses on the decisions made for the selection and prioritization of different innovative alternatives. As a result of the analysis of the empirical data, a conceptualization of the decision-making process was developed. Furthermore, the relevant problems that decision makers experience, the main characteristics of the decision-making process and the role that decision making plays in innovation were described. The implications of these findings for designing work procedures to support decision making in innovation were discussed, and general descriptions of two practical methods were suggested. The main findings indicate that, for making decisions in the context of innovation, organizations must be able to face uncertain and ambiguous situations and achieve a collective understanding about what is to be done. To do this, different approaches for making decisions and understanding innovation are needed. However, regardless of the appropriateness of these approaches, they receive different levels of acceptance within organizations, and decision makers must deal with the varying degrees of organizational acceptance of the different approaches. As a consequence, an organization displays a certain dynamic in using different approaches for making decisions and for understanding innovation. This dynamic influences the companies’ innovative potential and the output of the innovation process. / QC 20101111
368

Serial acquisitions without synergies : a qualitative study on the Bergman & Beving sphere

Stanser, Theodor, Marken, Jakob January 2024 (has links)
The Bergman & Beving sphere is a group of Swedish companies that have been successful during the last decades as serial acquirers. This study examines how the Bergman & Beving sphere operates and how management makes capital allocation decisions; this has been examined through interviews with key individuals and large owners within these companies, thereby taking a qualitative approach to the research questions outlined. The business model of the Bergman & Beving sphere revolves around continuously acquiring niched companies with a long track record of profitable growth and with a culture that aligns with their own. What distinguishes them from many serial acquirers is that they do not try to integrate the acquired company into a bigger one and/or find synergies between them; instead, they operate with a highly decentralized model where the companies act independently. The group has for a long time used internal financial metrics, with their trademark metric being Profit/Working capital (P/WC), to guide their subsidiaries and aid management in capital allocation decisions.
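As an illustration of the P/WC metric mentioned above, the snippet below computes it for a hypothetical subsidiary; the figures and the working-capital definition used (inventories plus receivables minus payables) are assumptions for the example, not data from the study.

```python
# Hypothetical subsidiary figures, in kSEK.
operating_profit = 45_000
inventories = 60_000
accounts_receivable = 55_000
accounts_payable = 25_000

working_capital = inventories + accounts_receivable - accounts_payable
p_wc = operating_profit / working_capital   # the group's trademark P/WC metric
print(f"P/WC = {p_wc:.0%}")                 # 50% in this toy example
```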
369

Data-dependent Regret Bounds for Adversarial Multi-Armed Bandits and Online Portfolio Selection

Putta, Sudeep Raja January 2024 (has links)
This dissertation studies data-dependent regret bounds for two online learning problems. As opposed to worst-case regret bounds, data-dependent bounds are able to adapt to the particular sequence of losses seen by the player, and thus offer a more fine-grained performance guarantee than worst-case bounds. We start with the adversarial n-armed bandit problem. In the prior literature it was standard practice to assume that the loss vectors belonged to a known domain, typically [0,1]ⁿ or [-1,1]ⁿ. We make no such assumption on the loss vectors; they may be completely arbitrary. We term this problem the scale-free adversarial multi-armed bandit. At the beginning of the game, the player knows only the number of arms n; it does not know the scale or magnitude of the losses chosen by the adversary, nor the number of rounds T. In each round, it receives bandit feedback about the loss vectors l₁, …, l_T ∈ ℝⁿ. Our goal is to bound its regret as a function of n and the norms of l₁, …, l_T. We design a bandit Follow The Regularized Leader (FTRL) algorithm that uses a log-barrier regularizer along with an adaptive learning rate tuned via the AdaFTRL technique, and we give two different regret bounds depending on the exploration parameter used. With non-adaptive exploration, the algorithm has a regret of Õ(√nL₂ + L_∞√nT), and with adaptive exploration it has a regret of O(√nL₂ + L_∞√nL₁). Here L_∞ = sup_t ∥l_t∥_∞, L₂ = Σ_{t=1}^T ∥l_t∥₂², L₁ = Σ_{t=1}^T ∥l_t∥₁, and the Õ notation suppresses logarithmic factors. These are the first MAB bounds that adapt to the ∥·∥₂ and ∥·∥₁ norms of the losses, and the second bound is the first data-dependent scale-free MAB bound, as T does not appear directly in the regret. We also develop a new technique for obtaining a rich class of local-norm lower bounds for Bregman divergences; this technique plays a crucial role in controlling the regret when using importance-weighted estimators of unbounded losses. Next, we consider the Online Portfolio Selection (OPS) problem over n assets and T time periods. This problem was first studied by Cover [1], who proposed the Universal Portfolio (UP) algorithm. UP is a computationally expensive algorithm with minimax optimal regret of O(n log T). There has been renewed interest in OPS due to a recently posed open problem of Van Erven et al. [2], which asks for a computationally efficient algorithm that also has minimax optimal regret. We study data-dependent regret bounds for the OPS problem that adapt to the sequence of returns seen by the investor. Our proposed algorithm, AdaCurv ONS, modifies the Online Newton Step (ONS) algorithm of [3] using a new adaptive curvature surrogate function for the log losses −log(r_tᵀw). We show that AdaCurv ONS has O(Rn log T) regret, where R is a data-dependent quantity. For sequences where R = O(1), the regret of AdaCurv ONS matches the optimal regret. However, for some sequences R could be unbounded, making the regret bound vacuous. To overcome this issue, we propose the LB-AdaCurv ONS algorithm, which adds a log-barrier regularizer along with an adaptive learning rate tuned via the AdaFTRL technique. LB-AdaCurv ONS has an adaptive regret of the form O(min(R log T, √nT log T)); thus it has a worst-case regret of O(√nT log T) while also having a data-dependent regret of O(nR log T) when R = O(1). Additionally, we show logarithmic first-order and second-order regret bounds for AdaCurv ONS and LB-AdaCurv ONS.
Finally, we consider the problem of Online Portfolio Selection (OPS) with predicted returns; we are the first to extend the paradigm of online learning with predictions to the portfolio selection problem. In this setting, the investor has access to noisy predictions of the returns of the n assets that can be incorporated into the portfolio selection process. We propose the Optimistic Expected Utility LB-FTRL (OUE-LB-FTRL) algorithm, which incorporates the predictions into the LB-FTRL algorithm via a utility function, and we explore its consistency-robustness properties. If the predictions are accurate, OUE-LB-FTRL's regret is O(n log T), providing a consistency guarantee. Even if the predictions are arbitrary, OUE-LB-FTRL's regret is always bounded by O(√nT log T), providing a robustness guarantee. Our algorithm also recovers a gradual-variation regret bound for OPS. In the presence of predictions, we argue that the benchmark of static regret becomes less meaningful, so we consider the regret with respect to an investor who uses only the predictions to select their portfolio (i.e., an expected utility investor). We provide a meta-algorithm called Best-of-Both Worlds for OPS (BoB-OPS) that combines the portfolios of an expected utility investor and a purely regret-minimizing investor using a higher-level portfolio selection algorithm. By instantiating the meta-algorithm and the purely regret-minimizing investor with Cover's Universal Portfolio, we show that the regret of BoB-OPS with respect to the expected utility investor is O(log T), while simultaneously its static regret is O(n log T). This achieves a stronger form of consistency-robustness guarantee for OPS with predicted returns.
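The following sketch illustrates the kind of second-order OPS update the abstract builds on: a plain Online Newton Step for the log loss, with the projection onto the simplex in the induced norm solved via cvxpy. The step-size and regularization constants and the synthetic return matrix are illustrative assumptions; this is the baseline ONS of [3], not the AdaCurv or LB-AdaCurv variants proposed in the dissertation.

```python
import numpy as np
import cvxpy as cp

def ons_portfolio(price_relatives, gamma=0.5, eps=1.0):
    """Online Newton Step for online portfolio selection with losses -log(r_t . w)."""
    T, n = price_relatives.shape
    w = np.ones(n) / n
    A = eps * np.eye(n)
    wealth, total_loss = 1.0, 0.0
    for t in range(T):
        r = price_relatives[t]
        growth = float(r @ w)
        wealth *= growth
        total_loss += -np.log(growth)
        g = -r / growth                      # gradient of the log loss at w
        A += np.outer(g, g)                  # accumulate second-order information
        y = w - (1.0 / gamma) * np.linalg.solve(A, g)
        # Generalized projection onto the simplex in the norm induced by A.
        x = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.quad_form(x - y, A)),
                   [x >= 0, cp.sum(x) == 1]).solve()
        w = np.maximum(np.asarray(x.value).ravel(), 0)
        w /= w.sum()
    return wealth, total_loss

# Toy run on synthetic price relatives (returns close to 1).
rng = np.random.default_rng(0)
rel = 1.0 + rng.normal(0.0005, 0.01, size=(250, 5))
print(ons_portfolio(rel))
```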
370

Project portfolio management : a model for improved decision making

Enoch, Clive Nathanael 03 April 2014 (has links)
The recent global financial crisis, regulatory and compliance requirements placed on organisations, and the need for scientific research in the project portfolio management discipline were factors that motivated this research. The interest in, and contribution to, the body of knowledge in project portfolio management has been growing significantly in recent years; however, there still appears to be a misalignment between literature and practice. A particular area of concern is the decision making, during the management of the portfolio, regarding which projects to accelerate, suspend, or terminate. A failure to determine the individual and cumulative contribution of projects to strategic objectives leads to poorly informed decisions that negate the positive effect that project portfolio management could have in an organisation. The focus of this research is therefore on providing a mechanism to determine the individual and cumulative contribution of projects to strategic objectives so that the right decisions can be made regarding those projects. This thesis begins by providing a context for project portfolio management, confirming a definition and providing a theoretical background through related theories. An investigation into the practice of project portfolio management then provides insight into the alignment between literature and practice and confirms the problem that needed to be addressed. A conceptual model provides a solution to the problem of determining the individual and cumulative contribution of projects to strategic objectives. The researcher illustrates how the model can be extended before verifying and validating the conceptual model. Having the ability to determine the contributions of projects to strategic objectives affords decision makers the opportunity to conduct what-if scenarios, enabled through the use of dashboards as a visualization technique, in order to test the impact of their decisions before committing to them. This ensures that the right decisions regarding the project portfolio are made and that the maximum benefit regarding the strategic objectives is achieved. This research provides the mechanism to enable better-informed decision making regarding the project portfolio. / Computing / D. Phil. (Computer science)
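A minimal sketch of the kind of contribution calculation described above follows: a project-by-objective weight matrix rolls individual project contributions up into cumulative contributions per strategic objective, and a simple what-if is run by suspending one project. The weights, project names, and scoring scheme are invented for illustration and are not the model proposed in the thesis.

```python
import numpy as np

objectives = ["Grow revenue", "Improve compliance", "Customer retention"]
projects = ["CRM upgrade", "Regulatory reporting", "Mobile app"]

# Hypothetical contribution weights: rows = projects, columns = strategic objectives.
contribution = np.array([
    [0.4, 0.1, 0.5],
    [0.0, 0.9, 0.1],
    [0.3, 0.0, 0.7],
])
active = np.array([1, 1, 1])  # 1 = project active, 0 = suspended

def cumulative_contribution(weights, active_flags):
    """Cumulative contribution of the active projects to each strategic objective."""
    return active_flags @ weights

print(dict(zip(objectives, cumulative_contribution(contribution, active))))

# What-if scenario: suspend the "Mobile app" project and compare.
what_if = active.copy()
what_if[2] = 0
print(dict(zip(objectives, cumulative_contribution(contribution, what_if))))
```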
