261 |
Practical Application of Modern Portfolio Theory. Persson, Jakob; Lejon, Carl; Kierkegaard, Kristian. January 2007.
Several authors, such as Markowitz (1991) and Elton and Gruber (1997), discuss the main issues an investor faces when investing, for example how to allocate resources among the wide variety of available securities. These issues have led to the discussion of portfolio theories, especially the Modern Portfolio Theory (MPT), developed by the Nobel Prize-winning economist Harry Markowitz. This theory is the philosophical opposite of traditional asset picking. The purpose of this thesis is to investigate whether an investor can apply MPT in order to achieve a higher return than investing in an index portfolio. Assembling a strong portfolio that beats the market in the long run would be the ultimate goal for most investors. The theories used to analyze the problem and the empirical findings provide essential concepts such as standard deviation, risk, and portfolio return. Further, diversification, correlation, and covariance are used to achieve the optimal risky portfolio. The thesis walks through the MPT, with the efficient frontier as the graphical guide to the optimal risky portfolio. The methodology constitutes the frame of the thesis. A quantitative method is used, since the input is gathered from historical data. The thesis builds on existing theories, and the deductive approach aims to use these theories to accomplish a valid and accurate analysis. The benchmark used to compare the portfolio's results is the Stockholm Stock Exchange OMX 30 index, which mirrors the market as a whole. The portfolio is reweighted on a preplanned schedule, each quarter, to continually maintain an optimal risky portfolio. The findings of this study indicate that the actively managed portfolio outperforms the passive benchmark during the selected timeframe. The outcome differs somewhat when the risk-adjusted result is evaluated and becomes less significant; the risk-adjusted result does not provide strong evidence of a greater return than the index. Finally, with these findings, the authors conclude that an actively managed optimal risky portfolio, guided by the MPT, can surpass the OMX 30 within the selected timeframe.
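The abstract leaves the portfolio mathematics implicit; as a rough illustration of the mean-variance machinery it refers to (expected returns, covariance, and the tangency point on the efficient frontier), a minimal sketch might look like the following. The synthetic return matrix, the risk-free rate, and the five-asset universe are assumptions made purely for illustration, not data from the thesis.

```python
import numpy as np

# Estimate mean returns and covariance from historical data, then compute
# the tangency (optimal risky) portfolio weights in closed form.
rng = np.random.default_rng(0)
returns = rng.normal(0.002, 0.02, size=(250, 5))   # 250 days x 5 assets (synthetic)
rf = 0.0001                                        # assumed daily risk-free rate

mu = returns.mean(axis=0)                # expected returns
cov = np.cov(returns, rowvar=False)      # covariance matrix (captures diversification)

excess = mu - rf
raw = np.linalg.solve(cov, excess)       # Sigma^{-1} (mu - rf)
weights = raw / raw.sum()                # tangency portfolio weights, summing to 1

port_ret = weights @ mu
port_vol = np.sqrt(weights @ cov @ weights)
sharpe = (port_ret - rf) / port_vol
print(weights, port_ret, port_vol, sharpe)
```

In the setting the abstract describes, weights of this kind would be re-estimated each quarter and the resulting portfolio compared against the OMX 30 benchmark.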
|
262 |
PANIC! PANIC! The sky is falling!! : A study of households' reaction to financial news and whether their reaction is rational. vom Dorp, Mishka; Shaw, Kenneth. January 2008.
If you happen to be an American and have trouble sleeping, do not attempt to fall asleep watching the nightly news, because it is anything but boring. At a glance, the American economy seems to be in shambles. The United States has an all-time-high deficit, the housing market has crashed or is in the process of doing so, capital markets are becoming increasingly volatile, and credit institutions in and outside the US are reporting heavy losses. The American presidential election will take place this November, and there is no question that the economy will be one of the main issues. How has the unstable economic atmosphere affected the financial behavior of households in the United States, and where have they received their financial information and advice from? Have they made changes in their personal savings/investments and asset portfolios, and if so, are these changes based on rational decisions or mere hunches? This paper intends to answer these questions through a qualitative approach, by interviewing eight hand-picked households in the United States. We take a constructionist ontological position, assuming that social entities have a reality that is constructed by the perceptions of social actors. Furthermore, we take the epistemological interpretivist stance, assuming that we study the world by looking at its social actors. We utilize a number of theories to aid us through our deductive approach, in which we collect theory, then collect data, analyze the findings, confirm or reject existing theory, and then revisit the existing theory with the new data. The main theories include the Efficient Market Hypothesis, Behavioral Finance, Metacommunication and Dissemination of Information, and Animal Spirits, including all their subsidiary theories. The interviews used an unstructured format, and once they were collected they were compiled into summarized form through an emotionalist approach. Conclusions were then drawn by finding common denominators among the interviewees' sentiments. We found signs of Keynes's animal spirits, overreaction to information, and amplification of information through private sources. Furthermore, we found that advice had changed over the past year, although we were unable to conclude how it had changed. Finally, a number of findings, including people's risk-averse behavior toward volatile stock markets, gave us an overall picture in which the Efficient Market Hypothesis held less well in this situation than Behavioral Finance.
|
263 |
Genererar insiderhandel överavkastning? : En studie om insiderhandel på Stockholmsbörsen [Does insider trading generate abnormal returns? A study of insider trading on the Stockholm Stock Exchange]. Edvardsson, David; Ruthberg, Fredrik. January 2012.
Background: The stock exchange gives companies an opportunity to raise capital and investors an opportunity to earn returns. People with insight into their own company, so-called insiders, may however, by virtue of their position, hold price-sensitive information that other market participants cannot access. Previous research has shown that insiders exploit this information asymmetry to obtain abnormal returns. Purpose: The purpose of the study is to investigate whether insiders can obtain abnormal returns by trading shares on Nasdaq OMX. Furthermore, the study aims to examine possible differences in abnormal returns depending on company size, transaction size, and time period. Method: A quantitative research strategy in the form of an event study has been applied. The study has a deductive approach and examines insider transactions from 90 companies on the Stockholm Stock Exchange during the period 2006-01-01 to 2011-12-31. The adjusted market model has been used to calculate abnormal returns. Price data for each company were retrieved from the Thomson Reuters EcoWin Pro database, and information on insider transactions was obtained from Finansinspektionen's insider register. Results: The results show that insiders obtain abnormal returns by trading shares in their own company, primarily in connection with sales transactions.
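The study does not reproduce its calculations here; if the adjusted market model is taken to be the market-adjusted return (stock return minus index return), a minimal sketch of the abnormal-return step of such an event study might look like the following. The price series and event window are invented placeholders.

```python
import numpy as np

# Market-adjusted abnormal returns: AR_t = R_stock,t - R_index,t,
# accumulated over the event window as a cumulative abnormal return (CAR).
stock_prices = np.array([100.0, 101.5, 100.8, 103.0, 104.1, 103.5])
index_prices = np.array([1000.0, 1005.0, 1003.0, 1010.0, 1012.0, 1011.0])

stock_ret = np.diff(stock_prices) / stock_prices[:-1]
index_ret = np.diff(index_prices) / index_prices[:-1]

abnormal = stock_ret - index_ret     # abnormal return each day of the window
car = abnormal.cumsum()              # cumulative abnormal return
print(abnormal, car[-1])
```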
|
264 |
Parallel Algorithm for Memory Efficient Pairwise and Multiple Genome Alignment in Distributed Environment. Ahmed, Nova. 20 December 2004.
Genome sequence alignment problems are important ones from the computational biology perspective. They deal with large amounts of data and are both memory and computation intensive. Two separate algorithms from the literature are studied and improved here: a pairwise sequence alignment algorithm that aligns pairs of genome sequences with reduced memory and parallelized computation, and a multiple sequence alignment algorithm that aligns several genome sequences and is likewise parallelized efficiently so that the workload of the alignment program is well distributed. The parallel applications can be launched in different environments; shared memory is well suited to these kinds of applications, but shared-memory environments are limited in memory capacity and scalability, and such machines are very costly. A better approach is to use a cluster of computers, and the cluster environment can be further extended to a grid environment so that scalability is improved by introducing multiple clusters. Here the grid environment is studied, along with the shared-memory and cluster environments, for the two applications. For carefully designed algorithms the grid environment is comparable in performance to the other distributed environments, and it sometimes outperforms them given the resource limitations those environments have.
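The abstract does not spell out the memory-reduction technique; one standard illustration of the idea for pairwise alignment is computing the Needleman-Wunsch score while keeping only two dynamic-programming rows, the building block of Hirschberg-style linear-space alignment. The scoring values and sequences below are arbitrary examples, not parameters from the thesis.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score in O(len(b)) memory using two DP rows."""
    prev = [j * gap for j in range(len(b) + 1)]          # row for the empty prefix of a
    for i in range(1, len(a) + 1):
        curr = [i * gap] + [0] * len(b)                  # first column: all gaps
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(diag, prev[j] + gap, curr[j - 1] + gap)
        prev = curr                                      # discard the older row
    return prev[-1]

print(nw_score("GATTACA", "GCATGCA"))
```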
|
265 |
Space-Efficient Data Structures in the Word-RAM and Bitprobe Models. Nicholson, Patrick. 06 August 2013.
This thesis studies data structures in the word-RAM and bitprobe models, with an emphasis on space efficiency. In the word-RAM model of computation the space cost of a data structure is measured in terms of the number of w-bit words stored in memory, and the cost of answering a query is measured in terms of the number of read, write, and arithmetic operations that must be performed. In the bitprobe model, like the word-RAM model, the space cost is measured in terms of the number of bits stored in memory, but the query cost is measured solely in terms of the number of bit accesses, or probes, that are performed.
First, we examine the problem of succinctly representing a partially ordered set, or poset, in the word-RAM model with word size Theta(lg n) bits. A succinct representation of a combinatorial object is one that occupies space matching the information theoretic lower bound to within lower order terms. We show how to represent a poset on n vertices using a data structure that occupies n^2/4 + o(n^2) bits and can answer precedence (i.e., less-than) queries in constant time. Since the transitive closure of a directed acyclic graph is a poset, this implies that we can support reachability queries on an arbitrary directed graph in the same space bound. As far as we are aware, this is the first representation of an arbitrary directed graph that supports reachability queries in constant time and stores less than n choose 2 bits. We also consider several additional query operations.
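For contrast with the stated bound, the naive way to obtain constant-time precedence (reachability) queries is to store the full transitive closure as roughly n^2 bits; the toy sketch below shows that baseline on an invented four-vertex DAG. The thesis's contribution is compressing this to n^2/4 + o(n^2) bits while keeping constant-time queries.

```python
# Transitive closure as one bitmask per vertex; a precedence query is a single
# bit probe. The DAG below is a made-up example with topologically numbered vertices.
n = 4
edges = [(0, 1), (1, 2), (0, 3)]           # DAG edges u -> v

closure = [1 << i for i in range(n)]        # row i as a bitmask, reflexive by convention
for u in sorted(range(n), reverse=True):    # reverse topological order
    for (a, b) in edges:
        if a == u:
            closure[u] |= closure[b]        # u reaches everything b reaches

def precedes(u, v):
    """Constant-time precedence query: does u reach v?"""
    return bool(closure[u] >> v & 1)

print(precedes(0, 2), precedes(3, 2))       # True False
```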
Second, we examine the problem of supporting range queries on strings of n characters (or, equivalently, arrays of n elements) in the word-RAM model with word size Theta(lg n) bits. We focus on the specific problem of answering range majority queries: i.e., given a range, report the character that is the majority among those in the range, if one exists. We show that these queries can be supported in constant time using a linear space (in words) data structure. We generalize this result in several directions, considering various frequency thresholds, geometric variants of the problem, and dynamism. These results are in stark contrast to recent work on the similar range mode problem, in which the query operation asks for the mode (i.e., most frequent) character in a given range. The current best data structures for the range mode problem take soft-Oh(n^(1/2)) time per query for linear space data structures.
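The constant-time, linear-space structure itself is beyond a short snippet, but the query semantics are easy to state: the naive helper below (linear time per query, invented input) shows exactly what a range majority query is expected to return.

```python
from collections import Counter

def range_majority(arr, lo, hi):
    """Return the majority character of arr[lo:hi], or None if there is none."""
    window = arr[lo:hi]
    ch, cnt = Counter(window).most_common(1)[0]
    return ch if cnt * 2 > len(window) else None   # strict majority: > half the range

s = "abacaabbbb"
print(range_majority(s, 0, 5))    # 'a' (3 of 5 characters)
print(range_majority(s, 0, 10))   # None (no character exceeds half)
```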
Third, we examine the deterministic membership (or dictionary) problem in the bitprobe model. This problem asks us to store a set of n elements drawn from a universe [1,u] such that membership queries can always be answered in t bit probes. We present several new fully explicit results for this problem, in particular for the case when n = 2, answering an open problem posed by Radhakrishnan, Shah, and Shannigrahi [ESA 2010]. We also present a general strategy for the membership problem that can be used to solve many related fundamental problems, such as rank, counting, and emptiness queries.
Finally, we conclude with a list of open problems and avenues for future work.
|
266 |
Insider Trading - An Efficiency Contributor? Söderberg, Gustav; Nyström, Rikard. January 2013.
This research studies insider trading activity and its effect on the level of informational efficiency. The authors used insider data from Finansinspektionen and data on stock prices, market capitalization, and GDP from Thomson Reuters Datastream. The sample includes 193 companies on the Swedish stock exchange over a period of 10 years. A variance ratio test applied to moving sub-sample windows was used to establish the level of time-varying informational efficiency, which was subsequently used as the dependent variable in an OLS regression. The regression results imply that insider purchasing has a negative effect on the informational efficiency of firm prices, while insider selling has a positive effect; this can be concluded at the 99% confidence level. The results are interesting since they imply an asymmetric effect of insider trading on informational efficiency, while current insider legislation treats buying and selling by insiders equally. Thus, the results are of interest for future adjustments of laws regulating insider trading.
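The abstract does not restate the test, but the idea behind the variance ratio measure of efficiency is that, under a random walk, the variance of q-period returns is q times that of one-period returns, so the ratio should be near one. The synthetic price series and the aggregation value q below are assumptions for illustration; the bias corrections and standard errors of the full Lo-MacKinlay statistic are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
log_prices = np.cumsum(rng.normal(0.0, 0.01, size=500))   # synthetic random-walk log prices

def variance_ratio(log_p, q):
    r1 = np.diff(log_p)                  # 1-period log returns
    rq = log_p[q:] - log_p[:-q]          # overlapping q-period log returns
    return np.var(rq, ddof=1) / (q * np.var(r1, ddof=1))

print(variance_ratio(log_prices, 5))     # close to 1 for a random walk; deviations
                                         # signal departures from weak-form efficiency
```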
|
267 |
HW/SW mechanisms for instruction fusion, issue and commit in modern u-processors. Deb, Abhishek. 03 May 2012.
In this thesis we have explored the co-designed paradigm to show alternative processor design points. Specifically, we have provided HW/SW mechanisms for instruction fusion, issue, and commit for modern processors. We have implemented a co-designed virtual machine monitor that binary-translates x86 instructions into RISC-like micro-ops. The translations are stored as superblocks, each of which is a trace of basic blocks. These superblocks are further optimized using speculative and non-speculative optimizations, and hardware mechanisms exist to take corrective action in case of misspeculation. During the course of this PhD we have made the following contributions.
Firstly, we have provided a novel Programmable Functional Unit (PFU) to speed up general-purpose applications. The PFU consists of a grid of functional units, similar to the CCA, and a distributed internal register file. The inputs of the macro-op are brought from the physical register file to the internal register file using a set of moves and a set of loads. A macro-op fusion algorithm fuses micro-ops at runtime; it is based on a scheduling step that indicates whether the current fused instruction is beneficial or not. The micro-ops corresponding to a macro-op are stored as control signals in a configuration, and the macro-op carries a configuration ID that helps locate its configuration. A small configuration cache inside the PFU holds these configurations; on a miss, configurations are loaded from the I-cache. Moreover, to support bulk commit of atomic superblocks that are larger than the ROB, we have proposed a speculative commit mechanism. For this we have proposed a speculative-commit register map table that holds the mappings of the speculatively committed instructions; when all the instructions of the superblock have committed, the speculative state is copied to the backend register rename table.
Secondly, we have proposed a co-designed in-order processor with two kinds of accelerators. These FU-based accelerators each run a pair of fused instructions, and we have considered two kinds of instruction fusion. First, we fuse pairs of independent loads into vector loads and execute them on vector load units. Second, we fuse pairs of dependent simple ALU instructions and execute them on interlock-collapsing ALUs (ICALUs). Moreover, we have evaluated the performance of various code optimizations such as list scheduling, load-store telescoping, and load hoisting, among others, and we have compared our co-designed processor with small-instruction-window out-of-order processors.
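The fusion algorithms themselves live inside the co-designed translator and are not reproduced in the abstract; the toy pass below only illustrates the second kind of fusion described above, pairing a simple ALU micro-op with a dependent simple ALU micro-op as a candidate for an interlock-collapsing ALU. The micro-op encoding and the sample trace are invented for illustration.

```python
SIMPLE_ALU = {"add", "sub", "and", "or", "xor"}

trace = [
    ("add", "r1", "r2", "r3"),   # (op, dst, src1, src2)
    ("add", "r4", "r1", "r5"),   # reads r1 -> dependent on the previous op, fusible
    ("ld",  "r6", "r4", "0"),
    ("xor", "r7", "r6", "r6"),
]

def fuse_dependent_alu_pairs(uops):
    fused, i = [], 0
    while i < len(uops):
        a = uops[i]
        if i + 1 < len(uops):
            b = uops[i + 1]
            dependent = a[1] in (b[2], b[3])            # b reads a's destination
            if a[0] in SIMPLE_ALU and b[0] in SIMPLE_ALU and dependent:
                fused.append(("fused", a, b))           # one macro-op for an ICALU
                i += 2
                continue
        fused.append(a)
        i += 1
    return fused

for m in fuse_dependent_alu_pairs(trace):
    print(m)
```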
Thirdly, we have proposed a co-designed out-of-order processor, reducing complexity in two areas. First, we have co-designed the commit mechanism to enable bulk commit of atomic superblocks. In this solution we dispense with the conventional ROB and instead introduce the Superblock Ordering Buffer (SOB), which ensures that program order is maintained at the granularity of the superblock by bulk-committing the program state. The program state consists of the register state and the memory state: the register state is held in a per-superblock register map table, whereas the memory state is held in a gated store buffer and updated in bulk. Second, we have tackled the complexity of the out-of-order issue logic by using FIFOs. We have proposed an enhanced steering heuristic that fixes the inefficiencies of the existing dependence-based heuristic, together with a mechanism to release FIFO entries earlier that further improves the heuristic's performance.
|
268 |
Energy Efficient Design for Deep Sub-micron CMOS VLSIs. Elgebaly, Mohamed. January 2005.
Over the past decade, low-power, energy-efficient VLSI design has been the focal point of active research and development. Rapid technology scaling, growing integration capacity, and mounting active and leakage power dissipation contribute to the growing complexity of modern VLSI design, and careful power planning is required at all design levels. This dissertation tackles the low-power, low-energy challenges of deep sub-micron technologies at the architecture and circuit levels.
Voltage scaling is one of the most efficient ways of reducing power and energy. For ultra-low-voltage operation, a new circuit technique that allows bulk CMOS circuits to work in the sub-0.5 V supply territory is presented. The threshold voltage of the slow PMOS transistor is controlled dynamically to obtain a lower threshold voltage during the active mode. Due to the reduced threshold voltage, switching becomes faster while active leakage current increases; a technique to dynamically manage this active leakage current is therefore also presented. The energy reduction obtained with the proposed structure is demonstrated through simulations of circuits with different levels of complexity.
As technology scales, the mounting leakage current and degraded noise immunity impact performance, especially that of high-performance dynamic circuits. Dual-threshold technology shows good potential for leakage reduction while meeting performance goals. A model for optimally selecting threshold voltages and transistor sizes in wide fan-in dynamic circuits is presented. On the circuit level, a novel technique that handles the trade-off between noise immunity and energy dissipation for wide fan-in dynamic circuits is presented; the energy efficiency of the proposed wide fan-in dynamic circuit is further enhanced through efficient low-voltage operation.
Another direct consequence of technology scaling is the growing impact of interconnect parasitics and process variations on performance. Traditionally, worst-case process, parasitics, and environmental conditions are assumed. Designing for the worst case guarantees fail-safe operation but requires large delay and voltage margins. These margins can be recovered if the design can adapt to the actual silicon conditions, and dynamic voltage scaling is considered a key enabler in reducing them. An on-chip process identifier that recovers the margin required due to process variations is described; the proposed architecture adjusts the supply voltage using a hybrid of the one-time voltage-setting and continuous-monitoring modes of operation. The interconnect impact on delay is minimized through a novel adaptive voltage scaling architecture. The proposed system recovers the large delay and voltage margins required by conventional systems by closely tracking the actual critical path at any time. By tracking the actual critical path, the proposed system is robust and more energy efficient than both conventional open-loop and closed-loop systems.
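As a back-of-the-envelope illustration of why voltage scaling is singled out above: dynamic switching energy per operation scales roughly with C*Vdd^2 (and dynamic power with C*Vdd^2*f), so modest supply reductions yield quadratic energy savings. The capacitance and supply values below are arbitrary example numbers, not figures from the dissertation.

```python
# Rough dynamic-energy-per-operation estimate, E = C * Vdd^2, at a few supply voltages.
C = 1e-9                       # switched capacitance per operation (F), illustrative
for vdd in (1.2, 1.0, 0.8, 0.5):
    energy = C * vdd ** 2
    print(f"Vdd = {vdd:.1f} V -> energy/op = {energy * 1e9:.2f} nJ")
```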
|
269 |
Optimal Portfolio Selection Under the Estimation Risk in Mean Return. Zhu, Lei. January 2008.
This thesis investigates robust techniques for mean-variance (MV) portfolio optimization problems under estimation risk in the mean return. We evaluate the performance of the optimal portfolios generated by the min-max robust MV portfolio optimization model. With an ellipsoidal uncertainty set based on the statistics of the sample mean estimates, min-max robust portfolios are equal to those from the standard MV model based on the nominal mean estimates, but with larger risk-aversion parameters. With an interval uncertainty set for the mean return, min-max robust portfolios can vary significantly with the initial data used to generate the uncertainty set. In addition, by focusing on the worst-case scenario in the mean-return uncertainty set, min-max robust portfolios can be too conservative and unable to achieve a high return. Adjusting the conservatism level of min-max robust portfolios can only be achieved by excluding poor mean-return scenarios from the uncertainty set, which runs counter to the principle of min-max robustness. We propose a CVaR robust MV portfolio optimization model in which the estimation risk is measured by the Conditional Value-at-Risk (CVaR). We show that, using CVaR to quantify the estimation risk in the mean return, the conservatism level of CVaR robust portfolios can be adjusted more naturally by gradually including better mean-return scenarios. Moreover, we compare min-max robust portfolios (with an interval uncertainty set for the mean return) and CVaR robust portfolios in terms of actual frontier variation, portfolio efficiency, and portfolio diversification. Finally, a computational method based on a smoothing technique is implemented to solve the optimization problem in the CVaR robust model. We show numerically that, compared with the quadratic programming (QP) approach, the smoothing approach is more computationally efficient for computing CVaR robust portfolios.
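The thesis embeds the CVaR measure inside the optimization itself (solved via a smoothing technique); the sketch below only computes the quantity being penalized, namely the CVaR of a fixed portfolio's mean return across sampled mean-return scenarios. The scenario generator, the confidence level beta, and the equal-weight portfolio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
scenarios = rng.normal(0.005, 0.004, size=(1000, 4))  # sampled mean-return vectors
w = np.array([0.25, 0.25, 0.25, 0.25])                # fixed example portfolio
beta = 0.95

port_means = scenarios @ w                            # portfolio mean return per scenario
var_level = np.quantile(port_means, 1 - beta)         # lower-tail Value-at-Risk
cvar = port_means[port_means <= var_level].mean()     # average over the worst (1-beta) tail
print(var_level, cvar)
```

Including progressively better scenarios in the tail (i.e., lowering beta) is what lets the conservatism level be tuned more gradually than excluding scenarios from a min-max uncertainty set.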
|
270 |
Connected Dominating Set Based Topology Control in Wireless Sensor Networks. He, Jing S. 01 August 2012.
Wireless Sensor Networks (WSNs) are now widely used for monitoring and controlling systems where human intervention is not desirable or possible. Connected Dominating Set (CDS) based topology control in WSNs is a hierarchical method that ensures sufficient coverage while reducing redundant connections in a relatively crowded network. Moreover, the Minimum-sized Connected Dominating Set (MCDS) has become a well-known approach for constructing a Virtual Backbone (VB) that alleviates the broadcast storm and enables efficient routing in WSNs. However, no prior work considers the load-balance factor of CDSs in WSNs. In this dissertation, we first propose a new concept, the Load-Balanced CDS (LBCDS), and a new problem, the Load-Balanced Allocate Dominatee (LBAD) problem. We then propose a two-phase method that solves LBCDS and LBAD one after the other, and a one-phase Genetic Algorithm (GA) that solves both problems simultaneously.
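To make the virtual-backbone idea concrete, the toy sketch below grows a connected dominating set greedily on a small undirected graph. The graph and the highest-new-coverage rule are illustrative assumptions; the dissertation's algorithms additionally balance node load and, later, handle lossy links.

```python
graph = {                      # adjacency lists of a small example network
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4},
    3: {1, 4, 5}, 4: {2, 3, 5}, 5: {3, 4},
}

def greedy_cds(g):
    start = max(g, key=lambda v: len(g[v]))        # highest-degree node seeds the CDS
    cds, covered = {start}, {start} | g[start]
    while covered != set(g):
        # among covered nodes not yet in the CDS (all adjacent to the current backbone),
        # add the one that dominates the most still-uncovered nodes
        candidates = covered - cds
        best = max(candidates, key=lambda v: len(g[v] - covered))
        cds.add(best)
        covered |= g[best]
    return cds

print(greedy_cds(graph))       # a small dominating set whose nodes form a connected backbone
```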
Secondly, since the work mentioned above offers no performance-ratio analysis, three further problems are investigated and analyzed: the MinMax Degree Maximal Independent Set (MDMIS) problem, the Load-Balanced Virtual Backbone (LBVB) problem, and the MinMax Valid-Degree non-Backbone node Allocation (MVBA) problem. Approximation algorithms and a comprehensive theoretical analysis of their approximation factors are presented in the dissertation.
On the other hand, in the current related literature networks are modeled as deterministic: two nodes are assumed to be either connected or disconnected. In most real applications, however, there are many intermittently connected wireless links, called lossy links, which provide only probabilistic connectivity. For WSNs with lossy links, we propose a Stochastic Network Model (SNM). Under this model, we measure the quality of CDSs by their reliability. In this dissertation, we construct an MCDS whose reliability is above a preset, application-specified threshold, called a Reliable MCDS (RMCDS). We propose a novel Genetic Algorithm (GA) with immigrant schemes, called RMCDS-GA, to solve the RMCDS problem.
Finally, we apply the constructed LBCDS to a practical application under the realistic SNM model, namely data aggregation. Specifically, a new problem, the Load-Balanced Data Aggregation Tree (LBDAT) problem, is introduced. Our simulation results show that the proposed algorithms significantly outperform the existing state-of-the-art approaches.
|