81 |
The Momentum Effect: Evidence from the Swedish stock market
Vilbern, Marcus (January 2008)
This thesis investigates the profitability of the momentum strategy in the Swedish stock market. The momentum strategy is an investment strategy where past winners are bought and past losers are sold short. In this paper, Swedish stocks are analyzed during the period 1999–2007 with the approach first used by Jegadeesh and Titman (1993). The results indicate that momentum investing is profitable on the Swedish market. The main contribution to the profits comes from investing in winners, while the losers in most cases do not contribute at all to total profits. The profits remain after correcting for transaction costs for longer-term strategies, while they diminish for the shorter-term ones. Compared to the market index, buying past winners yields an excess return, while short selling of losers tends to make index investing more profitable. The analysis also shows that momentum cannot be explained by the systematic risk of the individual stocks. The evidence in support of a momentum effect presented in this thesis also implies that predictable price patterns can be used to make excess returns; this contradicts the efficient market hypothesis.
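To make the strategy concrete, a minimal sketch of a Jegadeesh-Titman style backtest follows; the 6-month formation and holding windows, the decile cutoff, and the synthetic monthly returns are illustrative assumptions, not the thesis's exact specification.

```python
# A minimal sketch of a Jegadeesh-Titman style momentum backtest.
# Assumes monthly simple returns in a DataFrame (rows: months, columns:
# stocks); window lengths and decile cutoff are illustrative only.
import numpy as np
import pandas as pd

def momentum_returns(returns: pd.DataFrame, formation=6, holding=6, decile=0.1):
    """Long past winners, short past losers; re-form the portfolio each month."""
    # Cumulative return over the formation window, measured through month t-1
    formation_ret = (1 + returns).rolling(formation).apply(np.prod, raw=True) - 1
    profits = []
    for t in range(formation, len(returns) - holding):
        ranks = formation_ret.iloc[t - 1].dropna().sort_values()
        n = max(int(len(ranks) * decile), 1)
        losers, winners = ranks.index[:n], ranks.index[-n:]
        hold = returns.iloc[t:t + holding]
        # Equal-weighted winner-minus-loser return over the holding period
        profits.append(hold[winners].mean(axis=1).mean()
                       - hold[losers].mean(axis=1).mean())
    return pd.Series(profits)

# Example with synthetic data (50 stocks, 120 months)
rng = np.random.default_rng(0)
rets = pd.DataFrame(rng.normal(0.01, 0.05, size=(120, 50)))
print(momentum_returns(rets).mean())
```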
|
82 |
Energy efficient branch prediction
Hicks, Michael Andrew (January 2010)
Energy efficiency is of the utmost importance in modern high-performance embedded processor design. As the number of transistors on a chip continues to increase each year, and processor logic becomes ever more complex, the dynamic switching power cost of running such processors increases. The continual progression in fabrication processes brings a reduction in the feature size of the transistor structures on chips with each new technology generation. This reduction in size increases the significance of leakage power (a constant drain that is proportional to the number of transistors). Particularly in embedded devices, the proportion of an electronic product's power budget accounted for by the CPU is significant (often as much as 50%). Dynamic branch prediction is a hardware mechanism used to forecast the direction, and target address, of branch instructions. This is essential to high-performance pipelined and superscalar processors, where the direction and target of branches are not computed until several stages into the pipeline. Accurate branch prediction also acts to increase energy efficiency by reducing the amount of time spent executing mis-speculated instructions. 'Stalling' is no longer a sensible option when the significance of static power dissipation is considered. Dynamic branch prediction logic typically accounts for over 10% of a processor's global power dissipation, making it an obvious target for energy optimisation. Previous approaches to increasing the energy efficiency of dynamic branch prediction logic have focused on either fully dynamic or fully static techniques. Dynamic techniques include the introduction of a new cache-like structure that can decide whether branch prediction logic should be accessed for a given branch, while static techniques tend to focus on scheduling around branch instructions so that a prediction is not needed (or the branch is removed completely). This dissertation explores a method of combining static techniques and profiling information with simple hardware support in order to reduce the number of accesses made to a branch predictor. The local delay region is used on unconditional absolute branches to avoid prediction, and, for most other branches, Adaptive Branch Bias Measurement (through profiling) is used to assign a static prediction that is as accurate as a dynamic prediction for that branch. This information is represented as two hint bits in branch instructions, which are then interpreted by simple hardware logic that bypasses both the lookup and update phases of the predictor for appropriate branches. The global processor power saving achieved by this Combined Algorithm is around 6% on the experimental architectures shown, which are based upon real contemporary embedded architecture specifications. The introduction of the Combined Algorithm also significantly reduces the execution time of programs on Multiple Instruction Issue processors; this is attributed to the increase achieved in global prediction accuracy.
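As a rough illustration of how profiling can assign static predictions, here is a minimal sketch of bias measurement over a branch trace; the hint encodings, the 95% bias threshold, and the trace format are illustrative assumptions rather than the dissertation's actual mechanism.

```python
# A minimal sketch of profile-guided static branch hinting in the spirit
# of Adaptive Branch Bias Measurement. Two hint "bits" select between
# dynamic prediction and a static prediction; threshold and trace format
# are illustrative assumptions.
from collections import Counter

HINT_DYNAMIC, HINT_STATIC_TAKEN, HINT_STATIC_NOT_TAKEN = 0b00, 0b01, 0b10

def assign_hints(trace, threshold=0.95):
    """trace: iterable of (branch_pc, taken) pairs from a profiling run."""
    taken, total = Counter(), Counter()
    for pc, was_taken in trace:
        total[pc] += 1
        taken[pc] += was_taken
    hints = {}
    for pc in total:
        bias = taken[pc] / total[pc]
        if bias >= threshold:
            hints[pc] = HINT_STATIC_TAKEN        # predictor lookup/update bypassed
        elif bias <= 1 - threshold:
            hints[pc] = HINT_STATIC_NOT_TAKEN    # predictor lookup/update bypassed
        else:
            hints[pc] = HINT_DYNAMIC             # fall back to the hardware predictor
    return hints

trace = ([(0x400, True)] * 98 + [(0x400, False)] * 2
         + [(0x404, True)] * 50 + [(0x404, False)] * 50)
print(assign_hints(trace))  # 0x400 -> HINT_STATIC_TAKEN (1), 0x404 -> HINT_DYNAMIC (0)
```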
|
83 |
An Algorithm for Efficient Computation of the Fast Fourier Transform Over Arbitrary Frequency Intervals
DaBell, Steve (October 1994)
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California
In many signal processing and telemetry applications, only a portion of the Discrete Fourier Transform (DFT) of a data sequence is of interest. This paper develops an algorithm which enables computation of the FFT only over the frequency values of interest, reducing the computational complexity. As will be shown, the algorithm is also very modular, which lends itself to efficient parallel processing implementation. This paper will begin by developing the frequency-selective FFT algorithm, and conclude with a comparative analysis of the computational complexity of the algorithm with respect to that of the traditional FFT.
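The core idea, evaluating the DFT only at the bins of interest, can be sketched directly; this naive per-bin evaluation is shown only to make the concept concrete, not as the paper's (more efficient, modular) algorithm.

```python
# A minimal sketch of evaluating the DFT only over a band of bins,
# illustrating the idea of trading the full N log N FFT for O(N) work
# per bin of interest. The paper's actual algorithm is more refined.
import numpy as np

def partial_dft(x, k_lo, k_hi):
    """DFT of x at bins k_lo..k_hi-1 only (direct evaluation)."""
    N = len(x)
    n = np.arange(N)
    k = np.arange(k_lo, k_hi)[:, None]
    return np.exp(-2j * np.pi * k * n / N) @ x

x = np.random.default_rng(1).normal(size=1024)
band = partial_dft(x, 100, 108)
assert np.allclose(band, np.fft.fft(x)[100:108])  # matches the full FFT on that band
```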
|
84 |
Hybrid spintronics and straintronics: An ultra-low-energy computing paradigm
Roy, Kuntal (24 July 2012)
The primary obstacle to continued downscaling of charge-based electronic devices in accordance with Moore's law is the excessive energy dissipation that takes place in the device during the switching of bits. Unlike charge-based devices, spin-based devices are switched by flipping spins without moving charge in space. Although some energy is still dissipated in flipping spins, it can be considerably less than the energy associated with current flow in charge-based devices. Unfortunately, this advantage will be squandered if the method adopted to switch the spin is so energy-inefficient that the energy dissipated in the switching circuit far exceeds the energy dissipated inside the system. Regrettably, this is often the case, e.g., when switching spins with a magnetic field or with the spin-transfer-torque mechanism. In this dissertation, it is shown theoretically that the magnetization of two-phase multiferroic single-domain nanomagnets can be switched very energy-efficiently, more so than in any device currently extant, possibly leading to new magnetic logic and memory systems that might be an important contributor to Beyond-Moore's-Law technology. A multiferroic composite structure consists of a layer of piezoelectric material in intimate contact with a magnetostrictive layer. When a tiny voltage of a few millivolts is applied across the structure, it generates strain in the piezoelectric layer, and the strain is transferred to the magnetostrictive nanomagnet. This strain generates magnetostrictive anisotropy in the nanomagnet and thus rotates its direction of magnetization, resulting in magnetization reversal or a 'bit flip'. It is shown after detailed analysis that full 180-degree switching of magnetization can occur in the "symmetric" potential landscape of the magnetostrictive nanomagnet, even in the presence of room-temperature thermal fluctuations, which differs from the conventional view of binary switching. With a proper choice of materials, the energy dissipated in the bit flip can be made as low as one attojoule at room temperature. Also, sub-nanosecond switching delay can be achieved, so that the device is adequately fast for general-purpose computing. The idea explored in this dissertation has the potential to produce an extremely low-power, yet high-density and high-speed, non-volatile magnetic logic and memory system. Such processors would be well suited for embedded applications, e.g., implantable medical devices that could run on energy harvested from the patient's body motion.
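As a rough plausibility check of the attojoule figure: the dominant dissipation in charging the piezoelectric layer scales as (1/2)CV². The capacitance below is an illustrative assumption, not a value from the dissertation.

```python
# Back-of-the-envelope check of the ~1 attojoule claim via E = (1/2) C V^2.
C = 20e-15   # assumed piezoelectric layer capacitance: 20 fF (illustrative)
V = 10e-3    # "tiny voltage of a few millivolts": 10 mV
E = 0.5 * C * V**2
print(f"{E:.2e} J")  # 1.00e-18 J, i.e. ~1 aJ -- consistent with the abstract
```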
|
85 |
Green Clusters
Vašut, Marek (January 2015)
The thesis evaluates the viability of reducing the power consumption of a contemporary computer cluster by using more power-efficient hardware components. The cluster in question runs a Map-Reduce algorithm implementation, and the worker nodes consist of either systems with an ARM CPU or systems which combine both an ARM CPU and an FPGA in a single package. The behavior of such a cluster is discussed from both the performance and the power consumption perspectives. The text discusses the problems and peculiarities of integrating ARM-based, and especially the combined ARM-FPGA-based, systems into the Map-Reduce framework. The Map-Reduce framework performance itself is evaluated to identify the gravest performance bottlenecks when using the framework in an environment with ARM systems.
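For readers unfamiliar with the programming model being benchmarked, a toy word-count in the Map-Reduce style is sketched below; the thesis runs a full Map-Reduce framework on ARM and ARM+FPGA nodes, not this simplification.

```python
# A toy word-count in the Map-Reduce style, shown only to make the
# programming model concrete. The map phase emits (key, value) pairs;
# the reduce phase aggregates values per key.
from itertools import groupby
from operator import itemgetter

def map_phase(doc):                      # mapper: emit (word, 1) pairs
    return [(w, 1) for w in doc.split()]

def reduce_phase(pairs):                 # reducer: sum counts per key
    pairs = sorted(pairs, key=itemgetter(0))
    return {k: sum(v for _, v in g) for k, g in groupby(pairs, key=itemgetter(0))}

docs = ["green clusters", "green power efficient clusters"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(pairs))  # {'clusters': 2, 'efficient': 1, 'green': 2, 'power': 1}
```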
|
86 |
Efficient, scalable, and fair read-modify-writes
Rajaram, Bharghava (January 2015)
Read-Modify-Write (RMW) operations, or atomics, have widespread application in (a) synchronization, where they are used as building blocks of various synchronization constructs like locks, barriers, and lock-free data structures; (b) supervised memory systems, where every memory operation is effectively an RMW that reads and modifies metadata associated with memory addresses; and (c) profiling, where RMW instructions are used to increment shared counters to convey meaningful statistics about a program. In each of these scenarios, the RMWs pose a bottleneck to performance and scalability. We observed that the cost of RMWs is dependent on two major factors – the memory ordering enforced by the RMW, and contention amongst processors performing RMWs to the same memory address. In the case of both synchronization and supervised memory systems, the RMWs are expensive due to the memory ordering enforced by the atomic RMW operation. Performance overhead due to contention is more prevalent in parallel programs which frequently make use of RMWs to update concurrent data structures in a non-blocking manner. Such programs also suffer from a degradation in fairness amongst concurrent processors. In this thesis, we study the cost of RMWs in the above applications, and present solutions to obtain better performance and scalability from RMW operations. Firstly, this thesis tackles the large overhead of RMW instructions when used for synchronization in the widely used x86 processor architectures, such as those from Intel, AMD, and Sun. The x86 processor architecture implements a variation of the Total-Store-Order (TSO) memory consistency model. RMW instructions in existing TSO architectures (we call them type-1 RMWs) are ordered like memory fences, which makes them expensive. The strong fence-like ordering of type-1 RMWs is unnecessary for the memory ordering required by synchronization. We propose weaker RMW instructions for TSO consistency; we consider two weaker definitions: type-2 and type-3, each causing subtle ordering differences. Type-2 and type-3 RMWs avoid the fence-like ordering of type-1 RMWs, thereby reducing their overhead. Recent work has shown that the new C/C++11 memory consistency model can be realized by generating type-1 RMWs for SC-atomic-writes and/or SC-atomic-reads. We formally prove that this is equally valid for the proposed type-2 RMWs, and partially for type-3 RMWs. We also propose efficient implementations for type-2 (type-3) RMWs. Simulation results show that our implementation reduces the cost of an RMW by up to 58.9% (64.3%), which translates into an overall performance improvement of up to 9.0% (9.2%) for the programs considered. Next, we argue the case for an efficient and correct supervised memory system for the TSO memory consistency model. Supervised memory systems make use of RMW-like supervised memory instructions (SMIs) to atomically update metadata associated with every memory address used by an application program. Such a system is used to help increase the reliability, security and accuracy of parallel programs by offering debugging/monitoring features. Most existing supervised memory systems assume a sequentially consistent memory. For weaker consistency models, like TSO, correctness issues (like imprecise exceptions) arise if the ordering requirement of SMIs is neglected. In this thesis, we show that it is sufficient for supervised instructions to only read and process their metadata in order to ensure correctness.
We propose SuperCoP, a supervised memory system for relaxed memory models in which SMIs read and process metadata before retirement, while allowing data and metadata writes to retire into the write-buffer. Our experimental results show that SuperCoP performs better than the existing state-of-the-art correct supervision system by 16.8%. Finally, we address the issue of contention and contention-based failure of RMWs in non-blocking synchronization mechanisms. We leverage the fact that most existing lock-free programs make use of compare-and-swap (CAS) loops to access the concurrent data structure. We propose DyFCoM (Dynamic Fairness and Contention Management), a holistic scheme which addresses both throughput and fairness under increased contention. DyFCoM monitors the number of successful and failed RMWs in each thread, and uses this information to implement a dynamic backoff scheme to optimize throughput. We also use this information to throttle faster threads and give slower threads a higher chance of performing their lock-free operations, to increase fairness among threads. Our experimental results show that our contention management scheme alone performs better than the existing state-of-the-art CAS contention management scheme by an average of 7.9%. When fairness management is included, our scheme provides an average of 3.4% performance improvement over the constant backoff scheme, while showing increased fairness values in all cases (up to 43.6%).
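A minimal sketch of the kind of CAS retry loop with failure-driven backoff that such a scheme manages is shown below, in the spirit of the contention-management half of DyFCoM; the CAS is emulated with a lock (Python exposes no hardware compare-and-swap), and the backoff constants and success/failure accounting are illustrative assumptions, not the thesis's exact policy.

```python
# A CAS retry loop with dynamic, failure-driven backoff. Each thread tracks
# its own CAS failures and scales its backoff accordingly; the hardware CAS
# is emulated with a lock, and all constants are illustrative.
import threading, time, random

class AtomicCell:
    def __init__(self, value=0):
        self._value, self._lock = value, threading.Lock()
    def cas(self, expected, new):
        with self._lock:                       # stands in for a hardware CAS
            if self._value == expected:
                self._value = new
                return True
            return False
    def load(self):
        return self._value

def increment_with_backoff(cell, iters, base=1e-6, cap=1e-3):
    failures = 0
    for _ in range(iters):
        while True:
            old = cell.load()
            if cell.cas(old, old + 1):
                failures = max(failures - 1, 0)    # success: shrink backoff
                break
            failures += 1                          # failure: back off longer
            time.sleep(min(base * (2 ** failures), cap) * random.random())

cell = AtomicCell()
threads = [threading.Thread(target=increment_with_backoff, args=(cell, 1000))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.load())  # 4000: every increment eventually succeeds
```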
|
87 |
Groundwater resource management subject to droughts: analysis of adaptation strategies
Frutos Cachorro, Julia de (8 July 2014)
The management of a groundwater resource used for irrigation depends on several factors and concerns various actors (users and a manager). Moreover, the resource can be subject to droughts, in which case its management becomes a more complex problem. It is important to understand and anticipate droughts, because they can have significant impacts on agricultural activity and on the water table of the resource. Adaptation hinges crucially on the information available to the manager and the resource users. In chapters 2 and 3, we analyze the impact of a hydrological drought on the optimal management of the resource, before and after its arrival. In chapter 2, we show how the manager can adapt as well as possible to this drought according to the nature of the information he has. In chapter 3, we show that taking into account strategic and dynamic interactions between the users of the resource leads to less efficient resource use. We apply the models of chapters 2 and 3 to the Western La Mancha aquifer in southern Spain. In chapter 4, we study the impact of an agronomic drought on the optimal management of a farm in the area of central Beauce, in France. We take into account hydrological, agronomic and economic information. In particular, we analyze the impact of a dry year on the farm's added value and on the groundwater resource used. Furthermore, we study the strategic behavior of farmers in a dry year, whether or not they are subject to water use restrictions. We show that a regulation policy is necessary to avoid overexploitation of the aquifer in a dry year.
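As a purely illustrative sketch of the kind of single-cell aquifer dynamics that underlie such analyses (not the models actually used in the thesis), consider a water table that falls with net extraction and recovers with recharge, with a drought as a temporary drop in recharge:

```python
# A toy single-cell aquifer model: the water table falls with net
# extraction and recovers with recharge; a drought halves recharge for a
# few years. All parameter values are illustrative assumptions, not
# calibrated to La Mancha or Beauce.
def simulate(years=20, h0=50.0, recharge=2.0, drought_years=(5, 6, 7)):
    h, path = h0, []
    for t in range(years):
        r = 0.5 * recharge if t in drought_years else recharge
        pumping = 2.5 if h > 30 else 0.5     # simple adaptive extraction rule
        h = h + r - pumping                  # water-table balance (m/year)
        path.append(round(h, 2))
    return path

print(simulate())  # the drought years show up as a steeper decline
```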
|
88 |
SmartCell: An Energy Efficient Reconfigurable Architecture for Stream Processing
Liang, Cao (4 May 2009)
Data streaming applications, such as signal processing and multimedia, often require high computing capacity, yet also have stringent power constraints, especially in portable devices. General purpose processors can no longer meet these requirements due to their sequential software execution. Although fixed-logic ASICs are usually able to achieve the best performance and energy efficiency, ASIC solutions are expensive to design and their lack of flexibility makes them unable to accommodate functional changes or new system requirements. Reconfigurable systems have long been proposed to bridge the gap between the flexibility of software processors and the performance of hardware circuits. Unfortunately, mainstream reconfigurable FPGA designs suffer from high costs in area, power consumption and speed due to the routing area overhead and timing penalty of their bit-level fine granularity. In this dissertation, we present the architecture design, application mapping and performance evaluation of a novel coarse-grained reconfigurable architecture, named SmartCell, for data streaming applications. The system tiles a large number of computing cell units in a 2D mesh structure, with four coarse-grained processing elements developed inside each cell to form a quad structure. Based on this structure, a hierarchical reconfigurable network is developed to provide flexible on-chip communication among computing resources, including a fully connected crossbar, nearest-neighbor connections and a clustered mesh network. SmartCell can be configured to operate in various computing modes, including SIMD, MIMD and systolic array styles, to fit different application requirements. The coarse-grained SmartCell has the potential to improve power and energy efficiency compared with fine-grained FPGAs. It is also able to provide high performance comparable to fixed-function ASICs through deep pipelining and a large amount of computing parallelism. Dynamic reconfiguration is also addressed in this dissertation. To evaluate its performance, a set of benchmark applications has been successfully mapped onto the SmartCell system, ranging from signal processing and multimedia applications to scientific computing and data encryption. A 4 by 4 SmartCell prototype system was initially designed as a CMOS standard-cell ASIC in a 130 nm process. The chip occupies 8.2 mm² and dissipates 1.6 mW/MHz in full operation. The results show that SmartCell can bridge the performance and flexibility gap between logic-specific ASICs and reconfigurable FPGAs. SmartCell is also about 8% and 69% more energy efficient, and achieves 4x and 2x throughput gains, compared with the Montium and RaPiD CGRAs, respectively. Based on our first SmartCell prototype experiences, an improved SmartCell-II architecture was developed, which includes distributed data memory, a segmented instruction format and improved dynamic configuration schemes. A novel parallel FFT algorithm with balanced workloads and optimized data flow was also proposed and successfully mapped onto SmartCell-II for performance evaluation. A 4 by 4 SmartCell-II prototype was then synthesized into a standard-cell ASIC in a 90 nm process. The results show that SmartCell-II consists of 2.0 million gates and is fully functional at up to 295 MHz with 3.1 mW/MHz power consumption. SmartCell-II is about 3.6 and 28.9 times more energy efficient than a Xilinx FPGA and TI's high-performance DSPs, respectively.
It is concluded that the SmartCell is able to provide a promising solution to achieve high performance and energy efficiency for future data streaming applications.
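A quick unit conversion of the quoted power figures (taken directly from the abstract) shows what the mW/MHz metric implies in watts at the stated clock:

```python
# Converting the abstract's mW/MHz figures into absolute power at the
# stated maximum clock. All numbers are taken from the abstract above.
proto1_mw_per_mhz = 1.6   # 130 nm SmartCell prototype
proto2_mw_per_mhz = 3.1   # 90 nm SmartCell-II
f_max_mhz = 295           # SmartCell-II maximum clock

watts = proto2_mw_per_mhz * f_max_mhz / 1000
print(f"SmartCell-II at {f_max_mhz} MHz: {watts:.2f} W")  # 0.91 W
```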
|
89 |
Tailoring titanium dioxide thin films for photocatalysis and energy efficient glazing via dye-sensitised solar cells
Anderson, Ann-Louise (January 2017)
This thesis focuses on the synthesis and characterisation of titanium dioxide (TiO2) thin films for photocatalytic applications and for use in semi-transparent dye-sensitised solar cells for energy efficient glazing. Several synthetic methods for the production of TiO2 thin films are explored, including sol-gel processing, aerosol-assisted chemical vapour deposition (CVD) and hybrid combinatorial CVD. For sol-gel processing, two different precursors were studied: titanium tetra-isopropoxide (TTIP) and titanium bis-ammonium lactato dihydroxide (TiBALD). Non-ionic surfactants (Tween 20, 40, 60 and Brij 58 and 98) were successfully incorporated into all three methods to produce TiO2 thin films with modified morphology and microstructure and, in some cases, enhanced functional properties. All films are fully characterised using scanning electron microscopy, X-ray diffraction, atomic force microscopy, Raman spectroscopy, UV-Vis spectroscopy and contact angle analysis, as well as assessed for photocatalytic performance with resazurin 'intelligent' ink. Photocatalytic performance has been used as an indicator of performance in dye-sensitised solar cells (DSSCs). The best photocatalytic performances, with half-lives as short as 2 minutes, were obtained for thin films produced with the addition of Brij surfactants. A selection of thin films was tested in semi-transparent DSSC devices with up to 70% transparency, to determine their overall potential for use in energy-efficient glazing. Three DSSC device configurations were tested, whereby the optimum configuration used N3 "black" dye with a dye loading time of 42 hours in combination with a high performance iodine electrolyte and a platinum counter electrode. The highest power conversion efficiencies (PCE) obtained were in the region of 0.1-0.3%, with the highest PCE of 0.3814% obtained with a 3-layer TTIP sol-gel derived Brij 58 thin film (0.0006 mol dm⁻³), which exhibited a short-circuit current density of 0.857 mA/cm², an open-circuit voltage of 0.71 V and a fill factor of 0.60.
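As a consistency check on the reported device parameters, the power conversion efficiency follows from PCE = Jsc × Voc × FF / Pin; the 100 mW/cm² AM1.5G input power is the standard test-condition assumption, not stated in the abstract:

```python
# Consistency check of the reported DSSC parameters via
# PCE = Jsc * Voc * FF / P_in. P_in is the assumed AM1.5G standard.
J_sc = 0.857   # short-circuit current density, mA/cm^2 (from the abstract)
V_oc = 0.71    # open-circuit voltage, V (from the abstract)
FF   = 0.60    # fill factor (from the abstract)
P_in = 100.0   # assumed AM1.5G illumination, mW/cm^2

pce = J_sc * V_oc * FF / P_in * 100
print(f"PCE = {pce:.2f} %")  # ~0.37 %, close to the reported 0.38 %
```

The small gap to the reported 0.3814% is consistent with the quoted fill factor and currents being rounded values.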
|
90 |
Pharmacometrically driven optimisation of dose regimens in clinical trials
Soeny, Kabir (January 2017)
The dose regimen of a drug gives important information about the dose sizes, dose frequency and the duration of treatment. Optimisation of dose regimens is critical to ensure the therapeutic success of the drug and to minimise its possible adverse effects. The central theme of this thesis is the Efficient Dosing (ED) algorithm, a computational algorithm developed by us for the optimisation of dose regimens. In this thesis, we develop a quantitative framework for measuring the efficiency of a dose regimen against specified criteria and for computing the most efficient dose regimen using the ED algorithm. The criteria we consider seek to prevent over- and under-exposure to the drug. For example, one criterion is to maintain the drug's concentration around a desired target level; another is to maintain the concentration within a therapeutic range or window. The ED algorithm and its various extensions are programmed in MATLAB. Some distinguishing features of our methods are: mathematical explicitness in the optimisation process for a general objective function, creation of a theoretical base for drawing comparisons among competing dose regimens, adaptability to any drug for which the PK model is known, and other computational features. We develop the algorithm further to compute the optimal ratio of two partner drugs in a fixed dose combination unit, together with the efficient dose regimens. In clinical trials, the parameters of the PK model followed by the drug are often unknown. We develop a methodology for applying our algorithm in an adaptive setting, which enables estimation of the parameters while optimising the dose regimens for the typical subject in each cohort. A potential application of the ED algorithm to the individualisation of dose regimens is discussed. We also discuss an application to the computation of efficient dose regimens for obliteration of a pre-specified viral load.
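As an illustration of the kind of criterion such an algorithm optimises, the sketch below scores candidate regimens by the squared deviation of the predicted concentration from a target level; the one-compartment bolus PK model and all parameter values are illustrative assumptions, and the thesis's ED algorithm is more general:

```python
# Scoring dose regimens against a target concentration under an assumed
# one-compartment IV-bolus PK model: C(t) = sum over doses of
# (D/V) * exp(-k (t - t_dose)). All parameters are illustrative.
import numpy as np

def concentration(t, dose_times, doses, V=20.0, k=0.1):
    """Superposed one-compartment bolus concentrations."""
    c = np.zeros_like(t, dtype=float)
    for td, d in zip(dose_times, doses):
        c += np.where(t >= td, (d / V) * np.exp(-k * (t - td)), 0.0)
    return c

def deviation_from_target(dose_times, doses, target=5.0, horizon=48.0):
    t = np.linspace(0, horizon, 500)
    return np.mean((concentration(t, dose_times, doses) - target) ** 2)

# Compare two candidate regimens with the same total dose
print(deviation_from_target([0, 12, 24, 36], [120, 120, 120, 120]))
print(deviation_from_target([0, 24], [240, 240]))  # fewer, larger doses deviate more
```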
|