191.
A model for managing pension funds with benchmarking in an inflationary market. Nsuami, Mozart. January 2011.
Aggressive fiscal and monetary policies by governments and central banks in developed markets could push inflation to very high levels in the long run. Owing to decreasing pension fund benefits and a rising inflation rate, pension companies are selling inflation-linked products to hedge against inflation risk. Such companies seriously consider the possible effects of inflation volatility on their investments, and some tend to include inflationary allowances in the pension payment plan. In this dissertation we study the management of pension funds of the defined contribution type in the presence of inflation-recession. We study how the fund manager maximizes the fund's wealth when salaries and stocks are affected by inflation. In this regard, we consider the case of a pension company that invests in a stock, inflation-linked bonds, and a money market account, while basing its investment on the contribution of the plan member. We use a benchmarking approach and martingale methods to compute an optimal strategy that maximizes the fund's wealth.
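The dissertation's benchmarking and martingale machinery is beyond the scope of an abstract, but the flavor of such optimal-allocation results can be sketched with the classical Merton fraction, to which constant-coefficient models of this kind typically reduce. All parameter values below are illustrative assumptions, not figures from the dissertation.

```python
# Illustrative sketch (an assumption, not the dissertation's model): a CRRA
# investor with risk aversion gamma, one risky asset with drift mu and
# volatility sigma, and a riskless rate r. The classical Merton rule gives
# the optimal constant fraction of wealth held in the risky asset.
def merton_fraction(mu, r, sigma, gamma):
    """Optimal risky-asset weight: pi* = (mu - r) / (gamma * sigma**2)."""
    return (mu - r) / (gamma * sigma ** 2)

# Purely illustrative parameters: 8% stock drift, 3% riskless rate,
# 20% volatility, risk aversion 4.
pi_star = merton_fraction(mu=0.08, r=0.03, sigma=0.20, gamma=4.0)
print(round(pi_star, 4))  # 0.3125: about 31% of fund wealth in the stock
```

In richer models with inflation-linked bonds, the optimal weights take analogous ratio forms, with the benchmark portfolio shifting the effective target.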
192.
Modeling and Control of Flexible Manipulators. Moberg, Stig. January 2010.
Industrial robot manipulators are general-purpose machines used for industrial automation in order to increase productivity, flexibility, and product quality. Other reasons for using industrial robots are cost savings and the elimination of hazardous and unpleasant work. Robot motion control is a key competence for robot manufacturers, and current development is focused on increasing robot performance, reducing robot cost, improving safety, and introducing new functionality. There is therefore a need to continuously improve the mathematical models and control methods in order to fulfil conflicting requirements, such as increased performance for a weight-reduced robot with lower mechanical stiffness and more complicated vibration modes. One reason for this development of the robot mechanical structure is of course cost reduction, but other benefits follow as well, such as lower environmental impact, lower power consumption, improved dexterity, and higher safety. This thesis deals with different aspects of modeling and control of flexible, i.e., elastic, manipulators. For an accurate description of a modern industrial manipulator, this thesis shows that the traditional flexible joint model described in the literature is not sufficient. An improved model, in which the elasticity is described by a number of localized multidimensional spring-damper pairs, is therefore proposed: the extended flexible joint model. The main contributions of this work are the design and analysis of identification methods and of inverse dynamics control methods for the extended flexible joint model. The proposed identification method is a frequency-domain nonlinear gray-box method, which is evaluated by identifying a modern six-axis robot manipulator; the identified model gives a good description of the global behavior of this robot. The inverse dynamics problem is discussed, and a solution methodology based on the solution of a differential algebraic equation (DAE) is proposed. The inverse dynamics solution is then used for feedforward control of both a simulated manipulator and a real robot manipulator. The last part of this work concerns feedback control. First, a model-based nonlinear feedback control (feedback linearization) is evaluated and compared to a model-based feedforward control algorithm. Finally, two benchmark problems for robust feedback control of a flexible manipulator are presented and some proposed solutions are analyzed.
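As a rough illustration of the modeling problem (not the thesis's extended model), the classical flexible joint model couples motor and arm inertias through a single spring-damper pair. A minimal simulation sketch, with assumed parameter values:

```python
# Minimal sketch of the classical flexible joint model that the thesis
# extends: motor inertia Jm drives arm inertia Ja through one lumped
# spring-damper (stiffness k, damping d). All parameter values are
# illustrative assumptions; the extended model adds further localized
# multidimensional spring-damper pairs.
def simulate_arm(tau, Jm=1.0, Ja=2.0, k=100.0, d=0.5, dt=1e-3, steps=5000):
    qm = qa = wm = wa = 0.0          # motor/arm angles and angular velocities
    arm_angle = []
    for _ in range(steps):
        joint = k * (qm - qa) + d * (wm - wa)   # torque through the joint
        wm += dt * (tau - joint) / Jm           # motor-side dynamics
        wa += dt * joint / Ja                   # arm-side dynamics
        qm += dt * wm
        qa += dt * wa
        arm_angle.append(qa)
    return arm_angle

arm = simulate_arm(tau=1.0)                 # constant motor torque
print(arm[-1] > arm[len(arm) // 2] > 0.0)   # True: the arm keeps accelerating
```

Even this two-mass model exhibits the joint resonance that makes naive rigid-body feedforward insufficient, which is the motivation for the DAE-based inverse dynamics above.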
193.
3D-mesh segmentation: automatic evaluation and a new learning-based method. Benhabiles, Halim. 18 October 2011.
In this thesis, we address two main problems: the quantitative evaluation of mesh segmentation algorithms, and learning-based mesh segmentation that exploits the human factor. We propose the following contributions. First, a benchmark dedicated to the evaluation of 3D-mesh segmentation algorithms. The benchmark includes a corpus of ground-truth segmentations produced by volunteers, together with a new, relevant similarity metric that quantifies the consistency between these ground-truth segmentations and those produced automatically by a given algorithm on the same models. In addition, we conduct a set of experiments, including a subjective experiment, to respectively demonstrate and validate the relevance of our benchmark. Second, a learning-based segmentation algorithm. A boundary-edge function is learned from a set of ground-truth segmentations using several geometric criteria; this function is then used, through a processing pipeline, to segment a new 3D mesh. Through a series of experiments on different benchmarks, we show the excellent performance of our algorithm compared with the state of the art. We also present an application of our segmentation algorithm to the extraction of kinematic skeletons for dynamic 3D meshes.
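A toy sketch of the learning idea, under heavy simplification: a single assumed geometric feature (a dihedral-angle-like value per edge) and a logistic boundary-edge function fitted to synthetic "ground truth" labels. The real method uses several geometric criteria and human-made ground-truth segmentations.

```python
import math, random

# Toy sketch (an assumption, far simpler than the thesis's method): each mesh
# edge is summarized by one geometric feature standing in for a dihedral
# angle, and a logistic "boundary edge" function is fitted by stochastic
# gradient descent to synthetic ground-truth labels.
random.seed(0)
angles = [random.uniform(0.0, math.pi) for _ in range(200)]
labels = [1 if a > 2.0 else 0 for a in angles]      # synthetic "ground truth"

w = b = 0.0
for _ in range(2000):                               # plain logistic regression
    for a, y in zip(angles, labels):
        p = 1.0 / (1.0 + math.exp(-(w * a + b)))
        w += 0.1 * (y - p) * a
        b += 0.1 * (y - p)

def boundary_prob(angle):
    """Learned probability that an edge with this feature lies on a boundary."""
    return 1.0 / (1.0 + math.exp(-(w * angle + b)))

print(boundary_prob(3.0) > 0.9, boundary_prob(0.5) < 0.1)
```

In the thesis's pipeline, such a per-edge score would then feed a downstream processing chain that extracts closed, contiguous segment boundaries on the mesh.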
194.
DSP Platform Benchmarking. Xinyuan, Luo. January 2009.
In this thesis, benchmarking of DSP kernel algorithms was conducted on a DSP processor used for teaching in the course TESA26 at the Department of Electrical Engineering. It covers benchmarking of cycle count and memory usage. The goal of the thesis is to evaluate the quality of a single-MAC DSP instruction set and, accordingly, to provide suggestions for further improvement of the instruction set architecture. The scope of the thesis is limited to benchmarking the processor based on assembly coding only; quality checks of the compiler are not included. The benchmarking method is the one proposed by BDTI (Berkeley Design Technology, Inc.), which is the methodology generally used in the worldwide DSP industry. Proposed assembly instruction set improvements include enhancements for FFT and DCT. The cycle cost of the new FFT benchmark based on the proposal was XX% lower, showing that the proposal was sound. Results also show that the proposal improves the cycle cost score for matrix computations, especially matrix multiplication. The benchmark results were compared with general scores for single-MAC DSP processors published by BDTI.
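Cycle-count benchmarking of this style can be illustrated with a back-of-envelope model for the FFT kernel. The per-butterfly cycle costs below are invented for illustration; they are not figures from the thesis or from BDTI.

```python
import math

# Back-of-envelope sketch of cycle-count benchmarking: estimate the FFT
# kernel's cycle cost from its butterfly count under two assumed
# instruction-set models (the costs per butterfly are illustrative
# assumptions only).
def fft_butterflies(n):
    """A radix-2 FFT of length n performs (n/2) * log2(n) butterflies."""
    return (n // 2) * int(math.log2(n))

def cycle_estimate(n, cycles_per_butterfly):
    return fft_butterflies(n) * cycles_per_butterfly

baseline = cycle_estimate(256, cycles_per_butterfly=10)  # original single-MAC ISA
improved = cycle_estimate(256, cycles_per_butterfly=8)   # with assumed FFT support
saving = 100.0 * (baseline - improved) / baseline
print(baseline, improved, saving)  # 10240 8192 20.0
```

A real benchmark of course counts cycles of the actual assembly implementation, including address generation, loop overhead, and memory stalls, which is what the thesis measures.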
195.
Application of a heterogeneous coarse-mesh transport method (COMET) to radiation therapy problems. Satterfield, Megan E. 20 November 2006.
In recent years, there has been much improvement in the radiation therapy delivery systems used in the treatment of cancer; however, in order to fully exploit this enhancement, the computational methodology associated with radiation therapy must improve as well. It is important to accurately determine where the radiation deposits its energy within the patient: the treatment should deliver the maximal dose at the tumor site while keeping the dose to the surrounding healthy tissue and structures minimal. In the Computational Reactor and Medical Physics Group at Georgia Tech, a heterogeneous coarse-mesh transport method (COMET) has been developed for neutron transport to analyze whole-core criticality. COMET decomposes a large, heterogeneous global problem into a set of small fixed-source local problems. Response functions, that is, detailed solutions, are obtained for each unique local problem; these response functions are all precomputed and stored in a library. The solution to the global problem is then found by a linear superposition of the local solutions. In this project, COMET is applied for the first time to the transport of photons in human tissues. The parameter of interest in this case is the energy (dose) deposited in tissue. To determine the strengths and weaknesses of the current system, it is important to construct benchmark problems for comparison. This project encompasses a number of benchmarks. The first involves modeling a simple two-dimensional water phantom. A second benchmark problem involves a heterogeneous phantom composed of different tissues. A third involves transport through slabs of aluminum, water, and lung tissue. A last, more clinically relevant benchmark problem uses data from a CT scan. For each of these cases, the results from COMET are compared to computational results obtained from EGSnrc, a Monte Carlo particle transport code.
In this study, it was found that the COMET results were generally comparable with the Monte Carlo solutions of EGSnrc, and were typically obtained thousands of times faster than the reference solution.
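The superposition idea can be illustrated with a deliberately tiny 1D example (an assumption, far simpler than COMET's response expansions): for a beam crossing purely absorbing coarse cells, each cell's precomputed response is a transmission factor, and the global solution assembled from responses matches the direct calculation.

```python
import math

# Deliberately tiny 1D illustration of the precompute-then-superpose idea
# (an assumption, far simpler than COMET): a pencil beam crosses a row of
# purely absorbing coarse cells. Each cell's precomputed "response" is its
# transmission factor exp(-mu * dx); the global solution is then assembled
# from the local responses alone.
dx = 1.0
mus = [0.2, 0.05, 0.3]   # illustrative attenuation coefficients per cell

responses = [math.exp(-mu * dx) for mu in mus]   # precomputed local solutions

transmitted = 1.0
for t in responses:      # chain the local responses across the mesh
    transmitted *= t

direct = math.exp(-sum(mu * dx for mu in mus))   # single fine-grained solve
print(abs(transmitted - direct) < 1e-12)  # True
```

The speedup in the real method comes from the fact that the expensive local solves are done once and reused, while the global assembly (here a product, in COMET a linear superposition with angular and spatial expansions) is cheap.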
196.
A study of development tools with a focus on PLC systems (Studie av utvecklingsverktyg med inriktning mot PLC-system). Brax, Christoffer. January 1999.
Computer use in society grows every day. It is therefore important that software is of high quality, since some software controls critical machines such as aircraft. One way to obtain high-quality software is to use good development tools. In this work, five development tools are evaluated: GNAT (Ada), Microsoft Visual C++, Microsoft J++, Borland Delphi, and Active Perl. The evaluation is oriented towards the development of software for PLC systems. The aspects evaluated are the efficiency of the generated code, compilation time, the size of the generated code, code portability, and the range of ready-made components. The study was carried out by means of practical tests.
197.
From state-owned enterprise to privatized enterprise: the reference value of Chunghwa Telecom's experience for ONATEL (由國營企業轉成民營企業：由中華電信經驗對ONATAL之參考價值). Mamadou. Date unknown.
Around the world, countries are moving towards a market economy in order to integrate into the global marketplace. Telecommunications is among the industries most concerned by this trend, as governments are significantly reducing their involvement in the industry through liberalization and/or partial or complete privatization of their national telecommunications corporations. Burkina Faso is no different: ONATEL, the national telecommunications company of Burkina Faso, has been caught up in this trend. In December 1998, the government of Burkina Faso initiated a reform of its telecommunications sector with the overall goal of liberalizing telecommunications services and achieving mixed ownership of ONATEL.
The objective of this study is to review the ongoing privatization of ONATEL based on an analysis of general practices in economic reforms applied elsewhere, and then to make recommendations to both the government of Burkina Faso and ONATEL for a successful implementation of the process and of the national telecommunications policies. We pursue this objective through four research questions. The first correlates privatization and economic development, with the aim of seeing how the divestiture of ONATEL can foster telecommunications development in Burkina Faso. The second examines the government's chosen strategy for privatizing ONATEL, allowing for a review of alternative privatization methods and of the rationale behind the government's option. The third research question deals with ONATEL's strategies to sustain its development in an environment of increased competition; this question facilitates an assessment of the firm's preparation for competition and allows some recommendations to be formulated in that regard. The last research question touches on the privatization of Chunghwa Telecom, the formerly state-owned telecommunications company in Taiwan. This final aspect of the research helps extract lessons applicable to the government of Burkina Faso and ONATEL by analyzing and understanding Chunghwa Telecom's privatization experience and its formulation of business strategies.
198.
Instruction Timing Analysis for Linux/x86-based Embedded and Desktop Systems. John, Tobias. 19 October 2005.
Real-time aspects are becoming more important in standard desktop PC environments, and x86-based processors are being used in embedded systems more often. While these processors were not created for use in hard real-time systems, they are fast and inexpensive and can be used if it is possible to determine the worst-case execution time. Information on CPU caches (L1, L2) and the branch prediction architecture is necessary to simulate best and worst cases in execution timing, but it is often not detailed enough and sometimes not published at all. This document describes how the underlying hardware can be analysed to obtain this information.
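The measurement side of the problem can be sketched in a few lines, here in Python on a general-purpose OS rather than on bare x86, so this only illustrates the best/worst-case spread; it is not a sound WCET bound.

```python
import time

# Sketch of the measurement idea: run the same routine many times and record
# the spread between best- and worst-case observed execution times. On real
# hardware that spread is driven by cache and branch-predictor state, which
# is why detailed microarchitectural information is needed to bound the
# worst case analytically rather than just observe it.
def workload(data):
    return sum(x * x for x in data)

data = list(range(1000))
samples = []
for _ in range(200):
    t0 = time.perf_counter_ns()
    workload(data)
    samples.append(time.perf_counter_ns() - t0)

best, worst = min(samples), max(samples)
print(worst >= best > 0)  # True: the observed worst case bounds every run
```

Observed maxima only lower-bound the true worst case, which is precisely why static analysis of caches and branch prediction is needed for hard real-time guarantees.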
199.
Performance measurement in smart cards (La mesure de performance dans les cartes à puce). Cordry, Julien. 30 November 2009.
Performance measurement is used in all computer systems to guarantee the best performance at the lowest possible cost. The establishment of measurement tools and metrics has made it possible to compare computers. The smart card world is no exception, although security issues occupy center stage there, and efforts towards more open testing and performance measurement remain modest. The work presented here proposes a method for measuring performance on Java Card platforms, which hold a considerable share of today's smart card market, especially with regard to multi-applicative environments. We study in detail the efforts of other authors on performance measurement, and in particular on performance measurement for smart cards. Many of these works remain embryonic or ignore certain aspects of measurement; one of their main flaws is the lack of connection between the measurements performed and the applications typically used on smart cards. Smart cards also have substantial security requirements, which make them difficult to analyze; the logical approach is to treat them as black boxes. After introducing methodologies for measuring smart card performance, we choose the tools and the characteristics of the tests we want to run on the cards, and we analyze how much confidence can be placed in the data collected in this way. Finally, an original smart card application is proposed and used to validate some of the results obtained.
200.
Hierarchical Bayesian Benchmark Dose Analysis. Fang, Qijun. January 2014.
An important objective in statistical risk assessment is the estimation of minimum exposure levels, called Benchmark Doses (BMDs), that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to hierarchical Bayesian modeling and credible limits for building BMDLs is far less developed, however; indeed, for the few existing forms of Bayesian BMDs, informative prior information is seldom incorporated. Here, a new method is developed using reparameterized quantal-response models that explicitly describe the BMD as a target parameter. This potentially improves BMD/BMDL estimation by combining elicited prior belief with the observed data in the Bayesian hierarchy. The large variety of candidate quantal-response models available for these methods, however, leads to questions of model adequacy and uncertainty. Facing this issue, the Bayesian estimation technique is further enhanced by applying Bayesian model averaging to produce point estimates and (lower) credible bounds. Implementation is facilitated via a Monte Carlo-based adaptive Metropolis (AM) algorithm to approximate the posterior distribution. Performance of the method is evaluated via a simulation study, and an example from carcinogenicity testing illustrates the calculations.
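A heavily simplified sketch of the reparameterization idea, assuming a one-parameter one-hit model with zero background response and synthetic data (the dissertation's hierarchy, model averaging, and adaptive proposal tuning are all omitted):

```python
import math, random

# Simplified sketch (assumptions: one-hit model p(d) = 1 - exp(-beta*d) with
# zero background, synthetic data, plain random-walk Metropolis). For extra
# risk BMR, the one-hit model gives beta = -ln(1 - BMR) / BMD, so the prior
# and the sampler can act directly on log(BMD), making the BMD the explicit
# target parameter.
random.seed(1)
BMR = 0.10
doses = [0.0, 1.0, 2.0, 4.0]
trials = [50, 50, 50, 50]
events = [0, 8, 14, 25]          # synthetic quantal-response counts

def log_post(log_bmd):
    bmd = math.exp(log_bmd)
    beta = -math.log(1.0 - BMR) / bmd
    lp = -0.5 * log_bmd ** 2     # weak lognormal prior on the BMD
    for d, n, y in zip(doses, trials, events):
        p = min(max(1.0 - math.exp(-beta * d), 1e-12), 1.0 - 1e-12)
        lp += y * math.log(p) + (n - y) * math.log(1.0 - p)
    return lp

chain, x = [], 0.0
lp_x = log_post(x)
for i in range(20000):
    prop = x + random.gauss(0.0, 0.2)   # random-walk proposal on log(BMD)
    lp_p = log_post(prop)
    if math.log(random.random()) < lp_p - lp_x:
        x, lp_x = prop, lp_p
    if i >= 5000:                       # discard burn-in
        chain.append(math.exp(x))

chain.sort()
bmdl = chain[int(0.05 * len(chain))]    # lower 5% credible bound (the BMDL)
median = chain[len(chain) // 2]
print(bmdl < median)  # True: the BMDL sits below the posterior median BMD
```

The dissertation's contribution is layered on top of this basic scheme: informative elicited priors, a hierarchy across models, Bayesian model averaging over candidate quantal-response forms, and an adaptive Metropolis algorithm in place of the fixed-step random walk used here.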