51

Optimal Portfolio in Outperforming Its Liability Benchmark for a Defined Benefit Pension Plan

李意豐, Yi-Feng Li Unknown Date (has links)
This thesis analyzes the portfolio problem of a defined benefit pension fund manager who seeks to maximize the probability of reaching a managerial goal before the worst-case shortfall scenario occurs. The fund ratio process, defined as the ratio of the fund level to its accrued liability benchmark, is controlled so as to maximize the probability that a predetermined target is achieved before the ratio falls below an intolerable boundary. The time-varying opportunity set in this study includes risk-free cash, bonds and a stock index. The problem is formulated in a stochastic control framework and solved by dynamic programming. The optimal portfolio is characterized by three components: a liability hedging component, an intertemporal hedging component against changes in the opportunity set, and a temporal hedging component that minimizes the variation in fund ratio growth. Markov chain approximation methods are employed to approximate the stochastic control solutions numerically. The results show that fund managers should hold large proportions of bonds and that the investment horizon plays a crucial role in constructing the optimal portfolio. Keywords: shortfall; defined benefit; liability benchmark; stochastic control; dynamic programming.
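Read formally, the objective described above is a first-passage probability problem. A minimal sketch in standard notation (the symbols below are assumptions for illustration, not the thesis's own notation): let A_t denote the fund level, L_t the accrued liability benchmark, u the predetermined target and \ell the intolerable boundary. Then

    F_t = \frac{A_t}{L_t}, \qquad
    \tau_b = \inf\{\, t \ge 0 : F_t = b \,\}, \qquad
    \max_{\pi}\; \mathbb{P}\big(\tau_u < \tau_\ell \,\big|\, F_0 = f\big), \quad \ell < f < u,

where \pi ranges over admissible allocations to cash, bonds and the stock index. Dynamic programming applied to this value function yields a Hamilton-Jacobi-Bellman equation whose optimizer decomposes into hedging components of the kind the abstract lists.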
52

Towards a taxonomy of reusable CRM requirements for the Not for Profit sector

Flory, Peter January 2011 (has links)
Traditional (or commercial) CRM is a well-defined domain but there is currently no generally accepted definition of what constitutes CRM in the not for profit (NfP) sector. Not for profit organisations are organisations which exist for a social purpose, are independent of the State, and which re-invest all of their financial surpluses in the services they offer or in the organisation itself. This research aims to answer the question "What exactly is CRM as applied to the NfP sector, what are its boundaries and what functions should an NfP CRM information system perform?" Grounded Theory Method (GTM) within a Design Science framework was used to collect, analyse, categorise, generalise and structure data from a number of NfP organisations and NfP information systems suppliers. An NfP CRM model was constructed from this data in the form of three multi-level taxonomies. The main taxonomy relates to generic and reusable information system requirements, both functional and non-functional. Within this taxonomy the high-level categorisations of commercial CRM, namely "Marketing", "Sales" and "Service", are greatly extended to reflect the special needs of the NfP sector and in particular a much broader definition of "customer". The two minor taxonomies relate to issues of CRM strategy and CRM systems architecture which need to be considered alongside the system requirements. In addition to and resulting from the taxonomies, an over-arching definition of NfP CRM was developed. NfP organisations now have a framework that will enable them to know what to expect of CRM systems and from which they can select requirements to build their own specification of information system needs. Using the requirements taxonomy for this task will make the process of requirements analysis and specification easier, quicker, cheaper and more complete than using traditional methods. The framework will also allow NfP system suppliers to know what NfP organisations expect of their systems and will assist them with the specification of new system features. The minor taxonomies will provide NfP organisations with a series of strategic issues and systems architecture options that should be considered when implementing a CRM system. This research also demonstrates how GTM can be utilised: as the development phase of Design Research, as a general method of domain analysis, and as a tool to develop a taxonomy of reusable information system requirements.
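Purely as an illustration of what a multi-level requirements taxonomy of this kind might look like as a data structure, here is a small Python sketch; the sub-categories and requirement texts are hypothetical placeholders, not items from the thesis's taxonomies:

    from dataclasses import dataclass, field

    @dataclass
    class TaxonomyNode:
        """A node in a multi-level taxonomy of reusable CRM requirements."""
        name: str
        requirements: list[str] = field(default_factory=list)  # leaf-level reusable requirements
        children: list["TaxonomyNode"] = field(default_factory=list)

        def walk(self, depth: int = 0):
            """Yield (depth, node) pairs in pre-order, for printing or requirement selection."""
            yield depth, self
            for child in self.children:
                yield from child.walk(depth + 1)

    # Hypothetical fragment: commercial CRM's top-level categories, extended for NfP needs.
    root = TaxonomyNode("NfP CRM requirements", children=[
        TaxonomyNode("Marketing", children=[
            TaxonomyNode("Campaign management",
                         requirements=["Segment supporters by giving history"]),
        ]),
        TaxonomyNode("Sales", children=[
            TaxonomyNode("Fundraising",
                         requirements=["Record pledges and recurring gifts"]),
        ]),
        TaxonomyNode("Service", children=[
            TaxonomyNode("Beneficiary support",
                         requirements=["Track service delivery outcomes"]),
        ]),
    ])

    for depth, node in root.walk():
        print("  " * depth + node.name)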
53

Who Can Retire with a 401(k)? Assessing the Effectiveness of Plans in the Changing Environment Around Retirement Planning in the United States

Gomez, Ramon 01 January 2017 (has links)
Over the past three decades, employer-sponsored 401(k) plans have grown in popularity, as they have proved to be a valuable benefit that employers can provide to employees and a tax-deductible expense that employers can easily account for on their books. However, a major concern around these plans is that they have come to take the place of traditional pension plans offered by employers, forcing employees to assume full responsibility for their retirement savings. This paper evaluates the overall effectiveness of 401(k)s at the top 50 companies in the Fortune 100, examining participation rates, account balances, and employer contributions. It concludes that employees who have 401(k)s at these 50 companies fare much better than the average American with regard to retirement savings. Nonetheless, the substitution of traditional pensions with 401(k) plans by companies in the United States is problematic. Employees, who previously could rely on a company pension in retirement, are unintentionally delaying retirement due to a lack of savings. Furthermore, a growing number of workers without retirement savings will certainly put a strain on Social Security funds in the coming decades.
54

Head into the Cloud: An Analysis of the Emerging Cloud Infrastructure

Chandrasekaran, Balakrishnan January 2016 (has links)
We are witnessing a paradigm shift in computing: people are increasingly using Web-based software for tasks that only a few years ago were carried out using software running locally on their computers. The increasing use of mobile devices, which typically have limited processing power, is catalyzing the idea of offloading computations to the cloud. It is within this context of cloud computing that this thesis attempts to address a few key questions: (a) With more computations moving to the cloud, what is the state of the Internet's core? In particular, do routing changes and consistent congestion in the Internet's core affect end users' experiences? (b) With software-defined networking (SDN) principles increasingly being used to manage cloud infrastructures, are the software solutions robust (i.e., resilient to bugs)? With service outage costs being prohibitively expensive, how can we support network operators in experimenting with novel ideas without crashing their SDN ecosystems? (c) How can we build a large-scale passive IP geolocation system to geolocate the entire IP address space at once so that cloud-based software can utilize the geolocation database in enhancing the end-user experience? (d) Why is the Internet so slow? Since a low-latency network allows more offloading of computations to the cloud, how can we reduce the latency in the Internet? / Dissertation
55

Implementation of an SDR in Verilog

Skärpe, Anders January 2016 (has links)
This report presents an implementation of the software part of a software defined radio. The radio is not entirely implemented in software and therefore there are certain limitations on the received signal. The parts implemented are the oscillator, decimation filter, carrier synchronization, time synchronization, packet detection, and demodulation. Different algorithms were tested for the different parts to measure the power consumption. To understand how the number of bits used to represent the signal affects the power consumption, the number of bits was reduced from 20 bits to 10 bits. This reduction lowered the power consumption from 2.57 mW to 1.89 mW. A small change in the choice of algorithms was then made, which reduced the power consumption to 1.86 mW. Finally, the clock rate was reduced for some parts of the system, which reduced the power consumption to 1.05 mW.
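As a rough illustration of one stage in such a receive chain, here is a minimal decimating low-pass filter sketch in Python; the tap values and decimation factor are arbitrary placeholders, not those of the report:

    import numpy as np

    def decimating_fir(samples: np.ndarray, taps: np.ndarray, factor: int) -> np.ndarray:
        """Low-pass filter, then keep every `factor`-th sample.
        Filtering before discarding samples suppresses aliasing."""
        filtered = np.convolve(samples, taps, mode="same")
        return filtered[::factor]

    # Placeholder example: a crude moving-average low-pass and 4x decimation.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)          # stand-in for digitized IF samples
    taps = np.ones(8) / 8.0                # 8-tap moving average (illustrative only)
    y = decimating_fir(x, taps, factor=4)
    print(len(x), "->", len(y))            # 1024 -> 256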
56

Software Defined Networking : Virtual Router Performance

Svantesson, Björn January 2016 (has links)
Virtualization is becoming more and more popular since the hardware that is available today often has the ability to run more than just a single machine. The hardware is too powerful in relation to the requirements of the software that is supposed to run on it, making it inefficient to run too little software on too powerful a machine. With virtualization, the ability exists to run a lot of different software on the same hardware, thereby increasing the efficiency of hardware usage. Virtualization doesn't stop at just virtualizing operating systems or commodity software, but can also be used to virtualize networking components. These networking components include everything from routers to switches and can be set up on any kind of virtualized system. When discussing virtualization of networking components, the expression "Software Defined Networking" is hard to miss. Software Defined Networking is a term that covers all of these virtualized networking components and is the expression that should be used when researching further into this subject. There is an increasing interest in these virtualized networking components now compared to just a few years ago. This is because company networking has become much more complex than the networks of a few years back: more services need to be up inside the network, and many people believe that Software Defined Networking can help in this regard. This thesis aims to find out what kinds of differences there are between multiple different software routers, for example which of the routers offers the highest network speed for the least hardware cost. It also looks at different aspects of performance that the routers offer in relation to one another, in order to establish whether any one router is "best" in multiple different areas. The idea is to build up a virtualized network that somewhat resembles how a normal network looks in smaller companies today. This network is then used for different types of testing, with the software-based router placed in the middle, routing between different local virtual networks. All of the routers are placed on the same server, their configuration is kept very basic, and each of the routers gets access to the same amount of hardware. After initial testing, routers that perform badly are excluded from additional testing, to avoid unnecessary testing of routers that cannot keep up with the others. The results from these tests are compared to the results of a hardware router subjected to the same kind of tests as the software routers. The results from the testing were fairly surprising: only a single router was eliminated early on, while the remaining ones continued to "battle" one another through further tests. The comparison with the hardware router was also quite surprising, with much better performance in many different areas from the software routers' perspective.
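A sketch of the kind of throughput measurement such a test setup implies, assuming iperf3 is installed and an iperf3 server is already running on a host behind the router under test (the host address is a placeholder):

    import json
    import subprocess

    def measure_throughput(server_ip: str, seconds: int = 10) -> float:
        """Run an iperf3 TCP test against `server_ip` and return received Mbit/s."""
        proc = subprocess.run(
            ["iperf3", "-c", server_ip, "-t", str(seconds), "-J"],
            capture_output=True, text=True, check=True,
        )
        report = json.loads(proc.stdout)
        bps = report["end"]["sum_received"]["bits_per_second"]
        return bps / 1e6

    # Placeholder address of a host on the far side of the router under test.
    print(f"{measure_throughput('10.0.1.10'):.1f} Mbit/s")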
57

An SDN-based firewall shunt for data-intensive science applications

Miteff, Simeon January 2016 (has links)
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering, 2016 / Data-intensive research computing requires the capability to transfer files over long distances at high throughput. Stateful firewalls introduce sufficient packet loss to prevent researchers from fully exploiting high bandwidth-delay network links [25]. To work around this challenge, the science DMZ design [19] trades off stateful packet filtering capability for loss-free forwarding via an ordinary Ethernet switch. We propose a novel extension to the science DMZ design, which uses an SDN-based firewall. This report introduces NFShunt, a firewall based on Linux's Netfilter combined with OpenFlow switching. Implemented as an OpenFlow 1.0 controller coupled to Netfilter's connection tracking, NFShunt allows the bypass-switching policy to be expressed as part of an iptables firewall rule-set. Our implementation is described in detail, and the latency of the control-plane mechanism is reported. TCP throughput and packet loss are shown at various round-trip latencies, with comparisons to pure switching, as well as to a high-end Cisco firewall. Cost, as well as operations and maintenance aspects, are compared and analysed. The results support reported observations regarding firewall-introduced packet loss, and indicate that the SDN design of NFShunt is a technically viable and cost-effective approach to enhancing a traditional firewall to meet the performance needs of data-intensive researchers / GS2016
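A highly simplified sketch of the bypass idea the abstract describes: couple connection-tracking state to a switch flow entry so that established, policy-matched flows skip the stateful firewall path. All names and the policy below are hypothetical illustrations, not NFShunt's actual interfaces:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        """5-tuple identifying a tracked connection."""
        proto: str
        src: str
        sport: int
        dst: str
        dport: int

    def wants_bypass(flow: Flow) -> bool:
        """Hypothetical policy check: bulk science-data transfers get shunted."""
        return flow.proto == "tcp" and flow.dport == 2811  # e.g. GridFTP control port

    def on_conntrack_established(flow: Flow, switch_rules: list) -> None:
        """When connection tracking reports a flow ESTABLISHED, install a switch
        flow entry so subsequent packets bypass the stateful firewall path."""
        if wants_bypass(flow):
            switch_rules.append({
                "match": vars(flow),
                "action": "output:fast_path_port",  # forward around the firewall
            })

    rules = []
    on_conntrack_established(Flow("tcp", "10.0.0.5", 40000, "192.0.2.9", 2811), rules)
    print(rules)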
58

Controller-plane workload characterization and forecasting in software-defined networking

Nkosi, Emmanuel January 2017 (has links)
A research report submitted to the Faculty of Engineering and the Built Environment of the University of the Witwatersrand in partial fulfilment of the requirements for the degree of Master of Science in Engineering February 2017 / Software-defined networking (SDN) is the physical separation of the control and data planes in networking devices. A logically centralised controller plane which uses a network-wide view data structure to control several data plane devices is another defining attribute of SDN. The centralised controllers and the network-wide view data structure are difficult to scale as the network and the data it carries grow. Solutions which have been proposed to combat this challenge in SDN lack the use of the statistical properties of the workload or network traffic seen by SDN controllers. Hence, the objective of this research is twofold: firstly, the statistical properties of the controller workload are investigated; secondly, Autoregressive Integrated Moving Average (ARIMA) and Artificial Neural Network (ANN) models are investigated to establish the feasibility of forecasting the controller workload signal. Representations of the state of the controller plane in the network-wide view, in the form of forecasts of the controller workload, will enable control applications to detect dwindling controller resources and therefore alleviate controller congestion. On the other hand, realistic statistical traffic models of the controller workload variable are sought for the design and evaluation of SDN controllers. A data center network prototype is created by making use of an SDN network emulator called Mininet and an SDN controller called Onos. It was found that 1–2% of flows arrive within 10 s of each other and more than 80% have inter-arrival times in the range of 10 s–10 ms. These inter-arrival times were found to follow a beta distribution, which is similar to findings made in Machine Type Communications (MTC). The use of ARIMA and ANN to forecast the controller workload established that it is feasible to forecast the workload seen by SDN controllers. The accuracy of these models was found to be comparable for continuously valued time series signals. The ANN model was found to be applicable even to discretely valued time series data. / MT2017
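A minimal sketch of the ARIMA forecasting step, using statsmodels on a synthetic workload series; the series and the (p, d, q) order are placeholders, not the report's actual model selection:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic stand-in for a controller workload signal (e.g. flow-setup
    # requests per interval), with a slow trend plus noise.
    rng = np.random.default_rng(1)
    t = np.arange(300)
    workload = 100 + 0.2 * t + rng.normal(scale=5.0, size=t.size)

    # Placeholder order; in practice chosen via ACF/PACF inspection or AIC.
    model = ARIMA(workload, order=(2, 1, 1)).fit()
    forecast = model.forecast(steps=10)   # expected load over the next 10 intervals
    print(np.round(forecast, 1))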
59

Software defined radio for cognitive networks

Dumont, Nathan January 2014 (has links)
The introduction of software radio has meant that standards for radio communication can evolve in a much more natural way, changing only a little at a time without making all of the hardware obsolete. It has become apparent that these changes may affect some systems more favourably than others, so allowing the software radio to decide how to adapt can actually improve the link quality. This development is known as cognitive radio and can improve the performance of a single radio link. As an extension of this, progress is being made on designing cognitive networks, in which the software radios that make up the network not only optimise their own link but also share information about their goals and situation with other nodes in the network; using all of this data together can optimise the overall end-to-end performance of the network. These advances in network design and optimisation come at a time when many parts of the world are re-structuring the television broadcast bands. These bands have been allocated for a long time and represent a generous allocation of a valuable resource. With the power of a cognitive network it is possible to design equipment that can automatically avoid the licensed TV transmitters, which only take a fraction of the total bandwidth in any one area. This allows many smaller cells to be fitted between the main transmitters. Assessing the availability of bandwidth and generating maps of available spectrum for these new cognitive networks requires a new approach to radio propagation modelling in the TV bands. Previous models use a worst-case scenario to make sure that there is at least enough signal to receive the public service broadcasts in the majority of homes. Predicting where the limits of reception are, and where it would be safe to broadcast on these channels, requires a better, terrain-dependent transmission model. In this thesis the Parabolic Equation Model is applied to the problem of predicting TV-band occupancy, and the results of this modelling are compared to field measurements to get an idea of how accurate the model is in practice.
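For reference, the narrow-angle ("standard") parabolic equation used in this style of propagation modelling can be written in its textbook form (quoted here from the general literature, not from the thesis) as

    2ik\,\frac{\partial u}{\partial x} + \frac{\partial^2 u}{\partial z^2} + k^2\left(n^2(x,z) - 1\right)u = 0,

where u(x, z) is the reduced field, k the free-space wavenumber, n the refractive index, x the range and z the height. The equation is marched forward in range, typically with a split-step Fourier method, which is what makes terrain-dependent coverage prediction over realistic paths tractable.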
60

From Theory to Practice: Evaluating Sparsening Filter designs on a Software-Defined Radio Platform

Machado, Raquel 23 December 2014 (has links)
"A comprehensive analysis of a novel detection scheme for SISO wireless transmission scenarios is presented in this dissertation. The scheme, which is based on Belief-Propagation (BP) detectors, is evaluated in both a computer simulation environment and a custom-built software-defined radio test-bed. In this dissertation, we address the design aspects of BP-based receivers, including several approaches to minimize the bit error rate of MAP detectors. We also present the development of an interface framework for a software defined radio platform that aims to implement complex communication transceivers capable of prototyping the hybrid structure with a pre-filter filter and BP detector. Numerical simulations compared the proposed schemes with an existing approaches and showed significant performance gains without requiring great computational cost at the receiver. Furthermore, experiments using GNU Radio Companion and the FMCOMMS software defined radio hardware platform confirm the correct functionality of the proposed interface, and stress tests are conducted to assess the functionality of the interface and how it deteriorates across a range of operating conditions. Finally, we present several experiments using the FMCOMMS software defined radio platform that implement the proposed BP-based receiver scheme and discuss its capabilities and limitations."
