91

Topics in Delayed Renewal Risk Models

Kim, So-Yeun January 2007 (has links)
The main focus is to extend the analysis of ruin-related quantities, such as the surplus immediately prior to ruin, the deficit at ruin, and the ruin probability, to delayed renewal risk models. First, the background for the delayed renewal risk model is introduced and two important equations that are used as frameworks are derived. These equations are extended from the ordinary renewal risk model to the delayed renewal risk model. The first equation is obtained by conditioning on the first drop below the initial surplus level, and the second by conditioning on the amount and the time of the first claim. Then, we consider the deficit at ruin in particular among the many random variables associated with ruin, and six main results are derived. We also explore how the Gerber-Shiu expected discounted penalty function can be expressed in closed form when distributional assumptions are given for the claim sizes or the time until the first claim. Lastly, we consider a model in which the premium rate is reduced when the surplus level is above a certain threshold value, until it falls below that threshold. The amount of the reduction in the premium rate can also be viewed as a dividend rate paid out of the original premium rate when the surplus level is above the threshold. The constant barrier model is considered as a special case in which the premium rate is reduced to $0$ when the surplus level reaches a certain threshold value. The dividend amount paid out during the life of the surplus process until ruin, discounted to the beginning of the process, is also considered.
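For reference, the Gerber-Shiu expected discounted penalty function mentioned above is usually written as follows. This is the standard formulation rather than a quotation from the thesis, with assumed notation: u is the initial surplus, c the premium rate, N(t) the claim-counting process, X_i the claim sizes, T the time of ruin, δ ≥ 0 the discount rate, and w a penalty function of the surplus just before ruin and the deficit at ruin.

```latex
% Surplus process, ruin time, and Gerber-Shiu function (standard notation, assumed here)
U(t) = u + ct - \sum_{i=1}^{N(t)} X_i ,
\qquad
T = \inf\{\, t \ge 0 : U(t) < 0 \,\},
\qquad
m_\delta(u) = \mathbb{E}\!\left[ e^{-\delta T}\, w\!\bigl(U(T^-),\,|U(T)|\bigr)\,
              \mathbf{1}\{T < \infty\} \;\middle|\; U(0) = u \right].
```

Choosing w ≡ 1 and δ = 0 recovers the ruin probability, while other choices isolate the surplus prior to ruin U(T⁻) or the deficit |U(T)|; in the delayed model, only the time until the first claim is allowed a distribution different from that of the later interclaim times.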
92

En studie om införandet av expected loss model: En mer tillförlitlig och relevant metod för nedskrivning av finansiella tillgångar? [A study of the introduction of the expected loss model: a more reliable and relevant method for impairment of financial assets?]

Swärdh, Magnus, Hickman, Erik January 2011 (has links)
Accounting has been criticized for being one of the leading factors in the latest financial crisis. One of the primary problem areas was identified as the delayed recognition of losses on financial instruments. Consequently, a new impairment model, to be named the expected loss model, is being developed. The difference from the present model, the incurred loss model, is that it takes losses into consideration at a much earlier stage. Even though the model may be theoretically feasible, in practice it may give rise to a number of issues. This study examines the model and divides it into three areas: classification of the assets, the estimation procedure, and disclosures. Research has been conducted through in-depth interviews with practitioners within the accounting profession. The information gathered from the respondents has been analyzed using prior research closely linked to the model and the qualitative characteristics of relevance and reliability defined by the international accounting standard-setters, the IASB and the FASB. The conclusion is that the proposed model is deemed relevant, even if it is difficult to reach high reliability as a consequence of the model's high level of uncertainty, subjectivity, and flexibility from a management perspective. To ensure sufficient capital reserves despite the possibly low reliability, the model should be conservative in nature. Disclosures will continue to play an important role in conveying assumptions and minimizing manipulation, provided they can be presented in a comprehensive manner.
93

Sequential Auction Design and Participant Behavior

Taylor, Kendra C. 20 July 2005 (has links)
This thesis studies the impact of sequential auction design on participant behavior from both a theoretical and an empirical viewpoint. In the first of the two analyses, three sequential auction designs are characterized and compared based on their expected profitability to the participants, and the optimal bid strategy is derived. One of the designs, the alternating design, is new and is a blend of the other two: the ability to bid in or initiate an auction is given to each side of the market in an alternating fashion to simulate seasonal markets. The conditions for an equilibrium auction design are derived and the characteristics of the equilibrium are outlined. The primary result is that the alternating auction is a viable compromise when buyers and suppliers disagree on whether to hold a sequence of forward or reverse auctions. We also find the value of information about future private values for a strategic supplier in a two-period case of the alternating and reverse auction designs. The empirical work studies the cause of the low aggregation of timber supply in reverse auctions on an online timber exchange. Unlike previous research on timber auctions, which focuses on offline public auctions held by the U.S. Forest Service, we study online private auctions between logging companies and mills. A limited survey of the online auction data revealed that the auctions were successful less than 50% of the time. Regression analysis is used to determine which factors internal and external to the auction affect the aggregation of timber, in an effort to determine why so few auctions succeeded. The analysis revealed that the number of bidders, the description of the good, and the volume demanded had a significant influence on the amount of timber supplied through the exchange. A plausible explanation for the low aggregation is that the exchange was better suited to checking the availability of custom cuts of timber and to transacting standard timber.
94

On the separation of preferences among marked point process wager alternatives

Park, Jee Hyuk 15 May 2009 (has links)
A wager is a one-time bet, staking money on one among a collection of alternatives having uncertain reward. Wagers represent a common class of engineering decisions, where “bets” are placed on the design, deployment, and/or operation of technology. Often such wagers are characterized by alternatives whose value evolves according to some future cash flow. Here, the values of specific alternatives are derived from a cash flow modeled as a stochastic marked point process. A principal difficulty with these engineering wagers is that the probability laws governing the dynamics of the random cash flow typically are not (completely) available; hence, separating the gambler's preference among wager alternatives is quite difficult. In this dissertation, we investigate a computational approach for separating preferences among alternatives of a wager whose values evolve according to a marked point process. We are particularly concerned with separating a gambler's preferences when the probability laws on the available alternatives are not completely specified.
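The abstract does not spell out the author's computational approach; as a rough, self-contained illustration of the kind of comparison involved, the sketch below simulates two hypothetical wager alternatives whose cash flows follow compound Poisson (marked point) processes and ranks them by Monte Carlo estimates of expected discounted value. The rates, mark distributions, and discounting rule are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def discounted_value(rate, mark_mean, mark_std, horizon=10.0, delta=0.05):
    """Simulate one cash-flow path from a compound Poisson (marked point)
    process and return its value discounted to time zero."""
    n = rng.poisson(rate * horizon)                  # number of cash-flow events
    times = rng.uniform(0.0, horizon, size=n)        # event (arrival) times
    marks = rng.normal(mark_mean, mark_std, size=n)  # cash-flow amounts (marks)
    return float(np.sum(marks * np.exp(-delta * times)))

def expected_discounted_value(alternative, n_paths=20000):
    """Monte Carlo estimate of an alternative's expected discounted value."""
    return np.mean([discounted_value(**alternative) for _ in range(n_paths)])

# Two hypothetical wager alternatives: frequent small cash flows vs. rare large ones.
alternatives = {
    "A": dict(rate=4.0, mark_mean=10.0, mark_std=3.0),
    "B": dict(rate=1.0, mark_mean=38.0, mark_std=15.0),
}
scores = {name: expected_discounted_value(a) for name, a in alternatives.items()}
print(scores, "preferred:", max(scores, key=scores.get))
```

When the probability laws are only partially specified, the same comparison would have to be repeated over a family of candidate laws, which is where separating preferences becomes difficult.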
95

Locating and tracking assets using RFID

Kim, Gak Gyu 15 May 2009 (has links)
Being able to quickly locate equipment is critical inside buildings such as hospitals, manufacturing floors, and warehouses. In order to use a limited budget and resources efficiently, accurate locating or tracking is required in many fields. In this research, we focus on how to find the location of an item indoors in real time using RFID in order to track equipment. When an item needs to be located, the purpose of using RFID is to minimize the search time, effort, and investment cost. Thus, this research presents a mathematical model of using RFID (both handheld readers and stationary readers) for efficient asset location. We derive the expected cost of locating RFID-tagged objects in a multi-area environment where hand-held RF readers are used. We then discuss where to deploy stationary RF readers in order to maximize the efficiency of the search process.
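The abstract does not give the thesis's cost model; the minimal sketch below shows one common way such an expected search cost can be set up, assuming the tagged item sits in one of several areas with known prior probabilities, each area has a fixed cost to sweep with a hand-held reader, and detection is certain. Under those assumptions, searching areas in decreasing order of their probability-to-cost ratio minimizes the expected cost (a classical sequential-search result). All numbers are hypothetical.

```python
from itertools import accumulate

def expected_search_cost(areas):
    """Expected cost of sweeping areas one by one with a hand-held reader,
    stopping when the tagged item is found.  `areas` is a list of
    (probability item is there, cost to sweep the area) pairs, searched in
    the given order; probabilities are assumed to sum to 1 and the reader
    is assumed to always detect the tag when present."""
    costs = [c for _, c in areas]
    cumulative = list(accumulate(costs))          # cost paid if found in area k
    return sum(p * cum for (p, _), cum in zip(areas, cumulative))

# Hypothetical four-area facility: (prior probability, sweep cost in minutes).
areas = [(0.40, 5.0), (0.30, 8.0), (0.20, 3.0), (0.10, 10.0)]

# Classical result: searching in decreasing probability-to-cost ratio
# minimizes the expected cost when detection is certain.
ordered = sorted(areas, key=lambda a: a[0] / a[1], reverse=True)

print("as listed :", expected_search_cost(areas))
print("p/c order :", expected_search_cost(ordered))
```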
96

Composite System based Multi-Area Reliability Evaluation

Nagarajan, Ramya 2009 December 1900 (has links)
Currently, major power systems almost invariably operate under interconnected conditions to transfer power in a stable and reliable manner. Multi-area reliability evaluation has thus become an invaluable tool in the planning and operation of such systems. Multi-area reliability evaluation is typically done by considering equivalent tie lines between the different areas of an integrated power system. It gives approximate results for the reliability indices, since it models each area as a single node to which the entire area generation and load are connected. The intra-area transmission lines are only indirectly modeled during the calculation of the equivalent tie lines' capacities. This method is very widely used in the power industry, but the influence on reliability calculations of the various approximations and assumptions incorporated in it has not been explored. The objective of the research work presented in this thesis is the development of a new method, called the composite-system-based multi-area reliability model, which performs multi-area reliability evaluation considering the whole composite system. It models the transmission system in detail and also takes into account the load-loss-sharing policy within an area and the no-load-loss-sharing policy among the areas. The proposed method is applied to the standard IEEE 24-bus Reliability Test System (RTS), and the traditional equivalent tie-line method is applied to the multi-area configuration of the same test system. The results obtained by both methods are analyzed and compared. It is found that the traditional model, although it has some advantages, may not give accurate results.
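To make the tie-line approximation concrete, here is a minimal, hypothetical two-area example (not the IEEE 24-bus RTS): each area is collapsed to a single node with its own generating units, forced outage rates, and load, and support from the neighbouring area is limited by an assumed tie-line capacity. The state enumeration below computes area A's loss-of-load probability with and without that limited assistance; all data are invented for illustration.

```python
from itertools import product

# Hypothetical two-area system: (unit capacity in MW, forced outage rate).
AREA_A = {"units": [(50, 0.05), (50, 0.05), (30, 0.08)], "load": 90}
AREA_B = {"units": [(60, 0.04), (40, 0.06)], "load": 60}
TIE_CAPACITY = 20  # MW, assumed equivalent tie-line limit

def states(units):
    """Yield (available capacity, probability) over all unit up/down states."""
    for up in product([True, False], repeat=len(units)):
        cap = sum(c for (c, _), u in zip(units, up) if u)
        prob = 1.0
        for (_, q), u in zip(units, up):
            prob *= (1 - q) if u else q
        yield cap, prob

lolp_isolated = 0.0
lolp_interconnected = 0.0
for cap_a, p_a in states(AREA_A["units"]):
    shortfall = AREA_A["load"] - cap_a
    if shortfall > 0:
        lolp_isolated += p_a
        for cap_b, p_b in states(AREA_B["units"]):
            # Area B helps with its surplus, capped by the tie-line capacity.
            help_b = min(max(cap_b - AREA_B["load"], 0), TIE_CAPACITY)
            if help_b < shortfall:
                lolp_interconnected += p_a * p_b

print(f"Area A LOLP, isolated     : {lolp_isolated:.4f}")
print(f"Area A LOLP, with tie line: {lolp_interconnected:.4f}")
```

The composite-system approach described in the thesis would instead retain the individual transmission lines inside each area rather than collapsing them into a single equivalent tie line.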
97

A Study of Marketing Service Quality and Satisfaction Based on "Kuo Hua Life Insurance Co., Ltd."

Yeh, Kuan-Chieh 20 June 2002 (has links)
This survey is based on a medium-to-large-sized insurance company, Kuo-Hwa Life Insurance Company. It focuses on the interactions among its marketing, service quality, the insured's expected service, perceived service, perceived service quality, customer satisfaction, loyalty, persistency, repurchase, and recommendation, in order for the company to evaluate and establish guidelines for its marketing strategy. The respondents were insured persons over 18 years old in the metropolitan areas of Tainan. The company's customer-service personnel distributed 800 questionnaires; 586 were returned, 20 of which were void, leaving 566 valid responses. Excel and SPSS statistical software were used to analyze the insured's age, marital status, education, annual income, and occupation, to better understand their perceived service quality, satisfaction, loyalty, persistency, and intention to repurchase other products from Kuo-Hwa Life Insurance. The research found positive relationships between: perceived service and perceived service quality; expected service and perceived service quality; perceived service and satisfaction; perceived service quality and satisfaction; expected service and satisfaction; satisfaction and both persistency and repurchase of other insurance products; and customer satisfaction and recommendation.
98

none

Tsou, Tung-Ming 14 August 2002 (has links)
none
99

The effect of supplier's expected cost by using the sharing sales information in VMI.

Chen, Chiu-Miao 19 August 2003 (has links)
Vendor-managed inventory (VMI) is a form of automated replenishment under which a supplier takes responsibility for managing a customer's inventory levels for a given product or material. The promise of VMI is more efficient inventory management with fewer stock-outs, improved sales, and improved consumer satisfaction. Based on shared sales information, VMI lets suppliers reduce inventory and cost, making it one of the most widely discussed initiatives for improving multi-firm supply-chain efficiency. The purpose of this paper is to discuss the effect of shared sales information on the supplier's expected cost. We assume the underlying demand process faced by the retailer is ARMA(1,2). We model the supplier's delivery variance, optimal delivery-up-to level, and expected cost under three levels of information sharing: no information sharing, partial information sharing, and total information sharing. Furthermore, we perform a sensitivity analysis of the supplier's delivery variance, optimal delivery-up-to level, and expected cost with respect to four factors: the lead time l, the correlation coefficient, and the two moving-average weights of the ARMA(1,2) process, followed by numerical examples to verify our findings. The main results are as follows. 1. Information sharing stabilizes the supplier's deliveries and reduces the supplier's optimal delivery-up-to level as well as the expected cost. 2. At each level of information sharing, every factor has a positive effect on the supplier's delivery variance, optimal delivery-up-to level, and expected cost. 3. All the factors amplify the cost-reduction effect contributed by increasing the level of information sharing; among them, the lead time l and the correlation coefficient cause the most significant effect.
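The paper's exact cost expressions are not given in the abstract; the short simulation below merely illustrates the mechanism, under assumed ARMA(1,2) parameters, by which sharing the demand signal shrinks the supplier's forecast-error variance, which in turn drives the delivery-up-to level and expected cost. The parameter values and the one-period forecasting setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ARMA(1,2) demand:
#   d_t = mu + phi*(d_{t-1} - mu) + e_t + th1*e_{t-1} + th2*e_{t-2}
mu, phi, th1, th2, sigma = 100.0, 0.6, 0.4, 0.2, 10.0
T = 20000

e = rng.normal(0.0, sigma, T + 2)
d = np.full(T + 2, mu)
for t in range(2, T + 2):
    d[t] = mu + phi * (d[t - 1] - mu) + e[t] + th1 * e[t - 1] + th2 * e[t - 2]

# One-period-ahead demand and two forecasts of it made at time t:
#   shared info: the supplier sees the demand history and shocks
#                (the conditional mean given d_t, e_t, e_{t-1});
#   no sharing : the supplier only knows the long-run mean mu.
actual = d[3:]
forecast_shared = mu + phi * (d[2:-1] - mu) + th1 * e[2:-1] + th2 * e[1:-2]
forecast_none = np.full_like(actual, mu)

print("forecast-error variance, shared info:", np.var(actual - forecast_shared))
print("forecast-error variance, no sharing :", np.var(actual - forecast_none))
# A larger forecast-error variance forces a higher delivery-up-to level
# and hence a higher expected cost for the supplier.
```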
100

Statistical Idealities and Expected Realities in the Wavelet Techniques Used for Denoising

DeNooyer, Eric-Jan D. 01 January 2010 (has links)
In the field of signal processing, one of the underlying enemies in obtaining a good-quality signal is noise. The most common examples of signals that can be corrupted by noise are images and audio signals. Since the early 1980s, when wavelet transforms became a formally defined tool, statistical techniques have been incorporated into processes that use wavelets with the goal of maximizing signal-to-noise ratios. We provide a brief history of wavelet theory, going back to Alfréd Haar's 1909 dissertation on orthogonal functions, as well as its important relationship to the earlier work of Joseph Fourier (circa 1801), which brought about that famous mathematical transformation, the Fourier series. We demonstrate how wavelet theory can be used to reconstruct an analyzed function, and hence that it can be used to analyze and reconstruct images and audio signals as well. Then, in order to ground the understanding of the application of wavelets to the science of denoising, we discuss some important concepts from statistics. From all of these, we introduce the subject of wavelet shrinkage, a technique that combines wavelets and statistics into a "thresholding" scheme that effectively reduces noise without doing too much damage to the desired signal. Subsequently, we discuss how the effectiveness of these techniques is measured, both in the ideal sense and in the expected sense. We then look at an illustrative example in the application of one technique. Finally, we analyze this example more generally, in accordance with the underlying theory, and make some conclusions as to when wavelets are an effective technique for increasing a signal-to-noise ratio.
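As a concrete, minimal illustration of wavelet shrinkage (not necessarily the exact scheme studied in the thesis), the sketch below soft-thresholds the detail coefficients of a noisy test signal with the universal threshold sigma*sqrt(2 ln n) and reports the change in signal-to-noise ratio. It assumes the PyWavelets package (pywt); the test signal, wavelet, and decomposition level are arbitrary choices made here for illustration.

```python
import numpy as np
import pywt  # assumes the PyWavelets package is installed

rng = np.random.default_rng(2)

# A hypothetical noisy test signal: a smooth waveform plus Gaussian noise.
n = 2048
t = np.linspace(0.0, 1.0, n)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))
noisy = clean + rng.normal(0.0, 0.3, n)

# Wavelet shrinkage: decompose, soft-threshold the detail coefficients with the
# universal threshold sigma * sqrt(2 ln n), then reconstruct.
wavelet, level = "db4", 5
coeffs = pywt.wavedec(noisy, wavelet, level=level)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest scale
thresh = sigma * np.sqrt(2 * np.log(n))
coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, wavelet)[:n]

def snr_db(reference, signal):
    """Signal-to-noise ratio of `signal` against `reference`, in decibels."""
    return 10 * np.log10(np.sum(reference**2) / np.sum((reference - signal)**2))

print(f"SNR noisy   : {snr_db(clean, noisy):.1f} dB")
print(f"SNR denoised: {snr_db(clean, denoised):.1f} dB")
```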
