1. Adaption layer enhancement: an investigation of support for independent link ARQ. Ang, Eng Soon. January 2003 (has links)
The most commonly used transport protocol, TCP (Transmission Control Protocol), reacts to loss by throttling the transmission rate. This impacts performance when the loss is non-congestion related, i.e. corruption loss. A link layer protocol may use ARQ to provide reliability and shield TCP from corruption loss. The advantage of fragmentation together with link ARQ is that only the requested frame, rather than the entire data packet, needs to be retransmitted. For a link to perform transparent fragmentation, an adaption layer (AL) protocol is needed. Although link ARQ may improve TCP performance, it introduces undesirable delay (i.e. receiver-side head-of-line blocking) and negatively impacts end-to-end TCP performance. This thesis presents new results on the impact link ARQ has on cwnd (congestion window) limited TCP sessions sharing the same link ARQ. To minimise the delay, we propose a more assertive link layer protocol (APRIL). To eliminate the interaction between classes of flow sharing the link with ARQ, flow isolation is required. We discuss the role of the virtual channel (VC) and how it can be used to provide flow isolation. We identify the role of the VC as related to the reassembly process at the receiver end: it allows different traffic classes/flows to be reassembled independently. Therefore, multiple reassembly processes are desirable, one for each traffic class/flow. Our novel approach performs reassembly in the link receive buffer, without demultiplexing frames into their respective channels (as in ATM and X.25), to eliminate the interaction between flows sent on different virtual channels. An approach to increase the robustness of sequence number wrapping in a VC reassembly process without increasing the protocol overhead is also proposed. The inefficiency of the multiple reassembly processes is discussed in the thesis. A simple reassembly process requires considerable CPU effort at the receiver since it does not know what exists in the buffer before the process is triggered. We propose the use of three lists (channel, retransmission and suspended lists) to minimise this inefficiency. During link layer frame processing, the receiver records in the channel and retransmission lists every VC encountered in the block. The adaption layer can refer back to these lists prior to commencing the reassembly process, so irrelevant blocks, frames and VCs can be identified and ignored during reassembly. We demonstrate that these lists greatly reduce the processing cost.
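To make the three-list idea concrete, the sketch below records the VCs seen while a link-layer block is processed so that reassembly can skip irrelevant VCs; the data structures, names and completeness check are illustrative assumptions, not the thesis implementation.

```python
# A minimal sketch of the three-list bookkeeping described above; the structures,
# field names and completeness check are assumptions, not the thesis implementation.
from collections import defaultdict

class ALReceiver:
    def __init__(self):
        self.buffers = defaultdict(list)  # per-VC fragments awaiting reassembly
        self.channel_list = set()         # VCs seen while processing the current block
        self.retransmission_list = set()  # VCs whose requested frames were retransmitted
        self.suspended_list = set()       # VCs still waiting for a missing frame

    def process_block(self, frames):
        """Record every VC encountered while the frames of a link-layer block are handled."""
        for vc, seq, payload, retransmitted in frames:
            self.buffers[vc].append((seq, payload))
            self.channel_list.add(vc)
            if retransmitted:
                self.retransmission_list.add(vc)
                self.suspended_list.discard(vc)  # the awaited frame has now arrived

    def reassemble(self):
        """Reassemble only the VCs flagged in the lists; other VCs and frames are ignored."""
        packets = {}
        for vc in self.channel_list - self.suspended_list:
            fragments = sorted(self.buffers[vc])
            seqs = [s for s, _ in fragments]
            if seqs == list(range(seqs[0], seqs[0] + len(seqs))):  # contiguous: complete
                packets[vc] = b"".join(p for _, p in fragments)
                self.buffers[vc].clear()
            else:
                self.suspended_list.add(vc)  # a gap remains: wait for link ARQ retransmission
        self.channel_list.clear()
        self.retransmission_list.clear()
        return packets

rx = ALReceiver()
rx.process_block([(1, 0, b"he", False), (1, 1, b"llo", False), (2, 5, b"x", False)])
print(rx.reassemble())  # VC 1 reassembles to b'hello'; VC 2 completes from a single fragment
```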
2. Rapid Spatial Distribution Seismic Loss Analysis for Multistory Buildings. Deshmukh, Pankaj Bhagvatrao. 2011 May 1900 (has links)
Tall building frames that respond to large seismic ground motions tend to have significant spatial variability of damage over their height, often with a concentration of that damage in the lower stories. In spite of this spatial variability, existing damage and loss models tend to take the maximum story drift, assume the same drift applies over the entire height, and then calculate damage for the building, which is clearly a conservative approach. A new loss analysis approach is thus recommended that incorporates the effects of the spatial distribution of earthquake-induced damage to frame buildings. Moreover, the approach aims to distinguish between damage that requires repair and damage that requires replacement. Suites of earthquakes and incremental dynamic analysis, along with the commercial software SAP2000, are used to establish demands from which story damage and financial losses are computed directly and aggregated for the entire structure. Rigorous and simplified methods are developed that account for the spatial distribution of different damage levels arising from individual story drifts.
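As a toy illustration of the difference between the two loss models, the sketch below compares a max-drift calculation with a story-by-story aggregation; the drift profile, story value and drift-to-loss mapping are invented placeholders, not values from the thesis.

```python
# Toy comparison of applying the maximum story drift to every story versus
# aggregating losses story by story. All numbers are made-up placeholders.

def story_loss_fraction(drift):
    """Toy repair-cost fraction for one story as a function of its peak drift ratio."""
    if drift < 0.005:
        return 0.0                            # negligible damage
    if drift < 0.02:
        return (drift - 0.005) / 0.015 * 0.6  # repairable damage, up to 60% of story value
    return 1.0                                # replacement-level damage

story_drifts = [0.021, 0.018, 0.012, 0.008, 0.006, 0.004]  # damage concentrated in lower stories
story_value = 1.0e6                                        # replacement value per story

# Conventional approach: assume the maximum drift applies over the entire height.
uniform_loss = len(story_drifts) * story_value * story_loss_fraction(max(story_drifts))

# Spatially distributed approach: compute story damage individually and aggregate.
distributed_loss = sum(story_value * story_loss_fraction(d) for d in story_drifts)

print(f"max-drift loss estimate:   {uniform_loss:,.0f}")
print(f"story-by-story loss total: {distributed_loss:,.0f}")
```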
3. The Application Of VaR In Taiwan Property And Casualty Insurance Industry And Influence Factor Of Underwriting Risk Research. Liu, Cheng-chung. 02 July 2008 (has links)
In recent years, Value at Risk (VaR) has become an important risk management tool in the banking industry. The property and casualty insurance industry, by contrast, has seen little related research, in part because the data needed to study underwriting risk are difficult to collect, so the domestic literature is scarce. In this paper we draw the required statistics from the TEJ data bank; the sample consists of 9 property insurance companies. Using the public information in the TEJ data bank, we obtain yearly and quarterly data, apply the "Fuzzy Distance Weighting Method" to convert the quarterly data into monthly data, calculate yearly, quarterly and monthly loss ratios, and then use the idea of VaR to compare the loss ratio-at-risk across the three frequencies. This study also examines the factors that influence the underwriting risk of the domestic property and casualty insurance industry. We find that yearly data underestimate the actual loss ratio at risk. In addition, regression analysis shows that the underwriting loss ratio-at-risk is influenced by free cash flow, leverage ratio, and firm size. The results could serve as a reference when the property and casualty insurance industry or the supervisory authority establishes risk management rules.
Keywords: Value at risk, Loss ratio, Loss ratio-at-risk, Underwriting risk
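As a rough illustration of the loss ratio-at-risk comparison described above, the sketch below shows why coarser aggregation tends to understate the tail; the loss-ratio series is synthetic, not the TEJ sample used in the thesis.

```python
# Sketch of comparing loss-ratio-at-risk estimated from series of different
# frequencies; the loss-ratio series is synthetic, not the thesis data.
import numpy as np

rng = np.random.default_rng(0)
monthly_lr = rng.gamma(shape=20.0, scale=0.03, size=120)  # 10 years of monthly loss ratios
yearly_lr = monthly_lr.reshape(10, 12).mean(axis=1)       # the same experience aggregated by year

def loss_ratio_at_risk(series, confidence=0.95):
    """Empirical VaR of the loss ratio: the upper-tail quantile of the observed series."""
    return float(np.quantile(series, confidence))

print("monthly loss ratio-at-risk:", round(loss_ratio_at_risk(monthly_lr), 3))
print("yearly  loss ratio-at-risk:", round(loss_ratio_at_risk(yearly_lr), 3))
# Aggregating to yearly figures smooths the tail, so the yearly estimate is typically
# lower, consistent with the finding that yearly data underestimate the loss ratio at risk.
```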
4. A Dynamic Queue Adjustment Based on Packet Loss Ratio in Wireless Networks. Chu, Tsuh-Feng. 13 August 2003 (has links)
Traditional TCP, when applied in wireless networks, may encounter two limitations. The first is the higher bit error rate (BER) due to noise, fading, and multipath interference. Because traditional TCP is designed for wired, reliable networks in which packet loss is mainly caused by network congestion, TCP may decrease its congestion window inappropriately upon detecting a packet loss. The second limitation concerns packet scheduling, which mostly does not consider wireless characteristics.
In this thesis, we propose a local retransmission mechanism to improve TCP throughput for wireless networks with higher BER. In addition, we measure the packet loss ratio (PLR) to adjust the queue weight so that the available bandwidth for each queue can be changed accordingly. In our mechanism, the queue length is used to determine whether there is congestion in the wireless network: when the queue length exceeds a threshold, congestion is very likely. We not only propose the dynamic weight-adjustment mechanism but also solve the packet out-of-sequence problem, which arises when a TCP flow is moved to a new queue.
For the purpose of demonstration, we implement the proposed weight-adjustment mechanisms on the Linux platform. Through the measurements and discussions, we have shown that the proposed mechanisms can effectively improve the TCP throughput in wireless networks.
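The following sketch gives one possible shape of the weight-adjustment logic; the thresholds and the direction of the adjustments are assumptions for illustration and do not reproduce the mechanism implemented in the thesis.

```python
# Hypothetical sketch of PLR-driven queue-weight adjustment in the spirit of the
# mechanism above; thresholds and update rules are illustrative assumptions only.

QUEUE_THRESHOLD = 80          # queue length beyond which congestion is assumed
MIN_WEIGHT, MAX_WEIGHT = 1, 10

def adjust_weight(queue_len, packets_sent, packets_lost, weight):
    """Return an updated scheduling weight for one wireless queue."""
    plr = packets_lost / packets_sent if packets_sent else 0.0
    if queue_len > QUEUE_THRESHOLD:
        # A long queue suggests congestion, so grant this queue more service
        # (a larger weight) to drain it.
        return min(MAX_WEIGHT, weight + 1)
    if plr > 0.05:
        # A short queue with a high loss ratio suggests wireless corruption losses,
        # handled by local retransmission rather than by extra bandwidth.
        return max(MIN_WEIGHT, weight - 1)
    return weight

print(adjust_weight(queue_len=90, packets_sent=1000, packets_lost=20, weight=5))  # -> 6
print(adjust_weight(queue_len=30, packets_sent=1000, packets_lost=80, weight=5))  # -> 4
```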
5. An Enhanced Dynamic Algorithm For Packet Buffer. Rajan, Vinod. 11 December 2004 (has links)
A packet buffer for the protocol processor is a large memory space that holds incoming data packets for applications. Data packets for each application are stored as FIFO queues in the packet buffer, and packets are dropped when the buffer is full. An efficient buffer management algorithm is required to manage the buffer space among the different FIFO queues and to avoid heavy packet loss. This thesis develops a simulation model for the packet buffer and studies the performance of conventional buffer management algorithms when applied to it. The thesis then proposes a new buffer management algorithm, Dynamic Algorithm with Different Thresholds (DADT), to improve the packet loss ratio. The algorithm takes advantage of the different packet sizes of each application and proportionally allocates buffer space to each queue. The performance of the DADT algorithm depends on the packet size distribution of the network traffic load, so three different network traffic loads are considered in our simulations. For the average network traffic load, the DADT algorithm shows an improvement of 6.7% in packet loss ratio over the conventional dynamic buffer management algorithm; for the high and actual network traffic loads, it shows improvements of 5.45% and 3.6%, respectively. Based on the simulation results, the DADT algorithm outperforms the conventional buffer management algorithms for various network traffic loads.
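A minimal sketch of how per-application thresholds could scale with packet size is shown below; the scaling rule and all constants are assumptions, not the DADT specification.

```python
# Sketch of the per-queue threshold idea behind DADT (Dynamic Algorithm with
# Different Thresholds); the alpha value, packet sizes and buffer size are
# illustrative, not parameters from the thesis.

BUFFER_SIZE = 10_000   # total packet-buffer space in bytes
ALPHA = 2.0            # scaling factor of the classic dynamic-threshold scheme

def accept_packet(queues, typical_sizes, app, pkt_len):
    """Decide whether an arriving packet of application `app` may be buffered."""
    free = BUFFER_SIZE - sum(queues.values())
    # Classic dynamic algorithm: one common threshold proportional to free space.
    # DADT-style variation: scale that threshold by the application's typical packet
    # size, so applications with larger packets receive proportionally more space.
    mean_size = sum(typical_sizes.values()) / len(typical_sizes)
    threshold = ALPHA * free * (typical_sizes[app] / mean_size)
    if pkt_len <= free and queues[app] + pkt_len <= threshold:
        queues[app] += pkt_len
        return True
    return False   # drop: the queue exceeded its threshold or the buffer is full

queues = {"voip": 0, "video": 0, "web": 0}
sizes = {"voip": 160, "video": 1400, "web": 800}
print(accept_packet(queues, sizes, "video", 1400))  # True while the buffer is mostly empty
```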
6. Comparative Analysis of VANET and Vehicular Cloud Models with Advanced Communications Protocols. Sukhu, Jonathan Brandon. January 2024 (has links)
Vehicular communication systems are integral to efficient highway operational management and to mitigating severe traffic congestion. While vehicular ad hoc networks (VANETs) are reliable and reduce reliance on existing infrastructure, they can experience high communication overhead and network disruptions. Vehicular micro clouds (VMCs) provide a promising solution to the challenges of VANETs by reducing communication latency through cooperative and collaborative resource allocation and data offloading. This thesis offers a comparative performance analysis of freeway incident management and vehicle platooning, comparing VANET communications against stationary and platoon-based dynamic VMCs. Specifically, it studies speed and lane-changing advisories in addition to freeway platooning. To further enhance the analysis, the performance of both communication architectures is evaluated using the DSRC protocol versus the cellular technologies C-V2X, 4G LTE, and 5G NR for latency, bandwidth, range, and deployment considerations. System-level features, such as driving safety and vehicular mobility, are measured to evaluate the efficacy of the communication systems under incident-induced traffic conditions. The study uses the AIMSUN microscopic traffic simulator to model and analyze the performance of the proposed systems. Key performance indicators include communication latency and packet loss ratio. The stationary and dynamic cloud systems show advantages in reducing travel time delay, even at high penetration rates of connected vehicles, whilst reducing collision risks. On average, we observe a 10% improvement in travel time by implementing vehicular clouds over traditional ad hoc networks. From a communications standpoint, the overall latency delay and packet loss are reduced by 7% and 11%, respectively, with the cloud models. The findings also show that dynamic cloud models, given their improved manoeuvrability, can maximize the computational capabilities of CVs, even at high market penetrations in large-scale freeway demands. The results suggest a shift towards greater reliance on connected vehicular clouds to minimize the risks associated with message interference and system overload, whilst fostering advancements in intelligent freeway traffic management systems. / Thesis / Master of Applied Science (MASc)
7. Spatial Pattern of Yield Distributions: Implications for Crop Insurance. Annan, Francis. 11 August 2012 (has links)
Despite the potential benefits of larger datasets for crop insurance ratings, pooling yields with similar distributions is not a common practice. The current USDA-RMA county insurance ratings do not consider information across state lines, a politically driven assumption that ignores a wealth of climate and agronomic evidence suggesting that growing regions are not constrained by state boundaries. We test the appropriateness of this assumption and provide empirical grounds for the benefits of pooling datasets. We find evidence in favor of pooling across state lines, with poolable counties sometimes being as far as 2,500 miles apart. An out-of-sample performance exercise suggests that our proposed pooling framework outperforms a no-pooling alternative and supports the hypothesis that economic losses should be expected if our pooling framework is not adopted. Our findings have strong empirical and policy implications for the accurate modeling of yield distributions and for the rating of crop insurance products.
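As a toy illustration of testing whether two counties' yields can be pooled, the sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic data; the choice of test and the data are assumptions, not the paper's actual procedure.

```python
# Illustrative poolability check for yield samples from two counties using a
# two-sample Kolmogorov-Smirnov test; the yield series below are synthetic and
# the paper's actual testing procedure differs.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
county_a = rng.normal(loc=150, scale=25, size=40)  # 40 years of yields (bu/acre), county A
county_b = rng.normal(loc=152, scale=27, size=40)  # a candidate county across the state line

stat, p_value = ks_2samp(county_a, county_b)
if p_value > 0.05:
    pooled = np.concatenate([county_a, county_b])   # equality of distributions not rejected
    print(f"pool the counties (p = {p_value:.2f}); pooled sample size = {pooled.size}")
else:
    print(f"rate the counties separately (p = {p_value:.2f})")
```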
8. 火災保險自負額運用之分析探討 / Analyses on the Usage of Deductibles in the Fire Insurance. Chang, Sky (張天皓). Unknown Date (has links)
The use of deductibles is one of the important ways for an insurer to control loss frequency and loss severity when underwriting business. Appropriate use of deductibles can stabilize an insurance company's underwriting profit and operating performance; however, many functions of the current deductible cannot be fully realized, and it even causes many problems in practice. This thesis therefore studies a new type of deductible in order to solve these practical problems and realize the full function of the deductible.
Using actual claim data from a property insurance company, this thesis examines how the design of the new-type deductible affects loss ratios. In addition, underwriting and sales staff of six insurance companies were interviewed by questionnaire about the merits and defects of the current deductible system and the feasibility of the new-type deductible. The interviewees highly agreed with both, and the simulations achieved the expected result that the impact on loss ratios is insignificant. / The deductible is a key element for insurance companies in controlling loss frequency and loss severity. Appropriate deductible usage can stabilize an insurer's underwriting profit. The constraints of the present deductible cause many problems in practice, so this paper studies a new-type deductible to solve these practical problems and develop the optimal function of the deductible.
This article employs an actual loss database from one insurance company to study the impact of the new-type deductible on loss ratios. Moreover, this paper conducts a qualitative interview survey to learn the viewpoints of underwriters and sales staff of six Taiwan insurance companies on the merits and defects of the present deductible and on the usage of the new-type deductible proposed herein. The results show that the merits and defects of the present deductible and the feasibility of the new-type deductible are highly approved by the interviewees. In addition, various case-study simulations indicate that the new-type deductible has an insignificant impact on loss ratios.
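For context, the sketch below shows how applying a simple straight deductible to a set of claims changes the loss ratio; the figures are invented and the new-type deductible itself is not modelled here.

```python
# Minimal sketch of how a straight deductible changes the realized loss ratio.
# Claim amounts, premium and deductible level are made-up figures, and the
# new-type deductible studied in the thesis is not reproduced.
claims = [0, 12_000, 3_500, 0, 45_000, 800, 0, 22_000]  # gross fire losses per policy
premium = 60_000                                         # total earned premium
deductible = 2_000

gross_loss = sum(claims)
net_loss = sum(max(c - deductible, 0) for c in claims)

print(f"loss ratio without deductible: {gross_loss / premium:.2%}")
print(f"loss ratio with deductible:    {net_loss / premium:.2%}")
```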
9. 類神經網路在汽車保險費率擬訂的應用 / Artificial Neural Network Applied to Automobile Insurance Ratemaking. Chen, Chi-Chang Season (陳志昌). Unknown Date (has links)
Since 1999, the insured rate of automobile material damage insurance in Taiwan has declined while its loss ratio has risen year by year, in sharp contrast to the year-by-year decline in the loss ratio of compulsory third-party liability insurance. In theory, if premiums are charged according to individual risk, attracting insureds who accept the price and surcharging high-risk insureds, the insured rate can be raised while losses are kept within a reasonable range. Against this background, this thesis uses automobile material damage insurance data of a domestic property insurance company from 1999 to 2002 to examine the relationship between past premium income and future claim payments and, subject to the unbiasedness requirement, to look for methods that reduce the variance of prediction errors.
The results show that cross-subsidization exists in automobile material damage insurance. New rates calculated with the minimum bias estimation method can improve the imbalance between premiums and losses, but the surcharge/discount system estimated with an artificial neural network applies larger adjustments to the low-risk policyholders who deserve discounts and the high-risk policyholders who deserve surcharges. It therefore separates high- and low-risk groups more effectively, reduces cross-subsidization between risk groups, and shows smaller error variance in the cross-year data. / In the past five years, the insured rate of Automobile Material Damage Insurance (AMDI) has declined but the loss ratio is climbing, in contrast to the decreasing trend in the loss ratio of the compulsory automobile liability insurance. By charging premiums corresponding to individual risks, we could attract low-risk entrants and reflect the costs of high risks, so that the loss ratio can be kept at a reasonable level. To further illustrate the concept, we take the AMDI to study the most efficient estimator of future claims. Because the relationship between loss experience (input) and future claim estimation (output) resembles the way the human brain performs, we analyze it with the minimum bias procedure and an artificial neural network, seeking to reduce error at the overall rate level as well as for individual classes, demonstrated using policy year 1999 to 2002 data.
According to the thesis, cross-subsidization exists in Automobile Material Damage Insurance. The new rates produced by the minimum bias estimate can alleviate the imbalance between premium and loss. However, the neural network classification rating can allocate those premiums more fairly, where 'fairly' means that higher premiums are paid by those insured with greater risk of loss and vice versa. It is also more efficient than the minimum bias estimator on the panel data.
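A small, hedged sketch of the neural-network side of such a rating exercise appears below; the rating factors, synthetic data and network architecture are assumptions for illustration and do not reproduce the thesis's minimum bias or neural network models.

```python
# Hedged sketch of estimating expected claim cost from rating factors with a small
# neural network; factors, data and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 500
age = rng.integers(20, 70, n)       # driver age
car_age = rng.integers(0, 15, n)    # vehicle age in years
urban = rng.integers(0, 2, n)       # 1 = urban territory
X = np.column_stack([age, car_age, urban]).astype(float)

# Synthetic "true" loss cost: younger drivers and urban territories cost more.
loss_cost = 8000 / age + 200 * urban + rng.normal(0, 30, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, loss_cost)

base = model.predict([[45, 5, 0]])[0]          # reference risk class
young_urban = model.predict([[22, 5, 1]])[0]   # class that should be surcharged
print(f"indicated relativity for a 22-year-old urban driver: {young_urban / base:.2f}")
```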
10. 台灣壽險業健康保險損失率影響因素之探討 / The factors that influence the loss ratio of health insurance policies for life insurance companies in Taiwan. Chiu, Yu Chun (邱于君). Unknown Date (has links)
This study examines the factors that influence the health insurance loss ratio of life insurers in Taiwan. First, it investigates whether the health insurance loss ratio differs significantly with the size of the life insurer. Second, life insurers are classified into three groups according to the distribution channel they mainly rely on, namely the employee sales channel, the agent/broker channel and other channels, and the effect of the channel on the health insurance loss ratio is observed. Finally, through an analysis of firm-specific and macroeconomic variables, the study aims to help insurers control loss-ratio risk from additional perspectives.
The findings are as follows:
1. The health insurance loss ratio differs significantly with the insurer's asset size; the average health insurance loss ratio of large life insurers is significantly higher than that of small life insurers.
2. The degree of emphasis a life insurer places on different marketing channels does lead to significant differences in the health insurance loss ratio. Insurers that rely more heavily on other channels have significantly lower health insurance loss ratios than those using the agent/broker and employee sales channels.
3. The influence of macroeconomic factors on the health insurance loss ratio is generally weaker than that of firm-specific factors. Firm-specific factors do significantly affect the loss ratio, and the main factors influencing it differ between insurers of different sizes. / This study examines the factors that influence the loss ratio of health insurance policies for life insurance companies in Taiwan. First, this thesis investigates whether there are significant differences in loss ratios among insurers due to firm size. Secondly, the impact of marketing channels on the health insurance loss ratio is analyzed, where the distribution systems mainly used by insurers are divided into three categories: employee sales, the agent/broker channel, and others. Finally, this study conducts regression analyses on the health insurance loss ratio with firm-specific and macroeconomic variables to help insurers control risks in the future. The empirical results are shown as follows.
1. The loss ratios of health insurance vary significantly with firm size. The loss ratio of large insurance companies is significantly higher than that of small insurance companies.
2. The distribution system has a significant impact on the loss ratio of health insurance. When an insurer relies more on other channels, instead of employee sales and the agent/broker channel, the insurer has a lower loss ratio.
3. The impact of macroeconomic variables on the loss ratio of health insurance is less than that of firm-specific variables. Additionally, the influential variables for the loss ratio may differ between insurers of large and small sizes.
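To sketch what such regression analyses might look like in code, the example below fits an OLS model to a synthetic panel; the variable set and coefficients are assumptions about the kind of model described above, not the study's data or results.

```python
# Illustrative regression of the health-insurance loss ratio on firm-specific and
# macroeconomic variables; the panel is synthetic and the regressors are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
log_assets = rng.normal(10.0, 1.5, n)    # firm-size proxy
other_share = rng.uniform(0.0, 1.0, n)   # share of business from "other" channels
gdp_growth = rng.normal(0.03, 0.02, n)   # macroeconomic control

# Synthetic loss ratio: larger firms higher, heavier use of other channels lower.
loss_ratio = (0.20 + 0.03 * log_assets - 0.10 * other_share
              - 0.50 * gdp_growth + rng.normal(0, 0.05, n))

X = sm.add_constant(np.column_stack([log_assets, other_share, gdp_growth]))
result = sm.OLS(loss_ratio, X).fit()
print(result.params)   # const, firm size, other-channel share, GDP growth
```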