41 |
Differential Default Risk Among Traditional and Non-Traditional Mortgage Products and Capital Adequacy Standards / Lin, Che Chun; Prather, Larry J.; Chu, Ting Heng; Tsay, Jing Tang. 01 April 2013
We develop a framework to quantify the credit risk of non-traditional mortgage products (NMPs). Ex ante probabilities of default arise from willingness-to-pay and ability-to-pay problems, and the high default rates observed for NMPs confirm that payment shock is a critical default risk indicator. Monte Carlo simulations are conducted using three correlated stochastic variables (mortgage interest rate, home price, and household income) under normal and stressed economies. The results confirm that 2/28 and option ARM contracts requiring only a minimum monthly interest payment have a greater probability of default than other mortgage products in all economic scenarios. Additionally, the credit risk of NMPs is primarily systematic risk, suggesting that these products should carry higher risk-based capital. Due to the non-linear distribution of credit risk, even the advanced internal ratings-based approach of the Basel II framework can understate the risk of these NMPs.
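A minimal sketch of the kind of Monte Carlo experiment the abstract describes, assuming illustrative parameters (drifts, volatilities, correlations, a 90% LTV interest-only contract, and a joint negative-equity/payment-shock default rule) rather than the paper's actual calibration:

```python
# Hedged sketch: simulate three correlated stochastic drivers (mortgage rate,
# home price, household income) and flag default when negative equity
# (willingness-to-pay) coincides with a high payment burden (ability-to-pay).
# All parameter values and the default rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_years = 100_000, 5

# Assumed annual log drifts/volatilities for (rate, home price, income).
mu = np.array([0.00, 0.03, 0.02])
sigma = np.array([0.01, 0.10, 0.05])
corr = np.array([[ 1.0, -0.3, 0.1],
                 [-0.3,  1.0, 0.4],
                 [ 0.1,  0.4, 1.0]])
L = np.linalg.cholesky(corr)          # induces the assumed correlations

rate0, price0, income0 = 0.06, 300_000.0, 80_000.0
balance = 0.9 * price0                # 90% LTV, interest-only contract

z = rng.standard_normal((n_paths, n_years, 3)) @ L.T
paths = np.exp(np.cumsum(mu + sigma * z, axis=1))   # correlated GBM factors

rate = rate0 * paths[:, :, 0]
price = price0 * paths[:, :, 1]
income = income0 * paths[:, :, 2]

payment = balance * rate              # annual interest-only payment
underwater = price < balance          # negative equity
strained = payment / income > 0.45    # assumed payment-shock threshold
defaulted = (underwater & strained).any(axis=1)

print(f"Simulated 5-year PD: {defaulted.mean():.2%}")
```

A stressed economy can be represented by shifting the assumed drifts and volatilities; the correlation matrix is what couples payment shock to falling prices and incomes.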
|
43 |
Can reliability centered maintenance foster asset management? A case study at the process-oriented steel company Outokumpu / Jonsson, Niklas. January 2022
No description available.
|
44 |
International workshop on safety assessment of consumer goods coming from recovered materials in a global scale perspective: Event report / Bilitewski, Bernd; Barceló, Damià; Darbra, Rosa Mari; Voet, Ester van der; Belhaj, Mohammed; Benfenati, Emilio; Ginebreda, Antoni; Grundmann, Veit. 09 November 2012
Chemicals and additives in products are produced and marketed globally, which makes internationally harmonised assessment and management essential. Chemical testing and research on risks, impacts, and management options are carried out throughout the globe, but they remain fragmented across areas and sectors, too often with little linkage between the different scientific communities. The coordination action (CA) 'RISKCYCLE' aims to establish and co-ordinate a global network of European and international experts and stakeholders to jointly define future R&D needs for innovations in the risk-based management of chemicals and products in a circular economy of global scale, leading to alternative strategies to animal testing and to reduced health hazards. The partners joining this action seek to exploit the synergies of research carried out within different programmes and countries in the EU, in Asia, and overseas; to intensify communication with researchers, institutions, and industries about the risks of hazardous chemicals and additives in products and about risk reduction measures; and to improve the dissemination of available information. The RISKCYCLE network will collaborate closely with related projects and with EU and international bodies and authorities such as the Organisation for Economic Co-operation and Development (OECD), the European Chemical Industry Council (CEFIC), and the Scientific Committee on Health and Environmental Risks in Europe. / The main aim of RISKCYCLE is to identify the future research and development needed to establish a risk-based assessment method for chemicals and products. This method will help reduce animal testing while ensuring the development of new chemicals and a product management model that minimises risks to health and the environment. To achieve this goal, existing information on chemicals, and in particular on the additives used in industrial and consumer products, must first be collected and evaluated. Many potentially toxic compounds are traded worldwide as additives in various products. RISKCYCLE will focus on the effects and consequences of additives in six sectors: textiles, electronics, plastics, leather, paper, and lubricants. In the textile industry the use of additives will be studied, while in the electronics and textile industries the use of flame retardants, especially brominated flame retardants such as PBDEs and HBCD, will be analysed. In the leather industry, heavy metals such as chromium will be of concern. The use of insecticides in the paper industry will also be a main focus of the coordinated activities.
|
45 |
Risk Based Decision Making Tools for Sewer Infrastructure Management / Abdel Moteleb, Moustafa. 28 September 2010
No description available.
|
46 |
Optimal Data-driven Methods for Subject Classification in Public Health Screening / Sadeghzadeh, Seyedehsaloumeh. 01 July 2019
Biomarker testing, wherein the concentration of a biochemical marker is measured to predict the presence or absence of a certain binary characteristic (e.g., a disease) in a subject, is an essential component of public health screening. For many diseases, the concentration of disease-related biomarkers may exhibit a wide range, particularly among the disease positive subjects, in part due to variations caused by external and/or subject-specific factors. Further, a subject's actual biomarker concentration is not directly observable by the decision maker (e.g., the tester), who has access only to the test's measurement of the biomarker concentration, which can be noisy. In this setting, the decision maker needs to determine a classification scheme in order to classify each subject as test negative or test positive. However, the inherent variability in biomarker concentrations and the noisy test measurements can increase the likelihood of subject misclassification.
We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors. In particular, our framework utilizes data analytics methodologies to estimate the posterior disease risk of each subject, based on both subject-specific and external factors, coupled with robust optimization methodologies to derive an optimal robust subject classification scheme, under uncertainty on actual biomarker concentrations. We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening.
As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and show that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices. / Doctor of Philosophy / A biomarker is a measurable characteristic that is used as an indicator of a biological state or condition, such as a disease or disorder. Biomarker testing, where a biochemical marker is used to predict the presence or absence of a disease in a subject, is an essential tool in public health screening. For many diseases, related biomarkers may have a wide range of concentration among subjects, particularly among the disease positive subjects. Furthermore, biomarker levels may fluctuate based on external factors (e.g., temperature, humidity) or subject-specific characteristics (e.g., weight, race, gender). These sources of variability can increase the likelihood of subject misclassification based on a biomarker test.
We develop an optimal data-driven framework, which integrates optimization and data analytics methodologies, for subject classification in disease screening, with the aim of minimizing classification errors. We establish various key structural properties of optimal classification schemes, show that they are easily implementable, and develop key insights and principles for classification schemes in disease screening.
As one application of our framework, we study newborn screening for cystic fibrosis in the United States. Cystic fibrosis is one of the most common genetic diseases in the United States. Early diagnosis of cystic fibrosis can substantially improve health outcomes, while a delayed diagnosis can result in severe symptoms of the disease, including fatality. As a result, newborn screening for cystic fibrosis is conducted throughout the United States. We demonstrate our framework on a five-year newborn screening data set from the North Carolina State Laboratory of Public Health. Our study underscores the value of optimization-based approaches to subject classification, and shows that substantial reductions in classification error can be achieved through the use of the proposed framework over current practices.
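A hedged sketch of the risk-based classification idea in the abstract: posterior disease risk is computed from assumed class-conditional biomarker densities via Bayes' rule, and a cutoff is chosen to minimize expected weighted misclassification error. The distributions, prevalence, and cost ratio below are illustrative assumptions, not the dissertation's estimates:

```python
# Posterior-risk classification sketch under assumed Gaussian biomarker
# distributions; the cutoff minimizes an assumed weighted error objective.
import numpy as np
from scipy import stats

prevalence = 0.01                 # assumed prior disease risk
neg = stats.norm(50.0, 10.0)      # biomarker level | disease-negative (assumed)
pos = stats.norm(90.0, 25.0)      # biomarker level | disease-positive (assumed)

def posterior_risk(x: np.ndarray) -> np.ndarray:
    """P(disease | measured biomarker level x), by Bayes' rule."""
    p_pos = prevalence * pos.pdf(x)
    p_neg = (1.0 - prevalence) * neg.pdf(x)
    return p_pos / (p_pos + p_neg)

# Grid-search the cutoff minimizing expected cost, assuming a false
# negative costs 20x a false positive.
cost_fn, cost_fp = 20.0, 1.0
cutoffs = np.linspace(40, 160, 1_000)
fn = prevalence * pos.cdf(cutoffs)        # positive subject classified negative
fp = (1 - prevalence) * neg.sf(cutoffs)   # negative subject classified positive
best = cutoffs[np.argmin(cost_fn * fn + cost_fp * fp)]
print(f"cost-minimizing cutoff: {best:.1f}, "
      f"posterior risk at cutoff: {posterior_risk(np.array([best]))[0]:.3f}")
```

The dissertation's contribution goes beyond this single-threshold picture by handling noisy measurements and uncertainty on actual concentrations via robust optimization; the sketch only illustrates the posterior-risk building block.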
|
47 |
My Trash, Your Treasure: What Prevents Risk-Based Governance from Diffusing in American Coal Mining Safety Regulation? / Yang, Binglin. 10 February 2010
Recently, there has been a growth of risk-based governance in coal mining safety regulation in many European and Commonwealth countries. It is puzzling, however, that progress has been much slower in the U.S. This dissertation explores this puzzle by examining the question: what are the barriers keeping the American coal mining industry and the U.S. government from moving toward risk-based governance?
Based on the theoretical framework introduced by Braithwaite and Drahos (2000), particularly the theory of modeling, this research found three major barriers that keep the American coal mining industry from fully embracing the model of risk management. First, the existence of a large number of small operators prevents this model from being diffused in the industry. Second, increasingly prescriptive regulations have consumed the resources that companies could use to develop risk management systems and have created a mentality of compliance that is not compatible with the idea of risk management. Third, a group of model mongers, missionaries, and mercenaries have advocated a competing model — behavior-based safety — that is more attractive to the industry.
This dissertation also found that the lack of three factors helps explain the failure of the U.S. government's move toward risk-based governance: (1) strong imitative pressure from general occupational health and safety (OHS) regulation; (2) strong model mongers, missionaries, and mercenaries; and (3) webs of dialogue. / Ph. D.
|
48 |
Optimal Risk-based Pooled Testing in Public Health Screening, with Equity and Robustness Considerations / Aprahamian, Hrayer Yaznek Berg. 03 May 2018
Group (pooled) testing, i.e., testing multiple subjects simultaneously with a single test, is essential for classifying a large population of subjects as positive or negative for a binary characteristic (e.g., presence of a disease, genetic disorder, or a product defect). While group testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal grouping scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts.
We study Dorfman and Array group testing designs under subject-specific risk characteristics, operational constraints, and imperfect tests, considering classification accuracy-, efficiency-, robustness-, and equity-based objectives, and characterize important structural properties of optimal testing designs. These properties provide us with key insights and allow us to model the testing design problems as network flow problems, develop efficient algorithms, and derive insights on equity and robustness versus accuracy trade-off. One of our models reduces to a constrained shortest path problem, for a special case of which we develop a polynomial-time algorithm. We also show that determining an optimal risk-based Dorfman testing scheme that minimizes the expected number of tests is tractable, resolving an open conjecture.
Our case studies, on chlamydia screening and screening of donated blood, demonstrate the value of optimal risk-based testing designs, which are shown to be less expensive, more accurate, more equitable, and more robust than current screening practices. / PHD / Group (pooled) testing, i.e., testing multiple subjects simultaneously with a single test, is essential for classifying a large population of subjects as positive or negative for a binary characteristic (e.g., presence of a disease, genetic disorder, or a product defect). While group testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal grouping scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts.
We study Dorfman and Array group testing designs under subject-specific risk characteristics, operational constraints, and imperfect tests, considering classification accuracy-, efficiency-, robustness-, and equity-based objectives, and characterize important structural properties of optimal testing designs. These properties provide us with key insights and allow us to model the testing design problems as network flow problems, develop efficient algorithms, and derive insights on equity and robustness versus accuracy trade-off. We also show that determining an optimal risk-based Dorfman testing scheme that minimizes the expected number of tests is tractable, resolving an open conjecture.
Our case studies, on chlamydia screening and screening of donated blood, demonstrate the value of optimal risk-based testing designs, which are shown to be less expensive, more accurate, more equitable, and more robust than current screening practices.
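For context on the efficiency dimension these designs optimize, the textbook Dorfman calculation below assumes a perfect test and independent subjects with common prevalence p: a pool of size n costs one group test plus n retests when the pool is positive, so the expected number of tests per subject is 1/n + 1 - (1 - p)^n. The dissertation's risk-based designs generalize this baseline to subject-specific risks and imperfect tests:

```python
# Classic Dorfman pooled-testing efficiency under textbook assumptions
# (perfect test, i.i.d. subjects with prevalence p).
def expected_tests_per_subject(p: float, n: int) -> float:
    # 1 group test shared by n subjects, plus n retests w.p. 1 - (1-p)^n
    return 1.0 / n + 1.0 - (1.0 - p) ** n

def optimal_group_size(p: float, n_max: int = 100) -> int:
    # brute-force search over candidate pool sizes
    return min(range(2, n_max + 1),
               key=lambda n: expected_tests_per_subject(p, n))

for p in (0.001, 0.01, 0.05):
    n = optimal_group_size(p)
    e = expected_tests_per_subject(p, n)
    print(f"p={p:.3f}: optimal pool size {n}, {e:.3f} tests/subject "
          f"({1 - e:.0%} fewer than individual testing)")
```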
|
49 |
Development of a Formalized Criteria for In-Service Inspection of Pedestrian Bridges / Alharthi, Aedh A (20113011). 05 November 2024
<p dir="ltr">In recent years (circa 2024), the purpose of pedestrian bridges has extended beyond simply providing a safe route for pedestrians to cross an obstacle. Nowadays, pedestrian bridges are becoming works of art integrated into the design plan for the whole city. The pleasant appearance of these bridges, however, comes at the cost of requiring complex structural analysis and design, unique fabrication requirements, and construction challenges. Therefore, inspecting different types of pedestrian bridges efficiently and adequately is crucial to avoid unexpected failure during their service life. While National Bridge Inspection Standers (NBIS) regulations are only applicable to all publicly owned <i>highway</i> bridges with spans longer than twenty feet, there is no standard inspection criteria applicable across the board for any type of pedestrian bridge (FHWA 2022a). The current criteria, implemented ad-hoc by many owners, is to inspect pedestrian bridges using the traditional calendar-based inspection approach. This method is based on assigning an inspection interval not to exceed some time frame (typically 24-months) for all bridges with exceptions for some specific bridges receiving special inspections. Although this method may provide an adequate level of safety for some bridges, it doesn’t explicitly account for the current condition, variation in operational environment, and the design characteristics of the bridge. In addition, the current inspection practice of pedestrian bridges considers only inspecting bridge's <i>structural conditions</i> while some unique safety and serviceability criteria should be considered to attain an optimum level of safety and serviceability for pedestrians and cyclists on the bridge such as railing, lighting, walking surface, etc.</p><p dir="ltr">The main objective of this research is to develop an inspection criterion specifically applicable to pedestrian bridges to ensure the optimal allocation of inspection resources while maintaining an optimum safety and serviceability. In its final form, the Risk Based Inspection (RBI) methodology is applied in conjunction with reliability concepts and expert inputs from the Risk Assessment Panel (RAP) of the Indiana Department of Transportation (INDOT) to systematically evaluate the key components of the proposed approach. The proposed methodology is based on the Reliability Based Inspection procedures presented in NCHRP 782 report (Washer et al. 2014a). In this method, the inspection interval is determined based on the risk assessment, which is the product of a combination of occurrence and consequence factors. The occurrence factor is calculated based on design, loading (mechanical and environmental), and condition attributes for each type of damage. The consequence factor measures the outcomes of the occurrence of the damage under consideration. This factor is evaluated at two stages, an immediate consequence in which outcomes impact the safety of the service on and under the bridge, and a short-term consequence, in which effects influence the serviceability of the service under the bridge. Furthermore, a new factor is also introduced to the RBI approach. Specifically, what will be referred to as the inspection effectiveness factor which attempts to accounts for the reliability of the inspection technique to identify and quantify a specific defect for a given components of the bridge. 
The proposed approach is then applied and validated on a set of real in-service pedestrian bridges with varying materials and structural systems. The results demonstrate that the approach improves the safety and serviceability of pedestrian bridge inspections. Furthermore, it ensures a better allocation of the limited inspection resources and proves to be more cost-effective compared to current inspection practices.</p>
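A minimal sketch of the RBI scoring logic described above, in which risk is the product of occurrence and consequence factors and maps to an inspection interval, optionally tightened by the inspection effectiveness factor. The scales, bands, and intervals are illustrative assumptions, not the NCHRP 782 or INDOT RAP values:

```python
# Hedged RBI sketch: risk = occurrence x consequence, adjusted by an
# inspection-effectiveness factor, then mapped to an interval in months.
# All numeric bands below are illustrative assumptions.
def rbi_interval_months(occurrence: int, consequence: int,
                        effectiveness: float = 1.0) -> int:
    """occurrence, consequence on a 1 (low) to 4 (high) scale;
    effectiveness in (0, 1], where lower values shorten the interval."""
    risk = occurrence * consequence / effectiveness
    if risk <= 2:
        return 96    # low risk: extended interval
    if risk <= 6:
        return 48
    if risk <= 9:
        return 24    # comparable to the calendar-based 24-month default
    return 12        # high risk: frequent inspection

# e.g., moderate damage likelihood, high consequence, reliable technique:
print(rbi_interval_months(occurrence=2, consequence=3, effectiveness=0.9))
```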
|
50 |
相關係數對於風險基礎資本有效性之影響 / The Impact of Correlation on the Effectiveness of Risk-Based Capital / Pan, Yuan Chih (潘原至). Unknown Date
This thesis shows that risk-based capital (RBC) is not an effective tool for predicting the solvency of insurance companies. One possible reason for this ineffectiveness is an incorrect assumption about the correlation matrix among the individual risks, but this explanation has never been verified. Using data from a simulated property-casualty insurer, this thesis therefore tests how different correlation-matrix assumptions in the covariance-adjusted Total RBC affect the effectiveness of RBC in predicting property-casualty insurer solvency. We construct a simulation model to compare how correlation specifications affect the effectiveness of capital requirements. The simulation results confirm that the correlation specification has no influence on the effectiveness of predicting insurer solvency. A possible reason is that the number of risk categories used to calculate RBC in the simulation is not large enough for correlations to have a significant effect; therefore, adjusting the covariance formula in RBC will not improve its predictive effectiveness. / From past work, it is believed that RBC is ineffective in predicting solvency. One of the possible reasons for this ineffectiveness may be the unrealistic assumption about correlations among risks, but it has not yet been confirmed. Thus, in this paper we investigate how the correlation specification in obtaining Total RBC after covariance affects the effectiveness of RBC for property-casualty insurers. We conduct simulations to compare the effectiveness of capital requirements with assorted correlation specifications. Simulation results confirm that correlation specification has no influence on effectiveness. Our conjecture is that the number of risk categories in RBC is probably not large enough for correlation to have significant impact. Therefore, modifying the covariance formula alone will not improve the effectiveness of RBC.
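A sketch of the covariance adjustment the thesis varies: the NAIC-style square-root rule, Total RBC = R0 + sqrt(R1^2 + ... + R5^2), implicitly treats the post-covariance risk charges as uncorrelated, and replacing the sum of squares with the quadratic form r'Cr makes the correlation matrix C explicit. The risk charges and matrices below are illustrative assumptions:

```python
# Covariance-adjusted Total RBC with an explicit correlation matrix C:
# Total RBC = R0 + sqrt(r' C r). With C = I this reduces to the standard
# square-root-of-sum-of-squares rule. Charges here are illustrative.
import numpy as np

r = np.array([10.0, 25.0, 5.0, 15.0, 30.0])   # R1..R5 risk charges (assumed)
r0 = 3.0                                       # R0, added outside the root

def total_rbc(r0: float, r: np.ndarray, corr: np.ndarray) -> float:
    return r0 + float(np.sqrt(r @ corr @ r))

independent = np.eye(len(r))                   # zero-correlation assumption
moderate = np.full((len(r), len(r)), 0.3)
np.fill_diagonal(moderate, 1.0)                # all pairwise correlations 0.3

print(f"zero correlation:    {total_rbc(r0, r, independent):.1f}")
print(f"0.3 correlation:     {total_rbc(r0, r, moderate):.1f}")
print(f"perfect correlation: {r0 + r.sum():.1f}")  # upper bound: simple sum
```

With only five risk categories, the gap between these cases is modest relative to the charges themselves, which is consistent with the thesis's conjecture that the category count is too small for the correlation specification to matter.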
|