
Essays on Machine Learning in Risk Management, Option Pricing, and Insurance Economics

Fritzsch, Simon 05 July 2022 (has links)
Dealing with uncertainty is at the heart of financial risk management and asset pricing. This cumulative dissertation consists of four independent research papers that study various aspects of uncertainty, ranging from estimation and model risk to the volatility risk premium and the measurement of unobservable variables. In the first paper, a non-parametric estimator of conditional quantiles is proposed that builds on methods from the machine learning literature. The so-called leveraging estimator is discussed in detail and analyzed in an extensive simulation study. Subsequently, the estimator is used to quantify the estimation risk of Value-at-Risk and Expected Shortfall models. The results suggest that there are significant differences in the estimation risk of various GARCH-type models, and that estimation risk is generally higher for the Expected Shortfall than for the Value-at-Risk. In the second paper, the leveraging estimator is applied to realized and implied volatility estimates of US stock options to test empirically whether the volatility risk premium is priced in the cross-section of option returns. A trading strategy that is long (short) in a portfolio with low (high) implied volatility conditional on the realized volatility yields average monthly returns that are economically and statistically significant. The third paper investigates the model risk of multivariate Value-at-Risk and Expected Shortfall models in a comprehensive empirical study on copula GARCH models. The paper finds that model risk is economically significant, especially during periods of financial turmoil, and is mainly due to the choice of the copula. In the fourth paper, the relation between digitalization and the market value of US insurers is analyzed. To this end, a text-based measure of digitalization building on Latent Dirichlet Allocation is proposed.
It is shown that a rise in digitalization efforts is associated with an increase in market valuations.

Contents:
1 Introduction
  1.1 Motivation
  1.2 Conditional quantile estimation via leveraging optimal quantization
  1.3 Cross-section of option returns and the volatility risk premium
  1.4 Marginals versus copulas: Which account for more model risk in multivariate risk forecasting?
  1.5 Estimating the relation between digitalization and the market value of insurers
2 Conditional Quantile Estimation via Leveraging Optimal Quantization
  2.1 Introduction
  2.2 Optimal quantization
  2.3 Conditional quantiles through leveraging optimal quantization
  2.4 The hyperparameters N, λ, and γ
  2.5 Simulation study
  2.6 Empirical application
  2.7 Conclusion
3 Cross-Section of Option Returns and the Volatility Risk Premium
  3.1 Introduction
  3.2 Capturing the volatility risk premium
  3.3 Empirical study
  3.4 Robustness checks
  3.5 Conclusion
4 Marginals Versus Copulas: Which Account for More Model Risk in Multivariate Risk Forecasting?
  4.1 Introduction
  4.2 Market risk models and model risk
  4.3 Data
  4.4 Analysis of model risk
  4.5 Model risk for models in the model confidence set
  4.6 Model risk and backtesting
  4.7 Conclusion
5 Estimating the Relation Between Digitalization and the Market Value of Insurers
  5.1 Introduction
  5.2 Measuring digitalization using LDA
  5.3 Financial data & empirical strategy
  5.4 Estimation results
  5.5 Conclusion
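The dissertation's leveraging estimator itself is not reproducible from the abstract, but the standard building block of any conditional quantile estimator, including Value-at-Risk models, is the pinball (check) loss. The sketch below is illustrative only; the data and forecast values are made up.

```python
# Illustrative sketch only: the dissertation's "leveraging" estimator is not
# reproduced here. This shows the standard pinball (check) loss that any
# conditional quantile forecast, e.g. a Value-at-Risk model, is scored with.
def pinball_loss(y, q_hat, tau):
    """Average check-function loss of quantile forecasts q_hat at level tau."""
    return sum(
        (tau - (yi < qi)) * (yi - qi)  # tau*(y-q) if y >= q, (tau-1)*(y-q) if y < q
        for yi, qi in zip(y, q_hat)
    ) / len(y)

# The true tau-quantile minimizes expected pinball loss, so a better
# conditional quantile forecast scores lower.
y = [-2.0, -0.5, 0.3, 1.1, 2.4]      # hypothetical returns
good = [-1.8] * 5                     # hypothetical 10%-quantile forecast near the tail
bad = [0.0] * 5
assert pinball_loss(y, good, 0.10) < pinball_loss(y, bad, 0.10)
```

Because the loss is minimized in expectation by the true quantile, comparing average pinball losses across models is one way to quantify differences in quantile (VaR) estimation quality.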

Analyses Of Crash Occurrence And Injury Severities On Multi Lane Highways Using Machine Learning Algorithms

Das, Abhishek 01 January 2009 (has links)
Reduction of crash occurrence on the various roadway locations (mid-block segments; signalized intersections; un-signalized intersections) and the mitigation of injury severity in the event of a crash are the major concerns of transportation safety engineers. Multi-lane arterial roadways (excluding freeways and expressways) account for forty-three percent of fatal crashes in the state of Florida. Significant contributing causes fall under the broad categories of aggressive driver behavior; adverse weather and environmental conditions; and roadway geometric and traffic factors. The objective of this research was the implementation of innovative, state-of-the-art analytical methods to identify the contributing factors for crashes and injury severity. Advances in computational methods enable the use of modern statistical and machine learning algorithms. Even though most of the contributing factors are known a priori, advanced methods unearth changing trends. Heuristic evolutionary processes such as genetic programming; sophisticated data mining methods like the conditional inference tree; and mathematical treatments in the form of sensitivity analyses constitute the major contributions of this research. Application of traditional statistical methods, such as simultaneous ordered probit models, and the identification and resolution of crash data problems are also key aspects of this study. To eliminate the use of an unrealistic uniform intersection influence radius of 250 ft, heuristic rules were developed for assigning crashes to roadway segments, signalized intersections, and access points using parameters such as 'site location', 'traffic control', and node information. The use of Conditional Inference Forests instead of Classification and Regression Trees to identify significant variables for injury severity analysis removed the bias towards selecting continuous variables or variables with a large number of categories.
For the injury severity analysis of crashes on highways, the corridors were clustered into four optimum groups. The optimum number of clusters was found using the Partitioning Around Medoids algorithm. Concepts from evolutionary biology, such as crossover and mutation, were implemented to develop models for classification and regression analyses based on the highest hit rate and minimum error rate, respectively. A low crossover rate combined with a higher mutation rate reduces the chance of genetic drift and brings novelty to the model development process. Annual average daily traffic; friction coefficient of pavements; on-street parking; curbed medians; surface and shoulder widths; and alcohol / drug usage are some of the significant factors that played a role in both crash occurrence and injury severity. Relative sensitivity analyses were used to identify the effect of continuous variables on the variation of crash counts. This study improved the understanding of the significant factors that could play an important role in designing better safety countermeasures on multi-lane highways, and hence enhance their safety by reducing the frequency of crashes and the severity of injuries. Educating young people about the abuse of alcohol and drugs, specifically at high schools and colleges, could potentially lead to lower driver aggression. Removal of on-street parking from high-speed arterials could result in a likely drop in the number of crashes. Widening of shoulders could give drivers greater maneuvering space. Improving pavement conditions for a better friction coefficient would lead to improved crash recovery. Adding lanes to alleviate problems arising from increased ADT, and restricting trucks to the slower right lanes on highways, would not only reduce crash occurrence but also result in lower injury severity levels.
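The Partitioning Around Medoids step mentioned above can be sketched minimally. This is not the study's implementation or data; it is a naive swap-based k-medoids on made-up one-dimensional "corridor features", shown only to illustrate the algorithm's greedy swap structure.

```python
# Hedged sketch of Partitioning Around Medoids (PAM), the clustering step the
# abstract uses to group corridors. Data, k, and the distance are illustrative.
def pam(points, k, dist=lambda a, b: abs(a - b)):
    medoids = list(points[:k])  # naive initialization
    def cost(meds):
        # total distance of every point to its nearest medoid
        return sum(min(dist(p, m) for m in meds) for p in points)
    improved = True
    while improved:             # greedily swap medoids while cost decreases
        improved = False
        for i in range(k):
            for p in points:
                trial = medoids[:i] + [p] + medoids[i + 1:]
                if cost(trial) < cost(medoids):
                    medoids, improved = trial, True
    return sorted(medoids)

# Two well-separated groups: one medoid should land in each cluster.
data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]
meds = pam(data, 2)  # → [1.0, 10.0]
```

In practice the number of clusters (four in the study) is chosen by comparing a quality criterion such as average silhouette width across candidate values of k.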

Demand-Side Financing in Education: A Critical Examination of a Girls' Scholarship Program in Malawi (Case Study)

Sineta, Abraham 01 September 2012 (has links)
Despite the push for universal education, many disadvantaged and poor children in developing countries still do not have access to basic education. This is due, among other reasons, to poverty: poor families cannot afford the cost of basic education even when it is `free' of tuition (McDonald, 2007). Demand-side financing interventions such as scholarship programs promise to be viable financing mechanisms for reaching poor and marginalized children so that they can access basic education. Although such financing strategies have been praised as having worked, mostly in Latin American countries, very little is systematically known about how these interventions would work in poor African countries such as Malawi. This study therefore examines a demand-side financing strategy through an evaluation of a scholarship program implemented in Malawi. It uses a qualitative mode of inquiry, with in-depth interviews of 36 key participants as the primary method of data collection. In addition, it reviews program documents and conducts cohort tracking of beneficiaries in Zomba rural district, the site of the study. The findings show that community-based targeting was used in the program and proved successful in identifying the right beneficiaries in a cost-effective manner; it seems to offer a model to be adopted for such interventions in low-resource countries. Findings further show that beneficiaries who received scholarships were able to persist; however, a substantial number dropped out. A number of factors caused this, but the internal motivation of beneficiaries to persist appears to have been critical. This puts under the microscope the assumption that once a scholarship is received, beneficiaries will persist in school.
Last but not least, the findings also show that the assumption that local communities will be able to sustain such programs may be a mere illusion, as communities view themselves as too poor to do so. Overall, the study finds such programs effective in targeting poor and marginalized children, but it cautions against assumptions about persistence and sustainability. It suggests further scrutiny of these assumptions to improve the effectiveness of such programs and of demand-side financing strategies in general.

Deep Synthetic Noise Generation for RGB-D Data Augmentation

Hammond, Patrick Douglas 01 June 2019 (has links)
Considerable effort has been devoted to finding reliable methods of correcting noisy RGB-D images captured with unreliable depth-sensing technologies. Supervised neural networks have been shown to be capable of RGB-D image correction, but require copious amounts of carefully corrected ground-truth data to train effectively. Data collection is laborious and time-intensive, especially for large datasets, and generation of ground-truth training data tends to be subject to human error. It might be possible to train an effective method on a relatively small dataset using synthetically damaged depth data as input to the network, but this requires some understanding of the latent noise distribution of the respective camera. It is possible to augment datasets to a certain degree using naive noise generation, such as random dropout or Gaussian noise, but these tend to generalize poorly to real data. A superior method would imitate real camera noise to damage input depth images realistically so that the network learns to correct the appropriate depth-noise distribution. We propose a novel noise-generating CNN capable of producing realistic noise customized to a variety of different depth-noise distributions. In order to demonstrate the effects of synthetic augmentation, we also contribute a large novel RGB-D dataset captured with the Intel RealSense D415 and D435 depth cameras. This dataset pairs many examples of noisy depth images with automatically completed RGB-D images, which we use as a proxy for ground-truth data. We further provide an automated depth-denoising pipeline which may be used to produce proxy ground-truth data for novel datasets. We train a modified sparse-to-dense depth-completion network on splits of varying size from our dataset to determine reasonable baselines for improvement.
We determine through these tests that adding more noisy depth frames to each RGB-D image in the training set has a nearly identical impact on depth-completion training as gathering more ground-truth data. We leverage these findings to produce additional synthetic noisy depth images for each RGB-D image in our baseline training sets using our noise-generating CNN. Through use of our augmentation method, it is possible to achieve greater than 50% error reduction on supervised depth-completion training, even for small datasets.
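The "naive noise generation" baseline the abstract contrasts its learned model against, random dropout plus Gaussian noise on a depth map, can be sketched in a few lines. Parameter values here are illustrative, not the thesis's settings.

```python
# Sketch of the naive synthetic depth damage the abstract mentions as a
# baseline: random dropout (zeroed pixels, mimicking missing depth returns)
# plus additive Gaussian noise. dropout_p and sigma are illustrative values.
import numpy as np

def naive_depth_noise(depth, dropout_p=0.1, sigma=0.01, rng=None):
    """Return a synthetically damaged copy of a depth map (meters)."""
    rng = rng or np.random.default_rng(0)
    noisy = depth + rng.normal(0.0, sigma, depth.shape)  # sensor jitter
    mask = rng.random(depth.shape) < dropout_p           # missing returns
    noisy[mask] = 0.0                                    # holes, as in real sensors
    return noisy

depth = np.full((64, 64), 1.5)   # toy scene: flat wall 1.5 m away
noisy = naive_depth_noise(depth)
```

As the abstract notes, such spatially uncorrelated damage generalizes poorly to real sensors, whose dropout and jitter are structured (e.g. around depth edges), which is the gap the proposed noise-generating CNN targets.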

ANTERIOR SEGMENT DYSGENESIS AND GLAUCOMATOUS FEATURES OBSERVED FOLLOWING CONDITIONAL DELETION OF AP-2β IN THE NEURAL CREST CELL POPULATION / AP-2β IN THE DEVELOPMENT OF THE ANTERIOR SEGMENT OF THE EYE

Martino, Vanessa 20 November 2015 (has links)
Glaucoma is a heterogeneous group of diseases that is currently considered the leading cause of irreversible blindness worldwide. Of the identified risk factors, elevated intraocular pressure remains the only modifiable one that can be targeted clinically. Ocular hypertension is often a result of dysregulated aqueous humour fluid dynamics in the anterior eye segment. Aqueous humour drainage is regulated by structures located in the anterior chamber of the eye, and in some circumstances dysregulation occurs due to developmental abnormalities of these structures. The malformation of structures in the anterior segment is thought to be due to a defect in the differentiation and/or migration of the periocular mesenchyme during development. Unique to vertebrates, the neural crest cell (NCC) population contributes to the periocular mesenchyme and is instrumental to the proper development of structures in the anterior segment. For many years, our laboratory has examined the role of the Activating Protein-2 (AP-2) transcription factors, which are expressed in the neural crest and vital during the development of the eye. The purpose of this research project is to investigate the role of AP-2β in the NCC population during the development of the anterior segment of the eye. Conditional deletion of AP-2β expression in the NCC population demonstrated that mutants have dysgenesis of structures in the anterior segment, including defects of the corneal endothelium, corneal stroma, and ciliary body, and a closed iridocorneal angle. Loss of retinal ganglion cells and their axons was also observed, likely due to the disruption of aqueous outflow, suggesting the development of glaucoma. The data generated from this research project will be critical in elucidating the role of AP-2β in the genetic cascade dictating the development of the anterior eye segment, in addition to providing scientific research with a novel model of glaucomatous optic neuropathy.
/ Thesis / Master of Science (MSc)

Crash Risk Analysis of Coordinated Signalized Intersections

Qiming Guo (17582769) 08 December 2023 (has links)
The emergence of time-dependent data provides researchers with unparalleled opportunities to investigate safety performance on roadway infrastructure at a disaggregated level. A disaggregated crash risk analysis uses both time-dependent data (e.g., hourly traffic, speed, weather conditions, and signal controls) and fixed data (e.g., geometry) to estimate hourly crash probability. Despite abundant research on crash risk analysis, coordinated signalized intersections continue to require further investigation, owing both to the complexity of the safety problem and to the relatively small number of past studies that investigated their risk factors. This dissertation aimed to develop robust crash risk prediction models to better understand the risk factors of coordinated signalized intersections and to identify practical safety countermeasures. Crashes were first categorized into three types (same-direction, opposite-direction, and right-angle) within several crash-generating scenarios. The data were organized into hourly observations covering the following factors: road geometric features, traffic movement volumes, speeds, weather precipitation and temperature, and signal control settings. Assembling hourly observations for modeling crash risk required synchronizing and linking data sources organized at different time resolutions. Three non-crash sampling strategies were applied to three statistical models (Conditional Logit, Firth Logit, and Mixed Logit) and two machine learning models (Random Forest and Penalized Support Vector Machine). Important risk factors were identified, such as the presence of light rain, traffic volume, speed variability, and the downstream vehicle arrival pattern. The Firth Logit model was selected for implementation in signal coordination practice, as it proved most robust in out-of-sample prediction performance while retaining important risk factors. Implementation examples of the recommended crash risk model, building daily risk profiles and estimating the safety benefits of improved coordination plans, demonstrated the model's practicality and usefulness for improving safety at coordinated signals by practicing engineers.
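The abstract mentions non-crash sampling strategies without naming them. One common approach in this literature, shown purely as an assumed illustration (the field names, matching rule, and data are hypothetical, not the dissertation's), is matched case-control sampling: each crash hour is paired with non-crash hours at the same site and hour-of-day.

```python
# Hedged sketch of ONE possible non-crash sampling strategy (matched
# case-control); the dissertation's actual strategies are not specified in
# the abstract. Field names and the matching rule are hypothetical.
import random

def sample_controls(crash_hours, all_hours, m=4, seed=7):
    """For each crash observation, draw m matched non-crash observations."""
    rng = random.Random(seed)
    crash_keys = {(h["site"], h["timestamp"]) for h in crash_hours}
    pairs = []
    for case in crash_hours:
        pool = [
            h for h in all_hours
            if h["site"] == case["site"]
            and h["timestamp"] % 24 == case["timestamp"] % 24  # same hour-of-day
            and (h["site"], h["timestamp"]) not in crash_keys  # non-crash only
        ]
        pairs.append((case, rng.sample(pool, min(m, len(pool)))))
    return pairs

# Toy data: timestamps are integer hours since some origin.
crashes = [{"site": "A", "timestamp": 17}]
hours = [{"site": "A", "timestamp": 17 + 24 * d} for d in range(10)]
pairs = sample_controls(crashes, hours, m=4)
```

Matching on site and hour-of-day controls for fixed geometry and recurring demand patterns, so the fitted logit contrasts only the time-varying risk factors (rain, speed variability, arrivals) between crash and non-crash hours.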

Essays in Energy and Environmental Economics

Yassin, Kareman 28 November 2023 (has links)
This dissertation employs applied microeconomics techniques with a specific emphasis on behavioral dynamics in energy and environmental economics. Chapter one investigates the impact of outdoor temperature on productivity in the service sector, using data from the India Human Development Survey. Our findings suggest a precisely estimated zero effect on interview duration, ruling out significant productivity impacts. Chapter two employs a conditional demand analysis on a Canadian electricity consumer data set, highlighting the effectiveness of local heat pumps and thermostat setbacks for electricity savings. Results also reveal a stronger decline in electricity consumption for newer homes. In Chapter three, I study the causal spatial peer effects of Canada's largest home energy efficiency retrofit program on energy consumption. My results show that close neighbors of energy-efficiency-retrofitted homes experience a significant reduction in monthly natural gas and electricity consumption. Moreover, visible retrofits, such as windows and doors, have a significantly larger impact on peer energy savings than less visible retrofits.

Genetic Regulation of Cytokine Response in Patients with Acute Community-Acquired Pneumonia

Kühnapfel, Andreas, Horn, Katrin, Klotz, Ulrike, Kiehntopf, Michael, Rosolowski, Maciej, Loeffler, Markus, Ahnert, Peter, Suttorp, Norbert, Witzenrath, Martin, Scholz, Markus 02 June 2023 (has links)
Background: Community-acquired pneumonia (CAP) is an acute disease condition with a high risk of rapid deterioration. We analysed the influence of genetics on cytokine regulation to obtain a better understanding of patients' heterogeneity. Methods: For up to N = 389 genotyped participants of the PROGRESS study of hospitalised CAP patients, we performed a genome-wide association study of ten cytokines: IL-1β, IL-6, IL-8, IL-10, IL-12, MCP-1 (MCAF), MIP-1α (CCL3), VEGF, VCAM-1, and ICAM-1. Consecutive secondary analyses were performed to identify independent hits and corresponding causal variants. Results: 102 SNPs from 14 loci showed genome-wide significant associations with five of the cytokines. The most interesting associations were found at 6p21.1 for VEGF (p = 1.58 × 10^-20), at 17q21.32 (p = 1.51 × 10^-9) and at 10p12.1 (p = 2.76 × 10^-9) for IL-1β, at 10p13 for MIP-1α (CCL3) (p = 2.28 × 10^-9), and at 9q34.12 for IL-10 (p = 4.52 × 10^-8). Functionally plausible genes could be assigned to the majority of loci, including genes involved in cytokine secretion, granulocyte function, and ciliary kinetics. Conclusion: This is the first context-specific genetic association study of blood cytokine concentrations in CAP patients, revealing numerous biologically plausible candidate genes. Two of the loci were also associated with atherosclerosis, with probable common or consecutive pathomechanisms.
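"Genome-wide significant" in the abstract refers to the conventional GWAS threshold p < 5 × 10^-8, a Bonferroni-style correction for roughly a million independent common-variant tests. The lead p-values quoted above can be screened against it directly:

```python
# Screen the lead associations quoted in the abstract against the
# conventional genome-wide significance threshold p < 5e-8.
GENOME_WIDE = 5e-8

reported = {
    "6p21.1/VEGF":    1.58e-20,
    "17q21.32/IL-1b": 1.51e-9,
    "10p12.1/IL-1b":  2.76e-9,
    "10p13/MIP-1a":   2.28e-9,
    "9q34.12/IL-10":  4.52e-8,
}
significant = [locus for locus, p in reported.items() if p < GENOME_WIDE]
# All five pass; the IL-10 locus only narrowly (4.52e-8 vs. 5e-8).
```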

Investigation of Information-Theoretic Bounds on Generalization Error

Qorbani, Reza, Pettersson, Kevin January 2022 (has links)
Generalization error describes how well a supervised machine learning algorithm predicts the labels of input data that it has not been trained on. This project explores two methods for bounding generalization error, f-CMI and ISMI, both of which explicitly use mutual information. Our experiments are based on those in the papers in which the methods were proposed; they implement and validate the accuracy of the mathematically derived bounds. Each methodology also calculates mutual information differently. The ISMI bound experiment used a multivariate normal distribution dataset, whereas a dataset consisting of cats and dogs was used for the f-CMI experiment. Our results show that both methods are capable of bounding the generalization error of a binary classification algorithm and provide bounds that closely follow the true generalization error. The results of the experiments agree with those of the original authors, indicating that the proposed methods also work for similar applications with different datasets. / Bachelor's thesis project in electrical engineering 2022, KTH, Stockholm
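Both f-CMI and ISMI bound the generalization gap in terms of mutual information between (functions of) the training data and the learned hypothesis. As a minimal illustration of the quantity itself, not of either paper's estimator, here is the plug-in mutual information of two discrete variables from joint sample counts:

```python
# Plug-in estimate of mutual information I(X;Y) in bits for discrete
# variables, from observed (x, y) samples. Illustrative only; the f-CMI and
# ISMI papers use different, more sophisticated estimators.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, via empirical joint and marginal frequencies."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Perfectly dependent binary variables carry 1 bit; independent ones 0 bits.
dependent = [(0, 0), (1, 1)] * 50
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
```

Intuitively, the less information the learned model retains about its particular training sample (small mutual information), the smaller the gap between training and test error the bounds allow.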

DCC-GARCH Estimation / Utvärdering av DCC-GARCH

Nordström, Christofer January 2021 (has links)
When modelling more than one asset, it is desirable to apply multivariate modeling to capture the co-movements of the underlying assets. GARCH models have proven successful at volatility forecasting, so it is natural to extend from a univariate GARCH model to a multivariate one when examining portfolio volatility. This study aims to evaluate a specific multivariate GARCH model, the DCC-GARCH model, which was developed by Engle and Sheppard in 2001. In this paper, different DCC-GARCH models have been implemented, assuming innovations under both the multivariate Gaussian and the multivariate Student's t distribution. These distributions are compared by a set of tests as well as Value-at-Risk backtesting.
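The Value-at-Risk backtesting mentioned above typically includes an unconditional coverage test. A standard choice (not necessarily the exact test used in the thesis) is Kupiec's proportion-of-failures likelihood-ratio test on the number of VaR breaches:

```python
# Kupiec's proportion-of-failures (POF) test, a standard VaR backtest; shown
# as an assumed illustration of "Value-at-Risk backtesting", not necessarily
# the thesis's exact procedure.
from math import log

def kupiec_lr(n, x, p):
    """LR statistic for x VaR breaches in n days at coverage level p.
    Asymptotically chi-squared with 1 degree of freedom under correct coverage."""
    if x == 0:
        return -2 * n * log(1 - p)
    pi = x / n  # observed breach frequency
    return -2 * (
        (n - x) * log(1 - p) + x * log(p)
        - (n - x) * log(1 - pi) - x * log(pi)
    )

# 250 trading days at 1% VaR: about 2.5 breaches expected. 3 breaches is
# unremarkable; 10 breaches exceeds the 5% chi-squared critical value (3.84).
ok = kupiec_lr(250, 3, 0.01)
bad = kupiec_lr(250, 10, 0.01)
```

Running such a backtest separately on VaR forecasts from Gaussian- and Student's t-innovation DCC-GARCH models gives a direct coverage-based comparison of the two distributional assumptions.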
