1. Atmospheric Mercury Deposition In An Urban Environment. Fulkerson, Mark. 01 January 2006.
Atmospheric mercury deposition, known to be a major source of mercury to aquatic and terrestrial environments, was studied at an urban site in Orlando, FL. Precipitation sampling was conducted from September 2003 to May 2006 at a Mercury Deposition Network site located on the University of Central Florida campus. Weekly rainfall and mercury wet deposition data were gathered from this site and provided the core dataset for this study. Historical mercury wet deposition data from several sites in Florida were used to develop a regression model to predict mercury deposition at any location in Florida. Stormwater runoff from a 2-acre impervious surface at the study area was monitored during the spring and summer of 2005, and runoff water quality was analyzed to characterize mercury dry deposition. Atmospheric monitoring was also conducted during this period to study the influence of atmospheric constituents on wet and dry deposition patterns. Spatial and seasonal trends for the entire state suggest 80% of Florida's rainfall and mercury deposition occur during the wet season. A strong linear correlation was established between rainfall depth and mercury deposition (R² = 0.8). Prediction equations for the entire state, for both wet and dry seasons, were strongly correlated with measured data. Two different methods of quantifying dry deposition gave similar results at this site during the study period. Runoff monitored at this site contained significant levels of mercury, primarily in particulate form (58%). The vast majority of particulate mercury was flushed from the surface during storm events, while significant dissolved fractions remained. Runoff mercury concentrations were consistently higher than rainfall mercury concentrations, suggesting dry deposition accounted for 22% of total mercury in runoff. Atmospheric monitoring at this location showed gaseous elemental mercury was the dominant form (99.5%), followed by reactive gaseous mercury (0.3%) and particulate mercury (0.2%). Comparison of the contributions of wet and dry deposition suggested 80% of total mercury deposition during this study was wet deposited, while dry deposition accounted for the remaining 20%. Statistical correlations revealed rainfall scavenging of reactive gaseous mercury was the main factor controlling dry deposition.
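The regression described above reduces to a simple linear fit of weekly mercury wet deposition against rainfall depth, plus an arithmetic comparison of runoff and rainfall concentrations for the dry-deposition share. The sketch below shows both calculations; all numbers are made-up placeholders standing in for the measured data, not values from the study.

```python
import numpy as np

# Minimal sketch (hypothetical data): relate weekly rainfall depth to mercury wet
# deposition with an ordinary least-squares fit, then estimate the dry-deposition
# share from runoff vs. rainfall concentrations. All numbers are placeholders.
rng = np.random.default_rng(0)
rainfall_mm = rng.uniform(0.0, 120.0, size=52)                    # weekly rainfall depth (mm)
hg_wet_ng_m2 = 14.0 * rainfall_mm + rng.normal(0.0, 250.0, 52)    # weekly Hg wet deposition (ng/m^2)

# Linear prediction equation: Hg_deposition ≈ slope * rainfall + intercept
slope, intercept = np.polyfit(rainfall_mm, hg_wet_ng_m2, deg=1)

# Coefficient of determination, analogous to the R² = 0.8 reported for the Florida sites
pred = slope * rainfall_mm + intercept
r2 = 1.0 - np.sum((hg_wet_ng_m2 - pred) ** 2) / np.sum((hg_wet_ng_m2 - hg_wet_ng_m2.mean()) ** 2)
print(f"slope = {slope:.1f} ng/m^2 per mm, intercept = {intercept:.1f}, R^2 = {r2:.2f}")

# Dry-deposition fraction implied by runoff mercury exceeding rainfall mercury
# (illustrative concentrations in ng/L; the study reports roughly 22%)
hg_runoff_conc, hg_rain_conc = 18.0, 14.0
dry_fraction = (hg_runoff_conc - hg_rain_conc) / hg_runoff_conc
print(f"implied dry-deposition fraction: {dry_fraction:.0%}")
```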
2. Value at Risk Estimation with Neural Networks: A Recurrent Mixture Density Approach. Karlsson Lille, William; Saphir, Daniel. January 2021.
In response to financial crises and opaque practices, governmental entities and financial regulatory bodies have enacted legislation and issued directives meant to protect investors and increase transparency. Such regulations often impose strict liquidity requirements and demand robust estimation of the risk borne by a financial firm at any given time. Value at Risk (VaR) measures how much an investment stands to lose with a certain probability over a specified period of time and is ubiquitous in its use by institutional investors and banks alike. In practice, VaR estimates are often computed from simulations of historical data or from parameterized distributions. Inspired by the recent success of Arimond et al. (2020) in using a neural network for VaR estimation, we apply a combination of recurrent neural networks and a mixture density output layer to generate mixture density distributions of future portfolio returns, from which VaR estimates are made. As in Arimond et al., we suppose the existence of two regimes, stylized as bull and bear markets, and employ Monte Carlo simulation to generate predictions of future returns. Rather than use a swappable architecture for the parameters of the mixture density distribution, we let all parameters be generated endogenously by the neural network. The model is then validated through Christoffersen tests and by comparison with the benchmark VaR estimation models, i.e., the mean-variance approach and historical simulation. We conclude that recurrent mixture density networks, used as is, show limited promise for producing effective VaR estimates, because the model consistently overestimates the true portfolio loss. For practical use, however, encouraging results were achieved when the predictions were manually shifted by the average overestimation observed on the validation set. Several theories are presented as to why the overestimation occurs, but no definitive conclusion could be drawn. Because neural networks are black-box models, their use for meeting regulatory requirements is deemed questionable, as is the assumption that financial data carries an inherent pattern that can be accurately approximated. Still, the VaR estimates produced by the neural network are significantly more reactive than those of the benchmark models, motivating continued experimentation with machine learning methods for risk management. Future research is encouraged to identify the source of the overestimation and to explore other machine learning techniques to attain more accurate VaR predictions.
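To make the modelling pipeline concrete, here is a minimal sketch of a recurrent mixture density network in PyTorch: an LSTM encodes a window of past returns, a linear head emits the parameters of a Gaussian mixture over the next-period return, and VaR is read off as a quantile of Monte Carlo samples drawn from that mixture. The class name, layer sizes, two-component setup, and dummy data are illustrative assumptions, not the thesis's exact architecture or training procedure.

```python
import torch
import torch.nn as nn

class RecurrentMixtureDensityNet(nn.Module):
    """Sketch: an LSTM encodes a window of past portfolio returns and a linear head
    outputs the parameters of a Gaussian mixture over the next-period return.
    Two components loosely stand in for the bull/bear regimes; all sizes are
    illustrative, not the thesis configuration."""
    def __init__(self, n_features=1, hidden=32, n_components=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3 * n_components)   # mixture logits, means, log-stds
        self.k = n_components

    def forward(self, x):                                  # x: (batch, seq_len, n_features)
        _, (h, _) = self.lstm(x)
        logits, mu, log_sigma = self.head(h[-1]).split(self.k, dim=-1)
        return logits, mu, log_sigma.exp()

def var_from_mixture(logits, mu, sigma, alpha=0.01, n_samples=100_000):
    """Monte Carlo VaR: sample next-period returns from the predicted mixture and
    report the negated alpha-quantile of the simulated distribution as a loss."""
    gmm = torch.distributions.MixtureSameFamily(
        torch.distributions.Categorical(logits=logits),
        torch.distributions.Normal(mu, sigma),
    )
    returns = gmm.sample((n_samples,))                     # (n_samples, batch)
    return -torch.quantile(returns, alpha, dim=0)

# Usage on a dummy batch of 60-day return windows (random placeholder data)
model = RecurrentMixtureDensityNet()
x = 0.01 * torch.randn(8, 60, 1)
logits, mu, sigma = model(x)
print(var_from_mixture(logits, mu, sigma, alpha=0.01))
```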
3. Probabilistic Regression using Conditional Generative Adversarial Networks. Oskarsson, Joel. January 2020.
Regression is a central problem in statistics and machine learning, with applications throughout science and technology. In probabilistic regression, the relationship between a set of features and a real-valued target variable is modelled as a conditional probability distribution. In some cases this distribution is very complex and not properly captured by simple approximations, such as assuming a normal distribution. This thesis investigates how conditional Generative Adversarial Networks (GANs) can be used to capture more complex conditional distributions. GANs have seen great success in generating complex high-dimensional data, but less work has been done on their use for regression problems. This thesis presents experiments to better understand how conditional GANs can be used in probabilistic regression. Different versions of GANs are extended to the conditional case and evaluated on synthetic and real datasets. It is shown that conditional GANs can learn to estimate a wide range of different distributions and are competitive with existing probabilistic regression models.
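As an illustration of the approach, the sketch below conditions both the generator and the discriminator on the features x, so that repeatedly sampling the trained generator at a fixed x approximates the conditional distribution p(y | x). The network sizes, toy heteroscedastic data, and training loop are illustrative assumptions and do not reproduce the thesis's models or experiments.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps (features x, noise z) to a sample of the target y."""
    def __init__(self, x_dim=1, z_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1))

class Discriminator(nn.Module):
    """Scores pairs (x, y): real data pair vs. generated pair (raw logit)."""
    def __init__(self, x_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def train_step(G, D, opt_g, opt_d, x, y, z_dim=8):
    bce = nn.BCEWithLogitsLoss()
    z = torch.randn(x.size(0), z_dim)
    # Discriminator: separate real pairs (x, y) from generated pairs (x, G(x, z))
    opt_d.zero_grad()
    d_loss = bce(D(x, y), torch.ones_like(y)) + bce(D(x, G(x, z).detach()), torch.zeros_like(y))
    d_loss.backward(); opt_d.step()
    # Generator: try to make generated pairs look real to the discriminator
    opt_g.zero_grad()
    g_loss = bce(D(x, G(x, z)), torch.ones_like(y))
    g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy heteroscedastic data: y = sin(x) + noise whose spread grows with |x|
x = torch.rand(512, 1) * 6 - 3
y = torch.sin(x) + 0.3 * x.abs() * torch.randn_like(x)
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
for step in range(200):
    train_step(G, D, opt_g, opt_d, x, y)

# Approximate p(y | x = 1.0) by sampling the generator many times at the same x
x_query = torch.full((1000, 1), 1.0)
samples = G(x_query, torch.randn(1000, 8)).detach()
print(samples.mean().item(), samples.std().item())
```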