About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Automated construction of generalized additive neural networks for predictive data mining / Jan Valentine du Toit

Du Toit, Jan Valentine January 2006 (has links)
In this thesis Generalized Additive Neural Networks (GANNs) are studied in the context of predictive Data Mining. A GANN is a novel neural network implementation of a Generalized Additive Model. Originally, GANNs were constructed interactively by considering partial residual plots. This methodology involves subjective human judgment, is time consuming, and can lead to suboptimal models. The newly developed automated construction algorithm overcomes these difficulties by performing model selection based on an objective model selection criterion. Partial residual plots are only utilized after the best model is found, to gain insight into the relationships between inputs and the target. Models are organized in a search tree with a greedy search procedure that identifies good models in a relatively short time. The automated construction algorithm, implemented in the powerful SAS® language, is nontrivial, effective, and comparable to other model selection methodologies found in the literature. This implementation, called AutoGANN, has a simple, intuitive, and user-friendly interface. The AutoGANN system is further extended with an approximation to Bayesian Model Averaging. This technique accounts for uncertainty about the variables that must be included in the model and about the model structure. Model averaging utilizes in-sample model selection criteria and creates a combined model with better predictive ability than any single model. In the field of Credit Scoring, the standard theory of scorecard building is not tampered with, but a pre-processing step is introduced to arrive at a more accurate scorecard that discriminates better between good and bad applicants. The pre-processing step exploits GANN models to achieve significant reductions in marginal and cumulative bad rates. The time it takes to develop a scorecard may be reduced by utilizing the automated construction algorithm. / Thesis (Ph.D. (Computer Science))--North-West University, Potchefstroom Campus, 2006.
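The greedy search over a model tree described above can be illustrated with a minimal sketch. This is not the SAS AutoGANN implementation: the univariate subnetworks of a real GANN are replaced here by plain least-squares terms, and a BIC-style score stands in for the objective model selection criterion, all assumptions made purely for illustration.

```python
import numpy as np

# Candidate "models" are subsets of inputs, scored by an objective
# criterion; the search grows the model greedily one input at a time.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

def bic(cols):
    """BIC-like score of an OLS fit on the chosen inputs plus an intercept."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return n * np.log(resid @ resid / n) + np.log(n) * A.shape[1]

def greedy_select(p):
    """At each step add the input that improves the criterion most, if any."""
    chosen, best = [], bic([])
    while True:
        scored = [(bic(chosen + [j]), j) for j in range(p) if j not in chosen]
        if not scored or min(scored)[0] >= best:
            return sorted(chosen)
        best, j = min(scored)
        chosen.append(j)

print(greedy_select(4))  # the two informative inputs should be recovered
```

The same skeleton applies whatever the per-input model is; only the fitting routine inside the criterion changes.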
62

Essays on Bayesian and classical econometrics with small samples

Jarocinski, Marek 15 June 2006 (has links)
This thesis deals with the problems of econometric estimation with small samples, in the contexts of monetary VARs and growth empirics. First, it shows how to improve structural VAR analysis on short datasets. The first chapter adapts the exchangeable prior specification to the VAR context and obtains new findings about monetary transmission in the new member states of the European Union. The second chapter proposes a prior on the initial growth rates of the modeled variables, which tackles the classical small-sample bias in time series and reconciles the Bayesian and classical points of view on time series estimation. The third chapter studies the effect of measurement error in income data on growth empirics, and shows that econometric procedures which are robust to model uncertainty are very sensitive to measurement error of plausible size and properties.
63

Statistical Modeling for Credit Ratings

Vana, Laura 01 August 2018 (has links) (PDF)
This thesis deals with the development, implementation, and application of statistical modeling techniques that can be employed in the analysis of credit ratings. Credit ratings are one of the most widely used measures of credit risk and are relevant for a wide array of financial market participants, from investors, as part of their investment decision process, to regulators and legislators as a means of measuring and limiting risk. The majority of credit ratings are produced by the "Big Three" credit rating agencies: Standard & Poor's, Moody's, and Fitch. Especially in light of the 2007-2009 financial crisis, these rating agencies have been strongly criticized for failing to assess risk accurately and for the lack of transparency in their rating methodology. However, they continue to play a powerful role as financial market participants and have a huge impact on the cost of funding. These points of criticism call for the development of modeling techniques that can 1) facilitate an understanding of the factors that drive the rating agencies' evaluations, and 2) generate insights into the rating patterns that these agencies exhibit. This dissertation consists of three research articles. The first focuses on variable selection and the assessment of variable importance in accounting-based models of credit risk. The credit risk measure employed in the study is derived from credit ratings assigned by the rating agencies Standard & Poor's and Moody's. To deal with the lack of theoretical foundation specific to this type of model, state-of-the-art statistical methods are employed. Different models are compared based on a predictive criterion, and model uncertainty is accounted for in a Bayesian setting. Parsimonious models are identified after applying the proposed techniques. The second paper proposes the class of multivariate ordinal regression models for the modeling of credit ratings.
The model class is motivated by the fact that correlated ordinal data arise naturally in the context of credit ratings. From a methodological point of view, we extend existing model specifications in several directions by allowing, among others, for a flexible covariate-dependent correlation structure between the continuous variables underlying the ordinal credit ratings. Estimation of the proposed models is performed using composite likelihood methods. Insights into the heterogeneity among the "Big Three" are gained when applying this model class to the multiple credit ratings dataset. A comprehensive simulation study on the performance of the estimators is provided. The third research paper deals with the implementation and application of the model class introduced in the second article. To make the class of multivariate ordinal regression models more accessible, the R package mvord and the complementary paper included in this dissertation have been developed. The mvord package is available on the Comprehensive R Archive Network (CRAN) for free download and enhances the available ready-to-use statistical software for the analysis of correlated ordinal data. In creating the package, a strong emphasis has been put on a user-friendly and flexible design, which allows end users to easily estimate sophisticated models from the implemented model class. The package appeals to practitioners and researchers who deal with correlated ordinal data in various areas of application, ranging from credit risk to medicine and psychology.
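The structure motivating the model class, correlated ordinal ratings arising from thresholded correlated latent variables, can be sketched in a few lines. The correlation value and cut-points below are illustrative assumptions, and the snippet does not use mvord's actual interface (the package is in R; Python is used here only to show the data-generating idea).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Latent "creditworthiness" scores for two raters, correlated across raters.
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
latent = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n)

# Illustrative cut-points mapping the latent scale to 4 ordinal classes 0..3.
thresholds = [-1.0, 0.0, 1.0]
ratings = np.digitize(latent, thresholds)   # shape (n, 2)

# Strongly correlated latents make the two raters agree far more often
# than independent raters with the same marginal class frequencies would.
agree = float(np.mean(ratings[:, 0] == ratings[:, 1]))
print(f"rater agreement: {agree:.2f}")
```

Fitting goes in the opposite direction: from observed ratings back to thresholds, regression effects, and the correlation structure, which is what the composite likelihood machinery handles.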
64

Três ensaios sobre política monetária e crédito

Barbi, Fernando Carvalhaes 08 April 2014 (has links)
The first essay, 'Determinants of Credit Expansion in Brazil', analyzes the determinants of credit using an extensive bank-level panel dataset. The Brazilian economy experienced a major boost in leverage in the first decade of the 2000s as a result of a set of factors, ranging from macroeconomic stability and the abundant liquidity in international financial markets before 2008 to a set of deliberate decisions taken by President Lula to expand credit, boost consumption, and gain political support from the lower social strata.
Our main findings are that credit expansion relied on the reduction of the monetary policy rate, that international financial markets are an important source of funds, and that payroll-guaranteed credit and investment-grade status positively affected credit supply. We were not able to confirm the importance of financial inclusion efforts. The importance of financial-sector soundness indicators for credit conditions should not be underestimated. These results raise questions about the sustainability of this expansion process and about financial stability in the future. The second essay, 'Public Credit, Monetary Policy and Financial Stability', discusses the role of public credit. The supply of public credit in Brazil successfully served to relaunch the economy after the Lehman Brothers collapse. It was later transformed into a driver of economic growth as well as a regulatory device to force private banks to reduce interest rates. We argue that the use of public funds to finance economic growth has three important drawbacks: it generates inflation, induces higher loan rates, and may induce financial instability. An additional effect is that it hinders the development of market-based credit solutions. This study contributes to the understanding of the costs and benefits of credit as a fiscal policy tool. The third essay, 'Bayesian Forecasting of Interest Rates: Do Priors Matter?', discusses the choice of priors when forecasting short-term interest rates. Central banks that commit to an inflation-targeting monetary regime are bound to respond to spikes in inflation expectations and to a widening output gap in a clear and transparent way by abiding by a Taylor rule. There are various reports of central banks being more responsive to inflationary than to deflationary shocks, making the monetary policy response non-linear. Moreover, there is no guarantee that the coefficients remain stable over time.
Central banks may switch to a dual-target regime that considers both inflation deviations and the output gap. The estimation of a Taylor rule may therefore have to consider a non-linear model with time-varying parameters. This paper uses Bayesian forecasting methods to predict short-term interest rates. We take two different approaches: from a theoretical perspective, we focus on an augmented version of the Taylor rule that includes the real exchange rate, the credit-to-GDP ratio, and the net public debt-to-GDP ratio. We also take an 'atheoretic' approach based on the expectations theory of the term structure to model short-term interest rates. The selection of priors is particularly relevant for predictive accuracy; ideally, however, forecasting models should require as little a priori expert insight as possible. We present recent developments in prior selection; in particular, we propose the use of hierarchical hyper-g priors for better forecasting in a framework that can easily be extended to other key macroeconomic indicators.
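The Taylor rule underlying the third essay is simple enough to state in a few lines. The coefficients below are the classic illustrative values (1.5 on the inflation gap, 0.5 on the output gap), not estimates from the essay, and the augmented terms (exchange rate, credit and debt ratios) are omitted.

```python
def taylor_rate(inflation, target, output_gap, neutral_real=2.0,
                a_pi=1.5, a_y=0.5):
    """Textbook Taylor rule: nominal policy rate in percent.

    i = r* + pi + a_pi * (pi - pi*) + a_y * output_gap

    Coefficients are the classic illustrative values, not estimates.
    """
    return neutral_real + inflation + a_pi * (inflation - target) + a_y * output_gap

# Inflation 2 pp above a 4.5% target with a closed output gap calls for a
# rate 3 pp above the neutral nominal rate (r* + pi = 8.5 here).
print(taylor_rate(inflation=6.5, target=4.5, output_gap=0.0))  # 11.5
```

Time-varying parameters or asymmetric responses, as discussed above, amount to letting `a_pi` and `a_y` change over time or depend on the sign of the gap.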
65

Vliv zaměstnání studenta na akademické výsledky: meta-analýza / The Impact of Student Employment on Educational Outcomes: A Meta-Analysis

Kroupová, Kateřina January 2021 (has links)
Despite the extensive body of empirical research, the discussion on whether student employment impedes or improves educational outcomes has not been resolved. Using meta-analytic methods, we conduct a quantitative review of 861 effect estimates collected from 69 studies describing the relationship between student work experience and academic performance. After outlining the theoretical mechanisms and methodological challenges of estimating the effect, we test whether publication bias permeates the literature concerning educational implications of student employment. We find that researchers report negative estimates more often than they should. However, this negative publication bias is not present in a subset of studies controlling for the endogeneity of student decision to take up employment. Furthermore, after correcting for the negative publication bias, we find that the student employment-education relationship is close to zero. Additionally, we examine heterogeneity of the estimates using Bayesian Model Averaging. Our analysis suggests that employment intensity and controlling for student permanent characteristics are the most important factors in explaining the heterogeneity. In particular, working long hours results in systematically more negative effect estimates than not working at...
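One standard device for the kind of publication-bias test described above is the funnel-asymmetry / precision-effect test (FAT-PET): regress reported estimates on their standard errors; a non-zero slope signals selective reporting, and the intercept estimates the corrected effect. The abstract does not spell out the thesis's exact specification, so the following self-contained simulation is only a sketch of the general technique.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 2000
se = rng.uniform(0.05, 0.5, size=m)   # study precisions vary
est = rng.normal(0.0, se)             # true underlying effect is zero

# Simulated selective reporting: keep every negative estimate but only half
# of the positive ones (an assumption mirroring a negative publication bias).
keep = (est < 0) | (rng.random(m) < 0.5)
est, se = est[keep], se[keep]

# FAT-PET regression: est = b0 + b1 * se.  A non-zero b1 indicates funnel
# asymmetry; b0 is the publication-bias-corrected effect.
A = np.column_stack([np.ones(se.size), se])
b0, b1 = np.linalg.lstsq(A, est, rcond=None)[0]
print(b1 < 0)  # selection for negative results drags the slope below zero
```

Because the simulated true effect is zero, the corrected intercept lands near zero even though the raw mean of the reported estimates is clearly negative, which is exactly the pattern the thesis reports.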
66

Probabilistic Ensemble-based Streamflow Forecasting Framework

Darbandsari, Pedram January 2021 (has links)
Streamflow forecasting is a fundamental component of various water resources management systems, ranging from flood control and mitigation to long-term planning of irrigation and hydropower systems. In the context of floods, a probabilistic forecasting system is required for proper and effective decision-making. Therefore, the primary goal of this research is the development of an advanced ensemble-based streamflow forecasting framework to better quantify the predictive uncertainty and generate enhanced probabilistic forecasts. This research started by comprehensively evaluating the performances of various lumped conceptual models in data-poor watersheds and comparing various Bayesian Model Averaging (BMA) modifications for probabilistic streamflow simulation. Then, using the concept of BMA, two novel probabilistic post-processing approaches were developed to enhance streamflow forecasting performance. The combination of entropy theory and the BMA method leads to an entropy-based Bayesian Model Averaging (En-BMA) approach for enhanced probabilistic streamflow and precipitation forecasting. Also, the integration of the Hydrologic Uncertainty Processor (HUP) and the BMA methods is proposed for probabilistic post-processing of multi-model streamflow forecasts. Results indicated that the MACHBV and GR4J models are highly competent in simulating hydrological processes within data-scarce watersheds; however, the presence of lower-skill hydrologic models is still beneficial for ensemble-based streamflow forecasting. The comprehensive verification of the BMA approach in terms of streamflow predictions has identified the merits of implementing some of the previously recommended modifications and showed the importance of possessing a mutually exclusive and collectively exhaustive ensemble. By targeting the remaining limitations of the BMA approach, the proposed En-BMA method can improve probabilistic streamflow forecasting, especially under high-flow conditions.
Also, the proposed HUP-BMA approach takes advantage of both the HUP and BMA methods to better quantify the hydrologic uncertainty. Moreover, the applicability of the modified En-BMA as a more robust post-processing approach for precipitation forecasting, compared to BMA, has been demonstrated. / Thesis / Doctor of Philosophy (PhD) / Possessing a reliable streamflow forecasting framework is of special importance in various fields of operational water resources management, non-structural flood mitigation in particular. Accurate and reliable streamflow forecasts lead to the best possible in-advance flood control decisions, which can significantly reduce the consequent loss of lives and property. The main objective of this research is to develop an enhanced ensemble-based probabilistic streamflow forecasting approach through proper quantification of predictive uncertainty using an ensemble of streamflow forecasts. The key contributions are: (1) implementing multiple diverse forecasts with full coverage of future possibilities in the Bayesian ensemble-based forecasting method to produce more accurate and reliable forecasts; and (2) developing an ensemble-based Bayesian post-processing approach to enhance hydrologic uncertainty quantification by taking advantage of multiple forecasts and the initial flow observation.
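The core of BMA post-processing, a predictive distribution formed as a weighted mixture of member forecast densities, reduces to a few lines. The weights and member forecasts below are made-up numbers for illustration; in practice the weights and spreads are estimated (typically by EM) from past forecast-observation pairs.

```python
import numpy as np

# BMA predictive density: p(y | ensemble) = sum_k w_k * N(y | f_k, s_k^2).
weights = np.array([0.5, 0.3, 0.2])      # posterior model probabilities
means = np.array([120.0, 135.0, 110.0])  # member streamflow forecasts (m^3/s)
sds = np.array([10.0, 12.0, 8.0])        # member predictive spreads

# Mixture mean, and mixture variance via the law of total variance:
# within-member spread plus between-member disagreement.
mix_mean = float(weights @ means)
mix_var = float(weights @ (sds**2 + means**2) - mix_mean**2)

print(round(mix_mean, 1))  # 122.5
```

The between-member term is what makes the BMA spread wider than any single member's, which is how the approach captures model-choice uncertainty on top of each model's own error.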
67

Estimating and Correcting the Effects of Model Selection Uncertainty

Nguefack Tsague, Georges Lucioni Edison 03 February 2006 (has links)
No description available.
68

Pojednání o empirické finanční ekonomii / Essays in Empirical Financial Economics

Žigraiová, Diana January 2018 (has links)
This dissertation is composed of four essays that empirically investigate three topics in financial economics: financial stress and its leading indicators, the relationship between bank competition and financial stability, and the link between management board composition and bank risk. In the first essay we examine which variables have predictive power for financial stress in 25 OECD countries, using a recently constructed financial stress index (FSI). We find that panel models can hardly explain FSI dynamics. Although better results are achieved in country models, our findings suggest that financial stress is hard to predict out-of-sample despite the reasonably good in-sample performance of the models. The second essay develops an early-warning framework for assessing systemic risks and predicting systemic events over two horizons of different length on a panel of 14 countries. We build a financial stress index to identify the starting dates of systemic financial crises and select crisis-leading indicators in a two-step approach: we find relevant prediction horizons for each indicator and employ Bayesian model averaging to identify the most useful predictors. We find superior performance of the long-horizon model for the Czech Republic. The theoretical literature gives conflicting predictions on how bank...
69

Evaluation économique des aires marines protégées : apports méthodologiques et applications aux îles Kuriat (Tunisie) / Economic valuation of marine protected areas : methodological perspectives and empirical applications to Kuriat Islands (Tunisia)

Mbarek, Marouene 16 December 2016 (has links)
The protection of marine natural resources is a major challenge for policy makers. The recent development of marine protected areas (MPAs) contributes to these preservation goals. MPAs aim to preserve marine and coastal ecosystems while promoting human activities.
The complexity of these objectives makes them difficult to reach. The purpose of this work is to conduct an ex-ante analysis of a proposed MPA at the Kuriat Islands (Tunisia). This analysis is an aid to decision makers for better governance, integrating the actors involved (fishermen, visitors, boaters) in the management process. To do this, we apply the contingent valuation method (CVM) to samples of fishermen and visitors to the Kuriat Islands. We are interested in the treatment of selection and sampling bias, and in uncertainty about the specification of econometric models, during the implementation of the CVM. We use the HeckitBMA model, a combination of the Heckman (1979) model and Bayesian inference, to calculate the fishermen's willingness to accept. We also use the zero-inflated ordered probit (ZIOP) model, which combines a binary probit with an ordered probit, to calculate the visitors' willingness to pay after correcting the sample by multiple imputation. Our results show that the groups of actors are distinguished by their activity and economic situation, which leads them to have different perceptions. This allows policy makers to design a compensation policy to indemnify the actors who have been harmed.
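The ZIOP construction mentioned above composes two probits, and the way zeros get "inflated" can be sketched directly. All latent indices and cut-points below are illustrative numbers, not estimates from the thesis.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ziop_probs(part_index, outcome_index, cuts):
    """Zero-inflated ordered probit class probabilities.

    part_index: latent index of the binary participation probit
    outcome_index: latent index of the ordered probit
    cuts: increasing thresholds giving len(cuts) + 1 ordered classes

    A zero is observed either from a non-participant or from a participant
    whose latent outcome falls below the first cut.
    """
    p_in = phi(part_index)                      # P(participation)
    edges = [float("-inf")] + list(cuts) + [float("inf")]
    ordered = [phi(edges[k + 1] - outcome_index) - phi(edges[k] - outcome_index)
               for k in range(len(cuts) + 1)]
    probs = [p_in * p for p in ordered]
    probs[0] += 1.0 - p_in                      # inflate the zero class
    return probs

p = ziop_probs(part_index=0.3, outcome_index=0.5, cuts=[0.0, 1.0])
print(round(sum(p), 6))  # the class probabilities sum to 1
```

The extra mass on the zero class, relative to a plain ordered probit with the same cuts, is what distinguishes "will not pay at all" from "would pay, but a low amount".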
70

Fusion pour la séparation de sources audio / Fusion for audio source separation

Jaureguiberry, Xabier 16 June 2015 (has links)
Underdetermined blind source separation is a complex mathematical problem that can be satisfactorily solved for some practical applications, provided that the right separation method has been selected and carefully tuned.
In order to automate this selection process, we propose in this thesis to resort to the principle of fusion, which has been widely used in the related field of classification yet is still marginally exploited in source separation. Fusion consists in combining several methods to solve a given problem instead of selecting a unique one. To do so, we introduce a general fusion framework in which a source estimate is expressed as a linear combination of estimates of this same source given by different separation algorithms, each source estimate being weighted by a fusion coefficient. For a given task, fusion coefficients can then be learned on a representative training dataset by minimizing a cost function related to the separation objective. To go further, we also propose two ways to adapt the fusion coefficients to the mixture to be separated. The first one expresses the fusion of several non-negative matrix factorization (NMF) models in a Bayesian fashion similar to Bayesian model averaging. The second one aims at learning time-varying fusion coefficients thanks to deep neural networks. All proposed methods have been evaluated on two distinct corpora. The first one is dedicated to speech enhancement while the other deals with singing voice extraction. Experimental results show that fusion always outperforms simple selection in all considered cases, with the best results obtained by adaptive time-varying fusion with neural networks.
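The general framework above, a fused estimate as a weighted sum of per-algorithm estimates with coefficients learned on training data, can be sketched with a scalar signal. The three "separation algorithms" below are synthetic corruptions of a known reference, and squared error stands in for the separation cost; both are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
clean = rng.normal(size=n)  # reference source, available at training time

# Three hypothetical separation outputs, each imperfect in its own way.
estimates = np.stack([
    clean + rng.normal(scale=0.3, size=n),        # noisy
    0.7 * clean + rng.normal(scale=0.2, size=n),  # attenuated
    clean + 0.5,                                  # biased
])

# Learn fusion coefficients alpha_k by least squares so that
# sum_k alpha_k * estimate_k approximates the reference source.
alpha, *_ = np.linalg.lstsq(estimates.T, clean, rcond=None)
fused = alpha @ estimates

mse_fused = float(np.mean((fused - clean) ** 2))
mse_members = np.mean((estimates - clean) ** 2, axis=1)
print(mse_fused < float(mse_members.min()))  # fusion beats every single method
```

Since selecting a single method is itself a linear combination (one coefficient set to 1, the rest to 0), the learned fusion can never do worse than the best member on the training cost, which is the intuition behind fusion outperforming selection.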
