141

The Impact of Access to Conditional Cash Transfers and Remittances on Credit Markets: Evidence from Nicaragua and Bangladesh

Hernandez-Hernandez, Emilio 26 October 2009 (has links)
No description available.
142

Health and Environmental Benefits of Reduced Pesticide Use in Uganda: An Experimental Economics Analysis

Bonabana-Wabbi, Jackline 15 April 2008 (has links)
Two experimental procedures are employed to value both health and environmental benefits from reducing pesticide use in Uganda. The first, an incentive-compatible auction, involves subjects with incomplete information placing bids to avoid consuming potentially contaminated groundnuts/water in a framed field experiment. Three experimental treatments (information, proxy good, and group treatments) are used. Subjects are endowed with a monetary amount (starting capital) equivalent to half the country's per capita daily income (in small denominations). Two hundred and fifty-seven respondents took part in a total of 35 experimental sessions in the Kampala and Iganga districts. Tobit model results indicate that subjects place significant positive values on avoiding ill-health outcomes, although these values vary by region, by treatment, and by socio-economic characteristics. Gender differences were important in explaining bidding behavior, with male respondents in both study areas bidding higher than females to avoid ill-health outcomes. Consistent with a priori expectations, the rural population's average willingness to pay (WTP) to avoid ill-health outcomes was lower (by 11.4 percent) than the urban population's WTP, possibly reflecting the poverty level in rural areas and how it translates into reduced regard for health and environmental improvements. Tests of hypotheses suggest that (i) providing brief information to subjects just prior to the valuation exercise does not influence bid behavior; (ii) subjects are indifferent to the source of contamination: WTP to avoid health outcomes from potentially contaminated water and groundnuts are not significantly different; and (iii) the classical tendency to free-ride in public goods provision was observed, and this phenomenon was more pronounced in the urban area than in the rural one. The second experimental procedure involved 132 urban respondents making repeated choices from a set of scenarios described by attributes of water quality, an environmental good. Water quality is represented by profiles of water safety levels at varying costs. Analysis using the conditional (fixed-effects) logit showed that urban subjects heavily discount unsafe drinking water and were willing to pay less for safe agricultural water, a result not unexpected considering that the urban population is not directly involved in agricultural activities and thus does not value agricultural water quality as much as drinking water quality. Results also showed that subjects' utility increased with the cost of a water sample (inconsistent with a downward-sloping demand curve), suggesting perhaps that they perceived higher costs to be associated with higher water quality. Some theoretically inconsistent results were thus obtained with the choice experiments. / Ph. D.
143

Conditional Multifactorial Contingency (CMC) Model and Its Applications

Cheng, Zuolin 17 January 2023 (has links)
In biology and bioinformatics, a variety of data share a common property that challenges numerous cutting-edge research studies: heterogeneity at the individual level with respect to more than one factor. Examples of such heterogeneities include, but are not limited to: 1) unequal susceptibility of different patients, and 2) large diversity in gene length, GC content, and the resulting gene characteristics. For many biological data analyses, the critical first step is to infer the null probability distribution of the observed data with the heterogeneities in multiple (confounding) factors taken into account, so that the impact of the factor(s) of interest can then be investigated. These heterogeneities heavily influence the conclusions that may be drawn from statistical analyses of the data. However, modeling such heterogeneities has been challenging, not only because explicitly modeling all factors with heterogeneous effects on the data is infeasible, but also because many factors are not independent of one another. Existing methods either neglected the heterogeneity issue partially or entirely, or handled each factor's heterogeneity in isolation. Evidence has shown the insufficiency of such strategies and the errors they may produce in downstream analyses. The emergence of large-scale data sets provides the opportunity to learn the heterogeneity directly and comprehensively from the data, without explicitly modeling the underlying mechanisms or imposing strong assumptions. The data, often stored or organized as multidimensional contingency tensors, lead to a natural perspective of modeling heterogeneity with each impact factor of interest being one dimension. The heterogeneity in each factor's impact on the variable of interest can be captured by the marginal property of the data tensor with respect to the corresponding dimension. For instance, in a single-cell sequencing dataset, which can be organized as a matrix with each row representing a gene and each column representing a cell, the heterogeneity caused by both the gene and cell factors can be modeled. In this dissertation, we develop a novel model, Conditional Multifactorial Contingency (CMC), that models the intertwined heterogeneities in all dimensions of the data tensor and infers the probability distribution of each entry of the data tensor jointly conditioned on these heterogeneities. In the proposed CMC model, the problem is formulated as a maximum entropy problem for the contingency tensor's probability distribution subject to the marginal constraints, under the assumption that the individuals within each dimension are independent. The marginal constraints are applied to the expected values instead of the observed trial outcomes, which plays a key role in avoiding the innumerable combinations of trial outcomes and leads to an elegant expression for each entry's probability distribution. The model is first developed for binary data matrices, then extended to multidimensional data tensors and integer data tensors. Furthermore, CMC is extended to be compatible with data containing missing values. Empowered by CMC, we conducted four case studies on real-world bioinformatics research problems: (1) driving transcription factor (TF) identification; (2) scRNA-seq data normalization; (3) cancer-associated gene identification; and (4) cell similarity quantification.
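The marginal-matching idea can be illustrated with a minimal sketch. Restricting to a 2D binary matrix (a simplification of the dissertation's tensor formulation), the maximum-entropy distribution with constraints on the expected row and column sums has independent entries of the logistic form P(X_ij = 1) = sigmoid(a_i + b_j); the parameters are fitted so that expected marginals match the observed ones. The fitting scheme below is illustrative only, not the dissertation's implementation:

    import numpy as np

    def fit_max_entropy_bernoulli(X, n_iter=2000, lr=0.5):
        """Fit P[i, j] = sigmoid(a[i] + b[j]) so that expected row and column
        sums match the observed marginals of the binary matrix X (the
        maximum-entropy solution under expected-marginal constraints)."""
        n, m = X.shape
        a, b = np.zeros(n), np.zeros(m)  # row (e.g., gene) and column (e.g., cell) effects
        row_sums, col_sums = X.sum(axis=1), X.sum(axis=0)
        for _ in range(n_iter):
            P = 1.0 / (1.0 + np.exp(-(a[:, None] + b[None, :])))
            a += lr * (row_sums - P.sum(axis=1)) / m  # push expected row sums toward observed
            b += lr * (col_sums - P.sum(axis=0)) / n  # push expected column sums toward observed
        return 1.0 / (1.0 + np.exp(-(a[:, None] + b[None, :])))

    # The fitted P serves as a null probability for each entry, against which
    # observed outcomes (e.g., TF-gene binding states) can be compared.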
For each of these case studies, we proposed a whole analysis framework and a specific adaptation of CMC. For driving-TF identification, compared with traditional methods, we considered variations in each gene's binding affinity in addition to the typically considered variations in each TF's binding affinity. The driving TFs were identified by comparing the observed binding state with the estimated binding probability conditioned on the TF/gene binding affinities. For scRNA-seq data normalization, besides the gene factor and the cell factor, we identified one more factor impacting the read counts, cDNA length, and applied CMC to analyze the three factors jointly. For cancer-associated gene identification, the CMC model is applied to systematically model the patient, gene, and mutation-type factors in the mutation count data. As for the last application, to the best of our knowledge, our solution is the first proposed cell-to-cell-type similarity quantification method, thanks to the ability of CMC to systematically model and remove the impact of the cell and gene factors. We studied the theoretical properties of the proposed model and validated the effectiveness and efficiency of our method through experiments. The uniqueness of the probability solution and the convergence of the algorithm were proved. In the endeavor to identify true driving TFs, CMC significantly improved the best reported success rate, as demonstrated on data with ground truth. Moreover, in an exploratory study without ground truth, in addition to the previously known TFs Olig1 (ranked 2nd), Olig2 (ranked 3rd), and Sox10 (ranked 4th), we successfully identified Ppp1r14b (ranked 1st) and Zfp36l1 (ranked 6th) as functioning in oligodendrocyte lineage development, which was validated via biological knock-out experiments and has led to genuine biological discoveries. In scRNA-seq data normalization, experimental results show that, by taking the cell, gene, and cDNA-length factors into account, the normalized data achieve lower variances for housekeeping genes than peer methods, and lead to better accuracy in downstream DEG detection than data normalized by peer methods. In cancer-associated gene identification, the CMC model is able to eliminate most of the likely artefactual findings that result from considering the hidden factors separately. In cell similarity quantification, the CMC-based model enables the identification of cell types by establishing between-species cell similarity quantification, even in the presence of contamination in the scRNA-seq data. / Doctor of Philosophy / Biological data are complicated and typically influenced by numerous factors, including characteristics of the biological subjects, physical or chemical properties of molecules, artifacts created by experimental operations, and so on. The information of real interest in a biology/bioinformatics study can be buried under all sorts of irrelevant factors and their impacts on the data. Consider a simple example in which a study is conducted to determine whether an association exists between a specific gene and a cancer. Although this gene shows clearly different frequencies of mutation in two groups of people, patients and healthy controls, we cannot safely confirm the association from this observation alone. Such differential mutation levels can also result from the diversity among all these people in how easily this gene mutates in a given person (related to many characteristics of that person besides "cancer/not").
We call this diversity "heterogeneity", and it can be seen everywhere: in people, in genes, in cells, in cell types, and so on. One needs to take good care of such heterogeneities in order to draw firm statistical, and hence scientific, conclusions. However, handling the heterogeneities is far from trivial. On the one hand, it is generally impossible to fully understand the mechanisms behind those diversities, let alone to explicitly and rigorously formulate them. On the other hand, it is not rare that multiple factors intertwine with one another, in which case all these factors must be considered systematically in order to model the data precisely. Existing methods either neglected the heterogeneity issue partially or entirely, or handled each factor's heterogeneity in isolation. Evidence has shown the insufficiency of such strategies and the errors they may produce in downstream analyses. As the exact mechanisms behind heterogeneities are usually not available, we aim to learn and infer the heterogeneities' effects on the data from the data itself. A large group of biological data can be stored or organized as multidimensional contingency tensors, with each impact factor of interest being one dimension. The heterogeneity in each factor's impact on the variable of interest can be captured by the marginal property of the data tensor with respect to the corresponding dimension, for example, the row sums and the column sums of a 2D tensor. In this dissertation, under the assumption that the individuals of each dimension are independent, we propose a novel model, Conditional Multifactorial Contingency (CMC), that models the intertwined heterogeneities in all dimensions of the data tensor and infers the probability distribution of each entry of the data tensor jointly conditioned on these heterogeneities. The eventual and most comprehensive version of CMC can work on multidimensional binary or integer data tensors, even in cases where some values in the tensor are missing. CMC starts from simple and elegant statistical principles and is derived through rigorous theoretical proofs, yet it ends up as a powerful tool widely applicable to real-world biology/bioinformatics studies. Empowered by CMC, we conducted four case studies on real-world bioinformatics research problems: (1) driving transcription factor (TF) identification; (2) scRNA-seq data normalization; (3) cancer-associated gene identification; and (4) cell similarity quantification. For each of these case studies, we proposed a whole analysis framework and a specific adaptation of CMC. In each of them, our CMC-based method outperformed existing methods and provided inspiring clues for biological discoveries, which have been validated by biological experiments.
144

Latent Walking Techniques for Conditioning GAN-Generated Music

Eisenbeiser, Logan Ryan 21 September 2020 (has links)
Artificial music generation is a rapidly developing field focused on the complex task of creating neural networks that can produce realistic-sounding music. Generating music is very difficult; components like long- and short-term structure unfold over time in ways that can be difficult for neural networks to capture. Additionally, the acoustics of musical features like harmonies and chords, as well as timbre and instrumentation, require complex representations for a network to generate them accurately. Various techniques for both music representation and network architecture have been used in the past decade to address these challenges in music generation. The focus of this thesis extends beyond generating music to the challenge of controlling and/or conditioning that generation. Conditional generation involves an additional piece or pieces of information which are input to the generator and constrain aspects of the results. Conditioning can be used to specify a tempo for the generated song, increase the density of notes, or even change the genre. Latent walking is one of the most popular techniques in conditional image generation, but its effectiveness on music-domain generation is largely unexplored. This thesis focuses on latent walking techniques for conditioning the music generation network MuseGAN and examines the impact of this conditioning on the generated music. / Master of Science / Artificial music generation is a rapidly developing field focused on the complex task of creating neural networks that can produce realistic-sounding music. Beyond simply generating music lies the challenge of controlling or conditioning that generation. Conditional generation can be used to specify a tempo for the generated song, increase the density of notes, or even change the genre. Latent walking is one of the most popular techniques in conditional image generation, but its effectiveness on music-domain generation is largely unexplored, especially for generative adversarial networks (GANs). This thesis focuses on latent walking techniques for conditioning the music generation network MuseGAN and examines the impact and effectiveness of this conditioning on the generated music.
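As a generic illustration of the technique (not MuseGAN's actual multi-track latent layout, which is more structured), latent walking steps along a path between two latent vectors and decodes each intermediate point; G and sample_latents_for below are hypothetical placeholders:

    import numpy as np

    def latent_walk(z_start, z_end, n_steps=8):
        """Linear interpolation between two latent vectors; decoding each
        intermediate point yields outputs that shift gradually from one
        condition to the other."""
        alphas = np.linspace(0.0, 1.0, n_steps)
        return np.stack([(1.0 - a) * z_start + a * z_end for a in alphas])

    # Hypothetical usage with a pretrained generator G:
    # z_slow, z_fast = sample_latents_for(tempo=80), sample_latents_for(tempo=140)
    # for z in latent_walk(z_slow, z_fast):
    #     bars = G(z)  # generated bars move smoothly between the two tempi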
145

The Martingale Approach to the Study of the Occurrence of Words in Independent Trials

Masitéli, Vanessa 07 April 2017 (has links)
Let {Xn} be a sequence of i.i.d. random variables taking values in a countable alphabet. Given a finite collection of words, we observe this sequence until the moment τ at which one of these words first appears as a run in X1, X2, ... In this work we apply the martingale approach introduced by Li (1980) and Gerber and Li (1981) to study the waiting time until one of the words occurs for the first time, the mean of τ, and the probability of each word being the first to appear.
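For a single word, the martingale ("fair casino") argument of Li (1980) yields a closed form: E[τ] is the sum, over every length k at which the word's prefix equals its suffix, of the reciprocal probability of that prefix. A sketch of this single-word case follows (the collection-of-words problem additionally requires solving a linear system for the first-occurrence probabilities):

    from fractions import Fraction

    def expected_waiting_time(word, probs):
        """Mean time until `word` first appears in i.i.d. draws with letter
        probabilities `probs`, via the Gerber-Li overlap formula:
        E[tau] = sum over k with word[:k] == word[-k:] of 1 / P(word[:k])."""
        total = Fraction(0)
        for k in range(1, len(word) + 1):
            if word[:k] == word[-k:]:  # prefix of length k is also a suffix
                p = Fraction(1)
                for ch in word[:k]:
                    p *= Fraction(probs[ch])
                total += 1 / p
        return total

    # Fair coin: 'HH' overlaps at k = 1 and k = 2, so E = 2 + 4 = 6;
    # 'HT' overlaps only at k = 2, so E = 4.
    fair = {"H": Fraction(1, 2), "T": Fraction(1, 2)}
    print(expected_waiting_time("HH", fair), expected_waiting_time("HT", fair))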
146

Selected Problems of Financial Time Series Modelling

Hendrych, Radek January 2015 (has links)
Title: Selected problems of financial time series modelling Author: Radek Hendrych Department: Department of Probability and Mathematical Statistics (DPMS) Supervisor: Prof. RNDr. Tomáš Cipra, DrSc., DPMS Abstract: The present dissertation deals with selected problems of financial time series analysis. In particular, it focuses on two fundamental aspects of conditional heteroscedasticity modelling. The first part of the thesis introduces and discusses self-weighted recursive estimation algorithms for several classic univariate conditional heteroscedasticity models, namely the ARCH, GARCH, RiskMetrics EWMA, and GJR-GARCH processes. Their numerical capabilities are demonstrated by Monte Carlo experiments and real-data examples. The second part of the thesis proposes a novel approach to conditional covariance (correlation) modelling. The suggested modelling technique has been inspired by the essential idea of the multivariate orthogonal GARCH method. It is based on a suitable type of linear time-varying orthogonal transformation, which makes it possible to employ the constant conditional correlation scheme. The corresponding model is implemented by using a nonlinear discrete-time state space representation. The proposed approach is compared with other commonly applied models. It demonstrates its...
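Among the models named above, the RiskMetrics EWMA recursion is simple enough to sketch from its textbook definition. The sketch below is only the variance filter with a fixed λ (0.94 is the common daily-data choice); the thesis's self-weighted recursive estimators, which update model parameters online, are not reproduced here:

    import numpy as np

    def ewma_volatility(returns, lam=0.94):
        """RiskMetrics EWMA conditional variance recursion:
        sigma2[t] = lam * sigma2[t-1] + (1 - lam) * r[t-1]**2."""
        sigma2 = np.empty_like(returns, dtype=float)
        sigma2[0] = np.var(returns)  # simple initialisation; other choices exist
        for t in range(1, len(returns)):
            sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
        return np.sqrt(sigma2)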
147

Beyond Cheerleaders and Checklists: The Effects of the Feedback Environment on Employee Self-Development

Cavanaugh, Caitlin M. 04 October 2016 (has links)
No description available.
148

Conditional Moment Closure Methods for Turbulent Combustion Modelling

El Sayed, Ahmad 18 March 2013 (has links)
This thesis describes the application of the first-order Conditional Moment Closure (CMC) to the autoignition of high-pressure fuel jets, and to piloted and lifted turbulent jet flames using classical and advanced CMC submodels. A Doubly-Conditional Moment Closure (DCMC) formulation is further proposed. In the first study, CMC is applied to investigate the impact of C₂H₆, H₂ and N₂ additives on the autoignition of high-pressure CH₄ jets injected into lower-pressure heated air. A wide range of pre-combustion air temperatures is considered and detailed chemical kinetics are employed. It is demonstrated that the addition of C₂H₆ and H₂ does not change the main CH₄ oxidation pathways. The decomposition of these additives provides additional ignition-promoting radicals, and therefore leads to shorter ignition delays. N₂ additives do not alter the CH₄ oxidation pathways; however, they reduce the amount of CH₄ available for reaction, causing delayed ignition. It is further shown that ignition always occurs in lean mixtures and at low scalar dissipation rates. The second study is concerned with the modelling of a piloted CH₄/air turbulent jet flame. A detailed assessment of several Probability Density Function (PDF), Conditional Scalar Dissipation Rate (CSDR) and Conditional Velocity (CV) submodels is first performed. The results of two β-PDF-based implementations are then presented. The two realisations differ in the modelling of the CSDR: homogeneous (inconsistent) and inhomogeneous (consistent) closures are considered. It is shown that the levels of all reactive scalars, including minor intermediates and radicals, are better predicted when the effects of inhomogeneity are included in the modelling of the CSDR. The following two studies focus on the consistent modelling of a lifted H₂/N₂ turbulent jet flame issuing into a vitiated coflow. Two approaches are followed to model the PDF. In the first, a presumed β-distribution is assumed, whereas in the second, the Presumed Mapping Function (PMF) approach is employed. Fully consistent CV and CSDR closures based on the β-PDF and the PMF-PDF are employed. The homogeneous versions of the CSDR closures are also considered in order to assess the effect of the spurious sources which stem from the inconsistent modelling of mixing. The flame response is analysed over a narrow range of coflow temperatures (Tc). The stabilisation mechanism is determined from the analysis of the transport budgets in mixture fraction and physical spaces, and from the history of radical build-up ahead of the stabilisation height. The β-PDF realisations indicate that the flame is stabilised by autoignition irrespective of the value of Tc. On the other hand, the PMF realisations reveal that the stabilisation mechanism is sensitive to Tc. Autoignition remains the controlling stabilisation mechanism for sufficiently high Tc; however, as Tc is decreased, stabilisation is achieved by means of premixed flame propagation. The analysis of the spurious sources reveals that their effect is small but non-negligible, most notably within the flame zone. Further, the assessment of several H₂ oxidation mechanisms shows that the flame is very sensitive to chemical kinetics. In the last study, a DCMC method is proposed for the treatment of fluctuations in non-premixed and partially premixed turbulent combustion. The classical CMC theory is extended by introducing a normalised Progress Variable (PV) as a second conditioning variable beside the mixture fraction.
The unburnt and burnt states involved in the normalisation of the PV are specified such that they are mixture fraction-dependent. A transport equation for the normalised PV is first obtained. The doubly-conditional species, enthalpy and temperature transport equations are then derived using the decomposition approach and the primary closure hypothesis is applied. Submodels for the doubly-conditioned unclosed terms which arise from the derivation of DCMC are proposed. As a preliminary analysis, the governing equations are simplified for homogeneous turbulence and a parametric assessment is performed by varying the strain rate levels in mixture fraction and PV spaces.
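To make the presumed-PDF ingredient above concrete: in a presumed β-PDF closure, the mixture-fraction PDF at each point is reconstructed from its local mean and variance by moment matching. The sketch below is a generic illustration of that standard construction (not code from the thesis), valid when 0 < variance < mean·(1 − mean):

    import numpy as np
    from scipy.stats import beta

    def presumed_beta_pdf(xi_mean, xi_var):
        """Presumed beta-PDF of the mixture fraction, moment-matched to the
        prescribed mean and variance (requires 0 < xi_var < xi_mean*(1 - xi_mean))."""
        g = xi_mean * (1.0 - xi_mean) / xi_var - 1.0
        return beta(xi_mean * g, (1.0 - xi_mean) * g)

    # Example: conditional means are integrated against this PDF to recover
    # unconditional averages; here mean 0.3, variance 0.05.
    pdf = presumed_beta_pdf(0.3, 0.05)
    print(pdf.mean())  # ~0.3, consistent with the prescribed mean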
150

Imputation of Missing Data with Application to Commodity Futures

Östlund, Simon January 2016 (has links)
In recent years, additional requirements have been imposed on financial institutions, including central counterparty clearing houses (CCPs), in an attempt to assess quantitative measures of their exposure to different types of risk. One of these requirements results in a need to perform stress tests to check resilience in the event of a stressed market or crisis. However, financial markets develop over time, and this leads to a situation where some instruments traded today were not present at the chosen historical date because they were introduced after the considered event. Building on current routines, the main goal of this thesis is to provide a more sophisticated method to impute (fill in) historical missing data as preparatory work in the context of stress testing. The models considered in this thesis include two methods currently regarded as state-of-the-art techniques, based on maximum likelihood estimation (MLE) and multiple imputation (MI), together with a third, alternative approach involving copulas. The different methods are applied to historical return data on commodity futures contracts from the Nordic energy market. Judged by conventional error metrics and out-of-sample log-likelihood, it is very hard in general to distinguish the performance of the methods, or to draw any conclusion about how good the models are relative to one another. Even though the Student's t-distribution generally seems to be a more adequate assumption for the data than the normal distribution, all the models show rather poor performance at first glance. However, by analysing the conditional distributions more thoroughly and evaluating each model on its ability to impute specific quantile values, the performance of each method improves significantly. Comparing the different models when imputing more extreme quantile values, the conclusion is that all methods produce satisfactory results, even if the g-copula and t-copula models appear more robust than the corresponding linear models.
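The quantile-based evaluation described above can be illustrated under a joint-Gaussian assumption (a deliberate simplification; the thesis's copula models replace this dependence structure): each missing component is imputed with a chosen quantile of its conditional distribution given the observed components. A hedged sketch, with illustrative names:

    import numpy as np
    from scipy.stats import norm

    def conditional_gaussian_impute(x, mu, cov, missing, q=0.5):
        """Impute the missing components of the 1D array x with the q-quantile
        of the Gaussian conditional distribution given the observed components;
        q = 0.5 returns the conditional mean, extreme q values probe the tails."""
        d = len(x)
        obs = [i for i in range(d) if i not in missing]
        mis = list(missing)
        S_oo = cov[np.ix_(obs, obs)]
        S_mo = cov[np.ix_(mis, obs)]
        # Conditional mean and covariance of x[mis] given x[obs]
        w = np.linalg.solve(S_oo, x[obs] - mu[obs])
        cond_mean = mu[mis] + S_mo @ w
        cond_cov = cov[np.ix_(mis, mis)] - S_mo @ np.linalg.solve(S_oo, S_mo.T)
        return cond_mean + norm.ppf(q) * np.sqrt(np.diag(cond_cov))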
