301

Zkoumání konektivity mozkových sítí pomocí hemodynamického modelování / Exploring Brain Network Connectivity through Hemodynamic Modeling

Havlíček, Martin January 2012 (has links)
Functional magnetic resonance imaging (fMRI), which uses the blood-oxygen-level-dependent (BOLD) effect as an indicator of local activity, is a very useful technique for identifying brain regions that are active during perception, cognition and action, as well as during the resting state. Recently, there has also been growing interest in studying the connectivity between these regions, particularly in the resting state. This thesis presents a new and original approach to the problem of the indirect relationship between the measured hemodynamic response and its cause, i.e. the neuronal signal. This indirect relationship complicates the estimation of effective connectivity (causal influence) between different brain regions from fMRI data. The novelty of the presented approach lies in the use of a (generalized nonlinear) blind-deconvolution technique, which allows endogenous neuronal signals (i.e. the system inputs) to be estimated from the measured hemodynamic responses (i.e. the system outputs). This means that the method enables a data-driven assessment of effective connectivity at the neuronal level even when only noisy hemodynamic responses are measured. The solution of this difficult deconvolution (inverse) problem is achieved using nonlinear recursive Bayesian estimation, which provides a joint estimate of the unknown states and parameters of the model. The thesis is divided into three main parts. The first part proposes a method to solve the problem described above. The method builds on square-root forms of the nonlinear cubature Kalman filter and the cubature Rauch-Tung-Striebel smoother, extended to solve the so-called joint estimation problem, defined as the simultaneous estimation of states and parameters in a sequential manner. The method is designed primarily for continuous-discrete systems and achieves an accurate and stable solution to model discretization by combining the nonlinear (cubature) filter with a local linearization technique. This inversion method is further complemented by adaptive estimation of the statistics of the measurement noise and the process noises (i.e. the noises of the unknown states and parameters). The first part of the thesis focuses on inverting the model of a single time course only, i.e. on estimating neuronal activity from the fMRI signal. The second part generalizes the proposed approach and applies it to multiple time courses in order to estimate the coupling parameters of a neuronal interaction model, i.e. effective connectivity. This method represents an innovative stochastic treatment of dynamic causal modeling, which distinguishes it from previously introduced approaches. The second part also deals with Bayesian model selection and proposes a technique for detecting irrelevant coupling parameters in order to achieve improved parameter estimation. Finally, the third part is devoted to validating the proposed approach using both simulated and empirical fMRI data, and provides significant evidence of its very satisfactory performance.
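For readers unfamiliar with the joint estimation idea, here is a minimal sketch of state-parameter estimation with a cubature Kalman filter on a toy nonlinear system. It uses a plain (non-square-root) cubature filter rather than the square-root continuous-discrete implementation the thesis develops, and the dynamics, noise levels and parameter values are illustrative only.

```python
# Minimal sketch: joint state-parameter estimation with a cubature Kalman
# filter (CKF). Not the thesis's square-root implementation; a non-square-root
# CKF is less numerically robust but shorter to write.
import numpy as np

def cubature_points(m, P):
    """2n symmetric cubature points of a Gaussian N(m, P)."""
    n = m.size
    S = np.linalg.cholesky(P)
    return m[:, None] + np.sqrt(n) * np.hstack([S, -S])

def ckf_step(m, P, y, f, h, Q, R):
    """One predict/update cycle of the cubature Kalman filter."""
    n = m.size
    # predict: push cubature points through the dynamics f
    X = cubature_points(m, P)
    Xf = np.column_stack([f(x) for x in X.T])
    mp = Xf.mean(axis=1)
    Pp = (Xf - mp[:, None]) @ (Xf - mp[:, None]).T / (2 * n) + Q
    # update: push fresh points through the observation model h
    X = cubature_points(mp, Pp)
    Yh = np.column_stack([h(x) for x in X.T])
    yp = Yh.mean(axis=1)
    Pyy = (Yh - yp[:, None]) @ (Yh - yp[:, None]).T / (2 * n) + R
    Pxy = (X - mp[:, None]) @ (Yh - yp[:, None]).T / (2 * n)
    K = Pxy @ np.linalg.inv(Pyy)
    return mp + K @ (y - yp), Pp - K @ Pyy @ K.T

# Joint estimation: augment the hidden state x with an unknown decay rate
# theta (modeled as a slow random walk), so the filter tracks both at once.
dt = 0.1
f = lambda z: np.array([z[0] + dt * (-z[1] * z[0] + 1.0), z[1]])  # toy dynamics
h = lambda z: np.array([z[0] ** 2])        # nonlinear readout (stand-in for BOLD)
Q = np.diag([1e-4, 1e-4])                  # process noise for state and parameter
R = np.array([[1e-2]])                     # measurement noise

rng = np.random.default_rng(0)
x_true, theta_true = 0.2, 1.5              # illustrative ground truth
m, P = np.array([0.2, 0.5]), np.eye(2) * 0.5
for t in range(500):
    x_true += dt * (-theta_true * x_true + 1.0)
    y = np.array([x_true ** 2]) + rng.normal(0.0, 0.1, 1)
    m, P = ckf_step(m, P, y, f, h, Q, R)
print("estimated theta:", m[1])            # should drift toward theta_true
```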
302

Fusion pour la séparation de sources audio / Fusion for audio source separation

Jaureguiberry, Xabier 16 June 2015 (has links)
Underdetermined blind source separation is a complex mathematical problem that can now be resolved satisfactorily for some practical applications, provided that the right separation method has been selected and carefully tuned. In order to automate this selection process, we propose in this thesis to resort to the principle of fusion, which has been widely used in the related field of classification yet is still marginally exploited in source separation. Fusion consists in combining several methods to solve a given problem instead of selecting a unique one. To do so, we introduce a general fusion framework in which a source estimate is expressed as a linear combination of estimates of this same source given by different separation algorithms, each source estimate being weighted by a fusion coefficient. For a given task, fusion coefficients can then be learned on a representative training dataset by minimizing a cost function related to the separation objective. To go further, we also propose two ways to adapt the fusion coefficients to the mixture to be separated. The first one expresses the fusion of several non-negative matrix factorization (NMF) models in a Bayesian fashion similar to Bayesian model averaging. The second one aims at learning time-varying fusion coefficients thanks to deep neural networks. All proposed methods have been evaluated on two distinct corpora. The first one is dedicated to speech enhancement while the other deals with singing voice extraction. Experimental results show that fusion consistently outperforms simple selection in all considered cases, the best results being obtained by adaptive time-varying fusion with neural networks.
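The static version of this fusion framework can be illustrated with a short sketch: source estimates from several separation algorithms are stacked and nonnegative fusion coefficients are fitted by least squares against a training reference. The signals below are synthetic placeholders; the thesis's actual cost functions and corpora are not reproduced here.

```python
# Minimal sketch of static fusion: learn nonnegative combination weights for
# several source estimates against a known training reference.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
reference = rng.standard_normal(10000)                    # true source (training)
estimates = [reference + rng.standard_normal(10000) * s   # three algorithms'
             for s in (0.3, 0.6, 1.0)]                    # outputs, varying quality

A = np.column_stack(estimates)
weights, _ = nnls(A, reference)     # fusion coefficients, constrained >= 0
fused = A @ weights                 # fused estimate; reuse the same weights at test time
print(weights)                      # better algorithms should receive larger weights
```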
303

A Logistic Regression Analysis of Utah Colleges Exit Poll Response Rates Using SAS Software

Stevenson, Clint W. 27 October 2006 (has links) (PDF)
In this study I examine voter response at an interview level using a dataset of 7562 voter contacts (including responses and nonresponses) in the 2004 Utah Colleges Exit Poll. In 2004, 4908 of the 7562 voters approached responded to the exit poll for an overall response rate of 65 percent. Logistic regression is used to estimate factors that contribute to a success or failure of each interview attempt. This logistic regression model uses interviewer characteristics, voter characteristics (both respondents and nonrespondents), and exogenous factors as independent variables. Voter characteristics such as race, gender, and age are strongly associated with response. An interviewer's prior retail sales experience is associated with whether a voter will decide to respond to a questionnaire or not. The only exogenous factor that is associated with voter response is whether the interview occurred in the morning or afternoon.
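The study's model was fitted with SAS software; as a rough illustration, the same kind of interview-level logistic regression can be sketched in Python. The file and column names below are hypothetical.

```python
# Minimal sketch (not the study's SAS code): logistic regression of interview
# success on interviewer, voter, and exogenous factors, one row per contact.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exit_poll_contacts.csv")   # assumed file with assumed columns
model = smf.logit(
    "responded ~ voter_age + C(voter_gender) + C(voter_race)"
    " + retail_experience + C(time_of_day)",
    data=df,
).fit()
print(model.summary())                       # coefficient table with p-values
```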
304

Some Advanced Model Selection Topics for Nonparametric/Semiparametric Models with High-Dimensional Data

Fang, Zaili 13 November 2012 (has links)
Model and variable selection have attracted considerable attention in areas of application where datasets usually contain thousands of variables. Variable selection is a critical step in reducing the dimension of high-dimensional data by eliminating irrelevant variables. The general objective of variable selection is not only to obtain a cost-effective set of selected predictors but also to improve prediction and reduce prediction variance. We have made several contributions to this issue through a range of advanced topics: providing a graphical view of Bayesian Variable Selection (BVS), recovering sparsity in multivariate nonparametric models, and proposing a testing procedure for evaluating nonlinear interaction effects in a semiparametric model. To address the first topic, we propose a new Bayesian variable selection approach via the graphical model and the Ising model, which we refer to as the "Bayesian Ising Graphical Model" (BIGM). Our BIGM has several advantages: it is easy to (1) employ the single-site updating and cluster updating algorithms, both of which are suitable for problems with small sample sizes and a large number of variables, (2) extend the approach to nonparametric regression models, and (3) incorporate graphical prior information. In the second topic, we propose a Nonnegative Garrote on a Kernel machine (NGK) to recover the sparsity of input variables in smoothing functions. We model the smoothing function by a least squares kernel machine and construct a nonnegative garrote on the kernel model as a function of the similarity matrix. An efficient coordinate descent/backfitting algorithm is developed. The third topic involves a specific genetic pathway dataset in which the pathways interact with environmental variables. We propose a semiparametric method to model the pathway-environment interaction. We then employ a restricted likelihood ratio test and a score test to evaluate the main pathway effect and the pathway-environment interaction. / Ph. D.
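As an illustration of the nonnegative garrote idea (here on a plain linear model, not the kernel-machine version developed in the thesis), a coordinate-descent sketch follows; the data and the penalty level are made up.

```python
# Minimal sketch: nonnegative garrote by coordinate descent. Scale initial
# estimates beta_j by factors c_j >= 0 minimizing
#   0.5 * ||y - sum_j c_j * beta_j * x_j||^2 + lam * sum_j c_j.
import numpy as np

def nonnegative_garrote(X, y, beta_init, lam, n_iter=100):
    Z = X * beta_init                 # column j is beta_j * x_j
    c = np.ones(X.shape[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - Z @ c + Z[:, j] * c[j]              # partial residual
            c[j] = max(0.0, (Z[:, j] @ r - lam) / (Z[:, j] @ Z[:, j]))
    return c                          # exact zeros drop the corresponding variables

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)            # only 2 relevant inputs
y = X @ beta_true + rng.normal(size=200)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]          # initial estimates
print(nonnegative_garrote(X, y, beta_ols, lam=20.0))     # near 1 for relevant, 0 else
```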
305

Essays on Development Policies : Social Protection, Community-Based Development and Regional Integration

Bah, Adama 31 January 2014 (has links)
In this thesis, I aim to contribute to the recent international development debate by providing an analysis of some of the policies that are considered key elements of a development strategy. Focusing on social protection, community-based development and regional integration, I consider aspects related to their design, implementation and evaluation. In the first chapter, I propose a method to estimate ex ante vulnerability to poverty, defined as the probability of being poor in the near future given one's current characteristics. This is based on the premise that effective social protection policies should aim not only to help the poor move out of poverty, but also to protect the vulnerable from falling into it. In the second chapter, I consider the issue of identifying the poor when targeting social protection programs with a Proxy-Means Testing (PMT) approach, whose precision, and therefore usefulness, relies on the selection of indicators that produce accurate predictions of household welfare. I propose a method based on random sampling of consumption models to identify indicators that are robustly and strongly correlated with household welfare, measured by per capita consumption. These indicators span the categories of household private asset holdings, access to basic domestic energy, education level, sanitation and housing. The third and fourth chapters of this thesis provide an ex-post analysis of development policies, focusing in particular on the unintended consequences of a community-driven program and on the reasons for the lack of progress in regional economic integration. The third chapter assesses whether the reaction of the two distinct rebel groups that operate in the Philippines to the implementation of a large-scale community-driven development project funded by foreign aid is consistent with the idea that these two groups have different ideologies, characteristics and motives for fighting. It is based on a unique geo-referenced dataset that we collected from local newspaper reports on the occurrence of conflict episodes involving these rebel groups, and on the predictions of a rent-seeking model of insurgency. The findings are consistent with the proposed classification of the rebel groups; the impact of the foreign aid project on each rebel group depends on its ideological stance. In the last chapter, I analyze how civil conflicts affect the economic fate of African regional economic communities through their effect on the synchronicity of regional partners' economies. I find that conflict decreases business cycle synchronicity when it occurs within a regional economic community, both for the directly affected countries and for their more peaceful regional peers.
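The random-sampling idea of the second chapter can be sketched as follows; the survey file, variable names, subset size and significance threshold are all assumptions, not the thesis's actual specification.

```python
# Minimal sketch: select PMT indicators by fitting many consumption models on
# random subsets of candidates and counting how often each one is significant.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("household_survey.csv")    # assumed: log_pc_consumption + indicators
candidates = [c for c in df.columns if c != "log_pc_consumption"]
counts = {c: 0 for c in candidates}
rng = np.random.default_rng(0)

for _ in range(500):                        # 500 randomly drawn models
    subset = list(rng.choice(candidates, size=10, replace=False))
    X = sm.add_constant(df[subset])
    fit = sm.OLS(df["log_pc_consumption"], X).fit()
    for c in subset:
        if fit.pvalues[c] < 0.05:           # counted as robustly associated
            counts[c] += 1

robust = sorted(counts, key=counts.get, reverse=True)[:15]
print(robust)                               # indicators kept for the PMT score
```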
306

Data-driven goodness-of-fit tests / Datagesteuerte Verträglichkeitskriteriumtests

Langovoy, Mikhail Anatolievich 09 July 2007 (has links)
No description available.
307

Tuning of machine learning algorithms for automatic bug assignment

Artchounin, Daniel January 2017 (has links)
In software development projects, bug triage consists mainly of assigning bug reports to software developers or teams (depending on the project). Partial or total automation of this task would have a positive economic impact on many software projects. This thesis introduces a systematic four-step method to find some of the best configurations of several machine learning algorithms intended to solve the automatic bug assignment problem. The four steps are used, respectively, to select a combination of pre-processing techniques, a bug-report representation, a potential feature-selection technique, and to tune several classifiers. The method has been applied to three software projects: 66,066 bug reports of a proprietary project, 24,450 bug reports of Eclipse JDT and 30,358 bug reports of Mozilla Firefox. 619 configurations have been applied and compared on each of these three projects. In production, using the approach introduced in this work on the bug reports of the proprietary project would have increased accuracy by up to 16.64 percentage points.
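A hedged sketch of that kind of tuning pipeline: text representation, optional feature selection and classifier hyperparameters searched jointly by cross-validation. The data file, fields and grid values below are hypothetical, and the grid is far smaller than the 619 configurations compared in the thesis.

```python
# Minimal sketch: cross-validated search over representation, feature
# selection, and classifier settings for bug-report assignment.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

bugs = pd.read_csv("bug_reports.csv")       # assumed: 'text' and 'team' columns
pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),           # bug-report representation
    ("select", SelectKBest(chi2)),          # optional feature selection
    ("clf", LinearSVC()),                   # classifier to tune
])
grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "select__k": [1000, 5000],
    "clf__C": [0.1, 1.0, 10.0],
}
search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(bugs["text"], bugs["team"])
print(search.best_params_, search.best_score_)
```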
308

Computational Bayesian techniques applied to cosmology

Hee, Sonke January 2018 (has links)
This thesis presents work around three themes: dark energy, gravitational waves and Bayesian inference. Neither dark energy nor gravitational-wave physics is yet well constrained; both present interesting challenges for Bayesian inference, which attempts to quantify our knowledge of the universe given our astrophysical data. A dark energy equation-of-state reconstruction analysis finds that the data favour the vacuum dark energy equation of state $w = -1$. Deviations from vacuum dark energy are shown to favour the super-negative 'phantom' dark energy regime of $w < -1$, but at low statistical significance. The constraining power of various datasets is quantified, finding that data constraints peak around redshift $z = 0.2$ due to baryonic acoustic oscillation and supernova data, whilst cosmic microwave background radiation and Lyman-$\alpha$ forest constraints are less significant. Specific models with a conformal time symmetry in the Friedmann equation and with an additional dark energy component are tested and shown by Bayesian model selection analysis to be competitive with the vacuum dark energy model; that they are not ruled out is believed to be largely due to data quality insufficient to decide between existing models. Recent detections of gravitational waves by the LIGO collaboration enable the first gravitational-wave tests of general relativity. An existing test from the literature is used and sped up significantly by a novel method developed in this thesis. The test computes posterior odds ratios, and the new method is shown to compute these accurately and efficiently, providing an approximately 100-fold reduction in the number of likelihood calculations required to compute evidences at a given accuracy. Further testing may identify a significant advance in Bayesian model selection using nested sampling, as the method is completely general and straightforward to implement. We note that efficiency gains are not guaranteed and may be problem specific: further research is needed.
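The Occam penalty behind such model-selection results can be sketched with a toy evidence calculation comparing a fixed $w = -1$ model against a free-$w$ model under a flat prior; the numbers below are illustrative, not the thesis's constraints.

```python
# Minimal sketch: Bayesian evidence comparison for dark energy equation of
# state, using a toy Gaussian likelihood for w (illustrative numbers only).
import numpy as np

w_hat, sigma = -1.05, 0.08                  # pretend data constraint on w
loglike = lambda w: -0.5 * ((w - w_hat) / sigma) ** 2

# Model 1: w fixed at -1 (no free parameter), evidence = likelihood at w = -1.
Z1 = np.exp(loglike(-1.0))

# Model 2: w free with a flat prior on [-2, 0], evidence by marginalization.
w = np.linspace(-2.0, 0.0, 4001)
Z2 = np.sum(np.exp(loglike(w))) * (w[1] - w[0]) / 2.0    # prior density = 1/2

print("Bayes factor Z1/Z2:", Z1 / Z2)       # > 1: the simpler w = -1 model wins,
                                            # because the free parameter pays an
                                            # Occam penalty for its prior volume
```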
309

Análise de carteiras em tempo discreto / Discrete time portfolio analysis

Kato, Fernando Hideki 14 April 2004 (has links)
In this thesis, Markowitz's portfolio selection model will be extended by means of a discrete-time analysis and more realistic hypotheses. A finite tensor product of Erlang densities will be used to approximate the multivariate probability density function of the single-period discrete returns of dependent assets. The Erlang is a particular case of the Gamma distribution. A finite mixture can generate multimodal asymmetric densities, and the tensor product generalizes this concept to higher dimensions. Assuming that the multivariate density was independent and identically distributed (i.i.d.) in the past, the approximation can be calibrated with historical data using the maximum likelihood criterion. This is a large-scale optimization problem, but one with a special structure. Assuming that this multivariate density will be i.i.d. in the future, the density of the discrete returns of a portfolio of assets with nonnegative weights will be a finite mixture of Erlang densities. The risk will be calculated with the Downside Risk measure, which is convex for certain parameters, is not based on quantiles, does not cause risk underestimation, and makes the single- and multi-period optimization problems convex. The discrete return is a multiplicative random variable over time. The multiperiod distribution of the discrete returns of a sequence of T portfolios will be a finite mixture of Meijer G distributions. After a change of the probability measure to the average compound measure, it is possible to calculate the risk and the return, which leads to the multiperiod efficient frontier, where each point represents one or more ordered sequences of T portfolios. The portfolios of each sequence must be calculated from the future to the present, keeping the expected return at the desired level, which can be a function of time. A dynamic asset-allocation strategy is to redo the calculations at each period, using newly available information. If the time horizon tends to infinity, the efficient frontier, in the average compound probability measure, tends to a single point, given by the Kelly portfolio, whatever the risk measure. To select one among several portfolio optimization models, it is necessary to compare their relative performances. The efficient frontier of each model must be plotted in its respective graph. As the weights of the assets of the portfolios on these curves are known, it is possible to plot all curves in the same graph. For a given expected return, the efficient portfolios of the models can be calculated, and the realized returns and their differences along a backtest can be compared.
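The single-period downside-risk optimization described above can be sketched as follows, with simulated scenario returns standing in for the fitted Erlang-mixture model; the floor, moment order and target return are illustrative.

```python
# Minimal sketch: single-period portfolio choice minimizing a lower partial
# moment (downside risk) under a target expected return and nonnegative weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.normal(0.01, 0.05, size=(1000, 4))   # scenario returns for 4 assets
target = 0.01                                # required expected return

def downside_risk(w, order=2, floor=0.0):
    port = R @ w
    short = np.maximum(floor - port, 0.0)    # shortfall below the floor
    return np.mean(short ** order)           # lower partial moment of given order

cons = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},             # fully invested
    {"type": "ineq", "fun": lambda w: R.mean(axis=0) @ w - target},
]
res = minimize(downside_risk, np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, constraints=cons)
print("weights:", res.x.round(3))            # one point on the efficient frontier
```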
