31 |
Medidas de seção de choque de fusão para os sistemas 12C + 63,65Cu / Fusion cross-section measurements for the 12C + 63,65Cu systems. Rocha, Carlos Antonio da. 14 December 1987
This work presents a set of cross-section measurements for the nuclear fusion process in the 12C + 63,65Cu systems, over an energy range corresponding to 0.9 to 1.8 times the Coulomb barrier. The experimental method was the detection and mass identification of the evaporation residues by the time-of-flight technique, coupled to an electrostatic deflector that separates elastically scattered beam particles from the fusion products. The advantages and limitations of the method are discussed in detail. The fusion excitation functions of these systems were analysed with the one-dimensional barrier penetration model using several nuclear potentials; this analysis showed that the calculated fusion cross section systematically underestimates the measured values at energies around and below the Coulomb barrier. A coupled-channels calculation was therefore performed with the CCFUS code in order to identify which channels enhance the fusion process at low energies. The measured fusion cross sections were also compared with those of other systems, measured by our group, that use the same target nuclei. This comparison showed that the 12C + 63,65Cu systems exhibit the largest sub-Coulomb enhancement of the fusion cross section, a fact that may be related to the static deformation of the projectile. Finally, a comparison of the evaporation-residue velocity distributions for the two systems revealed an intense reaction channel in 12C + 63Cu that is not present in 12C + 65Cu.
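For reference, the one-dimensional barrier-penetration estimate mentioned above is commonly written in the closed form of Wong's approximation; the barrier height V_B, radius R_B and curvature \hbar\omega below are generic symbols, not the fitted values of this work:

    \sigma_{\mathrm{fus}}(E) \;=\; \frac{\hbar\omega\, R_B^{2}}{2E}\,\ln\!\left\{1+\exp\!\left[\frac{2\pi\,(E-V_B)}{\hbar\omega}\right]\right\}

Well above the barrier this reduces to the classical \sigma_{\mathrm{fus}} \approx \pi R_B^{2}\,(1-V_B/E), while below the barrier it falls off exponentially, which is why couplings to additional channels (as simulated here with CCFUS) can raise the sub-barrier cross section well above this one-dimensional estimate.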
|
33 |
Regularized Jackknife estimation with many instruments. Doukali, Mohamed. 10 1900
No description available.
|
34 |
Medidas de secções de choque de fusão dos sistemas 16O + 46,50Ti / Fusion cross-section measurements for the 16O + 46,50Ti systems. Raphael Liguori Neto. 29 September 1986
Excitation functions for the complete fusion of the 16O + 46,50Ti systems were measured at energies near and below the Coulomb barrier. Using in-beam and off-beam spectroscopy, the formation of the compound nucleus was detected experimentally, and the fusion cross section was obtained as the sum of the cross sections of all experimentally observed compound-nucleus decay channels. The advantages and limitations of this measurement method are discussed in detail. A theoretical analysis of the excitation functions with semiclassical barrier-penetration models yielded the fusion barrier height and radius for the studied systems; the values obtained are in good agreement with those reported in the literature for the same mass region. The one-dimensional barrier-penetration model, with different nuclear potentials describing the heavy-ion interaction, gives fusion cross sections systematically smaller than the measured values at energies below the Coulomb barrier. Introducing zero-point vibrations of the surfaces of the interacting nuclei enhances the calculated sub-Coulomb cross sections, but still does not reproduce the data satisfactorily, because it predicts an isotopic difference between the excitation functions that is not observed experimentally. The statistical-model predictions for the compound-nucleus decay (CASCADE program) show satisfactory agreement for the most intense decay channels.
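As an illustration of the semiclassical analysis described above, the barrier height and radius can be read off the above-barrier part of an excitation function, where \sigma_{\mathrm{fus}}(E) \approx \pi R_B^{2}(1 - V_B/E), so that \sigma_{\mathrm{fus}}\,E is linear in E. A minimal sketch of such a fit follows; the file name, column layout and units are assumptions, not the actual data pipeline of this work:

    import numpy as np

    # Hypothetical input: above-barrier points, E in MeV, sigma in fm^2 (1 mb = 0.1 fm^2).
    E, sigma = np.loadtxt("fusion_excitation.dat", unpack=True)

    # sigma*E = pi*R_B^2 * E - pi*R_B^2 * V_B  ->  straight line in E
    slope, intercept = np.polyfit(E, sigma * E, 1)
    R_B = np.sqrt(slope / np.pi)      # barrier radius in fm
    V_B = -intercept / slope          # barrier height in MeV
    print(f"V_B = {V_B:.1f} MeV, R_B = {R_B:.2f} fm")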
|
35 |
Essais en économetrie et économie de l'éducation / Essays in econometrics and the economics of education. Tchuente Nguembu, Guy. 07 1900
No description available.
|
36 |
Développement d’un nouveau modèle de criblage tridimensionnel pour la découverte de médicaments épigénétiques contre le cancer du poumon / Development of a new three-dimensional screening model for the discovery of epigenetic drugs against lung cancer. Mc Innes, Gabrielle. 11 1900
Small-molecule development in oncology still relies mainly on high-throughput drug screening and preclinical validation studies using cancer cells grown in two dimensions (2D). However, classical 2D cell culture methods poorly reflect the pathophysiology of solid tumors and the cell epigenome, and our working hypothesis is that this lack of representativeness during in vitro pharmacological screening contributes to the low success rate of small molecules in clinical trials. Our objective was therefore to develop a 3D model that displays key epigenetic features of solid tumors in order to identify new actionable targets. First, we established culture conditions for the long-term expansion of lung adenocarcinoma spheroids, which keeps the cells in spheroid form for up to 38 days and allows cell adaptation and the emergence of epigenetic features specific to the 3D condition. Cells cultivated as 3D spheroids adapt rapidly, decreasing in size and significantly slowing their metabolism, and they exhibit marked phenotypic and epigenetic changes compared with 2D monolayers. Notably, we observed numerous expression changes of key epigenetic regulators, each occurring at a different time point of the 3D culture: expression of the KAT3A/KAT3B complex and of BRG1 decreased in a time-dependent manner, whereas HDAC6 expression increased in 3D. We then asked whether the epigenetic changes triggered by 3D culture would modify drug sensitivity, and screened a library of 154 epigenetic drugs on cancer cells cultivated in 2D and in 3D at two different time points.
60% of epigenetic drugs showed significant anticancer activity against 2D monolayers. Interestingly, A549 cells in 3D spheroids became gradually resistant over time. Against 3D spheroids cultivated for 10 days, only 9% of epigenetic drugs in the drug library showed anticancer activity. Against 3D spheroids cultivated for 24 days, only a single epigenetic compound called MS023, a selective agent against type I PRMTs, reduced cell viability significantly. This sensitivity is correlated with an increase of arginine methylation observed within spheroids.
Taken together, we show that 3D spheroids trigger a time-dependent epigenetic context that increases the sensitivity of lung cancer cells to type I PRMT inhibition. 3D spheroids of well-characterized cancer cell lines will improve our understanding of tumor biology and drug discovery and could help overcome the high false-discovery rate associated with classical 2D models.
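For orientation, hit rates like those quoted above are typically tallied by applying a viability cut-off per condition; the sketch below is generic, and the file name, column names and 50% threshold are assumptions rather than the laboratory's actual analysis pipeline:

    import pandas as pd

    # Hypothetical table: one row per drug and condition, viability as % of vehicle control.
    df = pd.read_csv("screen_results.csv")   # assumed columns: drug, condition, viability_pct
    HIT_CUTOFF = 50.0                        # assumed: <50% viability counts as a hit

    n_drugs = df.groupby("condition")["drug"].nunique()
    n_hits = df[df["viability_pct"] < HIT_CUTOFF].groupby("condition")["drug"].nunique()
    hit_rate = (n_hits.reindex(n_drugs.index, fill_value=0) / n_drugs * 100).round(1)
    print(hit_rate)   # hit rate (%) for e.g. 2D, 3D day 10, 3D day 24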
|
37 |
Alminares mudéjares de la marca superior. Nueva aproximación a su evolución histórica. El caso de la torre de San Pablo en Zaragoza / Mudejar minarets of the Marca Superior: a new approach to their historical evolution. The case of the San Pablo tower in Zaragoza. Molina Sánchez, Susana. 13 October 2022
The eight centuries of Islamic cultural dominance in the Iberian Peninsula led to the proliferation of an extensive cultural, architectural and artistic heritage. In this context, Mudejar art was born as a fusion of styles arising from the coexistence of different cultures. The Aragonese Mudejar minaret towers are a faithful reflection of this type of architecture, and they are the object of study of this work.
In this sense, the geographical framework of the thesis covers the territory of the former Marca Superior of Al-Andalus, which includes the valleys of the Ebro, Jalón and Jiloca rivers and their tributaries, the nerve centres of the essence of the Mudejar style (with its towers as its greatest exponent).
The interest in researching the historical-constructive evolution of these towers stems from their peculiar architecture, a product of the synthesis between East and West. At the same time, the scarcity of documentation from the period, even taken together with the studies carried out to date, still leaves a series of gaps in our knowledge of these towers, especially with regard to their origin.
It is therefore appropriate to address these gaps with a new approach to the study of their historical-constructive evolution. For this purpose, a digital survey methodology is established, specific to tower-type elements, applying photogrammetry (SfM) and laser scanning as tools to support the reading of their masonry fabrics.
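By way of illustration only, the sketch below shows one way a photogrammetric (SfM) point cloud and a terrestrial laser scan of the same tower could be co-registered with ICP; the library choice (Open3D), file names, voxel size and distance threshold are assumptions, not the workflow actually used in the thesis:

    import numpy as np
    import open3d as o3d

    # Hypothetical inputs: a dense SfM cloud and a laser scan of the same tower.
    sfm = o3d.io.read_point_cloud("tower_sfm.ply")
    tls = o3d.io.read_point_cloud("tower_laser.ply")

    # Downsample for speed (2 cm voxels, illustrative choice).
    sfm_ds = sfm.voxel_down_sample(voxel_size=0.02)
    tls_ds = tls.voxel_down_sample(voxel_size=0.02)

    # Refine an initial (here: identity) alignment with point-to-point ICP.
    reg = o3d.pipelines.registration.registration_icp(
        sfm_ds, tls_ds, 0.05, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print("fitness:", reg.fitness, "inlier RMSE:", reg.inlier_rmse)

    # Bring the full-resolution SfM model into the scan's reference frame.
    sfm.transform(reg.transformation)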
The great graphic potential of the results obtained (textured, highly realistic 3D models with high geometric precision and very good definition) makes it possible to deepen our knowledge of these towers. These results are reliable and carry the scientific rigour required for the subsequent formulation of a hypothesis about the original minaret from which the tower under study derives, a hypothesis that could be extrapolated to the rest of the towers. / Molina Sánchez, S. (2022). Alminares mudéjares de la marca superior. Nueva aproximación a su evolución histórica. El caso de la torre de San Pablo en Zaragoza [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/188135
|
38 |
Misspecified financial models in a data-rich environment. Nokho, Cheikh I. 03 1900
In finance, asset pricing models try to understand the differences in expected returns observed among various assets. Hansen and Richard (1987) showed that these models are functional representations of the stochastic discount factor investors use to price assets in the financial market. The literature counts many econometric studies that deal with their estimation and with the comparison of their performance, i.e., how well they explain the observed differences in expected returns. This thesis, divided into three chapters, contributes to this literature.
The first paper examines the estimation and comparison of asset pricing models in a data-rich environment. We implement two interpretable regularization schemes to extend the renowned Hansen and Jagannathan (1997, HJ hereafter) distance to a setting with many test assets. Specifically, we introduce Tikhonov and Ridge regularizations to stabilize the inverse of the covariance matrix in the HJ distance. The resulting misspecification measure can be interpreted as the distance between a proposed pricing kernel and the nearest valid stochastic discount factor (SDF) pricing the test assets with controlled errors, relaxing the Fundamental Equation of Asset Pricing. These methods thus incorporate a regularization parameter governing the extent of the pricing errors. Subsequently, we present a procedure to estimate the SDF parameters of a linear asset pricing model by minimizing the regularized distance. The SDF parameters completely define the asset pricing model and determine whether a particular observed factor is a priced source of risk in the test assets. In addition, we derive the asymptotic distribution of the estimators when the number of assets and time periods increases. Finally, we derive the distribution of the regularized distance to comprehensively compare different asset pricing models. Empirically, we estimate and compare four asset pricing models using a dataset of 252 portfolios.
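For readers unfamiliar with the object being regularized, a schematic version of the HJ distance and of a Ridge-type modification is given below; the notation is generic and simplified, not the exact formulation of the thesis. With gross returns R_t, a candidate SDF m_t(\theta) and sample pricing errors g_T(\theta) = \frac{1}{T}\sum_{t=1}^{T} m_t(\theta)\,R_t - \mathbf{1}_N, the sample HJ distance is

    \hat{\delta}_T(\theta) \;=\; \sqrt{\, g_T(\theta)' \, \hat{W}_T \, g_T(\theta) \,}, \qquad \hat{W}_T \;=\; \Big( \tfrac{1}{T}\textstyle\sum_{t=1}^{T} R_t R_t' \Big)^{-1},

and a Ridge-regularized variant replaces the weighting matrix by \hat{W}_{T,\alpha} = \big( \tfrac{1}{T}\sum_{t} R_t R_t' + \alpha I_N \big)^{-1}, where the tuning parameter \alpha > 0 stabilizes the inverse when the number of test assets N is large.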
The second paper estimates and compares ten asset pricing models, both unconditional and conditional, utilizing the regularized HJ distance and 3198 portfolios spanning July 1973 to June 2018. These portfolios combine the well-known characteristic-sorted portfolios with micro portfolios. The micro portfolios are formed using firms' observed financial characteristics (e.g. size and book-to-market) but contain few stocks (5 to 10), as discussed in Barras (2019). Consequently, they are analogous to individual stocks, offer significant return spread, and improve the discriminatory power of the characteristics-sorted portfolios. Among the models, four are macroeconomic or theoretical models, including the Consumption Capital Asset Pricing Model (CCAPM), Durable Consumption Capital Asset Pricing Model (DCAPM) by Yogo (2006), Human Capital Capital Asset Pricing Model (HCAPM) by Jagannathan and Wang (1996), and Intermediary Asset pricing model (IAPM) by He, Kelly, and Manela (2017). Five anomaly-driven models are considered, such as the three (FF3) and Five-factor (FF5) Models proposed by Fama and French, 1993 and 2015, the Carhart (1997) model incorporating momentum into FF3, the Liquidity Model by Pástor and Stambaugh (2003), and the Augmented q-Factor Model (q5) by Hou et al. (2021). The Consumption model of Lettau and Ludvigson (2001) using quarterly data is also estimated but not included in the comparisons due to the reduced power of the tests. Compared to the unconditional models, the conditional ones account for the economic business cycles and financial market fluctuations by utilizing the macroeconomic and financial uncertainty indices of Ludvigson, Ma, and Ng (2021). These conditional models show significantly reduced pricing errors. Comparative analyses of the unconditional models indicate that the macroeconomic models exhibit similar pricing performances of the returns. In addition, they display lower overall explanatory power than anomaly-driven models, except for FF3. Augmenting FF3 with momentum and liquidity factors enhances its explanatory capability. However, the new model is inferior to FF5 and q5. For the conditional models, the macroeconomic models DCAPM and HCAPM outperform CCAPM and IAPM. Furthermore, they have similar pricing errors as the conditional Carhart and liquidity models but still fall short of the FF5 and q5. The latter dominates all the other models.
This third paper introduces a novel approach for estimating the SDF parameters in misspecified linear asset pricing models with many assets. Unlike the first paper, Carrasco and Nokho (2022), this approach is applicable to both gross and excess returns as test assets. The proposed method still regularizes the HJ distance: the inverse of the second-moment matrix is the weighting matrix for the gross returns, while for excess returns, it is the inverse of the covariance matrix. Specifically, we derive the asymptotic distribution of the SDF estimators under a double asymptotic condition where the number of test assets and time periods go to infinity. We also discuss relevant considerations for each type of return and document the finite sample properties of the SDF estimators with gross and excess returns. We find that as the number of test assets increases, the estimation of the SDF parameters through the regularization of the inverse of the excess returns covariance matrix exhibits superior size control compared to the regularization of the inverse of the gross returns second-moment matrix. This superiority arises from the inherent instability of the second-moment matrix of gross returns. Additionally, the gross return of the risk-free asset shows minimal variability, resulting in significant collinearity with other test assets that the regularization fails to mitigate.
|