  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

United Front and Action vs. Beautiful Coffee Cups: Fluxus Through the Publications of George Maciunas and Dick Higgins

Reeves, Chris M. 21 September 2012 (has links)
No description available.
62

Understanding Solute-Solvent Interaction and Evaporation Kinetic in Binary-Solvent and Solvent-Polymer Systems / Förståelse av lösningmedelsinteraktioner och avdunstningskinetik i binära lösningsmedel- och lösningsmedel-polymersystem

Henrysson, Sandra January 2024 (has links)
This thesis investigates the evaporation kinetics of various polymer-solvent and binary solvent mixtures to identify possible connections between a solution's properties and its evaporation process. Evaporation was followed by recording the change in sample weight as the solvent evaporates and by calculating the resulting evaporation rate. The results indicate that the presence of a polymer influences solvent evaporation: polystyrene (PS) generally accelerates evaporation, while poly(methyl methacrylate) (PMMA) either decelerates it or has minimal impact. Binary solvent mixtures exhibited non-proportional increases in evaporation rate, suggesting complex intermolecular interactions, but no clear pattern emerged between the solvents' properties and the deviations in their evaporation behaviour; further work is needed before the evaporation process can be predicted from solution properties. Nevertheless, these findings highlight the importance of understanding polymer-solvent compatibility and evaporation dynamics in order to improve performance and to identify environmentally friendly solvents for organic photovoltaic (OPV) cell fabrication.
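The gravimetric procedure described in this abstract reduces to fitting the rate of mass loss over time. Below is a minimal sketch of that calculation, assuming hypothetical data and a simple linear fit over the steady evaporation regime; it is not the thesis's own analysis code.

```python
# Hypothetical sketch: estimating an evaporation rate from gravimetric data
# (mass recorded over time). Data, names and the linear-fit assumption are
# illustrative, not taken from the thesis.
import numpy as np

def evaporation_rate(time_s: np.ndarray, mass_g: np.ndarray) -> float:
    """Return the evaporation rate in g/s as the negated slope of a
    least-squares line fitted to mass vs. time."""
    slope, _intercept = np.polyfit(time_s, mass_g, deg=1)
    return -slope  # mass decreases, so the rate is the negative slope

# Example with synthetic data: ~1.00 g of solvent losing about 0.002 g/s.
rng = np.random.default_rng(0)
t = np.arange(0, 300, 10.0)                          # seconds
m = 1.00 - 0.002 * t + rng.normal(0, 1e-4, t.size)   # grams
print(f"evaporation rate ≈ {evaporation_rate(t, m):.4f} g/s")
```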
63

Minimally Invasive, Integrated Endoscopic Hemilaminectomy for Hansen Type I Intervertebral Disc Extrusion in Chondrodystrophic Dogs

Drury, Adam Gardner 24 May 2021 (has links)
The objective of this prospective pilot study is to assess the feasibility of a minimally invasive, integrated endoscopic hemilaminectomy in chondrodystrophic dogs with clinically relevant Hansen type I intervertebral disc extrusion (IVDE). Study subjects included five client-owned chondrodystrophic dogs under 15 kg with an acute, single-site IVDE between T10 and L5 of less than 90 days' duration and no loss of deep pain perception. The extent of the extrusion could not exceed two-thirds of the diameter of the cannula to be used, as defined by magnetic resonance imaging (MRI). A postoperative MRI was performed to assess remaining spinal cord compression; if significant compression remained, patients returned to surgery for a standard, open hemilaminectomy. Only the first dog required conversion to an open approach, which resulted in adequate decompression. The same dog had a significant surgical complication of iatrogenic damage to the spinal cord during the minimally invasive approach. The other four dogs had no complications and achieved adequate spinal cord decompression. Three dogs eventually returned to normal neurologic status and another was improved compared to presentation. One dog was euthanized for reasons unrelated to IVDE. The authors conclude that a minimally invasive, integrated endoscopic hemilaminectomy is a feasible approach and can allow adequate decompression of the spinal cord secondary to acute, single-site extrusion. Endoscopic approaches have a steep learning curve, and extra care is required in the learning phase to avoid complications. Further studies are warranted to compare the safety and efficacy of this technique to the standard approach. / Master of Science / Acute intervertebral disc extrusion, or "slipped disc", is a common spinal emergency in dogs, particularly in small, chondrodystrophic breeds such as dachshunds. Surgery aims to remove the disc material causing spinal cord compression. The traditional approach, known as a hemilaminectomy, involves elevating the muscles along the spine over multiple vertebrae and then creating a window in the bone with a surgical burr. Minimally invasive spinal surgery, which minimizes the elevation of these muscles, has the potential to decrease postoperative pain, surgical time, hospital stay, intraoperative blood loss and recovery time. This study was designed to assess the use of a minimally invasive, integrated endoscopic approach to a hemilaminectomy in clinical patients. Five dogs were enrolled with an acute, single-site intervertebral disc extrusion between T10 and L5 that was no more than two-thirds of the diameter of the cannula to be used in surgery. Study subjects were chondrodystrophic breeds under 15 kg, and all dogs had intact deep pain perception. Spinal cord compression was assessed by magnetic resonance imaging (MRI) both before and after the minimally invasive approach; if significant compression remained, a standard, open approach was immediately performed. Spinal cord decompression was adequate in all but one dog, which required a second procedure to remove the remaining material. This same dog suffered accidental damage to the spinal cord during the minimally invasive approach. Three dogs eventually returned to normal neurologic status, and the dog that required a second, traditional surgery eventually improved compared to its preoperative status. One dog was improving but was euthanized eight days later due to chronic disease unrelated to IVDE.
This approach is feasible for decompressing the spinal cord after a single-site, acute intervertebral disc extrusion in a chondrodystrophic dog. However, as with any endoscopic surgery, previous experience is of great benefit and errors are more likely to occur during the learning phase.
64

Avaliando o desempenho preditivo de modelos de taxa de câmbio real efetiva: análise do caso brasileiro

Saba, Nicole de Mendonça 19 August 2015 (has links)
This paper seeks to identify which variables are most relevant for forecasting Brazil's real effective exchange rate and to analyze the robustness of those forecasts. To that end, Johansen cointegration tests were conducted on 13 macroeconomic variables. The database consists of quarterly series covering the period from 1970 to 2014, and the tests were run on the series combined in subsets of two, three and four variables. This procedure identified nine cointegrated groups. Using these groups, out-of-sample forecasts were produced over the last 60 observations. Forecast quality was evaluated through mean squared errors and the Diebold-Mariano test, and a random walk model of the real exchange rate was used as the benchmark for Hansen's model confidence set procedure. All of the tests show that, as the forecast horizon lengthens, the random walk loses predictive power and most of the models become more informative about the future real effective exchange rate; the relevant horizon is three to four years ahead. In Hansen's model confidence set, the random walk is eliminated entirely from the final set of models, showing that it is possible to produce forecasts superior to the random walk in the long term.
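For readers who want to see the general workflow in code, the sketch below illustrates the two main ingredients described in the abstract: a Johansen cointegration test and a Diebold-Mariano comparison against a benchmark. It uses simulated data and the statsmodels implementation of the Johansen test; the variable names, lag order and the simplified DM statistic are assumptions, not the thesis's actual specification.

```python
# Illustrative sketch (not the thesis code): Johansen cointegration test on two
# simulated series plus a simple Diebold-Mariano comparison of forecast errors.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
# Two artificial quarterly series sharing a stochastic trend (so they cointegrate).
trend = np.cumsum(rng.normal(size=180))
reer = trend + rng.normal(scale=0.5, size=180)         # "real exchange rate"
tot = 0.8 * trend + rng.normal(scale=0.5, size=180)    # "terms of trade"

res = coint_johansen(np.column_stack([reer, tot]), det_order=0, k_ar_diff=1)
print("trace statistics:   ", res.lr1)        # compare against critical values
print("5% critical values: ", res.cvt[:, 1])

def diebold_mariano(e1: np.ndarray, e2: np.ndarray) -> float:
    """DM statistic for equal predictive accuracy under squared-error loss
    (no small-sample correction; loss differential assumed serially uncorrelated)."""
    d = e1**2 - e2**2
    return d.mean() / np.sqrt(d.var(ddof=1) / d.size)

# Example: forecast errors of a candidate model vs. a random-walk benchmark.
e_model = rng.normal(scale=1.0, size=60)
e_rw = rng.normal(scale=1.3, size=60)
print("DM statistic:", diebold_mariano(e_model, e_rw))  # negative favours the model
```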
65

Folkbibliotek och Facebook : Hur folkbibliotek arbetar med Facebook / Public libraries and Facebook : How public libraries use Facebook

Cattani, Fredrik, Brassman, Anna-Stina January 2018 (has links)
The aim of this study is to examine how and why public libraries use Facebook. The internet has changed how people interact with one another in society, and public libraries, like their users, are now active on social media. The study is based on two questions: How do public libraries work with Facebook? What purposes does this work serve? Employees at eight different public libraries were interviewed, and the material was analysed using the four-space model of Jochumsen et al. The findings show that the work with Facebook is organised differently across the libraries: it is handled either by a group or by one or two individuals. The number of posts, how they are written and when they are published also vary among the participating libraries. Consistency in posting leads to more followers and greater success on Facebook. The findings also show that the main reason for public libraries to use Facebook is to promote their activities and services. A majority of the libraries want to interact with their users, but this is difficult to achieve. Contrary to what has previously been written on the subject, a majority of the informants agreed that time is not an issue, as time for Facebook is included in the planning of their work.
66

Vers l'industrialisation de cellules solaires photovoltaïques organiques imprimables à base de semi-conducteurs moléculaires / Toward the industrialization of organic printable solar cells based on molecular semiconductors

Destouesse, Élodie 24 June 2016 (has links)
Organic solar cells have long been referred to as "polymer" solar cells, because the active layer of such cells has mostly been made with an electron-donor polymer. The use of a polymer in the active layer provides film-forming properties that make solution processing, and therefore high-speed printing, a viable route to production. Another class of electron-donor material exists, however: small molecules. Deposited by thermal evaporation, small molecules can yield high-efficiency cells, but because of their poor film-forming properties they were not considered good candidates for an industrial printing process. In 2012, several solution-processable small molecules appeared and demonstrated laboratory-scale efficiencies high enough to make industrial production conceivable. This PhD work was carried out in collaboration with ARMOR, a company aiming to commercialize organic solar cells, in order to evaluate whether electron-donor small molecules could be used industrially in a high-speed printing process. p-DTS(FBTTh2)2 was chosen for this study. It was shown that efficiencies as high as 2% can be reached with this material in air, using non-toxic solvents and a doctor-blade coating process. Industrialization of p-DTS(FBTTh2)2 was not pursued, however, because the molecule degrades rapidly in air. This work presents a methodology that can be used to evaluate the industrialization potential of other efficient small molecules.
67

Etude physico-chimique d’organogels et d’aérogels de faible poids moléculaire dérivés d’acides aminés / Physico-chemical study of amino-acid-based low-molecular-weight organogels and aerogels

Allix, Florent 14 June 2011 (has links)
This work describes the synthesis and the gelation properties of new amino-acid-based low-molecular-weight derivatives in organic solvents, as well as the preparation of the corresponding aerogels by supercritical CO2 drying. We showed that, in our case, the presence of leucine or phenylalanine side chains was necessary for gelation. A study of solvent parameters showed that the Hansen δh parameters of the gelled solvents fall within a narrow domain of low values, which includes aromatic and chlorinated solvents. Several spectroscopic techniques (IR, NMR, circular dichroism and fluorescence) were used to identify the interactions responsible for the gelation phenomenon: hydrogen bonds drive the one-dimensional stacking of the gelator molecules, and these stacks then associate through intercolumnar π-stacking interactions. Monolithic aerogels were obtained; they display noteworthy properties, among them an extremely low thermal conductivity under vacuum.
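As a concrete illustration of the solvent-parameter screening idea, the sketch below flags solvents whose Hansen hydrogen-bonding parameter δh falls in a low-value window. The cutoff and the δ values are illustrative assumptions (approximate handbook numbers), since the abstract does not report the exact boundary of the gelation domain.

```python
# Hypothetical screening sketch inspired by the solvent-parameter analysis above.
# δ values are approximate handbook Hansen parameters in MPa^0.5; the cutoff is
# an assumed stand-in for the thesis's "narrow low-value" δh window.
solvents = {
    #              δd    δp    δh
    "toluene":    (18.0,  1.4,  2.0),
    "chloroform": (17.8,  3.1,  5.7),
    "acetone":    (15.5, 10.4,  7.0),
    "ethanol":    (15.8,  8.8, 19.4),
}

DELTA_H_MAX = 6.0  # assumed upper bound of the low-δh gelation window

def likely_gelled(name: str) -> bool:
    """Return True if the solvent's δh falls below the assumed cutoff."""
    _dd, _dp, dh = solvents[name]
    return dh <= DELTA_H_MAX

for name, (_, _, dh) in solvents.items():
    print(f"{name:10s} δh = {dh:5.1f}  gel candidate: {likely_gelled(name)}")
```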
68

Misspecified financial models in a data-rich environment

Nokho, Cheikh I. 03 1900 (has links)
In finance, asset pricing models try to explain the differences in expected returns observed across assets. Hansen and Richard (1987) showed that these models are functional representations of the discount factor investors use to price assets in the financial market. The literature contains many econometric studies dealing with their estimation and with the comparison of their performance, i.e., how well they explain the observed differences in expected returns. This thesis, divided into three papers, contributes to that literature.

The first paper examines the estimation and comparison of asset pricing models in a data-rich environment. We implement two interpretable regularization schemes to extend the well-known Hansen and Jagannathan (1997, HJ hereafter) distance to a setting with many test assets. Specifically, we introduce Tikhonov and Ridge regularizations to stabilize the inverse of the covariance matrix in the HJ distance. The resulting misspecification measure can be interpreted as the distance between a proposed pricing kernel and the nearest valid stochastic discount factor (SDF) that prices the test assets with controlled errors, thereby relaxing the fundamental equation of asset pricing; these methods therefore incorporate a regularization parameter governing the extent of the pricing errors. We then present a procedure to estimate the SDF parameters of a linear asset pricing model by minimizing the regularized distance. The SDF parameters completely define the asset pricing model and determine whether a particular observed factor is a priced source of risk in the test assets. In addition, we derive the asymptotic distribution of the estimators as the number of assets and time periods grows, as well as the distribution of the regularized distance, which allows a comprehensive comparison of different asset pricing models. Empirically, we estimate and compare four asset pricing models using a dataset of 252 portfolios.

The second paper estimates and compares ten asset pricing models, both unconditional and conditional, using the regularized HJ distance and 3,198 portfolios spanning July 1973 to June 2018. These portfolios combine the well-known characteristic-sorted portfolios with micro portfolios. The micro portfolios are formed from firms' observed financial characteristics (e.g., size and book-to-market) but contain few stocks (5 to 10), as discussed in Barras (2019); they are therefore analogous to individual stocks, offer a significant return spread, and improve the discriminatory power of the characteristic-sorted portfolios. Among the models, four are macroeconomic or theoretical: the Consumption Capital Asset Pricing Model (CCAPM), the Durable Consumption CAPM (DCAPM) of Yogo (2006), the Human Capital CAPM (HCAPM) of Jagannathan and Wang (1996), and the Intermediary Asset Pricing Model (IAPM) of He, Kelly, and Manela (2017). Five anomaly-driven models are considered: the three-factor (FF3) and five-factor (FF5) models of Fama and French (1993, 2015), the Carhart (1997) model, which adds momentum to FF3, the liquidity model of Pástor and Stambaugh (2003), and the augmented q-factor model (q5) of Hou et al. (2021). The consumption model of Lettau and Ludvigson (2001), estimated on quarterly data, is also considered but excluded from the comparisons because of the reduced power of the tests. Compared with the unconditional models, the conditional ones account for business cycles and financial market fluctuations through the macroeconomic and financial uncertainty indices of Ludvigson, Ma, and Ng (2021), and they show significantly reduced pricing errors. Comparative analysis of the unconditional models indicates that the macroeconomic models exhibit broadly similar pricing performance and, with the exception of FF3, lower overall explanatory power than the anomaly-driven models. Augmenting FF3 with the momentum and liquidity factors enhances its explanatory ability, but the resulting model remains inferior to FF5 and q5. Among the conditional models, the macroeconomic DCAPM and HCAPM outperform CCAPM and IAPM; they have pricing errors similar to the conditional Carhart and liquidity models but still fall short of FF5 and q5, and the latter dominates all the other models.

The third paper introduces a novel approach for estimating the SDF parameters of misspecified linear asset pricing models with many assets. Unlike the first paper, Carrasco and Nokho (2022), this approach applies to both gross and excess returns as test assets. The proposed method still regularizes the HJ distance: the inverse of the second-moment matrix is the weighting matrix for gross returns, while for excess returns it is the inverse of the covariance matrix. We derive the asymptotic distribution of the SDF estimators under a double asymptotic scheme in which the number of test assets and the number of time periods go to infinity, discuss relevant considerations for each type of return, and document the finite-sample properties of the SDF estimators with gross and excess returns. We find that, as the number of test assets increases, estimating the SDF parameters by regularizing the inverse of the excess-return covariance matrix exhibits superior size control compared with regularizing the inverse of the gross-return second-moment matrix. This superiority arises from the inherent instability of the second-moment matrix of gross returns; in addition, the gross return of the risk-free asset shows minimal variability, resulting in significant collinearity with the other test assets that the regularization fails to mitigate.
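To make the regularization idea concrete, here is a minimal numerical sketch of a ridge-regularized HJ distance for a linear SDF priced on gross returns, with the inverse of the regularized second-moment matrix as the weighting matrix. The simulated data, the choice of the penalty lam and the closed-form minimization are illustrative assumptions, not the estimator or data used in the thesis.

```python
# Illustrative sketch (assumptions, not the thesis code): ridge-regularized
# Hansen-Jagannathan distance for a linear SDF m_t = theta' [1, f_t] on gross returns.
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 600, 50, 3                          # time periods, test assets, factors
f = rng.normal(size=(T, K))                   # observed factors
beta = rng.normal(scale=0.5, size=(N, K))     # factor loadings
R = 1.01 + f @ beta.T + rng.normal(scale=0.05, size=(T, N))   # gross returns

def regularized_hj(R, f, lam):
    """Ridge-regularized HJ distance and SDF parameters, weighting pricing errors
    by the inverse of the regularized second-moment matrix of returns."""
    T, N = R.shape
    X = np.column_stack([np.ones(T), f])      # SDF regressors: constant + factors
    one = np.ones(N)
    D = R.T @ X / T                           # sample E[R x'], N x (K+1)
    S = R.T @ R / T                           # sample second-moment matrix E[R R']
    W = np.linalg.inv(S + lam * np.eye(N))    # regularized weighting matrix
    theta = np.linalg.solve(D.T @ W @ D, D.T @ W @ one)  # minimizes (D th - 1)'W(D th - 1)
    e = D @ theta - one                       # pricing errors E[m R] - 1
    return np.sqrt(e @ W @ e), theta

dist, theta = regularized_hj(R, f, lam=0.01)
print(f"regularized HJ distance: {dist:.4f}")
print("SDF parameters:", np.round(theta, 3))
```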
69

Efeitos da radiação ultra-sônica pulsada e de baixa intensidade sobre o mal perfurante plantar (MPP), manifestação cutânea decorrente da Hanseníase / Effects of the low intensity pulsed ultrasound on the Hansen’s perforating plantar disease (MPP), cutaneous manifestation from Hansen’s disease

Campanelli, Fabio 18 January 2005 (has links)
Following previous research on low-intensity pulsed ultrasound applied to the regeneration of heat-burned rat skin (ALVES, 1988) and to trophic leg ulcers (HILÁRIO, 1993), this work studied the effects of low-intensity pulsed ultrasound in patients with Hansen's disease whose cutaneous manifestations were characterized as perforating plantar disease (MPP) and skin ulcers. The study was carried out on six patients with MPP assisted by the Brazilian Public Health Care System (SUS) in the city of Bebedouro-SP and followed by the Epidemiological Surveillance unit. Pulsed ultrasound was applied three times a week, always in the same circadian period, with sessions lasting twenty to forty minutes according to the extent of the skin lesions. The number of applications was not stipulated in advance, regardless of how long the lesions had existed; applications continued until the lesions had healed completely. Healing was monitored with software developed specifically for this purpose and documented photographically, at the start of treatment and after every ten applications, until complete cicatrisation of the ulcers. No correlation was found between the number of applications and the size of the lesions or the time since their appearance. Although the ulcerations differed in extent, shape, depth and duration, the results of treatment with low-intensity pulsed ultrasound were, according to the evaluation methodology, highly satisfactory, and this modality can be considered an adjunctive treatment for the cutaneous lesions of Hansen's disease.
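The thesis tracked lesion healing with purpose-built software applied to photographs; that software is not described in the abstract, but a generic sketch of the underlying calculation (lesion area from a segmented image and the healed fraction between sessions) might look like the following, with all values hypothetical.

```python
# Generic sketch (an assumption, not the thesis software): estimating a lesion's
# area from a segmented photograph by counting mask pixels with a known
# millimetres-per-pixel scale, then tracking the healed fraction across sessions.
import numpy as np

def lesion_area_mm2(mask: np.ndarray, mm_per_px: float) -> float:
    """Area of the True pixels in a binary lesion mask, in mm^2."""
    return float(mask.sum()) * mm_per_px**2

# Synthetic example: a circular "lesion" shrinking between two sessions.
yy, xx = np.mgrid[:200, :200]
baseline = (xx - 100)**2 + (yy - 100)**2 <= 40**2
followup = (xx - 100)**2 + (yy - 100)**2 <= 25**2

a0 = lesion_area_mm2(baseline, mm_per_px=0.1)
a1 = lesion_area_mm2(followup, mm_per_px=0.1)
print(f"baseline {a0:.1f} mm², follow-up {a1:.1f} mm², healed {100 * (1 - a1 / a0):.0f}%")
```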
70

The search for ancient hair: a scientific approach to the probabilities and recovery of unattached hair in archaeological sites

Turner-Pearson, Katherine 15 May 2009 (has links)
There has been a recent upsurge in archaeologists using ancient hair as a research tool, with new uses of this previously discarded archaeological material being introduced annually. Human hair deteriorates extremely slowly, and since the average modern human sheds approximately one hundred hairs per day, there should be copious amounts of hair debris left behind after humans leave a site; it is just a matter of how much of the hair survives in the archaeological environment. Most loose hair recovered from archaeological sites, however, is found fortuitously, and in many cases, because archaeologists were not actively searching for ancient hair, it is possible that they tainted the hair they later tested in ways that compromised their data or, more importantly, contaminated their samples with modern hair and did not test ancient hair at all. No standardized method has previously been established for searching for ancient hair at an archaeological site. This paper considers (a) a method of soil extraction in the field that avoids contamination with modern hair and with elements that might hinder later test data; (b) the processing of samples in the laboratory while maintaining sample integrity; (c) identification of the types of soils and environments that are most favorable to hair preservation; and (d) an examination of the relevance of hair extraction from sites, including its practicality and research potential. The paper examines five archaeological sites, using three different methods of hair extraction and weighing the pros and cons of each, so that future researchers can find the method that works best for their particular site. It also analyzes the soil chemistry of the sites in order to study the relationship between soil and hair survival, so that scientists can better determine which soils hold the best potential for hair preservation. Laboratory methods that avoid contamination of the samples are also outlined to help researchers maintain sample integrity after leaving the archaeological site.
