421 |
Asymptotic methods for tests of homogeneity for finite mixture models. Stewart, Michael Ian. January 2002.
We present limit theory for tests of homogeneity for finite mixture models. More specifically, we derive the asymptotic distribution of certain random quantities used for testing that a mixture of two distributions is in fact just a single distribution. Our methods apply to cases where the mixture component distributions come from one of a wide class of one-parameter exponential families, both continuous and discrete. We consider two random quantities, one related to testing simple hypotheses, the other composite hypotheses. For simple hypotheses we consider the maximum of the standardised score process, which is itself a test statistic. For composite hypotheses we consider the maximum of the efficient score process, which is not itself a statistic (it depends on the unknown true distribution) but is asymptotically equivalent, in a well-defined sense, to certain common test statistics. We show that both quantities can be approximated by the maximum of a certain Gaussian process depending on the sample size and the true distribution of the observations, which, when suitably normalised, has a limiting distribution of the Gumbel extreme value type. Although the limit theory is not practically useful for computing approximate p-values, we use Monte Carlo simulations to show that another method suggested by the theory, using a Studentised version of the maximum-score statistic and simulating a Gaussian process to compute approximate p-values, is remarkably accurate and uses a fraction of the computing resources that a straight Monte Carlo approximation would.
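For orientation, the Gumbel-type limit mentioned above has the schematic form below. This is the standard extreme-value statement; the exact norming sequences a_T and b_T depend on the exponential family and the process in question and are not reproduced from the thesis.

```latex
\Pr\bigl(a_T\,(M_T - b_T) \le x\bigr) \;\longrightarrow\; \exp\!\bigl(-e^{-x}\bigr),
\qquad x \in \mathbb{R},
```

where M_T denotes the maximum of the (standardised or efficient) score process over a sample of size T.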
|
422 |
Towards a philosophy of the Web: the Web as the becoming-artefact of philosophy (between URIs, tags, ontologies and resources). Monnin, Alexandre. 8 April 2013.
The aim of this thesis is to account for the importance of the Web from a philosophical point of view. This importance is twofold. First, the Web is an object of research that, in the wake of the Semantic Web and Web architecture, resonates in different ways with classical issues in metaphysics and the philosophy of language. From this perspective, we study some of its main building blocks (URIs, resources, tags, etc.). Second, we underline its importance for what is becoming of philosophy itself. The work undertaken did not simply project the a priori concepts of a philosophia perennis onto the Web, which would amount to the "inscription error" acutely described by Brian Cantwell Smith. Instead, we questioned the architects of the Web themselves in order to bring their empirical metaphysics to the fore, observing the controversies to which it gave rise. Acknowledging the ontogonic scope of a practice such as "philosophical engineering", an expression coined by Tim Berners-Lee, understood here as the production of new distinctions and entities (such as resources) in a world in the making, led us to a broader reflection on the nature of objectivation. In the end, this reflection joins political concerns about the establishment of a common world, in which the Web actively participates.
|
423 |
Empirical likelihood with applications in time series. Li, Yuyi. January 2011.
This thesis investigates the statistical properties of the Kernel Smoothed Empirical Likelihood (KSEL, e.g. Smith, 1997 and 2004) estimator and various associated inference procedures for weakly dependent data. New tests for structural stability are proposed and analysed, and asymptotic analyses and Monte Carlo experiments are used to assess them theoretically and empirically. Chapter 1 reviews and discusses some estimation and inferential properties of Empirical Likelihood (EL, Owen, 1988) for independently and identically distributed data and compares it with Generalised EL (GEL), GMM and other estimators. KSEL is treated extensively by specialising the kernel-smoothed GEL of Smith's (2004) working paper, some of whose results and proofs are extended and refined in Chapter 2. Asymptotic properties of some tests in Smith (2004) are also analysed under local alternatives. These treatments of KSEL lay the foundation for the analyses in Chapters 3 and 4, which would not otherwise follow straightforwardly. In Chapters 3 and 4, subsample KSEL estimators are proposed to support the development of KSEL structural stability tests for a given breakpoint and for an unknown breakpoint, respectively, based on related work using GMM (e.g. Hall and Sen, 1999; Andrews and Fair, 1988; Andrews and Ploberger, 1994). A further original contribution of these two chapters is that moment functions may be kernel-smoothed either before or after the sample split; it is rigorously proved that the two smoothing orders are asymptotically equivalent. The overall null hypothesis of structural stability is decomposed according to the identifying and overidentifying restrictions, as Hall and Sen (1999) advocate for GMM, leading to a more practical and precise structural stability diagnosis procedure. Within this framework, the KSEL structural stability tests are also proved, via asymptotic analysis, to be capable of identifying different sources of instability, arising from parameter value change or violation of overidentifying restrictions. The analyses show that these KSEL tests follow the same limit distributions as their GMM counterparts. To examine the finite-sample performance of the KSEL structural stability tests relative to GMM, Monte Carlo simulations are conducted in Chapter 5 using a simple linear model considered by Hall and Sen (1999). This chapter details the relevant computational algorithms and permits different smoothing orders, kernel types and prewhitening options. In general, the simulation evidence suggests that the newly proposed KSEL tests often perform comparably to the GMM tests; in some cases, however, their sizes are slightly larger and false null hypotheses are rejected much more frequently. Thus, these KSEL-based tests are valid theoretical and practical alternatives to GMM.
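For background, the empirical likelihood ratio of Owen (1988) for a parameter identified by the moment condition E[g(X, theta)] = 0 is the standard profile below; KSEL replaces g(X_i, theta) with a kernel-weighted average of nearby moment indicators to accommodate weak dependence. This is a textbook statement added for orientation, not a formula quoted from the thesis.

```latex
R(\theta) \;=\; \max\Bigl\{\, \prod_{i=1}^{n} n w_i \;:\;
w_i \ge 0,\;\; \sum_{i=1}^{n} w_i = 1,\;\;
\sum_{i=1}^{n} w_i\, g(X_i,\theta) = 0 \Bigr\}.
```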
|
424 |
Phenolic compounds and antioxidant activity of okra (Abelmoschus esculentus (L.) Moench) powder obtained in a fixed-bed dryer. Lisboa, Verilânea Neyonara Faustino. 17 October 2018.
Okra is a vegetable available all year round and widely consumed in many parts of the world. However, its short shelf life leads to post-harvest losses, so drying is an alternative for minimising waste. In this context, the present work evaluates the effect of drying at different temperatures on the phenolic compound content, antioxidant activity and physico-chemical characteristics of okra. A fixed-bed dryer was used at temperatures of 43 °C and 65 °C with an air velocity of 0.85 m/s. All analyses performed on fresh (in natura) okra were also carried out on the dehydrated okra. The results showed that all the evaluated nutrients became more concentrated after the drying process. Among the empirical models fitted, the Page model was the most adequate at 43 °C (R² = 99.95%, SSE = 0.002, RMSE = 0.0007), while the logarithmic model provided the best fit at 65 °C (R² = 99.6%, SSE = 0.008, RMSE = 0.020). The phenomenological model yielded Deff = 9.16x10^-8 m²/s (R² = 98.5%) at 43 °C and Deff = 2.16x10^-7 m²/s (R² = 98.2%) at 65 °C.
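For reference, the models named above have these standard forms, where MR is the dimensionless moisture ratio, t is time, and k, n, a, c are fitted constants. The effective diffusivity is usually obtained from the slab-geometry series solution of Fick's second law, with L the slab half-thickness; the abstract does not state the geometry assumed, so the slab form is given as the common choice.

```latex
\text{Page:}\;\; MR = e^{-k t^{n}}, \qquad
\text{Logarithmic:}\;\; MR = a\,e^{-k t} + c, \qquad
\text{Fick (slab):}\;\; MR = \frac{8}{\pi^{2}}
\sum_{i=0}^{\infty} \frac{1}{(2i+1)^{2}}
\exp\!\Bigl(-\frac{(2i+1)^{2}\pi^{2} D_{\mathrm{eff}}\, t}{4L^{2}}\Bigr).
```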
|
425 |
Contributions to the use of code clone detectors in software maintenance tasks. Charpentier, Alan. 17 October 2016.
The existence of several copies of the same code fragment (called code clones in the literature) in a software system can complicate its maintenance and evolution. Code duplication can lead to consistency problems, especially during the propagation of bug fixes. Code clone detection is therefore a major concern for maintaining and improving software quality, an essential property for a software system's success. The general objective of this thesis is to contribute to the use of code clone detection in software maintenance tasks. We focused our contributions on two research topics. First, the methodology to compare and assess code clone detectors, i.e. clone benchmarks: we performed an empirical assessment of a clone benchmark, found that results derived from it are not reliable, and identified recommendations for constructing more reliable clone benchmarks. Second, the adaptation of code clone detectors to software maintenance tasks: we developed an approach specialised to one language and one task (refactoring) that allows developers to identify and remove code duplication in their software, and we conducted case studies with domain experts to evaluate it.
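To make the notion of a code clone concrete, the sketch below shows a deliberately naive text-based detector: it normalises identifiers and groups identical line windows by content. This is a toy illustration of the general idea only; the detectors studied in the thesis are real tools and considerably more sophisticated.

```python
# Naive textual clone detection: normalize each window of lines and
# group identical windows. Real detectors work on tokens or ASTs.
import re
from collections import defaultdict

def normalize(line: str) -> str:
    """Crude normalization: strip whitespace and rename identifiers."""
    return re.sub(r"[A-Za-z_]\w*", "ID", line.strip())

def find_clones(source: str, window: int = 3):
    lines = [l for l in source.splitlines() if l.strip()]
    groups = defaultdict(list)
    for i in range(len(lines) - window + 1):
        key = "\n".join(normalize(l) for l in lines[i:i + window])
        groups[key].append(i)
    # Any window occurring at two or more positions is a clone group.
    return [locs for locs in groups.values() if len(locs) > 1]

code = """
x = a + b
y = x * 2
print(y)
u = c + d
v = u * 2
print(v)
"""
print(find_clones(code))  # [[0, 3]]: the two 3-line fragments are clones
```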
|
426 |
Empirical evaluation of effort on composing design models. Oliveira, Kleinner Silva Farias de. 19 January 2017.
Model composition plays a central role in many software engineering activities, such as evolving models to add new features and reconciling conflicting design models developed in parallel by different development teams. As model composition is usually an error-prone and effort-consuming task, its potential benefits, such as gains in productivity, can be compromised. However, there is currently no empirical knowledge about the effort required to compose design models; only the feedback of model composition evangelists is available, and it often diverges. Consequently, developers are unable to conduct cost-effectiveness analyses or to identify, predict, or reduce composition effort. This inability to evaluate composition effort is due to three key problems. First, current evaluation frameworks do not consider fundamental concepts in model composition such as conflicts and inconsistencies. Second, researchers and developers do not know which factors can influence composition effort in practice, for example the modelling language or the composition technique used to manipulate the models. Third, practical knowledge about how such influential factors affect developers' effort is severely lacking. In this context, the contributions of this thesis are threefold: (i) a quality model for supporting the evaluation of model composition effort; (ii) practical knowledge, derived from a family of quantitative and qualitative empirical studies, about model composition effort and its influential factors; and (iii) insight into how to evaluate model composition effort, minimise error-proneness, and tame the side effects of such influential factors in model composition practice.
|
427 |
Investigations about temporal reasoning and learning in connectionist models. Borges, Rafael Vergara. January 2007.
Computational intelligence is considered by different contemporary authors to be the manifest destiny of computer science. The modelling of different aspects of cognition, such as learning and reasoning, has motivated the development of the symbolic and connectionist paradigms of artificial intelligence and, more recently, their integration into models that combine learning and reasoning, unifying the advantages of each approach. The integration of a temporal dimension into such systems is a relevant task, since time is considered an essential component of intelligent systems and allows a richer representation of cognitive behaviour. This work introduces SCTL (Sequential Connectionist Temporal Logic), a neural-symbolic approach for integrating temporal knowledge, represented as logic programs, into recurrent neural networks, in such a way that the semantic characterisations of the two representations are equivalent. Besides the strategy for translating one representation into the other and the formal verification of semantic equivalence, we compare the proposed approach with other systems that represent symbolic and temporal knowledge in neural networks. Moreover, we describe algorithmically the intended behaviour of the generated neural networks for both temporal inference and learning. This behaviour is then evaluated in several experiments, in order to analyse the performance of the approach for cognitive modelling under different conditions and applications.
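The following sketch illustrates, in miniature, the general neural-symbolic idea behind translations of this kind: each temporal rule "head at t+1 if body atoms at t" becomes a hidden threshold unit, and the network's outputs are fed back as the next step's inputs. The atoms and rules are invented for illustration; this is not the SCTL construction itself.

```python
# Minimal recurrent threshold network encoding temporal rules of the form
# "head holds at t+1 if all body atoms hold at t" (illustrative only).
import numpy as np

ATOMS = ["a", "b", "c"]                 # propositional atoms (hypothetical)
RULES = [("c", ["a", "b"]),             # c holds next if a and b hold now
         ("a", ["c"])]                  # a holds next if c holds now

n = len(ATOMS)
idx = {atom: i for i, atom in enumerate(ATOMS)}

# One hidden unit per rule, firing only when every body atom is active
# (threshold = number of body atoms - 0.5).
W_in = np.zeros((len(RULES), n))
theta = np.zeros(len(RULES))
W_out = np.zeros((n, len(RULES)))
for r, (head, body) in enumerate(RULES):
    for atom in body:
        W_in[r, idx[atom]] = 1.0
    theta[r] = len(body) - 0.5
    W_out[idx[head], r] = 1.0

def step(state):
    """One time step: which atoms hold at t+1, given the state at t."""
    hidden = (W_in @ state > theta).astype(float)   # rule activations
    return (W_out @ hidden > 0.5).astype(float)     # derived heads

state = np.array([1.0, 1.0, 0.0])       # a and b hold at t = 0
for t in range(3):
    state = step(state)                  # recurrent feedback: output -> input
    print(t + 1, dict(zip(ATOMS, state)))
```

Iterating `step` amounts to applying the program's immediate-consequence operator once per time step, which is the sense in which the network's behaviour can mirror the logic program's temporal semantics.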
|
428 |
Empirical research on the planning of instrumental performance: a critical reflection by the subject of a case study. Barros, Luís Cláudio. January 2008.
This thesis discusses the planning of instrumental performance (piano) within the field of performance practice. It examines the different approaches and positions of specialists in the field through a critical reflection on the knowledge produced by empirical research. The goal was to examine the topics investigated, the practice strategies, the theoretical frameworks, the methodological procedures, the systematisation of the process of learning piano repertoire, and the relationships among experimental studies, case studies, interview studies and the surveys selected. A critical analysis was undertaken of studies addressing musicians' practice behaviour and practice strategies, together with a reflection on how the theoretical references of empirical work are constructed and on the elements involved in the research process. The work rests on two central pillars: (1) a theoretical pillar, the critical study of empirical research on performance planning; and (2) a practical, lived pillar, the author's personal experience as the subject of a case study during his doctoral internship abroad. In the case study, the subject devised two practice strategies for retrieving musical information from long-term memory in order to improve his performance in Experimental Test II. From the interrelation of these two pillars, and its consequences for a critical understanding of how investigative processes are structured, the work discusses several sensitive points of empirical research, such as the hierarchical relationship between researcher and subject and possible investigative gaps. A model is proposed for greater interaction between the psychology of music and performance practice. The aim is to examine the pedagogical implications of a deep knowledge of the stages of performance planning and of an understanding of how an instrumental performance is built at a level of excellence.
|
429 |
Understanding and guiding software product lines evolution based on requirements engineering activities. Oliveira, Raphael Pereira de. 10 September 2015.
Software Product Line (SPL) has emerged as an important strategy to cope with the increasing demand for large-scale product customization. SPL has provided companies with an efficient and effective means of delivering products with higher quality at a lower cost, when compared to traditional software engineering strategies. However, such benefits do not come for free.
SPL assets must evolve to support changes in the environment and in user needs, and these changes are first represented by requirements. Thus, an SPL should manage the commonality and variability of products by means of a "Requirements Engineering (RE) - change management" process. Hence, besides dealing with the reuse and evolution of requirements, RE for SPL also needs an approach to represent commonality and variability information explicitly (e.g., through feature models and use cases).
To understand evolution in SPL, this thesis presents two empirical studies within industrial SPL projects and a systematic mapping study on SPL evolution. The two empirical studies evaluated Lehman's laws of software evolution in two industrial SPL projects, demonstrating that most of the laws hold in SPL environments. The systematic mapping study identified existing approaches in the area and revealed research gaps, for example that most proposed approaches handle the evolution of SPL requirements in an ad hoc way and were evaluated only through feasibility studies.
These results motivated systematising the SPL processes through guidelines, starting with SPL requirements. An approach to specify SPL requirements, called Feature-Driven Requirements Engineering (FeDRE), was therefore proposed. FeDRE specifies SPL requirements in a systematic way, driven by a feature model. To deal with the evolution of FeDRE requirements, a further approach called Feature-Driven Requirements Engineering Evolution (FeDRE2) was presented; FeDRE2 guides SPL evolution in a systematic way based on RE activities. A toy sketch of the kind of feature-model check such approaches rely on follows below.
Both proposed approaches, FeDRE and FeDRE2, were evaluated; the results, while preliminary, showed that the approaches were perceived as easy to use and useful, supporting the improvement and systematisation of SPL processes.
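The sketch below validates a product configuration against mandatory and parent-child constraints of a feature model. The feature names and the validation rules shown are assumptions for illustration, not FeDRE's actual mechanics.

```python
# Toy feature model: each feature has an optional parent and a mandatory
# flag (mandatory relative to its parent). Names are hypothetical.
FEATURES = {
    "payment":  {"parent": None,      "mandatory": True},
    "credit":   {"parent": "payment", "mandatory": False},
    "discount": {"parent": "credit",  "mandatory": False},
}

def is_valid(config: set) -> bool:
    """A configuration is valid if selected features have their parents
    selected, and mandatory children are selected with their parents."""
    for name, meta in FEATURES.items():
        parent = meta["parent"]
        # A selected feature requires its parent.
        if name in config and parent is not None and parent not in config:
            return False
        # A mandatory child must be selected whenever its parent is.
        if meta["mandatory"] and (parent is None or parent in config) \
                and name not in config:
            return False
    return True

print(is_valid({"payment", "credit"}))  # True
print(is_valid({"discount"}))           # False: parent chain missing
```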
|
430 |
Essays on illiquidity premium. Pereira, Ricardo Buscariolli. 23 May 2014.
This dissertation is composed of three related essays on the relationship between illiquidity and returns. Chapter 1 describes the time-series properties of the relationship between market illiquidity and market return using both yearly and monthly datasets; we find that stationarized versions of the illiquidity measure carry a positive, significant, and puzzlingly high premium. In Chapter 2, we estimate the response of illiquidity to a shock to returns, assuming that causality runs from returns to illiquidity, and find that an increase in firms' returns lowers illiquidity. In Chapter 3 we take both effects into account, treating returns and illiquidity as endogenous, to estimate the liquidity premium; we find evidence that the illiquidity premium is smaller than previous evidence suggests. Finally, Chapter 4 outlines topics for future research, describing a return decomposition with illiquidity costs.
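The abstract does not say which illiquidity measure is used; a standard candidate in this literature, given here as an assumption rather than a claim about the dissertation, is Amihud's (2002) price-impact ratio, the average ratio of absolute daily return to daily dollar volume:

```latex
\mathit{ILLIQ}_{i,t} \;=\; \frac{1}{D_{i,t}} \sum_{d=1}^{D_{i,t}}
\frac{\lvert r_{i,d} \rvert}{\mathit{DVOL}_{i,d}},
```

where D_{i,t} is the number of trading days for stock i in period t, r_{i,d} the daily return, and DVOL_{i,d} the daily dollar volume.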
|