11. Clergywomen in the Church of England : ministry and personality. Robbins, Mandy, January 2002.
No description available.

12. Empirical essays on common beliefs in the valuation of some alternative assets. Nouvellon, Edouard, 14 September 2021.
The dissertation is a collection of three essays in empirical finance. / Doctorate in Economics and Management Sciences.

13. Essays on Empirical Industrial Organization. Ren, Junqiushi, 11 August 2017.
No description available.

14. Three Essays in Investments. Kim, Jinyoung, January 2024.
Thesis advisor: David H. Solomon.

My dissertation comprises three essays on questions that contemporary investors face in the ever-evolving landscape of investments.

The first essay examines how the presence of public pension funds as limited partners influences venture capitalists' (VCs') risk-taking. Investments by public pension funds in the venture capital market have grown over the past two decades, and these funds have objective functions unlike those of other venture capital investors. VCs backed by public pensions tend to invest in startups with lower-risk profiles, such as those with technologies related to public companies, numerous patents, and later funding rounds, leading to more frequent and quicker exits but lower returns. To establish causality, I employ an instrumental variable based on the likelihood of public pension funding given the location of funds initiated during a typical fundraising cycle of a venture capital firm. I also find that public pensions prefer venture capital firms with a track record of managing funds conservatively, particularly pensions that have previously engaged with such firms.

The second essay shifts focus to the stock market, documenting higher returns from companies developing new technologies. The advancement of new technologies is pivotal to an economy's potential, yet it carries inherent risk. Investment theory holds that investors demand premiums for holding stocks associated with high uncertainty, which raises the question of whether they are adequately compensated for investing in companies undertaking highly uncertain projects. A novel application of a graph-neural-network model identifies new-technology patent publications each year, enabling the calculation of firms' exposure to new technologies. With this measure, I find that portfolios with high new-tech exposure outperform those with low exposure, driven by significant risk premiums. This sheds light on the positive correlation between idiosyncratic risk and stock returns and contributes to our understanding of how the market values technological innovation.

The third essay presents a systematic analysis of stock-market valuations of Corporate Social Responsibility (CSR) initiatives. The study identifies public demand for CSR as a pivotal factor in enhancing the value of CSR activities. Measuring market reactions to CSR activities via cumulative abnormal returns, the research finds overall neutral responses. Nonetheless, heightened public concern for specific issues can sway market reactions positively, and when CSR initiatives employ strategies that extend beyond the capabilities of individuals, market responses tend to be favorable. The paper further shows that firms strategically increase their CSR activities and choose implementation modes with the aim of enhancing their value. To explain why market reactions are neutral on average, I provide evidence pointing to virtue signaling, a lack of understanding of the importance of profitability, and other executive motives.

Together, these essays deepen our understanding of investments by exploring how financial market participants, corporate endeavors in technological advancement, and societal expectations of corporate social responsibility influence investor behavior and asset prices. / Thesis (PhD) — Boston College, 2024. / Submitted to: Boston College, Carroll School of Management. / Discipline: Finance.
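
The third essay's event-study design can be illustrated with a short sketch. The following is a minimal, hypothetical implementation of a market-model cumulative abnormal return; the window lengths, function name, and data layout are my assumptions, not the dissertation's specification, and the two return series are assumed to share a date index.

import numpy as np
import pandas as pd

def cumulative_abnormal_return(stock: pd.Series, market: pd.Series,
                               event_idx: int, est_window: int = 120) -> float:
    # Fit the market model r_stock = alpha + beta * r_market on an
    # estimation window that ends a few days before the event.
    est = slice(event_idx - est_window - 5, event_idx - 5)
    beta, alpha = np.polyfit(market.iloc[est].to_numpy(),
                             stock.iloc[est].to_numpy(), 1)
    # Sum abnormal returns over a three-day window around the event.
    ev = slice(event_idx - 1, event_idx + 2)
    abnormal = stock.iloc[ev] - (alpha + beta * market.iloc[ev])
    return float(abnormal.sum())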

15. Accurate Quantum Mechanical Calculations on Noncovalent Interactions: Rationalization of X-ray Crystal Geometries by Quantum Chemistry Tools. Hostaš, Jiří, January 2017.
There is a need for reliable rules of thumb in biochemistry, supramolecular chemistry, and materials science. At the same time, the amount of information that X-ray crystal geometries can provide about the nature of recognition processes is limited. Deeper insight into the noncovalent interactions that play the most important role is needed in order to revise the universal rules governing any recognition process. This thesis presents the systematic development and accuracy assessment of computational chemistry methods, followed by their applications to protein–DNA and host–guest systems. Non-empirical quantum mechanical tools (DFT-D, MP2.5, and CCSD(T) methods, among others) were utilized in several projects. We found and confirmed unique low-lying interaction energies, distinct from the rest of the distributions, in several amino acid–base pairs, opening a way toward universal rules governing the selective binding of any DNA sequence. Further, predictions and examinations of changes of Gibbs energies (ΔG) and their subcomponents were made in several cases and carefully compared with experiments. We determined that the choline (Ch+) guest binds 2.8 kcal/mol more strongly (calculated ΔG) than acetylcholine (ACh+) to a self-assembled triple helicate rigid...
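
As a quick plausibility check on the reported 2.8 kcal/mol preference (my own back-of-the-envelope arithmetic, assuming T = 298 K; not part of the thesis text), the standard relation between binding free energy and the association constant converts the difference into a ratio of binding constants:

\[
\Delta G = -RT \ln K
\quad\Longrightarrow\quad
\frac{K_{\mathrm{Ch^+}}}{K_{\mathrm{ACh^+}}}
= \exp\!\left(\frac{\Delta\Delta G}{RT}\right)
= \exp\!\left(\frac{2.8\ \text{kcal/mol}}{0.593\ \text{kcal/mol}}\right)
\approx 1.1 \times 10^{2},
\]

i.e., a roughly hundredfold stronger association.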

16. A case study of cross-branch porting in Linux Kernel. Hua, Jinru, 23 July 2014.
To meet the requirements of different stakeholders, branches are widely used to maintain multiple product variants simultaneously. For example, Linux Kernel has a main development branch, known as the mainline; 35 branches that maintain older product versions, called stable branches; and hundreds of branches for experimental features. To maintain multiple branch-based product variants in parallel, developers often port new features or bug-fixes from one branch to another. In particular, the process of propagating bug-fixes or feature additions to an older version is commonly called backporting. Prior to our study, backporting practices in large-scale projects had not been systematically studied, and this lack of empirical knowledge makes it difficult to improve the current backporting process in industry. We hypothesized that cross-branch porting is frequent, repetitive, and error-prone, requiring significant effort from developers to select the patches that need to be backported and then apply them to the target implementation. We carried out two complementary studies to examine this hypothesis.

To investigate the extent of and effort involved in porting practice, this thesis first conducted a quantitative study of backporting activity in Linux Kernel, covering eight years of version history across the mainline and the 35 stable branches. Backporting happened at a rate of 149 changes per month, and patches took 51 days on average to propagate. 40% of changes in the stable branches were ported from the mainline, 64% of ported patches propagated to more than one branch, and of all backporting changes from the mainline to stable branches, 97.5% were applied without any manual modification.

To understand how Linux Kernel developers keep up to date with development activity across branches, we carried out an online survey with engineers who, according to our analysis of the version history, may have ported code from the mainline to stable branches. We received 14 complete responses; the participants have 12.6 years of Linux development experience on average and are either maintainers or experts of Linux Kernel. The survey showed that most backporting work is done by maintainers who know the program well. These experienced maintainers can readily identify the edits that need to be ported and propagate them together with all relevant changes to keep multiple branches consistent, whereas inexperienced developers are seldom given the opportunity to backport features or bug-fixes to stable branches.

In summary, based on the version-history study and the online survey, we concluded that cross-branch porting is frequent, periodic, and repetitive. It requires manual effort to selectively identify the changes that need to be ported, to analyze the dependencies of the selected changes, and to apply all required changes to ensure consistency. To avoid omission errors, most backporting work is done only by experienced maintainers who can identify all changes relevant to the change being backported; inexperienced developers are effectively excluded from cross-branch porting from the mainline to stable branches in Linux Kernel. Our results call for an automated approach to identify the patches that need to be ported, to collect context that helps developers become aware of relevant changes, and to notify the developers who may be responsible for the corresponding porting events.
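
As a sketch of what such automation could look like (my own illustration, not the thesis's tooling; the repository path and branch names are hypothetical), ported patches can be matched across branches by content using git patch-id, which assigns the same identifier to textually equivalent diffs:

import subprocess
from collections import defaultdict

def patch_ids(repo, rev_range):
    # Map patch-id -> commit SHAs for every non-merge commit in rev_range.
    # `git patch-id --stable` hashes each diff so that textually equivalent
    # patches on different branches receive the same identifier.
    log = subprocess.run(
        ["git", "-C", repo, "log", "-p", "--no-merges", rev_range],
        capture_output=True, text=True, check=True).stdout
    out = subprocess.run(
        ["git", "-C", repo, "patch-id", "--stable"],
        input=log, capture_output=True, text=True, check=True).stdout
    table = defaultdict(list)
    for line in out.splitlines():
        pid, sha = line.split()
        table[pid].append(sha)
    return table

# Commits whose diff content appears in both the mainline range and a
# stable branch are candidate backports.
mainline = patch_ids("linux", "v3.0..master")
stable = patch_ids("linux", "v3.0..linux-3.0.y")
backports = {pid: (mainline[pid], shas)
             for pid, shas in stable.items() if pid in mainline}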

17. Jackknife Empirical Likelihood for the Variance in the Linear Regression Model. Lin, Hui-Ling, 25 July 2013.
The variance measures spread about the center, so estimating it accurately is a longstanding concern. In this paper, we consider the linear regression model, the most widely used model in practice, and apply the jackknife empirical likelihood method to obtain interval estimates of the variance in the regression model. The proposed jackknife empirical likelihood ratio converges to the standard chi-squared distribution. A simulation study compares the jackknife empirical likelihood method with the standard method in terms of coverage probability and interval length for confidence intervals of the variance from linear regression models; the proposed method performs better. We also illustrate the proposed method using two real data sets.
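
A minimal sketch of the general jackknife empirical likelihood recipe applied to the regression error variance follows; this is my own illustration (jackknife pseudo-values of the variance estimator, then standard EL for a mean), and details such as the MLE variance estimator and the root-finder bracketing are assumptions, not the paper's exact construction.

import numpy as np
from scipy.optimize import brentq

def error_variance(X, y):
    # MLE of the error variance in y = X beta + eps: RSS / n.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid / len(y)

def jel_statistic(X, y, sigma2):
    # -2 log jackknife EL ratio at the hypothesized variance sigma2;
    # asymptotically chi-squared with 1 degree of freedom.
    n = len(y)
    t_full = error_variance(X, y)
    loo = np.array([error_variance(np.delete(X, i, axis=0), np.delete(y, i))
                    for i in range(n)])
    v = n * t_full - (n - 1) * loo        # jackknife pseudo-values
    z = v - sigma2
    if z.min() >= 0 or z.max() <= 0:      # sigma2 outside the convex hull
        return np.inf
    # Solve for the Lagrange multiplier of the constraint sum p_i z_i = 0;
    # the bracket keeps every EL weight 1 + lam * z_i positive.
    lo = (1.0 / n - 1.0) / z.max()
    hi = (1.0 / n - 1.0) / z.min()
    lam = brentq(lambda l: np.mean(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log(1.0 + lam * z))

A 95% confidence interval for the variance is the set of sigma2 values where jel_statistic(X, y, sigma2) stays below the chi-squared(1) quantile, about 3.84.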

18. Empirical Likelihood Inference for Two-Sample Problems. Yan, Ying, January 2010.
In this thesis, we are interested in empirical likelihood (EL) methods for two-sample problems, with a focus on the difference of the two population means. A weighted empirical likelihood (WEL) method for two-sample problems is developed. We also consider a scenario where sample data on auxiliary variables are fully observed for both samples but values of the response variable are subject to missingness; for this scenario, we develop an adjusted empirical likelihood method for inference on the difference of the two population means, with missing values handled by regression imputation. Bootstrap calibration for WEL is also developed. Simulation studies evaluate naive EL, WEL, and WEL with bootstrap calibration (BWEL) against the usual two-sample t-test in terms of test power and coverage accuracy. A simulation of the adjusted EL under the linear regression model with missing data is also conducted.
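
Bootstrap calibration itself is generic enough to sketch. The snippet below is my own illustration, with a studentized mean difference standing in for the -2 log WEL ratio (the weighted EL construction is specific to the thesis); it estimates a calibrated critical value by resampling each sample and evaluating the statistic at the observed difference.

import numpy as np

rng = np.random.default_rng(0)

def statistic(x, y, delta):
    # Studentized squared difference of means at hypothesized difference
    # `delta` -- a stand-in for the -2 log WEL ratio.
    se2 = x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y)
    return (x.mean() - y.mean() - delta) ** 2 / se2

def bootstrap_critical_value(x, y, level=0.95, n_boot=2000):
    # Evaluate the statistic on resamples at the *observed* difference:
    # its bootstrap distribution mimics the null distribution of the
    # statistic, giving a calibrated critical value.
    d_hat = x.mean() - y.mean()
    stats = [statistic(rng.choice(x, size=len(x), replace=True),
                       rng.choice(y, size=len(y), replace=True), d_hat)
             for _ in range(n_boot)]
    return float(np.quantile(stats, level))

A test of equal means then rejects at the 5% level when statistic(x, y, 0.0) exceeds this bootstrap critical value instead of the chi-squared quantile.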

19. Approach to Evaluating Clustering Using Classification Labelled Data. Luu, Tuong, January 2010.
Cluster analysis has been identified as a core task in data mining, and many different algorithms have been proposed for it. On one hand, this diversity provides a wide collection of tools; on the other, the profusion of options easily causes confusion. Given a particular task, users do not know which algorithm is suitable, since it is not clear how clustering algorithms should be evaluated. As a consequence, users often select a clustering algorithm in an ad hoc manner.
A major challenge in evaluating clustering algorithms is the scarcity of real data with a "correct" ground-truth clustering. This is in stark contrast to the situation for classification tasks, for which there are abundant data sets labeled with their correct classifications. As a result, clustering research often relies on labeled data to evaluate and compare the results of clustering algorithms.
We present a new perspective on how to use labeled data for evaluating clustering algorithms, and develop an approach for comparing clustering algorithms on the basis of classification labeled data. We then use this approach to support a novel technique for choosing among clustering algorithms when no labels are available.
We use these tools to demonstrate that the utility of an algorithm depends on the specific clustering task. Investigating a set of common clustering algorithms, we show that there are cases where each of them produces the best clustering. In contrast to the current trend of searching for a single superior clustering algorithm, our findings demonstrate the need for a variety of different clustering algorithms.
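
The general idea of scoring clusterings against classification labels can be shown in a few lines (a minimal example of my own using scikit-learn and the adjusted Rand index, not the evaluation approach developed in the thesis):

from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score

# Score each algorithm's clustering against the known class labels;
# an ARI of 1 means perfect agreement, around 0 means chance level.
X, labels = load_iris(return_X_y=True)
for algo in (KMeans(n_clusters=3, n_init=10, random_state=0),
             AgglomerativeClustering(n_clusters=3)):
    pred = algo.fit_predict(X)
    print(type(algo).__name__, round(adjusted_rand_score(labels, pred), 3))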

20. An empirical study of banks' merger and acquisition. Lin, Zi-Jiun, 21 June 2000.
About bank mergers and acquisitions.