41 |
Special metric structures and closed forms. Witt, Frederik. January 2005
In recent work, N. Hitchin described special geometries in terms of a variational problem for closed generic $p$-forms. In particular, he introduced on 8-manifolds the notion of an integrable $PSU(3)$-structure, defined by a closed and co-closed 3-form. In this thesis, we first investigate this $PSU(3)$-geometry further. We give necessary conditions for the existence of a topological $PSU(3)$-structure (that is, a reduction of the structure group to $PSU(3)$ acting through its adjoint representation), deriving various obstructions to such a reduction. For compact manifolds, we also find sufficient conditions if the $PSU(3)$-structure lifts to an $SU(3)$-structure, and we construct non-trivial compact examples of integrable $PSU(3)$-structures. Moreover, we give a Riemannian characterisation of topological $PSU(3)$-structures through an invariant spinor-valued 1-form and show that the $PSU(3)$-structure is integrable if and only if this spinor-valued 1-form is harmonic with respect to the twisted Dirac operator. Secondly, we define new generalisations of integrable $G_2$- and $Spin(7)$-manifolds which can be transformed by the action of both diffeomorphisms and 2-forms. These are defined by special closed even or odd forms. Contraction on the vector bundle $T \oplus T^*$ defines an inner product of signature $(n,n)$, and even or odd forms can then be naturally interpreted as spinors for a spin structure on $T \oplus T^*$. As such, the special forms we consider induce reductions from $Spin(7,7)$ or $Spin(8,8)$ to a stabiliser subgroup conjugate to $G_2 \times G_2$ or $Spin(7) \times Spin(7)$. They also induce a natural Riemannian metric for which we can choose a spin structure. Again we state necessary and sufficient conditions for the existence of such a reduction by means of spinors for a spin structure on $T$, and we classify topological $G_2 \times G_2$-structures up to vertical homotopy.
Forms stabilised by $G_2 \times G_2$ are generic, and an integrable structure arises as the critical point of a generalised variational principle. We prove that the integrability conditions on forms imply the existence of two linear metric connections whose torsions are skew, closed, and sum to zero. In particular, we show these integrability conditions to be equivalent to the supersymmetry equations on spinors in type IIA/B supergravity with NS-NS background fields. We explicitly determine the Ricci tensor and show that over compact manifolds only trivial solutions exist. Using the variational approach, we derive weaker integrability conditions analogous to weak holonomy $G_2$. Examples of generalised $G_2$- and $Spin(7)$-structures are constructed by the device of T-duality.
|
42 |
Corporate governance and firm outcomes: causation or spurious correlation? Tan, David Tatwei. Banking & Finance, Australian School of Business, UNSW. January 2009
The rapid growth of financial markets and the increasing diffusion of corporate ownership have placed tremendous emphasis on the effectiveness of corporate governance in resolving agency conflicts within the firm. This study investigates the relation between corporate governance and firm performance/failure by implementing various econometric modelling methods to disentangle causal relations from spurious correlations. Using a panel dataset of Australian firms, a comprehensive suite of corporate governance mechanisms is considered, including the ownership, remuneration, and board structures of the firm. Initial ordinary least squares (OLS) and fixed-effects panel specifications report significant causal relations between various corporate governance measures and firm outcomes. However, the dynamic generalised method of moments (GMM) results indicate that no causal relations exist once the effects of simultaneity, dynamic endogeneity, and unobservable heterogeneity are taken into account. Moreover, these results remain robust when accounting for the firm's propensity for fraud. The findings support the equilibrium theory of corporate governance and the firm, suggesting that a firm's corporate governance structure is an endogenous characteristic determined by other firm factors, and that any observed relations between governance and firm outcomes are spurious in nature. Chapter 2 examines the corporate governance and firm performance relation. Using a comprehensive suite of corporate governance measures, it finds no evidence of a causal relation between corporate governance and firm performance when accounting for the biases introduced by simultaneity, dynamic endogeneity, and unobservable heterogeneity; this result is consistent across all firm performance measures. Chapter 3 explores the relation between corporate governance and the likelihood of firm failure by implementing the Merton (1974) model of firm valuation.
Similarly, no significant causal relations between a firm's corporate governance structure and its likelihood of failure are detected when accounting for the influence of endogeneity on the parameter estimates. Chapter 4 re-examines the corporate governance and firm performance/failure relation within the context of corporate fraud. Using KPMG and ASIC fraud databases, the corporate governance and firm outcome relations are estimated whilst accounting for the firms' vulnerability to corporate fraud. This chapter finds no evidence of a causal relation between corporate governance and firm outcomes when conditioning on a firm's propensity for fraud.
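The dynamic-panel argument above can be illustrated with a toy simulation (not the thesis's actual GMM specification; all variable names and parameter values below are invented for illustration): when an unobserved firm effect drives both governance and performance, pooled OLS reports a spurious governance "effect" that a first-differenced estimator, which sweeps out the firm effect, does not.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 500, 8
eta = rng.normal(size=N)                     # unobserved firm quality
g = eta[:, None] + rng.normal(size=(N, T))   # governance tracks firm quality
rho, beta = 0.5, 0.0                         # true causal effect of governance is zero
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t-1] + beta * g[:, t] + eta + rng.normal(size=N)

def ols(X, z):
    return np.linalg.lstsq(X, z, rcond=None)[0]

# Pooled OLS in levels: eta sits in the error term and is correlated with g
X = np.column_stack([np.ones(N * (T - 1)), y[:, :-1].ravel(), g[:, 1:].ravel()])
b_ols = ols(X, y[:, 1:].ravel())

# First differences sweep out eta; the differenced governance term is then
# uncorrelated with the differenced error, so its coefficient is consistent
dy, dy1, dg = np.diff(y[:, 1:]), np.diff(y[:, :-1]), np.diff(g[:, 1:])
b_fd = ols(np.column_stack([dy1.ravel(), dg.ravel()]), dy.ravel())

print(f"OLS governance 'effect': {b_ols[2]:.3f}")   # spuriously positive
print(f"FD  governance  effect : {b_fd[1]:.3f}")    # near the true zero
```

Full dynamic-panel GMM additionally instruments the lagged dependent variable with deeper lags; the sketch only shows why the levels estimate is spurious.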
|
44 |
Sports arenas in Sweden: A study investigating the impact of sports arenas on net migration and amenity premiums. Gambina, Andrew. January 2018
This paper examines the impact of the building or renovation of a sports arena on net migration and amenity premiums. Swedish municipal data are collected for 289 municipalities over the period 1999 to 2016. The econometric analysis makes use of fixed effects (FE) and feasible generalised least squares (FGLS) estimation techniques. This study builds on the growing literature on the intangible benefits of sports arenas and is one of the few Swedish studies of its kind. The results show that a sports arena built in year t realises a 3.458% increase in net migration in year t + 5 for arenas used by football and ice hockey teams in the highest and second-highest leagues.
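The FGLS specification is not given in the abstract; a minimal sketch of the generic two-step feasible GLS recipe (OLS, then a model of the residual variance, then weighted least squares), on simulated heteroskedastic data with assumed functional forms, looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(1, 5, n)                 # a hypothetical municipality-level covariate
sigma = 0.5 * x                          # error spread grows with x (heteroskedastic)
y = 2.0 + 0.8 * x + rng.normal(0, sigma)

X = np.column_stack([np.ones(n), x])

# Step 1: ordinary least squares
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_ols

# Step 2: model the error variance, here log(e^2) regressed on log(x)
g, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), np.log(x)]),
                        np.log(resid**2), rcond=None)
w = 1.0 / np.exp(g[0] + g[1] * np.log(x))   # inverse estimated variance

# Step 3: weighted (feasible GLS) fit
Xw, yw = X * np.sqrt(w)[:, None], y * np.sqrt(w)
b_fgls, *_ = np.linalg.lstsq(Xw, yw, rcond=None)

print(b_ols, b_fgls)   # both roughly unbiased; FGLS has the smaller variance
```

Both estimators recover the true slope here; the payoff of FGLS is efficiency, which matters for the small arena effects the study estimates.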
|
45 |
A Formal Proof of the Feit-Higman Theorem in Agda. Rao, Balaji R. January 2014
In this thesis we present a formalisation of the combinatorial part of the proof of the Feit-Higman theorem on generalised polygons. Generalised polygons are abstract geometric structures that generalise ordinary polygons and projective planes; they are closely related to finite groups.
The formalisation is carried out in Agda, a dependently typed functional programming language and proof assistant based on the intuitionistic type theory of Per Martin-Löf.
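For reference, the combinatorial statement being formalised is the classical one (as standardly stated; the thesis's exact formulation may differ):

```latex
% Feit-Higman (1964): a finite generalised n-gon of order (s,t),
% with s \ge 2 and t \ge 2, exists only when
n \in \{2,\, 3,\, 4,\, 6,\, 8\}.
```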
|
46 |
How Does the New Keynesian Phillips Curve Forecast the Rate of Inflation in the Czech Economy? / Jak nová keynesiánská Phillipsova křivka odhaduje míru inflace v české ekonomice? Dřímal, Marek. January 2011
This analysis studies the New Keynesian Phillips Curve: its development out of RBC theory and DSGE modelling through the incorporation of nominal rigidities, its various specifications, and the empirical issues it raises. Estimates on Czech macroeconomic data using the Generalised Method of Moments show that the hybrid New Keynesian Phillips Curve, with the labour income share or the real unit labour cost as the driving variable, can be considered an appropriate model of inflation in the Czech Republic. Compared with other researchers' estimates based on US data, the inflation process in the Czech Republic exhibits a higher degree of backward-looking behaviour.
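The hybrid specification referred to above is commonly written as follows (notation assumed here, not taken from the thesis):

```latex
% Hybrid New Keynesian Phillips Curve, with s_t the labour income share
% (or real unit labour cost) as the driving variable:
\pi_t = \gamma_f\,\mathbb{E}_t[\pi_{t+1}] + \gamma_b\,\pi_{t-1} + \lambda\,s_t .

% GMM estimation uses the rational-expectations orthogonality condition
% for instruments z_t in the date-t information set:
\mathbb{E}\big[(\pi_t - \gamma_f\,\pi_{t+1} - \gamma_b\,\pi_{t-1} - \lambda\,s_t)\,z_t\big] = 0 .
```

A larger estimated $\gamma_b$ relative to $\gamma_f$ is what "higher backward-looking behaviour" refers to.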
|
47 |
Contexts preferred for use in mathematics by Swaziland high-performing public schools' junior secondary learners. Ngcobo, Minenhle Sthandile Faith. January 2011
Philosophiae Doctor - PhD / At primary school, learners are excited about mathematics. This may indicate that learning related to familiar contexts, connected to the learners' interests, values and goals, is necessary for motivation. At secondary school level, learners begin to question the applicability of certain topics in the school syllabus and sometimes do not see the necessity of mathematics for their future careers, an indication that they are apprehensive about the relevance of mathematics in various contexts. Relevance, however, has a point of reference: what is relevant to a teacher is not necessarily relevant to the learner, and what is relevant to a textbook writer might not be relevant to the textbook reader. As mathematics educators endeavour to encourage learners to appreciate the relevance of mathematics to everyday life, it is important to be aware of learners' interests, and crucial to be informed about the subject areas they desire to know about, in order to plan classroom activities that will engage them in purposeful activity. Usually, contexts for learning are chosen by adults without conferring with learners at any point. The present study investigated learners' preferences for contexts to use in learning school mathematics, and sought to establish the motivations learners have for preferring particular contexts. The problem the study addressed was the absence of learners' input into the contexts used to learn mathematics; the aim was to find out which contexts learners preferred and the reasons they gave for their preferences. It is important to be aware of learners' preferences when choosing contexts to use in teaching: preferences improve motivation and learning. Furthermore, consulting learners sends a message that they matter and have an important role to play in their education. / South Africa
|
48 |
Community Participation in Poverty Reduction Interventions: Examining the Factors that Impact on the Community-Based Organisation (CBO) Empowerment Project in Ghana. Bayor, Isaac. January 2010
Masters in Public Administration - MPA / In this mini-thesis I argue that community participation does not automatically facilitate gains for the poor. My main assumption is that internal rigidities in communities, such as weak social capital, culture, trust and reciprocity, affect mutual cooperation towards collective community gains. I used two communities where a community empowerment project is implemented as a case study to demonstrate that the success of community participation is contingent on the stocks of social capital in the community. The results show that the responsiveness of the two communities to the project activities differs with their stocks of social capital. I found that trust among community members facilitates information flow in the community, and that the level of trust is related to community members' sources of information about development activities in the community. I also found that solidarity is an important dimension of social capital, determining community members' willingness to help one another and to participate in activities towards collective community gain. The research also demonstrated that community members' perception of the target beneficiaries of projects - whether they represent the interest of the majority of the community or only that of community leaders - influences the level of confidence in, and ownership of, the project. From my research findings, I concluded that for community participation to work successfully, development managers need to identify the stocks of social capital in the community, which will form the basis for determining the level of engagement with community members in the participatory process. / South Africa
|
49 |
Generalised beta type II distributions - emanating from a sequential process. Adamski, Karien. January 2013
This study focuses on the development of a generalised multivariate beta type II distribution, as well as its noncentral and bimatrix counterparts with positive domain. These models emanate from a sequential quality monitoring procedure with the normal and multivariate normal distributions as the underlying process distributions. Three scenarios are considered, namely:
1. the variance of a normal process is monitored while the mean remains unchanged;
2. as above, but the known mean also encounters a sustained shift;
3. the covariance structure of a multivariate normal distribution is monitored with the known mean vector unchanged.
The statistics originating from these scenarios are constructed from different dependent chi-squared or Wishart ratios. Exact expressions are derived for the probability density functions of these statistics. These new distributions contribute to the statistical discipline in that they can serve as alternatives to existing probability models and can be used to determine the performance of the quality monitoring procedure. / Thesis (PhD)--University of Pretoria, 2013. / gm2014 / Statistics / unrestricted
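The univariate building block behind such chi-squared ratio statistics is the following standard result (the thesis's generalised, noncentral and bimatrix variants extend it to dependent chi-squared and Wishart ratios):

```latex
% If U ~ chi^2_m and V ~ chi^2_n are independent, then W = U/V has the
% beta type II (beta prime) density with parameters a = m/2, b = n/2:
f(w) = \frac{w^{m/2-1}}{B\!\left(\tfrac{m}{2},\tfrac{n}{2}\right)\,(1+w)^{(m+n)/2}},
\qquad w > 0 .
```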
|
50 |
A Bayesian approach to energy monitoring optimization. Carstens, Herman. January 2017
This thesis develops methods for reducing energy Measurement and Verification (M&V) costs through
the use of Bayesian statistics. M&V quantifies the savings of energy efficiency and demand side
projects by comparing the energy use in a given period to what that use would have been, had no
interventions taken place. The case of a large-scale lighting retrofit study, where incandescent lamps
are replaced by Compact Fluorescent Lamps (CFLs), is considered. These projects often need to be
monitored over a number of years with a predetermined level of statistical rigour, making M&V very
expensive.
M&V lighting retrofit projects have two interrelated uncertainty components that need to be addressed,
and which form the basis of this thesis. The first is the uncertainty in the annual energy use of the
average lamp, and the second the persistence of the savings over multiple years, determined by the
number of lamps that are still functioning in a given year. For longitudinal projects, the results from
these two aspects need to be obtained for multiple years.
This thesis addresses these problems by using the Bayesian statistical paradigm. Bayesian statistics is
still relatively unknown in M&V, and presents an opportunity for increasing the efficiency of statistical
analyses, especially for such projects.
After a thorough literature review, especially of measurement uncertainty in M&V, and an introduction
to Bayesian statistics for M&V, three methods are developed. These methods address the three types
of uncertainty in M&V: measurement, sampling, and modelling. The first method is a low-cost energy
meter calibration technique. The second method is a Dynamic Linear Model (DLM) with Bayesian
Forecasting for determining the size of the metering sample that needs to be taken in a given year.
The third method is a Dynamic Generalised Linear Model (DGLM) for determining the size of the
population survival survey sample.
It is often required by law that M&V energy meters be calibrated periodically by accredited laboratories.
This can be expensive and inconvenient, especially if the facility needs to be shut down for meter
installation or removal. Some jurisdictions also require meters to be calibrated in-situ, that is, in their operating
environments. However, it is shown that metering uncertainty makes a relatively small contribution to
overall M&V uncertainty in the presence of sampling, and therefore the costs of such laboratory
calibration may outweigh the benefits. The proposed technique uses another commercial-grade meter
(which also measures with error) to achieve this calibration in-situ. This is done by accounting for the
mismeasurement effect through a mathematical technique called Simulation Extrapolation (SIMEX).
The SIMEX result is refined using Bayesian statistics, and achieves acceptably low error rates and
accurate parameter estimates.
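SIMEX itself is a simple idea. The following toy sketch (plain, non-Bayesian SIMEX for a linear regression slope, with all data and parameter values invented for illustration; the thesis's Bayesian refinement is not reproduced) shows the deliberate-noise-then-extrapolate mechanism described above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 5000, 2.0
x = rng.normal(0, 1, n)                    # true (unobserved) signal
tau = 0.6                                  # assumed-known measurement-error std
x_obs = x + rng.normal(0, tau, n)          # what the error-prone meter reports
y = beta * x + rng.normal(0, 0.5, n)

def slope(u, v):
    return np.cov(u, v)[0, 1] / np.var(u)

naive = slope(x_obs, y)                    # attenuated toward zero

# SIMEX: deliberately add extra noise (variance lambda * tau^2), watch the
# slope decay, then extrapolate the trend back to lambda = -1, i.e. the
# hypothetical error-free measurement.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([slope(x_obs + rng.normal(0, np.sqrt(l) * tau, n), y)
                for _ in range(50)]) for l in lambdas]
quad = np.polyfit(lambdas, est, 2)         # quadratic extrapolant
simex = np.polyval(quad, -1.0)

print(f"naive {naive:.2f}  SIMEX {simex:.2f}  true {beta}")
```

The naive slope is attenuated by the factor $\sigma_x^2/(\sigma_x^2+\tau^2)$; SIMEX recovers most of the attenuation without ever observing the true signal.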
The second technique uses a DLM with Bayesian forecasting to quantify the uncertainty in metering
only a sample of the total population of lighting circuits. A Genetic Algorithm (GA) is then applied
to determine an efficient sampling plan. Bayesian statistics is especially useful in this case because
it allows the results from previous years to inform the planning of future samples. It also allows for
exact uncertainty quantification, where current confidence interval techniques do not always do so.
Results show a cost reduction of up to 66%, but this depends on the costing scheme used. The study
then explores the robustness of the efficient sampling plans to forecast error, and finds a 50% chance
of undersampling for such plans, due to the standard M&V sampling formula, which lacks statistical
power.
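The DLM specification is not given in the abstract; a minimal local-level DLM with conjugate Bayesian (Kalman-style) updating, using assumed variances and simulated metering data, illustrates the sequential forecast-update cycle such a sampling planner builds on:

```python
import numpy as np

rng = np.random.default_rng(3)
T, true_level = 24, 100.0                  # months of simulated monthly kWh readings
W, V = 4.0, 25.0                           # assumed evolution and observation variances
y = true_level + np.cumsum(rng.normal(0, np.sqrt(W), T)) + rng.normal(0, np.sqrt(V), T)

m, C = 100.0, 1000.0                       # vague prior on the underlying level
forecasts = []
for t in range(T):
    R = C + W                              # prior at time t: the level drifts by W
    f, Q = m, R + V                        # one-step-ahead forecast and its variance
    forecasts.append((f, Q))
    A = R / Q                              # adaptive (Kalman) gain
    m, C = m + A * (y[t] - f), A * V       # posterior after observing y[t]

f, Q = m, C + W + V                        # next-month forecast with full uncertainty
print(f"forecast {f:.1f} kWh, 95% interval +/- {1.96 * np.sqrt(Q):.1f}")
```

The exact forecast variance `Q` is what allows the sample-size search (the GA in the thesis) to trade cost against a stated precision constraint, rather than relying on an approximate confidence-interval formula.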
The third technique uses a DGLM in the same way as the DLM, except for population survival
survey samples and persistence studies, not metering samples. Convolving the binomial survey result
distributions inside a GA is problematic, and instead of Monte Carlo simulation, a relatively new
technique called Mellin Transform Moment Calculation is applied to the problem. The technique is
then expanded to model stratified sampling designs for heterogeneous populations. Results show a
cost reduction of 17-40%, although this depends on the costing scheme used.
Finally the DLM and DGLM are combined into an efficient overall M&V plan where metering and
survey costs are traded off over multiple years, while still adhering to statistical precision constraints.
This is done for simple random sampling and stratified designs. Monitoring costs are reduced by
26-40% for the costing scheme assumed.
The results demonstrate the power and flexibility of Bayesian statistics for M&V applications, both in
terms of exact uncertainty quantification, and by increasing the efficiency of the study and reducing
monitoring costs. / Thesis (PhD)--University of Pretoria, 2017. / National Research Foundation / Department of Science and Technology / National Hub for the Postgraduate Programme in Energy Efficiency and Demand Side Management / Electrical, Electronic and Computer Engineering / PhD / Unrestricted
|