  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Structural adaptive models in financial econometrics

Mihoci, Andrija 05 October 2012
Modern methods in statistics and econometrics successfully deal with stylized facts observed on financial markets. The presented techniques aim to understand the dynamics of financial market data more accurately than traditional approaches, and economic and financial benefits are achievable. The results are evaluated in practical examples that focus mainly on forecasting financial data. Our applications include: (i) modelling and forecasting liquidity supply, (ii) localizing multiplicative error models, and (iii) providing evidence for the empirical pricing kernel paradox across countries.
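A multiplicative error model (MEM), the model class referred to in application (ii), specifies a positive-valued series (trading volume, duration, volatility) as the product of a conditional mean and a unit-mean error, with a GARCH-like recursion for the mean. As a rough sketch only, not the thesis's localised estimator, a minimal MEM(1,1) filter with invented parameter values might look like:

```python
import numpy as np

def mem_filter(x, omega, alpha, beta):
    """Conditional means mu_t of a MEM(1,1):
        x_t = mu_t * eps_t,  eps_t >= 0,  E[eps_t] = 1,
        mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}.
    Returns mu_1..mu_{T+1}; the last entry is the one-step-ahead forecast."""
    mu = np.empty(len(x) + 1)
    mu[0] = x.mean()                      # common initialisation: sample mean
    for t in range(len(x)):
        mu[t + 1] = omega + alpha * x[t] + beta * mu[t]
    return mu

# Illustrative run on simulated positive-valued data (e.g. volumes);
# omega/alpha/beta are invented, not estimated, parameters.
rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=500)
mu = mem_filter(x, omega=0.1, alpha=0.2, beta=0.7)
one_step_forecast = mu[-1]
```

A "local" variant in the thesis's sense would re-estimate such parameters over adaptively chosen data windows rather than assume they are constant.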
242

The effect of mineral addition on the pyrolysis products derived from typical Highveld coal / Leon Roets

Roets, Leon January 2014
Mineral matter affects various coal properties as well as the yield and composition of products released during thermal processes. This necessitates investigation of the effect of the inherent minerals on the products derived during pyrolysis, as pyrolysis forms the basis of most coal utilisation processes. A real challenge in this research has been quantifying the changes observed and attributing them to specific minerals; thus far it has been deemed impossible to predict product yields based on the mineral composition of the parent coal. Limited research regarding these aspects has been done on South African coal, and the characterisation of pyrolysis products in previous studies was usually limited to one product phase. A novel approach was followed in this study, and the stated challenges were effectively addressed. A vitrinite-rich South African coal from the Highveld coalfield was prepared to an undersize of 75 μm and divided into two fractions. HCl/HF acid washing reduced the ash yield from 14.0 wt% d.b. to 2.0 wt% d.b. (proximate analysis). Pyrolysis was carried out with the North-West University (NWU) Fischer Assay setup at 520, 750 and 900°C under an N2 atmosphere at atmospheric pressure. The effects of acid washing and of mineral addition on the derived pyrolysis products were evaluated. Acid washing led to lower water and tar yields, whilst the gas yields increased and the char yields were unaffected. The higher gas yield can be related to increased porosity after mineral removal, as revealed by Brunauer-Emmett-Teller (BET) CO2 adsorption surface area analysis of the derived chars. Gas chromatography (GC) analyses of the derived pyrolysis gases indicated that the gas derived from the acid-washed coal fraction (AW TWD) contained higher yields of H2, CH4, CO2, C2H4, C2H6, C3H4, C3H6 and C4s than the gas derived from the raw coal fraction (TWD). The CO yield from the TWD coal was higher at all final pyrolysis temperatures.
Differences in gas yields were related to increased tar cracking as well as lower hydrogen transfer and dehydrogenation of the acid-washed chars. Analyses of the tar fraction by means of simulated distillation (Simdis), gas chromatography-mass spectrometry with flame ionization detection (GC-MS-FID) and size exclusion chromatography with ultraviolet detection (SEC-UV) indicated that the AW TWD derived tars were more aromatic in nature and contained a greater proportion of heavier-boiling components, which increased with increasing final pyrolysis temperature. The chars were characterised by proximate, ultimate, X-ray diffraction (XRD), X-ray fluorescence (XRF), diffuse reflectance infrared Fourier-transform (DRIFT) and BET CO2 analyses. Either 5 wt% calcite, dolomite, kaolinite, pyrite or quartz was added to the acid-washed fraction (AW TWD) in order to determine the effect of these minerals on the pyrolysis products. These minerals were identified, by XRD and quantitative evaluation of minerals by scanning electron microscopy (QEMSCAN) analyses, as the most prominent mineral phases in the Highveld coal used in this study. Mineral activity was found to decrease in the order calcite/dolomite > pyrite > kaolinite >>> quartz. Calcite and dolomite addition led to a decrease in tar yield, whilst the gas yields increased. Markedly increased water yields were also observed with the addition of calcite, dolomite and pyrite. Kaolinite addition led to increased tar, char and gas yields at 520°C, whilst the tar yield decreased at 750°C. Pyrite addition led to decreased tar and gas yields. Quartz addition had no noteworthy effect on pyrolysis yields and composition, except for a decrease in char yield at all final pyrolysis temperatures and an increased gas yield at 520°C. Regarding the composition of the pyrolysis products, the various minerals had differing effects.
Calcite and dolomite affected the composition of the gas, tar and char phases most significantly, showing definite catalytic activity. Tar producers should take note, as the presence of these minerals in the coal feedstock could have a significant effect on tar yield and composition. Kaolinite and pyrite showed some catalytic activity under specific conditions. Model coal-mineral mixtures confirmed synergism between coal-mineral and mineral-mineral interactions. Although some correlation was observed between the pyrolysis products derived from the model coal-mineral mixtures and those of the TWD coal, it was not possible to entirely mimic the behaviour of the coal prior to acid washing. Linear regression models were developed to predict the gas, tar and char yields (d.m.m.f.) with mineral composition and pyrolysis temperature as variables, resulting in R2 coefficients of 0.837, 0.785 and 0.846, respectively. Models for the prediction of H2, CO, CO2 and CH4 yields with the same variables resulted in R2 coefficients of 0.917, 0.702, 0.869 and 0.978, respectively. These models will serve as a foundation for future work and demonstrate that it is feasible to predict pyrolysis yields from mineral composition. Extending the study to coals of different rank could make the models more widely applicable and deliver a valuable contribution to industry. / MIng (Chemical Engineering), North-West University, Potchefstroom Campus, 2015
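The kind of linear regression model described, with mineral composition and final pyrolysis temperature as predictors of a product yield, can be sketched with ordinary least squares. The design matrix and yield values below are entirely invented for illustration; they are not the thesis's data or fitted coefficients:

```python
import numpy as np

# Synthetic design: added-mineral fractions (wt%) and final pyrolysis
# temperature (°C); the response is a tar yield (wt%, d.m.m.f.).
minerals_and_temp = np.array([
    [5, 0, 0, 0, 0, 520],   # calcite
    [0, 5, 0, 0, 0, 520],   # dolomite
    [0, 0, 5, 0, 0, 750],   # kaolinite
    [0, 0, 0, 5, 0, 750],   # pyrite
    [0, 0, 0, 0, 5, 900],   # quartz
    [0, 0, 0, 0, 0, 900],   # no addition
    [5, 0, 0, 0, 0, 900],
    [0, 0, 5, 0, 0, 520],
], dtype=float)
tar_yield = np.array([6.1, 6.3, 9.8, 7.2, 8.0, 8.4, 5.5, 10.5])

X = np.column_stack([np.ones(len(tar_yield)), minerals_and_temp])  # intercept
beta, *_ = np.linalg.lstsq(X, tar_yield, rcond=None)               # OLS fit
resid = tar_yield - X @ beta
ss_tot = ((tar_yield - tar_yield.mean()) ** 2).sum()
r2 = 1.0 - (resid ** 2).sum() / ss_tot                             # fit quality
```

The reported R2 values (e.g. 0.837 for gas yield) would come out of exactly this final step, computed on the real experimental matrix.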
244

Promiscuity and Selectivity in Phosphoryl Transferases

Barrozo, Alexandre January 2016
Phosphoryl transfers are essential chemical reactions in key life processes, including energy production, signal transduction and protein synthesis. In aqueous solution they are extremely slow, with timescales reaching millions of years. To make life possible, the enzymes that catalyse phosphoryl transfer (phosphoryl transferases) have evolved into tremendously proficient catalysts, accelerating these reactions to the millisecond timescale. Owing to the electronic structure of the phosphorus atom, understanding how hydrolysis of phosphate esters occurs is a complex task. Experimental studies on the hydrolysis of phosphate monoesters with acidic leaving groups suggest a concerted mechanism with a loose, metaphosphate-like transition state. Theoretical studies have suggested two possible concerted pathways, with either loose or tight transition-state geometries, plus the possibility of a stepwise mechanism involving a phosphorane intermediate. Different pathways were shown to be energetically preferable depending on the acidity of the leaving group. Here we performed computational studies to revisit how this mechanistic shift occurs along a series of aryl phosphate monoesters, suggesting possible factors leading to the change. The fact that distinct pathways can occur in solution could mean that the same is possible in an enzyme active site. We performed simulations of the catalytic activity of β-phosphoglucomutase, suggesting that two mechanisms can operate simultaneously for the phosphoryl transfer. Interestingly, several phosphoryl transferases have been shown to catalyse not only phosphate ester hydrolysis but also the cleavage of other compounds. We modelled the catalytic mechanisms of two highly promiscuous members of the alkaline phosphatase superfamily.
Our model reproduces key experimental observables and shows that these enzymes are electrostatically flexible, employing the same set of residues to enhance the rates of different reactions, with different electrostatic contributions per residue.
245

Empirical models of the incidence and spread of tropical fires

Fletcher, Imogen Nancy January 2014
Tropical wildfires account for up to 93% of global burnt area and approximately 85% of the resulting carbon emissions, yet are significantly under-represented in existing fire models. These models are predominantly process-based, require a multitude of input datasets, parameters and calculations, and are difficult to reproduce or to use independently of a dynamic global vegetation model (DGVM). The aim of this thesis is to develop empirical parameterisations of tropical fire occurrence and spread that improve on the accuracy of existing models and that can easily be implemented either as standalone models or within a DGVM. These models are based on well-documented relationships from the literature. An index of potential fire is produced based on the observed peak of fire activity at intermediate levels of productivity and aridity. This can be converted into expected fire counts using a simple, observation-derived parameter map. Fire sizes have been shown to follow an approximately fractal distribution in a range of ecosystems, and this is used to develop a new burnt area model. Replacing the fire count and burnt area calculations of existing fire models with these new parameterisations improves the spatial distribution of the resulting estimates, while giving temporally comparable predictions to the original models. The magnitude of the resulting burnt area estimates is also improved. Empirical fire modelling is therefore a viable alternative to current process-based methods, and makes practical use of theories that are well documented in the literature. These models require few input variables and can easily be incorporated into a DGVM. However, further work to improve the temporal accuracy and dynamic behaviour of these models would be beneficial, as would a method to link them to parameterisations of combustion and trace gas emissions.
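The step from a fractal fire-size distribution to burnt area can be illustrated with a truncated power law: if individual fire sizes follow a Pareto-like density proportional to s^(-beta) between a minimum and maximum size, expected burnt area is simply the fire count times the distribution's mean. The functional form and every number below are illustrative assumptions, not the thesis's actual parameterisation:

```python
def expected_burnt_area(n_fires, s_min, s_max, beta):
    """Expected total burnt area when individual fire sizes follow a
    truncated power law (Pareto) with pdf proportional to s**(-beta)
    on [s_min, s_max]; valid for beta not equal to 1 or 2."""
    # normalising constant of the truncated density
    c = (1.0 - beta) / (s_max ** (1.0 - beta) - s_min ** (1.0 - beta))
    # mean fire size under that density
    mean_size = c * (s_max ** (2.0 - beta) - s_min ** (2.0 - beta)) / (2.0 - beta)
    return n_fires * mean_size

# e.g. 100 fires with sizes between 1 and 1000 ha, exponent 2.5
area = expected_burnt_area(100, 1.0, 1000.0, 2.5)
```

In a gridded model, n_fires would come from the fire-count parameterisation per cell and month, so burnt area inherits the spatial pattern of the occurrence index.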
246

Food Quality Effects on Zooplankton Growth and Energy Transfer in Pelagic Freshwater Food Webs / Effekter av födokvalitet på djurplanktons tillväxt och på energiöverföringen i födovävar i sjöar

Persson, Jonas January 2007
Poor food quality can have large negative effects on zooplankton growth, and this can in turn affect food web interactions. The main aims of this thesis were to study the importance of different food quality aspects in Daphnia, to identify potentially important differences among zooplankton taxa, and to put food quality research into a natural context by identifying the importance of food quality and quantity in lakes of different nutrient content. In the first experiment, the RNA:DNA ratio was positively related to the somatic growth rate of Daphnia, supporting a connection between phosphorus (P) content, RNA content, and growth rate. The second experiment showed that eicosapentaenoic acid (EPA) was important for Daphnia somatic growth, and 0.9 µg EPA mg C-1 was identified as the threshold below which negative effects on Daphnia growth occurred. A field survey identified patterns in the polyunsaturated fatty acid (PUFA) content of zooplankton that could be explained by taxonomy and trophic position: Cladocera were enriched in EPA and arachidonic acid (ARA) relative to seston, whereas Copepoda were primarily enriched in docosahexaenoic acid (DHA). In a whole-lake experiment, gentle fertilization of an oligotrophicated reservoir increased the seston P content and the biomass of high-quality phytoplankton (Cryptophyceae, high EPA content); this was followed by increases in zooplankton and fish biomasses. An empirical model based on data from a literature survey predicted that food quantity is most important for zooplankton growth in oligotrophic lakes, whereas food quality factors are more important in eutrophic lakes. Thus, zooplankton growth, and energy transfer efficiency in the food web, is predicted to be highest in mesotrophic lakes. The results suggest that the strength and nature of food quantity and quality limitation of Daphnia growth vary with lake trophic state, and that some combination of food quantity and/or quality limitation should be expected in nearly all lakes.
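A threshold like the reported 0.9 µg EPA mg C-1 is often operationalised as a broken-stick (piecewise linear) growth model. As a sketch only: the functional form and the maximal rate below are invented assumptions; only the threshold value comes from the abstract:

```python
def daphnia_growth(epa, g_max=0.3, epa_threshold=0.9):
    """Broken-stick model of Daphnia somatic growth rate (per day) as a
    function of seston EPA content (µg EPA per mg C).  Below the
    threshold, growth declines linearly with EPA; at or above it,
    growth is EPA-saturated.  g_max = 0.3 is an invented value."""
    if epa >= epa_threshold:
        return g_max
    return g_max * epa / epa_threshold
```

Fitting such a model to growth-rate data is one way the threshold itself can be estimated, as the breakpoint that minimises residual error.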
247

Modeling & optimisation of coarse multi-vesiculated particles

Clarke, Stephen Armour 2012
Thesis (MScEng)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Multi-vesiculated particles (MVP) are synthetic, insoluble polymeric particles containing a multitude of vesicles (micro-voids). The particles are generally produced and used as a suspension in an aqueous fluid and are therefore readily incorporated into latex paints as opacifiers. Coarse or suede MVP have a large volume-mean diameter (VMD), generally in the range of 35-60 μm, which makes them suitable for textured-effect paints. The general principle behind the MVP technology is that, as the particles dry, the vesicles drain of liquid and fill with air. The large refractive index difference between the polymer shell and air results in the scattering of incident light, which gives the MVP their white, opaque appearance and makes them suitable as an opacifier for the partial replacement of TiO2 in coating systems. Whilst coarse MVP have been successfully commercialized, insufficient understanding of the influence of the system parameters on the final product characteristics, coupled with the MVP's sensitivity towards the unsaturated polyester resin (UPR), resulted in a product with significant quality variation. On the other hand, these uncertainties provided the opportunity to model and optimise the MVP system: to develop a better understanding of the influence of the system parameters on the product characteristics, to develop a model that mathematically describes these relationships, and to optimise the system to achieve the product specifications whilst simultaneously minimising the variation observed in the product characteristics.
The primary MVP characteristics for this study were the particle size distribution (quantified by the volume-mean diameter, VMD) and the reactor buildup. The approach taken was to analyse the system, determine all possible system factors that may affect it, and then reduce the total number of factors by selecting those with a significant influence on the characteristics of interest. A model was then developed to mathematically describe the relationship between these significant factors and the characteristics of interest, utilising a set of statistical methods known as design of experiments (DoE). A screening DoE conducted on the identified system factors reduced them to a subset with a significant effect on the VMD and buildup. The UPR was characterised by its acid value and viscosity, and in combination with the identified significant factors a response surface model (RSM) was developed for the chosen design space, mathematically describing their relationship with the MVP characteristics. Utilising a DoE method known as robust parameter design (specifically, propagation of error), an optimised MVP system was determined numerically, which brought the product within specification and simultaneously reduced its sensitivity to the UPR. Validation of the response surface model indicated that the average error in the VMD prediction was 2.16 μm (5.16%), which compares well with the 1.96 μm standard deviation of replication batches. The high Pred-R2 value of 0.839 and the low validation error indicate that the model is well suited for predicting the VMD characteristic of the MVP system. Applying propagation of error to the model during optimisation resulted in an MVP process and formulation that brought the VMD response from the standard's average of 44.56 μm to the optimised system's average of 47.84 μm, significantly closer to the desired optimum of 47.5 μm.
The most notable value added by the propagation of error technique was a reduction of over 30% in the variation around the mean of the VMD due to the UPR, from the standard to the optimised MVP system. In addition to the statistical model, dimensional analysis (specifically the Buckingham-Π method) was applied to the MVP system to develop a semi-empirical dimensionless model for the VMD. The model parameters were regressed from the experimental data obtained from the DoE, and the model was compared with several models cited in the literature. The dimensionless model was not ideal for predicting the VMD, as indicated by an R2 value of 0.59 and a high average error of 21.25%. However, it described the VMD better than any of the models cited in the literature, many of which had negative R2 values and were therefore not suitable for modelling the MVP system.
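The propagation-of-error idea used in the robust parameter design above can be sketched with a first-order Taylor approximation: the standard deviation that noise factors (here the UPR's acid value and viscosity) transmit to the response is computed from the fitted model's gradients. The gradients and noise standard deviations below are invented for illustration, not values from the thesis:

```python
import math

def poe(grad_noise, sigma_noise):
    """First-order propagation of error: the standard deviation
    transmitted to a response y = f(x, z) by noise factors z is
    sd(y) ~= sqrt(sum_i (df/dz_i)^2 * sigma_i^2)."""
    return math.sqrt(sum((g * s) ** 2
                         for g, s in zip(grad_noise, sigma_noise)))

# Illustrative: VMD sensitivity to UPR acid value and viscosity
# (made-up partial derivatives and noise sigmas, not fitted values)
sd_vmd = poe(grad_noise=[0.8, 1.5], sigma_noise=[2.0, 0.5])
```

Robust parameter design then searches the controllable-factor space for settings that keep the predicted VMD on target while minimising this transmitted standard deviation, which is how the 30% reduction in UPR-driven variation would be pursued.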
248

Conceptual and empirical advances in antitrust market definition with application to South African competition policy

Boshoff, Willem Hendrik 2011
Thesis (PhD)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: Delineating the relevant product and geographic market is an important first step in competition inquiries, as it permits an assessment of market power and substitutability. Critics often argue that market definition is arbitrary and increasingly unnecessary, as modern econometric models can directly predict the competitive effects of a merger or anti-competitive practice. Yet practical constraints (such as limited data) and legal considerations (such as case law precedent) continue to support a formal definition of the relevant market. Within this context, this dissertation develops three tools to improve market definition: two empirical tools for cases with limited data, and one conceptual decision-making tool to elucidate important factors and risks in market definition. The first tool involves a systematic analysis of consumer characteristics (i.e. the demographic and income profiles of consumers). Consumer characteristics can assist in defining markets, as consumers with similar characteristics tend to switch to similar products following a price rise. Econometric models therefore incorporate consumer characteristics data to improve price elasticity estimates. Even though data constraints often prevent the use of econometric models, a systematic analysis of consumer characteristics can still be useful for market definition. Cluster analysis offers a statistical technique to group products on the basis of the similarity of their consumers' characteristics. A recently concluded partial radio station merger in South Africa offers a case study for the use of consumer characteristics in defining markets. The second tool, or set of tools, involves tests for price co-movement.
Critics argue that price tests are not appropriate for defining markets, as these tests are based on the law of one price - which tests only for price linkages and not for the ability to raise prices. Price tests, however, are complements for existing market definition tools, rather than substitutes. Critics also argue that price tests suffer from low statistical power in discriminating close and less close substitutes. But these criticisms ignore inter alia the role of price tests as tools for gathering information and the range of price tests with better size and power properties that are available, including new stationarity tests and autoregressive models. A recently concluded investigation in the South African dairy industry offers price data to evaluate the market definition insights of various price tests. The third tool is conceptual in nature and involves a decision rule for defining markets. If market definition is a binary classification problem (a product is either 'in' or 'out' of the market), it faces risks of misclassification (incorrectly including or excluding a product). Analysts can manage these risks using a Bayesian decision rule that balances (1) the weight of evidence in favour of and against substitutability, (2) prior probabilities determined by previous cases and economic research, and (3) the loss function of the decision maker. The market definition approach adopted by the South African Competition Tribunal in the Primedia / Kaya FM merger investigation offers a useful case study to illustrate the implementation of such a rule in practice. / AFRIKAANSE OPSOMMING: Mededingingsake neem gewoonlik 'n aanvang met die afbakening van die relevante produk- en geografiese mark. Die markdefinisie-proses werp dikwels lig op markmag en substitusie-moontlikhede, en ondersteun dus die beoordeling van 'n mededingingsaak. 
Critics, however, regard market definition as arbitrary and even unnecessary, especially since econometric models can directly predict the effect of a merger or an anti-competitive practice on competition. Practitioners nevertheless still prefer to delineate markets formally, on both practical grounds (including data constraints that complicate econometric modelling) and legal grounds (including the role of case law precedent). This dissertation therefore develops three tools for the definition of markets: two empirical tools for cases where data are limited, as well as a conceptual tool for managing, among other things, the risks surrounding market definition. The first tool involves the systematic analysis of consumer characteristics, including the demographic and income profiles of consumers. Consumer characteristics shed light on substitution, since similar consumers tend to switch to similar products following a price rise. Econometric models therefore use consumer characteristics data to improve estimates of price elasticity. Although data constraints often limit econometric modelling, consumer characteristics can on their own still be useful for delineating the market. Cluster analysis offers a statistical method for a systematic investigation of consumer characteristics for market definition, in that it groups products on the basis of similar consumer characteristics. A recent investigation in South Africa concerning the partial merger of the Primedia and Kaya FM radio stations provides data to illustrate the use of cluster analysis and consumer characteristics for market definition purposes. The second tool for market definition involves statistical tests for relationships between the price time series of different products or regions.
These price tests are based on the law of one price and emphasise price linkages rather than the ability to raise prices (the ultimate focus of competition policy). This emphasis does not necessarily diminish the insights that price tests offer, however, since market definition often requires a comprehensive analysis. Critics also often describe the statistical discriminatory power of price tests as weak. This technical criticism views price tests as narrowly defined hypothesis tests rather than as tools for exploring substitution patterns, and it ignores a variety of new price tests with better discriminatory power, including new tests for stationarity and new autoregressive models. A recent competition investigation in the South African dairy industry provides price data with which to examine the performance of various price tests for geographic market definition. The third tool for the definition of markets involves a decision rule: market definition is viewed as a binary classification problem in which a product or region must be placed either 'inside' or 'outside' the market. Given that this classification takes place under uncertainty, market definition is exposed to risks of misclassification. Practitioners can manage these risks using a Bayesian decision rule, which balances (1) the weight of evidence for and against substitution, (2) prior probabilities as determined by previous competition cases and academic research, and (3) the loss function of the decision maker. The approach of the South African Competition Tribunal in the case concerning the partial merger of Primedia and Kaya FM offers a useful case study with which to demonstrate these principles.
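The Bayesian decision rule described as the third tool can be illustrated with a minimal sketch. Representing the weight of evidence as log likelihood ratios is one common formulation, and all numerical inputs below are hypothetical; this is not the dissertation's own implementation.

```python
import math

def include_in_market(prior_in, log_lr_evidence, loss_false_exclude, loss_false_include):
    """Decide whether a candidate product belongs in the relevant market.

    prior_in           -- prior probability of substitutability, e.g. informed
                          by previous cases and economic research
    log_lr_evidence    -- log likelihood ratios, one per piece of evidence for
                          (positive) or against (negative) substitutability
    loss_false_exclude -- decision maker's loss from wrongly excluding
    loss_false_include -- decision maker's loss from wrongly including
    """
    # Posterior odds = prior odds * product of evidence likelihood ratios
    # (sums of logs, for numerical convenience).
    log_odds = math.log(prior_in / (1 - prior_in)) + sum(log_lr_evidence)
    p_in = 1 / (1 + math.exp(-log_odds))

    # Include iff the expected loss of excluding exceeds that of including.
    return p_in * loss_false_exclude > (1 - p_in) * loss_false_include

# Hypothetical example: moderately sceptical prior, mixed evidence,
# symmetric losses.
print(include_in_market(0.4, [0.9, -0.2, 0.5], 1.0, 1.0))
```

The loss arguments capture the asymmetry the dissertation emphasises: a decision maker who fears wrongly excluding a substitute more than wrongly including one (say, a loss ratio of 5 to 1) will include a product even at a posterior probability well below one half.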
249

Some Novel Statistical Inferences

Li, Chenxue 12 August 2016 (has links)
In medical diagnostic studies, the area under the Receiver Operating Characteristic (ROC) curve (AUC) and the Youden index are two summary measures widely used to evaluate the diagnostic accuracy of a medical test with continuous results. The first half of this dissertation highlights ROC analysis, including an extension of the Youden index to the partial Youden index, as well as novel confidence interval estimation for the AUC and the Youden index in the presence of covariates in induced linear regression models. Extensive simulation results show that the proposed methods perform well with small to moderately sized samples. In addition, real examples are presented to illustrate the methods.

The latter half focuses on applications of the empirical likelihood method in economics and finance. Two models draw our attention. The first is the predictive regression model with independent and identically distributed errors. Some uniform tests have been proposed in the literature without distinguishing whether the predicting variable is stationary or nearly integrated. Here, we extend the empirical likelihood methods of Zhu, Cai and Peng (2014) for independent errors to the case of an AR error process. The proposed new tests do not require knowing whether the predicting variable is stationary or nearly integrated, or whether it has a finite or an infinite variance.

The second model is a GARCH(1,1) sequence or an AR(1) model with ARCH(1) errors. It is known that the observations have heavy tails and that the tail index is determined by an estimating equation. One can therefore estimate the tail index by solving the estimating equation with the unknown parameters replaced by the Quasi-Maximum Likelihood Estimator (QMLE), and a profile empirical likelihood method can be employed to construct a confidence interval for the tail index.
However, this requires that the errors of such a model have at least a finite fourth moment to ensure asymptotic normality with an n^{1/2} rate of convergence and the validity of Wilks' theorem. We show that the finite fourth moment condition can be relaxed by employing a Least Absolute Deviations Estimator (LADE) instead of the QMLE for the unknown parameters, noting that the estimating equation determining the tail index is invariant to a scale transformation of the underlying model. Furthermore, the proposed tail index estimators have a normal limit with an n^{1/2} rate of convergence under a minimal moment condition, which allows an infinite fourth moment, and Wilks' theorem holds for the proposed profile empirical likelihood methods. Hence a confidence interval for the tail index can be obtained without estimating any additional quantities such as the asymptotic variance.
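The two summary measures discussed in the first half have simple empirical forms: the AUC equals the Mann-Whitney probability that a diseased subject's score exceeds a healthy subject's, and the Youden index maximises sensitivity plus specificity minus one over cutoffs. A sketch with hypothetical test results (the dissertation's covariate-adjusted estimators are not reproduced here):

```python
import numpy as np

def empirical_auc(healthy, diseased):
    """AUC as the Mann-Whitney probability P(diseased score > healthy score)."""
    h, d = np.asarray(healthy), np.asarray(diseased)
    wins = (d[:, None] > h[None, :]).sum() + 0.5 * (d[:, None] == h[None, :]).sum()
    return wins / (len(h) * len(d))

def youden_index(healthy, diseased):
    """Maximise sensitivity + specificity - 1 over all observed cutoffs."""
    h, d = np.asarray(healthy), np.asarray(diseased)
    best_j, best_c = -1.0, None
    for c in np.unique(np.concatenate([h, d])):
        sens = (d > c).mean()    # diseased correctly above the cutoff
        spec = (h <= c).mean()   # healthy correctly at or below it
        if sens + spec - 1 > best_j:
            best_j, best_c = sens + spec - 1, c
    return best_j, best_c

healthy  = [1.2, 2.1, 2.8, 3.0, 3.5]   # hypothetical continuous test results
diseased = [3.2, 4.0, 4.4, 5.1, 6.0]
auc = empirical_auc(healthy, diseased)
j, cutoff = youden_index(healthy, diseased)
print(auc, j, cutoff)
```

Here the optimal cutoff returned with the Youden index is the operating point a clinician would use, which is what makes the index attractive alongside the AUC.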
250

Data cleaning techniques for software engineering data sets

Liebchen, Gernot Armin January 2010 (has links)
Data quality is an important issue which has been addressed and recognised in research communities such as data warehousing, data mining and information systems. It is generally agreed that poor data quality impacts the quality of the results of analyses and, therefore, the decisions made on the basis of those results. Empirical software engineering has neglected the issue of data quality to some extent. This raises the question of how researchers in empirical software engineering can trust their results without addressing the quality of the analysed data.

One widely accepted definition of data quality describes it as 'fitness for purpose', and the issue of poor data quality can be addressed either by introducing preventative measures or by applying means to cope with data quality issues. The research presented in this thesis addresses the latter, with a special focus on noise handling. Three noise handling techniques, which utilise decision trees, are proposed for application to software engineering data sets. Each technique represents a distinct noise handling approach: robust filtering, where the training and test sets are the same; predictive filtering, where the training and test sets are different; and filtering and polish, where noisy instances are corrected.

The techniques were first evaluated in two investigations by applying them to a large real-world software engineering data set. The first investigation tested the techniques' ability to improve predictive accuracy at differing noise levels. All three techniques improved predictive accuracy in comparison with the do-nothing approach, and filtering and polish was the most successful at doing so. The second investigation, using the same large real-world data set, tested the techniques' ability to identify instances with implausible values. These instances were flagged for the purpose of evaluation before the three techniques were applied.
Robust filtering and predictive filtering decreased the number of instances with implausible values, but also substantially decreased the size of the data set. The filtering and polish technique actually increased the number of implausible values, although it did not reduce the size of the data set.

Since the data set contained historical software project data, it was not possible to know the real extent of the noise detected. This led to the production of simulated software engineering data sets, modelled on the real data set used in the previous evaluations to preserve domain-specific characteristics. These simulated versions of the data set were then injected with noise, so that the real extent of the noise was known, and the three noise handling techniques were applied to allow evaluation. This procedure of simulating software engineering data sets combines real-world, domain-specific characteristics with full control over the simulated data, which is seen as a particular strength of this evaluation approach. The results of the simulation evaluation showed that none of the techniques performed well. Robust filtering and filtering and polish performed very poorly and, on the basis of these results, would not be recommended for the task of noise reduction. Predictive filtering was the best-performing technique in this evaluation, but its performance was still not strong.

An exhaustive systematic literature review was carried out to investigate the extent to which the empirical software engineering community has considered data quality. The findings showed that the issue of data quality has been largely neglected by this community. The work in this thesis therefore highlights an important gap in empirical software engineering. It also provides a clarification of, and distinction between, the terms noise and outliers.
Noise and outliers overlap, but they are fundamentally different; since the two are often treated the same by noise handling techniques, a clarification of the terms was necessary. A single investigation was deemed insufficient to assess the capabilities of the noise handling techniques, because the distinction between noise and outliers is not trivial and because the investigated noise cleaning techniques derive from traditional noise handling techniques in which noise and outliers are combined. Three investigations were therefore undertaken to assess the effectiveness of the three presented techniques, each forming part of a multi-pronged approach. This thesis also highlights possible shortcomings of current automated noise handling techniques: the poor performance of the three techniques leads to the conclusion that noise handling should be integrated into a data cleaning process in which the input of domain knowledge and the replicability of the cleaning process are ensured.
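The predictive-filtering approach, in which the training and test sets differ, can be sketched as cross-validated filtering with a decision tree: each instance is flagged if a tree trained on the *other* folds misclassifies it. The synthetic data, noise level, and tree settings below are illustrative assumptions using scikit-learn, not the thesis's own data set or implementation.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for project data: one feature, classes separable at 0.5.
X = rng.uniform(0, 1, size=(200, 1))
y = (X[:, 0] > 0.5).astype(int)

# Inject label noise into a known set of instances (so recovery is checkable).
noisy_idx = rng.choice(len(y), size=20, replace=False)
y_noisy = y.copy()
y_noisy[noisy_idx] = 1 - y_noisy[noisy_idx]

# Predictive filtering: flag instances misclassified by a tree trained on
# different data (the remaining cross-validation folds).
flagged = np.zeros(len(y), dtype=bool)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X[train], y_noisy[train])
    flagged[test] = tree.predict(X[test]) != y_noisy[test]

print(flagged.sum(), "instances flagged as potentially noisy")
```

Robust filtering would instead train and test the tree on the same instances, and filtering and polish would replace a flagged instance's label with the tree's prediction rather than discarding the instance.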
