About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Soliton solutions of some novel nonlinear evolution equations

Morrison, Alan James January 2002 (has links)
No description available.
2

Resistant fitting methods for statistical shape comparison

Lima, Verônica Maria Cadena January 2003 (has links)
No description available.
3

The hippocampus in memory and anxiety : an exploration within computational neuroscience and robotics

Kazer, J. F. January 2000 (has links)
No description available.
4

Estimating break points in linear models : a GMM approach

Augustine-Ohwo, Odaro January 2016 (has links)
In estimating econometric time series models, it is assumed that the parameters remain constant over the period examined. This assumption may not always be valid when using data which span an extended period, as the underlying relationships between the variables in these models are exposed to various exogenous shifts. It is therefore imperative to examine the stability of models, as failure to identify any changes could result in wrong predictions or inappropriate policy recommendations. This research proposes a method of estimating the location of break points in linear econometric models with endogenous regressors, estimated using Generalised Method of Moments (GMM). The proposed estimation method is based on Wald, Lagrange Multiplier and Difference-type test statistics of parameter variation. In this study, the equation which sets out the relationship between the endogenous regressor and the instruments is referred to as the Jacobian Equation (JE). The thesis is organised around two main categories: Stable JE and Unstable JE. Under the Stable JE, models with single and multiple breaks in the Structural Equation (SE) are examined. The break fraction estimators obtained are shown to be consistent for the true break fraction in the model. Additionally, using the fixed break approach, their $T$-convergence rates are established. Monte Carlo simulations which support the asymptotic properties are presented. Two main types of Unstable JE models are considered: a model with a single break only in the JE and another with a break in both the JE and SE. The asymptotic properties of the estimators obtained from these models are intractable under the fixed break approach; hence the thesis provides essential steps towards establishing the properties using the shrinking breaks approach. Nonetheless, a series of Monte Carlo simulations provides strong support for the consistency of the break fraction estimators under the Unstable JE. A combined procedure for testing and estimating significant break points is detailed in the thesis. This method yields a consistent estimator of the true number of breaks in the model, as well as their locations. Lastly, an empirical application of the proposed methodology is presented using the New Keynesian Phillips Curve (NKPC) model for U.S. data. A previous study has found this NKPC model to be unstable, with two endogenous regressors and an Unstable JE. Using the combined testing and estimation approach, similar break points were estimated at 1975:2 and 1981:1. Therefore, using the GMM estimation approach proposed in this study, the presence of a Stable or Unstable JE does not affect estimation of breaks in the SE. A researcher can focus directly on estimating potential break points in the SE without having to pre-estimate the breaks in the JE, as is currently done using Two-Stage Least Squares.
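For intuition, here is a minimal sketch, assuming simulated data and a single endogenous regressor, of the kind of break-location search described in this abstract: the break date is chosen to maximise a Wald-type statistic for parameter change, with each subsample estimated by two-stage least squares (a special case of GMM). It is an illustration only, not the estimator developed in the thesis.

    # A sketch (assumed data and names, not the thesis code): locate a single break
    # date in y_t = beta_t * x_t + u_t, where x_t is endogenous and instrumented by z_t,
    # by maximising a Wald-type statistic for parameter change over candidate dates.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 400
    z = rng.normal(size=(T, 2))                                   # instruments
    u = rng.normal(size=T)                                        # structural error
    x = z @ np.array([1.0, 0.5]) + 0.5 * u + rng.normal(size=T)   # endogenous regressor
    beta = np.where(np.arange(T) < int(0.6 * T), 1.0, 2.0)        # true break at fraction 0.6
    y = beta * x + u

    def tsls(y_s, x_s, z_s):
        """2SLS estimate and homoskedastic variance for a single regressor."""
        xhat = z_s @ np.linalg.lstsq(z_s, x_s, rcond=None)[0]     # first-stage fitted values
        b = (xhat @ y_s) / (xhat @ x_s)
        e = y_s - x_s * b
        return b, (e @ e / (len(y_s) - 1)) / (xhat @ xhat)

    def wald(k):
        """Wald statistic for a coefficient change at candidate break date k."""
        b1, v1 = tsls(y[:k], x[:k], z[:k])
        b2, v2 = tsls(y[k:], x[k:], z[k:])
        return (b1 - b2) ** 2 / (v1 + v2)

    trim = int(0.15 * T)                                          # trim the sample ends
    k_hat = max(range(trim, T - trim), key=wald)
    print("estimated break fraction:", k_hat / T)

The same grid search could equally be driven by LM- or Difference-type statistics, as in the thesis, and applied repeatedly to handle multiple breaks.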
5

Notions of Semicomputability in Topological Algebras over the Reals

Armstrong, Mark 11 1900 (has links)
Several results from classical computability theory (computability over discrete structures such as the natural numbers and strings over finite alphabets, due to Turing, Church, Kleene and others) have been shown to hold for a generalisation of computability theory over total abstract algebras, using for instance the While model of computation. We present a number of results relating to computation on topological partial algebras, again using While computation. We consider several results from the classical theory in the context of topological algebra of the reals: closure of semicomputable sets under finite union, the equivalence of semicomputable and projectively (semi)computable sets, and Post's Theorem (i.e. a set is computable iff both it and its complement are semicomputable). This research has significance in the field of scientific computation, which is underpinned by computability on the real numbers. We will consider a "continuity principle", which states that computability should imply continuity; however, equality, order, and other total boolean-valued functions on the reals are clearly discontinuous. As we want these functions to be basic for the algebras under consideration, we resolve this incompatibility by redefining such functions to be partial, leading us to consider topological partial algebras. / Thesis / Master of Computer Science (MCS) / We investigate to what extent certain well-known results of classical computability theory on the natural numbers hold in the context of generalised computability theories on the real numbers.
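For reference, two of the classical results named above can be stated, in their standard form over the naturals (not the thesis's generalisation to topological partial algebras), as:

\[
S \subseteq \mathbb{N} \ \text{is computable} \iff S \ \text{and} \ \mathbb{N}\setminus S \ \text{are both semicomputable (Post's Theorem);}
\]
\[
S_{1}, S_{2} \ \text{semicomputable} \implies S_{1}\cup S_{2} \ \text{semicomputable (closure under finite union).}
\]

The thesis asks which of these statements survive when the naturals are replaced by a topological partial algebra over the reals, under the continuity principle sketched in the abstract.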
6

Conformal Properties of Generalized Dirac Operator

Thakre, Varun 05 June 2013 (has links)
No description available.
7

Empirical likelihood with applications in time series

Li, Yuyi January 2011 (has links)
This thesis investigates the statistical properties of the Kernel Smoothed Empirical Likelihood (KSEL, e.g. Smith, 1997 and 2004) estimator and of various associated inference procedures for weakly dependent data. New tests for structural stability are proposed and analysed. Asymptotic analyses and Monte Carlo experiments are applied to assess these new tests, theoretically and empirically. Chapter 1 reviews and discusses some estimation and inferential properties of Empirical Likelihood (EL, Owen, 1988) for independently and identically distributed data and compares it with Generalised EL (GEL), GMM and other estimators. KSEL is treated extensively by specialising the kernel-smoothed GEL of Smith's (2004) working paper, some of whose results and proofs are extended and refined in Chapter 2. Asymptotic properties of some tests in Smith (2004) are also analysed under local alternatives. These treatments of KSEL lay the foundation for the analyses in Chapters 3 and 4, which would not otherwise follow straightforwardly. In Chapters 3 and 4, subsample KSEL estimators are proposed to assist the development of KSEL structural stability tests for a given breakpoint and for an unknown breakpoint, respectively, based on relevant work using GMM (e.g. Hall and Sen, 1999; Andrews and Fair, 1988; Andrews and Ploberger, 1994). A further original contribution of these two chapters is that moment functions may be kernel-smoothed either before or after the sample split, and it is rigorously proved that the two smoothing orders are asymptotically equivalent. The overall null hypothesis of structural stability is decomposed according to the identifying and overidentifying restrictions, as Hall and Sen (1999) advocate for GMM, leading to a more practical and precise structural stability diagnosis procedure. In this framework, these KSEL structural stability tests are also proved via asymptotic analysis to be capable of identifying different sources of instability, arising from parameter value change or violation of overidentifying restrictions. The analyses show that these KSEL tests follow the same limit distributions as their counterparts using GMM. To examine the finite-sample performance of the KSEL structural stability tests in comparison with their GMM counterparts, Monte Carlo simulations are conducted in Chapter 5 using a simple linear model considered by Hall and Sen (1999). This chapter details some relevant computational algorithms and permits different smoothing orders, kernel types and prewhitening options. In general, simulation evidence suggests that these newly proposed KSEL tests often perform comparably to the GMM-based tests. However, in some cases their sizes can be slightly larger, and false null hypotheses are rejected with much higher frequency. Thus, these KSEL-based tests are valid theoretical and practical alternatives to their GMM counterparts.
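For orientation, the criterion behind these estimators can be sketched from the standard empirical likelihood formulation of Owen (1988); the kernel-smoothing step is indicated only schematically, since the precise construction follows Smith (2004):

\[
\hat{\beta} \;=\; \arg\max_{\beta}\; \max_{p_{1},\dots,p_{T}} \sum_{t=1}^{T}\log p_{t}
\quad \text{subject to} \quad \sum_{t=1}^{T}p_{t}\,g_{t}(\beta)=0,\qquad \sum_{t=1}^{T}p_{t}=1,\qquad p_{t}\ge 0,
\]

where $g_{t}(\beta)$ denotes the moment function for observation $t$. KSEL replaces each $g_{t}(\beta)$ with a kernel-weighted average of neighbouring moments, schematically $g_{t}^{S_T}(\beta)\propto\sum_{j}k(j/S_{T})\,g_{t-j}(\beta)$ for a bandwidth $S_{T}$, to accommodate weak dependence; the subsample versions used in Chapters 3 and 4 apply the same criterion before and after a candidate breakpoint.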
8

MIMO Multiplierless FIR System

Imran, Muhammad, Khursheed, Khursheed January 2009 (has links)
The main issue in this thesis is to minimize the number of operations and the energy consumption per operation for the computation (arithmetic) part of DSP circuits, such as Finite Impulse Response (FIR) filters, the Discrete Cosine Transform (DCT), and the Discrete Fourier Transform (DFT). More specifically, the focus is on the elimination of the most frequent common sub-expression (CSE) in the binary, Canonic Signed Digit (CSD), two's complement or signed-digit representation of the coefficients of non-recursive multiple-input multiple-output (MIMO) FIR systems, which can be realized using shift-and-add operations only. The possibilities to reduce the complexity, i.e. the chip area and the energy consumption, have been investigated. We have proposed an algorithm which finds the most frequent common sub-expression in the binary/CSD/two's complement/signed-digit representation of the coefficients of non-recursive MIMO multiplierless FIR systems, and we have implemented the algorithm in MATLAB. We have also proposed different tie-breakers for the selection of the most frequent common sub-expression, which affect the complexity (area and power consumption) of the overall system. One choice of tie-breaker is to select the pattern (if there is a tie for the most frequent pattern) which results in the minimum number of delay elements, so that the area of the overall system is reduced. Another tie-breaker is to choose the pattern which results in the minimum adder depth (the number of cascaded adders). Minimum adder depth results in the fewest glitches, which are the main source of power consumption in MIMO multiplierless FIR systems: switching activity increases when glitches are propagated to subsequent adders, which occurs if the adder depth is high. As the power consumption is proportional to the switching activity (glitches), we use the sub-expression which results in the lowest adder depth for the overall system.
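As a rough illustration of the pattern-counting step behind common sub-expression elimination (a sketch only, assuming plain binary coefficients and an invented coefficient set; the thesis algorithm also covers CSD, two's complement and signed-digit forms and applies the tie-breakers described above):

    # A sketch (invented coefficients; plain binary only, not the thesis algorithm):
    # count how often each two-term shift-and-add pattern x + (x << gap) appears
    # across the coefficient set and report the most frequent common sub-expression.
    from collections import Counter

    def subexpression_gaps(coef, width=8):
        """Yield the gap j - i for every pair of set bits i < j in coef."""
        bits = [b for b in range(width) if (coef >> b) & 1]
        for i in range(len(bits)):
            for j in range(i + 1, len(bits)):
                yield bits[j] - bits[i]

    coeffs = [23, 45, 105, 57, 77]        # assumed example coefficient set
    counts = Counter(g for c in coeffs for g in subexpression_gaps(c))
    gap, freq = counts.most_common(1)[0]
    print("most frequent sub-expression: x + (x << %d), occurring %d times" % (gap, freq))

Ties among equally frequent patterns would then be broken in favour of the pattern giving the fewest delay elements or the smallest adder depth, as the abstract describes.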
9

Do Better Institutions Alleviate the Resource Curse? Evidence from a Dynamic Panel Approach.

Malebogo Bakwena Unknown Date (has links)
Contrary to conventional theory, a growing body of evidence suggests that economies with abundant natural resources perform badly in terms of economic growth relative to their resource-poor counterparts: the so-called resource curse hypothesis. However, this general hypothesis is not robust. It clearly fails to account for the differing experiences of resource-abundant economies. For instance, the theory, applied generally, offers no explanation as to why economies like Botswana and Norway have achieved exceptional growth while Saudi Arabia and Nigeria have stagnated. Prompted by these experiences, the thesis investigates the circumstances under which the curse is more or less likely to exist. In particular, the thesis finds evidence that the major reason for the diverging experiences is differences in the quality of institutions across countries. The thesis tests the hypothesis that the effect of resources on growth is conditional on the type and quality of institutions, building further on Boschini, Pettersson, and Roine's (2007) and Mehlum, Moene, and Torvik's (2006b) influential works on the role of institutions in mitigating the resource curse. Advances are made by: (a) using a panel of up to 53 countries with different levels of development, institutional quality and natural resource abundance over the period 1984-2003; (b) applying a two-step system Generalised Method of Moments (GMM) estimation that accounts for biases associated with omitted variables, endogeneity and unobserved heterogeneity, which potentially affect existing cross-country Ordinary Least Squares (OLS) growth results; (c) supplementing results of the commonly used International Country Risk Guide (ICRG) institutional performance indicators with those of institutional design indicators, that is, highlighting the role of electoral rules and form of government; (d) using an institutional quality measure that relates more closely to financial institutions than to economic or political institutions alone; (e) using a resource abundance indicator that focuses on non-renewable resources alone, rather than the measures commonly used in the literature, which inappropriately include renewable resources. The key hypothesis that natural resource economies are not destined to be cursed if they have good institutions is confirmed by the empirical results of the thesis. Specifically, the results suggest that: (a) adopting a democratic regime is better than a non-democratic one in terms of generating growth from resource abundance; (b) the electoral rules that a country adopts matter, i.e. having a democratic proportional rather than a democratic majority regime increases the growth benefits of resource abundance; (c) as far as the form of government adopted is concerned, a democratic parliamentary rather than a democratic presidential regime generates more economic growth from abundant natural resources; and (d) a well-functioning banking sector induces more growth and capital accumulation from resource abundance. Therefore, the lesson for policy makers who struggle to overcome the impediments to economic development that potentially accompany the "curse of resource abundance" is the need to develop and maintain better institutions and to adopt improved strategies for managing the financial proceeds of such abundance.
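To make the conditional-effect hypothesis concrete, a dynamic panel specification of the kind described above is often written as below; the notation and exact form are illustrative assumptions, not taken from the thesis:

\[
growth_{it} \;=\; \alpha\, y_{i,t-1} \;+\; \beta\, R_{it} \;+\; \gamma\,\bigl(R_{it}\times INST_{it}\bigr) \;+\; \delta' X_{it} \;+\; \eta_{i} \;+\; \varepsilon_{it},
\]

where $y_{i,t-1}$ is lagged income, $R_{it}$ is (non-renewable) resource abundance, $INST_{it}$ is institutional quality, $X_{it}$ collects other controls and $\eta_{i}$ is a country effect. A negative $\beta$ together with a positive $\gamma$ is the pattern consistent with a curse that better institutions alleviate. The lagged dependent variable and the endogeneity of the regressors motivate the two-step system GMM estimator, which instruments the first-differenced equation with lagged levels and the levels equation with lagged first differences.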
