341

An analysis of population lifetime data of South Australia 1841 - 1996

Leppard, Phillip I. January 2003 (has links)
The average length of life from birth until death in a human population is a single statistic that is often used to characterise the prevailing health status of the population. It is one of many statistics calculated from an analysis that, for each age, combines the number of deaths with the size of the population in which these deaths occur. This analysis is generally known as life table analysis. Life tables have only occasionally been produced specifically for South Australia, although the necessary data have been routinely collected since 1842. In this thesis, the mortality pattern of South Australia over 150 years of European settlement is quantified using life table analyses and estimates of average length of life. In Chapter 1, a mathematical derivation is given for the lifetime statistical distribution function that is the basis of life table analysis, and from which the average length of life, or current expected life, is calculated. This derivation uses mathematical notation that clearly shows the deficiency of current expected life as a measure of the life expectancy of an existing population. Four statistical estimation procedures are defined, and the computationally intensive method of bootstrapping is discussed as a procedure for estimating the standard error of each estimate of expected life. A generalisation of this method is given to examine the robustness of the estimate of current expected life. In Chapter 2, gender- and age-specific mortality and population data are presented for twenty-five three-year periods, each encompassing one of the colonial (1841-1901) or post-Federation (1911-96) censuses taken in South Australia. For both genders within a census period, four types of estimate of current expected life, each with a bootstrap standard error, are calculated and compared, and a robustness assessment is made. 
In Chapter 3, an alternative measure of life expectancy known as generation expected life is considered. Generation expected life is derived by extracting, from official records arranged in temporal order, the mortality pattern of a notional group of individuals who were born in the same calendar year. Several estimates of generation expected life are calculated using South Australian data, and each estimate is compared to the corresponding estimate of current expected life. Additional estimates of generation expected life, calculated using data obtained from the Roll of Honour at the Australian War Memorial, quantify the reduction in male generation expected life for the 1881-1900 birth cohorts as a consequence of military service during World War I, 1914-18, and the influenza pandemic of 1919. / Thesis (M.Sc.) -- University of Adelaide, School of Applied Mathematics, 2003.
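The bootstrap standard-error idea described in this abstract can be sketched as follows. This is a minimal illustration only: the mortality counts are hypothetical, and the abridged life-table calculation is a textbook simplification, not one of the thesis's four estimation procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical age-grouped data: deaths and mid-period population per 5-year
# age band up to age 90 (illustrative numbers, not South Australian records).
ages = np.arange(0, 90, 5)
deaths = np.array([120, 15, 10, 18, 25, 30, 35, 45, 60, 85,
                   120, 170, 240, 340, 480, 650, 820, 900])
population = np.full(ages.size, 50_000)

def expected_life(deaths, population, width=5):
    """Abridged life-table estimate of expectation of life at birth."""
    m = deaths / population                              # central death rate
    q = np.clip(width * m / (1 + width * m / 2), 0, 1)   # prob. of dying in band
    surv = np.concatenate(([1.0], np.cumprod(1 - q)[:-1]))  # l(x)/l(0)
    # person-years lived: trapezoidal approximation within each band
    lx_next = np.append(surv[1:], surv[-1] * (1 - q[-1]))
    L = width * (surv + lx_next) / 2
    return L.sum()

e0 = expected_life(deaths, population)

# Bootstrap: resample death counts as binomial draws given the observed rates,
# recompute expected life each time; the std of replicates estimates the SE.
reps = np.array([
    expected_life(rng.binomial(population, deaths / population), population)
    for _ in range(1000)
])
se = reps.std(ddof=1)
```

Because the table is truncated at age 90, `e0` here understates a true expectation of life; the point is only the resampling mechanics, not the demographic detail.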
342

Source detection and parameter estimation in array processing in the presence of nonuniform noise

Aouada, Saïd. Unknown Date (has links)
Darmstadt, Techn. University, Diss., 2006.
343

Modeling complex systems with differential equations

Müller, Thorsten G. January 2002 (has links)
Freiburg, Univ., Diss., 2002.
344

Modelling and resampling based multiple testing with applications to genetics

Huang, Yifan. January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Document formatted into pages; contains xii, 97 p.; also includes graphics. Includes bibliographical references (p. 94-97). Available online via OhioLINK's ETD Center
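The resampling-based multiple-testing theme of this thesis can be illustrated with a single-step maxT permutation adjustment in the style of Westfall and Young, here on simulated two-group expression data; the data and settings are invented for illustration and are not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expression matrix: 100 genes x 20 samples, two groups of 10.
# The first 5 genes carry a real shift; everything here is simulated.
n_genes, n_per = 100, 10
X = rng.normal(size=(n_genes, 2 * n_per))
X[:5, n_per:] += 2.0
labels = np.array([0] * n_per + [1] * n_per)

def tstats(X, labels):
    """Per-gene two-sample t statistics (equal group sizes)."""
    a, b = X[:, labels == 0], X[:, labels == 1]
    sp = np.sqrt((a.var(axis=1, ddof=1) + b.var(axis=1, ddof=1)) / n_per)
    return (b.mean(axis=1) - a.mean(axis=1)) / sp

t_obs = np.abs(tstats(X, labels))

# Permutation null of the maximum |t| (single-step maxT): permuting group
# labels preserves the joint dependence structure across genes.
B = 500
max_null = np.empty(B)
for b in range(B):
    perm = rng.permutation(labels)
    max_null[b] = np.abs(tstats(X, perm)).max()

# FWER-adjusted p-value per gene: fraction of permutations whose maximum
# |t| meets or beats that gene's observed statistic.
p_adj = (1 + (max_null[None, :] >= t_obs[:, None]).sum(axis=1)) / (B + 1)
```

Genes with a genuine shift should receive markedly smaller adjusted p-values than the null genes, while family-wise error control comes from using the distribution of the maximum.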
345

Shape and topology constrained image segmentation with stochastic models

Zöller, Thomas. Unknown Date (has links) (PDF)
Bonn, University, Diss., 2005.
346

Robust principal component analysis biplots

Wedlake, Ryan Stuart March 2008 (has links)
Thesis (MSc (Mathematical Statistics))--University of Stellenbosch, 2008. / In this study several procedures for finding robust principal components (RPCs) for low- and high-dimensional data sets are investigated in parallel with robust principal component analysis (RPCA) biplots. These RPCA biplots are used for the simultaneous visualisation of the observations and variables in the subspace spanned by the RPCs. Chapter 1 contains: a brief overview of the difficulties encountered when graphically investigating patterns and relationships in multidimensional data, and why PCA can be used to circumvent these difficulties; the objectives of this study; a summary of the work done to meet these objectives; and certain results in matrix algebra that are needed throughout this study. In Chapter 2 the derivation of the classic sample principal components (SPCs) is first discussed in detail, since they are the 'building blocks' of classic principal component analysis (CPCA) biplots. Secondly, the traditional CPCA biplot of Gabriel (1971) is reviewed. Thirdly, modifications to this biplot using the new philosophy of Gower & Hand (1996) are given attention, together with reasons why this modified biplot has several advantages over the traditional biplot, some of which are aesthetic in nature. Lastly, changes that can be made to the Gower & Hand (1996) PCA biplot to optimally visualise the correlations between the variables are discussed. Because the SPCs determine the position of the observations as well as the orientation of the arrows (traditional biplot) or axes (Gower & Hand biplot) in the PCA biplot subspace, it is useful to give estimates of the standard errors of the SPCs together with the biplot display as an indication of the stability of the biplot. Chapter 3 therefore first discusses the Bootstrap, a computer-intensive statistical technique used to calculate the standard errors of the SPCs without making underlying distributional assumptions. 
Secondly, the influence of outliers on Bootstrap results is investigated. Lastly, a robust form of the Bootstrap is briefly discussed for calculating standard error estimates that remain stable whether or not outliers are present in the sample. In Chapter 4, reasons why a PC analysis should be made robust in the presence of outliers are first discussed. Secondly, different types of outliers are discussed. Thirdly, a method for identifying influential observations and a method for identifying outlying observations are investigated. Lastly, different methods for constructing robust estimates of location and dispersion for the observations receive attention; these robust estimates are used in numerical procedures that calculate RPCs. In Chapter 5, an overview is given of some of the procedures used to calculate RPCs for lower- and higher-dimensional data sets, and two numerical procedures for lower-dimensional data sets are discussed and compared in detail. Details and examples of robust versions of the Gower & Hand (1996) PCA biplot that can be constructed using these RPCs are also provided. In Chapter 6, five numerical procedures for calculating RPCs for higher-dimensional data sets are discussed in detail. Once RPCs have been obtained by these methods, they are used to construct robust versions of the Gower & Hand (1996) PCA biplot; details and examples of these robust PCA biplots are also provided. An extensive software library has been developed so that the biplot methodology discussed in this study can be used in practice. The functions in this library are given in an appendix at the end of this study, and the library is applied to data sets from various fields so that the merit of the theory developed in this study can be visually appraised.
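A minimal illustration of why principal components need robustifying, using simulated data and a simple trimming scheme; this is a sketch of the general idea only, not one of the RPC procedures developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: 200 points along one dominant direction, plus 10 gross
# outliers placed far away (entirely artificial for this sketch).
n = 200
X = rng.normal(size=(n, 1)) @ np.array([[3.0, 1.0, 0.5]]) \
    + 0.3 * rng.normal(size=(n, 3))
X_out = np.vstack([X, rng.normal(loc=15, size=(10, 3))])

def classic_pcs(X):
    """Classic sample PCs: right singular vectors of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt                       # rows are the sample PCs

def robust_pcs(X, trim=0.1):
    """Robustify by trimming: rank points by distance from the coordinate-wise
    median (scaled by the MAD), drop the most extreme fraction, then run PCA."""
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0)
    d = np.sqrt((((X - med) / mad) ** 2).sum(axis=1))
    keep = d <= np.quantile(d, 1 - trim)
    return classic_pcs(X[keep])

true_dir = np.array([3.0, 1.0, 0.5]) / np.linalg.norm([3.0, 1.0, 0.5])
align = lambda V: abs(V[0] @ true_dir)   # |cosine| of first PC vs truth
```

With the outliers present, the classic first PC is dragged toward them, while the trimmed version stays close to the true direction; RPCA biplots built on such RPCs inherit that stability.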
347

Fractionally integrated processes and structural changes: theoretical analyses and bootstrap methods

Chang, Seong Yeon 22 January 2016 (has links)
The first chapter considers the asymptotic validity of bootstrap methods in a linear trend model with a change in slope at an unknown time. Perron and Zhu (2005) analyzed the consistency, rate of convergence, and limiting distributions of the parameter estimates in this model. I provide theoretical results for the asymptotic validity of bootstrap methods related to forming confidence intervals for the break date. I consider two bootstrap schemes, the residual (for white noise errors) and the sieve bootstrap (for correlated errors). Simulation experiments confirm that confidence intervals obtained using bootstrap methods perform well in terms of exact coverage rate. The second chapter extends Perron and Zhu's (2005) analysis to cover more general fractionally integrated errors with memory parameter d in the interval (-0.5,1.5). My theoretical results uncover some interesting features. For example, with a concurrent level shift allowed, the rate of convergence of the estimate of the break date is the same for all values of d in the interval (-0.5,0.5), a feature linked to the contamination induced by allowing a level shift. In all other cases, the rate of convergence is decreasing as d increases. I also provide results about the spurious break issue. The third chapter considers constructing confidence intervals for the break date in linear regressions. I compare the performance of various procedures in terms of the exact coverage rates and lengths: Bai's (1997) based on the asymptotic distribution with shrinking shifts, Elliott and Müller's (EM) (2007) based on inverting a test locally invariant to the magnitude of the change, Eo and Morley's (2013) based on inverting a likelihood ratio test, and various bootstrap procedures. In terms of coverage rates, EM's approach is the best but with a high cost in terms of length. 
With serially correlated errors and a change in intercept or in the coefficient of a regressor with a high signal-to-noise ratio, or when a lagged dependent variable is present, the length approaches the whole sample as the magnitude of the change increases. This drawback is not present for the other methods. Theoretical results are provided to explain the drawbacks of EM's method.
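The residual bootstrap for the break date in a linear trend model with a slope change can be sketched as follows; the simulated series, grid search and settings are illustrative and are not the chapter's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated linear trend with a slope change at t0 = 60, white-noise errors.
T, t0 = 100, 60
t = np.arange(T)
y = 1.0 + 0.05 * t + 0.10 * np.maximum(t - t0, 0) + 0.5 * rng.normal(size=T)

def fit_break(y):
    """Least-squares break date via grid search over candidate change points."""
    best = (np.inf, None, None)
    for k in range(5, T - 5):                  # trim the sample ends
        Z = np.column_stack([np.ones(T), t, np.maximum(t - k, 0)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        ssr = (resid ** 2).sum()
        if ssr < best[0]:
            best = (ssr, k, resid)
    return best[1], best[2]

k_hat, resid = fit_break(y)

# Residual bootstrap (appropriate for white-noise errors): resample residuals
# with replacement, rebuild the series around the fitted model, re-estimate.
Zh = np.column_stack([np.ones(T), t, np.maximum(t - k_hat, 0)])
beta_hat, *_ = np.linalg.lstsq(Zh, y, rcond=None)
boot = np.array([fit_break(Zh @ beta_hat + rng.choice(resid, T))[0]
                 for _ in range(200)])
lo, hi = np.percentile(boot, [2.5, 97.5])      # percentile confidence interval
```

For serially correlated errors the chapter uses a sieve bootstrap instead, which fits an autoregression to the residuals and resamples its innovations rather than the residuals themselves.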
348

Informační systém pro základní školu / An information system for a primary school

Mateašák, David January 2015 (has links)
An information system for a primary school. Thesis, Brno, 2015. The thesis describes the development of an information system built around the customer's needs and requirements; the customer in this case is a primary school. The information system conforms to modern development techniques and standards, using an MVC architecture and responsive design.
349

Současné trendy kódování webového frontendu / Present trends in frontend web development

KOMRSKA, Roman January 2016 (has links)
This thesis surveys current trends in frontend web development. On this basis, a responsive web application usable on mobile devices was built. The theoretical part describes new developments and current trends in frontend web development; the practical part describes the whole process, comprising design, development, testing and deployment. The thesis may be very useful for beginners in the subject.
350

Un metaverificador de firmas y su aplicación en la inscripción de organizaciones políticas en el Perú / A signature meta-verifier and its application to the registration of political organizations in Peru

Vilchez Fernandez, Luis Enrique January 2008 (has links)
In Peru, to register as a political organization one must submit a list of adherents (sheets of signatures), which is verified by the Registro Nacional de Identificación y Estado Civil using visual comparison. The problem is that this technique is entirely manual and prone to human error, aggravated by short verification deadlines and high demand during election periods; as a result, signature verification is not carried out exhaustively, and signatures whose authenticity has not been fully verified end up being accepted. Consequently, some political organizations have obtained their registration in the ROP with forged signatures, which are later denounced in the media, generating public distrust. This research proposes the development of a signature meta-verifier, which checks the patterns of a questioned signature against the genuine signatures to determine its authenticity. The proposal includes the use of new features and a verification engine composed of two modules: the first module checks whether the questioned signature is a forgery, and the second performs a more detailed verification of the signatures not flagged as forgeries by the first module. The results show that the proposed meta-verifier achieves an accuracy of 93.3%, which is high compared with results reported in the literature, using only 3 genuine signatures for training. / Perú. Ministerio de la Producción. Programa Nacional de Innovación para la Competitividad y Productividad (Innóvate Perú) / Tesis
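The two-module verification engine described in the abstract can be sketched as below. Everything here is hypothetical: the feature extraction is a stand-in, and the thresholds and data are invented for illustration, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

def extract_features(sig):
    # Stand-in for real signature features (aspect ratio, stroke density,
    # pressure statistics, etc.); here the "signature" is already a vector.
    return sig

# Three genuine reference signatures, as in the thesis's training setting;
# the 8-dimensional feature vectors are simulated.
genuine = rng.normal(loc=0.0, scale=1.0, size=(3, 8))
reference = genuine.mean(axis=0)
# Floor the spread so chance agreement among only 3 references cannot
# make the detailed check arbitrarily strict.
spread = np.maximum(genuine.std(axis=0, ddof=1), 0.5)

def stage1(sig, coarse=6.0):
    """Module 1: cheap filter rejecting signatures far from the reference."""
    return np.linalg.norm(extract_features(sig) - reference) < coarse

def stage2(sig, fine=3.0):
    """Module 2: per-feature deviations scaled by the observed spread."""
    z = np.abs(extract_features(sig) - reference) / spread
    return z.mean() < fine

def meta_verify(sig):
    # Module 1 screens out obvious forgeries; module 2 re-examines the rest.
    return stage1(sig) and stage2(sig)

query_genuine = rng.normal(loc=0.0, scale=1.0, size=8)
query_forgery = rng.normal(loc=4.0, scale=1.0, size=8)
```

The cascade mirrors the abstract's design choice: a coarse first pass keeps the expensive detailed comparison off signatures that are obviously false.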
