31

Analysis of failure time data with ordered categories of response

Berridge, Damon M. January 1991
No description available.
32

A statistical analysis of education in Ghana 1950-1960

Ohemeng, Edward 01 June 1963
No description available.
33

Bayesian extreme quantile regression for hidden Markov models

Koutsourelis, Antonios January 2012
The main contribution of this thesis is the introduction of Bayesian quantile regression for hidden Markov models, especially when we have to deal with extreme quantile regression analysis, as there is limited research on inferring conditional quantiles for hidden Markov models under a Bayesian approach. The first objective is to compare Bayesian extreme quantile regression with classical extreme quantile regression, with the help of simulated data generated by three specific models, which differ only in the error term’s distribution. It is also investigated whether and how the error term’s distribution affects Bayesian extreme quantile regression, in terms of parameter and confidence interval estimation. Bayesian extreme quantile regression is performed by implementing a Metropolis-Hastings algorithm to update our parameters, while classical extreme quantile regression is performed using linear programming. Moreover, the same analysis and comparison is performed on a real data set. The results provide strong evidence that our method can be improved by combining MCMC algorithms and linear programming, in order to obtain better parameter and confidence interval estimates. After improving our method for Bayesian extreme quantile regression, we extend it by including hidden Markov models. First, we assume a discrete-time finite state-space hidden Markov model, where the distribution associated with each hidden state is a) a Normal distribution and b) an asymmetric Laplace distribution. Our aim is to explore the number of hidden states that describe the extreme quantiles of our data sets and to check whether a different distribution associated with each hidden state can affect our estimation. Additionally, we explore whether there are structural changes (breakpoints), by using break-point hidden Markov models. In order to perform this analysis we implement two new MCMC algorithms. The first one updates the parameters and the hidden states by using a Forward-Backward algorithm and Gibbs sampling (when a Normal distribution is assumed), and the second one uses a Forward-Backward algorithm and a mixture of Gibbs and Metropolis-Hastings sampling (when an asymmetric Laplace distribution is assumed). Finally, we consider hidden Markov models where the hidden states (latent variables) are continuous. For this discrete-time continuous state-space hidden Markov model we implement a method that uses linear programming and the Kalman filter (and Kalman smoother). Our methods are used to analyze real interest rates by assuming hidden states which represent different financial regimes. We show that our methods work very well in terms of parameter estimation, and also in hidden state and break-point estimation, which is very useful for real-life applications of these methods.
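The usual route to Bayesian quantile regression of the kind described above is the asymmetric Laplace working likelihood: maximising it is equivalent to minimising the check (pinball) loss at the chosen quantile level, so a random-walk Metropolis-Hastings sampler over the regression coefficients yields posterior draws for the conditional quantile. The sketch below illustrates only that general technique; the simulated data, priors, fixed scale and step size are illustrative assumptions, not the models or algorithm used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative only): y = 1 + 2x + heavy-tailed noise
n = 500
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)

tau = 0.95      # extreme quantile level of interest
sigma = 1.0     # asymmetric Laplace scale, fixed here for simplicity

def ald_loglik(beta):
    """Asymmetric Laplace log-likelihood (up to a constant) = minus the check loss."""
    u = y - X @ beta
    return -np.sum(u * (tau - (u < 0))) / sigma

def log_prior(beta):
    """Vague independent Normal(0, 100) priors on the coefficients."""
    return -0.5 * np.sum(beta ** 2) / 100.0

# Random-walk Metropolis-Hastings over beta = (intercept, slope)
beta, step, draws = np.zeros(2), 0.1, []
for it in range(20000):
    prop = beta + rng.normal(scale=step, size=2)
    log_acc = (ald_loglik(prop) + log_prior(prop)
               - ald_loglik(beta) - log_prior(beta))
    if np.log(rng.uniform()) < log_acc:
        beta = prop
    if it >= 5000:              # discard burn-in
        draws.append(beta.copy())

draws = np.array(draws)
print("posterior mean of (intercept, slope):", draws.mean(axis=0))
print("95% credible interval for the slope:", np.percentile(draws[:, 1], [2.5, 97.5]))
```

The classical counterpart mentioned in the abstract minimises the same check loss directly by linear programming instead of sampling, which is where the proposed combination of MCMC and linear programming comes in.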
34

A SNP-based method for determining the origin of MRSA isolates

Sciberras, James January 2016
The advancements in Whole Genome Sequencing (WGS) have increased the amount of genomic information available for epidemiological analyses. WGS opens many avenues for investigation into the tracking of pathogens, but the rapid advancements in WGS could soon lead to a situation where traditional analytical techniques become computationally impractical. For example, the traditional method to determine the origin of an isolate is to use phylogenetic analyses. However, phylogenetic analyses become computationally prohibitive with larger datasets and are best suited to retrospective epidemiology. Therefore, I investigated whether there might be less computationally demanding methods of analysing the same data to obtain similar conclusions. This thesis describes a proof-of-principle method for evaluating whether such alternative analysis techniques might be viable. In this thesis methicillin-resistant Staphylococcus aureus (MRSA) was used, together with single nucleotide polymorphism (SNP) and insertion/deletion (indel) genomic variation. I move away from whole-genome analysis techniques, such as phylogenetic analysis, and instead focus on individual SNPs. I showed that genetic signals (such as SNPs and indels) can be utilised in novel ways to rapidly produce a summary of the possible geographic origin of an isolate with a minimal demand on computational power. The methods described could be added to the suite of analytical epidemiological tools and are a promising indication of the viability of developing cheap, rapid diagnostic tools to be implemented in healthcare institutions. Furthermore, the principles behind the development of the methods described in this thesis could have much wider applications than just MRSA. This implies that further work applying the principles described in this thesis to alternative pathogens could prove to be a promising avenue of investigation.
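As an illustration of how individual SNPs can point to a likely geographic origin without a full phylogenetic reconstruction, the toy sketch below scores a binary SNP profile against per-region allele frequencies in a naive-Bayes fashion. The regions, frequencies and query profile are hypothetical placeholders; this is not the method developed in the thesis, only a minimal example of the general idea of per-site scoring.

```python
import numpy as np

# Hypothetical per-region alternate-allele frequencies (rows: regions, cols: SNP sites).
# In practice these would be estimated from geolocated reference genomes.
regions = ["Region A", "Region B", "Region C"]
freq = np.array([
    [0.90, 0.10, 0.75, 0.05],
    [0.20, 0.85, 0.60, 0.40],
    [0.05, 0.30, 0.10, 0.95],
])

def origin_scores(isolate, freq, eps=1e-3):
    """Naive-Bayes style scoring of a binary SNP profile against each region."""
    p = np.clip(freq, eps, 1 - eps)
    isolate = np.asarray(isolate)
    loglik = (isolate * np.log(p) + (1 - isolate) * np.log(1 - p)).sum(axis=1)
    w = np.exp(loglik - loglik.max())      # uniform prior over regions
    return w / w.sum()

query = [1, 0, 1, 0]                       # 1 = alternate allele observed at that site
for region, prob in zip(regions, origin_scores(query, freq)):
    print(f"{region}: {prob:.2f}")
```

Because each site contributes independently, the cost grows linearly with the number of SNPs considered, which is the kind of computational saving over whole-genome phylogenetics that the abstract argues for.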
35

Examining the utility of a clustering method for analysing psychological test data

Dawes, Sharron Elizabeth January 2004
The belief that certain disorders will produce specific patterns of cognitive strengths and weaknesses on psychological testing is pervasive and entrenched in the area of clinical neuropsychology, both with respect to expectations regarding the behaviour of individuals and of clinical groups. However, there is little support in the literature for such a belief. On the contrary, studies examining patterns of cognitive performance in different clinical samples without exception find more than one pattern of test scores. Lange (2000), in his comprehensive analysis of WAIS-R/WMS-R data for a large sample of mixed clinical cases, found that three to five profiles described variations in test performances within clinical diagnoses. Lange went on to show that these profiles occurred with approximately equal frequency in all diagnostic groups. He additionally found four profiles in an exploratory analysis of WAIS-III/WMS-III data from a similar sample. The goals of the current dissertation were to: a) replicate Lange’s findings in a larger clinical sample; b) extend the scope of these findings to a wider array of psychological tests; and c) develop a method to classify individual cases in terms of their psychological test profile. The first study assessed 849 cases with a variety of neurological and psychiatric diagnoses using hierarchical cluster and K-Means analysis. Four WAIS-III/WMS-III profiles were identified that included approximately equal numbers of cases from the sample. Two of these profiles were uniquely related to two of Lange’s profiles, while the remaining two demonstrated relationships with more than one of Lange’s clusters. The second study expanded the neuropsychological test battery employed in the analysis to include the Trail Making Test, Boston Naming Test, Wisconsin Card Sorting Test, Controlled Oral Word Association Test, and Word Lists from the WMS-III, reducing the number of clinical cases to 420. In order to compensate for the impact of the reduced number of cases and increased number of variables on potential cluster stability, the number of test score variables was reduced using factor analysis. In this manner the 22 variables were reduced to six factor scores, which were then analysed with hierarchical cluster and K-Means analysis, yielding five cognitive profiles. The third study examined the potential clinical utility of the five cognitive profiles by developing a single-case methodology for allocating individual cases to cognitive profiles. This was achieved using a combination of a multivariate outlier statistic, the Mahalanobis distance, and equations derived from a discriminant function analysis. This combination resulted in classification accuracies exceeding 88% when predicting profile membership based upon the K-Means analysis. The potential utility of this method was illustrated with three age-, education-, gender-, and diagnostically-matched cases that demonstrated different cognitive test profiles. The implications of the small number of cognitive profiles that characterise test performance in a diverse sample of neurological and psychiatric cases, as well as the clinical utility of an accurate classification method at the individual case level, were discussed. The role of such a classification system in the design of individualised rehabilitation programmes was also highlighted. This research raises the intriguing possibility of developing a typology based on human behaviour rather than a medical nosology: in effect, replacing the medical diagnosis, which is ill-suited to encompassing the complexities of human behaviour, with a more appropriate “psychological diagnosis” based on cognitive test performance.
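The single-case allocation step described above combines a multivariate outlier statistic (the Mahalanobis distance) with discriminant-style classification. The sketch below shows one plausible way to wire those pieces together, assigning a case to its nearest profile centroid unless its squared Mahalanobis distance exceeds a chi-squared cut-off for every profile; the centroids, covariance matrix and factor scores are synthetic placeholders, not the WAIS-III/WMS-III profiles from the study.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
k, p = 4, 6                            # 4 profiles, 6 factor scores (placeholders)
centroids = rng.normal(size=(k, p))    # stand-ins for K-Means profile centroids
cov_inv = np.linalg.inv(np.eye(p))     # stand-in for pooled within-profile covariance

def mahalanobis_sq(x, mu, cov_inv):
    d = x - mu
    return float(d @ cov_inv @ d)

def classify_case(x, centroids, cov_inv, alpha=0.05, df=p):
    """Assign a case to the nearest profile unless it is an outlier to all of them."""
    d2 = np.array([mahalanobis_sq(x, mu, cov_inv) for mu in centroids])
    cutoff = chi2.ppf(1 - alpha, df)   # outlier threshold for squared distance
    if np.all(d2 > cutoff):
        return None, d2                # no profile fits: flag as a multivariate outlier
    return int(np.argmin(d2)), d2

case = rng.normal(size=p)              # one individual's six factor scores
profile, d2 = classify_case(case, centroids, cov_inv)
print("assigned profile:", profile, "| squared distances:", np.round(d2, 2))
```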
36

A Web-based Statistical Analysis Framework

Chodos, David January 2007
Statistical software packages have been used for decades to perform statistical analyses. Recently, the emergence of the Internet has expanded the potential for these packages. However, none of the existing packages have fully realized the collaborative potential of the Internet. This medium, which is beginning to gain acceptance as a software development platform, allows people who might otherwise be separated by organizational or geographic barriers to come together and tackle complex issues using commonly available data sets, analysis tools and communications tools. Interestingly, there has been little work towards solving this problem in a generally applicable way. Rather, systems in this area have tended to focus on particular data sets, industries, or user groups. The Web-based statistical analysis model described in this thesis fills this gap. It includes a statistical analysis engine, data set management tools, an analysis storage framework and a communication component to facilitate information dissemination. Furthermore, its focus on enabling users with little statistical training to perform basic data analysis means that users of all skill levels will be able to take advantage of its capabilities. The value of the system is shown both through a rigorous analysis of the system’s structure and through a detailed case study conducted with the tobacco control community.
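As a rough illustration of the kind of component such a framework exposes, the sketch below (not the thesis's actual system) wraps a basic descriptive analysis in a web endpoint, so that a user with no statistical software installed can submit data from a browser or script; a full framework would add the data set management, stored analyses and communication tools described in the abstract.

```python
# Minimal web-facing analysis endpoint: clients POST a column of numbers and
# receive basic descriptive statistics in return. The endpoint name and payload
# format are assumptions for this sketch.
from flask import Flask, request, jsonify
import statistics

app = Flask(__name__)

@app.post("/analyze")
def analyze():
    values = request.get_json().get("values", [])
    if len(values) < 2:
        return jsonify(error="need at least two numeric values"), 400
    return jsonify(
        n=len(values),
        mean=statistics.fmean(values),
        median=statistics.median(values),
        stdev=statistics.stdev(values),
    )

if __name__ == "__main__":
    # Example call once running:
    # curl -X POST localhost:5000/analyze -H "Content-Type: application/json" -d '{"values": [1, 2, 3, 4]}'
    app.run(debug=True)
```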
38

Hydrogeochemistry and hydrology of a basalt aquifer system, the Atherton Tablelands, North Queensland

Locsey, Katrina L. January 2004
The Atherton Tablelands basalt aquifer is a major source of groundwater supply for irrigation and other agricultural use. The Tertiary to Quaternary age basaltic aquifer can be regarded as a generally unconfined, layered system, comprising numerous basalt flows separated by palaeo-weathering surfaces and minor alluvial gravels of palaeo-drainage channels. Layers of massive basalt and clay-rich weathered zones act as local aquitards, with some local perched aquifers also present. The aquifer is regarded as a system in which several factors interact to produce the overall characteristics of the hydrogeochemistry of the groundwaters. They include the mineralogical composition of both the basalt aquifer and the thick overlying weathered zone, the porosity and permeability of the basalt aquifer, its thickness, bedrock composition, and climate and topography. The hydrogeochemical processes operating in this aquifer system have been investigated through the analysis of 90 groundwater samples collected from October 1998 to October 1999, groundwater chemistry data provided by the Queensland Department of Natural Resources & Mines for more than 800 groundwater samples, rain water samples collected during 1999 by CSIRO, stream chemistry data provided by CSIRO and James Cook University, and mineralogical and whole rock geochemistry data of drill chip samples. The methods used in this research study include the assessment of groundwater major ion chemistry data and field physico-chemical parameters using hydrochemical facies and statistical approaches, investigation of the mineralogical composition of the aquifer, assessment of concentrations and activities of the ions in solution and the degree of saturation with respect to both primary and secondary minerals, and hydrogeochemical modelling to determine the likely controls on the chemical evolution of these groundwaters. The basaltic groundwaters are mostly Mg-Ca-Na, HCO3 type waters, with electrical conductivities generally less than 250 μS/cm and pH values from 6.5 to 8.5. Dissolved silica (H4SiO4) comprises a large proportion of the total dissolved load, with average concentrations of around 140 mg/L. Concentrations of potassium, chloride and sulphate are low, that is, generally less than 3 mg/L, 15 mg/L and 10 mg/L, respectively. Despite the very low salinity of the Atherton Tablelands basalt groundwaters, the relative concentrations of the major ions are comparable to those of groundwaters from other basaltic regions, and are consistent with expected water-rock interactions. A variety of multivariate statistical techniques may be used to aid in the analysis of hydrochemical data, including, for example, principal component analysis, factor analysis and cluster analysis. Principal component factor analyses undertaken using the hydrochemical data for the Atherton groundwaters have enabled groundwaters from various lithological formations to be differentiated, the underlying geochemical processes controlling groundwater composition in the basalt aquifer to be inferred, relative groundwater residence times and flow directions to be inferred, and the estimated thickness of the basalt aquifer to be mapped. The limitations of multivariate statistical methods have been examined, with emphasis on the issues pertinent to hydrochemical data, that is, data that are compositional and typically non-normally distributed. The need to validate, normalize and standardize hydrochemical data prior to the application of multivariate statistical methods is demonstrated.
Assessment of the saturation states of the Atherton basalt groundwaters with respect to some of the primary minerals present indicates that the groundwaters are mostly at equilibrium or saturated with respect to K-feldspar, and approach equilibrium with respect to the plagioclase feldspars (albite and anorthite) with increasing pH. These groundwaters are at equilibrium or saturated with respect to the major secondary minerals, kaolinite, smectite (Ca-montmorillonite) and gibbsite. They also tend to be saturated with respect to the oxidation products, goethite and hematite, common accessory minerals in the Atherton Tablelands basalt sequence. Silicate mineral weathering processes are the predominant influence on the composition of these basalt groundwaters. These weathering processes include the weathering of pyroxenes, feldspars and other primary minerals to clays, aluminium and iron oxides, amorphous or crystalline silica, carbonates and zeolites, releasing ions to solution. The contribution of substantial organic carbon dioxide to the groundwater is an important factor in the extent to which silicate mineral weathering occurs in this aquifer system. Evaporative enrichment of recharging waters, oxidation and ion-exchange reactions, and the uptake of ions from, and decomposition of, organic matter are processes that have a minor influence on the composition of the basalt groundwaters. The relationships observed between mineralogical compositions, basalt character and groundwater occurrence in the Atherton Tablelands region have improved the understanding of how groundwater is stored and transmitted in this basalt aquifer system. Groundwater is mostly stored in vesicular basalt that may be fresh to highly weathered, and movement of this water is facilitated by pathways through both vesicular and fractured basalt. Related work undertaken as part of this research project showed that the groundwater flow patterns defined by the hydrogeochemical interpretations correspond well with the spatial trends in water level fluctuations, and with the response to recharge events in particular. Groundwater baseflow to streams and discharge to topographic lows in the Atherton Tablelands region are indicated by the relationships between the major cations and anions in the stream waters. Fracture zones are likely to be preferred pathways of groundwater movement. Recharge estimates, based on a chloride mass balance, range from 310 mm/yr in the north-western part of the study area (north of Atherton) to 600 mm/yr in the wetter southern and eastern parts of the study area. These recharge estimates should be treated with caution, however, due to the low groundwater chloride concentrations and the high variability in rainfall chloride concentrations. The findings of this research project have improved the understanding of the hydrogeochemical processes controlling the composition of the low salinity basalt groundwaters in the Atherton Tablelands region, and are applicable to other basalt groundwater systems, particularly those in high rainfall environments.
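The recharge figures quoted above come from a chloride mass balance, which at steady state equates the chloride delivered by rainfall with the chloride leaving in recharge, giving recharge = rainfall × Cl(rain) / Cl(groundwater). The sketch below simply evaluates that ratio; the rainfall and chloride concentrations are illustrative placeholders, not values reported in the thesis.

```python
def cmb_recharge(rainfall_mm_yr, cl_rain_mg_l, cl_gw_mg_l):
    """Steady-state chloride mass balance: recharge = P * Cl_rain / Cl_gw.
    Assumes chloride enters only with rainfall and leaves only with recharge."""
    return rainfall_mm_yr * cl_rain_mg_l / cl_gw_mg_l

# Illustrative placeholder values only (not taken from the study):
print(cmb_recharge(rainfall_mm_yr=2000, cl_rain_mg_l=1.5, cl_gw_mg_l=6.0))  # -> 500.0 mm/yr
```

The caution noted in the abstract follows directly from the form of this ratio: when groundwater chloride is low and rainfall chloride is highly variable, small absolute errors in either concentration translate into large relative errors in the recharge estimate.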
39

Multichannel blind deconvolution

Ma, Liang Suo. January 2004
Thesis (Ph.D.)--University of Wollongong, 2004. Typescript. Includes bibliographical references: p. 199-206.
40

Možnosti využití programovacího jazyku VBA ve statistické analýze / Using the VBA language in statistical analysis

Trávníček, Lukáš January 2008
The main aim of the thesis is to introduce the possibilities of MS Excel in data analysis and to point out some strengths and weaknesses of MS Excel in data processing. I pay attention both to the tools which MS Excel contains by default and to statistical products which are not normally part of MS Excel but can be run in the MS Excel environment as statistical add-ins. In the first case I focus on MS Excel tools such as statistical functions, graphical analysis and statistical methods; in the second case on statistical products such as XLStatistics, Analyse-it and WinSTAT. In connection with the first of these statistical products, I show some examples of how to analyze and work with real data samples in XLStatistics. The aim of this part is to show another way to analyze data than in the other products or in MS Excel itself. As the main contribution of the entire work, I programmed my own product, Regxcel, in the VBA language. Regxcel provides regression analysis for the six most common types of regression function. The main aim of this part is to point out another possibility for analyzing data and to solve some problems which users can face in regression analysis in MS Excel. I also try to show that the VBA language can be a big help in data processing in MS Excel. At the very end I present the capabilities of Regxcel on a real data sample. The product itself is included on the attached CD.
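Regxcel itself is a VBA add-in and its code is not reproduced here; the sketch below only illustrates, in Python, the kind of task such a tool automates, fitting a few common regression-function forms by least squares and comparing them by R². The three forms and the data are illustrative assumptions, not necessarily the six functions implemented in Regxcel.

```python
import numpy as np

# Illustrative data only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Three common regression-function forms fitted by least squares
# (the log and power fits are linearised by transforming the data first).
fits = {}
b1, b0 = np.polyfit(x, y, 1)
fits["linear: y = a + b*x"] = b0 + b1 * x
c1, c0 = np.polyfit(np.log(x), y, 1)
fits["logarithmic: y = a + b*ln(x)"] = c0 + c1 * np.log(x)
d1, d0 = np.polyfit(np.log(x), np.log(y), 1)
fits["power: y = a*x^b"] = np.exp(d0) * x ** d1

for name, y_hat in fits.items():
    print(f"{name:32s} R^2 = {r_squared(y, y_hat):.4f}")
```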
