41

Almost Runge-Kutta methods for stiff and non-stiff problems

Rattenbury, Nicolette January 2005 (has links)
Ordinary differential equations arise frequently in the study of the physical world. Unfortunately many cannot be solved exactly, which is why the ability to solve them numerically is important. Traditionally, mathematicians have used one of two classes of methods for numerically solving ordinary differential equations: linear multistep methods and Runge–Kutta methods. General linear methods were introduced as a unifying framework for these traditional methods. They have both the multi-stage nature of Runge–Kutta methods and the multi-value nature of linear multistep methods. This extremely broad class of methods, besides containing Runge–Kutta and linear multistep methods as special cases, also contains hybrid methods, cyclic composite linear multistep methods and pseudo Runge–Kutta methods. In this thesis we present a class of methods known as Almost Runge–Kutta methods. This is a special class of general linear methods which retains many of the properties of traditional Runge–Kutta methods, but with some advantages. Most of this thesis concentrates on explicit methods for non-stiff differential equations, paying particular attention to a special fourth order method which, when implemented in the correct way, behaves like order five. We will also introduce low order diagonally implicit methods for solving stiff differential equations.
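As background to the multi-stage idea mentioned above, here is a minimal sketch of the classical explicit fourth-order Runge–Kutta step (a generic textbook method, not the Almost Runge–Kutta schemes of the thesis; the test problem y' = −y is an arbitrary choice):

```python
import math

def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step: four stages within the
    # step, but only the single value y is carried forward.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, y: -y          # test problem y' = -y, exact solution e^{-t}
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):         # integrate from t = 0 to t = 1
    y = rk4_step(f, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)))   # global error is O(h^4): very small here
```

A linear multistep method would instead carry several past values forward between steps; general linear methods combine both features.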
42

Queueing and Storage Control Models

Sheu, Ru-Shuo January 2002 (has links)
Whole document restricted at the request of the author / This thesis is divided into two parts. The first part is about the control of a special queueing network which has two service nodes in tandem on each service channel. With capacity at each service node being finite, we compare some different control policies to find the admission and routing policies that minimise the blocking rate in the queueing system. We obtain limit theorems as the number of channels becomes large. The stochastic optimization technique we apply here is the Lagrangian method, using the Complementary Slackness Conditions to choose the optimal action. In the second part we consider two reservoir control problems. In the first, the cost function is a single simple linear function; in the second there are two different cost functions, and the choice between them forms a finite-state Markov chain. We find the optimal policies to determine how many units of water should be released from the reservoir under these two different models. In both problems we apply stochastic optimization techniques. The reservoir model uses a standard Markov decision process model, with the associated methods of policy-iteration and value-iteration to find the optimal state-dependent policy. In the routing problem we are also interested in state-dependent policies, but here we wish to look at the system in the limit as the number of queues becomes large, so we can no longer use the technique of Markov decision processes. We look, instead, at the limiting deterministic problem to find the optimal policy.
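The value-iteration algorithm named in the abstract can be sketched on a toy reservoir model (capacity, inflow distribution, reward and discount factor are all hypothetical choices, not the thesis's models):

```python
# Toy reservoir: state = units of water stored (0..C), action = units
# released (at most what is stored). One unit of reward per unit released,
# discounted over time; inflow each period is random.
C = 4                                  # reservoir capacity (hypothetical)
inflow = {0: 0.3, 1: 0.4, 2: 0.3}      # inflow distribution (hypothetical)
beta = 0.9                             # discount factor

def q_value(s, a, V):
    # Immediate reward of releasing a units from state s, plus the
    # discounted expected future value over the random inflow.
    return a + beta * sum(p * V[min(C, s - a + w)] for w, p in inflow.items())

V = [0.0] * (C + 1)
for _ in range(200):                   # value iteration to near convergence
    V = [max(q_value(s, a, V) for a in range(s + 1)) for s in range(C + 1)]

policy = [max(range(s + 1), key=lambda a, s=s: q_value(s, a, V))
          for s in range(C + 1)]
print(policy)   # with discounting, it is optimal to release everything
```

Policy iteration would instead alternate policy-evaluation and policy-improvement steps; both converge to the same optimal state-dependent policy.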
43

A Study of Symmetric Forced Oscillators

Ben-Tal, Alona January 2001 (has links)
In this thesis we study a class of symmetric forced oscillators modeled by non-linear ordinary differential equations. Solutions for this class of systems can be symmetric or non-symmetric. When a symmetric periodic solution loses its stability as a physical parameter is varied, and two non-symmetric periodic solutions appear, this is called a symmetry breaking bifurcation. In a symmetry increasing bifurcation two conjugate chaotic attractors (i.e., attractors which are related to each other by the symmetry) collide and form a larger symmetric chaotic attractor. Symmetry can also be restored via explosions where, as a physical parameter is varied, two conjugate attractors (chaotic or periodic) which do not intersect are suddenly embedded in one symmetric attractor. In this thesis we show that all these apparently distinct bifurcations can be realized by a single mechanism in which two conjugate attractors collide with a symmetric limit set. The same mechanism seems to operate for at least some bifurcations involving non-attracting limit sets. We illustrate this point with examples of symmetry restoration in attracting and non-attracting sets found in the forced Duffing oscillator and in a power system. Symmetry restoration in the power system is associated with a phenomenon known as ferroresonance. The study of the ferroresonance phenomenon motivated this thesis. Part of this thesis is devoted to studying one aspect of the ferroresonance phenomenon: the appearance of a strange attractor with a band-like structure, previously called a 'pseudo-periodic' attractor. Some methods for analyzing the non-autonomous systems under study are shown. We construct three different maps which highlight different features of symmetry restoring bifurcations. One map in particular captures the symmetry of a solution by sampling it every half-period of the forcing.
We describe a numerical method to construct a bifurcation diagram of periodic solutions and present a non-standard approach for converting the forced oscillator to an autonomous system.
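The half-period sampling idea can be illustrated on a minimal linear forced oscillator, where the symmetry is exact (the equation and parameters here are hypothetical stand-ins, not the thesis's Duffing oscillator or power system): for forcing with g(t + T/2) = −g(t), a symmetric steady state satisfies (x, v)(t + T/2) = −(x, v)(t), so consecutive half-period samples are sign-reversed copies of each other.

```python
import math

def rk4_step(t, x, v, h, delta=0.5):
    # x'' + delta*x' + x = cos(t), written as a first-order system
    def f(t, x, v):
        return v, -delta * v - x + math.cos(t)
    k1x, k1v = f(t, x, v)
    k2x, k2v = f(t + h/2, x + h*k1x/2, v + h*k1v/2)
    k3x, k3v = f(t + h/2, x + h*k2x/2, v + h*k2v/2)
    k4x, k4v = f(t + h, x + h*k3x, v + h*k3v)
    return (x + h*(k1x + 2*k2x + 2*k3x + k4x)/6,
            v + h*(k1v + 2*k2v + 2*k3v + k4v)/6)

T = 2 * math.pi              # forcing period
steps = 200                  # integration steps per half-period
h = (T / 2) / steps
t, x, v = 0.0, 0.1, 0.0
samples = []                 # state sampled every half-period
for k in range(steps * 200): # 100 forcing periods, enough to kill transients
    x, v = rk4_step(t, x, v, h)
    t += h
    if (k + 1) % steps == 0:
        samples.append((x, v))
a, b = samples[-2], samples[-1]
print(a[0] + b[0], a[1] + b[1])   # both near zero: samples alternate in sign
```

A non-symmetric solution of a nonlinear system would fail this sign-reversal test, which is what makes the half-period map useful for detecting symmetry breaking.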
44

Ethnomathematics: Exploring Cultural Diversity in Mathematics

Barton, Bill, 1948- January 1996 (has links)
This thesis provides a new conceptualisation of ethnomathematics which avoids some of the difficulties that emerge in the literature. In particular, work has been started on a philosophic basis for the field. There is no consistent view of ethnomathematics in the literature: the relationship with mathematics itself has been ignored, and the philosophical and theoretical background is missing. The literature also reveals the ethnocentricity implied by ethnomathematics as a field of study based in a culture which has mathematics as a knowledge category. Two strategies to overcome this problem are identified: universalising the referent of ‘mathematics’ so that it is the same as “knowledge-making”, or using methodological techniques to minimise it. The position of ethnomathematics in relation to anthropology, sociology, history, and politics is characterised on a matrix. A place for ethnomathematics is found close to the anthropology of mathematics, but the aim of anthropology is to better understand culture in general, while ethnomathematics aims to better understand mathematics. Anthropology, however, contributes its well-established methodologies for overcoming ethnocentricity. The search for a philosophical base finds a Wittgensteinian orientation which enables culturally based ‘systems of meaning’ to gain credibility in mathematics. A definition is proposed for ethnomathematics as the study of mathematical practices within context. Four types of ethnomathematical activity are identified: descriptive, archaeological, mathematising, and analytical activity. The definition also gives rise to a categorisation of ethnomathematical work along three dimensions: the closeness to conventional mathematics; the historical time; and the type of host culture. The mechanisms of interaction between mathematical practices are identified, and the imperialistic growth of mathematics is explained.
Particular features of ethnomathematical theory are brought out in four examples. By admitting the legitimacy of other viewpoints, ethnomathematics opens mathematics to new creative forces. Within education, ethnomathematics provides new choices, and turns cultural conflict into a useful tool for teaching. Mathematical activity exists in a variety of contexts. Learning mathematics involves being aware of, and integrating, diverse concepts. Ethnomathematics expands mathematical horizons, so that cultural diversity becomes a richer contributor to the cultural structures which humans use to understand their world.
45

Forensic Applications of Bayesian Inference to Glass Evidence

Curran, James Michael January 1996 (has links)
The role of the scientist in the courtroom has come under more scrutiny this century than ever before. As a consequence, scientists must constantly look for ways to improve the validity of the evidence they deliver. It is here that the professional statistician can provide assistance. The use of statistics in the courtroom and in forensic science is not new, but until recently has not been common either. Statistics can provide objectivity to subjective assessments and strengthen a case for the prosecution or the defence, but only if it is used correctly. The aim of this thesis is to enhance and replace the existing technology used in the statistical analysis and presentation of trace evidence, i.e. all non-genetic evidence (hairs, fibres, glass, paint, etc.) and transfer problems.
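A minimal sketch of the Bayesian likelihood-ratio reasoning applied to glass refractive-index evidence (all numbers and the single-level normal model here are hypothetical illustrations, not the thesis's methods):

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical refractive-index (RI) figures:
control_mean = 1.51820   # RI measured on the broken crime-scene window
sigma_meas = 0.00004     # spread of repeated RI measurements
pop_mean = 1.51870       # mean RI of window glass in the population
pop_sd = 0.00300         # population spread of window-glass RI
recovered = 1.51823      # RI of the fragment found on the suspect

# Likelihood ratio: probability of the observation if the fragment came
# from the window, versus if it is random glass from the population.
lr = (normal_pdf(recovered, control_mean, sigma_meas)
      / normal_pdf(recovered, pop_mean, pop_sd))
print(lr)   # LR well above 1 supports the same-source proposition
```

The likelihood ratio weighs the evidence itself; combining it with prior odds to reach posterior odds is the court's task, not the scientist's.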
46

Effects of serial correlation on linear models

Triggs, Christopher M. January 1975 (has links)
Given a linear regression model y = Xβ + e, where e has a multivariate normal distribution N(0, Σ), the consequences of the erroneous assumption that e is distributed as N(0, I) are considered. For a general linear hypothesis concerning the parameters β in a general model, the distribution of the statistic used to test the hypothesis, derived under the erroneous assumption, is studied. Particular linear hypotheses concerning particular linear models are investigated so as to describe the effects of various patterns of serial correlation on the test statistics arising from these hypotheses. Special attention is paid to the models of one- and two-way analysis of variance.
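The practical effect studied here can be seen in a small simulation (a generic illustration, not the thesis's derivations): under positively autocorrelated AR(1) errors treated as if independent, the usual t-test of a zero mean rejects far more often than its nominal 5% level.

```python
import random, math

def ar1_series(n, rho):
    # Stationary AR(1) errors with unit marginal variance
    e = [random.gauss(0, 1)]
    for _ in range(n - 1):
        e.append(rho * e[-1] + math.sqrt(1 - rho**2) * random.gauss(0, 1))
    return e

def t_stat(x):
    # Usual one-sample t statistic, which assumes independent errors
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / (n - 1)
    return m / math.sqrt(s2 / n)

random.seed(1)
n, reps, crit = 50, 2000, 2.01   # approx. two-sided 5% point of t with 49 df
rate = sum(abs(t_stat(ar1_series(n, 0.5))) > crit for _ in range(reps)) / reps
print(rate)   # well above the nominal 0.05 when rho = 0.5
```

With rho = 0.5 the variance of the sample mean is inflated by roughly (1 + rho)/(1 − rho) = 3, which is why the nominal test is so badly miscalibrated.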
47

Understanding linear algebra concepts through the embodied, symbolic and formal worlds of mathematical thinking

Stewart, Sepideh January 2008 (has links)
Linear algebra is one of the first advanced mathematics courses that students encounter at university level. The transfer from a primarily procedural or algorithmic school approach to an abstract and formal presentation of concepts through concrete definitions seems to be creating difficulty for many students, who are barely coping with the procedural aspects of the subject. This research proposes applying APOS theory, in conjunction with Tall’s three worlds of embodied, symbolic and formal mathematics, to create a framework for examining the learning of a variety of linear algebra concepts by groups of first and second year university students. The aim is to investigate the difficulties in understanding some linear algebra concepts and to propose potential paths for preventing them. As part of this research project several case studies were conducted in which groups of first and second year students were taught some introductory linear algebra concepts based on the framework and expressed their thinking through tests, interviews and concept maps. The results suggest that the students had limited understanding of the concepts: they struggled to recognise the concepts in different registers, and their lack of ability in linking the major concepts became apparent. However, the results also revealed that those with more representational diversity had more overall understanding of the concepts. In particular, the embodied introduction of a concept proved a valuable adjunct to the students' thinking. Since difficulties with learning linear algebra by average students are universally acknowledged, it is anticipated that this study may provide suggestions with the potential for widespread positive consequences for learning.
48

Significance testing in automatic interaction detection (A.I.D.)

Worsley, Keith John January 1978 (has links)
Automatic Interaction Detection (A.I.D.) is the name of a computer program, first used in the social sciences, to find the interaction between a set of predictor variables and a single dependent variable. The program proceeds in stages, and at each stage the categories of a predictor variable induce a split of the dependent variable into two groups, so that the between-groups sum of squares (BSS) is a maximum. In this way, the optimum split defines the interaction between predictor and dependent variable, and the criterion BSS is taken as a measure of the explanatory power of the split. One of the strengths of A.I.D. is that this interaction is established without any reference to a specific model, and for this reason it is widely used in practice. However this strength is also its weakness: with no model there is no measure of its significance. Barnard (1974) has said: “… nowadays with more and more apparently sophisticated computer programs for social science, failure to take account of possible sampling fluctuations is leading to a glut of unsound analyses … I have in mind procedures such as A.I.D., the automatic interaction detector, which guarantees to get significance out of any data whatsoever. Methods of this kind require validation …” The aim of this thesis is to supply part of that validation by investigating the null distribution of the optimum BSS for a single predictor at a single stage of A.I.D., so that the significance of any particular split can be judged. The problem of the overall significance of a complete A.I.D. analysis, combining many stages, still remains to be solved. In Chapter 1 the A.I.D. method is described in more detail and an example is presented to illustrate its use. A null hypothesis that the dependent variable observations have independent and identical normal distributions is proposed as a model for no interaction.
In Chapters 2 and 3 the null distributions of the optimum BSS for a single predictor are derived and tables of percentage points are given. In Chapter 4 the normal assumption is dropped and non-parametric A.I.D. criteria, based on ranks, are proposed. Tables of percentage points, found by direct enumeration and by Monte Carlo methods, are given. In Chapter 5 the example presented in Chapter 1 is used to illustrate the application of the theory and tables in Chapters 2, 3 and 4 and some final conclusions are drawn.
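The split criterion at the heart of A.I.D. can be sketched as follows (hypothetical data; a generic illustration of the BSS-maximising binary split, not the thesis's tables or distributions):

```python
def best_split(pairs):
    # For an ordered predictor, find the cut point that maximises the
    # between-groups sum of squares (BSS) of the dependent variable.
    pairs = sorted(pairs)                 # sort by predictor value
    y = [v for _, v in pairs]
    n, total = len(y), sum(y)
    mean = total / n
    best_bss, best_cut = -1.0, None
    left = 0.0
    for i in range(1, n):                 # cut between positions i-1 and i
        left += y[i - 1]
        right = total - left
        bss = (i * (left / i - mean) ** 2
               + (n - i) * (right / (n - i) - mean) ** 2)
        if bss > best_bss:
            best_bss, best_cut = bss, pairs[i - 1][0]
    return best_cut, best_bss

# Predictor x, response y with an obvious jump between x = 3 and x = 4:
data = [(1, 1.0), (2, 1.2), (3, 0.9), (4, 5.1), (5, 4.8), (6, 5.0)]
cut, bss = best_split(data)
print(cut)   # the optimal split falls after x = 3
```

Because the cut is chosen to maximise BSS over all candidate splits, the maximised criterion is stochastically larger than any single fixed-split BSS, which is exactly why its null distribution needs separate study.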
49

Automatic structures

Rubin, Sasha January 2004 (has links)
This thesis investigates structures that are presentable by finite automata working synchronously on tuples of finite words. The emphasis is on understanding the expressiveness and limitations of automata in this setting. In particular, the thesis studies the classification of classes of automatic structures, the complexity of the isomorphism problem, and the relationship between definability and recognisability.
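A classic instance of the synchronous-automaton idea (a standard textbook example, not necessarily one treated in the thesis): the graph of binary addition is recognised by a small automaton whose state is the carry, reading the three words bit-by-bit in parallel, least significant bit first, with shorter words padded by 0.

```python
def accepts_addition(x_bits, y_bits, z_bits):
    # Simulate the automaton: read the three padded words synchronously;
    # the state is the current carry, and a bit mismatch is a dead state.
    n = max(len(x_bits), len(y_bits), len(z_bits))
    pad = lambda w: w + [0] * (n - len(w))
    carry = 0
    for a, b, c in zip(pad(x_bits), pad(y_bits), pad(z_bits)):
        s = a + b + carry
        if s % 2 != c:
            return False          # reject: output bit does not match
        carry = s // 2
    return carry == 0             # accept iff no carry is left over

def bits(n):
    # LSB-first binary expansion of a non-negative integer
    out = []
    while n:
        out.append(n % 2)
        n //= 2
    return out

print(accepts_addition(bits(13), bits(29), bits(42)))   # True: 13 + 29 = 42
print(accepts_addition(bits(13), bits(29), bits(41)))   # False
```

Because addition (and ordering) are automaton-recognisable in this sense, first-order questions about such structures become decidable by automata constructions, which is what makes the class worth classifying.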
50

New methods for analysing generalised linear models with applications to epidemiology

Holden, Jennifer Kay January 2001 (has links)
Whole document restricted, see Access Instructions file below for details of how to access the print copy. / The aim of capture-recapture methods in epidemiology is to accurately estimate the total number of people who have a specific disease. These methods were first developed by ecologists interested in estimating the total population size of various animal species. Capture-recapture methods have a relatively short history, at least in terms of application to epidemiological data sets. If applied correctly they can be of great benefit, and are invaluable for planning and resource allocation. The aim of this thesis is to enhance the existing methods used in epidemiological capture-recapture problems. This research explores new methods for analysing generalised linear models, within the capture-recapture framework, with applications to epidemiology. In particular, we critically examine two New Zealand data sets. We compare two small-sample adjustments for capture-recapture methods, and find that the Evans and Bonett adjustment can be a useful tool for sparse data. We employ stratified capture-recapture analyses to alleviate problems with heterogeneity and reporting patterns. In addition, we consider a type of cost-benefit analysis for the reporting sources. Two proposed methods of internal validity analysis are scrutinised. We find that one of these is counter-intuitive and of no use; the other, however, may be of some use in at least indicating the direction of any bias in the capture-recapture estimates. We use simulation to explore the effects of errors on patient records, and find that even relatively small percentages of errors can affect estimates dramatically. We conclude with a study of the optimal number of sources to use in epidemiological capture-recapture analyses. We argue that using three sources is not necessarily optimal, and that using four sources is also entirely manageable.
This thesis outlines a strategy for analysing epidemiological data sets using capture-recapture methods, and includes aspects of model fitting and selection, cost-benefit analysis, diagnostic checking through simulations of the effects of record errors, and the effects of collapsing lists, as well as a critical check of the capture-recapture assumptions. This investigation demonstrates the potential of capture-recapture methods to provide accurate estimates of the size of various disease populations.
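For two sources, the basic capture-recapture estimate takes a closed form; the sketch below uses Chapman's variant of the Lincoln–Petersen estimator with hypothetical counts (not the thesis's New Zealand data):

```python
def chapman_estimate(n1, n2, m):
    # Chapman's nearly unbiased form of the Lincoln-Petersen estimator:
    # n1, n2 = cases on each reporting list, m = cases appearing on both.
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical example: 200 cases on a hospital list, 150 on a GP list,
# and 60 cases appearing on both lists.
print(round(chapman_estimate(200, 150, 60)))   # about 497 cases in total
```

The two-source estimate assumes the lists are independent; with three or more lists, log-linear (generalised linear) models let list-dependence be modelled explicitly, which is the setting the thesis works in.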
