Ahmed, Md. Faiaz
Thesis (Ph. D.)--University of Hong Kong, 2009. / Includes bibliographical references (leaves 138-148). Also available in print.
Nelson, Bradley William
01 January 1977
Since its invention twenty years ago, the Delphi technique has been gaining ever wider acceptance as a tool for forecasting technology; for gathering expert opinion, from a local to a worldwide "advice community," upon which government, industry, and other policy-making bodies must so frequently rely; and for providing judgmental input for studies (e.g., in the social sciences) where hard data are unavailable or too difficult to obtain. Accompanying this increased acceptance is an increased danger that a Delphi will be manipulated to produce the results desired by a particular individual or group. Manipulation is increasingly mentioned in the literature as a danger, but little has been done to study the problem. Two groups of thirty United States Air Force officers enrolled in a Master of Business Administration program participated in a fact-probing Delphi containing thirty statements. The participants in one group were given falsified statistical feedback on fifteen of the statements, while the participants in the other group were given falsified statistical feedback on the other fifteen statements. A similar study was done with another group of officers using a value-probing Delphi. The results of these studies showed a high degree of success in obtaining a desired value through the use of manipulated statistical feedback. This success was enhanced by running additional rounds. It was also found that the statistical manipulation had a significant effect on the convergence and stability of the Delphi statements. The effects of statistical manipulation on confidence, as measured by self-rating, were also studied. It was found that there was a significant tendency for Delphi participants to shift their self-ratings toward the middle during later rounds. The effect of the distance between a participant's original estimate and the median reported back to him on the amount by which the participant changed his self-rating was investigated; the results were inconsistent.
Statistically manipulated participants showed an overall decrease in confidence, regardless of their original self-rating. Suggestions for extending the research on manipulation of Delphi statements, together with a taxonomy of the variables that comprise the manipulation problem, are discussed.
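The manipulation mechanism described above can be illustrated with a toy simulation (not the study's actual protocol): assume each participant revises an estimate part-way toward the median reported back, with a hypothetical conformity weight `pull`, while the manipulator substitutes a falsified median for the true one. The initial estimates and target value below are arbitrary.

```python
import statistics

def delphi_round(estimates, reported_median, pull=0.4):
    """One Delphi round: each participant revises toward the reported median.
    'pull' is a hypothetical conformity weight, not a value from the study."""
    return [e + pull * (reported_median - e) for e in estimates]

def run_delphi(estimates, rounds, falsified_median=None):
    """Run several rounds; if falsified_median is given, feed it back
    instead of the group's true median (the manipulation studied above)."""
    for _ in range(rounds):
        fed_back = (falsified_median if falsified_median is not None
                    else statistics.median(estimates))
        estimates = delphi_round(estimates, fed_back)
    return estimates

group = [40, 55, 60, 70, 85]          # initial estimates (arbitrary)
honest = run_delphi(group, rounds=4)
rigged = run_delphi(group, rounds=4, falsified_median=100)
print(statistics.median(honest))      # stays at the true median, 60
print(statistics.median(rigged))      # pulled toward the falsified value 100
```

Consistent with the abstract's finding, running additional rounds in this toy model pulls the rigged group's median still closer to the falsified target.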
Many symbolic integration methods and algorithms have been developed to evaluate definite integrals such as those of interest to physicists, theoretical chemists, and engineers. These have been implemented in mathematical software such as Maple and Mathematica to give closed forms of definite integrals. The work presented here introduces and analytically investigates an algorithm called the method of brackets. The method of brackets consists of a small number of rules that transform the evaluation of a definite integral into the problem of solving a system of linear equations. These rules are heuristic, so justification is needed to make the method rigorous. Here we use contour integrals to justify the evaluations given by the algorithm. / Tri Ngo
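The flavor of the method can be sketched on the simplest one-bracket case. Under the standard expansion f(x) = Σ_n (-1)^n/n! · a(n) x^n, the bracket rule assigns ∫₀^∞ x^(s-1) f(x) dx the value a(n*) Γ(-n*) with n* = -s; for f(x) = e^(-x), where a(n) = 1, this reproduces Γ(s). The sketch below is a simplified reading of that rule, not the thesis algorithm, checked against crude numerical quadrature.

```python
import math

# Bracket rule (sketch): for f(x) = sum_n (-1)^n/n! * a(n) * x^n,
#   integral_0^inf x^(s-1) f(x) dx  "="  a(n*) * Gamma(-n*),  n* = -s.
# For f(x) = exp(-x) we have a(n) = 1, so the rule yields Gamma(s).

def bracket_value(s):
    """Value assigned by the one-bracket rule to the integral above."""
    a = lambda n: 1.0                 # coefficient a(n) = 1 for exp(-x)
    n_star = -s                       # solve the bracket equation n + s = 0
    return a(n_star) * math.gamma(-n_star)   # = Gamma(s)

def numeric_integral(s, upper=40.0, steps=200000):
    """Crude trapezoidal check of the same integral (endpoints vanish)."""
    h = upper / steps
    total = 0.0
    for i in range(1, steps):
        x = i * h
        total += x ** (s - 1) * math.exp(-x)
    return h * total

s = 2.5
print(bracket_value(s), numeric_integral(s))  # both close to Gamma(2.5) = 1.3293...
```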
17 May 2020
In this thesis we derive the convergence order of a regularized error functional for reconstructing faults from boundary measurements of displacement fields. Convergence was previously proved to occur as the regularization parameter tends to zero, but the convergence order was unknown. The functional is used to solve an inverse problem for a half-space linear elasticity model. We first discuss this model and review some basic properties of the functional, and we then derive the convergence order for small regularization parameters. This study is a first, but essential, step toward analyzing the convergence order of related numerical methods. The reconstruction method studied in this thesis was built from a model of real-world faults between tectonic plates. That model was first proposed by geophysicists and was later analyzed by mathematicians interested in building efficient numerical methods with proofs of convergence.
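The thesis functional itself is not reproduced here, but the kind of statement being proved, an explicit rate as the regularization parameter ε → 0, can be illustrated on a scalar Tikhonov-type toy problem where the rate O(ε) can be read off directly. All values below are illustrative.

```python
# Toy illustration (not the thesis functional): minimize
#   J_eps(x) = (a*x - b)^2 + eps * x^2,
# whose minimizer x_eps = a*b / (a^2 + eps) converges to the exact
# solution b/a as eps -> 0, with error |x_eps - b/a| = O(eps).
a, b = 2.0, 3.0
exact = b / a
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    x_eps = a * b / (a ** 2 + eps)
    err = abs(x_eps - exact)
    # The ratio err/eps settles near b/a^3 = 0.375: first-order convergence.
    print(f"eps={eps:.0e}  error={err:.2e}  error/eps={err / eps:.3f}")
```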
Parente, Paul J. V.
Thesis (M.A.)--Boston University / In this paper, several numerical methods useful for calculating approximate solutions of differential equations are shown to be convergent. For the first two methods (Picard's method and the Cauchy-Euler method), the theoretical importance of numerical solutions is demonstrated by establishing existence and uniqueness theorems for the first-order differential equation dy/dx = f(x,y) subject to the following conditions: the equation is considered in some region of the xy-plane containing a point (x0, y0), and, in addition to being continuous, f(x,y) is assumed to satisfy a Lipschitz condition with respect to y, i.e. |f(x,y1) - f(x,y2)| <= k|y1 - y2|, where k is called the Lipschitz constant [TRUNCATED]
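The Cauchy-Euler (forward Euler) scheme mentioned above is easy to sketch. A minimal implementation for dy/dx = f(x, y), tried on the test problem y' = y with exact value y(1) = e (a standard example, not one from the paper), exhibits the expected first-order error behavior:

```python
import math

def euler(f, x0, y0, x_end, n):
    """Forward (Cauchy-)Euler method for dy/dx = f(x, y) with n steps."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # advance along the tangent line
        x += h
    return y

# Test problem: y' = y, y(0) = 1, exact solution y(1) = e.
f = lambda x, y: y
for n in (100, 200, 400):
    err = abs(euler(f, 0.0, 1.0, 1.0, n) - math.e)
    print(f"n={n:4d}  error={err:.5f}")   # error roughly halves as n doubles
```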
26 May 2004
The dissertation consists of two parts. The first part provides the algorithms and error estimates of the collocation Trefftz methods (CTMs) for seeking solutions of partial differential equations. We consider several popular models of PDEs with singularities, including Poisson equations and biharmonic equations. The second part presents the collocation methods (CMs) and gives a unified framework for combinations of CMs with other numerical methods, such as the finite element method. An interesting fact has been justified: the integration quadrature formulas affect only the uniform $V_h$-elliptic inequality, not the solution accuracy. In CTMs and CMs, the Gaussian quadrature points are chosen as the collocation points; the Newton-Cotes quadrature points can be applied as well. Suitably dense collocation points are needed to guarantee the uniform $V_h$-elliptic inequality. In addition, the solution domain of a problem need not be confined to polygons, and we may divide the domain into several small subdomains. For smooth solutions, polynomials of different degrees can be chosen to approximate the solutions properly; different kinds of admissible functions may also be used in the methods given in this dissertation. Besides, a new unified framework for combinations of CMs with other methods is analyzed. The new analysis in this dissertation is more flexible for practical problems and fits rather arbitrary domains easily, which is a distinctive feature compared with the existing literature on CTMs and CMs. Finally, a few numerical experiments for smooth and singular problems are provided to display the effectiveness of the proposed methods and to support the analysis.
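Since the Gaussian quadrature points serve as the collocation points, one standard way to obtain them (a generic sketch, not the dissertation's code) is Newton iteration on the roots of the Legendre polynomial, evaluated via its three-term recurrence:

```python
import math

def legendre(n, x):
    """Value and derivative of the Legendre polynomial P_n at x (|x| < 1),
    via the three-term recurrence k*P_k = (2k-1)*x*P_{k-1} - (k-1)*P_{k-2}."""
    p0, p1 = 1.0, x
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    dp = n * (x * p1 - p0) / (x * x - 1.0)   # derivative identity for P_n
    return p1, dp

def gauss_legendre_nodes(n):
    """Roots of P_n on (-1, 1): the n-point Gaussian collocation points."""
    nodes = []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))  # standard initial guess
        for _ in range(50):                             # Newton iteration
            p, dp = legendre(n, x)
            dx = p / dp
            x -= dx
            if abs(dx) < 1e-14:
                break
        nodes.append(x)
    return sorted(nodes)

print(gauss_legendre_nodes(3))   # approx [-0.7746, 0.0, 0.7746], i.e. 0 and +/-sqrt(3/5)
```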
The "Ahrens method" is a very simple method for sampling from univariate distributions. It is based on rejection from piecewise-constant hat functions. It can be applied analogously to the multivariate case, where hat functions are used that are constant on rectangular domains. In this paper we investigate the case of distributions with so-called orthounimodal densities. Technical implementation details as well as their practical limitations are discussed, and the application to more general distributions is considered. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
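The basic univariate step can be sketched as follows, assuming a decreasing density so that a hat taking the density's value at each cell's left endpoint dominates it. The density, interval, and cell count below are illustrative choices, not the paper's:

```python
import math
import random

def make_hat(f, a, b, n):
    """Piecewise-constant hat for a *decreasing* density f on [a, b]:
    on each cell the hat height is f at the cell's left endpoint."""
    w = (b - a) / n
    cells = [a + i * w for i in range(n)]      # left endpoints
    heights = [f(x) for x in cells]
    return cells, heights, w

def ahrens_sample(f, cells, heights, w, rng):
    """Rejection sampling from the piecewise-constant hat."""
    weights = [h * w for h in heights]          # area of each hat piece
    while True:
        i = rng.choices(range(len(cells)), weights=weights)[0]
        x = cells[i] + rng.uniform(0.0, w)      # uniform within the cell
        if rng.uniform(0.0, heights[i]) <= f(x):
            return x                            # accept under the density

# Example: unnormalized half-normal density on [0, 3] (decreasing).
f = lambda x: math.exp(-0.5 * x * x)
cells, heights, w = make_hat(f, 0.0, 3.0, 30)
rng = random.Random(1)
sample = [ahrens_sample(f, cells, heights, w, rng) for _ in range(20000)]
print(sum(sample) / len(sample))   # near sqrt(2/pi) = 0.798, the half-normal mean
```

More cells make the hat tighter, so fewer proposals are rejected; this is the same efficiency trade-off that governs the rectangular hats in the multivariate case.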
Willard, John Gordon
The purpose of this paper is to investigate the Kjeldahl method of nitrogen determination.