321

High performance simplex solver

Huangfu, Qi January 2013 (has links)
The dual simplex method is frequently the most efficient technique for solving linear programming (LP) problems. This thesis describes an efficient implementation of the sequential dual simplex method and the design and development of two parallel dual simplex solvers. In serial, many advanced techniques for the (dual) simplex method are implemented, including sparse LU factorization, hyper-sparse linear system solution techniques, efficient approaches to updating LU factors and sophisticated dual simplex pivoting rules. These techniques, some of which are novel, lead to serial performance which is comparable to the best public domain dual simplex solver, providing a solid foundation for the simplex parallelization. During the implementation of the sequential dual simplex solver, the study of classic LU factor update techniques leads to the development of three novel update variants. One of them is comparable to the most efficient established approach but is much simpler to implement, and the other two are especially useful for one of the parallel simplex solvers. In addition, the study of the dual simplex pivoting rules identifies and motivates further investigation of how hyper-sparsity may be promoted. In parallel, two high-performance simplex solvers are designed and developed. One approach, based on a lesser-known dual pivoting rule called suboptimization, exploits parallelism across multiple iterations (PAMI). The other, based on the regular dual pivoting rule, exploits purely single-iteration parallelism (SIP). The performance of PAMI is comparable to a world-leading commercial simplex solver. SIP is frequently complementary to PAMI, achieving speedup in cases where PAMI results in slowdown.
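As a rough illustration of the hyper-sparsity idea mentioned in this abstract (a sketch, not code from the thesis), the Python snippet below solves a unit lower-triangular system Lx = b with a sparse right-hand side by skipping every column whose solution entry is zero; a production solver of the kind described would additionally run a symbolic pass to predict the nonzero pattern before doing any arithmetic. The matrix, right-hand side and function name are assumptions made for the example.

```python
import numpy as np
from scipy.sparse import csc_matrix

def sparse_lower_solve(L, b_idx, b_val):
    """Solve L x = b for a unit lower-triangular L stored column-wise (CSC),
    doing numerical work only for columns whose solution entry is nonzero.
    Simplified illustration of hyper-sparsity: real solvers first compute the
    nonzero pattern symbolically and then visit only those columns."""
    n = L.shape[0]
    x = np.zeros(n)
    x[b_idx] = b_val                      # scatter the sparse right-hand side
    for j in range(n):                    # columns in natural (topological) order
        if x[j] == 0.0:                   # column cannot contribute -> skip it
            continue
        for p in range(L.indptr[j], L.indptr[j + 1]):
            i = L.indices[p]
            if i > j:                     # strictly sub-diagonal entries of unit L
                x[i] -= L.data[p] * x[j]
    return x

# Tiny usage example with a 3x3 unit lower-triangular matrix.
L = csc_matrix(np.array([[1.0, 0.0, 0.0],
                         [0.5, 1.0, 0.0],
                         [0.0, 0.25, 1.0]]))
print(sparse_lower_solve(L, b_idx=[0], b_val=[2.0]))   # -> [ 2.  -1.   0.25]
```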
322

Evaluating Atlantic tropical cyclone track error distributions based on forecast confidence

Hauke, Matthew D. 06 1900 (has links)
A new Tropical Cyclone (TC) surface wind speed probability product from the National Hurricane Center (NHC) takes into account uncertainty in track, maximum wind speed, and wind radii. A Monte Carlo (MC) model is used that draws from probability distributions based on historic track errors. In this thesis, distributions of forecast track errors conditioned on forecast confidence are examined to determine whether significant differences exist in distribution characteristics. Two predictors are used to define forecast confidence: the Goerss Predicted Consensus Error (GPCE) and the Global Forecast System (GFS) ensemble spread. The distributions of total-, along-, and cross-track errors from NHC official forecasts are defined for low, average, and high forecast confidence. Also, distributions of the GFS ensemble mean total-track errors are defined based on similar confidence levels. Standard hypothesis testing methods are used to examine distribution characteristics. Using the GPCE values, significant differences existed in nearly all track error distributions for each level of forecast confidence. The GFS ensemble spread did not provide a basis for statistically different distributions. These results suggest that the NHC probability model would likely be improved if the MC model drew from distributions of track errors based on the GPCE measures of forecast confidence. / US Air Force (USAF) author.
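As a hedged sketch of the kind of comparison described here (synthetic numbers, not NHC track data), the snippet below draws two confidence-conditioned error samples and applies standard two-sample tests; the sample sizes, spreads and variable names are assumptions chosen only for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic 48-h along-track errors (n mi) for two hypothetical confidence bins;
# the study itself conditions historical NHC errors on GPCE and GFS ensemble spread.
high_conf = rng.normal(loc=0.0, scale=60.0, size=500)
low_conf = rng.normal(loc=0.0, scale=110.0, size=500)

# Two-sample Kolmogorov-Smirnov test: do the error distributions differ at all?
ks_stat, ks_p = stats.ks_2samp(high_conf, low_conf)
print(f"KS statistic = {ks_stat:.3f}, p = {ks_p:.4f}")

# Levene's test targets the spread, which is what forecast confidence should change.
lev_stat, lev_p = stats.levene(high_conf, low_conf)
print(f"Levene statistic = {lev_stat:.3f}, p = {lev_p:.4f}")

# A Monte Carlo wind-probability model could then resample from the bin matching
# the current forecast's confidence instead of from a single pooled error sample.
sampled_errors = rng.choice(high_conf, size=1000, replace=True)
```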
323

Die toepasbaarheid van die Monte Carlo studies op empiriese data van die Suid-Afrikaanse ekonomie [The applicability of Monte Carlo studies to empirical data from the South African economy]

29 July 2014 (has links)
M.Com.(Econometrics) / The objective of this study is to evaluate different estimation techniques that can be used to estimate the coefficients of a model. The estimation techniques were applied to empirical data drawn from the South African economy. Monte Carlo studies are unique in that data is statistically generated for the experiments, because actual observations on economic variables contain several econometric problems, such as autocorrelation and multicollinearity, simultaneously. The approach in this study differs in that empirical data is used to evaluate the estimation techniques. The estimation techniques evaluated are:
• Ordinary least squares method
• Two stage least squares method
• Limited information maximum likelihood method
• Three stage least squares method
• Full information maximum likelihood method.
The estimates of the different coefficients are evaluated on the following criteria:
• The bias of the estimates
• The variance of the estimates
• The t-values of the estimates
• The root mean square error.
The ranking of the estimation techniques on the bias criterion is as follows:
1. Full information maximum likelihood method
2. Ordinary least squares method
3. Three stage least squares method
4. Two stage least squares method
5. Limited information maximum likelihood method
The ranking on the variance criterion is as follows:
1. Full information maximum likelihood method
2. Ordinary least squares method
3. Three stage least squares method
4. Two stage least squares method
5. Limited information maximum likelihood method
All the estimation techniques performed poorly with regard to the statistical significance of the estimates. The ranking on the t-values of the estimates is thus as follows:
1. Three stage least squares method
2. Ordinary least squares method
3. Two stage least squares method and the limited information maximum likelihood method
4. Full information maximum likelihood method
The ranking on the root mean square error criterion is as follows:
1. Full information maximum likelihood method and the ordinary least squares method
2. Two stage least squares method
3. Limited information maximum likelihood method and the three stage least squares method
The results achieved in this study are very similar to those of the Monte Carlo studies. The only exception is the ordinary least squares method, which performed better on every criterion dealt with in this study. Though the full information maximum likelihood method performed best on two of the criteria, its performance was extremely poor on the t-value criterion. In this study, the ordinary least squares method proves to be the most consistent performer.
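The bias and root-mean-square-error comparison summarised above can be reproduced in miniature on synthetic data (not the South African dataset used in the study). The sketch below runs a small Monte Carlo experiment comparing ordinary least squares with two-stage least squares on a simple simultaneous system with one instrument; the model, coefficient values and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_once(n=200, beta=1.0):
    """One replication: a structural equation y1 = beta*y2 + u in which the
    regressor y2 is endogenous, with a single valid instrument z."""
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    v = 0.8 * u + rng.normal(scale=0.6, size=n)   # correlated errors -> endogeneity
    y2 = 0.9 * z + v                              # reduced form / first stage
    y1 = beta * y2 + u                            # structural equation of interest

    # Ordinary least squares (biased, because y2 is correlated with u)
    b_ols = (y2 @ y1) / (y2 @ y2)

    # Two-stage least squares: project y2 on the instrument, then regress y1 on it
    y2_hat = z * ((z @ y2) / (z @ z))
    b_2sls = (y2_hat @ y1) / (y2_hat @ y2)
    return b_ols, b_2sls

estimates = np.array([simulate_once() for _ in range(2000)])
beta_true = 1.0
for name, est in zip(["OLS", "2SLS"], estimates.T):
    bias = est.mean() - beta_true
    rmse = np.sqrt(np.mean((est - beta_true) ** 2))
    print(f"{name:5s} bias = {bias:+.3f}   rmse = {rmse:.3f}")
```

On this design OLS should show an upward bias while 2SLS should be roughly unbiased at the cost of a larger variance, which is the trade-off the bias and variance criteria above are measuring.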
324

A Comparison of the Cuisenaire Method of Teaching Arithmetic with a Conventional Method

Steiner, Kelly Everett 08 1900 (has links)
The problem of this investigation was to determine the effectiveness of the Cuisenaire method of teaching arithmetic to fourth graders, as compared with a conventional method. A secondary aspect of the problem was to compare the performances of the experimental and control groups when classified according to sex.
325

Linear Programming Using the Simplex Method

Patterson, Niram F. 01 1900 (has links)
This thesis examines linear programming problems, the theoretical foundations of the simplex method, and how a linear programming problem can be solved with the simplex method.
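For readers who want to experiment without writing a tableau by hand, the hedged sketch below solves a tiny linear program with SciPy's linprog routine, which calls the HiGHS solver; the objective and constraints are made up purely for illustration.

```python
from scipy.optimize import linprog

# Maximise 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimises by convention, so the objective is negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)   # optimal vertex (4, 0) with objective value 12
```

The optimum lies at a vertex of the feasible polygon, which is exactly the property the simplex method exploits as it moves from vertex to vertex.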
326

Multicompartmental poroelasticity for the integrative modelling of fluid transport in the brain

Vardakis, Ioannis C. January 2014 (has links)
The world population is expected to increase to approximately 11 billion by 2100. The ageing population (aged 60 and over) is projected to exceed the number of children in 2047. This will be a situation without precedent. The number of citizens with disorders of old age, such as dementia, will rise to 115 million worldwide by 2050. The estimated cost of dementia will also increase, from $604 billion in 2010 to $1,117 billion by 2030. At the same time, medical expertise, evidence-driven policymaking and commissioning of services are increasingly evolving the definitive architecture of comprehensive long-term care to account for these changes. Technological advances, such as those provided by computational science and biomedical engineering, will allow for an expansion in our ability to model and simulate an almost limitless variety of complex problems that have long defied traditional methods of medical practice. Numerical methods and simulation offer the prospect of improved clinically relevant predictive information, and of course optimisation, enabling more efficient use of resources for designing treatment protocols, risk assessment and urgently needed management of a long-term care system for a wide spectrum of brain disorders. Within this paradigm, the importance of the relationship between the senescence of cerebrospinal fluid transport and dementia in the elderly makes the cerebral environment notably worthy of investigation through numerical and computational modelling. Hydrocephalus can be succinctly described as the abnormal accumulation (imbalance between production and circulation) of cerebrospinal fluid (CSF) within the brain. Using hydrocephalus as a test bed, one is able to account for the necessary mechanisms involved in the interaction between cerebral fluid production, transport and drainage. The current state of knowledge about hydrocephalus, and more broadly integrative cerebral dynamics and its associated constitutive requirements, advocates that poroelastic theory provides a suitable framework to better understand the disease. In this work, multiple-network poroelastic theory (MPET) is used to develop a novel spatio-temporal model of fluid regulation and tissue displacement at various scales within the cerebral environment. The model is discretised in a variety of formats, through the established finite difference method, finite difference – finite volume coupling and also the finite element method. Both chronic and acute hydrocephalus were investigated in a variety of settings, accompanied by emerging surgical techniques where appropriate. In the coupled finite difference – finite volume model, a key novelty was the amalgamation of anatomically accurate choroid plexuses with their feeding arteries and a simple relationship relaxing the constraint of a unique permeability for the CSF compartment. This was done in order to account for Aquaporin-4 sensitisation. This model is used to demonstrate the impact of aqueductal stenosis and fourth ventricle outlet obstruction. The implications of treating such a clinical condition with the aid of endoscopic third (ETV) and endoscopic fourth ventriculostomy (EFV) are considered. It was observed that CSF velocity in the aqueduct, along with ventricular displacement, CSF pressure, wall shear stress and the pressure difference between the lateral and fourth ventricles, increased with applied stenosis.
The application of ETV reduced the aqueductal velocity, ventricular displacement, CSF pressure, wall shear stress and pressure difference to within nominal levels. The greatest reversal of the effects of atresia comes from opting for ETV rather than the more complicated procedure of EFV. For the finite difference model incorporating nonlinear permeability, qualitatively similar results were obtained in comparison to the pertinent literature; however, there was an overall amplification of ventriculomegaly and transparenchymal pressure difference using this model. A quantitative and qualitative assessment is made of hydrocephalus cases involving aqueductal stenosis, along with the effects on CSF reabsorption in the parenchyma and subarachnoid space. The finite element discretisation template produced for the nth-dimensional transient MPET system allowed for novel insight into hydrocephalus. In the 1D formulation, imposing the breakdown of the blood-CSF barrier responsible for clearance resulted in an increase in ventricular displacement, transparenchymal venous pressure gradient and transparenchymal CSF pressure gradient, whilst altering the compliance proved to markedly alter the rate of change of displacement and CSF pressure gradient. The influence of Poisson's ratio was investigated through the use of the dual-grid solver in order to distinguish between possible over- or under-prediction of the ventricular displacement. In the 2D model based on linear triangles, the importance of the MPET boundary conditions is acknowledged, along with the quality of the underlying mesh. Notable results include that fluid content is highest in the periventricular region and the skull, whilst over longer time scales the peak CSF content becomes limited to the periventricular region. Venous fluid content is heavily influenced by the Biot-Willis constant, whilst both the venous and CSF/ISF compartments are shown to be strongly influenced by breakdown of the blood-CSF barrier. Increasing the venous compliance affects the arterial, capillary and venous compartments. Decreasing the venous compliance results in an accumulation of fluid, possibly helping to explain why the ventricles can be induced to compress rather than expand under decreased compliance. Finally, a successful application of the 3D-MPET template is shown for simple geometries. It is envisaged that future observations into the biology of cerebral fluid flow (such as perivascular CSF-ISF fluid exchange) and its interaction with the surrounding parenchyma will demand the evolution of the MPET model to reach a level of complexity that could allow for an experimentally guided exploration of areas that would otherwise prove too intricate and intertwined under conventional settings.
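The full multiple-network system is far beyond a short snippet, but a single-compartment toy problem gives a feel for the kind of finite difference discretisation mentioned above. The sketch below integrates a 1D Terzaghi-type pressure diffusion equation with an explicit scheme; it is not the thesis model, and every material value is an assumption chosen only to keep the scheme stable.

```python
import numpy as np

# Explicit finite-difference sketch of single-compartment 1D pressure diffusion
# (Terzaghi-type consolidation), a much simpler cousin of the MPET equations.
c_v = 1.0e-6                  # consolidation coefficient, m^2/s (assumed)
thickness = 0.05              # domain thickness, m (assumed)
nz = 101
dz = thickness / (nz - 1)
dt = 0.4 * dz**2 / c_v        # respects the explicit stability limit dt <= dz^2 / (2 c_v)

p = np.full(nz, 1000.0)       # initial excess pressure, Pa
p[0] = p[-1] = 0.0            # drained boundaries at both faces

for _ in range(5000):
    # forward Euler in time, second-order central difference in space
    p[1:-1] += c_v * dt / dz**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])

print(f"peak residual excess pressure: {p.max():.1f} Pa")
```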
327

The effect of simulation bias on action selection in Monte Carlo Tree Search

James, Steven Doron January 2016 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science. August 2016. / Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. It combines a traditional tree-search approach with Monte Carlo simulations, using the outcome of these simulations (also known as playouts or rollouts) to evaluate states in a look-ahead tree. That MCTS does not require an evaluation function makes it particularly well-suited to the game of Go — seen by many to be chess’s successor as a grand challenge of artificial intelligence — with MCTS-based agents recently able to achieve expert-level play on 19×19 boards. Furthermore, its domain-independent nature also makes it a focus in a variety of other fields, such as Bayesian reinforcement learning and general game-playing. Despite the vast amount of research into MCTS, the dynamics of the algorithm are still not fully understood. In particular, the effect of using knowledge-heavy or biased simulations in MCTS remains unknown, with interesting results indicating that better-informed rollouts do not necessarily result in stronger agents. This research provides support for the notion that MCTS is well-suited to a class of domains possessing a smoothness property. In these domains, biased rollouts are more likely to produce strong agents. Conversely, any error due to incorrect bias is compounded in non-smooth domains, and in particular for low-variance simulations. This is demonstrated empirically in a number of single-agent domains. / LG2017
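A minimal UCT sketch (assumed code, not the dissertation's) makes the selection, expansion, simulation and backpropagation loop concrete, and shows where a biased rollout would plug in: replace rollout_uniform with any knowledge-heavy playout policy. The toy one-pile subtraction game and every constant below are assumptions for illustration.

```python
import math
import random

class Node:
    """Node of the look-ahead tree for a one-pile game: take 1 or 2 counters,
    and the player who takes the last counter wins. player is +1 or -1 (to move)."""
    def __init__(self, pile, player, parent=None, move=None):
        self.pile, self.player = pile, player
        self.parent, self.move = parent, move
        self.children = []
        self.untried = [m for m in (1, 2) if m <= pile]
        self.visits, self.value = 0, 0.0          # value is from player +1's view

def uct_child(node, c=1.4):
    # UCB1: mean value from the moving player's perspective plus an exploration bonus
    return max(node.children,
               key=lambda ch: node.player * ch.value / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout_uniform(pile, player):
    # Uniformly random playout; a "biased" rollout would prefer particular moves here.
    while pile > 0:
        pile -= random.choice([m for m in (1, 2) if m <= pile])
        player = -player
    return float(-player)     # the player who just moved took the last counter and wins

def mcts(root_pile, iterations=2000, rollout=rollout_uniform):
    root = Node(root_pile, +1)
    for _ in range(iterations):
        node = root
        while not node.untried and node.children:          # 1. selection
            node = uct_child(node)
        if node.untried:                                   # 2. expansion
            m = node.untried.pop()
            node.children.append(Node(node.pile - m, -node.player, node, m))
            node = node.children[-1]
        reward = (rollout(node.pile, node.player)          # 3. simulation
                  if node.pile > 0 else float(-node.player))
        while node is not None:                            # 4. backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(7))   # should settle on taking 1, leaving the opponent on a multiple of 3
```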
328

General multivariate approximation techniques applied to the finite element method

Hassoulas, Vasilios 26 January 2015 (has links)
No description available.
329

Phonics as an aid to independent first-grade reading

Unknown Date (has links)
"The problem in this study was an attempt to determine whether phonics serves as an aid to the teaching of first-grade reading and to weigh and examine its possibilities in the light of the consequences in the first grade. It has been a question in the minds of authorities in reading as to how much phonics should be taken up during the first year of a child's experience in school and what steps should be taken in presenting this aid. Some related questions suggested by this study are: (1) What is the status of phonics at the present time? (2) What are the views of the leading authorities in this field? (3) Should phonics be taken up during the child's first year in school? (4) How much phonics should be taught in the first year? (5) What steps should be taken up first? (6) What are the advantages to be derived from phonics? (7) What are the disadvantages resulting from phonetic training? (8) What precautions should be taken in teaching phonics? (9) Have any experiments been made; what conclusions were drawn from them? (10) Is phonics the only desirable method of word analysis? (11) Do all children need phonics and if so, do they all need the same amount? (12) Should phonics be presented during the reading period? (13) Is reading the only subject which is aided by phonetic training?"--Introduction. / Typescript. / "July, 1945." / "Submitted to the Faculty of the Florida State College for Women in partial fulfillment of the requirements for the degree of Master of Science." / Advisor: M. R. Hinson, Professor Directing Paper. / Includes bibliographical references (leaves 34-37).
330

Features of Dialogic Instruction in Upper Elementary Classrooms and their Relationships to Student Reading Comprehension

Michener, Catherine January 2014 (has links)
Thesis advisor: C. Patrick Proctor / There is widespread agreement that language skill underpins reading comprehension (e.g. Cutting & Scarborough, 2006; Dickinson, McCabe, Anastasopoulos, Peisner-Feinberg, & Poe, 2003; Snow, 1991), and empirical work over the last 20 years has shown positive effects of dialogic instruction on student literacy outcomes. This suggests the importance of engagement with others in the learning process as a scaffold for academic literacy skills (Wells, 1999). Research in this area has shown a number of important features of dialogic instruction to be positively correlated with literacy skills, but it is still not well understood how teachers guide and support students in developing language abilities for reading comprehension. Drawing on dialogic theories of language and the simple view of reading model (Hoover & Gough, 1990), and using a convergent mixed-method analysis, the study explores how features of dialogic instruction relate to students' reading comprehension outcomes, and identifies themes within the patterns and variations of these features during instruction. Multilevel modeling (Raudenbush & Bryk, 2002) and case study analysis (Merriam, 1998; Stake, 2006; Yin, 2009) are used to identify talk moves that are significant for reading comprehension and to qualify the content and function of these moves in their instructional contexts. Quantitative analyses showed that five talk moves significantly predicted reading comprehension achievement, including the rate of uptake questions, teacher explanations, and low-quality evaluations. High rates of student explanations and high-quality questions were predictive of lower reading outcomes. Case study analyses show a preponderance of teacher talk, a lack of quantity and quality in student talk, and an efferent stance (Rosenblatt, 1994) toward reading. These findings indicate a lack of dialogic practices across the grades and classrooms. However, there were opportunities for dialogic practices that support students' linguistic comprehension. Overall, this analysis showed mixed results for the importance of dialogic instructional moves, and indicates the importance of teacher talk in facilitating linguistic comprehension, as well as the promise of talk moves that incorporate student attention and participation around texts. / Thesis (PhD) — Boston College, 2014. / Submitted to: Boston College. Lynch School of Education. / Discipline: Teacher Education, Special Education, Curriculum and Instruction.
