  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Upper and lower bounds on permutation codes of distance four

Sawchuck, Natalie 30 December 2008 (has links)
A permutation array, denoted PA(n, d), is a subset of Sn such that any two distinct elements are at distance at least d, where the distance between two permutations is the number of positions in which they differ. We analyze upper and lower bounds on permutation codes with distance equal to 4. An optimization problem on Young diagrams is used to improve the upper bound for almost all n, while the lower bound is improved for small values of n by means of recursive construction methods.
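To make the distance condition concrete, the following minimal Python sketch (illustrative only, not from the thesis) checks the PA(n, d) property for a small candidate set:

```python
def hamming(p, q):
    """Hamming distance between two permutations: positions where they differ."""
    return sum(a != b for a, b in zip(p, q))

def is_permutation_array(codewords, d):
    """True if every pair of distinct codewords is at distance at least d."""
    cs = list(codewords)
    return all(hamming(cs[i], cs[j]) >= d
               for i in range(len(cs)) for j in range(i + 1, len(cs)))

# Rows of a cyclic Latin square on 4 symbols: pairwise distance 4, so a PA(4, 4).
pa = [(1, 2, 3, 4), (2, 3, 4, 1), (3, 4, 1, 2), (4, 1, 2, 3)]
```

Adding any permutation that agrees with an existing codeword in more than n - d positions breaks the property, which is what the bounds in the thesis quantify.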
72

Integrating mathematics into engineering : a case study

Mahomed, Shaheed January 2007 (has links)
Thesis (MTech (Mechanical Engineering))--Cape Peninsula University of Technology, 2007 / Twelve years into a democracy, South Africa still faces many developmental challenges. Since 2002 Universities of Technology in South Africa have introduced Foundational Programmes/provisions in their Science and Engineering programmes as a key mechanism for increasing throughput and enhancing quality. The Department of Education has been funding these foundational provisions since 2005. This Case Study evaluates an aspect of a Foundational provision in Mechanical Engineering, from the beginning of 2002 to the end of 2005, at a University of Technology, with a view to contributing to its improvement. The Cape Peninsula University of Technology (CPUT), the locus for this Case Study, is the only one of its kind in a region that serves in excess of 4.5 million people. Further, underpreparedness in Mathematics for tertiary level study is a national and international phenomenon. There is thus a social interest in the evaluation of a Mathematics course that is part of a strategy towards addressing the shortage in Engineering graduates. This Evaluation of the integration of the Foundation Mathematics course into Foundation Science, within the Department of Mechanical Engineering at CPUT, falls within the ambit of this social need. An integrated approach to curriculum conception, design and implementation is a widely accepted strategy in South Africa and internationally; this approach formed the basis of the model used for the Foundation programme that formed part of this Evaluation. A review of the literature on the underpinnings of the model provided a theoretical framework for this Evaluation Study. In essence this involved the use of academic literacy theory together with learning approach theory to provide a lens for this Case Study.
73

PermuNim : an impartial game of permutation avoidance.

Parton, Kristin 08 April 2010 (has links)
PermuNim is an impartial combinatorial game played on a board of squares in which the players take turns playing in rows and columns of the board that have not yet been played in, while avoiding zero or more forbidden permutations. The game ends when neither player can move; the first player unable to move on his or her turn loses. Combinatorial game theory and permutation pattern avoidance have each been studied extensively, and PermuNim combines the two. When (12) or (1) is the forbidden permutation, or when the forbidden permutation is close in size to the smaller of the two dimensions of the board, we can say a great deal about the value of the game. For other permutations the values of the options seem much more chaotic; even (123) is chaotic, as evidenced by our data in the appendix. We investigate the trend for even-height boards that are `wide enough' to have options with all odd values, and vice versa, but we do not believe this holds in general. If a PermuNim board is stretched by adding columns, the value of the position is sometimes affected. We find that when any permutation is avoided and t moves have been made, as long as 2m columns are available together, there is a place where any number of columns may be added to the board without affecting the value of the position. We suspect that the number of columns necessary may be much lower for some permutations.
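PermuNim's full rules are too involved for a short sketch, but the game values discussed above are computed with standard Sprague-Grundy machinery. As a generic illustration only (a simple subtraction game, not PermuNim itself), Grundy values follow from the mex recursion over a position's options:

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # allowed numbers of tokens to remove in this toy game

def mex(s):
    """Minimum excludant: the smallest non-negative integer not in s."""
    m = 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(n):
    """Grundy value of a heap of size n: mex of the options' Grundy values."""
    return mex({grundy(n - m) for m in MOVES if m <= n})
```

For this game the values cycle with period 4; for PermuNim the abstract reports that the analogous option values are far less regular.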
74

Multiple insertion/deletion correcting and detecting codes : structural analysis, constructions and applications

Paluncic, Filip 01 August 2012 (has links)
D.Ing. / This thesis is dedicated to an analysis of fundamental topics and issues related to deterministic insertion/deletion correcting and detecting codes. The most important contributions in this respect are the construction of a multiple insertion/deletion correcting code for run-length limited sequences and the construction and applications of multiple deletion (insertion) detecting codes. It is shown how run-length constraints and higher order moments can be combined to create a code which is simultaneously multiple insertion/deletion error correcting and run-length constrained. A systematic form of this code is presented, whereby any arbitrary run-length constrained sequence can be made into a multiple insertion/deletion correcting codeword by adding a prefix. This prefix is appended to the run-length constrained sequence in such a way that the resulting codeword is itself run-length constrained. Furthermore, it is shown that, despite the run-length constraints, the resulting code is guaranteed to have a better asymptotic rate than the Helberg code, the only other known non-trivial deterministic multiple insertion/deletion correcting code. We consider insertion/deletion detecting codes and present a multiple deletion (insertion) detecting code. It is shown that this code, which is systematic, is optimal in the sense that there exists no other systematic multiple deletion (insertion) detecting code with a better rate. Furthermore, we present a number of applications of such codes. In addition, further related topics of interest are considered. Firstly, jitter as a fundamental cause of insertion/deletion errors is investigated, and as a result a counterpart in the time domain to the signal-to-noise ratio in the amplitude domain is proposed.
Secondly, motivated by the correspondence of Levenshtein and Varshamov-Tenengol’ts codes, we investigate the insertion/deletion correcting capability of the single asymmetric error correcting Constantin-Rao codes within a wider framework of asymmetric error correcting and insertion/deletion correcting code structure correspondences. Finally, we propose a generalisation of Tenengol’ts’ construction for multiple non-binary insertion/deletion correction.
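The Varshamov-Tenengol'ts codes mentioned above have a simple elementary description: a binary word x of length n belongs to VT_a(n) when the weighted sum of its bits, sum of i*x_i for i = 1..n, is congruent to a modulo n+1, and Levenshtein showed that each such code corrects a single deletion. A short Python sketch (illustrative, not from the thesis) enumerates VT_0(6) and verifies that the single-deletion balls of distinct codewords never overlap, which is exactly the single-deletion correcting property:

```python
from itertools import product

def in_vt(x, a=0):
    """Varshamov-Tenengol'ts membership: sum of i*x_i == a (mod n+1)."""
    n = len(x)
    return sum(i * b for i, b in enumerate(x, start=1)) % (n + 1) == a

def deletion_ball(x):
    """All words obtained from x by deleting exactly one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

n = 6
codewords = [x for x in product((0, 1), repeat=n) if in_vt(x)]

# Single-deletion correcting <=> deletion balls of distinct codewords disjoint.
balls = [deletion_ball(x) for x in codewords]
disjoint = all(balls[i].isdisjoint(balls[j])
               for i in range(len(balls)) for j in range(i + 1, len(balls)))
```

The thesis's Tenengol'ts generalisation extends this single-deletion, binary construction to multiple non-binary insertions/deletions.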
75

Disturbance reduction in nonlinear systems via adaptive quenching

Heydon, Bryan Dwayne 10 June 2009 (has links)
An indirect adaptive quenching algorithm for a nonlinear single-degree-of-freedom system with unknown constant system parameters is presented. The system is subject to external or parametric sinusoidal disturbances and the resulting control signal is also sinusoidal. The quenching algorithm provides a reduction in the control effort required compared to direct disturbance cancellation. The disturbance sinusoid and the unknown parameters are incorporated into the system model and an extended Kalman filter (EKF) with modified update equations is used to estimate the system state and parameters. The estimates are then used to form the quenching signal. The adaptive quenching algorithm is found to work well inside a quenching region defined by the separatrices and suggests the use of a hybrid control law. The algorithm was verified by implementing it on an analog computer. / Master of Science
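The thesis applies an extended Kalman filter to a nonlinear model augmented with the unknown parameters. As a hedged illustration of only the recursive estimation step (a scalar linear special case with invented parameter values, not the thesis's EKF), the sketch below estimates an unknown disturbance amplitude from noisy sinusoidal measurements:

```python
import math
import random

random.seed(0)

# Measurements y_k = A*sin(w*t_k) + noise; estimate the constant amplitude A.
A_true, w, dt, R = 2.0, 1.0, 0.05, 0.1   # R: measurement noise variance
a_hat, P = 0.0, 10.0                      # initial estimate and its covariance

for k in range(1, 2001):
    t = k * dt
    H = math.sin(w * t)                   # measurement sensitivity dy/dA
    y = A_true * H + random.gauss(0.0, math.sqrt(R))
    S = H * P * H + R                     # innovation variance
    K = P * H / S                         # Kalman gain
    a_hat += K * (y - H * a_hat)          # estimate update
    P -= K * H * P                        # covariance update
```

In the thesis the same update structure runs on the augmented nonlinear state, and the converged estimates drive the quenching signal.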
76

Criticality concepts for paired domination in graphs

Edwards, Michelle 29 January 2010 (has links)
No description available.
77

A solution adaptive grid (SAG) for incompressible flows simulations : an attempt towards enhancing SAG algorithm for large eddy simulation (LES)

Kaennakham, S. January 2010 (has links)
A study of the use of the solution adaptive grid (SAG) method for simulations of incompressible flows is carried out in this work. Both laminar and turbulent flows are considered. The investigation of laminar flow simulation starts with mesh adaptation criteria based on strong changes in selected flow parameters: pressure and the velocity components. Three of the most common laminar flows are studied: flow in a circular pipe, flow in a channel with a sudden expansion, and flow in a cavity with a moving lid. It is found that with the use of SAG a reduction in both computational grid nodes and CPU time can be obtained compared to a fixed grid, while satisfactory solutions remain achievable. Nevertheless, the procedure for setting up the refinement criteria proves inconvenient and requires several judgements that have to be made 'ad hoc', which makes the refinement criteria dubious for real engineering applications. For the study of turbulent flows with large eddy simulation (LES) and implicit filtering, an examination of the literature reveals that the lack of a connection between the filter width and a physical scale has left LES somewhat unclosed in a physical sense. In addition, it is known that numerical and modelling errors are always combined, and it is difficult to study each of them separately, making the total error magnitude difficult to control. Since both error types are characterised by the grid size, LES users very often find cases where a finer mesh no longer provides better accuracy. An attempt to address this 'physical' closure property of LES, and the complications of implementing it in FLUENT, begins with the construction of a new refinement variable as a function of the Taylor scale, from which a new SAG algorithm is formed. The requirement to satisfy a condition of the selected subgrid-scale (SGS) model, the Smagorinsky model, is taken into consideration to minimize the modelling error. 
The construction of the new refinement algorithm is also intended to be a key to studying the interaction between the two types of error, and could lead to a means of controlling their total magnitude. The algorithm is validated for effectiveness, efficiency and reliability against several criteria corresponding to suitability for practical applications: simplicity of setup, computational affordability, and the level of accuracy achieved. For this, two turbulent flows representing commonly found turbulent phenomena are chosen: the plane free jet and the flow over a circular cylinder. The simulations of both cases were carried out in two dimensions. Two key factors are found to strongly determine the success of the algorithm. The first is the Taylor scale definition, for which literature is available only for the turbulent plane jet, where a good level of accuracy is expected; unfortunately this is not true for the flow over a circular cylinder, indicating a need for further analytical work. The second difficulty results from limited access to the software code, which makes it impossible to implement the proposed scheme directly, so the algorithm formulation needs to be modified with careful judgement. Nevertheless, the overall results are in reasonably good agreement with the corresponding experimental data.
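The thesis builds its refinement variable from the Taylor scale; as a much simpler hedged illustration of the general solution-adaptive idea (marking and splitting cells where the solution changes strongly, in one dimension, with invented data), a single refinement pass might look like:

```python
def refine(xs, us, frac=0.5):
    """One pass of gradient-based 1-D mesh refinement: split any interval
    whose solution jump exceeds `frac` times the largest jump."""
    jumps = [abs(u2 - u1) for u1, u2 in zip(us, us[1:])]
    cut = frac * max(jumps)
    new_xs = [xs[0]]
    for i, j in enumerate(jumps):
        if j > cut:
            new_xs.append(0.5 * (xs[i] + xs[i + 1]))  # insert a midpoint node
        new_xs.append(xs[i + 1])
    return new_xs

# A near-step profile gains extra nodes only around the steep front.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
us = [0.0, 0.0, 1.0, 1.0, 1.0]
refined = refine(xs, us)
```

Real SAG implementations apply the same mark-and-split idea per cell in 2-D/3-D, with the refinement variable (here a plain solution jump, in the thesis a Taylor-scale-based quantity) chosen to target the error source of interest.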
78

Enabling success in mathematics and statistics : the effects of self-confidence and other factors in a University College

Parsons, Sarah January 2014 (has links)
This thesis reports empirical and theoretical research into the learning of mathematics and statistics at university level, with particular regard to students' views of their self-confidence and experiences, and the effects of these on achievement. The study was conducted at a time of widespread national concern about difficulties in mathematics education in England, particularly at the transition from school to university Science, Technology, Engineering and Mathematics (STEM) courses. Factors affecting non-specialist students' learning of mathematics and statistics were investigated using student surveys in 2004/5, 2005/6 and 2006/7 (701 questionnaires) in the atypical setting of a University College specialising in rural and land-based higher education. Fifty-two student interviews were also carried out, primarily in 2008 and 2009; these are referred to but are not the main focus of this thesis. Both deductive and inductive approaches were used. Self-confidence was defined using three Mathematics Self-confidence Domains: Overall Confidence in Mathematics, Topic Confidences for specific tasks, and Applications Confidence. Self-confidence was considered a belief, whilst liking of the subjects was an attitude, both forming part of 'affect', where affect comprises beliefs, attitudes and emotions. Student motivation was also investigated. The survey data, and the examination and assignment marks, of engineering students learning mathematics and other non-specialist students learning statistics were analysed both quantitatively (by descriptive statistics, ANOVA, Kruskal-Wallis, correlation, multiple regression, and factor and cluster analyses) and qualitatively. Previous success in mathematics, primarily GCSE Mathematics grade, was found to be the greatest determinant of university students' success in mathematics and statistics, but self-confidence and other affective variables also had significantly measurable effects. 
Significant effects on student confidence were also found for gender and for dyslexia, despite good achievement. The findings indicate that students' self-confidence in mathematics does matter, as evidenced by significant relationships between confidence and achievement, but it was also concluded that these inter-relations are complex. Educators are encouraged to adopt student-focussed teaching styles that improve students' self-confidence as a means to improving attainment.
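As a hedged illustration of the confidence-achievement relationships analysed above (the data here are invented, not drawn from the thesis's surveys), a Pearson correlation can be computed directly:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical self-confidence scores (1-7) vs. exam marks (%):
confidence = [2, 3, 3, 4, 5, 5, 6, 7]
marks = [40, 45, 52, 55, 60, 58, 70, 72]
r = pearson(confidence, marks)
```

A coefficient near 1 indicates a strong positive linear association; the thesis's point is that even with such associations present, prior attainment (GCSE grade) remains the dominant predictor.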
79

Multiscale transport of mass, momentum and energy

Xu, Mingtian., 許明田. January 2002 (has links)
published_or_final_version / Mechanical Engineering / Doctoral / Doctor of Philosophy
80

Green element solutions for inverse groundwater contaminant problems

Onyari, Ednah Kwamboka January 2016 (has links)
In this work two inverse methodologies are developed based on the Green element method for the recovery of contaminant release histories and reconstruction of the historical concentration plume distribution in groundwater. Unlike direct groundwater contaminant transport simulations which generally produce stable and well-behaved solutions, the solutions of inverse groundwater contaminant transport problems may exhibit non-uniqueness, non-existence and instability, with escalation in computational challenges due to paucity of data. Methods that can tackle inverse problems are of major interest to researchers, and this is the goal of this work. Basically, the advection dispersion equation which governs the transport of contaminants can be handled by analytical or numerical methods like the Finite element method, the Finite difference method, the Boundary element method and their many variants and hybrids. However, if a numerical method is used to solve an inverse problem the resulting matrix is ill-conditioned requiring special techniques to be employed in order to obtain meaningful solutions. In view of this we explore the Green element method, which is a hybrid technique, based on the boundary element theory but is implemented in an element by element manner. This method is attractive to inverse modelling because of the fewer degrees of freedom that are generated at each node. We develop two approaches, in the first approach inverse Green element formulations are developed, the ill-conditioned matrix that results is decomposed with the aid of the singular value decomposition method and solved using the Tikhonov regularized least square method. The second approach utilizes the direct Green element method and the Shuffled complex evolutionary (SCE) optimization method. Finally, the proposed approaches are implemented to solve typical problems in contaminant transport with analytical solutions besides those that have appeared in various research papers. 
An investigation of the capability of these approaches for the simultaneous recovery of the source strength and the contaminant concentration distribution is carried out for three types of sources: boundary sources, instantaneous point sources and continuous point sources. The assessment accounts for different transport modes, time discretization, spatial discretization, location of observation points, and the quality of observation data. The numerical results demonstrate the applicability and limitations of the proposed methodologies. It is found in most cases that the solutions with inverse GEM and the least squares approach are of comparable accuracy to those with direct GEM and the SCE approach; however, the latter approach is found to be computationally intensive.
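The inverse formulation above stabilises an ill-conditioned system with Tikhonov regularisation. As a hedged sketch (the thesis works through the singular value decomposition; this toy example uses the mathematically equivalent normal-equations form on a tiny well-scaled system, with invented data), the regularisation parameter damps the solution norm:

```python
def solve(M, v):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def tikhonov(A, b, lam):
    """min ||Ax - b||^2 + lam^2 ||x||^2 via (A^T A + lam^2 I) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m))
            + (lam ** 2 if i == j else 0.0) for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve(AtA, Atb)

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x_exact = tikhonov(A, b, 0.0)    # consistent system: recovers [1, 2]
x_damped = tikhonov(A, b, 1.0)   # regularisation shrinks the solution
```

For genuinely ill-conditioned inverse transport problems the SVD route used in the thesis is preferred, since it exposes the small singular values that the regularisation must suppress.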
