1

Properties of jets and wakes

Crane, Lawrence John January 1959 (has links)
This thesis is a study of the effect of differences in the density of a fluid on the mixing regions of jets, which may be laminar or turbulent. These differences in density are present for three main reasons: when the speed of the fluid is of the same order of magnitude as the local speed of sound; when there are large temperature differences in the fluid; and when the fluid consists of a mixture of components the relative proportions of which vary from point to point. Three problems are considered: the flow far from the orifice of a plane jet, the flow far from the orifice of a round jet, and the mixing region on the surface of the core of a plane jet near the orifice. This last problem is idealised as the mixing of two semi-infinite streams. For flows of jet type, the assumption of a coefficient of eddy kinematic viscosity in turbulent flow makes it possible to combine the equations for laminar and turbulent motion into one. The method used is to expand the stream function in a Rayleigh-Jansen series. The first term of this series corresponds to the stream function when the fluid is of constant density. The series is developed in powers of a small parameter whose magnitude depends on the density differences in the fluid. Only the second term of this series is found explicitly; this term gives the first order effect that changes in density have on the flow. The solutions of all examples considered are, with one exception, given in analytical form. The last appendix to the thesis shows the connection between Stewartson's (1957) approach to the problem of finding uniformly valid approximate solutions to the boundary layer equations and Lighthill's (1948) method. This connection is shown by working out one of the problems considered by Stewartson, namely the wake past a flat plate, using Lighthill's method.
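The series structure described above can be written compactly. The display below is a sketch only: the symbols ψ for the stream function and ε for the small density-difference parameter are chosen here for illustration and are not taken from the thesis.

```latex
% Perturbation expansion of the stream function (notation illustrative):
% \psi_0 is the constant-density solution; \varepsilon measures the density
% differences; only \psi_1, the first-order density correction, is found explicitly.
\psi = \psi_0 + \varepsilon\,\psi_1 + O(\varepsilon^2)
```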
2

Multipath selection for resilient network routing

Kazmi, Nayyar A. January 2013 (has links)
In this dissertation we study the routing problem for multi-commodity survivable network flows with splittable demands, and propose end-to-end path-based solutions in which maximum link utilization is minimized, in order to improve resilience in existing telecommunication networks. We develop mixed integer programming models and demonstrate that, when the selection of disjoint paths is part of the optimization problem (rather than k-shortest paths being pre-selected, as in earlier works), maximum link utilization is reduced and load is also balanced across the overall network. We find that three paths are usually enough to reap the benefits of a multipath approach. A reduction in maximum link utilization also provides a margin by which demand values can grow without causing congestion. We also prove that the disjoint multipath selection problem is NP-complete, even for the case of one node-pair. This warrants a recourse to efficient solution methods within ILP (such as decomposition), and to matheuristics. Our literature survey of applications of heuristic techniques, and of those combining heuristics with exact methods, shows a research gap, which we attempt to bridge through a novel heuristic algorithm. The heuristic works well and, in several cases, yields better solutions than ILP (within a given time limit), or provides solutions for problems where ILP could not find even one valid solution in the given time limit. We also study this problem within a decomposition framework, namely column generation. The pricing sub-problem is a mixed non-linear programme, for which we propose an ILP formulation. We find lower bounds for missing dual values and use them as surrogates. We then show that the lower bounds are valid and present examples where the proposed pricing is applied to path generation for self-protecting multipath routing.
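As a concrete illustration of the objective described above, the following Python sketch (using the PuLP modelling library) minimises maximum link utilisation for splittable demands. It is a simplification: the candidate paths are fixed in advance, whereas in the dissertation the selection of disjoint paths is itself part of the optimisation; all data and names are illustrative.

```python
# Toy LP: split each demand across fixed candidate paths so that the
# maximum link utilisation u is minimised (illustrative, not the thesis model).
from pulp import LpMinimize, LpProblem, LpVariable, lpSum

capacity = {"e1": 10, "e2": 10, "e3": 10}        # link -> capacity (toy data)
demands = {"d1": (8, [["e1"], ["e2", "e3"]])}    # demand -> (volume, candidate paths)

prob = LpProblem("min_max_link_utilisation", LpMinimize)
u = LpVariable("max_utilisation", lowBound=0)
x = {(d, i): LpVariable(f"x_{d}_{i}", lowBound=0)    # flow on path i of demand d
     for d, (_, paths) in demands.items() for i in range(len(paths))}

prob += u                                            # objective: minimise u
for d, (vol, paths) in demands.items():
    prob += lpSum(x[d, i] for i in range(len(paths))) == vol   # route the full demand
for e, cap in capacity.items():                      # utilisation bound on every link
    prob += lpSum(x[d, i] for d, (_, paths) in demands.items()
                  for i, p in enumerate(paths) if e in p) <= u * cap
prob.solve()
print(u.value())   # optimal maximum utilisation for the toy instance
```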
3

Heuristic approaches for real world examination timetabling problems

Mohmad Kahar, Mohd Nizam January 2013 (has links)
The examination timetabling (exam-timeslot-room assignment) problem involves assigning exams to a specific or limited number of timeslots and rooms, with the aim of satisfying the hard constraints and as many of the soft constraints as possible. Most of the techniques reported in the literature have been applied to simplified examination benchmark datasets available within the scientific literature. In this research we bridge the gap between research and practice by investigating a real world capacitated examination timetabling problem taken from the Universiti Malaysia Pahang (UMP). This dataset has several novel constraints in addition to those commonly used in the literature. Additionally, the invigilator scheduling problem (invigilator assignment) was also investigated, as it has not received the same level of research attention as examination scheduling (although it is just as important to educational institutions). Formal models are defined, and constructive heuristics were developed for both problems; the overall problem is solved with a two-phase approach which first schedules the exams to timeslots and rooms, and then schedules the invigilators. During invigilator assignment, we assume that an examination timetable is already in place (i.e. previously generated). It emerges that the invigilator scheduling solution depends on the number of rooms selected in the exam-timeslot-room assignment phase (i.e. using fewer rooms minimises the invigilation duties for staff), which encouraged us to further improve the exam-timeslot-room timetable solution. The solution was improved using a modified extended great deluge algorithm (modified-GDA) and a multi-neighbourhood GDA approach (which uses more than one neighbourhood during the search). The modified-GDA uses a simple-to-understand parameter and allows the boundary that acts as the acceptance level to change dynamically during the search. The proposed approaches are able to produce good quality solutions when compared to the solutions from the proprietary software used by UMP. In addition, our solutions adhere to all hard constraints, which the current system fails to do. Finally, we extend our research to the Second International Timetabling Competition (ITC2007) dataset, as it contains numerous constraints similar to those in the UMP dataset. Our proposed approach is able to produce competitive solutions when compared to those reported by other works in the literature.
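For orientation, the basic great deluge acceptance rule is sketched below in Python; the thesis's modified-GDA differs in how the acceptance boundary changes dynamically during the search, which this generic version does not reproduce.

```python
def great_deluge(initial, neighbour, cost, iters=100_000, decay=0.9999):
    """Generic great deluge (textbook form, not the thesis's modified-GDA):
    accept a candidate iff its cost is not above a slowly falling 'water level'."""
    current = best = initial
    level = cost(initial)                 # boundary starts at the initial cost
    for _ in range(iters):
        candidate = neighbour(current)    # e.g. move one exam to another slot/room
        if cost(candidate) <= level:      # boundary acceptance, not pure descent
            current = candidate
            if cost(current) < cost(best):
                best = current
        level *= decay                    # the water level falls each iteration
    return best
```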
4

Novel heuristic and metaheuristic approaches to the automated scheduling of healthcare personnel

Curtois, Timothy January 2008 (has links)
This thesis is concerned with automated personnel scheduling in healthcare organisations; in particular, nurse rostering. Over the past forty years the nurse rostering problem has received a large amount of research attention. This can be mostly attributed to its practical applications and the scientific challenges of solving such a complex problem. The benefits of automating the rostering process include reducing the planner’s workload and associated costs and being able to create higher quality and more flexible schedules. This has become more important recently in order to retain nurses and attract more people into the profession. Better quality rosters also reduce fatigue and stress due to overwork and poor scheduling, and help to maximise the use of leisure time by satisfying more requests. A more contented workforce will lead to higher productivity, increased quality of patient service and a better level of healthcare. Basically stated, the nurse rostering problem requires the assignment of shifts to personnel to ensure that sufficient employees are present to perform the duties required. There are usually a number of constraints, such as working regulations and legal requirements, and a number of objectives, such as maximising the nurses’ working preferences. When formulated mathematically this problem can be shown to belong to a class of problems which are considered intractable. The work presented in this thesis expands upon the research that has already been conducted to try to provide higher quality solutions to these challenging problems in shorter computation times. The thesis is broadly structured into three sections: 1) an investigation into a nurse rostering problem provided by an industrial collaborator; 2) a framework to aid research in nurse rostering; and 3) the development of a number of advanced algorithms for solving highly complex, real world problems.
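The core feasibility requirement stated above, that sufficient employees are present to perform the duties required, amounts to a coverage constraint. The Python sketch below checks it for hypothetical data shapes invented here for illustration; it is not code or notation from the thesis.

```python
from collections import Counter

def coverage_ok(roster, required):
    """Coverage check with illustrative data shapes (not from the thesis):
    roster maps nurse -> list of (day, shift) assignments; required maps
    (day, shift) -> minimum number of nurses that must be on duty."""
    staffed = Counter(slot for shifts in roster.values() for slot in shifts)
    return all(staffed[slot] >= need for slot, need in required.items())
```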
5

Worst-case bounds for bin-packing heuristics with applications to the duality gap of the one-dimensional cutting stock problem

Tuenter, Hans J. H. January 1997 (has links)
The thesis considers the one-dimensional cutting stock problem, the bin-packing problem, and their relationship. The duality gap of the former is investigated, and a characterisation of a class of cutting stock problems with the next round-up property is given. It is shown that worst-case bounds for bin-packing heuristics can be, and are best, expressed in terms of the linear programming relaxation of the corresponding cutting stock problem. The concept of recurrency is introduced for a bin-packing heuristic, which allows a more natural derivation of a measure for the worst-case behaviour. The ideas are tested on some well known bin-packing heuristics and (slightly) tighter bounds for these are derived. These new bounds (in terms of the linear programming relaxation) are then used to make inferences about the duality gap of the cutting stock problem. In particular, these bounds allow a priori, problem-specific bounds. The thesis ends with conclusions and a number of suggestions for extending the analysis to higher dimensional problems.
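As an example of the kind of bin-packing heuristic whose worst-case behaviour is bounded above, here is the classic first-fit-decreasing rule in Python (a standard textbook heuristic, not code from the thesis):

```python
def first_fit_decreasing(items, cap):
    """Place each item, largest first, into the first bin with enough room."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= cap:   # first existing bin that fits
                b.append(size)
                break
        else:
            bins.append([size])        # no bin fits: open a new one
    return bins

print(first_fit_decreasing([5, 4, 3, 3, 2, 2, 1], cap=10))  # packs into 2 bins
```

The LP relaxation of the corresponding cutting stock problem supplies the lower bound against which the performance of such heuristics is compared.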
6

Theoretical foundations of operational research

Bryer, R. A. January 1977 (has links)
The conclusions of both Parts One and Two complement and reinforce each other. After outlining the ideals of OR, I set out in Part One to find and scrutinize the philosophical foundations upon which some leading operations researchers have claimed that these ideals could be implemented. In chapters 2, 3, and 4 I argue that adopting (respectively) the positivist, conventionalist and/or idealist philosophies as the theoretical foundations upon which to build an adequate theory of inquiry for the purposes of OR would force it to abandon its ideals. These philosophies are interpreted as attempts on the part of academic operational researchers to stave off the open-ended ambiguity and anarchy of inquiry which an unqualified interpretation of OR's ideals could engender. These attempts to give substance to the ideals of OR all exert a strong bias against raising questions about the nature of the subject-matter with which OR deals, and it is largely on these grounds that they are rejected in chapter 5, because of the implications which this has for the ideals of OR. One conclusion of Part One is that OR needs protection from such philosophies, and that a realist-type alternative at least provides this. I conclude by raising the doubt whether philosophy can provide much more to OR. The other major conclusion is that OR needs to understand its subject-matter before it can reasonably hope to implement its ideals. Given the general bias which we find in Part One against seriously considering the subject-matter of OR, we enter Part Two with some trepidation. Notwithstanding the philosophical bias against it, it is clear that OR must have a conception of the nature of its subject-matter. However, OR's ideals can just as easily be lost by inadequate attention to this task. In Part Two the biases discovered in Part One come home to roost. The first attempt to provide the ideals of OR with a substance on the basis of which they can be implemented in an objective way turns out to be just that, i.e., metaphysical 'substance' in the guise of a theory of management. We see in chapter 6 that, to the extent to which this theory moves beyond merely asserting that management would 'take care' of OR's need for an objective basis, it presupposes a social theory which would show how social systems by their nature (if properly constructed) embody this objectivity. This move is foreshadowed in chapter 3, where we see Kuhn (who is taken as an exemplar of conventionalist philosophy) finally resorting to this device to prop up his conventionalism against the growing weight of subjectivity under which it threatened to sag into the jaws of positivism. The social theory on which such claims rest is given detailed consideration in chapter 7. There I give serious consideration to the possibility that OR's social theory, if it has one at all, will be developed in reaction to what it sees as the "problem of order", because this problem can be seen as but another way of stating OR's ideals in a specifically social way. Stating the ideals in this way orients them directly to at least one aspect of the question of the nature of OR's subject-matter. We see that, by employing Durkheim's account of and solution to the social problem of order as a basis for comparison with OR (first as a homomorphism and later as an isomorphism), we are able to gain quite a firm grip on OR's social theory (and, hence, on its grasp of its subject-matter).
We see that this theory, although it provides a justification for OR's theory of management (especially in its modern form), is itself inadequate. The basis of the inadequacy, most fundamentally, is that the theory in question presupposes the very thing that should be in question, namely, the nature of the social collective. I conclude with a specific illustration of the impact of this theory on the ideals of OR by analysing the inadequate treatment of power and conflict which it allows.
7

Analysis of human underwater undulatory swimming using musculoskeletal modelling

Phillips, Christopher W. G. January 2013 (has links)
Elite swimming is a highly competitive sport. At this professional level, the difference between a podium finish and missing out is measured in fractions of a second. While improvements in specific performance metrics may deliver only a marginal improvement, it is through the accumulation of marginal gains that winning margins are created. Quantifying performance in elite sport is therefore fundamental to identifying and implementing improvements. The trade-offs between energy expenditure, thrust generated and attained velocity are identified as key aspects of performance. A review of previous swimming research identified a lack of suitable methods for simultaneously quantifying the energy expenditure, thrust and velocity of a particular swimming technique. The aim of this thesis is to analyse the performance of human underwater undulatory swimming (UUS), a significant proportion of a race in multiple events. This encompasses experimentally gathered data and computational musculoskeletal modelling in the analysis and evaluation of UUS technique. This thesis has developed a novel, fully functional musculoskeletal model with which detailed analysis of human UUS can be performed. The experimental and processing methods for two methods of acquiring the athlete's kinematics have also been developed. A model based upon fish locomotion is coupled with the musculoskeletal model to provide the fluid loadings for the simulation. Detailed analysis of two techniques of an elite athlete has demonstrated this process in a case study. Energy expended by the simulated muscles is estimated. Combined with the measured velocity and predicted thrust, the propulsive efficiency for each technique is determined.
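The final step above combines three measured or predicted quantities. One common definition of propulsive efficiency consistent with them is sketched below, where T is the predicted thrust, U the measured velocity and Ė the estimated rate of muscle energy expenditure; the thesis's exact definition may differ.

```latex
% Propulsive efficiency as thrust power over muscle power (illustrative form):
\eta_p = \frac{T\,U}{\dot{E}_{\mathrm{muscle}}}
```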
8

Use of spatial models and the MCMC method for investigating the relationship between road traffic pollution and asthma amongst children

Zhang, Yong January 2000 (has links)
This thesis uses two datasets, the NCDS (National Child Development Study) and Bartholomew's digital road map, to investigate the relationship between road traffic pollution and asthma amongst children. A pollution exposure model is developed to provide an indicator of road traffic pollution. Also, a spatially driven logistic regression model of the risk of asthma occurrence is developed. The relationship between asthma and pollution is tested using this model, and the power of the test has been studied. Because the exact spatial location of a subject is uncertain given only a post-code, we have considered an error-in-variables model, otherwise known as a measurement error model. A general foundation is presented and inference is attempted in three approaches. Compared with models without measurement error, no improvement in log-likelihood is obtained, and we suggest the measurement error can be omitted. We also take a Bayesian approach to analyse the relationship. A discretized MCMC (Markov chain Monte Carlo) method is developed so that it can be used to estimate parameters and to perform inference on a very complex posterior density function. It extends the simulated tempering method to a 'multi-dimensional temperature' situation. We use this method to implement MCMC on our models; the improvement in speed is remarkable. A significant effect of road traffic pollution on asthma is not found, but the methodology (spatially driven logistic regression and discretized MCMC) can be applied to other data.
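For orientation, a plain random-walk Metropolis sampler is sketched below in Python; the discretized MCMC of the thesis extends simulated tempering to a multi-dimensional temperature, which this generic version omits.

```python
import numpy as np

def metropolis(log_post, theta0, steps=10_000, scale=0.1, seed=0):
    """Random-walk Metropolis (illustrative only, not the thesis's sampler)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(steps):
        proposal = theta + scale * rng.standard_normal(theta.shape)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance test
            theta, lp = proposal, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# e.g. sample a 2-D standard normal posterior:
draws = metropolis(lambda t: -0.5 * float(t @ t), [0.0, 0.0])
```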
9

Stochastic network calculus with martingales

Poloczek, Felix January 2016 (has links)
The practicality of the stochastic network calculus (SNC) is often questioned on grounds of the looseness of its performance bounds. The reason for its inaccuracy lies in the usage of too-elementary tools from probability theory, such as Boole's inequality, which is unable to account for correlations and is thus inappropriate for properly modelling arrival flows. In this thesis, we propose an extension of stochastic network calculus that characterizes its main objects, namely arrival and service processes, in terms of martingales. This characterization allows us to overcome the shortcomings of the classical SNC by leveraging Doob's inequality to provide more accurate performance bounds. Additionally, the emerging stochastic network calculus with martingales is quite versatile in the sense that queueing-related operations like multiplexing and scheduling translate directly into operations on the corresponding martingales. Concretely, the framework is applied to analyze the per-flow delay of various scheduling policies, the performance of random access protocols, and queueing scenarios with a random number of parallel flows. Moreover, we show our methodology is not only relevant within SNC but can also be useful in related queueing systems. For example, in the context of multi-server systems, we provide a martingale-based analysis of fork-join queueing systems and systems with replications. Throughout, numerical comparisons against simulations show that the martingale bounds obtained with Doob's inequality are not only remarkably accurate, but also improve the standard SNC bounds by several orders of magnitude.
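The gain over Boole's inequality comes from bounding a running maximum in a single step. For a non-negative supermartingale (M_n), Doob's maximal inequality gives

```latex
% Doob's maximal inequality for a non-negative supermartingale (M_n), a > 0:
\mathbb{P}\Big(\sup_{n \ge 0} M_n \ge a\Big) \;\le\; \frac{\mathbb{E}[M_0]}{a}
```

whereas the union (Boole) bound sums the tail probabilities P(M_n ≥ a) over n and so ignores the correlations between them.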
10

Development and application of low Reynolds number turbulence models for air-cooled electronics

Dhinsa, Kulvir Kaur January 2006 (has links)
Semiconductors are at the heart of electronic devices such as computers, mobile phones, avionics systems and telecommunication racks. Power dissipation from semiconductor devices continues to increase due to the growth in the number of transistors on the silicon chip predicted by Moore's Law. Thermal management techniques, used to dissipate this power, are becoming more and more challenging to design. Air cooling of electronic components is the preferred method for many designs, where the air flow is characterised as being in the laminar-to-turbulent transitional region. Over the last fifteen years there has been a dramatic take-up of Computational Fluid Dynamics (CFD) technology in the electronics industry to simulate the airflow and temperatures in electronic systems. These codes solve the Reynolds Averaged Navier-Stokes (RANS) equations for momentum and turbulence. RANS models are popular as they are much quicker to solve than time-dependent models such as Large Eddy Simulation (LES) or Direct Numerical Simulation (DNS). At present the majority of thermal design engineers use the standard k-ε model, which is a high Reynolds number model, because there is limited knowledge of the benefit of using low Reynolds number models in the electronics cooling industry. This Ph.D. research investigated and developed low Reynolds number models for use in electronics cooling CFD calculations. Nine turbulence models were implemented and validated in the in-house CFD code PHYSICA: three zero-equation, two single-equation, and four zonal models. All of these models are described in the public literature except the following two, which were developed in this study. AUTO_CAP: this zero-equation model automates the existing LVEL_CAP model available within the commercial CFD code FLOTHERM. kε/kl: this zonal model uses a new approach to blend the k-l model used at the wall with the k-ε model used to predict the bulk airflow. Validation of these turbulence models was undertaken on eight different test cases, including the detailed experimental work undertaken by Meinders. Results show that the kε/kl model provides the most accurate flow predictions. For the prediction of temperature there was no clear favourite, probably owing to the use of the universal log-law function in this study; a generalised wall function may be more appropriate. Results from this research have been disseminated through a total of nine peer-reviewed conference and journal publications, evidence of the interest that the topic of this investigation generates amongst electronic packaging engineers.
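For reference, the standard eddy-viscosity relations of the two constituent models blended by the kε/kl approach are shown below (C_μ is an empirical constant and l a prescribed near-wall length scale); the blending function itself is the thesis's contribution and is not reproduced here.

```latex
% Standard eddy-viscosity forms of the two constituent models:
\mu_t^{\,k\text{-}\varepsilon} = \rho\, C_\mu \,\frac{k^2}{\varepsilon}
\quad\text{(bulk flow)},
\qquad
\mu_t^{\,k\text{-}l} = \rho\, C_\mu\, k^{1/2}\, l
\quad\text{(near wall)}
```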
