661 |
Satisficing solutions for multiobjective stochastic linear programming problems / Adeyefa, Segun Adeyemi 06 1900 (has links)
Multiobjective Stochastic Linear Programming is a relevant topic: many real-life problems, ranging from portfolio selection to water resource management, may be cast into this framework.
The simultaneous presence of randomness and conflicting goals severely limits objectivity in this field. In such a turbulent environment, the mainstay of rational choice does not hold, and it is virtually impossible to provide a truly scientific foundation for an optimal decision.
In this thesis, we resort to the bounded rationality and chance-constrained principles to
define satisficing solutions for Multiobjective Stochastic Linear Programming problems.
These solutions are then characterized for the cases of normal, exponential, chi-squared
and gamma distributions.
Ways of singling out such solutions are discussed, and numerical examples are provided for illustration.
An extension to the case of fuzzy random coefficients is also carried out. / Decision Sciences
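As a brief illustration of the chance-constrained principle invoked above (a standard textbook reduction, not the thesis's own formulation), the normal case admits a well-known deterministic equivalent:

```latex
% Chance constraint at satisficing level \alpha, with random row a \sim N(\bar{a}, V):
%   P\{ a^{\top} x \le b \} \ge \alpha
% For \alpha \ge 1/2 this is equivalent to the deterministic convex constraint
\bar{a}^{\top} x + \Phi^{-1}(\alpha)\,\sqrt{x^{\top} V x} \;\le\; b
% where \Phi^{-1} is the standard normal quantile function; analogous
% equivalents can be derived for the exponential, chi-squared and gamma cases.
```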
|
662 |
Grade 12 learners' problem-solving skills in probability / Awuah, Francis Kwadwo 06 1900 (has links)
This study investigated the problem-solving skills of Grade 12 learners in probability. A total of 490 Grade 12 learners from seven schools, categorised under four quintiles (a socioeconomic ranking), were purposively selected for the study. A mixed-methods research methodology was employed. Bloom's taxonomy and the aspects of probability enshrined in the Mathematics Curriculum and Assessment Policy Statement (CAPS) document of 2011 were used as the framework of analysis. A cognitive test developed by the researcher was used to collect data from learners; the instrument passed tests of validity and reliability. Quantitative data were analysed using descriptive and inferential statistics, and qualitative data were analysed through a content analysis of learners' scripts. The study found that the learners were more proficient in using Venn diagrams as an aid in solving probability problems than in using tree diagrams or contingency tables. Results also showed that, with the exception of the synthesis level of Bloom's taxonomy, learners in Quintile 4 (fee-paying schools) had statistically significantly (p < 0.05) higher achievement scores than learners in Quintiles 1 to 3 (non-fee-paying schools) at the knowledge, comprehension, application, analysis and evaluation levels.
Contrary to expectations, learners' achievement in probability decreased from Quintile 1 to Quintile 3 at all but the synthesis level of Bloom's taxonomy. Based on these findings, the study argued that the quintile ranking of schools in South Africa may be a useful, but not perfect, means of categorisation for improving learner achievement. Furthermore, learners in the study demonstrated three main error types: computational, procedural and structural errors. Based on the findings, it was recommended that regular content-specific professional development be given to all teachers, especially on newly introduced topics, to enhance effective teaching and learning. / Mathematics Education / Ph. D. (Mathematics, Science and Technology Education)
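For context, the simplest items on such a test ask learners to read region counts off a two-set Venn diagram and apply inclusion-exclusion; a minimal sketch with hypothetical numbers (not an item from the study's instrument):

```python
# Region counts of a two-set Venn diagram for a hypothetical class of 40 learners.
only_a, only_b, both, neither = 12, 9, 6, 13
total = only_a + only_b + both + neither

p_a = (only_a + both) / total   # P(A)
p_b = (only_b + both) / total   # P(B)
p_both = both / total           # P(A and B)
# Inclusion-exclusion -- the identity learners apply when reading a Venn diagram:
p_a_or_b = p_a + p_b - p_both
print(p_a_or_b)                 # 0.675, i.e. 27/40
```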
|
663 |
Machine learning via dynamical processes on complex networks / Cupertino, Thiago Henrique 20 December 2013 (has links)
Extracting useful knowledge from data sets is a key concept in modern information systems. Consequently, the need for efficient techniques to extract the desired knowledge has been growing over time. Machine learning is a research field dedicated to the development of techniques capable of enabling a machine to "learn" from data. Many techniques have been proposed so far, but there are still issues to be unveiled, especially in interdisciplinary research. In this thesis, we explore the advantages of network data representation to develop machine learning techniques based on dynamical processes on networks. The network representation unifies the structure, dynamics and functions of the system it represents, and thus is capable of capturing the spatial, topological and functional relations of the data sets under analysis. We develop network-based techniques for the three machine learning paradigms: supervised, semi-supervised and unsupervised. The random walk dynamical process is used to characterize the access of unlabeled data to data classes, configuring a new heuristic we call ease of access in the supervised paradigm. We also propose a classification technique which combines the high-level view of the data, via network topological characterization, and the low-level relations, via similarity measures, in a general framework. Still in the supervised setting, the modularity and Katz centrality network measures are applied to classify multiple observation sets, and an evolving network construction method is applied to the dimensionality reduction problem. The semi-supervised paradigm is covered by extending the ease of access heuristic to the cases in which just a few labeled data samples and many unlabeled samples are available. A semi-supervised technique based on interacting forces is also proposed, for which we provide parameter heuristics and stability analysis via a Lyapunov function. Finally, an unsupervised network-based technique uses the concepts of pinning control and consensus time from dynamical processes to derive a similarity measure used to cluster data. The data is represented by a connected and sparse network in which nodes are dynamical elements. Simulations on benchmark data sets and comparisons to well-known machine learning techniques are provided for all proposed techniques. Advantages of network data representation and dynamical processes for machine learning are highlighted in all cases.
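To convey the flavour of classification via random-walk dynamics, here is a minimal sketch in which labeled points act as absorbing states on a Gaussian-similarity graph; the graph construction and absorption criterion are illustrative assumptions, not the thesis's exact 'ease of access' model:

```python
import numpy as np

def random_walk_classify(X_lab, y_lab, X_unl, sigma=1.0):
    """Assign each unlabeled point to the class most likely to absorb
    a random walker started there (absorbing Markov chain)."""
    X = np.vstack([X_lab, X_unl])
    n_lab = len(X_lab)
    # Pairwise Gaussian similarities define the walk's transition weights.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)
    # Absorbing-chain blocks: Q = unlabeled->unlabeled, R = unlabeled->labeled.
    Q, R = P[n_lab:, n_lab:], P[n_lab:, :n_lab]
    B = np.linalg.solve(np.eye(len(Q)) - Q, R)  # absorption probabilities
    classes = np.unique(y_lab)
    scores = np.stack([B[:, y_lab == c].sum(1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]

# Two well-separated blobs; the two test points should inherit their labels.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(3, 0.3, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
print(random_walk_classify(X_lab, y_lab, np.array([[0.1, 0.2], [2.9, 3.1]])))
```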
|
664 |
Random allocations: new and extended models and techniques with applications and numerics. / Kennington, Raymond William January 2007 (has links)
This thesis provides a general methodology for classifying and describing many combinatorial problems, systematising and finding theoretical expressions for quantities of interest, and investigating their feasible numerical evaluation. Unifying notation and definitions are provided. Our knowledge of random allocations is also extended. This is achieved by investigating new processes, generalising known processes, and by providing a formal structure and innovative techniques for analysing them. The random allocation models described in this thesis can be classified as occupancy urn models, in which we have a sequence of urns and throw balls into them, and investigate static, waiting-time and dynamic processes. Various structures are placed on the relationship(s) between cells, balls, and the selection of items being distributed, including varieties, batch arrivals, taboo sets and blocking sets. Both without-replacement and with-replacement sampling types are considered. Emphasis is placed on the distributions of waiting times for one or more events to occur, measured from the time a particular event occurs; this begins as an abstraction and generalisation of a model of departures of cars parked in lanes. One of several additional determinations is the platoon size distribution. Models are analysed using combinatorial analysis and Markov chains. Global attributes are measured, including maximum waits, maximum room required, moments and the clustering of completions. Various conversion formulae have been devised to reduce calculation times by several orders of magnitude. New and extended applications include Queueing in Lanes, Cake Displays, the Coupon Collector's Problem, Sock-Sorting, Matching Dependent Sets (including Genetic Code Attribute Matching and the game SET), the Zig-Zag Problem, Testing for Randomness (including the Cake Display Test, a without-replacement test similar to the standard Empty Cell test), Waiting for Luggage at an Airport, Breakdowns in a Network, Learning Theory and Estimating the Number of Skeletons at an Archaeological Dig. Fundamental, reduction and covering theorems provide ways to reduce the number of calculations required. New combinatorial identities are discovered and a well-known one is proved in a combinatorial way for the first time. Some known results are derived from simple cases of the general models. / Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2007
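One of the named applications, the Coupon Collector's Problem, illustrates the waiting-time questions studied here and is easy to check numerically; a small simulation (illustrative only, not the thesis's combinatorial machinery) compares the Monte Carlo mean with the exact value n·H_n:

```python
import random

def coupon_collector_time(n, rng=random.Random(1)):
    """Number of equally likely draws needed to see all n cell types."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

n, reps = 10, 20000
mc = sum(coupon_collector_time(n) for _ in range(reps)) / reps
exact = n * sum(1 / k for k in range(1, n + 1))  # n * H_n
print(f"simulated {mc:.2f} vs exact {exact:.2f}")  # both close to 29.29
```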
|
665 |
On unequal probability sampling designs / Grafström, Anton January 2010 (has links)
The main objective in sampling is to select a sample from a population in order to estimate some unknown population parameter, usually a total or a mean of some interesting variable. When the units in the population do not have the same probability of being included in a sample, it is called unequal probability sampling. The inclusion probabilities are usually chosen to be proportional to some auxiliary variable that is known for all units in the population. When unequal probability sampling is applicable, it generally gives much better estimates than sampling with equal probabilities. This thesis consists of six papers that treat unequal probability sampling from a finite population of units. A random sample is selected according to some specified random mechanism called the sampling design. For unequal probability sampling there exist many different sampling designs. The choice of sampling design is important since it determines the properties of the estimator that is used. The main focus of this thesis is on evaluating and comparing different designs. Often it is preferable to select samples of a fixed size and hence the focus is on such designs. It is also important that a design has a simple and efficient implementation in order to be used in practice by statisticians. Some effort has been made to improve the implementation of some designs. In Paper II, two new implementations are presented for the Sampford design. In general a sampling design should also have a high level of randomization. A measure of the level of randomization is entropy. In Paper IV, eight designs are compared with respect to their entropy. A design called adjusted conditional Poisson has maximum entropy, but it is shown that several other designs are very close in terms of entropy. A specific situation called real time sampling is treated in Paper III, where a new design called correlated Poisson sampling is evaluated. In real time sampling the units pass the sampler one by one. Since each unit only passes once, the sampler must directly decide for each unit whether or not it should be sampled. The correlated Poisson design is shown to have much better properties than traditional methods such as Poisson sampling and systematic sampling.
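As a concrete baseline for the designs discussed, the sketch below implements plain Poisson sampling with inclusion probabilities proportional to an auxiliary variable, followed by a Horvitz-Thompson estimate of a population total; its random sample size is exactly the drawback that fixed-size designs such as Sampford or conditional Poisson sampling remove (illustrative code, not taken from the papers):

```python
import numpy as np

def poisson_sample(x, n_target, rng):
    """Include unit i independently with pi_i = n_target * x_i / sum(x), capped at 1."""
    pi = np.minimum(1.0, n_target * x / x.sum())
    return np.flatnonzero(rng.random(len(x)) < pi), pi

rng = np.random.default_rng(42)
x = rng.lognormal(0.0, 1.0, size=100)        # auxiliary size variable
sample, pi = poisson_sample(x, n_target=10, rng=rng)
print("realised sample size:", len(sample))  # random, only 10 on average

# Horvitz-Thompson estimator of the total of a variable y roughly proportional to x.
y = 2 * x + rng.normal(0, 0.1, size=100)
print("HT estimate:", (y[sample] / pi[sample]).sum(), "true total:", y.sum())
```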
|
666 |
Analysis and practical application of different methods of Randomized Branch Sampling / Cancino Cancino, Jorge Orlando 26 June 2003 (has links)
No description available.
|
668 |
Joint subsample time delay and echo template estimation for ultrasound signals / Antelo Junior, Ernesto Willams Molina 20 September 2017 (has links)
CAPES / In non-destructive testing (NDT) with ultrasound, the signal obtained from a real data acquisition system may be contaminated by noise and the echoes may have subsample time delays. In some cases, these aspects can compromise the information an acquisition system obtains from a signal. To deal with these situations, time delay estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and to obtain more information about the data set. TDE techniques serve a number of purposes in defectoscopy, for example the accurate location of defects in parts, the monitoring of corrosion rates, and the measurement of material thickness. Data reconstruction methods have a wide range of applications, including NDT, medical imaging and telecommunications. In general, most time delay estimation techniques require a highly precise signal model; otherwise, the quality of the estimate may be reduced. In this work, an alternating scheme is proposed that jointly estimates an echo reference and the time delays of several echoes from noisy measurements. In addition, by reinterpreting the techniques from a probabilistic perspective, their functionality is extended through the joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, simulation results are presented to demonstrate the superiority of the proposed method over conventional methods.
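For orientation, a common baseline for subsample TDE is cross-correlation with parabolic interpolation of the peak; the sketch below illustrates that baseline on a synthetic echo and is not the joint MLE/MAP scheme proposed in this work:

```python
import numpy as np

def subsample_delay(ref, sig):
    """Delay of `sig` relative to `ref`: integer lag from the cross-correlation
    peak, refined by fitting a parabola through the peak and its neighbours."""
    corr = np.correlate(sig, ref, mode="full")
    k = int(np.argmax(corr))
    c0, c1, c2 = corr[k - 1], corr[k], corr[k + 1]
    frac = 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)  # parabolic vertex offset
    return (k - (len(ref) - 1)) + frac

# Synthetic test: a Gaussian-modulated pulse delayed by 20.3 samples plus noise.
n = np.arange(256)
t = n / 100.0
ref = np.exp(-((t - 0.5) ** 2) / 0.002) * np.cos(2 * np.pi * 25 * t)
sig = np.interp(n - 20.3, n, ref, left=0.0, right=0.0)
sig = sig + 0.01 * np.random.default_rng(0).normal(size=n.size)
print(subsample_delay(ref, sig))  # close to 20.3
```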
|
669 |
Extremal combinatorics, graph limits and computational complexity / Noel, Jonathan A. January 2016 (has links)
This thesis is primarily focused on problems in extremal combinatorics, although we will also consider some questions of an analytic and algorithmic nature. The d-dimensional hypercube is the graph with vertex set {0,1}^d in which two vertices are adjacent if they differ in exactly one coordinate. In Chapter 2 we obtain an upper bound on the 'saturation number' of Q_m in Q_d. Specifically, we show that for m ≥ 2 fixed and d large there exists a subgraph G of Q_d of bounded average degree such that G does not contain a copy of Q_m but, for every G' with G ⊊ G' ⊆ Q_d, the graph G' contains a copy of Q_m. This result answers a question of Johnson and Pinto and is best possible up to a factor of O(m). In Chapter 3, we show that there exists ε > 0 such that for all k and for n sufficiently large there is a collection of at most 2^((1-ε)k) subsets of [n] which does not contain a chain of length k+1 under inclusion and is maximal subject to this property. This disproves a conjecture of Gerbner, Keszegh, Lemons, Palmer, Pálvölgyi and Patkós. We also prove that there exists a constant c ∈ (0,1) such that the smallest such collection has cardinality 2^((1+o(1))ck) for all k. In Chapter 4, we obtain an exact expression for the 'weak saturation number' of Q_m in Q_d. That is, we determine the minimum number of edges in a spanning subgraph G of Q_d such that the edges of E(Q_d)\E(G) can be added to G, one edge at a time, in such a way that each new edge completes a copy of Q_m. This answers another question of Johnson and Pinto. We also obtain a more general result for the weak saturation of 'axis-aligned' copies of a multidimensional grid in a larger grid. In the r-neighbour bootstrap process, one begins with a set A_0 of 'infected' vertices in a graph G and, at each step, a 'healthy' vertex becomes infected if it has at least r infected neighbours. If every vertex of G is eventually infected, then we say that A_0 percolates. In Chapter 5, we apply ideas from weak saturation to prove that, for fixed r ≥ 2, every percolating set in Q_d has cardinality at least (1+o(1))(d choose r-1)/r. This confirms a conjecture of Balogh and Bollobás and is asymptotically best possible. In addition, we determine the minimum cardinality exactly in the case r = 3 (the minimum cardinality in the case r = 2 was already known). In Chapter 6, we provide a framework for proving lower bounds on the number of comparable pairs in a subset S of prescribed size of a partially ordered set (poset). We apply this framework to obtain an explicit bound of this type for the poset 𝒱(q,n) consisting of all subspaces of 𝔽_q^n ordered by inclusion, which is best possible when S is not too large. In Chapter 7, we apply the result from Chapter 6, along with the recently developed 'container method', to obtain an upper bound on the number of antichains in 𝒱(q,n) and a bound on the size of the largest antichain in a p-random subset of 𝒱(q,n) which holds with high probability for p in a certain range. In Chapter 8, we construct a 'finitely forcible graphon' W for which there exists a sequence (ε_i) tending to zero such that, for all i ≥ 1, every weak ε_i-regular partition of W has at least exp(ε_i^(-2) / 2^(5 log* ε_i^(-2))) parts.
This result shows that the structure of a finitely forcible graphon can be much more complex than was anticipated in a paper of Lovász and Szegedy. For positive integers p and q with p/q ≥ 2, a circular (p,q)-colouring of a graph G is a mapping V(G) → Z_p such that any two adjacent vertices are mapped to elements of Z_p at distance at least q from one another. The reconfiguration problem for circular colourings asks: given two (p,q)-colourings f and g of G, is it possible to transform f into g by recolouring one vertex at a time so that every intermediate mapping is a (p,q)-colouring? In Chapter 9, we show that this question can be answered in polynomial time for 2 ≤ p/q < 4 and is PSPACE-complete for p/q ≥ 4.
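The r-neighbour bootstrap process defined above is straightforward to simulate by brute force; the sketch below (an illustration of the process itself, not of the extremal bounds proved in Chapter 5) checks a small percolating set in Q_3:

```python
import itertools

def bootstrap_percolates(d, r, seed):
    """Run the r-neighbour bootstrap process on Q_d from `seed`;
    return True if every vertex is eventually infected."""
    infected = set(seed)
    vertices = list(itertools.product((0, 1), repeat=d))
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in infected:
                continue
            # Count infected neighbours (flip one coordinate at a time).
            k = sum(tuple(v[j] ^ (j == i) for j in range(d)) in infected
                    for i in range(d))
            if k >= r:
                infected.add(v)
                changed = True
    return len(infected) == 2 ** d

# With r = 2, the four even-weight vertices of Q_3 form a percolating set.
seed = [v for v in itertools.product((0, 1), repeat=3) if sum(v) % 2 == 0]
print(bootstrap_percolates(3, 2, seed))  # True
```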
|
670 |
Climate variability and climate change in water resources management of the Zambezi River basin / Tirivarombo, Sithabile January 2013 (has links)
Water is recognised as a key driver of social and economic development in the Zambezi basin. The basin is riparian to eight southern African countries, and the transboundary nature of its water resources can be viewed as an agent of cooperation between the basin countries. It is possible, however, that the same water resource can lead to conflicts between water users. The southern African Water Vision for ‘equitable and sustainable utilisation of water for social, environmental justice and economic benefits for the present and future generations’ calls for integrated and efficient management of water resources within the basin. Ensuring water and food security in the Zambezi basin is, however, challenged by high variability in climate and in the available water resources. Water resources are under continuous threat from pollution, population growth, development and urbanisation, as well as global climate change. These factors increase the demand for freshwater resources and have made water one of the major driving forces for development. The basin is also vulnerable owing to a lack of adequate financial resources and appropriate water resources infrastructure to enable viable, equitable and sustainable distribution of the water resources. This is in addition to the fact that the basin’s economic mainstay and social well-being are largely dependent on rainfed agriculture. There is also competition among the different water users, and this has the potential to generate conflicts, which further hinder the development of water resources in the basin. This thesis focuses on the Zambezi River basin, emphasising climate variability and climate change. It is now considered common knowledge that the global climate is changing and that many of the impacts will be felt through water resources. If these predictions are correct, then the Zambezi basin is likely to suffer under such impacts, since its economic mainstay is largely determined by the availability of rainfall. This study holds that, in order to ascertain the impacts of climate change, there should be a basis against which change is evaluated: if we do not know the historical patterns of variability, it is difficult to predict changes in the future climate and in the hydrological resources, and it will certainly be difficult to develop appropriate management strategies. Reliable quantitative estimates of water availability are a prerequisite for successful water resource plans. Such initiatives have, however, been hindered by a paucity of data, especially in a basin where gauging networks are inadequate and have deteriorated, compounded by shortages of both human and financial resources to ensure adequate monitoring. To address the data problems, this study relied largely on global data sets; the CRU TS2.1 rainfall grids were used for a large part of the analysis. The study starts by assessing the historical variability of rainfall and streamflow in the Zambezi basin, and the results are used to inform predictions of future change. Various methods of assessing historical trends were employed, and regional drought indices were generated and evaluated against the historical rainfall trends. The study clearly demonstrates that the basin has a high degree of temporal and spatial variability in rainfall and streamflow at inter-annual and multi-decadal scales.
The Standardised Precipitation Index, a rainfall-based drought index, is used to assess historical drought events in the basin, and it is shown that most of the droughts that have occurred were influenced by climatic and hydrological variability. It is concluded, through the evaluation of agricultural maize yields, that the basin’s food security is mostly constrained by the availability of rainfall. Comparing the viability of a rainfall-based index with a soil-moisture-based index as an agricultural drought indicator, this study concluded that a soil-moisture-based index is the better indicator, since all of the water balance components are considered in generating the index. Such an index represents the actual amount of water available to the plant, unlike purely rainfall-based indices, which do not account for other components of the water budget that cause water losses. A number of challenges were, however, faced in assessing variability and historical drought conditions, mainly because most parts of the Zambezi basin are ungauged and the available data are sparse, short and discontinuous. Hydrological modelling is frequently used to bridge the data gap and to facilitate the quantification of a basin’s hydrology for both gauged and ungauged catchments. The trend has been to use various methods of regionalisation to transfer information from gauged basins, or from basins with adequate physical basin data, to ungauged basins, all to ensure that water resources are accounted for and that the future can be well planned. A number of approaches leading to the evaluation of the basin’s hydrological response to future climate change scenarios are taken. The Pitman rainfall-runoff model has enjoyed wide use as a water resources estimation tool in southern Africa. The model has been calibrated for the Zambezi basin, but it should be acknowledged that any hydrological modelling process is characterised by many uncertainties arising from limitations in input data and inherent model structural uncertainty. The calibration process is thus carried out in a manner that embraces some of these uncertainties. Initial ranges of parameter values (maximum and minimum) that incorporate the possible parameter uncertainties are assigned in relation to physical basin properties. These parameter sets are used as input to the uncertainty version of the model to generate a behavioural parameter space, which is then further refined through manual calibration. The use of parameter ranges initially guided by the basin’s physical properties generates streamflows that adequately represent the historically observed amounts. This study concludes that the uncertainty framework and the Pitman model perform well in the Zambezi basin. Based on assumptions of an intensifying hydrological cycle, climate change is frequently expected to have negative impacts on water resources. It is therefore important that basin-scale assessments are undertaken so that appropriate future management strategies can be developed. To assess the likely changes in the Zambezi basin, the calibrated Pitman model was forced with downscaled and bias-corrected GCM data. Three GCMs were used for this study, namely ECHAM, GFDL and IPSL. The general observation is that the near-future (2046-2065) conditions of the Zambezi basin are expected to remain within the ranges of historically observed variability.
The differences between the predictions of the three GCMs are an indication of the uncertainties in the future, and it has not been possible to draw firm conclusions about the direction of change. It is therefore recommended that future water resources management strategies account for historical patterns of variability, but also for increased uncertainty. Any management strategy that can satisfactorily deal with the large variability evident in the historical data should be robust enough to accommodate the near-future patterns of water availability predicted by this study. However, the uncertainties in these predictions suggest that improved monitoring systems are required to provide additional data against which future model outputs can be assessed.
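To illustrate the rainfall-based index used in this study, a minimal SPI computation fits a gamma distribution to aggregated rainfall and transforms cumulative probabilities into standard normal quantiles; this is a simplified sketch (a full SPI fits the gamma per calendar month and treats zero-rainfall totals separately; scipy is assumed available):

```python
import numpy as np
from scipy import stats

def spi(rain, window=3):
    """Simplified Standardised Precipitation Index over a rolling window."""
    agg = np.convolve(rain, np.ones(window), mode="valid")  # rolling totals
    a, loc, scale = stats.gamma.fit(agg, floc=0)            # fit gamma to totals
    cdf = stats.gamma.cdf(agg, a, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))    # map to N(0,1) quantiles

rng = np.random.default_rng(7)
monthly_rain = rng.gamma(shape=2.0, scale=40.0, size=240)   # 20 synthetic years
index = spi(monthly_rain)
print("months in drought (SPI < -1):", int((index < -1).sum()))
```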
|