41

Multi-objective optimisation under deep uncertainty

Shavazipour, Babooshka January 2018 (has links)
Most decisions in real-life problems need to be made in the absence of complete knowledge about their consequences. Furthermore, in some of these problems the probabilities and/or the number of different outcomes are also unknown (termed deep uncertainty). Therefore, probability-based approaches (such as stochastic programming) are unable to address these problems. On the other hand, involving various stakeholders with different (possibly conflicting) criteria brings additional complexity. The main aim and primary motivation of this thesis have been to deal with deep uncertainty in Multi-Criteria Decision-Making (MCDM) problems, especially in long-term decision-making processes such as strategic planning. To achieve these aims, we first introduced a two-stage scenario-based structure for dealing with deep uncertainty in Multi-Objective Optimisation (MOO)/MCDM problems. The proposed method extends the concept of two-stage stochastic programming with recourse to cope with deep uncertainty through the use of scenario planning rather than statistical expectation. In this research, scenarios are used as a dimension of preference (a component of what we term the meta-criteria) to avoid problems relating to the assessment and use of probabilities under deep uncertainty. Such scenario-based thinking involved a multi-objective representation of performance under different future conditions as an alternative to expectation, which fitted naturally into the broader multi-objective problem context. To aggregate the objectives of the problem, the Generalised Goal Programming (GGP) approach is used. Because this approach can handle large numbers of objective functions/criteria, GGP is particularly useful in the proposed framework. The only action the Decision Maker (DM) needs to take is to identify a goal for each criterion, without having to investigate the trade-offs between criteria. Moreover, the proposed two-stage framework has been expanded to a three-stage structure and a moving horizon concept to handle deep uncertainty in more complex problems, such as strategic planning. As strategic planning problems deal with more than two stages and real processes are continuous, more scenarios will continue to unfold, which may or may not be periodic. "Stages", in this study, are artificial constructs to structure thinking about an indefinite future. Suitable lengths for the planning window and the stages in the proposed methodology are also investigated. Philosophically, the proposed two-stage structure always plans and looks one step ahead, while the three-stage structure considers the conditions and consequences of the two upcoming steps in advance, which fits well with our primary objective. Ignoring the long-term consequences of decisions, as well as the conditions likely to arise, cannot yield a robust strategic approach. Therefore, by utilising the three-stage structure, we may generally expect a more robust decision than with a two-stage representation. Modelling of time preferences in multi-stage problems has also been introduced to solve the fundamental problem of comparability between the two proposed methodologies, which arises from their different time horizons, as the two-stage model is ignorant of the third stage. This concept is applied through differential weighting in the models.
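As a rough sketch of how such a scenario-based goal programming aggregation can be written (the notation below - first-stage decisions x, scenario-specific recourse decisions y_s, goals g_k and weights w_{s,k} - is assumed for exposition and is not taken from the thesis), goal deviations are minimised jointly over all meta-scenarios rather than in expectation:

```latex
% Illustrative scenario-based goal programming aggregation (assumed notation):
% d_{s,k}^- and d_{s,k}^+ are under- and over-achievement of goal g_k under
% meta-scenario s; w_{s,k} are importance weights.
\begin{align}
  \min_{x,\; y_s,\; d^{\pm}} \quad
    & \sum_{s \in S} \sum_{k \in K} w_{s,k}\,\bigl(d_{s,k}^{-} + d_{s,k}^{+}\bigr) \\
  \text{s.t.} \quad
    & f_k(x, y_s) + d_{s,k}^{-} - d_{s,k}^{+} = g_k,
      && s \in S,\; k \in K, \\
    & x \in X, \quad y_s \in Y_s(x), \quad d_{s,k}^{-},\, d_{s,k}^{+} \ge 0,
      && s \in S,\; k \in K.
\end{align}
```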
Importance weights, then, are primarily used to make the two- and three-stage models more directly comparable, and only secondarily as a measure of risk preference. Differential weighting can also help us express further preferences in the model and lead it to generate more preferred solutions. Expanding the proposed structure to problems with more than three stages, which usually involve very many meta-scenarios, may lead to a computationally expensive model that cannot easily be solved, if at all. Moreover, extending to a planning horizon that is too long will not result in an exact plan, as nothing in nature is predictable to this level of detail, and we are always surprised by new events. Therefore, beyond the expensive computation of a multi-stage structure with more than three stages, defining plausible scenarios for distant stages is neither logical nor, in practice, possible. Moving horizon models within a T-stage planning window have therefore been introduced. To be able to run and evaluate the proposed two- and three-stage moving horizon frameworks over longer planning horizons, we need to identify all plausible meta-scenarios. However, under the assumption of deep uncertainty, this identification is almost impossible. On the other hand, even with a finite set of plausible meta-scenarios, comparing and computing the results across all plausible meta-scenarios is hardly possible, because the size of the model grows exponentially with the length of the planning horizon. Furthermore, analysis of the solutions requires hundreds or thousands of multi-objective comparisons that are not easily conceivable, if at all. These issues motivated us to perform a Simulation-Optimisation study to simulate a reasonable number of meta-scenarios and enable evaluation, comparison and analysis of the proposed methods for problems with a T-stage planning horizon. In this Simulation-Optimisation study, we started by setting the current scenario, the scenario we face at the beginning of the period. Then, the optimisation model was run to obtain the first-stage decisions, which can be implemented immediately. Thereafter, the next scenario was randomly generated using Monte Carlo simulation. Under deep uncertainty we do not have enough knowledge about the likelihoods of plausible scenarios or the probability space; therefore, to simulate deep uncertainty, no information about scenario likelihoods is used in the decision models. Two- and three-stage Simulation-Optimisation algorithms were also proposed. A comparison of these algorithms showed that the solutions of the two-stage moving horizon model are feasible for the other pattern (three-stage). Also, the optimal solution of the three-stage moving horizon model is not dominated by any solution of the other model, so it must achieve goal attainment that is better than, or at least equal to, that of the two-stage moving horizon model. Accordingly, the three-stage moving horizon model evaluates the optimal solution of the corresponding two-stage moving horizon model against the other feasible solutions; if it selects anything else, that solution must either be better in goal achievement, be more robust in some future scenarios, or both. However, the cost of these advantages must be considered (as it may lead to a computationally expensive problem), and the efficiency of applying this structure needs to be established.
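A minimal sketch of a moving-horizon Simulation-Optimisation loop of the kind described above is given below. The scenario labels and the solve_two_stage / sample_next_scenario helpers are hypothetical placeholders rather than the thesis's actual models, and the uniform Monte Carlo draw reflects the deliberate avoidance of scenario likelihoods under deep uncertainty.

```python
import random

# Hypothetical meta-scenario labels; a real study would define plausible
# scenarios for the problem at hand.
SCENARIOS = ["low_demand", "base", "high_demand"]


def solve_two_stage(current_scenario, future_scenarios):
    """Placeholder for the scenario-based goal programming model: returns the
    first-stage decision to implement now while hedging against all plausible
    next-stage scenarios."""
    return {"decision_for": current_scenario,
            "hedges_against": list(future_scenarios)}


def sample_next_scenario(rng):
    """Monte Carlo step: with no likelihood information assumed under deep
    uncertainty, the next scenario is simply drawn uniformly."""
    return rng.choice(SCENARIOS)


def moving_horizon_run(initial_scenario, n_stages, seed=0):
    rng = random.Random(seed)
    scenario = initial_scenario
    implemented = []
    for stage in range(n_stages):
        # Optimise looking one step ahead (two-stage pattern), implement only
        # the first-stage decision, then roll the horizon forward.
        decision = solve_two_stage(scenario, SCENARIOS)
        implemented.append((stage, scenario, decision))
        scenario = sample_next_scenario(rng)
    return implemented


if __name__ == "__main__":
    for stage, scenario, decision in moving_horizon_run("base", n_stages=5):
        print(stage, scenario, decision)
```

The three-stage variant would differ only in that solve_two_stage would be replaced by a model that also hedges over the scenarios of the stage after next.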
Obviously, using the three-stage structure rather than the two-stage approach brings more complexity and computation to the models. It is also shown that the solutions of the three-stage model would be preferred to the solutions provided by the two-stage model under most circumstances. However, by the "efficiency" of the three-stage framework in our context we mean whether utilising this approach and its solutions is worth the expense of the additional complexity and computation. The experiments in this study showed that the three-stage model has advantages under most circumstances (meta-scenarios), but that the gains are quite modest. This is frequently observed when comparing these methods on problems with a short-term (say, fewer than five stages) planning window. Nevertheless, analysis of the length of the planning horizon and its effects on the solutions of the proposed frameworks indicates that utilising the three-stage models is more efficient over longer periods, because the differences between the solutions of the two proposed structures increase with each iteration of the algorithms in the moving horizon models. Moreover, during the long-term calculations we noticed that the two-stage algorithm failed to find the optimal solutions in some iterations, while the three-stage algorithm found the optimal value in all cases. Thus, it seems that for planning horizons with more than ten stages the efficiency of the three-stage model may be worth the expense of the additional complexity and computation. Nevertheless, if the DM prefers not to use the three-stage structure because of its complexity and/or calculations, the two-stage moving horizon model can provide some reasonable solutions, although they might not be as good as the solutions generated by a three-stage framework. Finally, to examine the power of the proposed methodology in real cases, the proposed two-stage structure was applied in the sugarcane industry to analyse the whole infrastructure of the sugar and bioethanol Supply Chain (SC), such that economic (max profit), environmental (min CO₂) and social (max job creation) benefits were optimised under six key uncertainties, namely sugarcane yield, ethanol and refined sugar demands and prices, and the exchange rate. Moreover, one of the critical design questions - the optimal number, technologies and location(s) of the ethanol plant(s) - was also addressed in this study. A general model for the strategic planning of sugar-bioethanol supply chains under deep uncertainty was formulated and examined in a case study based on the South African Sugar Industry. This problem is formulated as a Scenario-Based Mixed-Integer Two-Stage Multi-Objective Optimisation problem and solved by utilising the Generalised Goal Programming approach. To sum up, the proposed methodology is, to the best of our knowledge, a novel approach that can successfully handle deep uncertainty in MCDM/MOO problems with both short- and long-term planning horizons. It is generic enough to be used in any MCDM problem under deep uncertainty. However, in this thesis the proposed structure was applied only to Linear Problems (LP). Non-linear problems would be an important direction for future research, and different solution methods may need to be examined to solve them. Moreover, many other real-world optimisation and decision-making applications could be considered to test the proposed method in the future.
42

The diet of the Cape fur seal Arctocephalus pusillus pusillus in Namibia : variability and fishery interactions

Mecenero, Silvia January 2005 (has links)
Includes bibliographical references.
43

The effects of oiling and rehabilitation on the breeding productivity and annual moult and breeding cycles of African penguins

Wolfaardt, Anton Carl January 2007 (has links)
Includes bibliographical references.
44

A comparative study of stochastic models in biology

Brandão, Anabela de Gusmão 04 May 2020 (has links)
In many instances, problems that arise in biology do not fall under any category for which standard statistical techniques are available to analyse them. In these situations, specific methods have to be developed to solve them and answer the questions put forward by biologists. In this thesis four different problems occurring in biology are investigated. A stochastic model is built in each case to describe the problem at hand. These models are not only effective as descriptive tools but also afford strategies, consistent with conventional model selection processes, for dealing with standard statistical hypothesis testing situations. The abstracts of the papers resulting from these problems are presented below.
45

A variance shift model for outlier detection and estimation in linear and linear mixed models

Gumedze, Freedom Nkhululeko January 2008 (has links)
Includes abstract. / Includes bibliographical references. / Outliers are data observations that fall outside the usual conditional ranges of the response data. They are common in experimental research data, for example due to transcription errors or faulty experimental equipment. Often outliers are quickly identified and addressed, that is, corrected, removed from the data, or retained for subsequent analysis. However, in many cases they are completely anomalous and it is unclear how to treat them. Case deletion techniques are established methods for detecting outliers in linear fixed effects analysis. The extension of these methods to detecting outliers in linear mixed models has not been entirely successful in the literature. This thesis focuses on a variance shift outlier model as an approach to detecting and assessing outliers in both linear fixed effects and linear mixed effects analysis. A variance shift outlier model assumes a variance shift parameter, wi, for the ith observation, where wi is unknown and estimated from the data. Estimated values of wi indicate observations with possibly inflated variances relative to the remainder of the observations in the data set, and hence outliers. When outliers lurk within anomalous elements in the data set, a variance shift outlier model offers an opportunity to include the anomalies in the analysis, but down-weighted using the variance shift estimate wi. This down-weighting might be considered preferable to omitting data points (as in case-deletion methods). For very large values of wi, a variance shift outlier model is approximately equivalent to the case deletion approach.
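For the fixed-effects case, the variance shift idea can be sketched as follows (an illustrative formulation only; the exact linear mixed model version developed in the thesis may differ):

```latex
% Variance shift sketch for a suspect observation i in the linear model:
\[
  y = X\beta + \varepsilon, \qquad
  \operatorname{Var}(\varepsilon_j) = \sigma^2 \;\; (j \neq i), \qquad
  \operatorname{Var}(\varepsilon_i) = w_i \sigma^2 ,
\]
% where w_i is the variance shift parameter estimated from the data. A large
% estimate of w_i flags observation i as a possible outlier and implies the
% down-weighting described above; as the estimate grows very large, the fit
% approaches the case-deletion result.
```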
46

A continuous-time formulation for spatial capture-recapture models

Distiller, Greg January 2016 (has links)
Spatial capture-recapture (SCR) models are relatively new but have become the standard approach for estimating animal density from capture-recapture data. It has in the past been impractical to obtain sufficient data for analysis on species that are very difficult to capture, such as elusive carnivores that occur at low density and range very widely. Advances in technology have led to alternative ways to virtually "capture" individuals without having to physically hold them. Some examples of these new non-invasive sampling methods include scat or hair collection for genetic analysis, acoustic detection and camera trapping. In traditional capture-recapture (CR) and SCR studies, populations are sampled at discrete points in time, leading to clear and well-defined occasions, whereas the new detector types mentioned above sample populations continuously in time. Researchers with continuously collected data currently need to define an appropriate occasion and aggregate their data accordingly, thereby imposing an artificial construct on their data for analytical convenience. This research develops a continuous-time (CT) framework for SCR models by treating detections as a temporal non-homogeneous Poisson process (NHPP) and replacing the usual SCR detection function with a continuous detection hazard function. The general CT likelihood is first developed for data from passive (also called "proximity") detectors, like camera traps, that do not physically hold individuals. The likelihood is then modified to produce a likelihood for single-catch traps (traps that are taken out of action by capturing an animal), which has proven difficult to develop with a discrete-occasion approach. The lack of a suitable single-catch trap likelihood has led researchers to use a discrete-time (DT) multi-catch trap estimator to analyse single-catch trap data. Previous work has found the DT multi-catch estimator to be robust despite the fact that it is known to be based on the wrong model for single-catch traps (it assumes that the traps continue operating after catching an individual). Simulation studies in this work confirm that the multi-catch estimator is robust for estimating density when density is constant or does not vary much in space. However, there are scenarios with non-constant density surfaces in which the multi-catch estimator is not able to correctly identify regions of high density. Furthermore, the multi-catch estimator is known to be negatively biased for the intercept parameter of SCR detection functions, and there may be interest in the detection function in its own right. On the other hand, the CT single-catch estimator is unbiased, or nearly so, for all parameters of interest, including those in the detection function and those in the model for density. When one assumes that the detection hazard is constant through time, there is no impact of ignoring capture times and using only the detection frequencies. This is of course a special case, and in reality detection hazards will tend to vary in time. However, when one assumes that the effects of time and distance in the time-varying hazard are independent, then similarly there is no information in the capture times about density and detection function parameters. The work here uses a detection hazard that assumes independence between time and distance. Different forms for the detection hazard are explored, with the most flexible choice being a cyclic regression spline.
Extensive simulation studies suggest, as expected, that a DT proximity estimator is unbiased for the estimation of density even when the detection hazard varies through time. However, there are indirect benefits of incorporating capture times, because doing so will lead to a better-fitting detection component of the model, and this can prevent unexplained variation being erroneously attributed to the wrong covariate. The analysis of two real datasets supports this assertion, because the models with the best-fitting detection hazard estimate different effects from the other models. In addition, modelling the detection process in continuous time leads to a more parsimonious approach than using DT models when the detection hazard varies in time. The underlying process occurs in continuous time, and so using CT models allows inferences to be drawn about that underlying process; for example, the time-varying detection hazard can be viewed as a proxy for animal activity. The CT formulation is able to model the underlying detection hazard accurately and provides a formal modelling framework in which to explore different hypotheses about activity patterns. There is scope to integrate the CT models developed here with models for space usage and landscape connectivity to explore these processes on a finer temporal scale. SCR models are experiencing rapid growth in both application and method development. The data-generating process occurs in CT, and hence a CT modelling approach is a natural fit and opens up several opportunities that are not possible with a DT formulation. The work here makes a contribution by developing and exploring the utility of such a CT SCR formulation.
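One way to picture the separable detection hazard discussed above (notation assumed for exposition; the thesis's exact parameterisation may differ): for detector k and an animal with activity centre x,

```latex
\[
  h_k(t; x) = \lambda(t)\, g\!\bigl(d(x, x_k)\bigr), \qquad
  g(d) = h_0 \exp\!\left(-\frac{d^2}{2\sigma^2}\right),
\]
% so that, under a non-homogeneous Poisson process, the expected number of
% detections at a proximity detector over a survey of length T factors as
\[
  \Lambda_k(x) = \int_0^T h_k(t; x)\, dt
             = g\!\bigl(d(x, x_k)\bigr) \int_0^T \lambda(t)\, dt .
\]
% Because the time integral is common to all detectors, capture times add no
% information about density or the distance parameters when the time and
% distance effects are independent, as noted above.
```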
47

Cold winters vs long journeys : adaptations of primary moult and body mass to migration and wintering in the Grey Plover Pluvialis squatarola

Serra, Lorenzo January 2002 (has links)
Includes bibliographical references. / The Grey Plover Pluvialis squatarola is a circumpolar breeding wader with a cosmopolitan winter distribution. Primary moult generally starts only when potential wintering sites are reached. Across the Palearctic-African region Grey Plovers experience an enormous variety of ecological and climatic conditions, which determine the development of different moult patterns, according to local conditions and timing of migration.
48

Effects of protected areas and climate change on the occupancy dynamics of common bird species in South Africa

Duckworth, Greg 18 February 2019 (has links)
Protected areas are tracts of land set aside primarily for the conservation of biodiversity and natural habitats. They are intended to mitigate biodiversity loss caused by land-use change worldwide. Climate change has been shown to disrupt species' natural distributions and patterns, and poses a significant threat to global biodiversity. The goals of this thesis are to address these important issues and to understand how protected areas and climate change affect the range dynamics of common, resident bird species in South Africa. Common species were used because they have been shown to drive important ecosystem patterns, and a decline in the abundance and diversity of common species can indicate drastic declines in ecosystem integrity. This thesis comprises four data chapters; in the first three I model the occupancy dynamics of 200 common, resident bird species in South Africa to gain an understanding of how the proportion of protected areas within a landscape affects common species. For the last data chapter, I examined the effects of protected areas and a changing climate on the range dynamics of Cape Rock-jumper (Chaetops frenatus), a species endemic to the southwestern part of South Africa whose population is declining rapidly in response to climate change. I modelled its occupancy dynamics in relation to climate, vegetation, and protected area. Overall, my key findings show that bird abundances vary widely as a function of protected areas, but on average bird abundances are higher in regions with a higher proportion of protected areas than in regions with a lower proportion. I found that the conservation ability of protected areas was influenced by the type of land use in the surrounding landscape. For example, the extent of agricultural land in proximity to a protected area significantly increased the mean abundance of birds in that protected area, whilst the average abundance of most species was not affected by the extent of urban area near protected areas. On average, species preferentially colonized and persisted within landscapes with a higher proportion of protected area, compared to landscapes with a lower proportion. However, protected areas were not able to slow the extinction rate for all species, and the average extinction rate for some groups of species actually increased as the extent of protected areas within a landscape increased. Cape Rock-jumper also preferentially occupied regions with higher proportions of protected area. Despite this, Cape Rock-jumper’s range is predicted to shrink considerably in response to the hotter and mildly drier climate forecast for the region. As a result, Cape Rock-jumper will likely be of conservation concern as the climate over its range continues to change. I conclude that, in general, protected areas are effective at conserving common bird species over a heterogeneous landscape in South Africa, and should be prioritised as a key conservation strategy in the future. I further conclude that climate change will be a concern for this endemic species, and for biodiversity in general, which will likely place further emphasis on the importance of protected areas in mitigating species' responses to climate change.
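A generic dynamic (multi-season) occupancy model of the kind referred to above can be sketched as follows; the exact model structure, link functions and covariates used in the thesis are assumptions here:

```latex
\[
  z_{i,1} \sim \mathrm{Bernoulli}(\psi_{i,1}), \qquad
  z_{i,t+1} \mid z_{i,t} \sim
    \mathrm{Bernoulli}\!\bigl( z_{i,t}\,(1 - \epsilon_{i,t}) + (1 - z_{i,t})\,\gamma_{i,t} \bigr),
\]
\[
  y_{i,j,t} \mid z_{i,t} \sim \mathrm{Bernoulli}(z_{i,t}\, p_{i,j,t}), \qquad
  \operatorname{logit}(\gamma_{i,t}) = \alpha_0 + \alpha_1\,\mathrm{PA}_i, \quad
  \operatorname{logit}(\epsilon_{i,t}) = \beta_0 + \beta_1\,\mathrm{PA}_i,
\]
% where z_{i,t} is occupancy of site i in season t, \gamma and \epsilon are
% colonisation and extinction probabilities, p is detection probability on
% survey j, and PA_i is the proportion of protected area in the landscape
% around site i (a hypothetical covariate for illustration).
```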
49

Survival and movements of African Penguins, especially after oiling

Whittington, Phil, 1958- January 2002 (has links)
Bibliography: p. 273-286.
50

Multi-attribute value measurement and economic paradigms in environmental decision making

Joubert, Alison Ruth January 2002 (has links)
Bibliography: p. 219-228. / The two environmental decision-making approaches of environmental economics (EE) valuation and multi-criteria decision analysis (MCDA) differ fundamentally in their underlying philosophies and approach; hence they are characterised as paradigms. The EE paradigm includes the idea that, if appropriate prices can be found and implemented for goods not normally traded on the market, then the market mechanism will efficiently distribute resources and decisions are therefore based on the concepts of individual willingness to pay and consumer sovereignty. That an efficient market is not necessarily equitable or sustainable has long been acknowledged, but EE adjustments are subject to theoretical and methodological problems. The MCDA paradigm is based on the idea that values and preferences should be examined and constructed through interaction between workshop participants and the analyst, given basic measurement theory axioms. Various EE and MCDA methods have been devised for measuring value in different contexts, some of which were applied, in the context of environmental (particularly water resources) management, in six action research case studies. The EE methods were contingent behaviour valuation, the contingent valuation method, conjoint analysis and the travel cost method. The MCDA method was a version of the simple multi-attribute rating technique (called SMARTx). In the SMARTx cases, applying a group-value sharing model during a series of workshops, stakeholders rated the effect of alternatives on a number of environmental, social and economic attributes directly or using value functions and gave weights to criteria. Indirect compensatory values of one criterion in terms of another were determined. In the EE cases, survey respondents were asked their travel costs, preference for multi-attribute profiles and willingness to pay for alternatives. Total and average willingness to pay for an amenity, its attributes or changes in environmental quality were determined. The practical and theoretical implications of applying the different methods were examined and compared in terms of four metacriteria: resonance with and validity within the prevailing political and decision-context, general validity and reliability, ability to include equity and sustainability criteria and practicality.
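The additive value model that underlies SMART-type methods can be sketched as follows (a generic form; the SMARTx variant used in the thesis may include refinements):

```latex
\[
  V(a) = \sum_{k=1}^{m} w_k\, v_k(a), \qquad
  \sum_{k=1}^{m} w_k = 1, \qquad 0 \le v_k(a) \le 1,
\]
% where v_k(a) is the score of alternative a on attribute k (rated directly or
% via a value function) and w_k is its importance weight; ratios of weights
% express the indirect compensatory trade-offs between criteria mentioned above.
```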
