About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Species distribution modelling of Aloidendron dichotomum (quiver tree)

Dube, Qobo 18 February 2019
A variety of species distribution models (SDMs) were fit to data collected by a 15,000 km road-side visual survey of Aloidendron dichotomum populations in the Northern Cape region of South Africa and in Namibia. We fit traditional presence/absence SDMs as well as SDMs of how proportions are distributed across three species stage classes (juvenile, adult, dead). Using five candidate machine learning methods and an ensemble model, we compared a number of approaches, including the role of balanced class (presence/absence) datasets in species distribution modelling. Secondary to this was whether the addition of species absences, generated where the species is known not to occur, had an impact on findings. The goal of the analysis was to map the distribution of Aloidendron dichotomum under different scenarios. Precipitation-based variables were generally more predictive of species presence or absence. Visual interpretation of the estimated Aloidendron dichotomum population under current climate conditions suggested a reasonably well-fitting model, with a large overlap with the sampled area. Some conditions outside of the sampled range, where Aloidendron dichotomum is not known to occur, were however estimated to be suitable for species incidence. Estimated habitat suitability for juvenile individuals generally decreased towards Windhoek. The largest proportion of dead individuals was estimated to be on the northern edge of the Riemvasmaak Conservancy, along the South African/Namibian border, reaching up to 60% of the population. The adult stage class maintained overall proportional dominance. Under future climate scenarios, despite retaining much of the currently habitable conditions, a noticeable negative shift in habitat suitability for the species was observed. A temporal analysis of Aloidendron dichotomum's latitudinal and longitudinal range revealed a potential south-easterly shift in suitable species conditions.
Results were, however, met with some uncertainty, as the SDMs were found to be extrapolating into a substantial portion of the study area. We found that balancing response class frequencies within the data proved not to be an effective error reduction technique overall, having no considerable impact on species detection accuracy. Balancing the classes did, however, improve accuracy on the presence class, at the cost of accuracy on the observed absence class. Furthermore, overall model accuracy increased as more absences from outside the study area were added, but only because these generated absences were predicted well. The resulting models had lower estimated suitability outside of the survey area and noticeably different suitability distributions within the survey area. This made the addition of the generated absences undesirable. Results highlighted the potential vulnerability of Aloidendron dichotomum under pessimistic, yet likely, future climate scenarios.
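The class-balancing step examined in this thesis can be sketched in a few lines. This is a hypothetical illustration, not the thesis's actual pipeline: it undersamples the majority response class so that presences and absences appear in equal numbers before model fitting (the record format and `label_key` name are assumptions).

```python
import random

def balance_classes(records, label_key="present", seed=42):
    """Undersample the majority class so that presences and absences are
    equally represented (hypothetical record format: dicts with a boolean)."""
    rng = random.Random(seed)
    presences = [r for r in records if r[label_key]]
    absences = [r for r in records if not r[label_key]]
    n = min(len(presences), len(absences))
    return rng.sample(presences, n) + rng.sample(absences, n)
```

As the abstract notes, such balancing trades accuracy on the absence class for accuracy on the presence class rather than reducing error overall.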
2

A Machine Learning Approach to Modeling Dynamic Decision-Making in Strategic Interactions and Prediction Markets

Nay, John Jacob 28 March 2017
My overarching modeling goal for my dissertation is to maximize generalization, some function of data and knowledge, from one sample, with its observations drawn independently from the distribution D, to another sample drawn from D, while also obtaining interpretable insights from the models. The processes of collecting relevant data and generating features from the raw data impart substantive knowledge into predictive models (and the model representation and optimization algorithms applied to those features contain methodological knowledge). I combine this knowledge with the data to train predictive models to deliver generalizability, and then investigate the implications of those models with simulations systematically exploring parameter spaces. The exploration of parameter space provides insights about the relationships between key variables. Chapter 2 describes a method to generate descriptive models of strategic decision-making. I use an efficient representation of repeated game strategies with state matrices and a genetic algorithm-based estimation process to learn these models from data. This combination of representation and optimization is effective for modeling decision-making with experimental game data and observational international relations data. Chapter 3 demonstrates that models can deliver high levels of generalizability with accurate out-of-sample predictions and interpretable scores of variable importance that can guide future behavioral research. I combine behavioral-game-theory-inspired feature design with data to train predictive models to deliver generalizability, and then investigate interactive implications of those models with optimization and sensitivity analyses. Chapter 4 presents a computational model as a test-bed for designing climate prediction markets. I simulate two alternative climate futures, in which global temperatures are primarily driven either by carbon dioxide or by solar irradiance.
These represent, respectively, the scientific consensus and the most plausible hypothesis advanced by prominent skeptics. Then I conduct sensitivity analyses to determine how a variety of factors describing both the market and the physical climate may affect traders' beliefs about the cause of global climate change. Market participation causes most traders to converge quickly toward believing the "true" climate model.
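The state-matrix representation of repeated-game strategies described in Chapter 2 can be made concrete with a small sketch. The encoding below (a first move plus a lookup table keyed by both players' previous moves) and the payoff values are illustrative assumptions, not the dissertation's actual code; in the full method, a genetic algorithm would search over such tables to fit observed play.

```python
# Moves: 1 = cooperate, 0 = defect. Payoffs are the usual illustrative
# prisoner's-dilemma values, not those used in the dissertation.
PAYOFF = {(1, 1): (3, 3), (1, 0): (0, 5), (0, 1): (5, 0), (0, 0): (1, 1)}

def play(s1, s2, rounds=10):
    """Play two state-matrix strategies against each other.
    A strategy is (first_move, {(own_prev, opp_prev): next_move})."""
    a, b = s1[0], s2[0]
    score1 = score2 = 0
    for _ in range(rounds):
        p1, p2 = PAYOFF[(a, b)]
        score1, score2 = score1 + p1, score2 + p2
        a, b = s1[1][(a, b)], s2[1][(b, a)]
    return score1, score2

# Two classic strategies expressed as state matrices.
TIT_FOR_TAT = (1, {(m, o): o for m in (0, 1) for o in (0, 1)})
ALWAYS_DEFECT = (0, {(m, o): 0 for m in (0, 1) for o in (0, 1)})
```

Two TIT_FOR_TAT machines cooperate throughout, while ALWAYS_DEFECT exploits a cooperator once and then settles into mutual defection.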
3

A Prediction and Decision Framework for Energy Management in Smart Buildings

Poolla, Chaitanya 01 December 2016
By 2040, global CO2 emissions and energy consumption are expected to increase by 40%. In the US, buildings account for 40% of national CO2 emissions and energy consumption, of which 75% is met by fossil fuels. Reducing this impact on the environment requires both improved building energy efficiency and increased renewable utilization. To this end, this dissertation presents a demand-supply-storage-based decision framework to enable strategic energy management in smart buildings. This framework includes important but largely unaddressed aspects pertaining to building demand and supply, such as occupant plugloads and the integration of weather forecast-based solar prediction, respectively. We devote the first part of our work to the study of occupant plugloads, which account for up to 50% of demand in high performance buildings. We investigate the impact of plugload control mechanisms based on the analysis of real-world data from experiments we conducted at NASA Ames Sustainability Base and Carnegie Mellon University (SV campus). Our main contribution is in extending existing demand response approaches to an occupant-in-the-loop paradigm. In the second part of this work, we describe methods to develop weather forecast-based solar prediction models using both local sensor measurements and global weather forecast data from the National Oceanic and Atmospheric Administration (NOAA). We contribute to the state-of-the-art solar prediction models by proposing the incorporation of both local and global weather characteristics into their predictions. This weather forecast-based solar model, together with the plugload-integrated demand model and an energy storage model, constitutes the weather-driven, plugload-integrated decision-making framework for energy management. To demonstrate the utility of this framework, we apply it to solve an optimal decision problem with the objective of minimizing the energy-related operating costs associated with a smart building.
The findings indicate that the optimal decisions can result in savings of up to 74% in the expected operational costs. This framework enables inclusive energy management in smart buildings by accounting for occupants-in-the-loop. Results are presented and discussed in the context of commercial office buildings.
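A toy version of the storage-aware cost calculation at the heart of such a framework might look as follows. All quantities and the lossless-battery assumption are hypothetical simplifications of the demand-supply-storage framework described above, not the dissertation's optimization model.

```python
def operating_cost(demand, solar, price, capacity):
    """Hourly cost with solar-first dispatch and a lossless battery (toy model):
    surplus solar charges the battery up to `capacity`; deficits are served
    from the battery first and the remainder is bought from the grid."""
    soc = 0.0   # battery state of charge
    cost = 0.0
    for d, s, p in zip(demand, solar, price):
        net = d - s
        if net <= 0:
            soc = min(capacity, soc - net)      # store surplus solar
        else:
            from_battery = min(soc, net)
            soc -= from_battery
            cost += (net - from_battery) * p    # buy the rest from the grid
    return cost
```

Comparing the cost with and without storage on the same demand, solar, and price series gives a crude sense of the savings such a decision framework quantifies more rigorously.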
4

Heuristics in Construction Project Management

Sprinkle, Zachary Joseph 30 January 2019
Modern construction projects are delivered in complex, fast-paced environments. Stakeholders are required to participate in dynamic project settings with resource constraints, information constraints, and time constraints. To overcome gaps in knowledge, to deliver decisions quickly, and to overcome human limits in cognitive ability, decision makers typically employ heuristics, or rules of thumb, to arrive at relatively quick answers. Heuristics are cognitive shortcuts that an individual employs to arrive at quick decisions (Goodwin et al., 2004). These heuristics are used in a variety of ways, ranging from using the process of elimination (elimination heuristic) to applying different cognitive weights to options based on recent experience, reputation, or familiarity (Shah and Oppenheimer, 2008). This research aims to identify heuristics present in the implementation phase of construction. By summarizing the results of two studies conducted with a Mid-Atlantic contractor, this thesis identifies seven heuristics commonly used by construction stakeholders. / Master of Science / Modern construction provides a difficult decision-making environment for workers. Construction stakeholders often work in environments with limited time, limited information, and limited knowledge. Decision makers in these environments typically use mental rules of thumb (formally known as heuristics). These rules of thumb help decision makers arrive at quick answers and often increase efficiency. They can be used in a variety of ways. An individual may use the process of elimination to find a solution. Others may base a decision on a company's, person's, or object's reputation. Others may only choose an option that is recognizable. Rules of thumb take many forms and are used by all people. Studying rules of thumb can benefit an industry.
This has already been demonstrated in many industries, such as insurance (Handel & Kolstad, 2015), medicine (Martin et al., 2012), and economics (Grandori, 2010). The construction industry has begun to study rules of thumb that impact the early stages of the construction process, but it still lacks study of the rules of thumb that impact the process of physical construction. This paper aims to assist the construction industry in gaining a fuller view of the decision-making shortcuts used by its stakeholders. By summarizing the results of two studies conducted with a Mid-Atlantic contractor, this thesis outlines seven heuristics used by construction workers.
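The elimination heuristic mentioned above can be illustrated with a short sketch. The function and criteria below are a generic, hypothetical reading of how such a rule of thumb cuts down a decision space, not a model taken from the thesis.

```python
def elimination_heuristic(options, criteria):
    """Apply screening criteria one at a time, discarding failing options,
    until a single option remains (or the criteria run out; if several
    options survive all criteria, the first one is returned)."""
    remaining = list(options)
    for satisfies in criteria:
        passed = [o for o in remaining if satisfies(o)]
        if passed:                 # never eliminate everything
            remaining = passed
        if len(remaining) == 1:
            break
    return remaining[0]
```

The appeal of such shortcuts, as the abstract notes, is that each screening step is cheap and requires no global comparison of all options at once.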
5

Conservation and land use planning applications in Gabon, Central Africa

Lee, Michelle E. January 2014
Spatial prioritization and systematic conservation planning methods are designed to improve land use decisions and conservation outcomes, yet remain underutilized in many biologically-rich places that need them most. This thesis applies the theory and methods developed in the discipline of spatial prioritization to conservation and land use decisions in the Central African country of Gabon. Creating a spatial information base of priority species, habitats and land uses in a region that is notoriously data-poor, I reveal that many features important for both conservation and natural resource production are highly localized; their coincidence has important implications for management. Setting conservation targets for species and habitats, I find that representation in existing protected areas is relatively low, and identify a number of near-optimal solutions that meet all targets, with minimal impact on land used for local livelihoods. I distill these solutions down to a handful of critical biodiversity sites that are top priority to protect, and make management actions explicit for the species and habitats they contain. To make the work more widely applicable, I also develop a novel method to identify where field surveys are most likely to improve decisions about protected area expansion, providing decision-makers with more options of places that could be protected to achieve conservation goals. This study contributes to the research, development and practice of conservation prioritization and spatial planning, particularly in data-poor contexts like Gabon, which still have a wealth of biodiversity, and need to carefully plan for its conservation alongside development.
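A common building block behind such spatial prioritization is greedy, complementarity-based site selection. The sketch below is a generic illustration with hypothetical site and species identifiers, not the thesis's actual planning analysis, which also weighs livelihood impacts and near-optimal alternatives.

```python
def select_sites(site_species, targets):
    """Greedy complementarity: repeatedly choose the site that covers the
    most still-unrepresented target species, until all targets are met
    (or no remaining site adds coverage)."""
    unmet = set(targets)
    chosen = []
    while unmet:
        best = max(site_species, key=lambda s: len(site_species[s] & unmet))
        gain = site_species[best] & unmet
        if not gain:
            break                  # remaining targets cannot be met
        chosen.append(best)
        unmet -= gain
    return chosen
```

Real systematic conservation planning tools refine this idea with costs, spatial compactness, and irreplaceability scores, but the complementarity logic is the same.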
6

Elaborations on Multiattribute Utility Theory Dominance

Vairo, David L 01 January 2019
A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at Virginia Commonwealth University, 2019. Major Director: Dr. Jason Merrick, Supply Chain Management and Analytics. Multiattribute Utility Theory (MAUT) is used to structure decisions with more than one factor (attribute) in play. These decisions become complex when the attributes are dependent on one another. Where linear modeling is concerned with how factors are directly related or correlated with each other, MAUT is concerned with how a decision maker feels about the attributes. This means that direct elicitation of value or utility functions is required. This dissertation focuses on expanding the types of dominance forms used within MAUT. These forms reduce the direct elicitation needed to help structure decisions. Out of this work comes support for current criticisms of the gain/loss separability that is assumed as part of Prospect Theory. As such, an alternative to Prospect Theory is presented, derived from within MAUT, by modeling the probability that an event occurs as an attribute.
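Attribute-wise dominance, the starting point for the dominance forms discussed here, can be sketched directly. The marginal utility functions below are illustrative assumptions; in practice they would come from elicitation with the decision maker.

```python
def dominates(a, b, marginal_utils):
    """True if alternative `a` dominates `b` under attribute-wise dominance:
    at least as good on every attribute's utility and strictly better on
    at least one. `a` and `b` are tuples of attribute levels."""
    ua = [u(x) for u, x in zip(marginal_utils, a)]
    ub = [u(x) for u, x in zip(marginal_utils, b)]
    return all(x >= y for x, y in zip(ua, ub)) and any(x > y for x, y in zip(ua, ub))
```

Whenever one alternative dominates another, the dominated one can be discarded without eliciting trade-off weights, which is exactly the elicitation-reducing role dominance forms play in MAUT.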
7

Optimizing Cardiovascular Disease Screening and Projection Efforts in the United States

Pandya, Ankur January 2012
The objective of this dissertation is to develop and evaluate quantitative models that have the potential to improve cardiovascular disease (CVD) screening and projection efforts in the U.S. Paper 1 assesses the exchangeability of a non-laboratory-based CVD risk score (whose predictors do not include cholesterol) with more commonly used laboratory-based scores, such as the Framingham risk equations. Under conventional thresholds for identifying high-risk individuals, 92-96% of adults in the National Health and Nutrition Examination Survey (NHANES III) were equivalently characterized as high- or low-risk using either type of score. The 10-year CVD death results also suggest that simple CVD risk assessment could be a useful proxy for more expensive laboratory-based screening strategies in the U.S. or other resource-limited settings. Paper 2 uses micro-simulation modeling techniques to evaluate the cost-effectiveness of primary CVD screening using staged laboratory-based and/or non-laboratory-based total CVD risk assessment. The results imply that efficient screening guidelines should include non-laboratory-based risk assessment, either as a single stage or as part of a multistage screening approach. Compared to current CVD screening guidelines, fewer cholesterol tests would be administered and more adults would receive low-cost statins under cost-effective screening policies. Paper 3 examines the trends of CVD risk factors, treatment, and total risk in the U.S. from 1973-2010, and offers projections of these variables for 2015-2030. Nine waves of cross-sectional NHANES data show that the divergent, observed trends in common CVD risk factors (such as smoking, BMI, total cholesterol, and blood pressure) are expected to continue in future years. Age-adjusted CVD risk has decreased over time (during the observed and projected periods), but total risk has increased when considering the impact of aging on CVD risk.
Scenario analyses suggest that strategies targeting cholesterol and blood pressure treatment have the greatest potential to reduce future CVD burden in the U.S.
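The exchangeability check in Paper 1 amounts to measuring classification agreement between two risk scores under a common high-risk threshold. The sketch below uses made-up scores and an illustrative threshold, not NHANES data or the Framingham equations.

```python
def agreement(lab_scores, nonlab_scores, threshold=0.1):
    """Fraction of individuals classified the same way (high vs low risk)
    by laboratory-based and non-laboratory-based scores under a shared,
    illustrative high-risk threshold."""
    same = sum((a >= threshold) == (b >= threshold)
               for a, b in zip(lab_scores, nonlab_scores))
    return same / len(lab_scores)
```

An agreement near 1.0, as the 92-96% figure in the abstract suggests, is what justifies substituting the cheaper non-laboratory score in resource-limited settings.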
8

Sovereign contingent liabilities : a perspective on default and debt crises

Menzies, John Alexander January 2014
Chapters 2-3: A global games approach to sovereign debt crises. The first chapters present a model that investigates the risks involved when a fiscal authority attempts to roll over a stock of debt and there is the potential for coordination failure by investors. A continuum of investors, after receiving signals about the authority's willingness to repay, decides whether to roll over the stock of debt. If an insufficient proportion of investors participates, the authority defaults. With one fiscal authority, private information results in a deterministic outcome. When a public signal is available, the model behaves in a similar manner to a sunspot model. In line with much of the global games literature, improving public information has an ambiguous effect on welfare. Finally, the model is extended to include a second fiscal authority, which captures a similar sunspot result and illustrates the potential for externalities in fiscal policy. Lower debt in the less indebted authority can push a more indebted authority into crisis. Lower debt makes the healthier authority relatively more attractive, which causes the investors to treat the heavily indebted authority more conservatively. In certain circumstances, this is sufficient to cause a coordination failure.

Chapter 4: A debt game with correlated information. This chapter presents a model of debt roll-over in which a continuum of investors receives correlated signals on whether a debtor is solvent or insolvent. The investors face a collective action problem: a sufficient proportion of investors must agree to participate in the debt roll-over for it to be a success. If an insufficient proportion of investors participates in the deal, the debtor will default. The game has a unique switching strategy, which results in global uncertainty being preserved. The ex ante distribution of play (conditional on the true solvency of the debtor) follows a Vasicek credit distribution.
The ex ante probability of a debt crisis is affected by the exogenous model parameters. Of particular interest is the observation that increasing private noise unambiguously reduces the probability of a debt crisis. Unsurprisingly, increasing the fiscal space or return on debt also decreases the probability of a crisis.

Chapter 5: Bailouts and politics. The final chapter examines the political-economic equilibrium in a two-period model with overlapping generations and a financial sector, which is inspired by the model in Tabellini (1989). The public policy is chosen under majority rule by the agents currently alive. It demonstrates that the bailout policy adopted in the second period has important effects on the bank's financing decisions in the first period. By adopting a riskier financing regime (i.e. higher leverage) in the first period, the older generation can extract consumption from the younger generation in the second period. Sovereign backstops of the financial sector are state-contingent: they can appear costless for long periods of time but eventually result in a socialization of private-sector debt. It is this mechanism that makes implementing capital requirements costly to investors yet beneficial to the younger generation. The model also highlights two important issues: (i) bank capital is endogenous and (ii) proposed resolution mechanisms must be politically credible. It suggests that a major benefit of increasing and narrowing equity-capital requirements or increasing liquidity ratios is that they are implemented ex ante and therefore available either to absorb losses in the event of a crisis or to reduce the possibility of large drops in asset values. Finally, this chapter also provides a structure by which to interpret the stylized facts of Calomiris et al. (2014): that more populist political institutions are associated with more fragile financial systems.
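The roll-over coordination game can be caricatured in a short Monte Carlo simulation: investors observe the debtor's underlying strength through noisy private signals (correlated via the common factor) and follow a switching strategy, and default occurs when participation falls short. All parameter values are illustrative, and the closed-form global-games analysis is replaced here by brute-force simulation.

```python
import random

def default_probability(strength_mean, noise, threshold, required,
                        n_investors=200, trials=200, seed=7):
    """Sketch: draw the debtor's strength (common factor), give each investor
    a private noisy signal, and count a default whenever the fraction that
    rolls over (signal above the switching threshold) is below `required`."""
    rng = random.Random(seed)
    defaults = 0
    for _ in range(trials):
        strength = rng.gauss(strength_mean, 1.0)
        rolled = sum(rng.gauss(strength, noise) > threshold
                     for _ in range(n_investors))
        defaults += (rolled / n_investors) < required
    return defaults / trials
```

A stronger debtor (higher mean strength) should face a lower simulated crisis probability, mirroring the comparative statics in the chapter; the effect of private noise in the actual model is subtler and is not captured faithfully by this toy.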
9

Stochastic agent-based modelling for reality : dynamic discrete choice analysis with interaction

Takama, Takeshi January 2005
This D.Phil. thesis develops a new agent-based simulation model to improve on the results of analysis that solely uses discrete choice modelling, and to analyse the effects of a road user charging scheme for the Upper Derwent Valley in the Peak District National Park. The advantages of discrete choice analysis are well known. However, results from these conventional approaches, which conduct analysis solely with discrete choice models, can be biased if interaction and learning effects are significant. The Minority Game, in which agents try to choose the option of the minority side, is an appropriate tool to deal with these problems. The situation in the Upper Derwent Valley can be explained with economic game theories and the Minority Game. The two approaches mutually help to analyse the situation in the Upper Derwent Valley, leading to the development of a stochastic Minority Game. The stochastic Minority Game was tested with an online game (questionnaire), which was played 3,886 times by respondents from around the world. The practical part of this thesis examines the components of the stochastic Minority Game with data collected around the Upper Derwent Valley. The main data were collected using a stated preference survey. Overall, 700 questionnaires were distributed and 323 of them were returned (a return rate of 46.1%). In the practical part, the agent-based model has four sub-modules: 1) a multinomial mixed logit model for mode choice, 2) a binary logit model for parking location choice, 3) a Markov queue model for the parking network, and 4) the Minority Game for parking congestion and learning. This simulation model produces comprehensive outputs including mode choices, congestion levels, and user utilities. The results show that the road user charging scheme reduces car demand in the Upper Derwent Valley and ensures a reduction in congestion at the parking areas.
The model also shows that an exemption will increase the utilities of elderly visitors without substantially sacrificing those of younger visitors. In conclusion, the simulation model demonstrated that oversimplification in conventional approaches solely using discrete choice models gave significant biases when real world problems were analysed.
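A bare-bones Minority Game, the mechanism underpinning the parking congestion sub-module, can be sketched as follows. Agents with fixed random lookup strategies over the recent history of winning sides pick one of two options each round; those on the minority side win. This is a generic textbook version, not the thesis's stochastic extension, and all parameters are illustrative.

```python
import itertools
import random

def minority_game(n_agents=101, rounds=200, memory=2, seed=1):
    """Each round every agent picks side 0 or 1 using a fixed random lookup
    table keyed by the last `memory` winning sides; the minority side wins.
    Returns each agent's win count."""
    rng = random.Random(seed)
    history = tuple(rng.randint(0, 1) for _ in range(memory))
    strategies = [{h: rng.randint(0, 1)
                   for h in itertools.product((0, 1), repeat=memory)}
                  for _ in range(n_agents)]
    wins = [0] * n_agents
    for _ in range(rounds):
        choices = [s[history] for s in strategies]
        minority = 0 if 2 * sum(choices) > n_agents else 1   # fewer agents chose it
        wins = [w + (c == minority) for w, c in zip(wins, choices)]
        history = history[1:] + (minority,)
    return wins
```

Because fewer than half the agents can win each round, the game captures congestion settings like parking, where the attractive choice is the one most others avoid.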
10

Use of decision science to aid selection of genetically superior animals : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Animal Science at Massey University, Palmerston North, New Zealand

Sherriff, Ryan Leith January 2010
This thesis is concerned with a theoretical simulation model for pig breeding, as part of the ongoing search for the "perfect" genotype. The starting point is an additive model to investigate how accurately the classical, infinitesimal model predicts genetic gain for traits controlled by few loci and few alleles. This initial investigation demonstrates that the infinitesimal model is robust, provided that at least 15 loci control a trait and there is symmetry in the allele distributions. A Genotype-Pig (GE-Pig) model is then developed to apply the additive effects of alleles to sub-phenotypic traits such as maximum protein deposition, minimum lipid to protein content in the whole body, ad libitum digestible energy intake, energy for maintenance requirement, and water content in the whole body. These parameters are then used in a nutrient partitioning simulation model to grow a pig and calculate traditional breeding traits such as average daily gain, feed conversion ratio, and backfat thickness for any combination of alleles. Three algorithms (a Genetic Algorithm, Tabu Search, and Simulated Annealing) are used to investigate the GE-Pig model and find optimal combinations of alleles for different dietary and selection objective situations. The two diets investigated were of either low or high quality, and the three selection objectives were maximising average daily gain, minimising feed conversion ratio, and minimising backfat. A graphical method is developed for easy comparison of the genotypes. Of the algorithms, the Genetic Algorithm performed the best, followed by Tabu Search and finally Simulated Annealing. It is demonstrated that, in general, there is a different, single optimum for any given selection objective and diet. However, under the backfat selection objective, both diets produce the same optimal genotype. There are also many similarities between the optima for the average daily gain and feed conversion ratio selection objectives.
When the theoretical minimum number of generations of selection to reach the optima is considered, the feed conversion ratio selection objective is the quickest for a breeding program to achieve the optimal solutions, followed by backfat, then average daily gain. It is demonstrated that diet also has an effect on the theoretical number of generations. A multiple selection objective, using relative economic values applied to the individual selection objectives, is also investigated. For both diets, the majority of the multiple selection objective solutions are in the vicinity of the feed conversion ratio optima, indicating that feed conversion ratio is the most prominent factor. It is also demonstrated that the optimal solution is most affected by the objective parameter weights under low-quality diet conditions.
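The genetic-algorithm search over allele combinations can be illustrated with a minimal binary version. The sketch treats a genotype as one bit per locus and maximises an arbitrary fitness function supplied by the caller; the population size, tournament selection, and mutation rate are illustrative choices, not the thesis's actual settings, and the real fitness would come from the GE-Pig growth simulation.

```python
import random

def genetic_algorithm(fitness, n_loci=15, pop_size=30, generations=40, seed=0):
    """Search binary genotypes (one bit per locus) for high fitness using
    binary tournament selection, one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_loci)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():   # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_loci)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                  # bit-flip mutation
                i = rng.randrange(n_loci)
                child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

With `fitness=sum` this reduces to the classic OneMax problem, a stand-in for "count of favourable alleles"; Tabu Search and Simulated Annealing would explore the same genotype space with different move rules.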
