101
Application of Stochastic Decision Models to Solid Waste Management
Wright, William Ervin, 08 1900
This research applies stochastic decision tree analytical techniques to a decision of the type a small community may face when choosing a solid waste disposal system from among several alternatives. Specifically targeted are situations in which a community finds itself (1) lying at or near the boundary of a central planning area, (2) in a position to exercise one of several disposal options, and (3) with access to the data base on solid waste that has been systematically developed by a central planning agency. The options available may or may not be optimal in terms of total cost, either to the community or to adjacent communities that participate in centrally coordinated or jointly organized activities. The study suggests that stochastic simulation models, drawing upon a data base developed by central planning agencies in cases where local data are inadequate or not available, can be useful in evaluating disposal alternatives at the community level. Further, the decision tree can be usefully employed to communicate results of the analysis. Some important areas of further research on the small community disposal system selection problem are noted.
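As a rough illustration of the kind of stochastic simulation the abstract describes, the sketch below evaluates a few disposal alternatives by Monte Carlo sampling of uncertain costs and compares expected present-value costs. The alternative names, cost ranges, waste quantities, and discount rate are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch only: a stochastic evaluation of disposal alternatives by
# Monte Carlo simulation. All figures and distributions here are hypothetical.
import random

ALTERNATIVES = {
    # fixed annual cost, (low, high) range for an uncertain per-ton cost
    "local_landfill":     {"fixed": 40_000, "per_ton": (18, 30)},
    "regional_transfer":  {"fixed": 25_000, "per_ton": (22, 38)},
    "joint_incineration": {"fixed": 60_000, "per_ton": (12, 26)},
}

def simulate_cost(alt, years=20, tons0=5_000, growth=(0.00, 0.03), rate=0.07):
    """Present value of one random cost path for a disposal alternative."""
    g = random.uniform(*growth)               # uncertain waste growth rate
    per_ton = random.uniform(*alt["per_ton"]) # uncertain unit disposal cost
    pv = 0.0
    for t in range(years):
        tons = tons0 * (1 + g) ** t
        pv += (alt["fixed"] + per_ton * tons) / (1 + rate) ** t
    return pv

def expected_costs(n_draws=10_000):
    return {name: sum(simulate_cost(a) for _ in range(n_draws)) / n_draws
            for name, a in ALTERNATIVES.items()}

if __name__ == "__main__":
    for name, cost in sorted(expected_costs().items(), key=lambda kv: kv[1]):
        print(f"{name:20s} expected PV cost = ${cost:,.0f}")
```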
102
Development of a Technology Transfer Score for Evaluating Research Proposals: Case Study of Demand Response Technologies in the Pacific Northwest
Estep, Judith, 13 February 2017
Investment in Research and Development (R&D) is necessary for innovation, allowing an organization to maintain a competitive edge. The U.S. Federal Government invests billions of dollars, primarily in basic research technologies, to help fill the pipeline for other organizations to take the technology into commercialization. However, it is not just about investing in innovation; it is about converting that research into application. A cursory review of research proposal evaluation criteria suggests that little to no emphasis is placed on the transfer of research results. This effort is motivated by the need to move research into application.
One segment facing technology challenges is the energy sector. Historically, the electric grid has been stable and predictable; therefore, there were no immediate drivers to innovate. However, an aging infrastructure, the integration of renewable energy, and aggressive energy efficiency targets are motivating the need for research and for putting promising results into application. Many technologies exist or are in development, but the rate at which they are being adopted is slow.
The goal of this research is to develop a decision model that can be used to identify the technology transfer potential of a research proposal. An organization can use the model to select the proposals whose research outcomes are more likely to move into application. The model begins to close the chasm between research and application -- otherwise known as the "valley of death."
A comprehensive literature review was conducted to understand when the idea of technology application or transfer should begin. Next, the attributes necessary for successful technology transfer were identified. Successful technology transfer hinges on a productive relationship between the researchers and the technology recipient. A hierarchical decision model, along with desirability curves, was used to capture the complexities of the researcher-recipient relationship specific to technology transfer. In this research, the evaluation criteria of several research organizations were assessed to understand the extent to which the success attributes identified in the literature are considered when reviewing research proposals. While some of the organizations included a few of the success attributes, none considered all of them, and none quantified the value of the success attributes.
The effectiveness of the model relies extensively on expert judgment for validation and quantification. Subject matter experts, ranging from senior executives with extensive technology transfer experience to principal research investigators from national labs, universities, utilities, and non-profit research organizations, were engaged to ensure a comprehensive and cross-functional validation and quantification of the decision model.
The quantified model was validated using a case study involving demand response (DR) technology proposals in the Pacific Northwest. The DR technologies were selected for their potential to solve some of the region's most prevalent issues. In addition, several sensitivity scenarios were developed to test the model's response to extreme cases, the impact of perturbations in expert responses, and whether the model can be applied to technologies other than demand response; in other words, is the model technology agnostic? The model's flexibility as a communication tool was also assessed: it can indicate which success attributes in a research proposal are deficient and need strengthening, and how improvements would raise the overall technology transfer score. The low-scoring success attributes in the case study proposals (e.g., project meetings) were clearly identified as the areas to improve in order to increase the technology transfer score. As a communication tool, the model could help a research organization identify areas to bolster to improve its overall technology transfer score. Similarly, the technology recipient could use the results to identify areas that need reinforcement while the research is ongoing.
The research objective is to develop a decision model that yields a technology transfer score for assessing the technology transfer potential of a research proposal. The score can be used by an organization in developing its research portfolio. An organization's growth in a highly competitive global market hinges on superior R&D performance and the ability to apply the results, and the energy sector is no different. While sufficient research is being done to address the issues facing the utility industry, the rate at which technologies are adopted lags. The technology transfer score has the potential to improve the odds of crossing the chasm to application by helping an organization make informed and deliberate decisions about its research portfolio.
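As a hedged sketch of how a hierarchical scoring model with desirability curves can roll attribute measurements up into a single technology transfer score, the example below uses made-up perspectives, weights, and curve shapes; the dissertation's actual attributes and expert-elicited weights differ.

```python
# Hypothetical sketch of a hierarchical scoring model with desirability curves.
# Perspective names, weights, and curve shapes are assumptions for illustration.

# Expert-derived weights would normally come from pairwise comparisons;
# here they are simply assumed and sum to 1.
WEIGHTS = {
    "researcher_capability": 0.35,
    "recipient_engagement":  0.40,
    "linkage_mechanisms":    0.25,
}

# A desirability curve maps a raw attribute measurement to a 0-100 value.
def desirability_meetings_per_quarter(x):
    # assumed: value saturates at about four joint project meetings per quarter
    return min(100.0, 25.0 * x)

def desirability_prior_transfers(x):
    # assumed: 0 prior transfers scores 20, five or more scores 100
    return min(100.0, 20.0 + 16.0 * x)

def technology_transfer_score(proposal):
    """Weighted sum of desirability values, yielding a 0-100 score."""
    values = {
        "researcher_capability": desirability_prior_transfers(proposal["prior_transfers"]),
        "recipient_engagement":  desirability_meetings_per_quarter(proposal["meetings_per_quarter"]),
        "linkage_mechanisms":    proposal["linkage_value"],  # already on a 0-100 scale
    }
    return sum(WEIGHTS[k] * values[k] for k in WEIGHTS)

proposal = {"prior_transfers": 3, "meetings_per_quarter": 1, "linkage_value": 60}
print(f"technology transfer score: {technology_transfer_score(proposal):.1f}")
# A low-scoring attribute (here, few project meetings) points to where the
# proposal could be strengthened to raise the overall score.
```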
103
A Scoring Model to Assess Organizations' Technology Transfer Capabilities: The Case of a Power Utility in the Northwest USA
Lavoie, João Ricardo, 10 May 2019
This research intends to advance knowledge in the technology management field, particularly in the study of organizations that develop technologies in-house and wish to enhance their technology transfer performance while maintaining adherence between R&D activities and overall business strategies. The objective was to build a multi-criteria decision-making model capable of producing a technology transfer score, which practitioners can use to assess and later improve their organizations' technology transfer capabilities, ultimately aiming to improve technology development as a whole. The model was applied to a major power utility in the Pacific Northwest of the United States. The introduction provides basic background on the topic along with the problem statement; this chapter situates the reader within the boundaries of the topic while highlighting its importance within the technology management field of study. The second chapter is the literature review. It presents general and specific information on technology transfer, including its complexities, gaps, relationships with other fields, and its characteristics within the energy realm. It also sheds light on how the alignment between R&D and business strategy is treated in the literature, discussing some of the methods used and their shortcomings. Additionally, the literature review presents an analysis that builds the argument for a continuous technology transfer process and shows how such a process would help align R&D and business strategy. The third chapter presents the methodological approach, hierarchical decision modeling (HDM) aided by action research, which constitutes a methodological novelty piloted and validated throughout the development of the study. The fourth chapter details the model development process step by step, and the fifth chapter details the model application process with the analysis of the aforementioned organization. Results are then interpreted and analyzed, and insights for the specific case and for technology managers in general are discussed. Lastly, the study's contributions to the body of knowledge are discussed, along with its limitations and future research opportunities.
104
Rational design theory: a decision-based foundation for studying design methods
Thompson, Stephanie C., 22 January 2011
While design theories provide a foundation for representing and reasoning about design methods, existing design theories do not explicitly include uncertainty considerations or recognize tradeoffs between the design artifact and the design process. These limitations prevent the existing theories from adequately describing and explaining observed or proposed design methods.
In this thesis, Rational Design Theory is introduced as a normative theoretical framework for evaluating prescriptive design methods. This new theory is based on a two-level perspective of design decisions in which the interactions between the artifact and the design process decisions are considered. Rational Design Theory consists of normative decision theory applied to design process decisions, and is complemented by a decision-theory-inspired conceptual model of design.
The application of decision analysis to design process decisions provides a structured framework for the qualitative and quantitative evaluation of design methods. The qualitative evaluation capabilities are demonstrated in a review of the systematic design method of Pahl and Beitz. The quantitative evaluation capabilities are demonstrated in two example problems. In these two quantitative examples, Value of Information analysis is investigated as a strategy for deciding when to perform an analysis to gather additional information in support of a choice between two design concepts. Both quantitative examples demonstrate that Value of Information achieves very good results when compared to a more comprehensive decision analysis that allows for a sequence of analyses to be performed.
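The following minimal sketch illustrates the Value of Information idea described above for a choice between two design concepts: it compares the expected payoff of deciding now with the expected payoff of deciding after a (perfect) analysis, and recommends running the analysis only if the difference exceeds its cost. All payoffs, probabilities, and the analysis cost are hypothetical and far simpler than the thesis's examples.

```python
# Hypothetical Value of Information check for choosing between two design concepts.
# Concept A has a known payoff; concept B's payoff is uncertain.
PAYOFF_A = 100.0
B_SCENARIOS = [(0.6, 150.0), (0.4, 40.0)]   # (probability, payoff) pairs

def expected(scenarios):
    return sum(p * v for p, v in scenarios)

# Decision without further analysis: pick the larger prior expected payoff.
prior_best = max(PAYOFF_A, expected(B_SCENARIOS))

# Decision with a (perfect) analysis: observe B's payoff first, then choose.
posterior_best = sum(p * max(PAYOFF_A, v) for p, v in B_SCENARIOS)

evpi = posterior_best - prior_best   # expected value of perfect information
ANALYSIS_COST = 10.0                 # assumed cost of running the analysis

print(f"prior expected payoff    : {prior_best:.1f}")
print(f"with perfect information : {posterior_best:.1f}")
print(f"EVPI                     : {evpi:.1f}")
print("run the analysis" if evpi > ANALYSIS_COST else "decide now without analysis")
```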
105
A computational model of engineering decision making
Heller, Collin M., 13 January 2014
The research objective of this thesis is to formulate and demonstrate a computational framework for modeling the design decisions of engineers. This framework is intended to be descriptive in nature as opposed to prescriptive or normative; the output of the model represents a plausible result of a designer's decision-making process. The framework decomposes the decision into three elements: the problem statement, the designer's beliefs about the alternatives, and the designer's preferences. Multi-attribute utility theory is used to capture designer preferences for multiple objectives under uncertainty. Machine-learning techniques are used to store the designer's knowledge and to make Bayesian inferences regarding the attributes of alternatives. These models are integrated into the framework of a Markov decision process to simulate multiple sequential decisions. The overall framework enables the designer's decision problem to be transformed into an optimization problem statement; the simulated designer selects the alternative with the maximum expected utility. Although utility theory is typically viewed as a normative decision framework, the perspective in this research is that the approach can be used in a descriptive context for modeling rational and non-time-critical decisions by engineering designers. This approach is intended to enable the formalisms of utility theory to be used to design human-subjects experiments, based on pairwise lotteries and other preference-elicitation methods, involving engineers in design organizations. The results of these experiments would substantiate the selection of parameters in the model and enable it to be used to diagnose potential problems in engineering design projects.
The purpose of the decision-making framework is to enable the development of a design process simulation of an organization involved in the development of a large-scale complex engineered system such as an aircraft or spacecraft. The decision model will allow researchers to determine the broader effects of individual engineering decisions on the aggregate dynamics of the design process and the resulting performance of the designed artifact itself. To illustrate the model's applicability in this context, the framework is demonstrated on three example problems: a one-dimensional decision problem, a multidimensional turbojet design problem, and a variable fidelity analysis problem. Individual utility functions are developed for designers in a requirements-driven design problem and then combined into a multi-attribute utility function. Gaussian process models are used to represent the designer's beliefs about the alternatives, and a custom covariance function is formulated to more accurately represent a designer's uncertainty in beliefs about the design attributes.
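A minimal sketch of the expected-utility selection step is given below. It assumes independent normal beliefs about each attribute instead of the Gaussian-process models used in the thesis, and the alternatives, attribute ranges, and additive utility weights are hypothetical.

```python
# Hedged sketch: a simulated designer picks the alternative with the highest
# expected multi-attribute utility under uncertain beliefs about attributes.
import random

# Designer beliefs about each alternative's attributes: (mean, std dev).
ALTERNATIVES = {
    "turbojet_A": {"thrust_kN": (95.0, 8.0),  "sfc": (0.82, 0.05)},
    "turbojet_B": {"thrust_kN": (105.0, 15.0), "sfc": (0.88, 0.10)},
}

def u_thrust(x):   # higher thrust is better, saturating at 120 kN (assumed)
    return max(0.0, min(1.0, x / 120.0))

def u_sfc(x):      # lower specific fuel consumption is better (assumed scaling)
    return max(0.0, min(1.0, (1.2 - x) / 0.6))

WEIGHTS = {"thrust_kN": 0.6, "sfc": 0.4}   # assumed additive utility weights

def expected_utility(beliefs, n=20_000):
    total = 0.0
    for _ in range(n):
        thrust = random.gauss(*beliefs["thrust_kN"])
        sfc = random.gauss(*beliefs["sfc"])
        total += WEIGHTS["thrust_kN"] * u_thrust(thrust) + WEIGHTS["sfc"] * u_sfc(sfc)
    return total / n

choice = max(ALTERNATIVES, key=lambda a: expected_utility(ALTERNATIVES[a]))
print("simulated designer selects:", choice)
```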
106
Distributed decision and communication problems in tactical USAF command and control : annual technical report for period ...
January 1900
Alexander H. Levis [et al.]. Prepared for Air Force [Office] of Scientific Research, Bolling Air Force Base, Washington, D.C. Contract AFOSR-80-0229. Description based on: July 1981/June 1982.
107
Successful delivery of an online higher education course: a quantitative management framework
Burger, Dimitri, January 2017
South Africa faces several challenges in higher education: access, quality, effectiveness of course delivery, and funding. The bulk of the burden falls on traditional institutions, most notably universities, to provide higher education to a growing youth base in dire need of education that supports their individual learning needs. Given these pressures on traditional universities, online higher education offered by both public institutions and private providers can be a valuable alternative and a partial solution to the access problem. Online and face-to-face education differ considerably in how they deliver courses to students. Many have argued that these differences reflect strengths of face-to-face education and limitations of online education large enough to justify selecting the former over the latter as the better mode of delivery. While some online programmes have failed to deliver courses successfully by underutilising or misusing the tools and techniques available, there are also positive examples in which online programmes perform as well as face-to-face courses. The defining difference is most often the management of these courses' resources, activities, people, processes, and practices. Against this background, and drawing on the available literature, a conceptual and theoretical framework was constructed and a quantitative study was undertaken to test for significant correlational relationships between elements of course delivery and a management framework to govern those elements. The sample consisted of 115 students from a postgraduate degree programme presented in two formats, online and on-campus. The findings provide evidence of significant relationships between the core functions of management, and between aspects of course delivery such as opportunities for interaction, opportunities for feedback, and course content, in achieving learning outcomes and fostering student engagement. The findings also indicate positive student perceptions of the delivery of the courses.
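As a hypothetical illustration of the correlational analysis such a study relies on, the snippet below computes Pearson's r between two survey scales; the item names and response values are invented and are not the study's data.

```python
# Invented survey data: Pearson correlation between two 5-point scales.
from math import sqrt

interaction = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]   # perceived opportunities for interaction
outcomes    = [4, 5, 3, 4, 3, 5, 4, 2, 5, 4]   # perceived achievement of learning outcomes

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"r = {pearson_r(interaction, outcomes):.2f}")
```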
108
Assessment centers and group decision making: Substituting the arithmetic mean for the traditional consensus discussion
Gust, Jeffrey Allen, 01 January 1998
No description available.
109
Essays on Dynamic Optimization for Markets and Networks
Gan, Yuanling, January 2023
We study dynamic decision-making problems in networks and markets under uncertainty about future payoffs. This problem is difficult in general because 1) although the current decision (potentially) affects future decisions, the decision-maker does not have exact information about future payoffs when committing to the current decision; and 2) the decision made at one part of the network usually interacts with decisions made at other parts of the network, which makes the computation scale rapidly with the network size and brings computational challenges in practice. In this thesis, we propose computationally efficient methods to solve dynamic optimization problems on markets and networks, specify a general set of conditions under which the proposed methods give theoretical guarantees of global near-optimality, and provide numerical studies to verify the performance empirically. The proposed methods/algorithms share a general theme of "local algorithms", meaning that the decision at each node/agent on the network uses only partial information about the network.
In the first part of this thesis, we consider a network model with stochastic uncertainty about future payoffs. The network has a bounded degree, and each node takes a discrete decision at each period, leading to a per-period payoff which is a sum of three parts: node rewards for individual node decisions, temporal interactions between individual node decisions from the current and previous periods, and spatial interactions between decisions from pairs of neighboring nodes. The objective is to maximize the expected total payoffs over a finite horizon. We study a natural decentralized algorithm (whose computational requirement is linear in the network size and planning horizon) and prove that our decentralized algorithm achieves global near-optimality when temporal and spatial interactions are not dominant compared to the randomness in node rewards. Decentralized algorithms are parameterized by the locality parameter L: An L-local algorithm makes its decision at each node v based on current and (simulated) future payoffs only up to L periods ahead, and only in an L-radius neighborhood around v. Given any permitted error ε > 0, we show that our proposed L-local algorithm with L = O(log(1/ε)) has an average per-node-per-period optimality gap bounded above by ε, in networks where temporal and spatial interactions are not dominant. This constitutes the first theoretical result establishing the global near-optimality of a local algorithm for network dynamic optimization.
In the second part of this thesis, we consider the previous three types of payoff functions under adversarial uncertainty about the future. In general, there are no performance guarantees for arbitrary payoff functions. We consider an additional convexity structure in the individual node payoffs and interaction functions, which helps us leverage the tools in the broad Online Convex Optimization literature. In this work, we study the setting where there is a trade-off between developing future predictions for a longer lookahead horizon, denoted as k, and increasing the spatial radius for decentralized computation, denoted as r. When making individual node decisions at each time, each node has access to predictions of local cost functions for the next k time steps in an r-hop neighborhood. Our work proposes a novel online algorithm, Localized Predictive Control (LPC), which generalizes predictive control to multi-agent systems. We show that LPC achieves a competitive ratio approaching 1 exponentially fast in ρT and ρS in an adversarial setting, where ρT and ρS are constants in (0, 1) that increase with the relative strength of temporal and spatial interaction costs, respectively. This is the first competitive ratio bound on decentralized predictive control for networked online convex optimization. Further, we show that the dependence on k and r in our results is near-optimal by lower bounding the competitive ratio of any decentralized online algorithm.
In the third part of this work, we consider a general dynamic matching model for online competitive gaming platforms. Players arrive stochastically with a skill attribute, the Elo rating. The distribution of Elo is known and i.i.d. across players; however, an individual's rating is only observed upon arrival. Matching two players with different skills incurs a match cost. The goal is to minimize a weighted combination of waiting costs and matching costs in the system. We investigate a popular heuristic used in industry to trade off between these two costs, the Bubble algorithm. The algorithm places arriving players on the Elo line with a growing bubble around them. When two bubbles touch, the two players get matched. We show that, with the optimal bubble expansion rate, the Bubble algorithm achieves a constant-factor ratio against the offline optimal cost when the match cost (resp. waiting cost) is a power of Elo difference (resp. waiting time). We use players' activity-log data from a gaming start-up to validate our approach and further provide guidance on how to tune the Bubble expansion rate in practice.
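The sketch below illustrates the Bubble heuristic described above: waiting players sit on the Elo line with bubbles whose radii grow linearly with waiting time, and two players are matched when their bubbles touch. For simplicity, bubble contact is checked only at arrival epochs (not in continuous time), and the expansion rate and arrival stream are invented.

```python
# Hedged sketch of the Bubble matchmaking heuristic; all numbers are hypothetical.
BUBBLE_RATE = 5.0   # assumed Elo units of radius gained per unit of waiting time

def bubble_match(arrivals):
    """arrivals: list of (arrival_time, elo), sorted by arrival_time."""
    waiting = []    # (arrival_time, elo) of unmatched players
    matches = []
    for now, elo in arrivals:
        waiting.append((now, elo))
        matched = True
        while matched and len(waiting) >= 2:
            matched = False
            for i in range(len(waiting)):
                for j in range(i + 1, len(waiting)):
                    (ti, ei), (tj, ej) = waiting[i], waiting[j]
                    radius_i = BUBBLE_RATE * (now - ti)
                    radius_j = BUBBLE_RATE * (now - tj)
                    if abs(ei - ej) <= radius_i + radius_j:   # bubbles touch
                        matches.append((ei, ej, now))
                        waiting = [w for k, w in enumerate(waiting) if k not in (i, j)]
                        matched = True
                        break
                if matched:
                    break
    return matches, waiting

arrivals = [(0.0, 1500), (1.0, 1620), (2.5, 1480), (3.0, 1510), (10.0, 1650)]
matches, still_waiting = bubble_match(arrivals)
print("matches:", matches)
print("still waiting:", still_waiting)
```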
110
Essays on Fair Operations
Xia, Shangzhou, January 2024
Fairness emerges as a concern as vital to decision makers as efficiency, if not more so. Fair operations decisions aim at distributive justice in various scenarios. In this dissertation, we study two examples of distributively fair decision making in operations research: a dynamic fair allocation problem and a subpopulational robustness assessment problem for machine learning models.
We first study a dynamic allocation problem in which 𝑇 sequentially arriving divisible resources are to be allocated to a number of agents with concave utilities. The joint utility functions of each resource to the agents are drawn stochastically from a known joint distribution, independently and identically across time, and the central planner makes immediate and irrevocable allocation decisions. Most works on dynamic resource allocation aim to maximize the utilitarian welfare, i.e., the efficiency of the allocation, which may result in unfair concentration of resources on certain high-utility agents while leaving others' demands under-fulfilled. In this work, aiming at balancing efficiency and fairness, we instead consider a broad collection of welfare metrics, the Hölder means, which includes the Nash social welfare and the egalitarian welfare.
To this end, we first study a fluid-based policy derived from a deterministic surrogate to the underlying problem and show that for all smooth Hölder mean welfare metrics it attains an 𝑂 (1) regret over the time horizon length 𝑇 against the hindsight optimum, i.e., the optimal welfare if all utilities were known in advance of deciding on allocations. However, when evaluated under the non-smooth egalitarian welfare, the fluid-based policy attains a regret of order 𝛩 (√𝑇). We then propose a new policy built thereupon, called Backward Infrequent Re-solving (𝖡𝖨𝖱), which consists of re-solving the deterministic surrogate problem at most 𝑂 (log 𝑇) times. We show under a mild regularity condition that it attains a regret against the hindsight optimal egalitarian welfare of order 𝑂 (1) when all agents have linear utilities and 𝑂 (log 𝑇) otherwise. We further propose the Backward Infrequent Re-solving with Thresholding (𝖡𝖨𝖱𝖳) policy, which enhances the 𝖡𝖨𝖱 policy with thresholding adjustments and performs similarly well without any assumption whatsoever. More specifically, we prove the 𝖡𝖨𝖱𝖳 policy attains an 𝑂 (1) regret independently of the horizon length 𝑇 when all agents have linear utilities and 𝑂 (log^{2+𝜀} 𝑇) otherwise. We conclude by presenting numerical experiments to corroborate our theoretical claims and to illustrate the significant performance improvement against several benchmark policies.
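For reference, the Hölder (power) mean welfare family referred to above can be computed directly; the sketch below shows how the exponent interpolates between utilitarian welfare (p = 1), Nash social welfare (the p → 0 geometric-mean limit), and egalitarian welfare (the p → -∞ minimum limit). The utility values are illustrative only.

```python
# Hölder mean of agent utilities: M_p(u) = (mean(u_i ** p)) ** (1 / p),
# with the p = 0 case taken as the geometric mean (Nash social welfare).
from math import exp, log

def holder_mean(utilities, p):
    n = len(utilities)
    if p == 0:   # limit case: geometric mean
        return exp(sum(log(u) for u in utilities) / n)
    return (sum(u ** p for u in utilities) / n) ** (1.0 / p)

utilities = [4.0, 1.0, 2.5, 3.0]   # made-up utilities for four agents
for p in (1, 0, -1, -10, -100):
    print(f"p = {p:>4}: welfare = {holder_mean(utilities, p):.3f}")
print(f"egalitarian (min): {min(utilities):.3f}")   # the p -> -inf limit
```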
The performance of ML models degrades when the training population is different from that seen under operation. Towards assessing distributional robustness, we study the worst-case performance of a model over 𝒂𝒍𝒍 subpopulations of a given size, defined with respect to core attributes 𝑍. This notion of robustness can consider arbitrary (continuous) attributes 𝑍, and automatically accounts for complex intersectionality in disadvantaged groups. We develop a scalable yet principled two-stage estimation procedure that can evaluate the robustness of state-of-the-art models. We prove that our procedure enjoys several finite-sample convergence guarantees, including 𝒅𝒊𝒎𝒆𝒏𝒔𝒊𝒐𝒏-𝒇𝒓𝒆𝒆 convergence. Instead of overly conservative notions based on Rademacher complexities, our evaluation error depends on the dimension of 𝑍 only through the out-of-sample error in estimating the performance conditional on 𝑍. On real datasets, we demonstrate that our method certifies the robustness of a model and prevents deployment of unreliable models.
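As a simplified illustration of the worst-case subpopulation notion, the sketch below computes the empirical worst-case average loss over any subgroup containing at least an alpha fraction of the sample, i.e. the mean of the largest ceil(alpha * n) per-example losses. The dissertation's two-stage estimator instead conditions on core attributes Z and carries the convergence guarantees described above; the losses here are made-up numbers.

```python
# Worst-case average loss over subgroups of relative size at least alpha,
# computed from per-example losses (illustrative values only).
from math import ceil

def worst_case_subgroup_loss(losses, alpha):
    k = max(1, ceil(alpha * len(losses)))
    worst_k = sorted(losses, reverse=True)[:k]
    return sum(worst_k) / k

losses = [0.05, 0.10, 0.02, 0.40, 0.08, 0.55, 0.07, 0.03, 0.12, 0.30]
for alpha in (1.0, 0.5, 0.2, 0.1):
    print(f"alpha = {alpha:>4}: worst-case subgroup loss = "
          f"{worst_case_subgroup_loss(losses, alpha):.3f}")
```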