51.
An improved approach for "optimization" of multiple policy objectives. Tawfik, Perihan Soliman, 01 January 1977 (has links)
This thesis involved three categories of activity: development and testing of an expanded version of ELECTRE II, development of a computer software program for ELECTRE II, and comparative testing of group decision quality. The expanded version of ELECTRE II took the form of an input-aiding questionnaire along with a tailored structure to suit a particular problem. The contents of the questionnaire were based on general problem-solving concepts (techniques, strategies) gleaned from the systems science literature. The questionnaire assumed a programmed-instruction format, in contrast to that of an interactive computer software package, so that its use would not be prohibitively expensive. The second part of the research was the comparative testing of group decision quality. The improved ELECTRE II was compared to a competing method called SPAN, to regular ELECTRE II, and to unaided group decision-making. The effectiveness of the improved "Front End" ELECTRE II was tested as follows: TREATMENT Group A: Decision using ELECTRE II with the improved Front End. CONTROLS Group B: Unaided decision. Group C: Decision using regular ELECTRE II. Group D: Decision using the "SPAN" consensus-taking method. The hypothesis that ELECTRE II and Front End ELECTRE II provide as good a basis for group decision making as SPAN (for which numerous claims of effectiveness had been made) was tested using appropriate statistical methods. Results of the experiments showed that regular ELECTRE II did not perform as well as SPAN. However, the improved version of ELECTRE II developed for this thesis performed as well as, but not better than, SPAN. It is important to note, however, that the experimental task was clearly not favorable to ELECTRE II. Had the task displayed more complexity, we believe the improved version of ELECTRE II would have outperformed SPAN. We feel that our results provide evidence for the value of this improved version of ELECTRE II which, we hope, will lead to its widespread use.
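The core of an ELECTRE-style outranking comparison is a concordance index: the share of criterion weight supporting the claim that one alternative is at least as good as another. A minimal hedged sketch follows; the weights, alternatives, and simplified index are invented for illustration and are not the thesis's exact ELECTRE II formulation:

```python
# Hedged sketch of a simplified ELECTRE-style concordance index.
# Criteria weights and alternative scores are invented.
weights = [0.4, 0.35, 0.25]
alternatives = {
    "plan_a": [7, 5, 9],   # scores on three criteria (higher is better)
    "plan_b": [6, 8, 4],
}

def concordance(a, b):
    """Share of total weight on criteria where alternative a scores >= b."""
    return sum(w for w, x, y in zip(weights, a, b) if x >= y) / sum(weights)

c_ab = concordance(alternatives["plan_a"], alternatives["plan_b"])
print(round(c_ab, 2))  # -> 0.65: weight mass where plan_a is at least as good
```

A full ELECTRE II procedure would also compute discordance indices and build strong and weak outranking relations from thresholds on both; the concordance step above is only the first ingredient.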
52.
A model for rational decision-making in administration of mental retardation services. Pearl, Ellsworth Alden, 01 January 1973 (has links) (PDF)
No description available.
53.
An evaluation methodology to ensure the success of decision support tools. Singhal, Amod, January 1986 (has links)
Motivated by the need for an evaluation technique to help decision makers ensure the success of their computer-based decision support tools, this research explores the evaluation of decision aids from a broad organizational and managerial perspective. A review of current research identifies the need for theoretical and practical developments emphasizing: (1) evaluation techniques which can work with partial knowledge about the effect of a decision support tool on management processes, (2) a systematic way to prescribe evaluation techniques for different assessment situations, (3) evaluation techniques which provide a way to transition from one assessment situation to another, and (4) evaluation techniques which recognize that the performance of one decision support tool may depend on other decision aids used by the manager. This study complements existing theoretical research by developing seven conceptual models which identify essential evaluation parameters and their relationships. The first model explores parameters affecting the decision to evaluate. The second and third models examine the role of evaluation in ensuring success. The fourth and fifth models analyze how conclusions about success are made. The sixth model identifies components of an evaluation technique. Finally, the seventh model presents a framework for prescribing evaluation approaches. Using the seven conceptual models and previous research as its theoretical foundation, Evalu-Action, a step-by-step practical technique to ensure the success of computer-based decision support tools, is developed. The technique is pilot tested and improved. Recommendations for further work are presented. / M.S.
54.
Consistency Analysis for Judgment Quantification in Hierarchical Decision Model. Abbas, Mustafa Sulaiman, 21 March 2016
The objective of this research is to establish consistency thresholds linked to alpha (α) levels for the judgment quantification method of the Hierarchical Decision Model (HDM). Measuring consistency in order to control it is a crucial and inseparable part of any AHP/HDM experiment. Researchers on the subject recommend establishing thresholds that are statistically based on hypothesis testing and are linked to the number of decision variables and the α level. Such thresholds provide the means with which to evaluate the soundness and validity of an AHP/HDM decision. The linkage of thresholds to α levels allows decision makers to set an inconsistency tolerance compatible with the situation at hand. Measurements of judgments are unreliable in the absence of an inconsistency measure with acceptable limits. All of this is essential to the credibility of the entire decision-making process and hence is extremely useful for practitioners and researchers alike. This research includes distribution fitting for the inconsistencies. It is a valuable and interesting part of the research results and adds usefulness, practicality and insight. The superb fits obtained give confidence that all the statistical inferences based on the fitted distributions accurately reflect the HDM's inconsistency measure.
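For readers unfamiliar with pairwise-comparison inconsistency, the classic AHP consistency ratio illustrates the kind of measure that such thresholds govern. This is a hedged sketch of Saaty's eigenvalue-based ratio, not the HDM-specific measure the thesis analyzes, and the example matrix is invented:

```python
# Hedged sketch: the classic AHP consistency ratio CR = CI / RI, where
# CI = (lambda_max - n) / (n - 1). This is the widely cited eigenvalue-based
# measure, not the HDM inconsistency measure the thesis derives thresholds for.

# Saaty's random-index values for matrix sizes 3..10.
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def principal_eigenvalue(A, iters=500):
    """Power iteration; converges for positive matrices (Perron-Frobenius)."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)                  # entries stay positive, so max is the norm
        v = [x / lam for x in w]
    return lam

def consistency_ratio(A):
    n = len(A)
    ci = (principal_eigenvalue(A) - n) / (n - 1)   # consistency index
    return ci / RI[n]

# A perfectly consistent matrix built from weights w: A[i][j] = w[i] / w[j].
w = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in w] for wi in w]
print(abs(round(consistency_ratio(A), 6)))         # -> 0.0: judgments consistent
```

By convention a CR below 0.10 is considered acceptable; the thesis's contribution is to replace such fixed rules of thumb with thresholds tied to statistical significance levels.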
55.
Innovation Measurement: a Decision Framework to Determine Innovativeness of a Company. Phan, Kenny, 16 May 2013
Innovation is one of the most important sources of competitive advantage. It helps a company to fuel the growth of new products and services, sustain incumbents, create new markets, transform industries, and promote the global competitiveness of nations. Because of its importance, companies need to manage innovation. It is very important for a company to be able to measure its innovativeness because one cannot effectively manage without measurement. A good measurement model will help a company to understand its current capability and identify areas that need improvement.
In this research a systematic approach was developed for a company to measure its innovativeness. The measurement of innovativeness is based on output indicators. Output indicators are used because they cannot be manipulated. A hierarchical decision model (HDM) was constructed from output indicators. The hierarchy consisted of three levels: innovativeness index, output indicators and sub-factors.
Experts' opinions were collected and quantified. A new concept developed by Dr. Dundar Kocaoglu and referred to as "desirability functions" was implemented in this research.
Inconsistency of individual experts, disagreement among experts, intraclass correlation coefficients and statistical F-tests were calculated to test the reliability of the experts' judgments. Sensitivity analysis was used to test the sensitivity of the output indicators, which indicated the allowable range of the changes in the output indicators in order to maintain the priority of the sub-factors.
The outcome of this research is a decision model/framework that provides an innovativeness index based on readily measurable company output indicators.
The model was applied to product innovation in the technology-driven semiconductor industry. Five hypothetical companies were developed to simulate the application of the model/framework. The profiles of the hypothetical companies were varied considerably to provide a deeper understanding of the model/framework. Actual data from two major corporations in the semiconductor industry were then used to demonstrate the application of the model.
According to the experts, the top three sub-factors to measure the innovativeness of a company are revenue from new products (28%), market share of new products (21%), and products that are new to the world (20%).
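The final aggregation step can be sketched as a weighted sum of normalized sub-factor scores. Only the three weights above come from the research; the lumped "other" bucket, its weight, and the company's scores are invented for illustration:

```python
# Hypothetical sketch: combining normalized sub-factor scores into an
# innovativeness index by a weighted sum. The three top weights come from
# the expert judgments reported above; everything else is invented.
weights = {
    "revenue_from_new_products": 0.28,
    "market_share_of_new_products": 0.21,
    "new_to_the_world_products": 0.20,
    "other_sub_factors": 0.31,   # remaining sub-factors, lumped together here
}
scores = {  # 0..1 scores for one hypothetical company
    "revenue_from_new_products": 0.6,
    "market_share_of_new_products": 0.4,
    "new_to_the_world_products": 0.7,
    "other_sub_factors": 0.5,
}
index = sum(weights[k] * scores[k] for k in weights)
print(round(index, 3))  # -> 0.547
```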
56.
The multiple advocacy strategy and the role of the custodian: the Carter years. Moens, A. Alexander, January 1988 (has links)
The increasing complexity and high stakes of foreign policy decisions, especially of major powers such as the United States, have generated specialized studies of decision making.
One approach, called "multiple advocacy," maps a strategy of role tasks and process norms to guide the decision-makers towards an optimal decision-making process. This process allows the President to make an informed policy choice as a result of having heard a variety of options debated freely and openly among his advisors in his presence. A crucial actor in this process is the National Security Advisor. As process manager or "custodian," he must ensure that the key provisions of the strategy are met while abstaining from personal involvement in the substance of policy advice and execution.
This thesis examines the internal coherence and usefulness of the strategy. The first two years of the Carter administration provide a close approximation of the strategy. Four important policy issues during this period form the empirical basis of this test: the "Deep Cuts" proposals in SALT II, the war in the Horn of Africa, Sino-American Normalization, and the fall of the Shah of Iran. While the basic principles of the strategy are found useful and sound, several of its provisions are challenged. First, in spite of its claim, the strategy does not produce multiple options when the advisors have no wide divergence of opinion. Second, contrary to the strategy's prescriptions, the custodian can improve the process in such situations by joining the policy debate. Third, custodial engagement in activities such as diplomacy and public speaking need not be prohibited too strictly. Last, the demise of the strategy can be more narrowly defined as the result of custodial disregard for a free flow of information and open participation among the advisors.
Though further studies are needed to widen the empirical base, several tentative suggestions are offered to improve the strategy. The president must insist on a reasonable range of opinions when appointing advisors. While the National Security Advisor may join the policy debate to widen the range of options, his policy advice should not become the rule. At all times the President must insist that all policy debates among his advisors be brought to his attention, and that all policy options receive a fair hearing. / Arts, Faculty of / Political Science, Department of / Graduate
57.
Application of stochastic differential equations and real option theory in investment decision problems. Chavanasporn, Walailuck, January 2010 (has links)
This thesis contains a discussion of four problems arising from the application of stochastic differential equations and real option theory to investment decision problems in a continuous-time framework. It is based on four papers written jointly with the author’s supervisor. In the first problem, we study an evolutionary stock market model in a continuous-time framework where uncertainty in dividends is produced by a single Wiener process. The model is an adaptation to a continuous-time framework of a discrete evolutionary stock market model developed by Evstigneev, Hens and Schenk-Hoppé (2006). We consider the case of fix-mix strategies and derive the stochastic differential equations which determine the evolution of the wealth processes of the various market players. The wealth dynamics for various initial set-ups of the market are simulated. In the second problem, we apply an entry-exit model in real option theory to study concessionary agreements between a private company and a state government to run a privatised business or project. The private company can choose the time to enter into the agreement and can also choose the time to exit the agreement if the project becomes unprofitable. An early termination of the agreement by the company might mean that it has to pay a penalty fee to the government. Optimal times for the company to enter and exit the agreement are calculated. The dynamics of the project are assumed to follow either a geometric mean reversion process or geometric Brownian motion. A comparative analysis is provided. Particular emphasis is given to the role of uncertainty and how uncertainty affects the average time that the concessionary agreement is active. The effect of uncertainty is studied by using Monte Carlo simulation. In the third problem, we study numerical methods for solving stochastic optimal control problems which are linear in the control. 
In particular, we investigate methods based on spline functions for solving the two-point boundary value problems that arise from the method of dynamic programming. In the general case, where only the value function and its first derivative are guaranteed to be continuous, piecewise quadratic polynomials are used in the solution. However, under certain conditions, the continuity of the second derivative is also guaranteed. In this case, piecewise cubic polynomials are used in the solution. We show how the computational time and memory requirements of the solution algorithm can be improved by effectively reducing the dimension of the problem. Numerical examples which demonstrate the effectiveness of our method are provided. Lastly, we study the situation where, by partial privatisation, a government gives a private company the opportunity to invest in a government-owned business. After payment of an initial instalment cost, the private company’s investments are assumed to be flexible within a range [0, k] while the investment in the business continues. We model the problem in a real option framework and use a geometric mean reversion process to describe the dynamics of the business. We use the method of dynamic programming to determine the optimal time for the private company to enter and pay the initial instalment cost as well as the optimal dynamic investment strategy that it follows afterwards. Since an analytic solution cannot be obtained for the dynamic programming equations, we use quadratic splines to obtain a numerical solution. Finally we determine the optimal degree of privatisation in our model from the perspective of the government.
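As one concrete ingredient of the studies above, geometric Brownian motion can be simulated by Monte Carlo using its exact log-normal discretization. A minimal sketch with invented parameters:

```python
import math
import random

# Minimal Monte Carlo sketch of geometric Brownian motion,
# dS = mu*S dt + sigma*S dW, via its exact discretization
# S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z).
# The parameters (initial value, drift, volatility) are invented.
def simulate_gbm(s0, mu, sigma, t, n_steps, rng):
    """Return one terminal value S_T of a geometric Brownian motion path."""
    dt = t / n_steps
    s = s0
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
    return s

rng = random.Random(42)   # fixed seed for reproducibility
paths = [simulate_gbm(100.0, 0.05, 0.2, 1.0, 50, rng) for _ in range(5000)]
mean_terminal = sum(paths) / len(paths)
# The sample mean should sit near E[S_T] = s0 * exp(mu * t), about 105.1 here.
print(round(mean_terminal, 1))
```

The geometric mean reversion process used in the concessionary-agreement and privatisation problems would replace the constant drift with one that pulls the process back toward a long-run level, but the simulation loop has the same shape.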
58.
On the theory and modeling of dynamic programming with applications in reservoir operation. Sniedovich, Moshe (1945-), January 1976 (has links)
This dissertation contains a discussion concerning the validity of the principle of optimality and the dynamic programming algorithm in the context of discrete time and state multistage decision processes. The multistage decision model developed for the purpose of the investigation is of a general structure, especially as far as the reward function is concerned. The validity of the dynamic programming algorithm as a solution method is investigated and results are obtained for a rather wide class of decision processes. The intimate relationship between the principle and the algorithm is investigated and certain important conclusions are derived. In addition to the theoretical considerations involved in the implementation of the dynamic programming algorithm, some modeling and computational aspects are also investigated. It is demonstrated that the multistage decision model and the dynamic programming algorithm as defined in this study provide a solid framework for handling a wide class of multistage decision processes. The flexibility of the dynamic programming algorithm as a solution procedure for nonroutine reservoir control problems is demonstrated by two examples, one of which is a reliability problem. To the best of the author's knowledge, many of the theoretical derivations presented in this study, especially those concerning the relation between the principle of optimality and the dynamic programming algorithm, are novel.
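The flavor of the dynamic programming algorithm on a reservoir problem can be conveyed with a toy backward-induction sketch. The storage grid, inflows, and benefit function below are invented and far simpler than the general reward structure treated in the dissertation:

```python
# Toy backward-induction sketch of a discrete-time, discrete-state reservoir
# operation problem. All numbers are invented for illustration.
STAGES = 3                     # decision periods
CAPACITY = 4                   # maximum storage
INFLOW = [2, 1, 2]             # known inflow in each period

def benefit(release):
    return release ** 0.5      # diminishing returns to released water

def solve():
    # value[s] = best benefit-to-go entering the current stage with storage s
    value = {s: 0.0 for s in range(CAPACITY + 1)}
    policy = []
    for t in reversed(range(STAGES)):
        new_value, decision = {}, {}
        for s in range(CAPACITY + 1):
            best_v, best_r = float("-inf"), None
            for r in range(s + INFLOW[t] + 1):             # feasible releases
                s_next = min(s + INFLOW[t] - r, CAPACITY)  # spill above capacity
                v = benefit(r) + value[s_next]
                if v > best_v:
                    best_v, best_r = v, r
            new_value[s], decision[s] = best_v, best_r
        value = new_value
        policy.insert(0, decision)
    return value, policy

value, policy = solve()
print(round(value[2], 3))      # -> 4.56: optimal benefit starting with storage 2
```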
59.
Choice of multicriterion decision making techniques for watershed management. Tecle, Aregai (1948-), January 1988 (has links)
The problem of selecting a multicriterion decision making (MCDM) technique for watershed resources management is investigated. Of explicit concern in this research is the matching of a watershed resources management problem with an appropriate MCDM technique. More than seventy techniques are recognized in reviewing the area of MCDM. A new classification scheme is developed to categorize these techniques into four groups on the basis of each algorithm's structural formulation and the possible results obtained by using the algorithm. Other standard classification schemes are also discussed to better understand the differences and similarities among the techniques and thereby demonstrate the importance of matching a particular multicriterion decision problem with an appropriate MCDM technique. The desire to select the most appropriate MCDM technique for watershed resources management led to the development of 49 technique choice criteria and an algorithm for selecting a technique. The algorithm divides the technique choice criteria into four groups: (1) DM/analyst-related criteria, (2) technique-related criteria, (3) problem-related criteria and (4) solution-related criteria. To analyze the applicability of MCDM techniques to a particular problem, the levels of performance of the techniques in solving the problem are first evaluated with respect to the choice criteria in each criterion group, resulting in four sets of preference rankings. These four sets are then linearly combined using a set of trade-off parameters to determine the overall preference ranking of the techniques. The MCDM technique selection process is itself modeled as a multiobjective problem. In this research, for example, a set of 15 techniques with which the author is familiar is analyzed for appropriateness in solving a watershed resources management problem.
The performance levels of the 15 MCDM techniques in solving such a problem are evaluated with respect to a selected set of technique choice criteria in each criterion group, leading to a set of four evaluation matrices of choice criteria versus alternative techniques. This technique choice problem is then analyzed using a two-stage evaluation procedure known as composite programming. The process results in a preference ranking of the alternative MCDM techniques.
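The linear combination of the four group-wise evaluations can be sketched as follows. The candidate techniques, scores, and trade-off weights are all invented, and this sketch omits the distance-based machinery of composite programming itself:

```python
# Hypothetical sketch: combine four criterion-group scores for candidate
# MCDM techniques into one overall ranking via trade-off weights.
# All names, scores (normalized 0..1), and weights are invented.
techniques = ["ELECTRE", "Compromise Programming", "AHP"]
scores = {  # scores[group][technique]
    "dm_analyst": {"ELECTRE": 0.7, "Compromise Programming": 0.8, "AHP": 0.6},
    "technique":  {"ELECTRE": 0.6, "Compromise Programming": 0.7, "AHP": 0.8},
    "problem":    {"ELECTRE": 0.9, "Compromise Programming": 0.6, "AHP": 0.5},
    "solution":   {"ELECTRE": 0.5, "Compromise Programming": 0.9, "AHP": 0.7},
}
tradeoff = {"dm_analyst": 0.3, "technique": 0.2, "problem": 0.3, "solution": 0.2}

overall = {
    t: sum(tradeoff[g] * scores[g][t] for g in tradeoff) for t in techniques
}
ranking = sorted(techniques, key=overall.get, reverse=True)
print(ranking[0], round(overall[ranking[0]], 2))  # -> Compromise Programming 0.74
```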
60.
Exploiting the probability of observation for efficient Bayesian network inference. Mousumi, Fouzia Ashraf, January 2013 (has links)
It is well-known that the observation of a variable in a Bayesian network can affect the effective connectivity of the network, which in turn affects the efficiency of inference. Unfortunately, the observed variables may not be known until runtime, which limits the amount of compile-time optimization that can be done in this regard. This thesis considers how to improve inference when users know the likelihood of a variable being observed. It demonstrates how these probabilities of observation can be exploited to improve existing heuristics for choosing elimination orderings for inference. Empirical tests over a set of benchmark networks using the Variable Elimination algorithm show reductions of up to 50% and 70% in multiplications and summations, as well as runtime reductions of up to 55%. Similarly, tests using the Elimination Tree algorithm show reductions by as much as 64%, 55%, and 50% in recursive calls, total cache size, and runtime, respectively. / xi, 88 leaves : ill. ; 29 cm
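One way such observation probabilities could feed an elimination-ordering heuristic is to discount each neighbor's domain size by its chance of being observed, since an observed variable collapses to a single value. A hedged sketch follows; the network, domain sizes, probabilities, and greedy heuristic are invented illustrations, not the thesis's actual algorithms:

```python
import itertools

# Hedged sketch: a greedy "min expected weight" elimination-ordering heuristic
# in which a neighbor observed with probability p contributes an expected
# effective domain size of p*1 + (1-p)*|dom|. All numbers are invented.
domain = {"A": 2, "B": 3, "C": 2, "D": 4}          # domain sizes
p_obs = {"A": 0.0, "B": 0.9, "C": 0.1, "D": 0.0}   # probability of observation
edges = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")}

def neighbors(v, graph):
    return {a if b == v else b for (a, b) in graph if v in (a, b)}

def expected_size(v):
    return p_obs[v] * 1 + (1 - p_obs[v]) * domain[v]

def greedy_ordering(variables, graph):
    graph, remaining, order = set(graph), set(variables), []
    while remaining:
        def cost(v):  # expected size of the factor created by eliminating v
            prod = 1.0
            for n in neighbors(v, graph) & remaining:
                prod *= expected_size(n)
            return prod
        v = min(sorted(remaining), key=cost)       # sorted() for stable ties
        nbrs = neighbors(v, graph) & remaining
        # connect v's remaining neighbors (fill-in edges), then eliminate v
        graph |= {tuple(sorted(p)) for p in itertools.combinations(nbrs, 2)}
        remaining.discard(v)
        order.append(v)
    return order

print(greedy_ordering(list(domain), edges))        # -> ['A', 'D', 'C', 'B']
```

Note how B, despite its large unconditioned weight, is pushed to the end of the ordering: with a 0.9 chance of being observed it is expected to contribute almost nothing to factor sizes.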