  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Analysis and simulation of dynamic microeconomic models

Tinney, Edmund Herbert, January 1967
Thesis (Ph. D.)--University of Wisconsin--Madison, 1967. / Typescript. Vita. Description based on print version record. Includes bibliographical references.
2

Multiple sensor credit apportionment

Crow, Mason W., January 2002
Thesis (M.S.)--Naval Postgraduate School, 2002. / Thesis advisor(s): Eugene P. Paulo, Sergio Posadas, Susan M. Sanchez. Includes bibliographical references (p. 63-64). Also available online.
3

An analysis of the residual-influence effect upon members of small decision-making groups: an experimental study using a management simulation model

Bossman, Lawrence Joseph, January 1967
Thesis (Ph. D.)--University of Wisconsin, 1967. / Typescript. Vita. Description based on print version record. Includes bibliographical references.
4

An expert system using fuzzy set representations for rules and values to make management decisions in a business game

Dickinson, Dean Berkeley, January 1984
This dissertation reports on an effort to design, construct, test, and adjust an expert system for making certain business decisions. A widely used approach to recurring judgmental decisions in business and other social organizations is the "rule-based decision system", an arrangement that employs staff experts to propose decision choices and selections to a decision-maker. Such decisions can be very important because of the large resources involved, yet the rules and values encountered in these systems are often vague and uncertain. The major questions explored by this experimental effort were: (1) could the output of such a decision system be mimicked easily by a mechanism incorporating the rules people say they use, and (2) could the imprecision endemic to such a system be represented by fuzzy set constructs? The task environment chosen for the effort was a computer-based game that required player teams to make a number of interrelated, recurring decisions in a realistic business situation. The primary purpose of this research was to determine the feasibility of using these methods in real decision systems. The expert system that resulted is a relatively complicated, feed-forward network of "simple" inferences, each with no more than one consequent and one or two antecedents. Rules elicited from an expert in the game, or drawn from published game instructions, become the causal implications in these inferences. Fuzzy relations are used to represent imprecise rules, and two distinctly different fuzzy set formats are employed to represent imprecise values. Once imprecision enters from the environment or the rules, the mechanism propagates it coherently through the inference network to the proposed decision values. The mechanism performs as well as the average human team, even though the strategy is relatively simple and the inferences are crude linear approximations.
Key aspects of this model, distinct from previous work, include: (1) the use of a mechanism to propose decisions in situations usually considered ill-structured; (2) the use of continuous rather than two-valued variables and functions; (3) the large-scale employment of fuzzy set constructs to represent imprecision; and (4) the use of a feed-forward network structure and simple inferences to propose human-like decisions.
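The abstract above describes propagating imprecise rules as fuzzy relations. A minimal sketch of that idea, not the dissertation's actual code: one inference step applied via the standard max-min composition, with a hypothetical rule "demand implies price" over three discrete levels. All universes, membership grades, and the rule itself are illustrative assumptions.

```python
def max_min_compose(fact, relation):
    """Propagate a fuzzy fact through a fuzzy relation R:
    out(y) = max over x of min(fact(x), R(x, y))."""
    return [max(min(fact[x], relation[x][y]) for x in range(len(fact)))
            for y in range(len(relation[0]))]

# Membership grades of the observed demand over three levels (low, med, high)
demand = [0.1, 0.6, 0.9]

# Hypothetical fuzzy relation encoding "demand level x implies price level y"
rule = [
    [1.0, 0.4, 0.1],  # low demand    -> mostly low price
    [0.4, 1.0, 0.4],  # medium demand -> mostly medium price
    [0.1, 0.4, 1.0],  # high demand   -> mostly high price
]

price = max_min_compose(demand, rule)
print(price)  # -> [0.4, 0.6, 0.9]: fuzzy membership of the inferred price
```

Chaining such steps yields a feed-forward network of simple inferences like the one the abstract describes: imprecision in the input survives, coherently bounded, all the way to the proposed decision values.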
5

Learning in Partially Observable Markov Decision Processes

Sachan, Mohit, 21 August 2013
Indiana University-Purdue University Indianapolis (IUPUI) / Learning in Partially Observable Markov Decision Processes (POMDPs) is motivated by the essential need to address a number of realistic problems. A number of methods exist for learning in POMDPs, but learning with a limited amount of information about the POMDP model remains a highly desirable capability. Learning with minimal information is attractive in complex systems, since methods that require complete information to be shared among decision makers become impractical as problem dimensionality grows. In this thesis we address the problem of decentralized control of POMDPs with unknown transition probabilities and rewards. We suggest learning in POMDPs using a tree-based approach: states of the POMDP are guessed using this tree, and each node in the tree contains an automaton and acts as a decentralized decision maker for the POMDP. The start state of the POMDP is known as the landmark state. Each automaton in the tree uses a simple learning scheme to update its action choice and requires minimal information. The principal result derived is that, without prior knowledge of the transition probabilities and rewards, the automata tree of decision makers converges to a set of actions that maximizes the long-term expected reward per unit time obtained by the system. The analysis is based on learning in sequential stochastic games and on properties of ergodic Markov chains. Simulation results are presented to compare the long-term rewards of the system under different decision control algorithms.
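The "simple learning scheme" each automaton uses is not specified in the abstract; a classic scheme of this kind is the linear reward-inaction (L_R-I) automaton, sketched below under that assumption. A single automaton learns which of two actions is better from nothing but a binary reward signal, the kind of minimal-information update the thesis emphasizes. The environment, reward probabilities, and learning rate are hypothetical.

```python
import random

def lri_update(probs, action, reward, lr=0.1):
    """Linear reward-inaction (L_R-I) update: on reward, shift probability
    mass toward the chosen action; on no reward, leave probabilities alone."""
    if reward:
        probs = [p + lr * (1.0 - p) if i == action else p * (1 - lr)
                 for i, p in enumerate(probs)]
    return probs

# Hypothetical 2-action environment: action 0 is rewarded 80% of the time,
# action 1 only 20%. The automaton sees only the binary reward, never these rates.
random.seed(0)
reward_prob = [0.8, 0.2]
probs = [0.5, 0.5]
for _ in range(2000):
    action = 0 if random.random() < probs[0] else 1
    reward = random.random() < reward_prob[action]
    probs = lri_update(probs, action, reward)
print(probs)  # probability mass concentrates on one action (typically the better one)
```

In the thesis's setting, many such automata sit at the nodes of the state-guessing tree and play a sequential stochastic game; the convergence result cited in the abstract is about that collective, not a single automaton.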
