31

Learning and planning in structured worlds

Dearden, Richard W. 11 1900 (has links)
This thesis is concerned with the problem of how to make decisions in an uncertain world. We use a model of uncertainty based on Markov decision problems, and develop a number of algorithms for decision-making both for the planning problem, in which the model is known in advance, and for the reinforcement learning problem, in which the decision-making agent does not know the model and must learn to make good decisions by trial and error. The basis for much of this work is the use of structural representations of problems. If a problem is represented in a structured way we can compute or learn plans that take advantage of this structure for computational gains. This is because the structure allows us to perform abstraction. Rather than reasoning about each situation in which a decision must be made individually, abstraction allows us to group situations together and reason about a whole set of them in a single step. Our approach to abstraction has the additional advantage that we can dynamically change the level of abstraction, splitting a group of situations in two if they need to be reasoned about separately to find an acceptable plan, or merging two groups together if they no longer need to be distinguished. We present two planning algorithms and one learning algorithm that use this approach.
A second idea we present in this thesis is a novel approach to the exploration problem in reinforcement learning. The problem is to select actions to perform given that we would like good performance now and in the future. We can select the current best action to perform, but this may prevent us from discovering that another action is better, or we can take an exploratory action, but we risk performing poorly now as a result. Our Bayesian approach makes this tradeoff explicit by representing our uncertainty about the values of states and using this measure of uncertainty to estimate the value of the information we could gain by performing each action.
We present both model-free and model-based reinforcement learning algorithms that make use of this exploration technique. Finally, we show how these ideas fit together to produce a reinforcement learning algorithm that uses structure to represent both the problem being solved and the plan it learns, and that selects actions to perform in order to learn using our Bayesian approach to exploration. / Science, Faculty of / Computer Science, Department of / Graduate
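As a rough illustration of the Bayesian exploration idea in this abstract (assuming, purely for illustration, a Normal posterior over each action's Q-value; the thesis's actual posterior family and update rules are not reproduced here), the myopic value of perfect information (VPI) of each action can be sketched as:

```python
import math

def _phi(z):
    """Standard normal pdf."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gain_below(c, mu, sigma):
    """E[max(c - X, 0)] for X ~ N(mu, sigma^2)."""
    z = (c - mu) / sigma
    return (c - mu) * _Phi(z) + sigma * _phi(z)

def gain_above(c, mu, sigma):
    """E[max(X - c, 0)] for X ~ N(mu, sigma^2)."""
    z = (mu - c) / sigma
    return (mu - c) * _Phi(z) + sigma * _phi(z)

def vpi(actions):
    """Myopic value of perfect information per action, given a Normal
    posterior (mean, std) over each action's Q-value.

    Learning the current best action's true value only helps if it turns
    out worse than the runner-up; learning any other action's value only
    helps if it turns out better than the current best.
    """
    ranked = sorted(actions, key=lambda a: actions[a][0], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    mu1, mu2 = actions[best][0], actions[runner_up][0]
    return {
        a: gain_below(mu2, mu, sigma) if a == best else gain_above(mu1, mu, sigma)
        for a, (mu, sigma) in actions.items()
    }
```

An exploring agent would then act greedily with respect to posterior mean plus VPI rather than the mean alone, so a highly uncertain action can be worth trying even when its mean value is lower.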
32

Microcomputer programming package for the assessment of multiattribute value functions

Jones, Julia E. January 1985 (has links)
Research into multiattribute utility theory far outweighs current attempts to apply its findings, while the need for usable decision techniques continues to increase. Current decision-maker/analyst procedures, involving decision-making sessions and numerous manual calculations, are considered overly time-consuming except for the most important of complex decisions. The purpose of this thesis was to design and develop a microcomputer package utilizing recent improvements in decision theory to increase the efficiency of the decision-making process. Algorithms for independence testing and parameter estimation have been developed for both continuous and discrete attributes. Two separate packages, an additive value function package (DECISION) and a SMART technique package (SMART), are developed based on these algorithms, and their validity is tested by means of a case study. Both packages are written for use on a standard (256K) IBM PC microcomputer. / Master of Science
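The additive value function at the heart of such a package can be sketched in a few lines (a hypothetical Python illustration, not the thesis's software; the attribute names and weights below are invented):

```python
def smart_value(alternative, weights):
    """Additive value function v(x) = sum_i w_i * v_i(x_i), with
    SMART-style ratio weights normalized to sum to one.  Assumes each
    single-attribute value v_i(x_i) is already scaled to [0, 1]."""
    total = sum(weights.values())
    return sum((w / total) * alternative[attr] for attr, w in weights.items())

# Invented example: ratio weights and two alternatives' scaled scores.
weights = {"cost": 40, "quality": 30, "delivery": 30}
alt_a = {"cost": 0.8, "quality": 0.5, "delivery": 0.6}
alt_b = {"cost": 0.4, "quality": 0.9, "delivery": 0.7}
```

The alternative with the larger `smart_value` is preferred; here `alt_a` scores 0.65 against 0.64 for `alt_b`.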
33

A novel ontology and machine learning driven hybrid clinical decision support framework for cardiovascular preventative care

Farooq, Kamran January 2015 (has links)
Clinical risk assessment of chronic illnesses is a challenging and complex task which requires the utilisation of standardised clinical practice guidelines and documentation procedures in order to ensure consistent and efficient patient care. Conventional cardiovascular decision support systems have significant limitations, which include the inflexibility to deal with complex clinical processes, hard-wired rigid architectures based on branching logic and the inability to deal with legacy patient data without significant software engineering work. In light of these challenges, we propose a novel ontology- and machine learning-driven hybrid clinical decision support framework for cardiovascular preventative care. An ontology-inspired approach provides a foundation for information collection, knowledge acquisition and decision support capabilities, and aims to develop context-sensitive decision support solutions based on ontology engineering principles. The proposed framework incorporates an ontology-driven clinical risk assessment and recommendation system (ODCRARS) and a Machine Learning Driven Prognostic System (MLDPS), integrated as a complete system to provide a cardiovascular preventative care solution. The proposed clinical decision support framework has been developed under the close supervision of clinical domain experts from both UK and US hospitals and is capable of handling multiple cardiovascular diseases. The framework comprises two novel key components: (1) the ODCRARS and (2) the MLDPS. The ODCRARS is developed under the close supervision of consultant cardiologists Professor Calum MacRae from Harvard Medical School and Professor Stephen Leslie from Raigmore Hospital in Inverness, UK. The ODCRARS comprises various components, which include: (a) Ontology-driven intelligent context-aware information collection for conducting patient interviews, driven through a novel clinical questionnaire ontology.
(b) A patient semantic profile is generated using patient medical records which are collated during patient interviews (conducted through the ontology-driven context-aware adaptive information collection component). The semantic transformation of patients' medical data is carried out through a novel patient semantic profile ontology in order to give patient data an intrinsic meaning and alleviate interoperability issues with third-party healthcare systems. (c) Ontology-driven clinical decision support comprises a recommendation ontology and a NICE/expert-driven clinical rules engine. The recommendation ontology is developed using clinical rules provided by the consultant cardiologist from the US hospital. The recommendation ontology utilises the patient semantic profile for lab test and medication recommendations. A clinical rules engine is developed to implement a cardiac risk assessment mechanism for various cardiovascular conditions. The clinical rules engine is also utilised to control the patient flow within the integrated cardiovascular preventative care solution. The machine learning-driven prognostic system is developed in an iterative manner using state-of-the-art feature selection and machine learning techniques. A prognostic model development process is exploited for the development of the MLDPS based on clinical case studies in the cardiovascular domain. An additional clinical case study in the breast cancer domain is also carried out for development and validation purposes. The prognostic model development process is general enough to handle a variety of healthcare datasets, which will enable researchers to develop cost-effective and evidence-based clinical decision support systems. The proposed clinical decision support framework also provides a learning mechanism based on machine learning techniques. This learning mechanism is provided through the exchange of patient data between the MLDPS and the ODCRARS.
The machine learning-driven prognostic system is validated using Raigmore Hospital's RACPC, heart disease and breast cancer clinical case studies.
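A clinical rules engine of the kind described above can be illustrated with a toy sketch (the rules and thresholds below are invented for illustration only and are not the NICE/expert rules used in the framework):

```python
# Each rule pairs a predicate over a patient record with a recommendation;
# every matching rule contributes its recommendation to the output.
# Hypothetical rules for illustration only.
RULES = [
    (lambda p: p["systolic_bp"] >= 140, "order ambulatory BP monitoring"),
    (lambda p: p["total_chol"] > 5.0 and p["age"] > 40, "order full lipid profile"),
    (lambda p: p["smoker"], "refer to smoking-cessation service"),
]

def assess(patient, rules=RULES):
    """Return the recommendations of every rule the patient matches."""
    return [advice for condition, advice in rules if condition(patient)]
```

In a framework like the one described, such rules would be generated from the recommendation ontology and the patient record would come from the patient semantic profile rather than a plain dictionary.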
34

Status report of research on distributed information and decision systems in command-and-control / Research on distributed information and decision systems in command-and-control

January 1982 (has links)
prepared by: Michael Athans [et al.] / Description based on: Sept.1981/Sept.1982. / Prepared under contract ONR/N00014-77-C-0532 (NR 041-519 and NR 277-300x).
35

Development of a decision making model for the CorexR iron making facility

Penney, A.T. 18 March 2015 (has links)
M.Com. (Business Management) / Please refer to full text to view abstract
36

Virtual design office: A collaborative unified modeling language tool

Totapally, Hara 01 January 2001 (has links)
Real-time conferencing and collaborative computing are a great way to make developers more effective. This project develops a collaborative framework comprising configurable client and server components.
37

A selection model for automated guided vehicles

Shelton, Debra Kay January 1985 (has links)
This research identifies the attributes to be considered in the selection of an automated guided vehicle (AGV). A distinction is made between automated guided vehicles (AGVs) and an automated guided vehicle system (AGVS). This research is concerned only with the selection of automated guided vehicles (AGVs). A selection model is developed which forces the user to evaluate his requirements and preferences for AGV attributes. The first step of the model allows the user to enter his specifications for AGV attributes which are applicable to his production environment. The second step in the selection model is for the user to determine 8-15 attributes to use as selection criteria. In the third phase, the user inputs his preferences and priorities with respect to the attributes chosen as selection criteria in the second step. Based on this information, the selection model ranks the AGV models in the feasible set. A description of the model and a numerical example are included. Steps 1 and 2, described above, are implemented using an R:BASE™ program. The BASIC computer language was used to perform the interrogation of the user with respect to his priorities and preferences among attributes in Step 3. The IBM PC™ is the hardware chosen for running the selection model. / M.S.
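The model's filter-then-rank flow might be sketched as follows (a hypothetical Python illustration, not the original R:BASE/BASIC implementation; vehicle data are invented and attribute values are assumed pre-scaled to comparable units):

```python
def feasible(vehicles, specs):
    """Step 1: keep only the AGV models that meet every minimum spec."""
    return [v for v in vehicles if all(v[k] >= lo for k, lo in specs.items())]

def rank(vehicles, weights):
    """Step 3: order the feasible set by a weighted attribute score,
    higher scores first."""
    total = sum(weights.values())
    return sorted(vehicles,
                  key=lambda v: sum(w * v[k] for k, w in weights.items()) / total,
                  reverse=True)

# Invented candidate AGVs with attribute scores scaled to [0, 1].
agvs = [
    {"name": "A", "payload": 0.6, "speed": 0.9, "cost": 0.7},
    {"name": "B", "payload": 0.9, "speed": 0.5, "cost": 0.8},
    {"name": "C", "payload": 0.3, "speed": 1.0, "cost": 0.9},
]
shortlist = feasible(agvs, {"payload": 0.5})               # drops model C
ordered = rank(shortlist, {"payload": 2, "speed": 1, "cost": 1})
```

With payload weighted double, model B outranks model A despite its lower speed.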
38

A decision support system for tuition and fee policy analysis

Greenwood, Allen G. January 1984 (has links)
Tuition and fees are a major source of income for colleges and universities and a major portion of the cost of a student's education. The university administration's task of making sound and effective tuition and fee policy decisions is becoming both more critical and more complex. This is a result of the increased reliance on student-generated tuition-and-fee income, the declining college-age student population, reductions in state and Federal funds, and escalating costs of operation. The comprehensive computerized decision support system (DSS) developed in this research enhances the administration's planning, decision-making, and policy-setting processes. It integrates data and reports with modeling and analysis in order to provide a systematic means for analyzing tuition and fee problems, at a detailed and sophisticated level, without the user having to be an expert in management science techniques or computers. The DSS with its embedded multi-year goal programming (GP) model allocates the university's revenue requirements to charges for individual student categories based on a set of user-defined objectives, constraints, and priorities. The system translates the mathematical programming model into a valuable decision-making aid by making it directly and readily accessible to the administration. The arduous tasks of model formulation and solution, the calculation of the model's parameter values, and the generation of a series of reports to document the results are performed by the system; whereas the user is responsible for defining the problem framework, selecting the goals, setting the targets, establishing the priority structure, and assessing the solution.
The DSS architecture is defined in terms of three highly integrated subsystems - dialog, data, and models - that provide the following functions: user/system interface, program integration, process control, data storage and handling, mathematical, statistical, and financial computations, as well as display, memory aid, and report generation. The software was developed using four programming languages/systems: EXEC 2, FORTRAN, IFPS, and LINDO. While the system was developed, tested, and implemented at Virginia Polytechnic Institute and State University, the concepts developed in this research are general enough to be applied to any public institution of higher education. / Ph. D.
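The core allocation idea, covering a revenue requirement by raising per-student charges across categories under priorities and caps, can be shown with a deliberately simplified stand-in (a greedy toy allocation, not the thesis's multi-year goal programming formulation; all category names and figures are invented):

```python
def allocate(shortfall, categories):
    """Cover a revenue shortfall by raising per-student charges category
    by category, in priority order, up to each category's cap.

    categories: list of (name, enrolment, max_increase) in priority order.
    Returns (per-student increases, uncovered shortfall).
    """
    increases = {}
    for name, enrolment, cap in categories:
        inc = min(cap, shortfall / enrolment)
        increases[name] = round(inc, 2)
        shortfall -= inc * enrolment
        if shortfall <= 1e-9:       # target met; stop raising charges
            break
    return increases, max(shortfall, 0.0)
```

A real goal program would instead introduce deviation variables for each target and minimize their weighted (or preemptively prioritized) sum with a linear programming solver; this sketch only illustrates the allocation-to-categories idea.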
39

An expert system using fuzzy set representations for rules and values to make management decisions in a business game

Dickinson, Dean Berkeley. January 1984 (has links)
This dissertation reports on an effort to design, construct, test, and adjust an expert system for making certain business decisions. A widely used approach to recurring judgmental decisions in business and other social organizations is the "rule-based decision system". This arrangement employs staff experts to propose decision choices and selections to a decision-maker. Such decisions can be very important because of the large resources involved. Rules and values encountered in such systems are often vague and uncertain. Major questions explored by this experimental effort were: (1) could the output of such a decision system be mimicked easily by a mechanism incorporating the rules people say they use, and (2) could the imprecision endemic in such a system be represented by fuzzy set constructs. The task environment chosen for the effort was a computer-based game which required player teams to make a number of interrelated, recurring decisions in a realistic business situation. The primary purpose of this research is to determine the feasibility of using these methods in real decision systems. The expert system which resulted is a relatively complicated, feed-forward network of "simple" inferences, each with no more than one consequent and one or two antecedents. Rules elicited from an expert in the game or from published game instructions become the causal implications in these inferences. Fuzzy relations are used to represent imprecise rules, and two distinctly different fuzzy set formats are employed to represent imprecise values. Once imprecision appears from the environment or rules, the mechanism propagates it coherently through the inference network to the proposed decision values. The mechanism performs as well as the average human team, even though the strategy is relatively simple and the inferences are crude linear approximations.
Key aspects of this model, distinct from previous work, include: (1) the use of a mechanism to propose decisions in situations usually considered ill-structured; (2) the use of continuous rather than two-valued variables and functions; (3) the large-scale employment of fuzzy set constructs to represent imprecision; and (4) the use of a feed-forward network structure and simple inferences to propose human-like decisions.
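The fuzzy inference machinery described above can be sketched with triangular memberships, min for fuzzy AND, and a strength-weighted average of consequents (a generic Sugeno-style illustration; the dissertation's two fuzzy set formats and its inference network are not reproduced, and the pricing rule below is invented):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

low = lambda x: tri(x, -1.0, 0.0, 1.0)    # "low" on the unit interval
high = lambda x: tri(x, 0.0, 1.0, 2.0)    # "high" on the unit interval

def infer(rules, inputs):
    """Each rule is ([(variable, membership_fn), ...], consequent_value).
    Firing strength is the min over antecedent memberships (fuzzy AND);
    the crisp output is the strength-weighted average of consequents."""
    num = den = 0.0
    for antecedents, value in rules:
        strength = min(mf(inputs[var]) for var, mf in antecedents)
        num += strength * value
        den += strength
    return num / den if den else 0.0

# Invented example: if inventory low AND demand high, price high (12);
# if inventory high, price low (8).
rules = [
    ([("inventory", low), ("demand", high)], 12.0),
    ([("inventory", high)], 8.0),
]
price = infer(rules, {"inventory": 0.3, "demand": 0.8})
```

Small changes in the inputs now move the output smoothly, which is exactly the discontinuity problem that hard Boolean rules suffer from.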
40

A study of genetic fuzzy trading modeling, intraday prediction and modeling. / CUHK electronic theses & dissertations collection

January 2010 (has links)
This thesis consists of three parts: a genetic fuzzy trading model for stock trading, incremental intraday information for financial time series forecasting, and intraday effects in conditional variance estimation. Part A investigates a genetic fuzzy trading model for stock trading. This part contributes a fuzzy trading model that eliminates undesirable discontinuities, incorporates vague trading rules into the trading model, and uses a genetic algorithm to select an optimal trading ruleset. Technical indicators are used to monitor the stock price movement and assist practitioners in setting up trading rules to make buy-sell decisions. Although some trading rules have a clear buy-sell signal, the signals are always detected with 'hard' logic. This triggers undesirable discontinuities due to the jumps of the Boolean variables that may occur for small changes of the technical indicator. Some trading rules are vague and conflicting; they are difficult to incorporate into the trading system even though they carry significant market information. Various performance comparisons such as total return, maximum drawdown and profit-loss ratio among different trading strategies were examined. The genetic fuzzy trading model consistently gave moderate performance. Part B studies and contributes to the literature that focuses on the forecasting of daily financial time series using intraday information. Conventional daily forecasts always focus on the use of lagged daily information up to the last market close while neglecting intraday information from the last market close to the current time. Such intraday information is referred to as incremental intraday information. It can improve prediction accuracy not only at a particular instant but also with the intraday time when an appropriate predictor is derived from such information. This is demonstrated in two forecasting examples, predictions of daily high and range-based volatility, using linear regression and Neural Network forecasters.
The Neural Network forecaster possesses a stronger causal effect of incremental intraday information on the predictand. Predictability can be estimated by a correlation without conducting any forecast. Part C explores intraday effects in conditional variance estimation. This contributes to the literature that focuses on conditional variance estimation with intraday effects. Conventional GARCH volatility is formulated with an additive-error mean equation for the daily return and an autoregressive moving-average specification for its conditional variance. However, intraday information is not included in the conditional variance even though it has implications for the daily variance. Using Engle's multiplicative-error model formulation, range-based volatility is proposed as an intraday proxy for several GARCH frameworks. The impact of significant changes in intraday data is reflected in the MEM-GARCH variance. For some frameworks, it is possible to use lagged values of range-based volatility to delay the intraday effects in the conditional variance equation. / Ng, Hoi Shing Raymond. / Adviser: Kai-Pui Lam. / Source: Dissertation Abstracts International, Volume: 72-01, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 107-114). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
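A range-based proxy and its entry into a GARCH-type recursion can be sketched as follows (an illustrative recursion in the spirit of the idea described above; the Parkinson estimator is a standard range-based variance proxy, but the parameter names and the exact specification here are assumptions, not the thesis's MEM-GARCH model):

```python
import math

def parkinson(high, low):
    """Parkinson range-based variance proxy from the day's high and low."""
    return math.log(high / low) ** 2 / (4.0 * math.log(2.0))

def variance_path(returns, proxies, omega, alpha, beta, gamma, v0):
    """GARCH(1,1)-style recursion augmented with a range-proxy term:
    v_t = omega + alpha * r_{t-1}^2 + beta * v_{t-1} + gamma * proxy_{t-1}.
    Setting gamma = 0 recovers the plain GARCH(1,1) recursion."""
    v = [v0]
    for r, p in zip(returns, proxies):
        v.append(omega + alpha * r * r + beta * v[-1] + gamma * p)
    return v
```

The extra `gamma` term lets a burst in the intraday high-low range feed into the next day's conditional variance even when the close-to-close return happens to be small.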
