  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Construção de uma rede Bayesiana aplicada ao diagnóstico de doenças cardíacas. / Building a Bayesian network for diagnosis of heart diseases.

André Hideaki Saheki 14 March 2005 (has links)
This work presents the construction of an expert system applied to the diagnosis of heart diseases, using Bayesian networks as the modeling tool. The work involved interaction between two different fields, engineering and medicine, with special emphasis on the methodology of building expert systems. The processes of problem definition, qualitative and quantitative modeling, and evaluation are presented. The modeling and evaluation processes were conducted with the aid of a medical expert and bibliographic sources. The work produced a Bayesian network for diagnosis and a software tool, called iBNetz, for creating and manipulating Bayesian networks.
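The diagnostic idea behind such a network can be sketched with a toy two-node model (Disease → Symptom) and Bayes' rule; the structure and probabilities below are hypothetical illustrations, not those of the network built in the thesis:

```python
# Toy Bayesian network for diagnosis: Disease -> Symptom.
# All numbers are made up for illustration.

def posterior_disease_given_symptom(p_d, p_s_given_d, p_s_given_not_d):
    """P(Disease | Symptom present) by Bayes' rule."""
    joint_d = p_d * p_s_given_d                # P(D, S)
    joint_not_d = (1 - p_d) * p_s_given_not_d  # P(~D, S)
    return joint_d / (joint_d + joint_not_d)

# Hypothetical prior and likelihoods for one symptom of a heart condition.
p = posterior_disease_given_symptom(p_d=0.01, p_s_given_d=0.9, p_s_given_not_d=0.05)
print(f"P(disease | symptom) = {p:.3f}")  # ~0.154
```

A full diagnostic network chains many such conditional probability tables; inference then marginalizes over all unobserved nodes rather than applying Bayes' rule once.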
162

Planification et ordonnancement de plateformes logistiques / Logistic platform planning and scheduling

Carrera, Susana 05 November 2010 (has links)
The aim of this thesis is to provide decision support systems to control logistic platforms at the mid-term and short-term levels. Several problems and main notions concerning the logistic platform context are described in the first part. In the second part, planning problems are studied. Two linear programming models are proposed to minimize workforce costs. These models take into account several characteristics: seasonal flow variations, work and flow organization in the platform, and local negotiations of the upstream and downstream flows. Consequently, our decision support system can be used for flow coordination between supply chain partners. Two types of negotiation are considered: negotiation of upstream and downstream delivered quantities, and negotiation of delivery dates. These models have been tested and validated on randomly generated instances inspired by real problems. In the third part of the thesis, the external flows of the platform are assumed to be fixed, and the order preparation scheduling problem inside the platform is considered. Two families of strong constraints are combined: staircase availability of components (consumable resources), arriving upstream at known dates and in known quantities, and fixed downstream delivery dates. Depending on how the downstream deliveries are organized and penalized, three particular cases (based on industrial applications) are studied. We propose branch and bound procedures for these problems, and an integer linear program for the simplest case. Experiments have been conducted on partially heterogeneous, randomly generated instance families. In the last part, several perspectives for generalizing this work are proposed.
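The core of such a workforce planning model is choosing shift sizes that cover per-period workload at minimum labour cost. A minimal sketch of that structure, solved here by brute force over a toy instance (the shifts, demands, and costs are invented; the thesis uses far richer linear programming formulations with flow negotiation):

```python
# Toy staffing problem: pick how many workers to assign to each shift so
# that every period's workload is covered at minimum total cost.
from itertools import product

DEMAND = [4, 6, 3]  # workload per period (hypothetical units)
SHIFTS = {"morning": {"covers": [0, 1], "cost": 100},
          "evening": {"covers": [1, 2], "cost": 120}}

def best_staffing(max_per_shift=10):
    names = list(SHIFTS)
    best = None
    for sizes in product(range(max_per_shift + 1), repeat=len(names)):
        staff = dict(zip(names, sizes))
        # capacity in each period = staff on all shifts covering that period
        feasible = all(
            sum(staff[n] for n in names if t in SHIFTS[n]["covers"]) >= d
            for t, d in enumerate(DEMAND))
        if feasible:
            cost = sum(staff[n] * SHIFTS[n]["cost"] for n in names)
            if best is None or cost < best[0]:
                best = (cost, staff)
    return best

cost, staff = best_staffing()
print(cost, staff)  # 760 {'morning': 4, 'evening': 3}
```

A real instance would hand the same objective and covering constraints to an LP/MIP solver instead of enumerating, which is what makes seasonal variations and negotiated flow quantities tractable.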
163

Simulating drug responses in laboratory test time series with deep generative modeling

Yahi, Alexandre January 2019 (has links)
Drug effects can be unpredictable and vary widely among patients depending on environmental, genetic, and clinical factors. Randomized controlled trials (RCTs) are not sufficient to identify adverse drug reactions (ADRs), and the electronic health record (EHR), along with medical claims data, has become an important resource for pharmacovigilance. Among all the data collected in hospitals, laboratory tests represent the most documented and reliable data type in the EHR. Laboratory tests are at the core of the clinical decision process and are used by physicians for diagnosis, monitoring, screening, and research. They can be linked to drug effects either directly, through therapeutic drug monitoring (TDM), or indirectly through drug laboratory effects (DLEs) that affect surrogate tests. Unfortunately, very few automated methods use laboratory tests to inform clinical decision making and predict drug effects, partly due to the complexity of these time series, which are irregularly sampled, highly dependent on other clinical covariates, and non-stationary. Deep learning, the branch of machine learning that relies on high-capacity artificial neural networks, has enjoyed renewed popularity over the past decade and has transformed fields such as computer vision and natural language processing. Deep learning holds the promise of better performance than established machine learning models, although it requires larger training datasets due to the models' higher degrees of freedom. These models are more flexible with multi-modal inputs and can make sense of large numbers of features without extensive engineering. Both qualities make deep learning models ideal candidates for complex, multi-modal, noisy healthcare datasets. With the development of novel deep learning methods such as generative adversarial networks (GANs), there is an unprecedented opportunity to learn how to augment existing clinical datasets with realistic synthetic data and increase predictive performance.
Moreover, GANs have the potential to simulate the effects of individual covariates such as drug exposures by leveraging the properties of implicit generative models. In this dissertation, I present a body of work that aims at paving the way for next-generation laboratory test-based clinical decision support systems powered by deep learning. To this end, I organized my experiments around three building blocks: (1) the evaluation of various deep learning architectures with laboratory test time series and their covariates on a forecasting task; (2) the development of implicit generative models of laboratory test time series using the Wasserstein GAN framework; (3) the inference properties of these models for the simulation of drug effects in laboratory test time series, and their application to data augmentation. Each component has its own evaluation: the forecasting task enabled me to explore the properties and performance of different learning architectures; the Wasserstein GAN models are evaluated with both intrinsic metrics and extrinsic tasks; and I always set baselines to avoid reporting results in a neural-network-only frame of reference. Applied machine learning, and even more so deep learning, is an empirical science. While the datasets used in this dissertation are not publicly available due to patient privacy regulations, I describe pre-processing steps, hyper-parameter selection, and training processes with reproducibility and transparency in mind. In the specific context of these studies involving laboratory test time series and their clinical covariates, I found that for supervised tasks, traditional machine learning holds up well against deep learning methods. Complex recurrent architectures like long short-term memory (LSTM) networks do not perform well on these short time series, while convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs) provide the best performance, at the cost of extensive hyper-parameter tuning.
Generative adversarial networks, enabled by deep learning models, were able to generate high-fidelity laboratory test time series, and the quality of the generated samples improved with conditional models using drug exposures as auxiliary information. Interestingly, forecasting models trained exclusively on synthetic data still retained good performance, confirming the potential of GANs in privacy-oriented applications. Finally, conditional GANs demonstrated an ability to interpolate samples from drug exposure combinations not seen during training, opening the way for laboratory test simulation with larger auxiliary information spaces. In specific cases, augmenting real training sets with synthetic data improved performance in the forecasting tasks, and this could be extended to other applications where rare cases present a high prediction error.
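The quantity a WGAN critic approximates is the Wasserstein-1 distance between the real and generated distributions. For equal-size one-dimensional samples this distance has a closed form, obtained by matching sorted samples pairwise; a small sketch of that quantity (the training loop itself would need a deep learning framework, and the lab-value numbers below are toy data):

```python
# Empirical Wasserstein-1 distance between two equal-size 1-D samples:
# sort both samples and average the pairwise absolute differences.

def wasserstein_1d(xs, ys):
    """Exact empirical W1 distance for equal-size 1-D samples."""
    if len(xs) != len(ys):
        raise ValueError("samples must be the same size")
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

real = [0.0, 1.0, 2.0, 3.0]        # observed lab values (toy numbers)
fake = [0.5, 1.5, 2.5, 3.5]        # generated values shifted by 0.5
print(wasserstein_1d(real, fake))  # 0.5
```

Unlike the Jensen-Shannon divergence of the original GAN objective, this distance degrades smoothly as distributions drift apart, which is what gives WGAN training its more stable gradients.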
164

Using decision maker personality as a basis for building adaptive decision support system generators for senior decision makers

Paranagama, Priyanka C. (Priyanka Chandana) 1969- January 2000 (has links)
Abstract not available
165

The effects of parallel versus sequential coordination methods on distributed group multiple criteria decision-making outcomes : an empirical study with a web-based GDSS prototype

Cao, Patrick Pu, 1963- January 2003 (has links)
Abstract not available
166

A framework for an Intelligent Decision Support System (IDSS), including a data mining methodology, for fetal-maternal clinical practice and research

Heath, Jennifer, University of Western Sydney, College of Health and Science, School of Computing and Mathematics January 2006 (has links)
Existing patient medical records are a rich data source with the potential to support clinical research. Fragmentation of data across disparate medical databases inhibits the use of these existing datasets. Overcoming such disjointedness is possible through the use of a data warehouse. Once the data is cleansed, transformed, and stored within the data warehouse, it is possible to turn attention to the exploration of the medical datasets. Exploratory and confirmatory data mining tools are well suited to such activities. This thesis is concerned with: demonstrating parallels between the scientific method and CRISP-DM; extending CRISP-DM for use with medical datasets; and proposing a supporting Intelligent Decision Support System framework. This research has been undertaken using a fetal-maternal case study. / Master of Science (Hons)
167

Understanding and applying decision support systems in Australian farming systems research

Robinson, Jeffrey Brett, University of Western Sydney, College of Science, Technology and Environment, School of Environment and Agriculture January 2005 (has links)
Decision support systems (DSS) are usually based on computerised models of biophysical and economic systems. Despite early expectations that such models would inform and improve management, adoption rates have been low, and implementation of DSS is now “critical”. The reasons for this are unclear, and the aim of this study is to learn to better design, develop and apply DSS in farming systems research (FSR). Previous studies have explored the merits of quantitative tools including DSS, and suggested changes leading to greater impact. In Australia, the changes advocated have been: simple, flexible, low-cost economic tools; emphasis on farmer learning through soft systems approaches; understanding the socio-cultural contexts of using and developing DSS; farmer and researcher co-learning from simulation modelling; and increasing user participation in DSS design and implementation. Twenty-four simple criteria were distilled from these studies, and their usefulness in guiding the development and application of DSS was assessed in six FSR case studies. The case studies were also used to better understand farmer learning through models of decision making and learning. To make DSS useful complements to farmers’ existing decision-making repertoires, they should be based on: (i) a decision-oriented development process, (ii) identifying a motivated and committed audience, (iii) a thorough understanding of the decision-maker's context, (iv) using learning as the yardstick of success, and (v) understanding the contrasts, contradictions and conflicts between researcher and farmer decision cultures. / Doctor of Philosophy (PhD)
168

Women as Farm Partners: Agricultural Decision Support Systems in the Australian Cotton Industry

Mackrell, Dale Carolyn, n/a January 2006 (has links)
Australian farmers are supplementing traditional practices with innovative strategies in an effort to survive recent economic, environmental, and social crises in the rural sector. These innovative strategies include moving towards a technology-based farm management style. A review of past literature determines that, despite a growing awareness of the usefulness of computers for farm management, there is concern over the limited demand for computer-based agricultural decision support systems (DSS). Recent literature indicates that women are the dominant users of computers on family farms yet are hesitant to use computers for decision support, and it is also unclear what decision-making roles women assume on family farms. While past research has investigated the roles of women in the Australian rural sector, there is a dearth of research into the interaction of women cotton growers with computers. Therefore, this dissertation is an ontological study and aims to contribute to scholarly knowledge in the research domain of Australian women cotton growers, agricultural DSS, and cotton farm management. This dissertation belongs in the Information Systems (IS) stream and describes an interpretive single case study which explores the lives of Australian women cotton growers on family farms and the association of an agricultural DSS with their farm management roles. Data collection was predominantly through semi-structured interviews with women cotton growers and cotton industry professionals such as DSS developers, rural extension officers, researchers and educators, rural experimental scientists, and agronomists and consultants, all of whom advise cotton growers. 
The study was informed by multiple sociological theories with opposing paradigmatic assumptions: Giddens' (1984) structuration theory as a metatheory to explore the recursiveness of farm life and technology usage; Rogers' (1995) diffusion of innovations theory with a functionalist approach to objectively examine the features of the software and user, as well as the processes of technology adoption; and Connell's (2002) theory of gender relations with its radical humanist perspective to subjectively investigate the relationships between farm partners through critical enquiry. The study was enriched further by drawing on other writings of these authors (Connell 1987; Giddens 2001; Rogers 2003) as well as complementary theories by authors (Orlikowski 1992; Orlikowski 2000; Trauth 2002; Vanclay & Lawrence 1995). These theories in combination have not been used before, which is a theoretical contribution of the study. The agricultural DSS for the study was CottonLOGIC, an advanced farm management tool to aid the management of cotton production. It was developed in the late 1990s by the CSIRO and the Australian Cotton Cooperative Research Centre (CRC), with support from the Cotton Research and Development Corporation (CRDC). CottonLOGIC is a software package of decision support and record-keeping modules to assist cotton growers and their advisors in the management of cotton pests, soil nutrition, and farm operations. It enables the recording and reporting of crop inputs and yields, insect populations (heliothis, tipworm, mirids and so on), weather data, and field operations such as fertiliser and pesticide applications, as well as the running of insect density prediction (heliothis and mites) and soil nutrition models. The study found that innovative practices and sustainable solutions are an imperative in cotton farm management for generating an improved triple bottom line of economic, environmental and social outcomes. 
CottonLOGIC is an industry benchmark for supporting these values through the incorporation of Best Management Practices (BMP) and Integrated Pest Management (IPM) principles, although there were indications that the software is in need of restructuring as could be expected of software over five years old. The evidence from the study was that women growers are participants in strategic farm decisions but less so in operational decisions, partly due to their lack of relevant agronomic knowledge. This hindered their use of CottonLOGIC, despite creative attempts to modify it. The study endorsed the existence of gender differences and inequalities in rural Australia. Nevertheless, the study also found that the women are valued for their roles as business partners in the multidisciplinary nature of farm management. All the same, there was evidence that greater collaboration and cooperation by farm partners and advisors would improve business outcomes. On the whole, however, women cotton growers are not passive agents but take responsibility for their own futures. In particular, DSS tools such as CottonLOGIC are instrumental in enabling women cotton growers to adapt to, challenge, and influence farm management practices in the family farm enterprise, just as CottonLOGIC is itself shaped and reshaped. Hence, a practical contribution of this study is to provide non-prescriptive guidelines for the improved adoption of agricultural DSS, particularly by rural women, as well as increasing awareness of the worth of their roles as family farm business partners.
169

Acquisition of Fuzzy Measures in Multicriteria Decision Making Using Similarity-based Reasoning

Wagholikar, Amol S, N/A January 2007 (has links)
Continuous development has been occurring in the area of decision support systems. Modern systems focus on applying decision models that can provide intelligent support to the decision maker. These systems focus on modelling the human reasoning process in situations requiring decision. This task may be achieved by using an appropriate decision model. Multicriteria decision making (MCDM) is a common decision making approach. This research investigates and seeks a way to resolve various issues associated with the application of this model. MCDM is a formal and systematic decision making approach that evaluates a given set of alternatives against a given set of criteria. The global evaluation of alternatives is determined through the process of aggregation. It is well established that the aggregation process should consider the importance of criteria while determining the overall worth of an alternative. The importance of individual criteria and of sub-sets of the criteria affects the global evaluation. The aggregation also needs to consider the importance of the sub-set of criteria. Most decision problems involve dependent criteria and the interaction between the criteria needs to be modelled. Traditional aggregation approaches, such as weighted average, do not model the interaction between the criteria. Non-additive measures such as fuzzy measures model the interaction between the criteria. However, determination of non-additive measures in a practical application is problematic. Various approaches have been proposed to resolve the difficulty in acquisition of fuzzy measures. These approaches mainly propose use of past precedents. This research extends this notion and proposes an approach based on similarity-based reasoning. Solutions to the past problems can be used to solve the new decision problems. This is the central idea behind the proposed methodology. The methodology itself applies the theory of reasoning by analogy for solving MCDM problems. 
This methodology uses a repository of cases of past decision problems. This case base is used to determine the fuzzy measures for the new decision problem. This work also analyses various similarity measures. The illustration of the proposed methodology in a case-based decision support system shows that interactive models are suitable tools for determining fuzzy measures in a given decision problem. This research makes an important contribution by proposing a similarity-based approach for acquisition of fuzzy measures.
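The aggregation step the abstract describes, combining criterion scores under a non-additive (fuzzy) measure so that interactions between criteria affect the result, is the Choquet integral. A minimal sketch, with a hypothetical two-criterion measure chosen purely for illustration:

```python
# Choquet integral of criterion scores w.r.t. a fuzzy (non-additive) measure.
# mu maps frozensets of criteria to [0, 1], with mu(all) = 1, mu(empty) = 0.

def choquet(scores, mu):
    """Aggregate {criterion: score} using fuzzy measure mu."""
    order = sorted(scores, key=scores.get)  # criteria by ascending score
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        coalition = frozenset(order[i:])    # criteria scoring >= current one
        total += (scores[c] - prev) * mu[coalition]
        prev = scores[c]
    return total

scores = {"price": 0.4, "quality": 0.9}
mu = {frozenset({"price", "quality"}): 1.0,
      frozenset({"price"}): 0.3,
      frozenset({"quality"}): 0.5}  # sub-additive: the criteria are redundant
print(choquet(scores, mu))  # 0.65
```

When the measure is additive, the Choquet integral reduces to a plain weighted average; the acquisition problem the thesis addresses is precisely how to obtain the 2^n coalition values of mu, here by similarity to past cases.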
170

An Agent-based hybrid framework for decision making on complex problems.

Zhang, Zili, mikewood@deakin.edu.au January 2001 (has links)
Electronic commerce and the Internet have created demand for automated systems that can make complex decisions utilizing information from multiple sources. Because the information is uncertain, dynamic, distributed, and heterogeneous in nature, these systems require a great diversity of intelligent techniques including expert systems, fuzzy logic, neural networks, and genetic algorithms. However, in complex decision making, many different components or sub-tasks are involved, each of which requires different types of processing. Thus multiple such techniques are required resulting in systems called hybrid intelligent systems. That is, hybrid solutions are crucial for complex problem solving and decision making. There is a growing demand for these systems in many areas including financial investment planning, engineering design, medical diagnosis, and cognitive simulation. However, the design and development of these systems is difficult because they have a large number of parts or components that have many interactions. From a multi-agent perspective, agents in multi-agent systems (MAS) are autonomous and can engage in flexible, high-level interactions. MASs are good at complex, dynamic interactions. Thus a multi-agent perspective is suitable for modeling, design, and construction of hybrid intelligent systems. The aim of this thesis is to develop an agent-based framework for constructing hybrid intelligent systems which are mainly used for complex problem solving and decision making. Existing software development techniques (typically, object-oriented) are inadequate for modeling agent-based hybrid intelligent systems. There is a fundamental mismatch between the concepts used by object-oriented developers and the agent-oriented view. Although there are some agent-oriented methodologies such as the Gaia methodology, there is still no specifically tailored methodology available for analyzing and designing agent-based hybrid intelligent systems. 
To this end, a methodology is proposed, which is specifically tailored to the analysis and design of agent-based hybrid intelligent systems. The methodology consists of six models - role model, interaction model, agent model, skill model, knowledge model, and organizational model. This methodology differs from other agent-oriented methodologies in its skill and knowledge models. As good decisions and problem solutions are mainly based on adequate information, rich knowledge, and appropriate skills to use knowledge and information, these two models are of paramount importance in modeling complex problem solving and decision making. Following the methodology, an agent-based framework for constructing hybrid intelligent systems for complex problem solving and decision making was developed. The framework has several crucial characteristics that differentiate this research from others. Four important issues relating to the framework are also investigated. These cover the building of an ontology for financial investment, matchmaking in middle agents, reasoning in problem solving and decision making, and decision aggregation in MASs. The thesis demonstrates how to build a domain-specific ontology and how to access it in a MAS by building a financial ontology. It is argued that the practical performance of service provider agents has a significant impact on the matchmaking outcomes of middle agents. It is proposed to consider service provider agents' track records in matchmaking. A way to provide initial values for the track records of service provider agents is also suggested. The concept of ‘reasoning with multimedia information’ is introduced, and reasoning with still image information using symbolic projection theory is proposed.
How to choose suitable aggregation operations is demonstrated through financial investment application and three approaches are proposed - the stationary agent approach, the token-passing approach, and the mobile agent approach to implementing decision aggregation in MASs. Based on the framework, a prototype was built and applied to financial investment planning. This prototype consists of one serving agent, one interface agent, one decision aggregation agent, one planning agent, four decision making agents, and five service provider agents. Experiments were conducted on the prototype. The experimental results show the framework is flexible, robust, and fully workable. All agents derived from the methodology exhibit their behaviors correctly as specified.
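One simple form the decision aggregation above can take, in the spirit of the stationary agent approach, is a track-record-weighted vote over the decision-making agents' recommendations. The agent names, weights, and options below are invented for illustration and are not the thesis's actual aggregation operators:

```python
# Toy decision aggregation: one aggregation agent collects one recommendation
# per decision-making agent and weights each vote by that agent's track record.

def aggregate(recommendations, track_records):
    """Pick the option with the highest total track-record weight."""
    totals = {}
    for agent, option in recommendations.items():
        totals[option] = totals.get(option, 0.0) + track_records[agent]
    return max(totals, key=totals.get)

votes = {"fund_agent": "bonds", "stock_agent": "stocks", "risk_agent": "bonds"}
records = {"fund_agent": 0.8, "stock_agent": 0.9, "risk_agent": 0.6}
print(aggregate(votes, records))  # 'bonds' (0.8 + 0.6 > 0.9)
```

The token-passing and mobile agent approaches would distribute this same computation across agents rather than centralizing it, trading communication pattern against a single point of failure.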
