  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
351

Variability Monitoring for Clinical Applications

Bravi, Andrea January 2014 (has links)
Current monitoring tools in intensive care units focus on displaying physiologically monitored parameters (e.g. vital signs such as heart rate, respiratory rate and blood pressure) at the present moment. Added clinical utility can be found by analyzing how a patient's condition evolves over time, and automatically relating those dynamics to population trends. Variability analysis consists of monitoring patterns of variation over intervals in time of physiological signals such as heart rate and respiratory rate. Given that illness has been associated in multiple studies with altered variability, most commonly a lack of variation, variability monitoring represents a tool whose contribution at the bedside still needs to be explored. With the long-term objective of improving care, this thesis promotes the use of variability analysis through three distinct types of analysis: facing the technical challenges involved with the dimensionality of variability analysis, enhancing the physiological understanding of variability, and showing its utility in real-world clinical applications.
In particular, the contributions of this thesis include: the review and classification into domains of a large array of measures of variability; the design of systems and methods to integrate multiple measures of variability into a unique score, called a composite measure, bringing relevant information to specific clinical problems; the comparison of patterns of heart rate variability during exercise and sepsis development, showing the inability of single measures of variability to discriminate between the two kinds of stressors; the analysis of variability produced from a physiologically-based model of the cardiovascular system, showing that each single measure of variability is an unspecific sensor of the body, thereby promoting multivariate analysis as the only means of understanding the physiology underlying variability; the study of heart rate variability in a population at high risk of sepsis development, showing the ability of variability to predict the occurrence of sepsis more than 48 hours before the clinical team's diagnosis; and the study of heart and respiratory rate variability in intubated intensive care unit patients, showing how variability can provide a better way of assessing extubation readiness than commonly used clinical parameters. Overall, it is hoped that these novel contributions will help promote bedside applications of variability monitoring to improve patient care.
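The time-domain variability measures and the composite-measure idea described above can be sketched as follows. This is a minimal illustration, not the thesis's method: the RR-interval values and the equal weights are invented for the example, and the thesis integrates a much larger array of measures into its composite score.

```python
import numpy as np

def hrv_measures(rr_ms):
    """Two common time-domain heart rate variability measures
    computed from a series of RR intervals (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                 # overall variability (SDNN)
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))  # beat-to-beat variability (RMSSD)
    return float(sdnn), float(rmssd)

def composite_score(measures, weights):
    """Toy composite measure: a weighted combination of individual
    variability measures (a stand-in for the thesis's integration of
    many measures into one score)."""
    m = np.asarray(measures, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.dot(m, w))

rr = [812, 790, 850, 805, 830, 795, 820]   # hypothetical RR series
sdnn, rmssd = hrv_measures(rr)
score = composite_score([sdnn, rmssd], [0.5, 0.5])
```

In practice the individual measures would be normalized against population data before combination; the weights here are arbitrary.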
352

Towards a usability knowledge base to support health information technology design and evaluation: Application to Medication Alerting Systems

Marcilly, Romaric 15 October 2014 (has links)
Health Information Technology (HIT) is increasingly implemented to improve healthcare quality and patient safety. However, usability issues may reduce its impact and even induce new problems, including patient safety issues. To avoid these negative outcomes, HIT usability must, among other actions, be improved, which requires applying validated usability knowledge. However, usability knowledge applied to HIT is scattered across several sources, is not structured and is hardly usable. Moreover, its coverage of known usability flaws is not established. This work has two aims: (i) to participate in improving the accumulation of usability knowledge for HIT and (ii) to provide synthetic, structured, easy-to-use HIT usability knowledge with a clear coverage. Both aims are applied to medication alerting systems.
Method. Two independent analyses of the literature were performed. On the one hand, usability flaws and their consequences for clinicians and the work system were identified and organized; on the other hand, existing usability design principles specific to medication alerting systems were synthesized. The results of both analyses were then matched against each other to establish the coverage of the principles in terms of usability flaws.
Results. A systematic review identified 13 types of usability flaws in medication alerting systems. Their consequences for clinicians and the work system are varied and harmful (e.g., alert fatigue, alert misinterpretation). Sixty-three usability design principles dedicated to medication alerting systems were identified, organized into six themes: improve the signal-to-noise ratio, fit clinicians' workflow, support collaborative work, display relevant information, make the system transparent and provide useful tools. The match between the stated usability design principles and the usability flaws actually observed is quite good.
Discussion. As a result of this work, a list of usability design principles illustrated by actual instances of their violation was developed. It may help designers and Human Factors experts understand and apply usability design principles when designing and evaluating medication alerting systems. Usability applied to HIT is a recent research field that suffers from a deficit of structured, capitalized knowledge. This work shows that it is possible to accumulate and structure HIT usability knowledge, and it could be carried on by developing a usability knowledge base dedicated to HIT in order to strive towards "evidence-based usability".
353

Production planning in JS McMillan Fisheries Ltd. : catch allocation decision support tool design

Begen, Mehmet Atilla 05 1900 (has links)
JS McMillan Fisheries Ltd. (JSM) is a Vancouver-based company with operations in nearly all levels of the commercial fishing industry, from supply through distribution. The heart of the operation is the processing facilities, where freshly caught Pacific salmon are prepared for sale to end consumers and institutional buyers. As JSM's operations evolved, the decision making for allocating a catch of salmon with varying characteristics amongst a set of final products became too complex and time consuming. The focus of this study is to determine an effective and efficient method for JSM to allocate the daily fresh salmon harvest among the various products they produce. The goal is short-term production planning: to allocate the catch among the products in such a manner that the profit potential of the catch is maximized, i.e. to prepare a production schedule that maximizes the total profit over the planning horizon. Additional goals of this project include automation of the decision-making process for the catch allocation, "what if" planning, decreasing expert dependency, reducing decision-making time, and building a practical and innovative decision support tool. To solve this problem efficiently and effectively, optimization models were developed for allocating the catch to the end products, and a corresponding decision support tool was built for the end-users at JSM. / Business, Sauder School of / Graduate
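The allocation idea can be illustrated with a toy heuristic: allocate the catch to products in descending order of profit per kilogram, up to each product's demand cap. The thesis builds full optimization models rather than this greedy simplification, and the product names, margins, and demand caps below are hypothetical.

```python
def allocate_catch(supply_kg, products):
    """Greedy sketch of catch allocation: fill the most profitable
    products first, respecting each product's demand cap.
    products: list of (name, profit_per_kg, max_demand_kg)."""
    remaining = supply_kg
    plan, profit = {}, 0.0
    for name, margin, demand in sorted(products, key=lambda p: -p[1]):
        qty = min(remaining, demand)       # cannot exceed supply or demand
        plan[name] = qty
        profit += qty * margin
        remaining -= qty
        if remaining == 0:
            break
    return plan, profit

# Hypothetical daily figures: 600 kg of catch, three product lines.
products = [("fresh fillet", 4.0, 300), ("smoked", 5.5, 150), ("canned", 1.2, 1000)]
plan, profit = allocate_catch(600, products)
```

A real model must also handle fish-grade compatibility, processing capacity, and multi-period planning, which is why a linear or integer program is used in practice rather than a greedy pass.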
354

Empirical evaluation of optimization techniques for classification and prediction tasks

Leke, Collins Achepsah 27 March 2014 (has links)
M.Ing. (Electrical and Electronic Engineering) / Missing data is an issue which leads to a variety of problems in the analysis and processing of datasets in almost every aspect of day-to-day life. For this reason, missing data and ways of handling it have become an active area of research across a variety of disciplines. This thesis presents a method aimed at finding approximations to missing values in a dataset by making use of Genetic Algorithms (GA), Simulated Annealing (SA), Particle Swarm Optimization (PSO), Random Forests (RF) and Negative Selection (NS) in combination with auto-associative neural networks, and also provides a comparative analysis of these algorithms. The suggested methods use the optimization algorithms to minimize an error function derived from training an auto-associative neural network, during which the interrelationships between the inputs and the outputs are obtained and stored in the weights connecting the different layers of the network. The error function is expressed as the square of the difference between the actual observations and the values predicted by the auto-associative neural network. In the event of missing data, not all values of the actual observations are known; hence the error function is decomposed to depend on the known and unknown variable values. Multi-Layer Perceptron (MLP) neural networks are trained using the Scaled Conjugate Gradient (SCG) method. The research primarily focuses on predicting missing data entries from two datasets, the Manufacturing dataset and the Forest Fire dataset. Prediction is a representation of how things will occur in the future based on past occurrences and experiences.
The research also investigates the use of the proposed technique to approximate and classify missing data from five classification datasets: the Australian Credit, German Credit, Japanese Credit, Heart Disease and Car Evaluation datasets. It further investigates the impact of using different neural network architectures when training the network and approximating the missing values, and of using the best possible architecture for evaluation purposes. The research reveals that the approximations obtained by applying the proposed models to the Manufacturing dataset are accurate, with the correlation between the actual missing values and the corresponding approximations ranging between 94.7% and 95.2%, with the exception of the Negative Selection algorithm, which resulted in a correlation coefficient of 49.6%. On the Forest Fire dataset, the correlation between the actual missing values and the corresponding approximations was low, in the range of 0.95% to 4.49%, due to the nature of the variable values in that dataset; on this dataset the Negative Selection algorithm produced a fully negative correlation (100%) between the actual and approximated values. The approximations found for missing data are also observed to depend on the particular neural network architecture employed. Further analysis revealed that the Random Forest algorithm on average performed better than the GA, SA, PSO and NS algorithms, yielding the lowest Mean Square Error, Root Mean Square Error and Mean Absolute Error values. At the other end of the scale, the NS algorithm produced the highest values for all three error metrics (for which lower values indicate better performance).
The evaluation of the algorithms on the classification datasets revealed that the Random Forest algorithm was the most accurate at classifying new observations on the basis of the training data, yielding the highest AUC values on all five classification datasets. The differences between its AUC values and those of the GA, SA, PSO and NS algorithms were statistically significant, most markedly in the comparison between the Random Forest and Negative Selection algorithms. The GA, SA and PSO algorithms produced AUC values that differed little from one another across the five datasets. Overall, the algorithm that performed best on both the prediction and classification problems was the Random Forest algorithm, while the Negative Selection algorithm performed worst, producing the highest error-metric values for the prediction problems and the lowest AUC values for the classification problems.
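The core mechanism above — imputing a missing entry by minimizing the trained network's reconstruction error over the unknown values only — can be sketched as follows. This is a simplified illustration: a fixed linear map stands in for a trained auto-associative MLP, and a seeded random search stands in for the GA/SA/PSO optimizers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained auto-associative network: a fixed linear map.
# (The thesis trains an actual MLP autoencoder with SCG first.)
W = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])

def reconstruction_error(x):
    """Squared difference between input and network output — the
    objective the optimization algorithms minimize."""
    return float(np.sum((x - W @ x) ** 2))

def impute(record, missing_idx, n_samples=5000):
    """Random-search stand-in for the evolutionary optimizers: sample
    candidate values for the missing entry, keep the one with the
    lowest reconstruction error."""
    best_val, best_err = None, np.inf
    for _ in range(n_samples):
        cand = rng.uniform(-2, 2)
        x = record.copy()
        x[missing_idx] = cand
        err = reconstruction_error(x)
        if err < best_err:
            best_val, best_err = cand, err
    return best_val

x = np.array([1.0, 0.5, 0.0])     # third entry treated as missing
estimate = impute(x, missing_idx=2)
```

Because only the missing components vary during the search, the error function effectively decomposes into a part fixed by the known values and a part driven by the candidates, which is exactly what makes the optimization formulation workable.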
355

Utilisation of decision support systems in financial institutions : an analysis of methods and trends.

Rong, R. P. 13 February 2014 (has links)
M.Comm. (Business Management) / The objectives of this research project were identified as follows:
• To identify the possible use of Decision Support Systems in financial institutions through a literature study, with attention given to:
  • the relationship of decision theory to Decision Support Systems,
  • the theory of Decision Support Systems,
  • how Decision Support Systems are currently used, and
  • trends in the use of Decision Support Systems.
• To identify the use and awareness of Decision Support Systems in a spectrum of financial institutions in South Africa by interviewing selected individuals from a number of financial institutions; the interviews were necessary due to the lack of specific South African literature. Aspects that were investigated and analysed are:
  • the current use of Decision Support Systems,
  • the planned implementation of Decision Support Systems, and
  • the comparison of international practices with the situation in South Africa.
The institutions targeted were two of the major banks, a niche bank, an insurance company and a stock-broking company.
356

Decision support systems design in the Microsoft Office environment

Kukač, Martin January 2011 (has links)
The main topic of this thesis is the design and implementation of decision support systems in the environment of Microsoft Excel and Microsoft Access by a programmer or advanced user. The thesis provides basic theory regarding the principles and history of decision support systems. It explains the technologies that form the backbone of the Microsoft Office suite, applies them to create a set of "best practices" for designing and programming decision support systems, and demonstrates this set with two sample implementations. Both implementations are compared, and the advantages and disadvantages of each are shown.
357

Understanding and supporting pricing decisions using multicriteria decision analysis: an application to antique silver in South Africa

Stephens, Jed 25 February 2021 (has links)
This dissertation presents an application of multicriteria decision analysis to understand and support pricing decisions in fields where goods are unique and described by their characteristics. The specific application area of this research is antique silver objects, where a complete iteration of the multicriteria decision process is performed. This includes two problem structurings using SODA, which provide rich detail on this application area. Multi-attribute additive models are constructed, with attribute partial value functions elicited using different methods: directly (bisection methods), indirectly (MACBETH and linear interpolation) and with discrete choice experiments. The applicability and advantages of each method are discussed. Additionally, an open-source R package implementing the design of discrete choice experiments is created. The multi-attribute models provide key insights into decision makers' reasoning about price, and contrasting different decision makers' models explains the market. A risk-averse relationship between multicriteria model score and price is characterised and various inverse utility functions are investigated. Two decision support systems are fully developed to address the needs of Cape silver decision makers in South Africa.
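A multi-attribute additive model of the kind described above can be sketched in a few lines. The attributes, partial value functions, and weights below are invented for illustration; in the dissertation the partial value functions are elicited with bisection, MACBETH, and discrete choice experiments rather than written down directly.

```python
def additive_value(item, partial_value_fns, weights):
    """Multi-attribute additive model: the overall score is the
    weighted sum of partial value functions, one per attribute."""
    return sum(w * partial_value_fns[a](item[a]) for a, w in weights.items())

# Hypothetical attributes for an antique silver object, each mapped
# onto a 0..1 partial value scale.
partial = {
    "age":   lambda years: min(years / 300, 1.0),  # older scores higher, capped
    "mass":  lambda grams: min(grams / 1000, 1.0),
    "maker": lambda known: 1.0 if known else 0.3,  # known maker's mark
}
weights = {"age": 0.5, "mass": 0.2, "maker": 0.3}  # must sum to 1

item = {"age": 150, "mass": 400, "maker": True}
score = additive_value(item, partial, weights)
```

Relating such a score to observed prices is then a separate modeling step; the dissertation characterises that relationship as risk-averse rather than linear.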
358

Collaborative Dispatching of Commercial Vehicles

Goel, Asvin, Gruhn, Volker 17 January 2019 (has links)
Collaborative dispatching allows several dispatchers to view the routing solution as a dynamic model in which changes to the vehicle routes can be made in real time. In this paper we discuss the implications of collaborative dispatching for real-time decision support tools for motor carriers. We present a collaborative dispatching system which uses real-time information obtained from a telematics system. Messages sent from the vehicles are automatically analysed, and actual data, such as exact arrival and departure times, as well as discrepancies between actual and planned data, are identified. The collaborative dispatching system allows not only several dispatchers but also a dynamic optimisation method to concurrently modify the schedule. The optimisation method is capable of taking into account that input data may change at any time and that dispatchers can concurrently modify the schedule and may add or relax certain constraints relevant to the optimisation model.
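The automatic analysis of telematics messages — comparing actual arrival times against the plan and flagging discrepancies — might be sketched as follows. The stop names, times, and the 15-minute tolerance are illustrative assumptions, not details from the paper.

```python
from datetime import datetime, timedelta

def find_discrepancies(planned, actual, tolerance=timedelta(minutes=15)):
    """Flag stops whose actual arrival deviates from the planned time
    by more than a tolerance, as a telematics message analyser might."""
    flagged = {}
    for stop, planned_time in planned.items():
        actual_time = actual.get(stop)
        if actual_time is not None and abs(actual_time - planned_time) > tolerance:
            flagged[stop] = actual_time - planned_time  # signed delay
    return flagged

planned = {"depot": datetime(2019, 1, 17, 8, 0),
           "customer_a": datetime(2019, 1, 17, 10, 0)}
actual = {"depot": datetime(2019, 1, 17, 8, 5),
          "customer_a": datetime(2019, 1, 17, 10, 40)}
late = find_discrepancies(planned, actual)
```

In the collaborative setting, each flagged discrepancy would trigger a re-optimisation pass and be pushed to all dispatchers viewing the shared schedule.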
359

Organizational Competency Through Information: Business Intelligence and Analytics as a Tool for Process Dynamization

Torres, Russell 08 1900 (has links)
The data produced and collected by organizations represent both challenges and opportunities for the modern firm. Business intelligence and analytics (BI&A) comprises a wide variety of information management technologies and information-seeking activities designed to exploit these information resources. As a result, BI&A has been heralded as a source of improved organizational outcomes in both the academic and practitioner literature, and these technologies are among the largest continuous IT expenditures made over the last decade. Despite the interest in BI&A, there is not enough theorizing about its role in improving firm performance. Scholarly investigations of the link between BI&A and organizational benefits are scarce and primarily exploratory in nature. Further, the majority of the extant research on BI&A is techno-centric, conceptualizing BI&A primarily as an organizational technical asset. This study seeks to explicate the relationship between BI&A and improved organizational outcomes by viewing this phenomenon through the lens of dynamic capabilities, a promising theoretical perspective from the strategic management discipline. In so doing, this research reframes BI&A as an organizational capability, rather than simply a technical resource. Guided by a comprehensive review of the BI&A and dynamic capabilities literature, as well as a series of semi-structured focus groups with senior-level business practitioners with BI&A experience, this study develops and tests a model of BI&A-enabled firm performance. Using a snowball sample, an online survey was administered to 137 business professionals in 24 industries. The data were analyzed using partial least squares (PLS) structural equation modeling (SEM). The findings support the contention that BI&A serves as the sensing and seizing components of an organizational dynamic capability, while transformation is achieved through business process change capability.
These factors influence firm financial performance through their impact on the functional performance of the firm’s business processes. Further, this study demonstrates that traditional BI&A success factors are positively associated with BI&A sensing capability. This study makes several important contributions to BI&A research. First, this study addresses a gap in the scholarly literature by establishing a theoretical framework for the role of BI&A in achieving firm performance which is grounded in an established strategic management theory. Second, by drawing on the sense-seize-transform view of dynamic capabilities, this dissertation proposes a new conceptualization of BI&A as sensing and seizing organizational capabilities. Third, this research links the use of BI&A to improved organizational outcomes through the transformation of business processes, consistent with the view that the value of IT is derived from its impact on the value generating processes of the firm. Fourth, by viewing BI&A and business process change as distinct but inter-related components of dynamic capabilities, this research clarifies the role of BI&A in the dynamization of organizational processes, providing insight into the relationship between BI&A and business agility. Finally, this dissertation shows how BI&A capabilities are related to BI&A success factors identified in prior research.
360

Water harvesting through ponds in the Arco Seco region of the Republic of Panama : decision support system for pond storage capacity estimation

Desrochers, Anne January 2004 (has links)
No description available.
