1 |
An efficient content-based searching engine for medical image database / Lee, Chi-hung, 李志鴻 / January 1998
Published or final version / Computer Science / Master of Philosophy
|
2 |
The interaction between context and technology during information systems development (ISD): action research investigations in two health settings / Chiasson, Mike / 11 1900
Software development and implementation failure is perceived by developers and users
as a serious problem. Of every six new software development projects, two are abandoned, the
average project lasts 50% longer than expected, and 75% of large systems are "operating
failures" that are rejected or perform poorly. Design failure contributes to the productivity
paradox, where increased investment in information technology (IT) has not correlated with
improvements in productivity. Many IS researchers state that further research examining the
interaction between technology and context during information system development (ISD) is
required. This current study is motivated by these calls for research.
The marriage of information systems and health research also provides a second
motivation. The deployment and diffusion of IT can contribute to the effective utilization of
health resources. Another motivation of the thesis is to explore the effect of information
systems on disease prevention, and provide an opportunity to develop and diffuse IT tools that
promote health.
To address these two motivations, two case studies of ISD in two health settings are
described. The first case study involved the initiation and development of an electronic patient
record in two outpatient clinics specializing in heart disease prevention and rehabilitation
(SoftHeart). The second case study involved the development of a Windows-based
multimedia software application that assists the planning of breast cancer educational and policy programs
in communities. The first case study covered four years (Summer of 1992 to Spring of 1996)
and the second case covered one year (Spring of 1995 to Spring of 1996). The purpose of the
thesis is to generate hypotheses for future research in ISD.
Both studies employed an "action research" approach where the researcher was
directly involved with software design and programming. Data from interviews, meeting
minutes, field notes, design and programming notes, and other documentation were collected
from both studies and triangulated to provide valid interpretations. Important and illustrative
technology-context events are extracted from the cases to uncover processes between
technology and context during stages of development. Processes are compared with four
theories linking technology and context: technological imperative and organizational
imperative (unidirectional), and emergent perspective and social technology (bi-directional).
These processes are then combined to reach tentative conclusions about the ISD process.
Key findings indicate an interplay between a small number of unidirectional processes
(organizational and technology imperative) and a large number of bi-directional theories
(social technology and emergence). Overall, the emergent perspective described or
participated in describing a majority of the processes, given the developer's perspective,
extraction and interpretation of these key processes. In both cases, the ISD trajectory was
best described as emergent.
The result of within-case and cross-case analysis is a model integrating the four
technology-context theories depending on stakeholder agreement and the adaptability of
technology during development and use. Dynamics and change in task, technology, and
stakeholder configurations are explained by the deliberate or accidental interaction of new and
old stakeholders, technology, ideas, agreements and/or tasks over time. Implications for
research and practice are discussed. / Business, Sauder School of / Management Information Systems, Division of / Graduate
|
3 |
The synthesis of a mobile computerized health testing system / Stumph, Stephen Lynn / 08 1900
No description available.
|
4 |
A computer based data acquisition and analysis system for a cardiovascular research laboratory / Suwarno, Neihl Omar, 1963- / January 1989
No description available.
|
5 |
A computerized information system for pathology / Hercz, Lawrence / January 1974
No description available.
|
6 |
'n Ekspertstelsel vir die beheer van pneumonie in 'n kritiese sorg eenheid [An expert system for the control of pneumonia in a critical care unit] / Schoeman, Isabella Lodewina / 14 August 2012
M.Sc. / Surgical patients admitted to an intensive care unit are susceptible to infection by a large number of micro-organisms. Host defence mechanisms are breached by severe injuries or operations, or the use of life-support systems such as ventilators, catheters and endotracheal tubes. These organisms, some of which are resistant to antibiotics, can therefore invade sterile tissue. Although tissue samples from infected sites are sent to a laboratory to be analyzed, treatment of the patient has to commence before the results are known. Intelligent computer systems, of which expert systems are one of the most popular applications, can be utilized to support diagnostic and therapeutic decisions. This thesis describes the development of an expert system that supports clinical decision-making in the diagnosis and treatment of hospital-acquired pneumonia in an intensive care unit. Input data required by the expert system module are extracted from a database of patient records. The database and the expert system module communicate by means of a program written in a conventional programming language. The system, which is only a prototype, can be extended to include additional expert system modules addressing other infections. Acquiring knowledge to be encoded in the expert system's knowledge base remains a problem. In this case an existing scoring system that assigns weights to measurements and the outcomes of certain investigations is used to obtain a score according to which pneumonia can be diagnosed. The infection is subsequently classified into one of several categories, according to existing guidelines. Appropriate therapy is recommended. The system can also consult a file containing the sensitivities of the unit's bacteria to antibiotics, in order to facilitate the choice of drugs. The system has been implemented and tested with a few cases.
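A minimal sketch of how a weighted scoring rule of this kind can be expressed in code; the criteria, weights and threshold below are invented for illustration and are not the scoring system, categories or guidelines actually used in the thesis:

```python
# Hypothetical weighted clinical scoring rule; all criteria, weights and the
# diagnostic threshold are invented for the example, NOT the thesis's actual system.

def pneumonia_score(temp_c, wbc_per_ul, purulent_secretions, pao2_fio2, new_infiltrate):
    """Combine bedside measurements and investigation outcomes into an additive score."""
    score = 0
    if temp_c >= 38.5:
        score += 1                                 # fever
    if wbc_per_ul < 4000 or wbc_per_ul > 11000:
        score += 1                                 # abnormal white blood cell count
    if purulent_secretions:
        score += 2                                 # purulent tracheal secretions
    if pao2_fio2 < 240:
        score += 2                                 # impaired oxygenation
    if new_infiltrate:
        score += 2                                 # new infiltrate on chest X-ray
    return score

def classify(score, threshold=6):
    """Map the score to a tentative diagnostic category."""
    return "pneumonia likely" if score >= threshold else "pneumonia unlikely"

if __name__ == "__main__":
    s = pneumonia_score(39.0, 13500, True, 210, True)
    print(s, classify(s))
```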
|
7 |
Toward a novel predictive analysis framework for new-generation clinical decision support systems / Mazzocco, Thomas / January 2014
The idea of developing automated tools able to deal with the complexity of clinical information processing dates back to the late 60s: since then, there has been scope for improving medical care due to the rapid growth of medical knowledge, and the need to explore new ways of delivering this due to the shortage of physicians. Clinical decision support systems (CDSS) are able to aid in the acquisition of patient data and to suggest appropriate decisions on the basis of the data thus acquired. Many improvements are envisaged from the adoption of such systems, including: reduction of costs through faster diagnosis, reduction of unnecessary examinations, reduction of the risk of adverse events and medication errors, increase in the time available for direct patient care, improved medication and examination prescriptions, improved patient satisfaction, and better compliance with gold-standard, up-to-date clinical pathways and guidelines.
Logistic regression is a widely used algorithm which frequently appears in the medical literature for building clinical decision support systems; however, published studies have frequently not followed commonly recommended procedures for using logistic regression, and substantial shortcomings in the reporting of logistic regression results have been noted. The published literature has often accepted conclusions from studies which have not addressed the appropriateness and accuracy of the statistical analyses and other methodological issues, leading to design flaws in those models and to possible inconsistencies in the novel clinical knowledge based on such results.
The main objective of this interdisciplinary work is to design a sound framework for the development of clinical decision support systems. We propose a framework that supports the proper development of such systems, and in particular the underlying predictive models, identifying best practices for each stage of the model's development. This framework is composed of a number of subsequent stages: 1) dataset preparation ensures that appropriate variables are presented to the model in a consistent format; 2) the model construction stage builds the actual regression (or logistic regression) model, determining its coefficients and selecting statistically significant variables; this phase is generally preceded by a pre-modelling stage during which model functional forms are hypothesized based on a priori knowledge; 3) the model validation stage investigates whether the model could suffer from overfitting, i.e., good accuracy on training data but significantly lower accuracy on unseen data; 4) the evaluation stage gives a measure of the predictive power of the model (making use of the ROC curve, which allows the predictive power to be evaluated without any assumptions on error costs, and possibly R2 from regressions); 5) misclassification analysis can suggest useful insights into where the model could be unreliable; 6) the implementation stage.
The proposed framework has been applied to three applications in different domains, with a view to improving on previous research studies. The first developed model predicts mortality within 28 days for patients suffering from acute alcoholic hepatitis. The aim of this application is to build a new predictive model that can be used in clinical practice to identify patients at greatest risk of mortality within 28 days, as they may benefit from aggressive intervention, and to monitor their progress while in hospital.
A comparison generated by state-of-the-art tools shows improved predictive power, demonstrating how appropriate variable inclusion may result in an overall better accuracy of the model, which increased by 25% following an appropriate variable selection process.
The second proposed predictive model is designed to aid the diagnosis of dementia, as clinicians often experience difficulties in the diagnosis of dementia due to the intrinsic complexity of the process and the lack of comprehensive diagnostic tools. The aim of this application is to improve on the performance of a recent application of Bayesian belief networks using an alternative approach based on logistic regression. The approach based on statistical variable selection outperformed the model which used variables selected by domain experts in previous studies; the obtained results outperform the considered benchmarks by 15%.
The third model predicts the probability of experiencing a certain symptom among common side-effects in patients receiving chemotherapy. The newly developed model includes a pre-modelling stage (based on previous research studies) and a subsequent regression. The accuracy of the results (computed on a daily basis for each cycle of therapy) shows that the newly proposed approach has increased its predictive power by 19% when compared to the previously developed model; this has been obtained by appropriate use of available a priori knowledge to pre-model the functional forms.
As shown by the proposed applications, different aspects of CDSS development are subject to substantial improvement: the application of the proposed framework to different domains leads to more accurate models than the existing state-of-the-art proposals. The developed framework is capable of helping researchers to identify and overcome possible pitfalls in their ongoing research work, by providing them with best practices for each step of the development process. An impact on the development of future clinical decision support systems is envisaged: the use of an appropriate procedure in model development will produce more reliable and accurate systems, and will have a positive impact on the newly produced medical knowledge, which may eventually be included in standard clinical practice.
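As a rough illustration of the modelling stages described above, the following sketch walks through model construction, held-out validation and ROC-based evaluation with scikit-learn on a synthetic dataset; the dataset, variable counts and split sizes are assumptions made for the example, not the thesis's actual studies:

```python
# Illustrative walk-through of stages 2-4 of the framework (model construction,
# held-out validation against overfitting, ROC-based evaluation) on synthetic data;
# sample sizes, feature counts and split ratios are arbitrary example choices.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stage 1 (dataset preparation) is stubbed here with an already-clean synthetic dataset.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 2: model construction -- fit the logistic regression and estimate its coefficients.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Stage 3: validation -- a large gap between training and test AUC suggests overfitting.
auc_train = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
auc_test = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# Stage 4: evaluation -- the test-set ROC AUC measures predictive power
# without assuming particular misclassification costs.
print(f"train AUC = {auc_train:.3f}, test AUC = {auc_test:.3f}")
```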
|
8 |
Use of a priori information for improved tomographic imaging in coded-aperture systems / Gindi, Gene Robert / January 1982
Coded-aperture imaging offers a method of classical tomographic imaging by encoding the distance of a point from the detector by the lateral scale of the point response function. An estimate, termed a layergram, of the transverse sections of the object can be obtained by performing a simple correlation operation on the detector data. The estimate of one transverse plane contains artifacts contributed by source points from all other planes. These artifacts can be partially removed by a nonlinear algorithm which incorporates a priori knowledge of total integrated object activity per transverse plane, positivity of the quantity being measured, and lateral extent of the object in each plane. The algorithm is iterative and contains, at each step, a linear operation followed by the imposition of a constraint. The use of this class of algorithms is tested by simulating a coded-aperture imaging situation using a one-dimensional code and two-dimensional (one axis perpendicular to aperture) object. Results show nearly perfect reconstructions in noise-free cases for the codes tested. If finite detector resolution and Poisson source noise are taken into account, the reconstructions are still significantly improved relative to the layergram. The algorithm lends itself to implementation on an optical-digital hybrid computer. The problems inherent in a prototype device are characterized and results of its performance are presented.
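A rough sketch of the iterative "linear operation followed by a constraint" scheme described above, using a generic matrix forward model and a single transverse plane; the operator, constants and test object are assumptions for illustration, not the thesis's actual coded-aperture system or algorithm:

```python
# Hypothetical sketch of an iterative "linear step + constraint" reconstruction:
# each pass applies a linear correction toward the detector data, then imposes
# positivity, a lateral-extent (support) constraint, and the known total integrated
# activity. The forward matrix A is a stand-in, not a real coded-aperture operator.
import numpy as np

def reconstruct(detector, A, support, total_activity, n_iter=200):
    """detector: measured data; A: forward (projection) matrix;
    support: boolean mask of the allowed object extent;
    total_activity: known integrated activity of the plane."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)        # conservative gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Linear operation: backprojection-style correction toward the measured data.
        x = x + step * (A.T @ (detector - A @ x))
        # Constraint 1: positivity of the reconstructed activity.
        x = np.clip(x, 0.0, None)
        # Constraint 2: zero activity outside the known lateral extent.
        x = x * support
        # Constraint 3: rescale to the known total integrated activity of the plane.
        if x.sum() > 0:
            x = x * (total_activity / x.sum())
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((40, 60))                        # stand-in system matrix
    truth = np.zeros(60)
    truth[25:35] = 1.0                              # simple one-plane test object
    support = np.zeros(60, dtype=bool)
    support[20:40] = True
    estimate = reconstruct(A @ truth, A, support, total_activity=truth.sum())
    print(np.round(estimate[22:38], 2))
```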
|