
A computer simulation of the pulmonary microvascular exchange system - alveolar flooding

Heijmans, Franciscus R. C. January 1985 (has links)
Previous models of the pulmonary microvascular exchange system (28,29) have been restricted to the study of fluid and solute exchange between the pulmonary microcirculation, interstitial tissue space, and lymphatics. In severe pulmonary edema the capacities of the lymphatics and tissue space are exceeded. The fluid and solutes entering the interstitium from the circulation will then be transported into the air space. The accumulation of fluid in the air space impairs the diffusion of gas (oxygen and carbon dioxide) between the air space and blood circulation; if this fluid accumulation is excessive, a patient's health may be compromised. In this thesis severe pulmonary edema is studied by including the air space as a fourth compartment in the interstitial model developed by Bert and Pinder (29). A computer simulation of the four-compartment (alveolar) model was developed on a digital computer. Tests of the model were made to study the effects of the parameters introduced into the alveolar model. These parameters include: a filtration coefficient that describes the alveolar membrane fluid conductivity, an extravascular fluid volume that represents the point at which fluid enters the air space, the alveolar fluid pressure at the onset of fluid flow into the air space, and the rate of alveolar fluid pressure change relative to an alveolar fluid volume change. For each case the dynamic response of the exchange system was recorded. In addition, two types of pulmonary edema were simulated: 1) hydrostatically induced edema, and 2) edema induced by changes to the fluid and solute permeability of the porous membrane separating the circulatory and interstitial compartments.
Due to the limited data available on the interaction of the air space with the other three compartments of the pulmonary microvascular exchange system, only partial verification of the appropriate range of values of the alveolar model parameters and the predictions of the simulations was possible. The alveolar model developed in this thesis is an initial approximation but appears to provide a satisfactory approach for the inclusion of the air space in the pulmonary microvascular exchange system. / Applied Science, Faculty of / Chemical and Biological Engineering, Department of / Graduate
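The threshold behaviour the abstract describes (fluid spilling into the air space only once interstitial capacity is exceeded, governed by a filtration coefficient) can be sketched as a toy compartmental simulation. All parameter values and function names below are hypothetical illustrations, not taken from the thesis:

```python
# Minimal two-compartment sketch (interstitium -> air space) of the kind of
# threshold dynamics described above. Values are illustrative only.

def simulate(k_f=0.05, v_threshold=1.2, j_in=0.02, dt=0.1, t_end=100.0):
    """Euler integration of interstitial (v_i) and alveolar (v_a) fluid volumes.

    Fluid enters the interstitium at constant rate j_in; once v_i exceeds
    v_threshold, the excess drives flow into the air space with
    conductivity (filtration coefficient) k_f.
    """
    v_i, v_a = 1.0, 0.0
    t = 0.0
    while t < t_end:
        # Flow into the air space begins only when interstitial capacity is exceeded
        j_alv = k_f * max(v_i - v_threshold, 0.0)
        v_i += (j_in - j_alv) * dt
        v_a += j_alv * dt
        t += dt
    return v_i, v_a
```

With these numbers the interstitial volume settles toward a steady state (where alveolar outflow balances inflow) while alveolar fluid accumulates, mimicking the onset of alveolar flooding.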

The estimated parameter flood forecasting model

Zachary, A. Glen January 1985 (has links)
Design flood estimates have traditionally been based on records of past events. However, there is a need for a method of estimating peak flows without these records. The Estimated Parameter Flood Forecasting Model (EPFFM) has been developed to provide such a method for small water resource projects based on a 200-year or smaller design flood. This "user-friendly" computer model calculates the expected peak flow and its standard deviation from low, probable, and high estimates of thirteen user-supplied parameters. These parameters describe physical characteristics of the drainage basin, infiltration rates, and rainstorm characteristics. The standard deviation provides a measure of reliability and is used to produce an 80% confidence interval on peak flows. The thesis briefly reviews existing flow estimation techniques and then describes the development of EPFFM. This includes descriptions of the Chicago method of rainfall hyetograph synthesis, Horton's infiltration equation, inflow by the time-area method, the Muskingum routing equation, and an approximate method of estimating the variance of multivariate equations, since these are all used by EPFFM to model the physical and mathematical processes involved. Two examples are included to demonstrate EPFFM's ability to estimate a confidence interval and to compare these estimates with recorded peak flows. / Applied Science, Faculty of / Civil Engineering, Department of / Graduate
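Two of the component methods named above have compact standard forms. Horton's equation gives infiltration capacity decaying from an initial to a final rate, and a Muskingum step routes inflow through channel storage; the parameter values here are illustrative, not from EPFFM:

```python
import math

def horton(t, f0=75.0, fc=10.0, k=0.3):
    """Horton infiltration capacity (mm/h) at time t (h): f = fc + (f0 - fc)e^{-kt}."""
    return fc + (f0 - fc) * math.exp(-k * t)

def muskingum_step(i1, i2, o1, K=2.0, x=0.2, dt=1.0):
    """One Muskingum routing step: O2 = C0*I2 + C1*I1 + C2*O1.

    K is the storage constant, x the weighting factor; the three
    coefficients sum to 1, so a steady inflow passes through unchanged.
    """
    denom = 2 * K * (1 - x) + dt
    c0 = (dt - 2 * K * x) / denom
    c1 = (dt + 2 * K * x) / denom
    c2 = (2 * K * (1 - x) - dt) / denom
    return c0 * i2 + c1 * i1 + c2 * o1
```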

Building rule-based expert systems in case-based law

Deedman, Galvin Charles January 1987 (has links)
This thesis demonstrates that it is possible to build rule-based expert systems in case-based law using a deep-structure analysis of the law and commercially available artificial intelligence tools. Nervous shock, an area of the law of negligence, was the domain chosen. The expert whose knowledge was used to build the system was Professor J.C. Smith of the Faculty of Law at the University of British Columbia. / Law, Peter A. Allard School of / Graduate

Seismic migration by Chebychev transform : a novel approach

Mitsakis, Dimitrios Michael January 1987 (has links)
Chebychev semi-discretizations for both ordinary and partial differential equations are explored. The Helmholtz, heat, Schrödinger and 15° migration equations are investigated. The Galerkin, pseudospectral and tau projection operators are employed, while the Crank-Nicolson scheme is used for the integration of the time (depth) dependence. The performance of the Chebychev scheme is contrasted with the performance of the finite difference scheme for Dirichlet and Neumann boundary conditions. Comparisons between all finite difference, Fourier and Chebychev migration algorithms are drawn as well. Chebychev expansions suffer from neither the artificial dispersion of finite difference approximations nor the demand for a periodic boundary structure of Fourier expansions. Thus, it is shown that finite difference schemes require at least one order of magnitude more points in order to match the accuracy level of the Chebychev schemes. In addition, the Chebychev migration algorithm is shown to be free of the wraparound problem, inherent in migration procedures based on the Fourier transform. / Science, Faculty of / Earth, Ocean and Atmospheric Sciences, Department of / Graduate
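The accuracy advantage claimed above rests on Chebyshev collocation being spectrally accurate (and exact for polynomials up to the grid degree). A minimal sketch of the standard Chebyshev differentiation matrix on Gauss-Lobatto points, in the usual textbook form rather than anything specific to this thesis:

```python
import numpy as np

def cheb(n):
    """Return (D, x): the Chebyshev differentiation matrix and the n+1
    Gauss-Lobatto points x_j = cos(pi*j/n), for n >= 1."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0                      # endpoint weights
    c *= (-1.0) ** np.arange(n + 1)         # alternating signs
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))             # "negative sum" trick for the diagonal
    return D, x
```

Applying D to samples of a smooth function on the Chebyshev grid differentiates it with spectral accuracy; for x², the result 2x is exact to rounding error.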

A fully automatic analytic approach to budget-constrained system upgrade

Wong, Angela Sai On January 1987 (has links)
This thesis describes the development of a software package to upgrade computer systems. The package, named OPTIMAL, solves the following problem: given an existing computer system and its workload, a budget, and the costs and descriptions of available upgrade alternatives for devices in the system, what is the most cost-effective way of upgrading and tuning the system to produce the optimal system throughput? To enhance the practicality of OPTIMAL, the research followed two criteria: i) input required by OPTIMAL must be system and workload characteristics directly measurable from the system under consideration; ii) other than gathering the appropriate input data, the package must be completely automated and must not require any specialized knowledge in systems performance evaluation to interpret the results. The output of OPTIMAL consists of the optimal system throughput under the budget constraint, the workload and system configuration (or upgrade strategy) that provide such throughput, and the cost of the upgrade. Various optimization techniques, including saturation analysis and fine tuning, have been applied to enhance the performance of OPTIMAL. / Science, Faculty of / Computer Science, Department of / Graduate
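The saturation analysis mentioned above comes from operational queueing analysis: system throughput is bounded by the busiest device, so upgrading the bottleneck is what pays off. A minimal sketch with illustrative service demands (not values from OPTIMAL):

```python
# Asymptotic throughput bounds from operational analysis:
#   X <= min(1 / D_max, N / (sum(D) + Z))
# where D are per-device service demands, N the job population,
# and Z the think time. Numbers below are hypothetical.

def throughput_bound(demands, n_jobs, think_time=0.0):
    """Upper bound on system throughput for the given service demands."""
    d_max = max(demands)
    d_sum = sum(demands)
    return min(1.0 / d_max, n_jobs / (d_sum + think_time))
```

For example, with demands of 0.2 s and 0.1 s per job and a large population, the 0.2 s device saturates first and caps throughput at 5 jobs/s; halving that demand (an upgrade of the bottleneck) doubles the bound to 10 jobs/s.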

Simultaneous inversion of thermal and hydrogeologic data

Woodbury, Allan David January 1987 (has links)
The question that is addressed in this thesis is: can a simultaneous inverse scheme involving thermal and hydrologic data resolve hydrologic model parameters to a better degree than hydrologic data alone? The first chapter sets the framework for this question by first reviewing linear and non-linear inverse problems and then illustrating the advantages of a simultaneous inverse of two different data sets through the use of a simple example. It is the goal of Chapter 2 to examine current methodologies for stating and solving the inverse problem. A review of the maximum likelihood approach is presented, and a construction formalism is adopted by introducing a series of objective functionals (norms) which are minimized to yield a variety of possible models. The inverse is carried out using a modification of a constrained simplex procedure. The algorithm requires no derivative computations and can be used to minimize an arbitrarily complicated non-linear functional, subject to non-linear inequality constraints. The algorithm produces a wide variety of acceptable models clustered about a global minimum, each of which generates data that match observed values. The inverse technique is demonstrated on a series of one- and two-dimensional synthetic data sets, and on a hydraulic head data set from Downie Slide, British Columbia, Canada. At this site, four parameters are determined: the free-surface position of the water table and three boundary conditions for the domain. Further simulations using a theoretical data set with assumed properties similar to that of Downie Slide show that with noise-free data and an adequate spacing between points, it is possible to interpolate an unbiased estimate of hydraulic head data at all nodes in the equivalent discretized domain. When the inverse technique is applied, the domain's conductivity structure is correctly identified when enough prior log-conductivity information is available.
The implications for Downie Slide are that in order to construct anything but a simple hydrogeologic model, accurate field measurements of hydraulic head are required, as well as well-defined estimates of hydraulic conductivity, a better spacing between measurements, and adequate knowledge of the boundary conditions. Chapter 3 is devoted to developing the idea of a joint inversion scheme involving both thermal and hydrologic data. One way of overcoming data limitations (sparse hydraulic head or few hydraulic conductivity estimates) in an inverse problem is to introduce an independently collected data set and apply simultaneous or joint inversion. The joint inversion method uses data from a number of different measurements to improve the resolution of parameters common to one or more functional relationships. One such data set is subsurface temperature, which is sensitive to variations in hydraulic conductivity. In Chapter 3, the basic concepts of heat and fluid transfer in porous media with emphasis on forced convective effects are reviewed, followed by inversion of theoretical data and a re-investigation of the hydrogeology of Downie Slide, augmented with thermal data and a simultaneous inverse. Additional runs on a heterogeneous medium presented in Chapter 2 are carried out. With a good temperature database, thermal properties can be properly resolved. However, in this stochastic problem the addition of thermal data did not condition the inverse to a greater degree than accomplished with the addition of prior information on log-conductivity. The benefits of including thermal data and applying a joint inversion can be substantial when considering the more realistic problem of uncertain boundary conditions. The simultaneous inverse is also applied to the Downie Slide data set examined in Chapter 2.
Unfortunately, with a homogeneous hydraulic conductivity, all that can be determined from a hydraulic head inverse are ratios of flux to hydraulic conductivity. By including thermal data, the value of hydraulic conductivity can be determined at this site. Some of the model parameters (basal heat flux, thermal conductivity, specified head boundaries) are not resolved well by the joint scheme. Nonetheless the constructed models do offer valuable insight into the hydrogeology of the field site. The constructed models persistently show a hydraulic conductivity value of about 1 x 10⁻⁷ m/sec, which is consistent with previous estimates of hydraulic conductivity at the site. A further comparison with the inverse results in Chapter 2 shows good agreement between the two inverses for the hydraulic properties. / Science, Faculty of / Earth, Ocean and Atmospheric Sciences, Department of / Graduate
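The core of a simultaneous inversion of this kind is a single objective that combines misfits from the two independent data sets, which the derivative-free simplex search then minimizes. A minimal sketch of such a joint functional, with hypothetical data and equal weights rather than anything from the thesis:

```python
# Joint (simultaneous) least-squares misfit over two independent data sets:
# hydraulic head and subsurface temperature. Weights trade off the two
# data types; all values here are illustrative.

def joint_misfit(pred_head, obs_head, pred_temp, obs_temp,
                 w_head=1.0, w_temp=1.0):
    """Weighted sum of squared residuals over both data sets."""
    m_h = sum((p - o) ** 2 for p, o in zip(pred_head, obs_head))
    m_t = sum((p - o) ** 2 for p, o in zip(pred_temp, obs_temp))
    return w_head * m_h + w_temp * m_t
```

A model that fits the head data but not the temperatures is penalized, which is how the thermal data can break the flux/conductivity trade-off described above.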

Determining the reliability of a computerized coach analysis instrument

Johnson, Robert Bennett January 1988 (has links)
The inter- and intra-observer reliabilities of data collected by observers trained in the use of the Coach Analysis Instrument (CAI) are reported. The CAI is part of the Computerized Coaching Analysis System (CCAS). The CAI collects data related to the learning environment created by the coach in a team-sport practice situation. Both inter- and intra-observer reliabilities are reported for each of the instrument's seven dimensions as well as for the overall instrument. The reliability measures reported are Total Percent Agreement (T) and Cohen's kappa (K). Recommendations pertaining to the development of systematic observation instruments, training program considerations, and reliability measures are presented and discussed. / Education, Faculty of / Curriculum and Pedagogy (EDCP), Department of / Graduate
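The two reliability measures named above have standard definitions: percent agreement is the fraction of codings on which two observers agree, and Cohen's kappa corrects that for chance agreement. A minimal sketch with made-up codings:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of paired codings on which observers a and b agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: (P_o - P_e) / (1 - P_e), where P_e is the agreement
    expected by chance from each observer's marginal category frequencies."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For instance, two observers agreeing on 3 of 4 codings gives P_o = 0.75, but with these marginals the chance-corrected kappa is only 0.5, which is why kappa is reported alongside raw agreement.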

Student nurses' conceptions of computers in hospitals

Abbott, Karen Elizabeth January 1988 (has links)
The trend toward increased computerization in Canadian hospitals has profound implications for hospital-employed nurses, the majority of whom are educated in community college nursing programs. Educators, in response to this trend, must be attentive to the quality of student learning when planning for computer content in nursing curricula. The purpose of this phenomenological study was to identify how student nurses, enrolled in a community college nursing program, conceptualize the impact of computer use on hospital-employed nurses. Students' conceptions were analyzed in relation to their: (a) attitude toward computers, and (b) length of clinical experience. Thirty-five (11 first-year, 11 second-year and 13 third-year) students enrolled in the nursing program at Cariboo College in Kamloops, British Columbia, were interviewed. Three broad, and ten forced-response, questions generated both qualitative and quantitative data, which were reported as primary and secondary findings. Data analysis, through use of the constant comparative method, was carried out on a formative and summative basis. Findings indicated that subjects had little awareness of computer use by nurses today. Their knowledge of how computers may be used by nurses in the future was also limited, and appeared to center around three broad areas: nursing, communication, and administration. Subjects' conceptions of the impact of computer use on hospital-employed nurses fell into four categories: (a) nursing image, (b) professionalism, (c) patient care, and (d) workload. Their comments on these four categories were further classified into three sub-categories, indicating whether they felt that the increased use of computers would: (a) enhance, (b) detract from, or (c) both enhance and detract from, each category. It was found that subjects' conceptions differed in complexity in direct proportion to the year in which they were enrolled in the program and also the length of their clinical experience.
The majority of subjects had positive attitudes toward computer use. In addition, it was found that there was a significant relationship between complexity of conception and attitude. Students enter nursing programs with established conceptions and attitudes. The goal in planning computer programs must be to sequence computer content through the use of a taxonomy of learning outcomes, so that quality of learning is a priority, and positive attitudes are fostered. / Education, Faculty of / Educational Studies (EDST), Department of / Graduate

Development of data acquisition and analysis methods for chemical acoustic emission

Sibbald, David Bruce January 1990 (has links)
Acoustic Emission Analysis (AEA) is the study of the sonic (and ultrasonic) energy released by chemical systems in the form of transient waves, as the system attempts to (re)attain equilibrium. This area of chemistry, and chemical analysis, is ripe for fundamental studies since it has been little explored. The high potential of the technique as a non-invasive, non-destructive reaction monitoring scheme suggests that numerous applications will follow. In this work, an apparatus and software have been constructed to monitor acoustic emission (AE) and collect and process AE data. A broad-band piezoelectric transducer was used to convert the acoustic signals to electrical waveforms which could be captured by a digital storage oscilloscope. These waveforms were then stored on an IBM-compatible computer for further analysis. Analysis of the data was performed using pattern recognition techniques. The signals were characterized through the use of descriptors which map each signal onto a multi-dimensional feature space. Visualization of the data structure in multidimensional space was accomplished using several methods. Hierarchical clustering was used to produce tree structures, known as dendrograms, which attempt to show clustering of the signals into various groups. Abstract factor analysis (AFA), also called principal components analysis (PCA), was used to project the data onto a two-dimensional factor space to allow for direct viewing of structure in the multidimensional data. Sodium hydroxide dissolution, aluminum chloride hydration and heat activation of Intumescent Flame Retardants (IFRs) were used to test the assembled hardware and to provide data to submit to the pattern recognition algorithms coded as part of this work.
The solid-solid phase transition of trimethylolethane (Trimet), and the liquid crystal phase transitions of two liquid crystals (α-ω-bis(4-n-decylaniline-benzilidene-4'-oxyhexane), and 4-n-pentyloxybenzylidene-4'-n-heptylaniline) were also monitored and the signals analyzed. The pattern recognition software was able to extract much information from the acoustically emitting samples - information which would not have been apparent by using standard (uni- and bi-variate) methods of analysis. Chemical acoustic emission, coupled with pattern recognition analysis, will be able to provide the chemist with knowledge (qualitative, quantitative, kinetic, etc.) about chemical systems which are often difficult or impossible to monitor and analyze by other means. / Science, Faculty of / Chemistry, Department of / Graduate
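The PCA projection described above (mean-center the signal descriptors, then project onto the leading principal components) can be sketched via the singular value decomposition. The toy descriptor matrix in the usage note is synthetic, not acoustic-emission data:

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X (signals x descriptors) onto the top principal
    components, giving scores in a low-dimensional factor space."""
    Xc = X - X.mean(axis=0)              # mean-center each descriptor
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T      # scores on the leading components
```

Each signal then becomes a point in a 2-D factor space, where clusters of similar signals (the structure the dendrograms also probe) can be inspected directly.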

Towards computer-based analysis of clinical electroencephalograms

Doyle, Daniel John January 1974 (has links)
Two approaches to the automatic analysis of clinical electroencephalograms (EEGs) are considered with a view towards classifying clinical EEGs as normal or abnormal. The first approach examines the variability of various EEG features in a population of astronaut candidates known to be free of neurological disorders by constructing histograms of these features; unclassified EEGs of subjects in the same age group are examined by comparison of their feature values to the histograms of this neurologically normal group. The second approach employs the techniques of automatic pattern recognition for classification of clinical EEGs. A set of 57 EEG records designated normal or abnormal by clinical electroencephalographers is used to evaluate pattern recognition systems based on stepwise discriminant analysis. In particular, the efficacy of using various feature sets in such pattern recognition systems is evaluated in terms of estimated classification error probabilities (Pe). The results of the study suggest a potential for the development of satisfactory automatic systems for the classification of clinical EEGs. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
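The second approach above — train a discriminant on labelled feature vectors, then estimate the classification error probability Pe — can be illustrated with a much simpler stand-in classifier (nearest class mean rather than stepwise discriminant analysis) on synthetic features:

```python
import numpy as np

def nearest_mean_fit(X, y):
    """Compute the per-class mean feature vector (a crude linear discriminant)."""
    return {c: X[np.array(y) == c].mean(axis=0) for c in sorted(set(y))}

def nearest_mean_predict(means, X):
    """Assign each row of X to the class with the closest mean."""
    classes = list(means)
    d = np.stack([np.linalg.norm(X - means[c], axis=1) for c in classes])
    return [classes[i] for i in d.argmin(axis=0)]

def error_rate(y_true, y_pred):
    """Resubstitution estimate of the classification error probability Pe."""
    return sum(a != b for a, b in zip(y_true, y_pred)) / len(y_true)
```

Comparing Pe across different feature sets, as the thesis does with discriminant analysis, is then just a matter of refitting on each candidate set and comparing error rates.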
