31.
Scaling effects in the static and dynamic response of graphite-epoxy beam-columns. Jackson, Karen E. 22 August 2008 (has links)
Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. Testing of scale models can provide a cost-effective alternative to destructive testing of expensive composite prototypes and can be used to verify predictions obtained from finite element analyses. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. The objective of this research is to characterize these limitations, or scaling effects, in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. / Ph. D.
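The large-deflection loading described above can be previewed with the classic secant formula for a pin-ended column under eccentric axial compression. This is a textbook linear-elastic relation, not the composite beam model of the dissertation, and the property values below are hypothetical:

```python
import math

def eccentric_column_deflection(P, E, I, L, e):
    """Midspan lateral deflection of a pin-ended column under an axial load P
    applied at eccentricity e (classic secant formula, linear elastic)."""
    k = math.sqrt(P / (E * I))
    return e * (1.0 / math.cos(k * L / 2.0) - 1.0)

# Deflections grow nonlinearly as P approaches the Euler load Pe = pi^2*EI/L^2,
# which is why eccentric compression is a convenient way to force large bending.
E, I, L, e = 130e9, 2.0e-9, 0.5, 0.005   # hypothetical beam properties (SI units)
Pe = math.pi**2 * E * I / L**2
for frac in (0.2, 0.5, 0.8):
    d = eccentric_column_deflection(frac * Pe, E, I, L, e)
    print(f"P = {frac:.1f}*Pe -> midspan deflection {d * 1000:.2f} mm")
```

In an ideally scaled model the dimensionless response d/e depends only on P/Pe, so systematic departures from that curve in scaled tests are one signature of the scaling effects the study characterizes.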
32.
A synthesis of quality and process control. Graybeal, B. Cheree January 1986 (has links)
An improved quality control model is suggested in this thesis. The improved quality control model is derived by treating the quality control problem as a process control problem. The quality control model is developed by formulating a process control model in terms of product quality parameters and the control variables which affect them. Statistical quality control (SQC) is used in the model to provide estimates of the state of the product quality variable as the product is processed by the plant. A state variable approach is used to determine the optimal control strategy.
An example quality control model is formulated for a coke size-reduction process. Numerical values are assumed and sensitivity analysis results are discussed. The results show that the proposed quality control model is reasonable. Extensions to more complicated models are discussed. / M.S.
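The state-variable view of quality control can be sketched with a minimal single-variable feedback loop. The plant gains, target, and proportional controller below are illustrative assumptions, not the coke size-reduction model of the thesis:

```python
def simulate_quality_control(a=0.9, b=0.5, K=1.2, target=10.0, x0=14.0, steps=8):
    """The quality variable's deviation from target evolves as
    d[k+1] = a*d[k] + b*u[k]; the feedback law u = -K*d closes the loop,
    using the SQC estimate of the state at each step."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        u = -K * (x - target)                    # control action from state estimate
        x = target + a * (x - target) + b * u    # plant response
        trajectory.append(x)
    return trajectory

traj = simulate_quality_control()
print([round(v, 3) for v in traj])   # quality converges toward the target of 10.0
```

With these gains the closed-loop deviation shrinks by a factor of a - b*K = 0.3 per step, illustrating how a process-control formulation drives the quality parameter to its target rather than merely flagging out-of-control product.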
33.
Software design: communication between human factors engineers and software developers. Bradley, Roxanne 22 August 2009 (has links)
As computers pervade aspects of daily life, users demand software that is easy to use. It has been suggested that adding human factors engineers (HFEs) to software development teams would help software development companies meet these user demands. However, there are qualitative data which suggest that software developers (SDs) and HFEs do not communicate well with each other. It is believed that this lack of communication has inhibited the use of HFEs on software development teams. It is further believed that this lack of communication is due in part to the differences in the frames of reference of HFEs and SDs.
Thus, the objectives of this thesis are:
1. To develop an instrument which can be used to determine the differences in the frames of reference of HFEs and SDs.
2. To test the instrument.
Three questionnaires were developed to probe the differences in the frames of reference of HFEs and SDs. The first, a background questionnaire, probed for information concerning software development experience and knowledge of specific software industry terms. The second was a software development activities questionnaire which was used to ascertain the importance of participation of certain professionals in software development activities. Finally, the usability information questionnaire was used to determine what type of supporting information would be necessary for a design change at certain points in the development of the product.
Participants (30 HFEs and 30 SDs) completed the questionnaires. It was found that HFEs and SDs do differ in their frames of reference. It was also found that some of these differences could cause a lack of communication between HFEs and SDs. It is suggested that software companies provide interdisciplinary training for their employees to help reduce these differences and to improve communication. / Master of Science
34.
Using a productivity measurement model to drive gainsharing. Rossler, Paul Edward 21 November 2012 (has links)
This research shows how a state-of-the-art, plant- and firm-level productivity measurement model, the Total Factor Productivity Measurement Model (TFPMM), can be used to drive an organization's gainsharing effort. The TFPMM uses accounting data from actual operations to isolate the dollar effects on profits due to changes in productivity and price recovery. Two case studies are used. The first involves a manufacturing firm and evaluates how the TFPMM can be used in that organization for gainsharing. The second involves a service firm and describes how that organization plans to use the TFPMM to drive its gainsharing effort.
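The TFPMM-style isolation of productivity and price-recovery effects can be sketched with a standard split of the profit change: quantity effects are valued at base-period prices, price effects at current-period quantities, and the two sum exactly to the profit change. The accounting data below are hypothetical, and the full TFPMM works with a richer set of ratios:

```python
def profit_decomposition(outputs0, outputs1, inputs0, inputs1):
    """Split the period-to-period profit change into a productivity (quantity)
    effect and a price-recovery effect. Each item is a (price, quantity) pair;
    subscript 0 is the base period, 1 the current period."""
    def qty_effect(base, cur):
        return sum(p0 * (q1 - q0) for (p0, q0), (_, q1) in zip(base, cur))
    def price_effect(base, cur):
        return sum(q1 * (p1 - p0) for (p0, _), (p1, q1) in zip(base, cur))
    productivity = qty_effect(outputs0, outputs1) - qty_effect(inputs0, inputs1)
    price_recovery = price_effect(outputs0, outputs1) - price_effect(inputs0, inputs1)
    return productivity, price_recovery

# Hypothetical two-period accounting data: one product, two inputs (labor, material).
out0, out1 = [(10.0, 100)], [(11.0, 110)]
in0, in1 = [(4.0, 120), (2.0, 200)], [(4.5, 115), (2.0, 210)]
prod, price = profit_decomposition(out0, out1, in0, in1)
profit0 = 10.0 * 100 - (4.0 * 120 + 2.0 * 200)
profit1 = 11.0 * 110 - (4.5 * 115 + 2.0 * 210)
print(prod, price, profit1 - profit0)   # the two effects sum to the profit change
```

The productivity effect is the dollar figure a gainsharing plan would share, since it reflects quantity changes under the organization's control rather than price movements.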
This research also examines the role of gainsharing in the performance management process and in performance management efforts. The critical interrelationships between gainsharing and other components of an organization's management system and compensation system are explored. Various gainsharing approaches are reviewed and presented with an emphasis placed on design variables associated with a gainsharing effort. A methodology for TFPMM implementation and for gainsharing system design, development and implementation is presented and described. The research balances rigor with relevance to assist managers and practitioners with the design and development of measurement and reward systems. / Master of Science
35.
Computation of pseudosonic logs in shallow fresh/brackish water wells: a test case in Brunswick, Georgia. Allen, Nancy J. 01 August 2012 (has links)
Due to the usefulness of sonic logs in formation evaluation, efforts have been made to develop a method for calculating pseudosonic logs for wells in which sonic logs were not originally obtained. These efforts attempt to use electrical resistivity data in the calculation of pseudosonic logs by means of empirical scale functions. The purpose of this study is to examine ways of applying these relationships in relatively shallow wells where the principal formation fluid is fresh or brackish water. Data from four wells situated in Brunswick, Georgia were used in this study.
Conventional focused resistivity logs are sensitive to beds as thin as one foot and can provide detail similar to that seen on sonic logs. Focused resistivity logs should be best for conversion to pseudosonic logs in shallow wells, where invasion is minimal and the water used for drilling fluid has electrical resistivity close to that of formation water. Sonic and resistivity logs from a representative well are needed in the procedure for finding an empirical relationship between sonic transit time and resistivity. Values of transit time plotted versus resistivity are read from corresponding depths on both types of logs. The graphs obtained in this study reveal significantly more scatter than previously published graphs based upon deep well data.
An important feature clearly evident in the graphs is the presence of groups of points which are offset from each other. A separate scale function relating transit time and resistivity can be obtained from each group of points. It is noted that the different groups correspond to differences in the chlorinity of the formation water. The results of this study indicate that it is necessary to consider the salinity of the formation water as well as electrical resistivity for purposes of calculating pseudosonic logs. In previous studies three constant coefficients were determined experimentally in order to obtain an empirical scale function. The present study suggests that it may be possible to replace these constants with chlorinity-dependent coefficients. The final results of this study indicate that reasonably reliable pseudosonic logs can be obtained only by using high quality focused resistivity logs from wells where information about the salinity of the formation water is also available. / Master of Science
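One way to obtain such an empirical scale function is a least-squares fit in log-log space, carried out separately for each chlorinity group. The power-law form, the log readings, and the group labels below are illustrative assumptions, not the study's data or its three-coefficient functions:

```python
import math

def fit_power_law(resistivity, transit_time):
    """Least-squares fit of an assumed scale function dt = a * R**b,
    linearized as log(dt) = log(a) + b*log(R)."""
    xs = [math.log(r) for r in resistivity]
    ys = [math.log(t) for t in transit_time]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical readings grouped by formation-water chlorinity: one scale
# function per group, instead of a single set of constants for the whole well.
groups = {
    "low_chlorinity":  ([2.0, 4.0, 8.0], [150.0, 130.0, 112.0]),
    "high_chlorinity": ([1.0, 2.0, 4.0], [180.0, 156.0, 135.0]),
}
for name, (R, dt) in groups.items():
    a, b = fit_power_law(R, dt)
    print(f"{name}: dt ~ {a:.1f} * R^{b:.3f}")
```

Fitting per group is the programmatic counterpart of replacing fixed constants with chlorinity-dependent coefficients: each group of offset points yields its own (a, b) pair, with b negative since transit time falls as resistivity rises.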
36.
The use of chemiluminescence for light-off detection of flames. Hamer, Andrew John January 1989 (has links)
A fast-response method for detection of light-off in gaseous flames and liquid spray flames has been developed. The method used chemiluminescent signals from the ²Σ-²Π OH system centered at 309 nm and the ²Δ-²Π CH system centered at 430 nm to indicate the presence of a flame. Spectral scans (performed on gaseous methane, liquid hexane and liquid Jet-A aircraft fuel) from 280 nm to 610 nm indicated that these two species produced the strongest signals available for flame detection. As their light is emitted in the ultraviolet spectrum, using the OH and CH radicals will potentially provide a good signal-to-noise ratio since, in combustion chambers, most of the broadband background emissions are in the infrared and visible wavelengths. These scans also showed that the hexane and Jet-A gave OH and CH signals of approximately equal intensity. The transient histories of OH and CH were investigated by performing light-off ignition tests and intermittent light-off ignition tests. These various flame conditions showed that both signals were good indicators of flame presence, showing, on average, a response time of better than 3 milliseconds. It was found that when the hydrogen-to-carbon ratio of the fuel was decreased, the CH signal strength increased as a percentage of OH signal intensity. Finally, the output signal intensity was found to be sensitive to both the flame image magnification and to the part of the flame observed. / Master of Science
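A light-off detector built on such a chemiluminescent signal can be sketched as a thresholded, debounced comparison against the dark baseline. The sample values, threshold ratio, and confirmation count below are assumptions for illustration, not the thesis's hardware or settings:

```python
def detect_light_off(samples, baseline, threshold_ratio=3.0, confirm=3):
    """Flag light-off when the chemiluminescent signal (e.g. the OH band
    near 309 nm) exceeds a multiple of its dark baseline for several
    consecutive samples, rejecting single-sample noise spikes."""
    run = 0
    for i, s in enumerate(samples):
        run = run + 1 if s > threshold_ratio * baseline else 0
        if run >= confirm:
            return i - confirm + 1   # index of first sample of the confirmed run
    return None                      # no flame detected

# Detector channel samples: noise floor, then flame from sample 5 onward.
signal = [1.0, 1.2, 0.9, 1.1, 0.8, 9.5, 10.2, 9.8, 10.5, 10.1]
print(detect_light_off(signal, baseline=1.0))   # -> 5
```

At a kilohertz-class sampling rate a three-sample confirmation adds only a few milliseconds of latency, consistent with the sub-3 ms response the signals themselves allow.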
37.
Prolog and artificial intelligence in chemical engineering. Quantrille, Thomas E. 06 June 2008 (has links)
This dissertation deals with applications of Prolog and Artificial Intelligence (AI) to chemical engineering, and in particular, to the area of chemical process synthesis. We introduce the language Prolog (chapters 1-9), discuss AI techniques (chapters 10-11), discuss EXSEP, the EXpert System for SEParation Synthesis (chapters 12-15), and summarize applications of both AI and Artificial Neural Networks (ANNs) to chemical engineering (chapters 16-17).
We have developed EXSEP, a knowledge-based system that performs separation process synthesis. EXSEP is a computer-aided design tool that can generate flowsheets using any combination of high-recovery (sharp) and low-recovery (nonsharp) separations, using a variety of separation methods with energy and mass separating agents.
EXSEP generates separation process flowsheets using a unique plan-generate-test approach that incorporates computer-aided tools and techniques for problem representation and simplification, feasibility analysis of separation tasks, and heuristic synthesis and evolutionary improvement.
A difficult problem in knowledge-based approaches to chemical engineering is the "quantitative or deep knowledge dilemma." Experience has shown that a strictly qualitative knowledge approach to chemical process synthesis is insufficient. However, incorporating rigorous quantitative analysis into an expert system is cumbersome and impractical.
EXSEP overcomes this deep-knowledge dilemma through a unique knowledge representation and problem-solving strategy that includes shortcut design calculations. These calculations are used as a feasibility test for all separations; no separation is chosen by EXSEP unless it is deemed thermodynamically feasible through this quantitative, deep-knowledge engineering analysis.
We apply EXSEP to the flowsheet synthesis of several industrial separation problems. The results show that EXSEP generates technically feasible and economically attractive process flowsheets accurately and efficiently. EXSEP is also user-friendly and can be readily applied by practicing engineers using a personal computer. In addition, EXSEP is developed modularly and can easily be expanded in the future to include additional separation methods. / Ph. D.
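The plan-generate-test idea can be sketched for sharp splits over a volatility-ranked component list: candidate splits are generated, tested against a feasibility threshold, and the easiest split is taken first. The feasibility test, heuristic, and feed data below are simplified stand-ins for EXSEP's richer task representation and shortcut design calculations:

```python
def synthesize(components, min_alpha=1.2):
    """Recursive plan-generate-test sequencing of sharp separations.
    components: list of (name, K-value) pairs ranked by volatility."""
    if len(components) <= 1:
        return components[0][0]
    # generate candidate splits and test each for feasibility
    feasible = []
    for i in range(len(components) - 1):
        alpha = components[i][1] / components[i + 1][1]   # relative volatility
        if alpha >= min_alpha:
            feasible.append((alpha, i))
    if not feasible:
        raise ValueError("no thermodynamically feasible split")
    # heuristic: perform the easiest split (largest volatility gap) first
    _, i = max(feasible)
    return (synthesize(components[:i + 1], min_alpha),
            synthesize(components[i + 1:], min_alpha))

# Hypothetical feed ranked by volatility: (name, K-value).
feed = [("propane", 4.0), ("butane", 1.8), ("pentane", 0.8), ("hexane", 0.35)]
print(synthesize(feed))   # nested tuples describe the separation sequence
```

The nested-tuple result plays the role of a flowsheet: each tuple is one column, its two elements the distillate and bottoms subproblems, with infeasible splits pruned before any heuristic ranking, mirroring the quantitative feasibility test described above.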
38.
Combustion modelling of pulverised coal boiler furnaces fuelled with Eskom coals. Eichhorn, Niels Wilhelm January 1998 (has links)
A dissertation submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering, Johannesburg, September 1998 / Combustion modelling of utility furnace chambers provides a cost-efficient means to extrapolate the combustion behaviour of pulverised fuel (pf), as determined from drop tube furnace (DTF) experiments, to full-scale plant by making use of computational fluid dynamics (CFD). The combustion model will be used to assimilate essential information for the evaluation and prediction of the effect of:
• changing coal feedstocks
• proposed operational changes
• boiler modifications.
TRI commissioned a DTF in 1989 which has to date been primarily used for the comparative characterisation of coals in terms of combustion behaviour. An analysis of the DTF results allows the determination of certain combustion parameters used to define a mathematical model describing the rate at which the combustion reaction takes place. This model has been incorporated into a reactor model which can simulate the processes occurring in the furnace region of a boiler, thereby allowing the extrapolation of the DTF-determined combustion assessment to the full scale. This provides information about combustion conditions in the boiler which in turn are used in the evaluation of the furnace performance.
Extensive furnace testwork on one of Eskom's wall-fired plants (Hendrina Unit 9) during 1996, intended to validate the model for the applications outlined above, included the measurement of:
• gas temperatures
• O2, CO2, CO, NOx and SO2 concentrations
• residence time distributions
• combustible matter in combustion residues extracted from the furnace
• furnace heat fluxes.
The coal used during the tests was sampled and subjected to a series of chemical and other lab-scale analyses to determine the following:
• physical properties
• composition
• devolatilisation properties
• combustion properties
The same furnace was modelled using the University of Stuttgart's AIOLOS combustion code, the results of which are compared with the measured data.
A DTF-derived combustion assessment of a coal sampled from the same site but from a different part of the beneficiation plant, which was found to burn differently, was subsequently used in a further simulation to assess the sensitivity of the model to char combustion rate data. The results of these predictions are compared to the predictions of the validation simulation.
It was found that the model produces results that compare well with the measured data. Furthermore, the model was found to be sufficiently sensitive to reactivity parameters of the coal. The model has thereby demonstrated that it can be used in the envisaged application of extrapolating DTF reactivity assessments to full-scale plant. In using the model, it has become apparent that the evaluations of furnace modifications and assessments of boiler operation lie well within the capabilities of the model. / MT2017
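Char combustion rate models of the kind fitted from DTF data are often written with chemical kinetics and oxygen diffusion acting as resistances in series (a Field-type formulation). The constants below are placeholders for illustration, not the DTF-derived parameters of the dissertation:

```python
import math

def char_burning_rate(T_p, p_ox, k_d=2.0, A=0.05, E=9.0e4):
    """Surface combustion rate of a char particle: an Arrhenius kinetic
    coefficient k_c in series with an assumed oxygen-diffusion coefficient
    k_d, driven by the oxygen partial pressure p_ox at the particle."""
    R = 8.314                               # gas constant, J/(mol K)
    k_c = A * math.exp(-E / (R * T_p))      # kinetic coefficient (Arrhenius)
    return p_ox / (1.0 / k_c + 1.0 / k_d)   # combined rate, capped by diffusion

# The rate climbs steeply with particle temperature while kinetics control,
# but can never exceed the pure diffusion limit p_ox * k_d.
for T in (1200.0, 1600.0, 2000.0):
    print(f"T_p = {T:.0f} K -> rate = {char_burning_rate(T, 1.0e4):.4g}")
```

Sensitivity studies like the one described above amount to perturbing A and E (the coal's reactivity parameters) and observing how burnout predicted by the furnace model responds.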
39.
Nonlinear effects in ground motion simulations: modeling variability, parametric uncertainty and implications in structural performance predictions. Li, Wei 08 July 2010 (has links)
While site effects are accounted for in most modern U.S. seismic design codes for building structures, there exist no standardized procedures for the computationally efficient integration of nonlinear ground response analyses in broadband ground motion simulations. In turn, the lack of a unified methodology affects the prediction accuracy of site-specific ground motion intensity measures, the evaluation of site amplification factors when broadband simulations are used for the development of hybrid attenuation relations and the estimation of inelastic structural performance when strong motion records are used as input in aseismic structural design procedures.
In this study, a set of criteria is established, which quantifies how strongly nonlinear effects are anticipated to manifest at a site by investigating the empirical relation between nonlinear soil response, soil properties, and ground motion characteristics. More specifically, the modeling variability and parametric uncertainty of nonlinear soil response predictions are studied, along with the uncertainty propagation of site response analyses to the estimation of inelastic structural performance. Due to the scarcity of design-level ground motion recordings, the geotechnical information at 24 downhole arrays is used and the profiles are subjected to broadband ground motion synthetics.
For the modeling variability study, the site response models are validated against available downhole array observations. The site and ground motion parameters that govern the intensity of nonlinear effects are next identified, and an empirical relationship is established, which may be used to estimate to a first approximation the error introduced in ground motion predictions if nonlinear effects are not accounted for.
The soil parameter uncertainty in site response predictions is next evaluated as a function of the same measures of soil properties and ground motion characteristics. It is shown that the effects of nonlinear soil property uncertainties on the ground-motion variability strongly depend on the seismic motion intensity, and this dependency is more pronounced for soft soil profiles. By contrast, the effects of velocity profile uncertainties are less intensity dependent and more sensitive to the velocity impedance in the near surface that governs the maximum site amplification.
Finally, a series of bilinear single-degree-of-freedom oscillators are subjected to the synthetic ground motions computed using the alternative soil models, and the consequent variability in structural response is evaluated. Results show high bias and uncertainty of the inelastic structural displacement ratio predicted using the linear site response model for periods close to the fundamental period of the soil profile. The amount of bias and the period range where the structural performance uncertainty manifests are shown to be a function of both input motion and site parameters.
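The bilinear-oscillator step can be sketched with a kinematic-hardening hysteresis rule and a simple time-stepping scheme. The oscillator parameters, the integration scheme, and the synthetic pulse below are illustrative, not the study's records or methods:

```python
import math

def peak_bilinear_response(ag, dt, T_n=0.5, zeta=0.05, fy=0.2, alpha=0.05, m=1.0):
    """Peak displacement of a bilinear (kinematic-hardening) SDOF oscillator
    under the base acceleration record ag, integrated with symplectic Euler."""
    w = 2.0 * math.pi / T_n
    k, c = m * w * w, 2.0 * zeta * m * w
    u = v = fs = 0.0
    peak = 0.0
    for a in ag:
        acc = (-m * a - c * v - fs) / m       # equation of motion
        v += dt * acc
        u_new = u + dt * v
        # bilinear hysteresis: trial elastic step, then clamp to the
        # hardening backbone fs = alpha*k*u +/- (1 - alpha)*fy
        fs_trial = fs + k * (u_new - u)
        lo = alpha * k * u_new - (1.0 - alpha) * fy
        hi = alpha * k * u_new + (1.0 - alpha) * fy
        fs = min(max(fs_trial, lo), hi)
        u = u_new
        peak = max(peak, abs(u))
    return peak

# A short synthetic resonant pulse at the oscillator's natural period.
dt = 0.005
pulse = [3.0 * math.sin(2.0 * math.pi * t * dt / 0.5) for t in range(400)]
print(f"peak inelastic displacement: {peak_bilinear_response(pulse, dt):.4f}")
```

Running the same oscillator over the suites of synthetics produced by alternative soil models, and taking the ratio of inelastic to elastic peaks, yields exactly the displacement-ratio statistics whose bias and uncertainty the study quantifies.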
40.
Application of the 13th edition AISC direct analysis method to heavy-industry industrial structures. Modugno, Jennifer L. 02 July 2010 (has links)
The objective of this study was to understand and develop procedures for the use of the AISC 2005 Specification's Direct Analysis Method for the analysis and design of heavy-industry industrial structures, to lay out a systematic approach for the engineer to analyze and design using this method, and to determine whether there will be any consequences to the practicing engineer in using this method.
The relevant 13th Edition AISC stability analysis methods (Effective Length, First-Order, and Direct Analysis Methods) were researched in the 2005 Specification as well as in available technical literature, and then were critically evaluated for their applicability and limitations.
This study will serve as a guide to a systematic approach for the practicing engineer to apply this method in the analysis and design of such complex steel frame structures using the computer-aided software GTSTRUDL. To accomplish this purpose, two analytical models were studied using the Direct Analysis Method. The first model was a simple industrial structure and the second was a more complex nuclear power plant boiler building.
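The core quantities of the Direct Analysis Method, the notional lateral loads and the reduced member stiffnesses, follow directly from Appendix 7 of the 2005 Specification. The member forces in the example below are hypothetical:

```python
def tau_b(Pr, Py, alpha=1.0):
    """Stiffness-reduction factor tau_b (AISC 2005 Specification, Appendix 7):
    flexural stiffness is taken as 0.8*tau_b*EI and axial stiffness as 0.8*EA.
    alpha = 1.0 for LRFD force levels."""
    ratio = alpha * Pr / Py
    return 1.0 if ratio <= 0.5 else 4.0 * ratio * (1.0 - ratio)

def notional_load(Y_i, coeff=0.002):
    """Notional lateral load N_i = 0.002*Y_i applied at each level, standing
    in for initial out-of-plumbness of the frame."""
    return coeff * Y_i

# Hypothetical example: a column at 70% of its axial yield load, and a
# framing level carrying 900 kips of gravity load.
Pr, Py = 700.0, 1000.0
print(f"tau_b         = {tau_b(Pr, Py):.3f}")          # reduced since Pr/Py > 0.5
print(f"reduced EI    = {0.8 * tau_b(Pr, Py):.3f}*EI")
print(f"notional load = {notional_load(900.0):.1f} kips")
```

In practice these reductions and loads are applied inside the second-order analysis model (here, the GTSTRUDL model), after which members are checked with an effective length factor K = 1.0.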