481
數據相關之二階製程管制 / Two-step Process Control for Autocorrelated Data. Chen, Wei-Lun. Unknown Date (has links)
Most products are produced in several process steps and have more than one quality characteristic of interest. If the steps of the process are independent, and the observations taken from the process are also independent, then we may use a Shewhart control chart at each step. However, in many processes the production steps are dependent and the observations taken from the process are correlated. In this research, we consider a process with two dependent steps in which the observations are correlated over time. We construct an individual residual control chart to monitor the previous process step and a cause-selecting control chart to monitor the current step. We then simulate all the states that can occur in the process and present the individual residual control chart and the cause-selecting control chart for each simulation. Furthermore, we compare the proposed control charts with the Hotelling T² control chart. Finally, we give an example to illustrate how to construct the proposed control charts.
From the proposed control charts, we can easily determine which step of the process is out of control. A signal in the individual residual control chart means the previous process step is out of control; a signal in the cause-selecting control chart means the current step is out of control. The Hotelling T² control chart only indicates that the process is out of control; it does not identify which step is responsible.
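As a hedged illustration of the two charts described above, the sketch below fits an AR(1) model to the first step and a linear relation between steps; the simulated data, variable names, and 3-sigma moving-range limits are illustrative, not the thesis's example.

```python
import numpy as np

def three_sigma_limits(r):
    """Individuals-chart limits from residuals, using the
    moving-range estimate of sigma (MR-bar / 1.128)."""
    mr = np.abs(np.diff(r))
    sigma = mr.mean() / 1.128
    return r.mean() - 3 * sigma, r.mean() + 3 * sigma

# --- simulated two-step process (illustrative only) ---
rng = np.random.default_rng(0)
n, phi = 200, 0.6
x = np.zeros(n)                           # step 1: AR(1) over time
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
y = 2.0 + 0.8 * x + rng.normal(size=n)    # step 2 depends on step 1

# Individual residual chart for step 1: remove the autocorrelation
# with a fitted AR(1), then chart the residuals.
phi_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
e1 = x[1:] - phi_hat * x[:-1]
lcl1, ucl1 = three_sigma_limits(e1)

# Cause-selecting chart for step 2: chart residuals of y given x,
# so signals reflect step 2 only, not incoming quality from step 1.
b1, b0 = np.polyfit(x, y, 1)
e2 = y - (b0 + b1 * x)
lcl2, ucl2 = three_sigma_limits(e2)

print("step 1 signals:", np.where((e1 < lcl1) | (e1 > ucl1))[0])
print("step 2 signals:", np.where((e2 < lcl2) | (e2 > ucl2))[0])
```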
482
Model Predictive Control of Magnetic Bearing System. Huang, Yang, S3110949@student.rmit.edu.au. January 2007 (has links)
Magnetic bearing systems have received a great deal of research attention over the past decades. Their inherent nonlinearity and open-loop instability are challenges for controller design. This thesis investigates and designs model predictive control strategies for an experimental Active Magnetic Bearing (AMB) laboratory system. A host-target development environment for real-time control with hardware in the loop (HIL) is implemented. Both continuous- and discrete-time model predictive controllers are studied: in the first stage, local MPC controllers are applied to control the AMB system; in the second stage, a supervisory controller design is investigated and implemented. The contributions of the thesis can be summarized as follows: 1. A discrete-time model predictive controller has been developed and applied to the active magnetic bearing system. 2. A continuous-time model predictive controller has been developed and applied to the active magnetic bearing system. 3. A frequency-domain identification method using FSF has been applied to model identification for the local MPC controllers of the magnetic bearing system. 4. A supervisory control strategy has been applied to realize two-stage model predictive control of the active magnetic bearing system.
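As a hedged illustration of the discrete-time controllers described above, the following is a minimal sketch of an unconstrained receding-horizon MPC law for a generic state-space plant. The plant matrices, horizons, and weight are placeholders standing in for one AMB axis, not the thesis's identified model.

```python
import numpy as np

# Placeholder open-loop-unstable second-order plant (stands in for
# one AMB axis; NOT the thesis's identified model).
A = np.array([[1.0, 0.01],
              [0.5, 1.0]])
B = np.array([[0.0],
              [0.01]])
C = np.array([[1.0, 0.0]])

Np, Nc = 20, 4          # prediction and control horizons
R = 0.1                 # control weight

# Prediction matrices: Y = F x_k + Phi U over the horizon.
n, m = A.shape[0], B.shape[1]
F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
Phi = np.zeros((Np, Nc * m))
for i in range(Np):
    for j in range(min(i + 1, Nc)):
        Phi[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

# Unconstrained least-squares MPC gain; only the first move is
# applied at each step (receding horizon).
H = Phi.T @ Phi + R * np.eye(Nc * m)
K = np.linalg.solve(H, Phi.T)        # U = K (r - F x)

x = np.array([[0.2], [0.0]])         # initial rotor displacement
r = np.zeros((Np, 1))                # regulate to zero
for k in range(50):
    U = K @ (r - F @ x)
    u = U[:m]                        # first control move only
    x = A @ x + B @ u
print("final output:", (C @ x).item())
```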
483
TEST DERIVATION AND REUSE USING HORIZONTAL TRANSFORMATION OF SYSTEM MODELS. KAVADIYA, JENIS. January 2010 (has links)
No description available.
484
Performance analysis of compositional and modified black-oil models for rich gas condensate reservoirs with vertical and horizontal wells. Izgec, Bulent. 30 September 2004
It has been known that volatile oil and gas condensate reservoirs cannot be modeled accurately with conventional black-oil models. One variation on the black-oil approach is the modified black-oil (MBO) model, which allows the use of a simpler and computationally less expensive algorithm than a fully compositional model and can result in significant time savings in full-field studies. The MBO model was tested against the fully compositional model, and the performance of both models was compared using various production and injection scenarios for a rich gas condensate reservoir. The software used to perform the compositional and MBO runs was Eclipse 300 and Eclipse 100 (versions 2002A), respectively. The effects of black-oil PVT table generation methods, uniform composition versus a compositional gradient with depth, initialization methods, location of the completions, production and injection rates, and kv/kh ratios on the performance of the MBO model were investigated. Vertical wells and horizontal wells with different drain hole lengths were used. Contrary to the common belief that oil-gas ratio versus depth initialization gives a better representation of original fluids in place, initialization with saturation pressure versus depth gave original fluids in place closer to the true values, taking the fully compositional model initialized with a compositional gradient as the reference. Compared to the compositional model, results showed that initially there was a discrepancy in saturation pressures with depth in the MBO model, whether it was initialized with solution gas-oil ratio (GOR) and oil-gas ratio (OGR) tables or with dew point pressure versus depth tables. In the MBO model this discrepancy resulted in earlier condensation and lower oil production rates than the compositional model at the beginning of the simulation. Unrealistic vaporization in the MBO model was encountered in both natural depletion and cycling cases. Oil saturation profiles illustrated the differences in condensate saturation distribution for the near-wellbore area and the entire reservoir, even though the production performance of the models was in good agreement. The MBO representation of compositional phenomena for a gas condensate reservoir proved successful in the following cases: full pressure maintenance, reduced vertical communication, vertical wells with upper completions, and a producer set as a horizontal well.
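For readers unfamiliar with the MBO formulation, the two-component bookkeeping it relies on can be sketched as follows: both phases carry the other surface component, with Rs the gas dissolved in oil and Rv the oil vaporized in gas (the extension that matters for condensates; classical black oil sets Rv = 0). This is a minimal illustration with made-up property values, not data from the thesis.

```python
# Modified black-oil (MBO) in-place surface volumes per unit of
# reservoir pore volume. Property values below are illustrative.

def mbo_surface_volumes(So, Sg, Bo, Bg, Rs, Rv):
    """Return (stock-tank oil, surface gas) per reservoir volume.

    So, Sg : oil and gas saturations (fractions)
    Bo     : oil formation volume factor, rb/stb
    Bg     : gas formation volume factor, rb/scf
    Rs     : solution gas-oil ratio, scf/stb
    Rv     : vaporized oil-gas ratio, stb/scf
    """
    sto = So / Bo + Rv * Sg / Bg      # free oil + condensate in gas
    gas = Sg / Bg + Rs * So / Bo      # free gas + gas dissolved in oil
    return sto, gas

# Rich gas condensate above dew point: all hydrocarbon is gas phase,
# yet there is still stock-tank oil in place, carried by Rv.
sto, gas = mbo_surface_volumes(So=0.0, Sg=0.8, Bo=1.8, Bg=0.0008,
                               Rs=1200.0, Rv=1.2e-4)
print(f"STO = {sto:.2f} stb, gas = {gas:.0f} scf per rb of pore volume")
```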
485
Calculation of observables in the nuclear 0s1d shell using new model-independent two-body interactions. Mkhize, Percival Sbusiso. January 2007 (has links)
The objective of the present investigation is to calculate observables using the nuclear shell model for a variety of nuclear phenomena and to make comparisons with experimental data as well as with the older interaction. The shell-model code OXBASH was employed for the calculations. Quantities investigated include energy level schemes, static magnetic dipole and electric quadrupole moments, electromagnetic transition probabilities, spectroscopic factors, and beta decay rates.
486
Statistical and Realistic Numerical Model Investigations of Anthropogenic and Climatic Factors that Influence Hypoxic Area Variability in the Gulf of Mexico. Feng, Yang. 2012 May 1900 (has links)
The hypoxic area in the Gulf of Mexico is the second largest in the world and has received extensive scientific study and management interest. Previous modeling studies have concluded that the increased hypoxic area in the Gulf of Mexico was caused by the increased anthropogenic nitrogen loading of the Mississippi River; however, the nitrogen-area relationship is complicated by many other factors, such as wind, river discharge, and the ratio of Mississippi to Atchafalaya River flow. These factors are related to large-scale climate variability and thus will not be affected by regional nitrogen reduction efforts.
In the research presented here, both statistical (regression) and numerical models are used to study the influence of anthropogenic and climatic factors on hypoxic area variability in the Gulf of Mexico. The numerical model is a three-dimensional, coupled hydrodynamic-biogeochemical model (ROMS-Fennel). Results include: (1) The west wind duration during the summer explains 55% of the hypoxic area variability since 1993; combined wind duration and nitrogen loading explain over 70% of the variability, and combined wind duration and river discharge explain over 85%. (2) The numerical model captures the temporal variability but overestimates the bottom oxygen concentrations. The simulated hypoxic area is in agreement with the observations from the year 1991, as long as hypoxia is defined as oxygen concentrations below 3 mg/L rather than below 2 mg/L. (3) The first three modes from an Empirical Orthogonal Function (EOF) analysis of the numerical model output explain 62%, 8.1%, and 4.9% of the hypoxic area variability. The principal component time series are cross-correlated with wind, dissolved inorganic nitrogen concentration, and river discharge. (4) Scenario experiments with the same nitrogen loading but different durations of upwelling-favorable wind indicate that upwelling-favorable wind is important for hypoxic area development; however, a long duration of upwelling wind decreases the area. (5) Scenario experiments with the same nitrogen loading but different discharges indicate that increasing river discharge by 50% increases the area by 42%. Additionally, scenario experiments with the same river discharge but different nitrogen concentrations indicate that reducing the nitrogen concentration by 50% decreases the area by 75%. (6) Scenario experiments with the same nitrogen loading but different flow diversions indicate that if the Atchafalaya River discharge increased to 66.7% of the total, the hypoxic area increases by 30%, and most of the hypoxic area moves from the eastern to the western Louisiana shelf. Additionally, if the Atchafalaya River discharge decreased to zero, the total hypoxic area increases by 13%. (7) Scenario experiments with the same nitrogen loading but different nitrogen forms indicate that if all the nitrogen were in inorganic forms, the hypoxic area increases by 15%. These results have multiple implications for understanding the mechanisms that control oxygen dynamics, reevaluating management strategies, and improving observational methods.
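As a hedged illustration of the regression part of the study, the sketch below fits linear models of hypoxic area on wind duration alone and on wind duration plus nitrogen loading, and reports the explained variance. All data values are invented placeholders, not the study's observations.

```python
import numpy as np

# Placeholder yearly data (NOT the study's values): summer west-wind
# duration (days), nitrogen loading (kt N), and hypoxic area (10^3 km^2).
wind = np.array([20., 35., 28., 15., 40., 32., 25., 18., 38., 30.])
nload = np.array([80., 120., 95., 70., 130., 110., 90., 75., 125., 100.])
area = np.array([12., 20., 16., 9., 22., 18., 14., 10., 21., 17.])

def r_squared(X, y):
    """Fraction of variance in y explained by a linear fit on X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

print("wind alone:       R^2 =", round(r_squared(wind[:, None], area), 2))
print("wind + N loading: R^2 =",
      round(r_squared(np.column_stack([wind, nload]), area), 2))
```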
487
Gröbner Basis and Structural Equation Modeling. Lim, Min. 23 February 2011
Structural equation models are systems of simultaneous linear equations that are generalizations of linear regression, and have many applications in the social, behavioural and biological sciences. A serious barrier to applications is that it is easy to specify models for which the parameter vector is not identifiable from the distribution of the observable data, and it is often difficult to tell whether a model is identified or not.
In this thesis, we study the most straightforward method of checking identification: solving a system of simultaneous equations. However, the calculations can easily become very complex, so Gröbner bases are introduced to simplify the process.
The main idea of checking identification is to solve a set of finitely many simultaneous equations, called identifying equations, which can be transformed into polynomials. If a unique solution is found, the model is identified. A Gröbner basis reduces the polynomials to simpler forms, making them easier to solve. It also allows us to investigate the model-induced constraints on the covariances, even when the model is not identified.
With the explicit solution to the identifying equations, including the constraints on the covariances, we can (1) locate points in the parameter space where the model is not identified, (2) find the maximum likelihood estimators, (3) study the effects of misspecified models, (4) obtain a set of method-of-moments estimators, and (5) build customized parametric and distribution-free tests, including inference for non-identified models.
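A minimal sketch of this approach using SymPy's groebner routine. The model below (one latent variable with two indicators and one outcome) and all symbol names are illustrative, not the thesis's examples; the observed covariances are included as trailing generators so the lex order eliminates the model parameters.

```python
import sympy as sp

# Illustrative model (not the thesis's example): one latent X with two
# indicators W1 = X + d1, W2 = X + d2 and an outcome Y = beta*X + e.
beta, phi, th1, th2, psi = sp.symbols('beta phi theta1 theta2 psi')
s11, s22, s33, s12, s13, s23 = sp.symbols('s11 s22 s33 s12 s13 s23')

# Identifying equations: model-implied covariances minus observed ones.
eqs = [phi + th1 - s11,             # Var(W1)
       phi + th2 - s22,             # Var(W2)
       phi - s12,                   # Cov(W1, W2)
       beta * phi - s13,            # Cov(W1, Y)
       beta * phi - s23,            # Cov(W2, Y)
       beta**2 * phi + psi - s33]   # Var(Y)

# Lex order with model parameters first eliminates them, exposing
# both the solutions and the constraints among the covariances.
G = sp.groebner(eqs, beta, phi, th1, th2, psi,
                s11, s22, s33, s12, s13, s23, order='lex')
for g in G.exprs:
    print(g)
```

Solving the triangular basis recovers, for example, beta = s13/s12 and phi = s12 (valid when s12 is nonzero, which is exactly how points of non-identification are located), while a basis element free of the model parameters, here s13 - s23, is a model-induced constraint on the covariances of the kind the abstract describes.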
488
Applicability of multiplicative and additive hazards regression models in survival analysis. Sarker, Sabuj. 12 April 2011
Background: Survival analysis is sometimes called time-to-event analysis. The Cox model, in which the covariates act multiplicatively on an unknown baseline hazard, is widely used in survival analysis. However, the Cox model requires the proportional hazards assumption, which limits its applications. The additive hazards model, in which the covariates act additively on an unknown baseline hazard, has been used as an alternative to the Cox model.
Objectives and methods: In this thesis, the performance of the Cox multiplicative hazards model and the additive hazards model is demonstrated through an application to the transfer, lifting and repositioning (TLR) injury prevention study. The TLR injury prevention study was a retrospective, pre-post intervention study that utilized a non-randomized control group. There were 1,467 healthcare workers from six hospitals in Saskatchewan, Canada who were injured from January 1, 1999 to December 1, 2006. De-identified data sets were received from the Saskatoon Health Region and Regina Qu'Appelle Health Region. Time to repeated TLR injury was considered as the outcome variable. The models' goodness of fit was also assessed.
Results: Of a total of 1,467 individuals, 149 (56.7%) in the control group and 114 (43.3%) in the intervention group had repeated injuries during the study period. Nurses and nursing aides had the highest proportion of repeated TLR injuries among occupations (84.8%). Back, neck and shoulders were the most common body parts injured (74.9%). These covariates were significant in both the Cox multiplicative and the additive hazards models. The intervention group had 27% fewer repeated injuries than the control group in the multiplicative hazards model (HR = 0.63; 95% CI = 0.48-0.82; p-value = 0.0002). In the additive model, the hazard difference between the intervention and the control groups was 0.002.
Conclusion: Both multiplicative and additive hazards models showed similar results, indicating that the TLR injury prevention intervention was effective in reducing repeated injuries. The additive hazards model is not widely used, but its covariate coefficients are easy to interpret on an additive scale. The additive hazards model should be considered when the proportionality assumption of the Cox model is in doubt.
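To make the multiplicative/additive contrast concrete, here is a minimal sketch using the lifelines package with simulated data; the column names and data are placeholders, and Aalen's additive model is used as one common member of the additive hazards family (the thesis's exact additive specification may differ).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, AalenAdditiveFitter

# Simulated stand-in for the injury data: time to repeated injury,
# an event indicator, and an intervention flag.
rng = np.random.default_rng(1)
n = 500
group = rng.integers(0, 2, n)                # 1 = intervention
t = rng.exponential(scale=np.where(group == 1, 14.0, 10.0))
df = pd.DataFrame({'time': np.minimum(t, 24),         # censor at 24
                   'event': (t < 24).astype(int),
                   'intervention': group})

# Multiplicative (Cox) model: exp(coef) is a hazard ratio.
cph = CoxPHFitter().fit(df, duration_col='time', event_col='event')
print(cph.summary[['coef', 'exp(coef)']])

# Additive (Aalen) model: cumulative coefficients are hazard
# differences added to the baseline hazard over time.
aaf = AalenAdditiveFitter(coef_penalizer=0.1).fit(
    df, duration_col='time', event_col='event')
print(aaf.cumulative_hazards_.tail())
```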
489
Modelling retention time in a clearwell. Yu, Xiaoli. 23 September 2009
Clearwells are large water reservoirs often used at the end of the water treatment process as chlorine contact chambers. The contact time required for microbe destruction is provided by residence time within the clearwell. The residence time distribution can be determined from tracer tests and is one of the key factors in assessing the hydraulic behaviour and efficiency of these reservoirs. This work provides an evaluation of whether the two-dimensional, depth-averaged, finite element model River2DMix can adequately simulate the flow pattern and residence time distribution in clearwells. One question in carrying out this modelling is whether or not the structural columns in the reservoir need to be included, as inclusion of the columns increases the computational effort required.
In this project, the residence time distribution predicted by River2DMix was compared to results of tracer tests in a scale model of the Calgary Glenmore water treatment plant northeast clearwell. Results from tracer tests in this clearwell were available. The clearwell has a serpentine baffle system and 122 square structural columns distributed throughout the flow. A comparison of the flow patterns in the hydraulic and computational models was also made. The hydraulic model tests were carried out with and without columns in the clearwell.
The 1:19 scale hydraulic model was developed on the basis of Froude number similarity and the maintenance of minimum Reynolds numbers in the flow through the serpentine system and the baffle wall at the entrance to the clearwell. Fluorescent tracer slug injection tests were used to measure the residence time distribution in the clearwell. Measurements of tracer concentration were taken at the clearwell outlet using a continuous flow-through fluorometer system. Flow visualization was also carried out using dye to identify and assess the dead zones in the flow. It was found that it was necessary to ensure the flow in the scale model was fully developed before starting the tracer tests, and determining the required flow development time to ensure steady state results from the tracer tests became an additional objective of the work. Tests were carried out at scale model flows of 0.85, 2.06, and 2.87 L/s to reproduce the 115, 280, and 390 ML/day flows seen in the prototype tracer tests.
Scale model results of the residence time distribution matched the prototype tracer test data well. However, approximately 10.5 hours was required for flow development at the lowest flow rate tested (0.85 L/s) before steady state conditions were reached and baffle factor results matched prototype values. At the intermediate flow, baffle factor results between the scale model and prototype matched well after only 1 h of flow development time, with improvements only in the Morrill dispersion index towards prototype values with increased flow development time (at 5 h). Similar results were seen at the highest flow tested. For fully developed flow, there was little change in the baffle factor and dispersion index results in the scale model with varied flow rate.
With the addition of columns to the scale model, there was no significant change in the baffle factor compared to the case without the columns, but up to a 13.9% increase in the dispersion index as compared to the tests in the scale model without columns for fully developed flow. Further, the residence time distribution results from the scale model tests without columns matched the entire residence time distribution found in the prototype tests. However, for the model with columns, the residence time distribution matched the prototype curve well at early times but departed significantly from it later in the tests. It appears the major effect of adding columns within a model clearwell is to increase the dispersion index and increase the proportion of the clearwell that operates as a mixed reactor.
The results also showed there was good agreement between the physical model tests and River2DMix simulations of the scale model tests for both the flow pattern and residence time distributions. This indicates that a two-dimensional depth-averaged computer model such as River2DMix can provide representative simulation results in the case where the inlet flow is expected to be quickly mixed throughout the depth of flow in the clearwell.
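The tracer-curve metrics used throughout this abstract can be sketched as follows, assuming the common definitions (baffle factor = T10 divided by the theoretical detention time; Morrill dispersion index = T90/T10); the tracer curve below is synthetic, not data from the study.

```python
import numpy as np

def tracer_metrics(t, c, theoretical_detention_time):
    """Baffle factor and Morrill index from a slug tracer curve.

    t : sample times; c : tracer concentration at the outlet.
    T10 / T90 are the times by which 10% / 90% of the recovered
    tracer mass has passed the outlet.
    """
    mass = np.cumsum((c[1:] + c[:-1]) / 2 * np.diff(t))  # trapezoid rule
    frac = mass / mass[-1]
    t10 = np.interp(0.10, frac, t[1:])
    t90 = np.interp(0.90, frac, t[1:])
    return t10 / theoretical_detention_time, t90 / t10

# Synthetic tracer response (illustrative only)
t = np.linspace(0, 10, 500)                    # hours
c = np.exp(-0.5 * ((t - 3.0) / 0.8) ** 2)      # Gaussian-like pulse
bf, morrill = tracer_metrics(t, c, theoretical_detention_time=4.0)
print(f"baffle factor = {bf:.2f}, Morrill index = {morrill:.2f}")
```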
490
Applying an unfolding model to the stages and processes of change. Beever, Rob. 02 January 2008
The purpose of this study was to utilize the graded unfolding model (GUM) (Roberts, 1995; Roberts & Laughlin, 1996) to examine the interaction between the stages of change (SOC) and the processes of change (POC) for smoking cessation (SC). Although an abundance of research has examined the transtheoretical model (TTM) and SC, the POC remain one of the least investigated dimensions of the TTM. Only one study has applied an item response theory model, the GUM, to the examination of the SOC and POC (Noel, 1999). This study attempted to replicate and extend the findings of Noel (1999) and provides additional external validity evidence for the SOC and the POC for SC.

The TTM posits that people undergoing change will use different processes and strategies as they proceed through the SOC and that each POC appears to reach peak use at a different stage. Thus, the POC appear to follow an inverse-U-shaped pattern (Noel, 1999).

Responses to the SOC and the 40-item POC for SC were collected from young adults. Analysis of the data using the GGUM (Roberts, 2000) demonstrated the applicability of the GUM and provides additional external validity evidence for the POC for SC. More specifically, six POC were ordered as expected according to results of longitudinal studies. Four POC were found to be out of order; however, this could be due to sample characteristics or reduced validity of items (due to smoking law changes, some items may no longer be valid). Helping Relationships and Stimulus Control appeared together out of order. This finding replicates Noel (1999), and further research is needed to examine the ordering of these POC. The GUM was also found to fit the POC data better than other item response theory models.
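As a hedged illustration of the unfolding idea, below is a minimal NumPy sketch of the generalized graded unfolding model (GGUM; Roberts, 2000) category response function, assuming its usual parameterization; all parameter values are illustrative. The single-peaked curves it produces are what distinguish unfolding models from dominance IRT models.

```python
import numpy as np

def ggum_probs(theta, alpha, delta, taus):
    """GGUM category probabilities for one item.

    theta : person location(s); alpha : discrimination;
    delta : item location; taus : thresholds tau_1..tau_C
    (tau_0 = 0 is implicit). Response categories are z = 0..C,
    with M = 2C + 1 in the usual GGUM parameterization.
    Agreement probability peaks near theta == delta, giving the
    single-peaked ('unfolding') response curve.
    """
    C = len(taus)
    M = 2 * C + 1
    cums = np.cumsum(np.concatenate([[0.0], taus]))  # sum_{k<=z} tau_k
    theta = np.atleast_1d(theta)[:, None]
    z = np.arange(C + 1)[None, :]
    num = (np.exp(alpha * (z * (theta - delta) - cums)) +
           np.exp(alpha * ((M - z) * (theta - delta) - cums)))
    return num / num.sum(axis=1, keepdims=True)

# Item peaked at delta = 0 with four response categories (C = 3):
p = ggum_probs(theta=[-2.0, 0.0, 2.0], alpha=1.0, delta=0.0,
               taus=np.array([-1.0, -0.5, 0.0]))
print(np.round(p, 3))   # highest-category probability peaks at theta = delta
```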