251 |
Gamma Interferon Production by Peripheral Blood Lymphocytes in Patients with Gastric Cancer. KATO, HAJIME; MORISE, KIMITOMO; KUSUGAMI, KAZUO. 03 1900
No description available.
|
252 |
An experimental investigation of the flow around impulsively started cylinders. Tonui, Nelson Kiplanga't. 10 September 2009
A study of impulsively started flow over cylindrical objects was made using the particle image velocimetry (PIV) technique for Reynolds numbers of Re = 200, 500 and 1000 in an X-Y towing tank. The cylindrical objects studied were a circular cylinder of diameter D = 25.4 mm, and square and diamond cylinders each with side length D = 25.4 mm. The aspect ratio, AR (= L/D), of the cylinders was 28 and they were therefore considered effectively infinite. The development of the recirculation zone up to a dimensionless time of t* = 4 following the start of the motion was examined. The impulsive start was approximated using a dimensionless acceleration parameter, a*; in this research, the experiments were conducted for five acceleration parameters, a* = 0.5, 1, 3, 5 and 10. The study showed that conditions similar to impulsively started motion were attained once a* ≥ 3.
A recirculation zone was formed immediately after the start of motion as a result of flow separation at the surface of the cylinder. It contained a pair of primary eddies, which in the initial stages (as in this case) were symmetric and rotated in opposite directions. The recirculation zone was quantified in terms of the length of the zone, LR; the vortex development, both the streamwise location and the cross-stream spacing of the vortex centers, a and b, respectively; and the circulation (strength) of the primary vortices, Γ.
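For reference, studies of impulsively started cylinder flows commonly form the dimensionless groups quoted here using the terminal towing speed U and the cylinder dimension D; these standard definitions are assumed below and may differ in detail from those adopted in the thesis:

    t^{*} = \frac{U t}{D}, \qquad a^{*} = \frac{a_{c}\, D}{U^{2}}, \qquad \Gamma^{*} = \frac{\Gamma}{U D},

where a_c denotes the constant acceleration applied from rest until the carriage reaches U, and lengths such as LR, a and b are reported as LR/D, a/D and b/D.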
For all types of cylinders examined, the length of the recirculation zone, the streamwise location of the primary eddies and the circulation of the primary eddies increase as time advances from the start of the impulsive motion. They also increase with an increase in the acceleration parameter, a*, up to a* = 3, beyond which there is no further change, since conditions similar to an impulsive start have already been attained. The cross-stream spacing of the primary vortices is relatively independent of Re, a* and t* but differs between cylinder geometries.
Irrespective of the type of cylinder, the growth of the recirculation zone at Re = 500 and 1000 is smaller than at Re = 200. The recirculation zone of the diamond cylinder is much larger than that of either the square or the circular cylinder. The square and diamond cylinders have sharp edges which act as fixed separation points; therefore, the cross-stream spacing of their primary vortex centers is independent of Re, unlike the circular cylinder, which shows a slight variation with Reynolds number.
The growth of the recirculation zone depends mainly on the distance moved following the start of the impulsive motion; this is why, for all types of cylinders, the LR/D, a/D and Γ/UD profiles collapse onto common curves when plotted against the distance moved from the start of the motion.
|
253 |
On Covering Points with Conics and Strips in the Plane. Tiwari, Praveen. 14 March 2013
Geometric covering problems have long been a focus of research in computer science. The generic geometric covering problem asks for a minimum-cardinality set of covering objects for a given set S of n objects in a geometric setting. Many versions of geometric cover have been studied in detail, one of which is line cover: given a set of points in the plane, find the minimum number of lines to cover them. In Euclidean space R^m, this problem is known as Hyperplane Cover, where lines are replaced by affine hyperplanes of dimension bounded by d. Line cover is NP-hard, as is its hyperplane analogue. This thesis focuses on a few extensions of hyperplane cover and line cover.
One framework used to study NP-hard problems is fixed-parameter tractability (FPT), where, in addition to the input size n, a parameter k is provided with each instance. The goal is to solve the problem in time f(k) times a polynomial in n, that is, polynomial in the input size with the super-polynomial (typically exponential) dependence confined to the parameter k. In this thesis, we study FPT and parameterized complexity theory, the theory of classifying hard problems with respect to a parameter k.
We focus on two new geometric covering problems: covering a set of points in the plane with conics (conic cover) and covering a set of points with strips, or fat lines, of given width in the plane (fat line cover). A conic is a non-degenerate curve of degree two in the plane; a fat line is a strip of finite width w. In this dissertation we focus on the parameterized versions of these two problems, where we are asked to cover the set of points with k conics or k fat lines. We use the existing techniques of FPT algorithms, kernelization and approximation algorithms to study these problems, giving a comprehensive treatment that ranges from NP-hardness results to their parameterized hardness in terms of the parameter k.
We show that conic cover is fixed-parameter tractable and give an algorithm with running time O*((k/1.38)^(4k)), where the O* notation suppresses factors polynomial in the input size. Utilizing special properties of a parabola, we obtain a faster algorithm with running time O*((k/1.15)^(3k)).
For fat line cover, we first establish NP-hardness and then explore algorithmic possibilities with respect to parameterized complexity theory. We show W[1]-hardness of fat line cover with respect to the number of fat lines, via a parameterized reduction from the problem of stabbing axis-parallel squares in the plane. A parameterized reduction is an algorithm which transforms an instance of one parameterized problem into an instance of another using an FPT algorithm. In addition, we show that some restricted versions of fat line cover are also W[1]-hard. Further, we explore a restricted version of fat line cover in which the points have integer coordinates and only axis-parallel fat lines may be used to cover them. We show that this problem is still NP-hard, that it is fixed-parameter tractable with a kernel of size O(k^2), and give an FPT algorithm with running time O*(3^k). Finally, we conclude our study of this problem by giving an approximation algorithm with constant approximation ratio 2.
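To illustrate the bounded-search-tree flavour of FPT algorithms mentioned above, the following is a minimal Python sketch for the classical (zero-width) line cover problem, deciding whether a set of integer-coordinate points can be covered by at most k lines. It uses the standard heavy-line reduction rule followed by branching and is a textbook illustration only, not the conic cover or fat line cover algorithm developed in the thesis.

    from itertools import combinations

    def line_through(p, q):
        # Implicit form a*x + b*y = c of the line through distinct points p and q.
        a, b = q[1] - p[1], p[0] - q[0]
        return (a, b, a * p[0] + b * p[1])

    def covers(line, r):
        a, b, c = line
        return a * r[0] + b * r[1] == c   # exact for integer coordinates

    def line_cover(points, k):
        """Can the given distinct integer-coordinate points be covered by <= k lines?"""
        points = list(points)
        if len(points) <= k:               # one line per remaining point suffices
            return True
        if k == 0:
            return False
        # Reduction rule: a line through more than k of the points must belong to
        # every solution of size <= k, since any other line meets it in at most one
        # point, so k other lines could cover at most k of its points.
        for p, q in combinations(points, 2):
            line = line_through(p, q)
            on_line = [r for r in points if covers(line, r)]
            if len(on_line) > k:
                return line_cover([r for r in points if r not in on_line], k - 1)
        # Now every line covers <= k points, so k lines cover at most k*k points.
        if len(points) > k * k:
            return False
        # Branch: some solution line passes through points[0] and a second remaining
        # point (at most k*k - 1 candidates), giving an O*(k^(2k)) search tree.
        p = points[0]
        for q in points[1:]:
            line = line_through(p, q)
            if line_cover([r for r in points if not covers(line, r)], k - 1):
                return True
        return False

    # Example: four collinear points plus one outlier need two lines.
    pts = [(0, 0), (1, 1), (2, 2), (3, 3), (5, 0)]
    print(line_cover(pts, 2), line_cover(pts, 1))   # True False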
|
254 |
Optimal Design of Sensor Parameters in PLC-Based Control System Using Mixed Integer Programming. OKUMA, Shigeru; SUZUKI, Tatsuya; MUTOU, Takashi; KONAKA, Eiji. 01 April 2005
No description available.
|
255 |
Thermal Conductivity and Specific Heat Measurements for Power Electronics Packaging Materials. Effective Thermal Conductivity Steady State and Transient Thermal Parameter Identification Methods. Madrid Lozano, Francesc. 06 March 2005
No description available.
|
256 |
Parameter Estimation and Uncertainty Analysis of Contaminant First Arrival Times at Household Drinking Water Wells. Kang, Mary. January 2007
Exposure assessment, which is an investigation of the extent of human exposure to a specific contaminant, must include estimates of the duration and frequency of exposure. For a groundwater system, the duration of exposure is controlled largely by the arrival time of the contaminant of concern at a drinking water well. This arrival time, which is normally estimated by using groundwater flow and transport models, can have a range of possible values due to the uncertainties that are typically present in real problems. Earlier arrival times generally represent low likelihood events, but play a crucial role in the decision-making process that must be conservative and precautionary, especially when evaluating the potential for adverse health impacts. Therefore, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times.
To demonstrate an approach to quantify the uncertainty of arrival times, a real contaminant transport problem which involves TCE contamination due to releases from the Lockformer Company Facility in Lisle, Illinois is used. The approach used in this research consists of two major components: inverse modelling or parameter estimation, and uncertainty analysis.
The parameter estimation process for this case study was selected based on insufficiencies in the model and observational data due to errors, biases, and limitations. Its purpose, which is to aid in characterising uncertainty, was also taken into account by including many possible variations in an attempt to minimize assumptions. A preliminary investigation was conducted using a well-accepted parameter estimation method, PEST, and the corresponding findings were used to define the characteristics of the parameter estimation process applied to this case study. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1- and M-estimators), penalty functions, and deadzones, were incorporated in the parameter estimation process to treat specific insufficiencies. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if predefined physical criteria were met. For each objective function, three procedures were implemented as part of the parameter estimation approach for the given case study: a multistart procedure, a stochastic search using the Dynamically-Dimensioned Search (DDS), and a test for acceptance based on the predefined physical criteria. The best performance, in terms of the ability of parameter sets to satisfy the physical criteria, was achieved using Cauchy's M-estimator, modified for this study and designated the LRS1 M-estimator. Due to uncertainties, multiple parameter sets obtained with the LRS1 M-estimator, the L1-estimator, and the L2-estimator are recommended for use in uncertainty analysis. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets; in contrast, deadzones produced negligible benefits. The characteristics of the parameter sets were examined in terms of frequency histograms and plots of parameter value versus objective function value to infer the nature of the likelihood distributions of the parameters. The correlation structure was estimated using Pearson's product-moment correlation coefficient. The parameters are generally distributed uniformly, or appear essentially random, with few correlations in the parameter space that results from the multistart procedure. The search procedure introduces many correlations and yields parameter distributions that appear lognormal, normal, or uniform. The application of the physical criteria refines the parameter characteristics in the parameter space resulting from the search procedure by reducing anomalies. The combined effect of optimization and the application of the physical criteria performs the function of behavioural thresholds by removing parameter sets with high objective function values.
Uncertainty analysis is performed with parameter sets obtained through two different sampling methodologies: the Monte Carlo sampling methodology, which randomly and independently samples from user-defined distributions, and the physically-based DDS-AU (P-DDS-AU) sampling methodology, which is developed based on the multiple parameter sets acquired during the parameter estimation process. Monte Carlo samples are found to be inadequate for uncertainty analysis of this case study due to the method's inability to find parameter sets that meet the predefined physical criteria. Successful results are achieved using the P-DDS-AU sampling methodology, which inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For the P-DDS-AU samples, uncertainty representation is performed using four definitions based on pseudo-likelihoods: two based on the Nash and Sutcliffe efficiency criterion, and two based on inverse error or residual variance. The definitions include shaping factors that strongly affect the resulting likelihood distribution. In addition, some definitions are affected by the objective function definition. Therefore, all variations are considered in the development of likelihood distribution envelopes, which are designed to maximize the amount of information available to decision-makers. The considerations that are important to the creation of an uncertainty envelope are outlined in this thesis. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria is recommended.
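As an illustration of the pseudo-likelihood weighting described above, the following sketch assumes a GLUE-style formulation based on the Nash and Sutcliffe efficiency with a shaping factor; the specific likelihood definitions, shaping factors and data used in the thesis are not reproduced here, and all numbers below are hypothetical.

    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency of a simulated series against the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pseudo_likelihoods(obs, sims, shaping=1.0):
        """Weights proportional to max(NSE, 0)**shaping, normalized over the ensemble."""
        w = np.array([max(nash_sutcliffe(obs, s), 0.0) ** shaping for s in sims])
        return w / w.sum()

    def envelope(predictions, weights, quantiles=(0.05, 0.95)):
        """Weighted quantile bounds of a scalar prediction (e.g. first arrival time)."""
        order = np.argsort(predictions)
        p, w = np.asarray(predictions, float)[order], np.asarray(weights)[order]
        cdf = np.cumsum(w)
        return [p[min(np.searchsorted(cdf, q), len(p) - 1)] for q in quantiles]

    # Hypothetical ensemble of three behavioural parameter sets:
    obs = [1.0, 2.1, 2.9]
    sims = [[1.1, 2.0, 3.0], [0.8, 2.4, 2.7], [1.0, 2.2, 3.2]]
    arrival_times = [12.5, 9.8, 14.1]        # predicted first arrival times (years)
    w = pseudo_likelihoods(obs, sims, shaping=2.0)
    print(envelope(arrival_times, w))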
The selection of likelihood and objective function definitions and of their properties should be made based on the needs of the problem; therefore, preliminary investigations should always be conducted to provide a basis for selecting appropriate methods and definitions. It is imperative to remember that communicating the assumptions and definitions used in both parameter estimation and uncertainty analysis is crucial in decision-making scenarios.
|
257 |
The Role of Dominant Cause in Variation Reduction through Robust Parameter Design. Asilahijani, Hossein. 24 April 2008
Reducing variation in key product features is a very important goal in process improvement. Finding and trying to control the cause(s) of variation is one way to reduce variability, but is not cost effective or even possible in some situations. In such cases, Robust Parameter Design (RPD) is an alternative.
The goal in RPD is to reduce variation by reducing the sensitivity of the process to the sources of variation, rather than controlling these sources directly. That is, the goal is to find levels of the control inputs that minimize the output variation imposed on the process via the noise variables (causes). In the literature, a variety of experimental plans have been proposed for RPD, including Robustness, Desensitization and Taguchi’s method. In this thesis, the efficiency of the alternative plans is compared in the situation where the most important source of variation, called the “Dominant Cause”, is known. It is shown that desensitization is the most appropriate approach for applying the RPD method to an existing process.
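To make the desensitization idea concrete (a standard first-order argument, assumed here rather than quoted from the thesis), write the output as y = f(x, z) + e, where x denotes the control inputs and z the dominant noise cause with variance \sigma_{z}^{2}. A Taylor expansion about the nominal noise level gives

    \operatorname{Var}(y) \approx \left( \frac{\partial f}{\partial z} \right)^{2} \sigma_{z}^{2} + \sigma_{e}^{2},

so variation can be reduced by choosing control levels x at which the slope of f with respect to z is small, that is, by exploiting a control-by-noise interaction, without having to control z itself.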
|
258 |
The Application of Markov Chain Monte Carlo Techniques in Non-Linear Parameter Estimation for Chemical Engineering Models. Mathew, Manoj. January 2013
Modeling of chemical engineering systems often necessitates using non-linear models. These models range in complexity from a simple analytical equation to a system of differential equations. Regardless of the type of model being used, determining parameter estimates is essential in everyday chemical engineering practice. One promising approach to non-linear regression is a technique called Markov Chain Monte Carlo (MCMC). This method produces reliable parameter estimates and generates joint confidence regions (JCRs) with correct shape and correct probability content. Despite these advantages, its application in the chemical engineering literature has been limited. Therefore, in this project, MCMC methods were applied to a variety of chemical engineering models. The objectives of this research are to (1) illustrate how to implement MCMC methods in complex non-linear models, (2) show the advantages of using MCMC techniques over classical regression approaches, and (3) provide practical guidelines on how to reduce the computational time.
MCMC methods were first applied to the biological oxygen demand (BOD) problem. In this case study, an implementation procedure was outlined using specific examples from the BOD problem. The results from the study illustrated the importance of estimating the pure error variance as a parameter rather than fixing its value based on the mean square error. In addition, a comparison was carried out between the MCMC results and the results obtained from using classical regression approaches. The findings show that although similar point estimates are obtained, JCRs generated from approximation methods cannot model the parameter uncertainty adequately.
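The following is a minimal random-walk Metropolis sketch of this idea, with the pure error variance treated as a parameter (through its logarithm) rather than fixed at the mean square error; the data values, starting point and proposal step sizes are illustrative assumptions and not the implementation used in this work.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative BOD-type data: y = theta1 * (1 - exp(-theta2 * t)) + error
    t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 7.0])
    y = np.array([8.3, 10.3, 19.0, 16.0, 15.6, 19.8])

    def log_post(theta):
        """Log posterior with flat priors; the pure error variance enters via log_s2."""
        th1, th2, log_s2 = theta
        if th1 <= 0 or th2 <= 0:
            return -np.inf
        s2 = np.exp(log_s2)
        resid = y - th1 * (1.0 - np.exp(-th2 * t))
        return -0.5 * t.size * np.log(s2) - 0.5 * np.sum(resid ** 2) / s2

    def metropolis(theta0, steps=20000, step_sizes=(1.0, 0.1, 0.3)):
        chain, current, lp = [], np.array(theta0, float), log_post(theta0)
        for _ in range(steps):
            proposal = current + rng.normal(0.0, step_sizes)   # random-walk proposal
            lp_prop = log_post(proposal)
            if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept/reject
                current, lp = proposal, lp_prop
            chain.append(current.copy())
        return np.array(chain)

    chain = metropolis([20.0, 0.25, np.log(4.0)])
    print(chain[10000:, :2].mean(axis=0))   # posterior means of theta1, theta2 after burn-in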
Markov Chain Monte Carlo techniques were then applied in estimating reactivity ratios in the Mayo-Lewis model, Meyer-Lowry model, the direct numerical integration model and the triad fraction multiresponse model. The implementation steps for each of these models were discussed in detail and the results from this research were once again compared to previously used approximation methods. Once again, the conclusion drawn from this work showed that MCMC methods must be employed in order to obtain JCRs with the correct shape and correct probability content.
MCMC methods were also applied to estimating the kinetic parameters used in a solid oxide fuel cell study. More specifically, the kinetics of the water-gas shift reaction, which is used in generating hydrogen for the fuel cell, was studied. The results from this case study showed how the MCMC output can be analyzed in order to diagnose parameter observability and correlation. A significant portion of the model needed to be reduced due to these issues of observability and correlation. Point estimates and JCRs were then generated using the reduced model, and diagnostic checks were carried out to ensure that the model was able to capture the data adequately.
A few select parameters in the Waterloo Polymer Simulator were estimated using the MCMC algorithm. Previous studies have shown that accurate parameter estimates and JCRs could not be obtained using classical regression approaches. However, when MCMC techniques were applied to the same problem, reliable parameter estimates were obtained and the resulting confidence regions had the correct shape and correct probability content. This case study offers a strong argument as to why classical regression approaches should be replaced by MCMC techniques.
Finally, a very brief overview of the computational times for each non-linear model used in this research was provided. In addition, a serial farming approach was proposed and a significant decrease in computational time was observed when this procedure was implemented.
|
259 |
Diagnosis of a compressed air system in a heavy vehicle / Diagnos av tryckluftssystem i ett tungt fordon. Kågebjer, Martin. January 2011
Compressed air has in the past been considered a free resource in heavy vehicles. Recent years' work to minimize fuel consumption has, however, made air consumption an interesting topic for manufacturers to investigate further. Compressed air has many different applications in heavy vehicles. One important consumer of compressed air is the brake system, which would not work at all without compressed air. The compressed air is produced by a compressor attached to the engine. A leakage in the system forces the compressor to work longer, which leads to increased fuel consumption. It is of great interest to have a diagnosis system that can detect leakages and, if possible, also provide information about where in the system the leakage is present. This information can then be used to repair the leakage at the next service stop. The diagnosis system developed in this thesis is based on model-based diagnosis and uses a recursive least-squares method to estimate the leakage area. The results from the validation show that the algorithm works well for leakages of the size 1-10 litres/minute. The innovative isolation algorithm gives full fault isolation for a five-circuit system with only three pressure sensors.
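A generic recursive least-squares estimator with a forgetting factor, of the kind referred to above, is sketched below; the choked-orifice regressor, the constants and the noise level in the usage example are assumptions for illustration and are not taken from the thesis.

    import numpy as np

    class RecursiveLeastSquares:
        """Recursive least squares with a forgetting factor, e.g. for tracking a slowly
        varying parameter theta in a linear-in-parameters model y = phi^T theta."""
        def __init__(self, n_params, forgetting=0.99, p0=1e3):
            self.theta = np.zeros(n_params)
            self.P = p0 * np.eye(n_params)
            self.lam = forgetting

        def update(self, phi, y):
            phi = np.asarray(phi, float)
            gain = self.P @ phi / (self.lam + phi @ self.P @ phi)
            self.theta = self.theta + gain * (y - phi @ self.theta)
            self.P = (self.P - np.outer(gain, phi) @ self.P) / self.lam
            return self.theta

    # Hypothetical usage: estimate an effective leakage area A, assuming the measured
    # leakage flow is roughly proportional to A times the circuit pressure.
    rng = np.random.default_rng(0)
    rls = RecursiveLeastSquares(n_params=1)
    true_area = 2.5e-6                              # assumed leakage area [m^2]
    for _ in range(200):
        p = rng.uniform(8e5, 10e5)                  # circuit pressure [Pa]
        phi = np.array([0.04 * p])                  # assumed regressor c * p
        flow = phi @ np.array([true_area]) + rng.normal(0.0, 1e-3)
        estimate = rls.update(phi, flow)
    print(estimate)                                 # approaches true_area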
|
260 |
A New Third Compartment Significantly Improves Fit and Identifiability in a Model for Ace2p Distribution in Saccharomyces cerevisiae after Cytokinesis. Järvstråt, Linnea. January 2011
Asymmetric cell division is an important mechanism for the differentiation of cells during embryogenesis and cancer development. Saccharomyces cerevisiae divides asymmetrically and is therefore used as a model system for understanding the mechanisms behind asymmetric cell division. Ace2p is a transcription factor in yeast that localizes primarily to the daughter nucleus during cell division. The distribution of Ace2p is visualized using a fusion protein with yellow fluorescent protein (YFP) and confocal microscopy. Systems biology provides a new approach to investigating biological systems through the use of quantitative models. The localization of the transcription factor Ace2p in yeast during cell division has been modelled using ordinary differential equations, and such modelling is evaluated here. A 2-compartment model for the localization of Ace2p in yeast post-cytokinesis, proposed in earlier work, was found to be insufficient when new data were included in the model evaluation. Ace2p localization in the dividing yeast cell pair before cytokinesis was investigated using a similar approach, and the corresponding model was also found not to explain the data to a significant degree. A 3-compartment model is therefore proposed; its improvement over the 2-compartment model is statistically significant. Simulations of the 3-compartment model predict a fast decrease in the amount of Ace2p in the cytosol close to the nucleus during the first seconds after each bleaching of the fluorescence. Experimental investigation of the cytosol close to the nucleus could test whether these fast dynamics are present after each bleaching. The parameters in the model have been estimated using the profile likelihood approach in combination with global optimization by simulated annealing. Confidence intervals for the parameters have been found for the 3-compartment model of Ace2p localization post-cytokinesis. In conclusion, the profile likelihood approach has proven to be a good method of estimating parameters, and the new 3-compartment model allows for reliable parameter estimates in the post-cytokinesis situation. A new Matlab implementation of the profile likelihood method is appended.
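A compact sketch of the profile likelihood procedure is given below (in Python rather than the appended Matlab implementation); the toy exponential-decay model, the synthetic data and the parameter grid are assumptions for illustration.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import chi2

    def profile_likelihood(cost, theta_hat, index, grid):
        """Profile a cost function (e.g. -2 log L) over parameter `index`: for each
        fixed value in `grid`, re-optimize the remaining parameters from the best fit."""
        profile = []
        for value in grid:
            free0 = np.delete(theta_hat, index)
            def reduced(free, v=value):
                return cost(np.insert(free, index, v))
            profile.append(minimize(reduced, free0, method="Nelder-Mead").fun)
        return np.array(profile)

    # Toy example: y = A * exp(-k * t) fitted to synthetic data with known noise sigma.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 20)
    y = 2.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.05, t.size)
    cost = lambda th: np.sum((y - th[0] * np.exp(-th[1] * t)) ** 2) / 0.05 ** 2  # ~ -2 log L
    best = minimize(cost, [1.0, 0.1], method="Nelder-Mead").x
    grid = np.linspace(0.1, 0.6, 25)
    prof = profile_likelihood(cost, best, index=1, grid=grid)
    # Grid values whose profiled cost lies within chi2.ppf(0.95, 1) of the minimum form
    # an approximate 95% confidence interval for the decay rate k.
    inside = grid[prof <= prof.min() + chi2.ppf(0.95, 1)]
    print(inside.min(), inside.max())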
|