241

Development and Test of a GEM-Based TEPC for Neutron Protection Dosimetry

Seydaliev, Marat Radikovich 12 February 2007 (has links)
The effective dose equivalent, H (or the effective dose, E), to an individual is the primary limiting quantity in radiation protection. However, techniques for measuring H for neutrons have not been fully developed. To address this, a new tissue equivalent proportional counter (TEPC) based on a gas electron multiplier (GEM) was designed and constructed for measuring H*(10), a conservative estimate of H, for neutrons. The deposited-energy distribution for two different neutron sources (a Cf-252 source and an AmBe source) was measured with the new TEPC. The measurements were performed using two different proportional gases, P-10 gas and a propane-based tissue equivalent gas, at various pressures. A Monte Carlo simulation of the new TEPC was performed in order to obtain the pulse height distributions for the two neutron sources, and the simulated and measured results were compared. The experimental results agree with the computational results to within 20% for both the Cf-252 and AmBe neutron sources. A new model of the GEM-based TEPC was developed for use in obtaining H*(10), and values of H*(10) for the Cf-252 and AmBe sources were obtained from the experimental measurements. These results are presented in this study, which shows that the GEM-based TEPC can successfully estimate H*(10). With these results and some refinements, this GEM-based TEPC can be used directly as a neutron rem meter.
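The abstract does not reproduce the thesis's formulas, but the standard microdosimetric route from a measured lineal-energy spectrum to dose equivalent, which a TEPC analysis of this kind typically follows, can be sketched as:

```latex
% Mean quality factor from the measured dose distribution d(y) in lineal
% energy y, with the ICRP 60 quality-factor relation Q(L) evaluated at
% L \approx y (the usual TEPC approximation); H is then the dose equivalent.
H = \bar{Q}\, D, \qquad \bar{Q} = \int_0^\infty Q(y)\, d(y)\, \mathrm{d}y,
\qquad
Q(L) =
\begin{cases}
1, & L < 10\ \mathrm{keV/\mu m},\\[2pt]
0.32\,L - 2.2, & 10 \le L \le 100\ \mathrm{keV/\mu m},\\[2pt]
300/\sqrt{L}, & L > 100\ \mathrm{keV/\mu m}.
\end{cases}
```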
242

Optimal designs for statistical inferences in nonlinear models with bivariate response variables

Hsu, Hsiang-Ling 27 January 2011 (has links)
Bivariate or multivariate correlated data may be collected on a sample of units in many applications. When experimenters are concerned with the failure times of two related subjects, for example paired organs or two chronic diseases, bivariate binary data are often acquired. This type of data consists of an observation point x and indicators of whether each failure time occurred before or after the observation point, so the observed data can be written in the form {x, δ1 = I(X1 ≤ x), δ2 = I(X2 ≤ x)}. The corresponding optimal design problems for parameter estimation under this type of bivariate data are discussed. For multivariate responses with explanatory variables, the marginal distributions may come from different families. A copula model is a way to formulate the relationship between these responses and the association between pairs of responses, and copula models for bivariate binary data are considered useful in practice due to their flexibility. In this dissertation the marginal functions are assumed to be exponential or Weibull, and two assumptions about the joint function between variables, independence or correlation, are considered. When the bivariate binary data are assumed correlated, the Clayton copula is used as the joint cumulative distribution function. Few works have addressed optimal design problems for bivariate binary data with copula models. The D-optimal designs, which minimize the volume of the confidence ellipsoid for estimating the unknown parameters (including the association parameter in the bivariate copula model), are used to determine the best observation points, while the Ds-optimal designs are used mainly for estimating the important association parameter in the Clayton model. The D- and Ds-optimal designs for the above copula model are found through the general equivalence theorem with a numerical algorithm. Under the different model assumptions, the numerical results show that the number of support points of a D-optimal design is at most the number of model parameters. When the difference between the marginal distributions and the association are significant, the association becomes an influential factor that increases the number of support points. Simulation studies show that estimation based on the optimal designs performs reasonably well. In survival experiments, the experimenter customarily takes trials at specific points such as the 25th, 50th, and 75th percentiles of the distributions; hence, we consider the design efficiencies when the design points are placed at three or four particular percentiles. Although it is common in practice to take trials at several quantile positions, the allocation of the sample-size proportions also has great influence on the experimental results. To use a locally optimal design in practice, prior information about the models or parameters is needed. When there is not enough prior knowledge, it is more flexible to use sequential experiments to obtain information in several stages. Hence, with robustness in mind, a sequential procedure is proposed that combines D- and Ds-optimal designs under the independent or correlated distribution in different stages of the experiment. The simulation results of the sequential procedure are compared with those of one-step procedures.
When optimal designs are obtained from incorrect prior parameter values or distributions, the resulting designs may have poor efficiencies; in contrast, the sample means of the estimators and the corresponding optimal designs obtained from the sequential procedure are close to the true values, and the corresponding efficiencies are close to 1. Huster (1989) analyzed the corresponding modeling problems for paired survival data, considering exponential and Weibull distributions as possible marginals and the Clayton model as the joint function, and applied them to the Diabetic Retinopathy Study. That study was conducted by the National Eye Institute to assess the effectiveness of laser photocoagulation in delaying the onset of blindness in patients with diabetic retinopathy. It can be viewed as a prior experiment that provides the experimenter with useful guidelines for collecting data in future studies. As an application to the Diabetic Retinopathy Study, we develop optimal designs for collecting suitable data and information for estimating the unknown model parameters. In the second part of this work, optimal design problems for parameter estimation are considered for proportional data. The dispersion model of Jorgensen (1997), a flexible class of non-normal distributions applicable to binary and count responses as well as proportional outcomes, is considered in this research. For continuous proportional data, where responses are confined to the interval (0,1), the simplex dispersion model is used. D-optimal designs are obtained through the corresponding equivalence theorem, and the numerical results are presented. In the development of classical optimal design theory, weighted polynomial regression models with variance functions that depend on the explanatory variable have played an important role, and the problem of constructing locally D-optimal designs for the simplex dispersion model can be viewed as a weighted polynomial regression model with a specific variance function. Because the weight function in the information matrix has a complicated rational form, an approximation of the weight function is used, and the corresponding optimal designs are obtained for different parameter values; these designs are compared with those based on the original weight function.
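As a point of reference for the model named above, the Clayton copula and the resulting cell probabilities for the bivariate binary observation have standard closed forms; the following sketch uses the generic notation u = F1(x), v = F2(x) rather than the dissertation's own symbols:

```latex
% Clayton copula with association parameter \theta > 0, applied to the
% marginal failure probabilities u = F_1(x), v = F_2(x):
C_\theta(u,v) = \left( u^{-\theta} + v^{-\theta} - 1 \right)^{-1/\theta}.
% Cell probabilities for the indicators (\delta_1, \delta_2) at point x:
P(1,1) = C_\theta(u,v), \qquad P(1,0) = u - C_\theta(u,v),
\qquad P(0,1) = v - C_\theta(u,v), \qquad P(0,0) = 1 - u - v + C_\theta(u,v).
```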
243

Development And Comparison Of Autopilot And Guidance Algorithms For Missiles

Evcimen, Cagdas 01 August 2007 (has links) (PDF)
To intercept a target, a missile must be guided by a successful guidance algorithm accompanied by a suitable autopilot structure. In this study, different autopilot and guidance designs for a canard-controlled missile are developed. As a first step, a nonlinear mathematical model of the missile is derived from the equations of motion, with aerodynamic coefficients obtained from the Missile DATCOM program. Autopilot design starts with linearization of the nonlinear missile model around equilibrium flight conditions. Controllers based on optimal control theory and sliding mode control are designed; in all of the designs, angle-of-attack-command and roll-angle-command autopilot structures are used. During the design process, variations in angle of attack, Mach number, and altitude can lead to significant performance degradation; this problem is typically solved by gain scheduling with respect to these parameters. There are different guidance methods in the literature. Throughout this study, proportional navigation guidance and its modified forms are selected as the base algorithm in the guidance system design. Other, more robust guidance methods, such as an optimal guidance approach and sliding mode guidance, are also formulated for performance comparison with the traditional proportional navigation approach. Finally, a new guidance method, optimal proportional-integral guidance, is introduced; among all methods considered in the thesis, it performs best against highly maneuvering targets.
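The baseline algorithm named in the abstract, classical proportional navigation, is compact enough to sketch; the following Python fragment is an illustrative planar implementation, not code from the thesis:

```python
import numpy as np

def pn_acceleration(r_rel, v_rel, N=4.0):
    """Classical (true) proportional navigation in the engagement plane.

    r_rel, v_rel: target position and velocity relative to the missile
    (2-element arrays); N: navigation constant. Returns the commanded
    acceleration a_c = N * Vc * lambda_dot, applied normal to the LOS.
    """
    R = np.linalg.norm(r_rel)
    lam_dot = (r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]) / R**2  # LOS rate
    Vc = -np.dot(r_rel, v_rel) / R                                # closing velocity
    los = r_rel / R
    normal = np.array([-los[1], los[0]])                          # unit normal to LOS
    return N * Vc * lam_dot * normal

# Example: target 5 km ahead, crossing at 200 m/s while closing at 600 m/s
a_c = pn_acceleration(np.array([5000.0, 0.0]), np.array([-600.0, 200.0]))
```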
244

A Rule Based Missile Evasion Method For Fighter Aircrafts

Sert, Muhammet 01 June 2008 (has links) (PDF)
In this thesis, a new guidance method for fighter aircraft and a new guidance method for missiles are developed. Guidance and control systems for the aircraft and the missile are also designed in order to simulate generic engagement scenarios between them. The suggested methods have been tested in extensive simulation studies. The aircraft guidance method developed here is a rule-based missile evasion method. The main idea stems from maximizing the miss distance in an engagement scenario between a missile and an aircraft: an optimal control problem with state- and input-dependent inequality constraints is solved, the solution method is applied to different problems representing generic scenarios, and rules are extracted from the solutions of these optimal control problems. Finally, the aircraft is guided by interpolating the extracted rules. The new guidance method for missiles is formulated by modifying the classical proportional navigation guidance method using position estimates, which are obtained with a Kalman-based filtering method called interacting multiple models.
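The interacting-multiple-models filter mentioned at the end has a standard mixing step; the equations below are the textbook form, not the thesis's own derivation:

```latex
% IMM mixing at time k: p_{ij} are the Markov model-transition
% probabilities and \mu_i(k-1) the model probabilities from the previous cycle.
\mu_{i|j}(k-1) = \frac{p_{ij}\,\mu_i(k-1)}{\bar{c}_j}, \qquad
\bar{c}_j = \sum_i p_{ij}\,\mu_i(k-1);
% the model-j Kalman filter is then re-initialized with the mixed estimate
\hat{x}_{0j}(k-1) = \sum_i \mu_{i|j}(k-1)\,\hat{x}_i(k-1).
```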
245

Statistical Analysis and Modeling of Breast Cancer and Lung Cancer

Cong, Chunling 05 November 2010 (has links)
The objective of the present study is to investigate various problems associated with breast cancer and lung cancer patients. We compare the effectiveness of breast cancer treatments using decision tree analysis and conclude that, although certain treatments show overall effectiveness over the others, physicians should use their discretion in assigning treatments to breast cancer patients based on patient characteristics. Recurrence times of breast cancer patients who receive different treatments are compared in an overall sense, with histology type also taken into consideration. To further understand the relation between relapse time and other variables, statistical models are applied to identify the contributing variables and predict the relapse time. Of equal importance, the transitions between breast cancer stages are analyzed through a Markov chain, which not only gives the transition probabilities between stages for a specific treatment but also provides guidance on breast cancer treatment based on staging information. A sensitivity analysis is conducted on breast cancer doubling time, which involves two commonly used assumptions, a spherical tumor and exponential tumor growth; the analysis reveals that departures from those assumptions can cause very different statistical behavior of the doubling time. In the lung cancer study, we investigate the mortality time of lung cancer patients from several perspectives: gender, cigarettes per day, and duration of smoking. A statistical model is also used to predict the mortality time of lung cancer patients.
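The two assumptions named above pin down the usual doubling-time formula; the following is the standard relation, given here for context rather than taken from the study:

```latex
% Spherical tumor of diameter d, V = (\pi/6) d^3, growing exponentially,
% V(t) = V_0 e^{\lambda t}: the doubling time from two measurements
% (d_1, d_2) taken \Delta t apart is
DT = \frac{\Delta t\,\ln 2}{\ln(V_2/V_1)}
   = \frac{\Delta t\,\ln 2}{3\,\ln(d_2/d_1)}.
```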
246

Relative Survival of Gags Mycteroperca microlepis Released Within a Recreational Hook-and-Line Fishery: Application of the Cox Regression Model to Control for Heterogeneity in a Large-Scale Mark-Recapture Study

Sauls, Beverly J. 01 January 2013 (has links)
The objectives of this study were to measure injuries and impairments directly observed from gags Mycteroperca microlepis caught and released within a large-scale recreational fishery, develop methods that may be used to rapidly assess the condition of reef fish discards, and estimate the total portion of discards in the fishery that suffer latent mortality. Fishery observers were placed on for-hire charter and headboat vessels operating in the Gulf of Mexico from June 2009 through December 2012 to directly observe reef fishes as they were caught by recreational anglers fishing with hook-and-line gear. Fish that were not retained by anglers were inspected and marked with conventional tags prior to release. Fish were released in multiple regions over a large geographic area throughout the year and over multiple years. The majority of recaptured fish were reported by recreational and commercial fishers, and fishing effort fluctuated both spatially and temporally over the course of this study in response to changes in recreational harvest restrictions and the Deepwater Horizon oil spill. Therefore, it could not be assumed that encounter probabilities were equal for all individual tagged fish in the population. Fish size and capture depth when fish were initially caught-and-released also varied among individuals in the study and potentially influenced recapture reporting probabilities. The Cox proportional hazards regression model was used to control for potential covariates on both the occurrence and timing of recapture reporting events so that relative survival among fish released in various conditions could be compared. A total of 3,954 gags were observed in this study, and the majority (77.26%) were released in good condition (condition category 1), defined as fish that immediately submerged without assistance from venting and had not suffered internal injuries from embedded hooks or visible damage to the gills. However, compared to gags caught in shallower depths, a greater proportion of gags caught and released from depths deeper than 30 meters were in fair or poor condition. Relative survival was significantly reduced (α ≤ 0.05) for gags released in fair and poor condition after controlling for variable mark-recapture reporting rates for different sized discards among regions and across months and years when individual fish were initially captured, tagged and released. Gags released within the recreational fishery in fair and poor condition were 66.4% (95% C.I. 46.9 to 94.0%) and 50.6% (26.2 to 97.8%) as likely to be recaptured, respectively, as gags released in good condition. Overall discard mortality was calculated for gags released in all condition categories at ten-meter depth intervals. There was a significant linear increase in estimated mortality from less than 15% (range of uncertainty, 0.1-25.2%) in shallow depths up to 30 meters, to 35.6% (5.6-55.7%) at depths greater than 70 meters (p < 0.001, R² = 0.917). This analysis demonstrated the utility of the proportional hazards regression model for controlling for potential covariates on both the occurrence and timing of recapture events in a large-scale mark-recapture study and for detecting significant differences in the relative survival of fish released in various conditions measured under highly variable conditions within a large-scale fishery.
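A minimal sketch of the kind of Cox model described above, using the lifelines package in Python; the column names (days_at_liberty, recaptured, condition_fair, condition_poor, depth_m) and file name are illustrative assumptions, not the study's actual variables:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical mark-recapture table: one row per tagged-and-released gag.
df = pd.read_csv("gag_tags.csv")

cph = CoxPHFitter()
cph.fit(df[["days_at_liberty", "recaptured",
            "condition_fair", "condition_poor", "depth_m"]],
        duration_col="days_at_liberty", event_col="recaptured")

# exp(coef) for condition_fair / condition_poor estimates the recapture
# rate relative to fish released in good condition, i.e. the
# relative-survival ratios reported in the abstract.
cph.print_summary()
```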
247

A STUDY OF TIES AND TIME-VARYING COVARIATES IN COX PROPORTIONAL HAZARDS MODEL

Xin, Xin 12 September 2011 (has links)
In this thesis, ties and time-varying covariates in survival analysis are investigated. There are two types of ties: ties between event times (Type 1 ties) and ties between event times and the times at which discrete time-varying covariates change or "jump" (Type 2 ties). The Cox proportional hazards model is one of the most important regression models in survival analysis. Methods for handling Type 1 ties and time-varying covariates in the Cox proportional hazards model are well established in previous studies, but Type 2 ties have been ignored in the literature. This thesis discusses the effect of Type 2 ties on Cox's partial likelihood and the current default treatment of Type 2 ties in the statistical packages SAS and R (called Fail before Jump in this thesis), and proposes alternative methods (Random and Equally Weighted) for Type 2 ties. A simulation study and an analysis of data sets from real research both suggest that the Random and Equally Weighted methods perform better than the other two. The effect of the percentages of Type 1 and Type 2 ties on these methods for handling both types of ties is also discussed. / NSERC
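For context, the quantity at issue is Cox's partial likelihood with time-varying covariates; in the standard form below, a Type 2 tie makes the covariate value x_j(t_i) ambiguous whenever a covariate jump coincides with an event time:

```latex
% D event times t_1 < ... < t_D, risk set R(t_i), covariate paths x_j(t):
L(\beta) = \prod_{i=1}^{D}
  \frac{\exp\{\beta^\top x_{(i)}(t_i)\}}
       {\sum_{j \in R(t_i)} \exp\{\beta^\top x_j(t_i)\}}.
```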
248

Statistical inference with randomized nomination sampling

Nourmohammadi, Mohammad 08 1900 (has links)
In this dissertation, we develop several new inference procedures based on randomized nomination sampling (RNS). The first problem we consider is constructing distribution-free confidence intervals for quantiles in finite populations; the algorithms required to compute the coverage probabilities of the proposed confidence intervals are presented. The second problem is constructing nonparametric confidence intervals for infinite populations. We describe the construction procedures and compare the RNS-based confidence intervals, under both perfect and imperfect ranking scenarios, with their simple random sampling (SRS) counterparts, and we make recommendations for choosing the design parameters so as to achieve shorter confidence intervals than under SRS. The third problem is the construction of tolerance intervals using the RNS technique. We describe procedures for constructing one- and two-sided RNS tolerance intervals and investigate the sample sizes required to achieve tolerance intervals that contain the specified proportions of the underlying population. We also investigate the efficiency of RNS-based tolerance intervals relative to the corresponding SRS-based intervals. A new method for estimating ranking error probabilities is proposed. The final problem is parametric inference based on RNS. We introduce the different data types associated with the different situations one might encounter under the RNS design and provide the maximum likelihood (ML) and method of moments (MM) estimators of the parameters in two classes of distributions: the proportional hazard rate (PHR) and proportional reverse hazard rate (PRHR) models.
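The two model classes named at the end have compact standard definitions, reproduced here for reference (the dissertation's own parameterization may differ):

```latex
% Baseline CDF F_0 with survival function \bar{F}_0 = 1 - F_0, \theta > 0:
\text{PHR:}\quad \bar{F}(t;\theta) = [\bar{F}_0(t)]^{\theta},
\qquad\qquad
\text{PRHR:}\quad F(t;\theta) = [F_0(t)]^{\theta}.
```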
249

Monte Carlo simulation of gas-filled radiation detectors

Kundu, Ashoke January 2000 (has links)
No description available.
250

Models for Ordered Categorical Pharmacodynamic Data

Zingmark, Per-Henrik January 2005 (has links)
In drug development, clinical trials are designed to investigate whether a new treatment is safe and has the desired effect on the disease in the target patient population. Categorical endpoints, for example different ranking scales or gradings of adverse events, are commonly used to measure effects in the trials. Pharmacokinetic/pharmacodynamic (PK/PD) models describe the plasma concentration of a drug over time and its relationship to the studied effect; they are used both in drug development and in discussions with drug regulatory authorities. Methods for incorporating ordered categorical data in PK/PD models were studied using a non-linear mixed effects modelling approach as implemented in the software NONMEM. The traditionally used proportional odds model was applied to a 6-grade sedation scale in acute stroke patients and to a T-cell receptor expression in patients with multiple sclerosis, where the results were also compared with an analysis of the data on a continuous scale. Modifications of the proportional odds model were developed to enable analysis of a spontaneously reported side-effect and of situations where the scale used is heterogeneous or where the drug affects the different scores in the scale in a non-proportional way. The new models were compared with the proportional odds model and were shown to give better predictive performance in the analyzed situations. The results in this thesis show that categorical data obtained in clinical trials with different designs and different categorical endpoints can successfully be incorporated in PK/PD models. The models developed can also be applied to analyses of other ordered categorical scales than those presented.
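For reference, a mixed-effects proportional odds model of the kind described, in a typical NONMEM-style parameterization (the thesis's exact drug-effect function f(C) is not specified here):

```latex
% Ordered score Y with categories m = 1..M, drug effect f(C) driven by
% plasma concentration C, subject-level random effect \eta ~ N(0, \omega^2):
\mathrm{logit}\, P(Y \ge m \mid \eta) = \alpha_m + f(C) + \eta,
\quad m = 2, \dots, M, \qquad \alpha_2 \ge \alpha_3 \ge \dots \ge \alpha_M.
```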
