1

Two-stage adaptive designs in early phase clinical trials

Xu, Jiajing, 徐佳静 January 2013
The primary goal of clinical trials is to collect sufficient scientific evidence about a new intervention. Despite the widespread use of equal randomization in clinical trials, response-adaptive randomization has attracted considerable interest on ethical grounds. This thesis studies delayed-response problems and innovative designs for cytostatic agents in oncology clinical trials.

Response-adaptive randomization is typically preceded by a run-in of equal randomization, yet it is often unclear how many subjects this pre-phase requires, and in practice an arbitrary number of patients is allocated to it. Moreover, real-time response-adaptive randomization requires patient responses to be available shortly after treatment, whereas clinical responses, such as tumor shrinkage, may take a long time to manifest. In the first part of the thesis, a nonparametric fractional model and a parametric optimal allocation scheme are developed to tackle the common problem caused by delayed responses. In addition, a two-stage procedure is investigated that balances power against the number of responders by applying a likelihood ratio test before skewing the allocation probability toward the better treatment. The operating characteristics of the two-stage designs are evaluated through extensive simulation studies, and an HIV clinical trial is used for illustration. Numerical results show that the proposed method satisfactorily resolves the issues arising from response-adaptive randomization with delayed responses.

In phase I clinical trials with cytostatic agents, both toxicity and efficacy endpoints should be taken into consideration when identifying the optimal biological dose (OBD). In the second part of the thesis, a two-stage Bayesian mixture modeling approach is developed, which first locates the maximum tolerated dose (MTD) through a mixture of parametric and nonparametric models, and then determines the most efficacious dose using Bayesian adaptive randomization among multiple candidate models. In the first stage, searching for the MTD, a beta-binomial model is combined with a probit model in a mixture modeling approach, and decisions are based on whichever model better fits the toxicity data, with model adequacy measured by the deviance information criterion and the posterior model probability. In the second stage, searching for the OBD, the assumption that efficacy increases monotonically with dose is abandoned; instead, all the possibilities that each dose could have the highest efficacy are enumerated, so that the dose-efficacy curve can be increasing, decreasing, or umbrella-shaped. Simulation studies show the advantages of the proposed mixture modeling approach for pinpointing the MTD and OBD, and demonstrate its satisfactory performance with cytostatic agents. / Statistics and Actuarial Science / Master of Philosophy
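For readers unfamiliar with response-adaptive randomization, the sketch below illustrates the general idea of skewing the allocation probability toward the better-performing arm as data accumulate. It uses a generic Bayesian rule in the spirit of Thall and Wathen with beta-binomial updating; it is only an illustration and does not reproduce the two-stage, delayed-response machinery developed in the thesis.

    import numpy as np

    def bayesian_adaptive_allocation(succ_a, n_a, succ_b, n_b, c=0.5,
                                     n_draws=20_000, seed=0):
        """Allocation probability to arm A, skewed toward the better arm.

        succ_*, n_*: responders and patients observed so far on each arm.
        c: tuning exponent (c=0 gives equal randomization; larger c skews harder).
        """
        rng = np.random.default_rng(seed)
        # Beta(1, 1) priors updated with the observed responses
        pa = rng.beta(1 + succ_a, 1 + n_a - succ_a, n_draws)
        pb = rng.beta(1 + succ_b, 1 + n_b - succ_b, n_draws)
        prob_a_better = (pa > pb).mean()        # P(arm A is better | data)
        num = prob_a_better ** c
        return num / (num + (1 - prob_a_better) ** c)

    # After an equal-randomization run-in: 12/20 responders on A, 8/20 on B
    print(bayesian_adaptive_allocation(12, 20, 8, 20, c=0.5))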
2

Statistical methodology in clinical trial: a review of development and application

黃俊華, Wong, Chun Wa. January 1989
Statistics / Master of Social Sciences
3

How large should a clinical trial be?

Pezeshk, Hamid January 2000 (has links)
One of the most important questions in the planning of medical experiments to assess the performance of new drugs or treatments is how big to make the trial. The problem, in its statistical formulation, is to determine the optimal size of a trial. The most frequently used method of determining sample size in clinical trials is based on the required p-value and the required power of the trial for a specified treatment effect. In contrast to the Bayesian decision-theoretic approach, there is no explicit balancing of the cost of a possible increase in the size of the trial against the benefit of the more accurate information it would give.

In this work we consider a fully Bayesian (or decision-theoretic) approach to sample size determination in which the number of subsequent users of the therapy under investigation, and hence also the total benefit resulting from the trial, depend on the strength of the evidence provided by the trial. Our procedure differs from the usual Bayesian decision-theory methodology, which assumes a single decision maker, by recognizing the existence of three decision makers, namely: the pharmaceutical company conducting the trial, which decides on its size; the regulator, whose approval is necessary for the drug to be licensed for sale; and the public at large, who determine the ultimate usage. Moreover, we model the subsequent usage by plausible assumptions for actual behaviour, rather than assuming that this represents decisions which are in some sense optimal. For this reason the procedure may be called "Behavioural Bayes" (or BeBay for short), the word Bayes referring to the optimization of the sample size.

In the BeBay methodology the total expected benefit from carrying out the trial, minus the cost of the trial, is maximized. For any additional sales to occur as a result of the trial it must provide sufficient evidence both to convince the regulator to issue the necessary licence and to convince potential users that they should use the new treatment. The necessary evidence takes the form of a high posterior probability that the new treatment achieves a clinically relevant improvement over the alternative treatment. The regulator is assumed to start from a more sceptical and less well-informed view of the likely performance of the treatment than the company carrying out the trial. The total benefit from a conclusively favourable trial is assessed on the basis of the size of the potential market, aggregated over the anticipated lifetime of the product, with appropriate discounting for future years.
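The following sketch illustrates the decision-theoretic idea of choosing the trial size by maximizing expected benefit minus cost. The priors, market value, cost and the simple normal-normal model are hypothetical stand-ins, not the BeBay model itself; the point is only that expected net benefit can be estimated by simulation for each candidate size and then maximized.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # Illustrative constants (hypothetical, not from the thesis)
    delta = 0.2                          # clinically relevant improvement
    prior_mean, prior_sd = 0.3, 0.2      # company's prior on the treatment effect
    sceptic_mean, sceptic_sd = 0.0, 0.1  # regulator's more sceptical prior
    sigma = 1.0                          # known response standard deviation
    threshold = 0.975                    # posterior probability needed to convince
    market_value = 1e6                   # discounted benefit if the trial is conclusive
    cost_per_patient = 500.0

    def expected_net_benefit(n, n_sim=2000):
        """Monte Carlo estimate of expected benefit minus trial cost for size n."""
        theta = rng.normal(prior_mean, prior_sd, n_sim)     # effects from company prior
        xbar = rng.normal(theta, sigma / np.sqrt(n))        # observed mean difference
        # Regulator's posterior for theta given the data (normal-normal update)
        post_prec = 1 / sceptic_sd**2 + n / sigma**2
        post_mean = (sceptic_mean / sceptic_sd**2 + n * xbar / sigma**2) / post_prec
        post_sd = np.sqrt(1 / post_prec)
        p_improve = 1 - norm.cdf(delta, loc=post_mean, scale=post_sd)
        conclusive = p_improve > threshold
        return market_value * conclusive.mean() - cost_per_patient * n

    sizes = np.arange(50, 2001, 50)
    benefits = [expected_net_benefit(n) for n in sizes]
    print("approximately optimal trial size:", sizes[int(np.argmax(benefits))])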
4

Sample size planning for clinical trials with repeated measurements

Suen, Wai-sing, Alan., 孫偉盛. January 2004
Medical Sciences / Master of Medical Sciences
5

Fully sequential monitoring of longitudinal trials using sequential ranks, with applications to an orthodontics study

Bogowicz, Paul Joseph Unknown Date
No description available.
6

Randomization in a two armed clinical trial: an overview of different randomization techniques

Batidzirai, Jesca Mercy January 2011
Randomization is the key element of any sensible clinical trial. It is the only way to ensure that patients are allocated to the treatment groups without bias and that the groups are closely comparable before the start of the trial. The randomization scheme used to allocate patients to the treatment groups plays a central role in achieving this goal. This study uses SAS simulations to carry out categorical data analysis and to compare the two main classes of randomization scheme, unrestricted and restricted randomization, represented here by simple randomization and the minimization method respectively, in dental studies with small samples. Results show that minimization produces almost equally sized treatment groups, whereas simple randomization is weak at balancing prognostic factors; nevertheless, simple randomization can by chance produce balanced groups even in small samples. Statistical power is also higher with minimization than with simple randomization, but larger samples may be needed to boost the power.
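For illustration, a minimal sketch of the minimization method (in the style of Pocock and Simon) is given below. The factor names, the two-arm setting and the biased-coin assignment probability are illustrative assumptions, not taken from the thesis.

    import random

    def minimization_assign(patients, factors, prob_best=0.8, seed=1):
        """Pocock-Simon style minimization for a two-arm trial (illustrative sketch).

        patients: list of dicts mapping factor name -> level for each new patient.
        factors:  prognostic factor names to balance on.
        Returns the list of assigned arms ('A' or 'B').
        """
        rng = random.Random(seed)
        # counts[arm][factor][level] = number of patients already assigned
        counts = {arm: {f: {} for f in factors} for arm in ("A", "B")}
        assignments = []
        for p in patients:
            # Imbalance score if the new patient were placed on each arm
            scores = {}
            for arm in ("A", "B"):
                other = "B" if arm == "A" else "A"
                score = 0
                for f in factors:
                    level = p[f]
                    n_arm = counts[arm][f].get(level, 0) + 1   # hypothetical placement
                    n_other = counts[other][f].get(level, 0)
                    score += abs(n_arm - n_other)
                scores[arm] = score
            # Prefer the arm with lower imbalance, with probability prob_best
            if scores["A"] == scores["B"]:
                arm = rng.choice(("A", "B"))
            else:
                best = "A" if scores["A"] < scores["B"] else "B"
                arm = best if rng.random() < prob_best else ("B" if best == "A" else "A")
            for f in factors:
                counts[arm][f][p[f]] = counts[arm][f].get(p[f], 0) + 1
            assignments.append(arm)
        return assignments

    # Example: balance on sex and age group (hypothetical factors)
    patients = [{"sex": "F", "age": "<40"}, {"sex": "M", "age": ">=40"},
                {"sex": "F", "age": ">=40"}, {"sex": "F", "age": "<40"}]
    print(minimization_assign(patients, ["sex", "age"]))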
7

Escalation with overdose control for phase I drug-combination trials

Shi, Yun, 施昀 January 2013
The escalation with overdose control (EWOC) method is a popular model-based dose-finding design for phase I clinical trials. Dose finding for combined drugs has grown rapidly in oncology drug development. A two-dimensional EWOC design is proposed for dose finding with two agents in combination, based on a four-parameter logistic regression model. During trial conduct, the posterior distribution of the maximum tolerated dose (MTD) combination is updated continuously in order to find the appropriate dose combination for each cohort of patients. The probability that the next dose combination exceeds the MTD combination is controlled by a feasibility bound, a prespecified quantile of the MTD distribution, so as to reduce the possibility of overdosing. Dose escalation, de-escalation, or staying at the same doses is determined by searching for the MTD combination along the rows and columns of the two-drug combination matrix. Simulation studies are conducted to examine the performance of the design under various practical scenarios, and the design is illustrated with a trial example. / Statistics and Actuarial Science / Master of Philosophy
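The overdose-control idea can be summarized in a few lines: the next dose is the highest available dose not exceeding the alpha-quantile of the current MTD posterior, so that the posterior probability of overdosing is at most alpha. The single-agent sketch below is only an illustration, with a stand-in posterior; the thesis extends this to two-drug combinations with a four-parameter logistic model.

    import numpy as np

    def next_dose_ewoc(mtd_posterior_samples, dose_grid, alpha=0.25):
        """Pick the next dose so that P(next dose > MTD) <= alpha (single-agent sketch).

        mtd_posterior_samples: draws from the current posterior of the MTD.
        dose_grid: available dose levels, in increasing order.
        alpha: feasibility bound (overdose control parameter).
        """
        # The alpha-quantile of the MTD posterior is the largest dose value
        # whose overdose probability is at most alpha.
        target = np.quantile(mtd_posterior_samples, alpha)
        feasible = [d for d in dose_grid if d <= target]
        return feasible[-1] if feasible else dose_grid[0]

    # Example with hypothetical posterior draws of the MTD
    rng = np.random.default_rng(42)
    mtd_draws = rng.gamma(shape=4.0, scale=25.0, size=10_000)   # stand-in posterior
    print(next_dose_ewoc(mtd_draws, dose_grid=[20, 40, 60, 80, 100], alpha=0.25))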
8

Outcome-dependent randomisation schemes for clinical trials with fluctuations in patient characteristics

Coad, D. Stephen January 1989
A clinical trial is considered in which two treatments are to be compared. Treatment allocation schemes are usually designed to assign approximately equal numbers of patients to each treatment. The purpose of this thesis is to investigate the efficiency of estimation and the effect of instability in the response variable for allocation schemes aimed at reducing the number of patients who receive the inferior treatment.

The general background to outcome-dependent allocation schemes is described in Chapter 1, together with a discussion of the ethical and practical problems associated with these methods and brief details of actual trials conducted. In Chapter 2, the response to treatment is Bernoulli and the trial size is fixed. A simple method for estimating the treatment difference is proposed. Simulation results for a selection of allocation schemes indicate that the effect of instability upon the performance of the schemes can sometimes be substantial.

A decision-theory approach is taken in Chapter 3. The trial is conducted in a number of stages and the interests of both the patients in the trial and those who will be treated after the end of the trial are taken into account. Using results for conditional normal distributions, analytical results are derived for estimation of the treatment difference for both a stable and an unstable normal response variable under three allocation schemes. Some results for estimation are also given for other responses.

The problem of sequential testing is addressed in Chapter 4. With instability in the response variable, it is shown that the error probabilities of the test for a stable response variable can be approximately preserved by using a modified test statistic with appropriately widened stopping boundaries. In addition, some recent results for estimation following sequential tests are outlined. Finally, the main conclusions of the thesis are highlighted in Chapter 5.
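As one concrete example of the kind of outcome-dependent allocation scheme discussed here, the sketch below implements the classical randomized play-the-winner urn. The success probabilities, urn parameters and trial length are illustrative assumptions; the thesis's own schemes and instability models are not reproduced.

    import random

    def randomized_play_the_winner(responses, n_patients=100, alpha=1, beta=1, seed=0):
        """Randomized play-the-winner urn RPW(alpha, beta), illustrative only.

        responses: dict arm -> function returning True for a success, called
                   when that arm is assigned (here, simulated Bernoulli outcomes).
        """
        rng = random.Random(seed)
        urn = {"A": alpha, "B": alpha}          # initial balls per treatment
        assignments = []
        for _ in range(n_patients):
            # Draw an arm with probability proportional to its balls in the urn
            total = urn["A"] + urn["B"]
            arm = "A" if rng.random() < urn["A"] / total else "B"
            success = responses[arm]()
            # A success adds balls for the same arm; a failure rewards the other arm
            if success:
                urn[arm] += beta
            else:
                urn["B" if arm == "A" else "A"] += beta
            assignments.append(arm)
        return assignments

    # Example: arm A has success probability 0.7, arm B 0.4
    rng_out = random.Random(1)
    resp = {"A": lambda: rng_out.random() < 0.7, "B": lambda: rng_out.random() < 0.4}
    alloc = randomized_play_the_winner(resp)
    print("proportion assigned to A:", alloc.count("A") / len(alloc))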
9

Non-inferiority testing for correlated ordinal categorical data with misclassification.

January 2011
When a new treatment is introduced, it is likely to offer benefits such as fewer side effects, greater convenience of use, or lower cost in terms of money and time. The more appropriate research question is therefore whether the new treatment is non-inferior or equivalent to, but not necessarily superior to, the reference treatment. Consequently, non-inferiority and equivalence tests are widely used in medical research; they aim to show that the difference in effect between the two treatments most likely lies within a tolerance interval with pre-defined lower or upper bounds. In this thesis, we consider non-inferiority tests when the data are ordinal categorical. In particular, we are interested in correlated data. We develop non-inferiority testing procedures for data obtained under paired and three-armed designs, taking advantage of a latent normal distribution approach to model the ordinal categorical data.

Moreover, misclassification is frequently encountered in collecting ordinal categorical data, so we also consider non-inferiority tests based on data subject to misclassification. We explore two approaches: the first applies when the misclassification probabilities are known or can be calibrated; the second deals with the case where partially validated data provide the information on misclassification. The proposed approaches have wide applications that are not confined to tests in medical research, and a substantive study is designed to illustrate their practicality and applicability.

Keywords: Non-inferiority Test, Bootstrap, Misclassification, Partially Validated Data. / Han, Yuanyuan. Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. Adviser: Poon Wai-Yin. Includes bibliographical references (leaves 114-117). Abstract also in Chinese.
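To make the non-inferiority logic concrete, the sketch below shows a generic Wald-type non-inferiority test for two independent proportions with a fixed margin. It illustrates only the hypothesis framework; the thesis itself handles correlated ordinal data with misclassification via a latent normal model and bootstrap procedures, which are not reproduced here.

    import numpy as np
    from scipy.stats import norm

    def noninferiority_two_proportions(x_new, n_new, x_ref, n_ref,
                                       margin=0.10, alpha=0.05):
        """Wald-type non-inferiority test for two independent proportions.

        H0: p_new - p_ref <= -margin   vs   H1: p_new - p_ref > -margin.
        """
        p_new, p_ref = x_new / n_new, x_ref / n_ref
        diff = p_new - p_ref
        se = np.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
        z = (diff + margin) / se
        p_value = 1 - norm.cdf(z)
        lower = diff - norm.ppf(1 - alpha) * se   # one-sided lower confidence bound
        return {"diff": diff, "z": z, "p_value": p_value,
                "noninferior": lower > -margin}

    # Hypothetical counts: 86/100 responses on the new treatment, 90/100 on reference
    print(noninferiority_two_proportions(x_new=86, n_new=100, x_ref=90, n_ref=100))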
10

Dealing with paucity of data in meta-analysis of binary outcomes.

January 2006
A clinical trial may have no subject (0%) or every subject (100%) developing the outcome of concern in either of the two comparison groups. This causes a zero cell in the four-cell (2x2) table of a trial with a binary outcome and makes it impossible to estimate the odds ratio, a commonly used effect measure. A usual way to deal with this problem is to add 0.5 to each of the four cells in the 2x2 table, known as Haldane's approximation. In meta-analysis, Haldane's approximation can also be applied, and two approaches are possible: add 0.5 only to the trials with a zero cell, or to all the trials in the meta-analysis. Little is known about which approach is better when used in combination with different definitions of the odds ratio: the ordinary odds ratio, Peto's odds ratio and the Mantel-Haenszel odds ratio. In addition, the odds ratio needs to be converted to a risk difference to aid decision making. Peto's odds ratio is preferable in some situations, and the risk difference is conventionally derived by treating Peto's odds ratio as an ordinary odds ratio; it is unclear whether this is appropriate.

Objectives. (1) We conducted a simulation study to examine the validity of Haldane's approximation as applied to meta-analysis, and (2) we derived and evaluated a new method to convert Peto's odds ratio to the risk difference, and compared it with the conventional conversion method.

Methods. For studying the validity of Haldane's approximation, we defined 361 types of meta-analysis. Each type is determined by a unique combination of the risks in the two compared groups and thus provides a unique true odds ratio. The number of trials in a meta-analysis is set at 5, 10 and 50, and the sample size of each trial varies at random but is made sufficiently small that at least one trial in a meta-analysis will have a zero cell. The number of outcome events in a comparison group of a trial is generated at random according to the pre-determined risk for that group. One thousand homogeneous meta-analyses and one thousand heterogeneous meta-analyses are simulated for each type of meta-analysis. Two Haldane's approximation approaches, in addition to no approximation, are evaluated for the three definitions of the odds ratio. Thus, nine combined odds ratios are estimated for each type of meta-analysis and are all compared with the true odds ratio. The percentage of meta-analyses with the 95% confidence interval including the true odds ratio is estimated as the main index of the validity of the correction methods. A new formula is derived for converting Peto's odds ratio to the risk difference, and the resulting risk difference is compared with the true risk difference and with the risk difference obtained by taking Peto's odds ratio as the ordinary odds ratio. All simulations and analyses were conducted in the Statistical Analysis Software (SAS).

Results. Using the true ordinary odds ratio as the reference, the percentage of meta-analyses with the confidence interval containing the truth was lowest (from 23.2% to 53.6%) when Haldane's approximation was applied to all the trials, regardless of the definition of the odds ratio used. The percentage was highest with the Mantel-Haenszel odds ratio (95.0%) with no approximation applied. The validity of the correction methods increases as the true odds ratio gets close to one, as the number of trials in a meta-analysis decreases, as the heterogeneity decreases, and as the trial size increases. The validity is relatively close (varying from 86.8% to 95.8%) when the true odds ratio is between 1/3 and 3, for all combinations of correction method and definition of the odds ratio. However, Peto's odds ratio performed consistently best among the three definitions, regardless of the correction method (varying from 88% to 98.7%), when the true Peto's odds ratio is used as the truth for comparison. The proposed new formula performed better than the conventional conversion method: the mean relative difference between the true risk difference and the risk difference obtained from the new formula is -0.006%, while that for the conventional method is -10.9%.

Conclusions. The estimated confidence interval of a meta-analysis would mostly exclude the truth if an inappropriate correction method is used to deal with zero cells. Counter-intuitively, the combined result of a meta-analysis becomes worse as the number of included studies grows. The Mantel-Haenszel odds ratio without Haldane's approximation is recommended in general for dealing with sparse data in meta-analysis. The ordinary odds ratio with 0.5 added only to the trials with a zero cell can be used when the trials are heterogeneous and the odds ratio is close to 1. Applying Haldane's approximation to all trials in a meta-analysis should always be avoided. Peto's odds ratio without Haldane's approximation can always be considered, but the new formula should be used for converting Peto's odds ratio to the risk difference.

Tam Wai-san Wilson. "Jan 2006." Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. Adviser: J. L. Tang. Includes bibliographical references (p. 151-157). Abstracts in English and Chinese.
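For concreteness, the sketch below computes an ordinary odds ratio for a single 2x2 table under the two variants of Haldane's approximation discussed above (correcting only tables with a zero cell versus correcting all tables). The example counts are hypothetical.

    import math

    def odds_ratio_with_haldane(a, b, c, d, correct="zero_cells_only"):
        """Ordinary odds ratio for a 2x2 table with Haldane's 0.5 correction.

            a = events in treatment group,   b = non-events in treatment group
            c = events in control group,     d = non-events in control group

        correct = "zero_cells_only": add 0.5 to every cell only if some cell is zero
        correct = "all":             always add 0.5 to every cell
        correct = "none":            no correction (fails when a zero cell is present)
        """
        cells = [a, b, c, d]
        if correct == "all" or (correct == "zero_cells_only" and 0 in cells):
            cells = [x + 0.5 for x in cells]
        a, b, c, d = cells
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        ci = (math.exp(math.log(or_) - 1.96 * se_log_or),
              math.exp(math.log(or_) + 1.96 * se_log_or))
        return or_, ci

    # A trial with a zero cell: 0/20 events on treatment, 4/20 on control
    print(odds_ratio_with_haldane(0, 20, 4, 16))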
