131

Statistical Models for Count Data from Multiple Sclerosis Clinical Trials and their Applications

Rettiganti, Mallikarjuna Rao 17 December 2010 (has links)
No description available.
132

STATISTICAL AND METHODOLOGICAL ISSUES ON COVARIATE ADJUSTMENT IN CLINICAL TRIALS

Chu, Rong 04 1900 (has links)
Background and objectives

We investigate three issues related to the adjustment for baseline covariates in late-phase clinical trials: (1) the analysis of correlated outcomes in multicentre randomized controlled trials (RCTs), (2) the assessment of the probability and implications of prognostic imbalance in RCTs, and (3) the adjustment for baseline confounding in cohort studies.

Methods

Project 1: We investigated the properties of six statistical methods for analyzing continuous outcomes in multicentre RCTs where within-centre clustering was possible. We simulated studies over various intraclass correlation (ICC) values with several centre combinations.

Project 2: We simulated data from RCTs evaluating a binary outcome, varying the risk of the outcome, the effect of the treatment, the power and prevalence of a binary prognostic factor (PF), and the sample size. We compared logistic regression models with and without adjustment for the PF in terms of bias, standard error, confidence interval coverage, and statistical power. A tool to assess the sample size required to control for chance imbalance was proposed.

Project 3: We conducted a prospective cohort study to evaluate the effect of tuberculosis (TB) at the initiation of antiretroviral therapy (ART) on all-cause mortality, using a Cox proportional hazards model on propensity score (PS) matched patients to control for potential confounding. We assessed the robustness of the results using sensitivity analyses.

Results and conclusions

Project 1: All six methods produce unbiased estimates of treatment effect in multicentre trials. Adjusting for centre as a random intercept leads to the most efficient treatment effect estimation and hence should be used in the presence of clustering.

Project 2: The probability of prognostic imbalance in small trials can be substantial. Covariate adjustment improves estimation accuracy and statistical power, and hence should be performed when strong PFs are observed.

Project 3: After controlling for the important confounding variables, HIV patients who had TB at the initiation of ART have a moderately increased risk of overall mortality. / Doctor of Philosophy (PhD)
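Project 2's comparison can be reproduced in miniature. The sketch below is a hypothetical simulation (all parameter values are assumptions, not taken from the thesis) of the attenuation an unadjusted logistic regression suffers relative to a model adjusted for a strong binary prognostic factor:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_sims = 100, 500            # small trial, many replicates (assumed values)
beta_trt, beta_pf = 0.5, 1.5    # true conditional log-odds effects (assumed)

est_unadj, est_adj = [], []
for _ in range(n_sims):
    trt = rng.integers(0, 2, n)                  # 1:1 randomization
    pf = rng.binomial(1, 0.3, n)                 # binary PF, prevalence 0.3
    p = 1 / (1 + np.exp(-(-1.0 + beta_trt * trt + beta_pf * pf)))
    y = rng.binomial(1, p)
    try:
        X1 = sm.add_constant(trt.astype(float))
        X2 = sm.add_constant(np.column_stack([trt, pf]).astype(float))
        est_unadj.append(sm.Logit(y, X1).fit(disp=0).params[1])
        est_adj.append(sm.Logit(y, X2).fit(disp=0).params[1])
    except Exception:                            # skip rare non-converged fits
        continue

# Non-collapsibility: the unadjusted log-odds ratio is attenuated toward zero
print("mean unadjusted:", round(float(np.mean(est_unadj)), 3))
print("mean adjusted:  ", round(float(np.mean(est_adj)), 3))
```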
133

Methodological Issues in Design and Analysis of Studies with Correlated Data in Health Research

Ma, Jinhui 04 1900 (has links)
Correlated data with complex association structures arise from longitudinal studies and cluster randomized trials. However, some methodological challenges in the design and analysis of such studies or trials have not been overcome. In this thesis, we address three of these challenges.

1) Power analysis for population-based longitudinal studies investigating gene-environment interaction effects on chronic disease: for longitudinal studies investigating gene-environment interactions in disease susceptibility and progression, rigorous statistical power estimation is crucial to ensure that such studies are scientifically useful and cost-effective, since human genome epidemiology is expensive. However, conventional sample size calculations for longitudinal studies can seriously overestimate statistical power because they overlook measurement error, unmeasured etiological determinants, and competing events that can impede the occurrence of the event of interest.

2) Comparing the performance of different multiple imputation strategies for missing binary outcomes in cluster randomized trials: though researchers have proposed various strategies to handle missing binary outcomes in cluster randomized trials (CRTs), comprehensive guidelines on selecting the most appropriate or optimal strategy are not available in the literature.

3) Comparison of population-averaged and cluster-specific models for the analysis of cluster randomized trials with missing binary outcomes: both population-averaged and cluster-specific models are commonly used for analyzing binary outcomes in CRTs. However, little attention has been paid to their accuracy and efficiency when analyzing data with missing outcomes.

The objective of this thesis is to provide researchers with recommendations and guidance for future research in handling the above issues. / Doctor of Philosophy (PhD)
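For the third challenge, the contrast between population-averaged and cluster-specific analyses can be sketched as follows. This is an illustrative simulation, not the thesis' own code; the cluster sizes, effect sizes, and use of statsmodels' GEE are all assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
k, m = 20, 30                                # clusters, cluster size (assumed)
cluster = np.repeat(np.arange(k), m)
trt = np.repeat(rng.integers(0, 2, k), m)    # cluster-level randomization
u = np.repeat(rng.normal(0, 0.8, k), m)      # random cluster effects
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.7 * trt + u))))

# Population-averaged analysis: GEE with an exchangeable working correlation
X = sm.add_constant(trt.astype(float))
gee = sm.GEE(y, X, groups=cluster, family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print("marginal treatment log-odds ratio:", round(float(gee.params[1]), 3))
# A cluster-specific analysis would instead fit a random-intercept logistic
# model; its (conditional) treatment effect is typically larger in magnitude
# than the marginal GEE estimate when the cluster variance is non-trivial.
```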
134

IDENTIFYING AND OVERCOMING OBSTACLES TO SAMPLE SIZE AND POWER CALCULATIONS IN FMRI STUDIES

Guo, Qing 25 September 2014 (has links)
Functional magnetic resonance imaging (fMRI) is a popular technique for studying brain function and neural networks. fMRI studies are often characterized by small sample sizes and rarely consider statistical power when setting a sample size. This can lead to data dredging, and hence false positive findings. With the widespread use of fMRI studies in clinical disorders, the vulnerability of participants points to an ethical imperative for reliable results, so as to uphold the promise typically made to participants that the study results will help understand their conditions. While important, power-based sample size calculations can be challenging. The majority of fMRI studies are observational, i.e., they are not designed to randomize participants to test the efficacy and safety of a therapeutic intervention. My PhD thesis therefore addresses two objectives: first, to identify potential obstacles to implementing sample size calculations, and second, to provide solutions to these obstacles in observational clinical fMRI studies. This thesis contains three projects.

Implementing a power-based sample size calculation requires specifying effect sizes and variances. Typically in health research, these input parameters are estimated from the results of previous studies; however, they often seem to be lacking in the fMRI literature. Project 1 addresses the first objective through a systematic review of 100 fMRI studies with clinical participants, examining how often the observed input parameters were reported in the results section so as to help design a new well-powered study. The results confirmed that both input estimates and sample size calculations were rarely reported. The omission of observed inputs in the results section is an impediment to carrying out sample size calculations for future studies.

Uncertainty in input parameters is typically dealt with using sensitivity analysis; however, this can result in a wide range of candidate sample sizes, leading to difficulty in setting a sample size. Project 2 suggests a cost-efficiency approach as a short-term strategy to deal with the uncertainty in input data and, through an example, illustrates how it narrowed the range of candidate sample sizes on the basis of maximizing return on investment.

Routine reporting of input estimates can thus facilitate sample size calculations for future studies. Moreover, increasing the overall quality of reporting in fMRI studies helps reduce bias in reported input estimates and hence helps ensure rigorous sample size calculations in the long run. Project 3 is a systematic review of the overall reporting quality of observational clinical fMRI studies. It highlights under-reported areas for improvement and suggests creating a shortened checklist of essential details, adapted from the guidelines proposed by Poldrack et al. (2008), to accommodate the strict word limits on reports of observational clinical fMRI studies.

In conclusion, this PhD thesis facilitates future sample size and power calculations in the fMRI literature by identifying impediments, by providing a short-term solution to overcome them using a cost-efficiency approach in conjunction with conventional methods, and by suggesting a long-term strategy to ensure rigorous sample size calculations through improving the overall quality of reporting. / Doctor of Philosophy (PhD)
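As a concrete illustration of the sensitivity-analysis problem Project 2 addresses, a minimal power-based sample size calculation over a range of plausible effect sizes (assumed values, via statsmodels) shows how wide the range of candidate sample sizes can be:

```python
from statsmodels.stats.power import TTestIndPower

# Candidate sample sizes per group for a two-sample comparison across
# plausible standardized effect sizes (assumed values); the spread of the
# answers is exactly the difficulty the cost-efficiency approach targets.
analysis = TTestIndPower()
for d in (0.3, 0.5, 0.8):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"effect size d={d}: n per group ~ {n:.0f}")
```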
135

Contributions to Profile Monitoring and Multivariate Statistical Process Control

Williams, James Dickson 14 December 2004 (has links)
The content of this dissertation is divided into two main topics: 1) nonlinear profile monitoring and 2) an improved approximate distribution for the T² statistic based on the successive differences covariance matrix estimator.

Part 1: Nonlinear Profile Monitoring. In an increasing number of cases the quality of a product or process cannot adequately be represented by the distribution of a univariate quality variable or the multivariate distribution of a vector of quality variables. Rather, a series of measurements is taken across some continuum, such as time or space, to create a profile. The profile determines the product quality at that sampling period. We propose Phase I methods to analyze profiles in a baseline dataset where the profiles can be modeled through either a parametric nonlinear regression function or a nonparametric regression function. We illustrate our methods using data from Walker and Wright (2002) and dose-response data from DuPont Crop Protection.

Part 2: Approximate Distribution of T². Although the T² statistic based on the successive differences estimator has been shown to be effective in detecting a shift in the mean vector (Sullivan and Woodall, 1996; Vargas, 2003), the exact distribution of this statistic is unknown. An accurate upper control limit (UCL) for the T² chart based on this statistic depends on knowing its distribution. Two approximate distributions have been proposed in the literature. We demonstrate the inadequacy of these two approximations and derive useful properties of this statistic. We give an improved approximate distribution and recommendations for its use. / Ph. D.
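A minimal sketch of the Part 2 statistic, assuming the standard successive-differences estimator of Sullivan and Woodall (1996); the data and the mean shift are simulated purely for illustration:

```python
import numpy as np

def t2_successive_differences(X):
    """Hotelling-type T² for each observation, using the successive
    differences covariance estimator S_D = (1/(2(n-1))) * sum_i d_i d_i',
    where d_i = x_{i+1} - x_i."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    D = np.diff(X, axis=0)                 # successive differences, (n-1) x p
    S_D = D.T @ D / (2.0 * (n - 1))
    centered = X - X.mean(axis=0)
    S_inv = np.linalg.inv(S_D)
    return np.einsum("ij,jk,ik->i", centered, S_inv, centered)

# Example: 50 bivariate observations with a sustained shift halfway through
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))
X[25:] += 1.5                              # step shift in the mean vector
t2 = t2_successive_differences(X)
print(t2.round(2))                         # values rise after observation 25
```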
136

Automated Tactile Sensing for Quality Control of Locks Using Machine Learning

Andersson, Tim January 2024 (has links)
This thesis delves into the use of Artificial Intelligence (AI) for quality control in manufacturing systems, with a particular focus on anomaly detection through the analysis of torque measurements in rotating mechanical systems. The research specifically examines the effectiveness of torque measurements in the quality control of locks, challenging the traditional method that relies on the human tactile sense for detecting mechanical anomalies. This conventional approach, while widely used, has been found to yield inconsistent results and poses physical strain on operators. A key aspect of this study involves conducting experiments on locks using torque measurements to identify mechanical anomalies. This method represents a shift from the subjective and physically demanding practice of manually testing each lock. The research aims to demonstrate that an automated, AI-driven approach can offer more consistent and reliable results, thereby improving overall product quality. The development of a machine learning model for this purpose starts with the collection of training data, a process that can be costly and disruptive to the normal workflow. This thesis therefore also investigates strategies for predicting and minimizing the sample size used for training. Additionally, it addresses the critical need for trustworthiness in AI systems used for final quality control. The research explores how to utilize machine learning models that are not only effective in detecting anomalies but also offer a level of interpretability, avoiding the pitfalls of black-box AI models. Overall, this thesis contributes to advancing automated quality control by exploring state-of-the-art machine learning algorithms for mechanical fault detection, focusing on sample size prediction and minimization as well as model interpretability. To the best of the author's knowledge, it is the first study to evaluate an AI-driven solution for the quality control of mechanical locks, marking an innovation in the field.
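A hedged sketch of the kind of pipeline the thesis evaluates: hand-crafted summary features of simulated torque curves feed an interpretable classifier. The curve shapes, feature choices, and model are all illustrative assumptions, not the thesis' actual setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_curve(faulty):
    """Simulated torque-vs-angle curve; a faulty lock shows a snag spike."""
    base = 1.0 + 0.1 * np.sin(np.linspace(0, 4 * np.pi, 200))
    if faulty:
        base[rng.integers(50, 150)] += 0.5
    return base + rng.normal(0, 0.02, 200)

def torque_features(curve):
    """Hand-crafted summary features (hypothetical choices)."""
    return [curve.mean(), curve.std(), curve.max(), np.abs(np.diff(curve)).max()]

labels = np.array([0] * 80 + [1] * 20)          # 20% faulty (assumed rate)
X = np.array([torque_features(make_curve(f)) for f in labels])

# An interpretable model: the fitted coefficients show which torque
# features drive the anomaly decision, unlike a black-box alternative.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("feature coefficients:", clf.coef_.round(2))
```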
137

Clinical studies on enteric fever

Arjyal, Amit January 2014 (has links)
I performed two randomised controlled trials (RCTs) to determine the best treatments for enteric fever in Kathmandu, Nepal, an area with a high proportion of nalidixic acid-resistant S. Typhi and S. Paratyphi A isolates. I recruited 844 patients with suspected enteric fever to compare chloramphenicol versus gatifloxacin. 352 patients were culture confirmed. 14/175 patients treated with chloramphenicol and 12/177 patients treated with gatifloxacin experienced treatment failure (HR=0.86, 95% CI 0.40 to 1.86; p=0.70). The median times to fever clearance were 3.95 and 3.90 days, respectively (HR=1.06, 95% CI 0.86 to 1.32; p=0.59). The second RCT compared ofloxacin versus gatifloxacin and recruited 627 patients. Of the 170 patients infected with nalidixic acid-resistant strains, 6/83 in the ofloxacin group and 5/87 in the gatifloxacin group experienced treatment failure (HR=0.81, 95% CI 0.25 to 2.65; p=0.73); the median times to fever clearance were 4.7 and 3.3 days, respectively (HR=1.59, 95% CI 1.16 to 2.18; p=0.004).

I also compared conventional blood culture against an electricity-free culture approach. 66 of 304 patients with suspected enteric fever were positive for S. Typhi or S. Paratyphi A; 55 (85%) isolates were identified by conventional blood culture and 60 (92%) by the experimental method. The percentages of positive and negative agreement for the diagnosis of enteric fever were 90.9% and 96.0%, respectively. This electricity-free blood culture system may have utility in resource-limited settings, or potentially in disaster relief and refugee camps.

I performed a literature review of RCTs of enteric fever, which showed that trial design varied greatly. I was interested in the perspective of patients and what they regarded as cure. 1,481 patients were interviewed at the start of treatment; 860 (58%) reported that the resolution of fever would mean cure to them. At the completion of treatment, 877/1,448 (60.6%) reported that they felt cured when the fever was completely gone. We suggest that fever clearance time is the best surrogate for clinical cure in patients with enteric fever and should be used as the primary outcome in future RCTs for the treatment of enteric fever.
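Fever clearance comparisons of this kind are typically analyzed with time-to-event models. The sketch below uses simulated data and assumes the third-party lifelines library is available; the data-generating values are invented and only loosely echo the reported near-null contrast in the first trial:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # third-party survival library (assumed)

rng = np.random.default_rng(4)
n = 352                             # culture-confirmed patients, first trial

arm = rng.integers(0, 2, n)         # 0: chloramphenicol, 1: gatifloxacin
# Invented fever-clearance times with a near-null treatment contrast,
# loosely echoing the reported medians of ~3.9 days in both arms
time = rng.exponential(4.0 * np.exp(-0.02 * arm))
df = pd.DataFrame({"arm": arm, "time": time, "event": 1})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                 # exp(coef) for 'arm' is the hazard ratio
```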
138

Longitudinal Models for Quantifying Disease and Therapeutic Response in Multiple Sclerosis

Novakovic, Ana M. January 2017 (has links)
Treatment of patients with multiple sclerosis (MS) and development of new therapies have been challenging due to the complexity and slow progression of the disease and the limited sensitivity of available clinical outcomes. Modeling and simulation has become an increasingly important component of drug development and of post-marketing optimization of the use of medication. This thesis focuses on the development of pharmacometric models for characterizing and quantifying the relationships between drug exposure, biomarkers, and clinical endpoints in relapsing-remitting MS (RRMS) following cladribine treatment.

A population pharmacokinetic model of cladribine and its main metabolite, 2-chloroadenine, was developed using plasma and urine data. The renal clearance of cladribine was close to half of total elimination and was found to be a linear function of creatinine clearance (CRCL). Exposure-response models quantified a clear effect of cladribine tablets on absolute lymphocyte count (ALC), burden of disease (BoD), expanded disability status scale (EDSS) and relapse rate (RR) endpoints. Moreover, they gave insight into the disease progression of RRMS.

This thesis further demonstrates how an integrated modeling framework allows an understanding of the interplay between ALC and clinical efficacy endpoints. ALC was found to be a promising predictor of RR, and ALC and BoD were identified as predictors of the EDSS time-course. This enables an understanding of the behavior of the key outcomes necessary for the successful development of long-awaited MS therapies, as well as of how these outcomes correlate with each other. The item response theory (IRT) methodology, an alternative approach to analysing composite scores, made it possible to quantify the information content of the individual EDSS components, which could help improve this scale. IRT also proved capable of increasing the power to detect potential drug effects in clinical trials, which may enhance drug development efficiency.

The developed nonlinear mixed-effects models offer a platform for the quantitative understanding of biomarker/clinical endpoint relationships, disease progression, and therapeutic response in RRMS by integrating a significant amount of knowledge and data.
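A small worked example of the IRT machinery mentioned above, using the standard two-parameter logistic (2PL) model for a binary item. The thesis applies IRT to EDSS components; the item parameters here are illustrative assumptions:

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response model: P(endorse | latent severity theta),
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Fisher information of a binary item: I(theta) = a^2 * P * (1 - P).
# Items are most informative near their difficulty, which is how IRT
# quantifies the information content of individual scale components.
theta = np.linspace(-3, 3, 7)
for a, b in [(1.5, -1.0), (0.8, 0.5)]:       # illustrative item parameters
    P = p_2pl(theta, a, b)
    print(f"a={a}, b={b}: information =", (a**2 * P * (1 - P)).round(2))
```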
139

Optimizacija problema sa stohastičkim ograničenjima tipa jednakosti – kazneni metodi sa promenljivom veličinom uzorka / Optimization of problems with stochastic equality constraints – penalty variable sample size methods

Rožnjik Andrea 24 January 2019 (has links)
The thesis considers a stochastic programming problem with equality constraints, that is, a minimization problem whose constraints are in the form of mathematical expectations. We propose two iterative methods for solving the considered problem. Both procedures, in each iteration, use a sample average function instead of the mathematical expectation function, and employ the advantages of variable sample size methods based on adaptive updating of the sample size. That means the sample size is determined at every iteration using information from the current iteration. Concretely, the current precision of the approximation of the expectation and the quality of the approximation of the solution determine the sample size for the next iteration. Both iterative procedures are based on the line search technique as well as on the quadratic penalty method adapted to the stochastic environment, since the considered problem has constraints. The procedures rely on the same ideas but differ in approach.

In the first approach, the algorithm is created for solving an SAA (sample average approximation) reformulation of the stochastic programming problem, i.e., for solving an approximation of the original problem. The sample is defined before the iterative procedure, so the convergence analysis is deterministic. We show that, under standard assumptions, the proposed algorithm generates a subsequence whose accumulation point is a KKT point of the SAA problem.

The algorithm formed by the second approach solves the stochastic programming problem itself, and therefore the convergence analysis is stochastic. Under the standard assumptions for stochastic optimization, it generates a subsequence whose accumulation point is, almost surely, a KKT point of the original problem.

The proposed algorithms were implemented on the same test problems. The numerical results demonstrate their efficiency compared with procedures in which the sample size update follows a predefined scheme, with the number of function evaluations used as the measure of efficiency. Based on the results over the set of tested problems, adaptive sample size updating can reduce the number of function evaluations for constrained problems as well.

Since the considered problem is deterministic while the proposed procedures are stochastic, the first three chapters of the thesis contain basic notions of deterministic and stochastic optimization, as well as a short review of definitions and theorems from other fields needed to follow the analysis of the original results. The remainder of the thesis presents the proposed algorithms, their convergence analysis, and the numerical implementation.
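The flavor of the SAA penalty approach can be conveyed with a toy one-dimensional problem. This sketch is an illustrative stand-in, not the thesis' algorithm: the constraint, the update thresholds, and the sample-growth rule are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
pool = rng.normal(1.0, 1.0, 5000)       # full sample for the SAA problem

# Toy problem: minimize E[(x - xi)^2] subject to E[2x - xi] = 0, with both
# expectations replaced by sample averages over the first N draws.
def constraint(x, xi):
    return np.mean(2.0 * x - xi)

def penalty(x, xi, mu):
    return np.mean((x - xi) ** 2) + 0.5 * mu * constraint(x, xi) ** 2

x, mu, N = 0.0, 1.0, 50
for _ in range(40):
    xi = pool[:N]
    g = 2.0 * np.mean(x - xi) + 2.0 * mu * constraint(x, xi)   # dh/dx = 2
    step = 0.5
    # Armijo-type backtracking line search on the penalty function
    while penalty(x - step * g, xi, mu) > penalty(x, xi, mu) - 1e-4 * step * g**2:
        step *= 0.5
        if step < 1e-10:
            break
    x -= step * g
    # Adaptive updates: tighten the penalty when infeasibility exceeds the
    # sampling-error scale, and grow the sample as the iterates improve.
    if abs(constraint(x, xi)) > 1.0 / np.sqrt(N):
        mu *= 2.0
    N = min(2 * N, len(pool))

print("approximate solution:", round(x, 3))   # analytic KKT point is x = 0.5
```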
140

Line search methods with variable sample size / Metodi linijskog pretrazivanja sa promenljivom velicinom uzorka

Krklec Jerinkić Nataša 17 January 2014 (has links)
The problem under consideration is an unconstrained optimization problem with the objective function in the form of a mathematical expectation. The expectation is with respect to the random variable that represents the uncertainty. Therefore, the objective function is in fact deterministic. However, finding the analytical form of that objective function can be very difficult or even impossible. This is the reason why the sample average approximation is often used. In order to obtain a reasonably good approximation of the objective function, we have to use a relatively large sample size. We assume that the sample is generated at the beginning of the optimization process, and therefore we can consider this sample average objective function as deterministic. However, applying some deterministic method on that sample average function from the start can be very costly. The number of evaluations of the function under expectation is a common way of measuring the cost of an algorithm. Therefore, methods that vary the sample size throughout the optimization process have been developed. Most of them try to determine the optimal dynamics of increasing the sample size.

The main goal of this thesis is to develop a class of methods that can decrease the cost of an algorithm by decreasing the number of function evaluations. The idea is to decrease the sample size whenever it seems reasonable; roughly speaking, we do not want to impose a large precision, i.e. a large sample size, when we are far away from the solution we search for. A detailed description of the new methods is presented in Chapter 4, together with the convergence analysis. It is shown that the approximate solution is of the same quality as the one obtained by dealing with the full sample from the start.

Another important characteristic of the methods proposed here is the line search technique used for obtaining the subsequent iterates. The idea is to find a suitable direction and to search along it until we obtain a sufficient decrease in the function value. The sufficient decrease is determined through the line search rule. In Chapter 4, that rule is supposed to be monotone, i.e. we impose a strict decrease of the function value. In order to decrease the cost of the algorithm even more and to enlarge the set of suitable search directions, we use nonmonotone line search rules in Chapter 5. Within that chapter, these rules are modified to fit the variable sample size framework. Moreover, the conditions for global convergence and the R-linear rate are presented.

In Chapter 6, numerical results are presented. The test problems are various: some of them are academic and some are real-world problems. The academic problems give us more insight into the behavior of the algorithms, while data from real-world problems test the real applicability of the proposed algorithms. In the first part of that chapter, the focus is on the variable sample size techniques. Different implementations of the proposed algorithm are compared to each other and to other sample schemes as well. The second part is mostly devoted to the comparison of various line search rules combined with different search directions in the variable sample size framework. The overall numerical results show that using the variable sample size can improve the performance of the algorithms significantly, especially when nonmonotone line search rules are used.

The first chapter of this thesis provides the background material for the subsequent chapters. In Chapter 2, basics of nonlinear optimization are presented with the focus on line search, while Chapter 3 deals with the stochastic framework. These chapters provide a review of the relevant known results, while the rest of the thesis represents the original contribution.
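A toy illustration of the variable sample size idea (not the thesis' actual method): minimize a sample average objective with monotone Armijo backtracking, keeping the sample small while the iterate is far from the solution and enlarging it as the gradient shrinks. The sample-size rule below is a simplistic stand-in for the adaptive schemes analyzed in Chapters 4 and 5:

```python
import numpy as np

rng = np.random.default_rng(6)
pool = rng.normal(2.0, 1.0, 10000)   # sample generated once, at the start

def f(x, N):
    """Sample average approximation of E[(x - xi)^2] on the first N draws."""
    return np.mean((x - pool[:N]) ** 2)

def grad(x, N):
    return 2.0 * np.mean(x - pool[:N])

x, N = 10.0, 100
for _ in range(60):
    g = grad(x, N)
    step = 1.0
    # Monotone Armijo backtracking on the current sample average function
    while f(x - step * g, N) > f(x, N) - 1e-4 * step * g * g:
        step *= 0.5
        if step < 1e-10:
            break
    x -= step * g
    # Variable sample size: a crude rule that keeps N small while the
    # gradient is large (far from the solution) and grows it near the end.
    N = min(len(pool), max(100, int(100.0 / max(abs(g), 1e-2) ** 2)))

print(f"x = {x:.3f} (true minimizer ~2.0), final N = {N}")
```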
