51 |
Développement d’une méthode de recherche de dose modélisant un score de toxicité pour les essais cliniques de phase I en Oncologie / Development of dose-finding method based on a toxicity score for designs evaluating molecularly targeted therapies in oncology
Ezzalfani Gahlouzi, Monia, 02 October 2013 (has links)
Le but principal d'un essai de phase I en oncologie est d'identifier, parmi un nombre fini de doses, la dose à recommander d'un nouveau traitement pour les évaluations ultérieures, sur un petit nombre de patients. Le critère de jugement principal est classiquement la toxicité. Bien que la toxicité soit mesurée pour différents organes sur une échelle gradée, elle est généralement réduite à un indicateur binaire appelé "toxicité dose-limitante" (DLT). Cette simplification très réductrice est problématique, en particulier pour les thérapies, dites "thérapies ciblées", associées à peu de DLTs. Dans ce travail, nous proposons un score de toxicité qui résume l'ensemble des toxicités observées chez un patient. Ce score, appelé TTP pour Total Toxicity Profile, est défini par la norme euclidienne des poids associés aux différents types et grades de toxicités possibles. Les poids reflètent l'importance clinique des différentes toxicités. Ensuite, nous proposons la méthode de recherche de dose, QLCRM pour Quasi-Likelihood Continual Reassessment Method, modélisant la relation entre la dose et le score de toxicité TTP à l'aide d'une régression logistique dans un cadre fréquentiste. A l'aide d'une étude de simulation, nous comparons la performance de cette méthode à celle de trois autres approches utilisant un score de toxicité : i) la méthode de Yuan et al. (QCRM) basée sur un modèle empirique pour estimer, dans un cadre bayésien, la relation entre la dose et le score, ii) la méthode d'Ivanova et Kim (UA) dérivée des méthodes algorithmiques et utilisant une régression isotonique pour estimer la dose à recommander en fin d'essai, iii) la méthode de Chen et al. (EID) basée sur une régression isotonique pour l'escalade de dose et l'identification de la dose à recommander. Nous comparons ensuite ces quatre méthodes utilisant le score de toxicité aux méthodes CRM basées sur le critère binaire DLT. Nous étudions également l'impact de l'erreur de classement des grades pour les différentes méthodes, guidées par le score de toxicité ou par la DLT. Enfin, nous illustrons le processus de construction du score de toxicité ainsi que l'application de la méthode QLCRM dans un essai réel de phase I. Dans cette application, nous avons utilisé une approche Delphi pour déterminer avec les cliniciens la matrice des poids et le score de toxicité jugé acceptable. Les méthodes QLCRM, QCRM, UA et EID présentent une bonne performance en termes de capacité à identifier correctement la dose à recommander et de contrôle du surdosage. Dans un essai incluant 36 patients, le pourcentage de sélection correcte de la dose à recommander obtenu avec les méthodes QLCRM et QCRM varie de 80 à 90% en fonction des situations. Les méthodes basées sur le score TTP sont plus performantes et plus robustes aux erreurs de classement des grades que les méthodes CRM basées sur le critère binaire DLT. Dans l'application rétrospective, le processus de construction du score apparaît facilement faisable. Cette étude nous a conduits à proposer des recommandations pour guider les investigateurs et faciliter l'utilisation de cette approche dans la pratique. En conclusion, la méthode QLCRM prenant en compte l'ensemble des toxicités s'avère séduisante pour les essais de phase I évaluant des médicaments associés à peu de DLTs a priori, mais avec des toxicités multiples modérées probables. / The aim of a phase I oncology trial is to identify a dose with an acceptable safety level.
Most phase I designs use the Dose-Limiting Toxicity (DLT), a binary endpoint, to assess the level of toxicity. DLT might be an incomplete endpoint for investigating molecularly targeted therapies, as a lot of useful toxicity information is discarded. In this work, we propose a quasi-continuous toxicity score, the Total Toxicity Profile (TTP), to measure quantitatively and comprehensively the overall burden of multiple toxicities. The TTP is defined as the Euclidean norm of the weights of toxicities experienced by a patient, where the weights reflect the relative clinical importance of each type and grade of toxicity. We then propose a dose-finding design, the Quasi-Likelihood Continual Reassessment Method (QLCRM), incorporating the TTP-score into the CRM, with a logistic model for the dose-toxicity relationship in a frequentist framework. Using simulations, we compare our design to three existing designs for quasi-continuous toxicity scores: i) the QCRM design, proposed by Yuan et al., with an empiric model for the dose-toxicity relationship in a Bayesian framework, ii) the UA design of Ivanova and Kim, derived from the "up-and-down" methods for the dose-escalation process and using an isotonic regression to estimate the recommended dose at the end of the trial, and iii) the EID design of Chen et al., using the isotonic regression for the dose-escalation process and for the identification of the recommended dose. We also perform a simulation study to evaluate the TTP-driven methods in comparison to the classical DLT-driven CRM. We then evaluate the robustness of these designs in a setting where grades can be misclassified. In the last part of this work, we illustrate the process of building the TTP-score and the application of the QLCRM method through the example of a paediatric trial. In this study, we used the Delphi method to elicit the weights and the target toxicity-score considered as an acceptable toxicity measure. All designs using the TTP-score to identify the recommended dose had good performance characteristics for most scenarios, with good overdosing control. For a sample size of 36, the percentage of correct selection for the QLCRM ranged from 80 to 90%, with similar results for the QCRM design. The simulation study also demonstrates that score-driven designs offer improved performance and robustness compared to conventional DLT-driven designs. In the retrospective application to the erlotinib trial, the consensus weights as well as the target-TTP were easily obtained, confirming the feasibility of the process. Guidelines are suggested to facilitate the process and promote good practice of this approach in real clinical trials. The QLCRM method, based on the TTP-endpoint combining multiple graded toxicities, is an appealing alternative to conventional dose-finding designs, especially in the context of molecularly targeted agents.
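As a rough illustration of the TTP endpoint described in this abstract, the sketch below computes the score as the Euclidean norm of clinician-elicited weights for the toxicities observed in a patient. The toxicity names and weight values are hypothetical placeholders, not the Delphi-elicited matrix used in the thesis.

```python
import math

# Hypothetical weight matrix: weight[toxicity_type][grade]; illustrative values only,
# not the clinician-elicited (Delphi) weights from the actual trial.
WEIGHTS = {
    "neutropenia":      {1: 0.00, 2: 0.50, 3: 1.00, 4: 1.50},
    "thrombocytopenia": {1: 0.00, 2: 0.50, 3: 1.00, 4: 1.50},
    "nausea":           {1: 0.00, 2: 0.25, 3: 0.75, 4: 1.25},
}

def total_toxicity_profile(observed):
    """TTP = Euclidean norm of the weights of the toxicities experienced by a patient.

    `observed` maps toxicity type -> worst grade observed, e.g. {"nausea": 3}.
    """
    return math.sqrt(sum(WEIGHTS[tox][grade] ** 2 for tox, grade in observed.items()))

# Example: grade-3 neutropenia together with grade-2 nausea
print(total_toxicity_profile({"neutropenia": 3, "nausea": 2}))  # about 1.03
```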
|
52 |
Vers la résolution "optimale" de problèmes inverses non linéaires parcimonieux grâce à l'exploitation de variables binaires sur dictionnaires continus : applications en astrophysique / Towards an "optimal" solution for nonlinear sparse inverse problems using binary variables on continuous dictionaries : applications in Astrophysics
Boudineau, Mégane, 01 February 2019 (has links)
Cette thèse s'intéresse à la résolution de problèmes inverses non linéaires exploitant un a priori de parcimonie ; plus particulièrement, des problèmes où les données se modélisent comme la combinaison linéaire d'un faible nombre de fonctions non linéaires en un paramètre dit de " localisation " (par exemple la fréquence en analyse spectrale ou le décalage temporel en déconvolution impulsionnelle). Ces problèmes se reformulent classiquement en un problème d'approximation parcimonieuse linéaire (APL) en évaluant les fonctions non linéaires sur une grille de discrétisation arbitrairement fine du paramètre de localisation, formant ainsi un " dictionnaire discret ". Cependant, une telle approche se heurte à deux difficultés majeures. D'une part, le dictionnaire provenant d'une telle discrétisation est fortement corrélé et met en échec les méthodes de résolution sous-optimales classiques comme la pénalisation L1 ou les algorithmes gloutons. D'autre part, l'estimation du paramètre de localisation, appartenant nécessairement à la grille de discrétisation, se fait de manière discrète, ce qui entraîne une erreur de modélisation. Dans ce travail nous proposons des solutions pour faire face à ces deux enjeux, d'une part via la prise en compte de la parcimonie de façon exacte en introduisant un ensemble de variables binaires, et d'autre part via la résolution " optimale " de tels problèmes sur " dictionnaire continu " permettant l'estimation continue du paramètre de localisation. Deux axes de recherches ont été suivis, et l'utilisation des algorithmes proposés est illustrée sur des problèmes de type déconvolution impulsionnelle et analyse spectrale de signaux irrégulièrement échantillonnés. Le premier axe de ce travail exploite le principe " d'interpolation de dictionnaire ", consistant en une linéarisation du dictionnaire continu pour obtenir un problème d'APL sous contraintes. L'introduction des variables binaires nous permet de reformuler ce problème sous forme de " programmation mixte en nombres entiers " (Mixed Integer Programming - MIP) et ainsi de modéliser de façon exacte la parcimonie sous la forme de la " pseudo-norme L0 ". Différents types d'interpolation de dictionnaires et de relaxation des contraintes sont étudiés afin de résoudre de façon optimale le problème grâce à des algorithmes classiques de type MIP. Le second axe se place dans le cadre probabiliste Bayésien, où les variables binaires nous permettent de modéliser la parcimonie en exploitant un modèle de type Bernoulli-Gaussien. Ce modèle est étendu (modèle BGE) pour la prise en compte de la variable de localisation continue. L'estimation des paramètres est alors effectuée à partir d'échantillons tirés avec des algorithmes de type Monte Carlo par Chaîne de Markov. Plus précisément, nous montrons que la marginalisation des amplitudes permet une accélération de l'algorithme de Gibbs dans le cas supervisé (hyperparamètres du modèle connu). De plus, nous proposons de bénéficier d'une telle marginalisation dans le cas non supervisé via une approche de type " Partially Collapsed Gibbs Sampler. " Enfin, nous avons adapté le modèle BGE et les algorithmes associés à un problème d'actualité en astrophysique : la détection d'exoplanètes par la méthode des vitesses radiales. Son efficacité sera illustrée sur des données simulées ainsi que sur des données réelles. 
/ This thesis deals with solutions of nonlinear inverse problems using a sparsity prior; more specifically when the data can be modelled as a linear combination of a few functions, which depend non-linearly on a "location" parameter, i.e. frequencies for spectral analysis or time-delay for spike train deconvolution. These problems are generally reformulated as linear sparse approximation problems, thanks to an evaluation of the nonlinear functions at location parameters discretised on a fine grid, building a "discrete dictionary". However, such an approach has two major drawbacks. On the one hand, the discrete dictionary is highly correlated; classical sub-optimal methods such as L1-penalisation or greedy algorithms can then fail. On the other hand, the estimated location parameter, which belongs to the discretisation grid, is necessarily discrete, which leads to model errors. To deal with these issues, we propose in this work an exact sparsity model, thanks to the introduction of binary variables, and an optimal solution of the problem with a "continuous dictionary" allowing a continuous estimation of the location parameter. We focus on two research axes, which we illustrate with problems such as spike train deconvolution and spectral analysis of unevenly sampled data. The first axis focusses on the "dictionary interpolation" principle, which consists in a linearisation of the continuous dictionary in order to get a constrained linear sparse approximation problem. The introduction of binary variables allows us to reformulate this problem as a "Mixed Integer Program" (MIP) and to model the sparsity exactly through the "pseudo-norm L0". We study different kinds of dictionary interpolation and constraint relaxations, in order to solve the problem optimally with classical MIP algorithms. For the second axis, in a Bayesian framework, the binary variables are assumed to be random with a Bernoulli distribution and we model the sparsity through a Bernoulli-Gaussian prior. This model is extended to take into account continuous location parameters (BGE model). We then estimate the parameters from samples drawn using Markov chain Monte Carlo algorithms. In particular, we show that marginalising the amplitudes allows us to improve the sampling of a Gibbs algorithm in the supervised case (when the model's hyperparameters are known). In the unsupervised case, we propose to take advantage of such a marginalisation through a "Partially Collapsed Gibbs Sampler." Finally, we adapt the BGE model and associated samplers to a topical science case in Astrophysics: the detection of exoplanets from radial velocity measurements. The efficiency of our method will be illustrated with simulated data, as well as actual astrophysical data.
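For readers less familiar with the Bernoulli-Gaussian formulation mentioned above, a schematic version of the observation model and prior can be written as follows; the notation is chosen here for illustration and is not necessarily that of the thesis:

\[
y = \sum_{k=1}^{K} s_k\, a_k\, \phi(\theta_k) + \varepsilon,
\qquad s_k \sim \mathrm{Bernoulli}(\lambda),
\quad a_k \mid s_k = 1 \sim \mathcal{N}(0, \sigma_a^2),
\quad \varepsilon \sim \mathcal{N}(0, \sigma^2 I),
\]

where the binary variables \(s_k\) encode which components are active (their sum plays the role of the L0 pseudo-norm), the amplitudes \(a_k\) can be marginalised out to speed up the Gibbs sampler, and the location parameters \(\theta_k\) (frequency or time delay) are kept continuous in the extended BGE model instead of being constrained to a discretisation grid.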
|
53 |
Modelos para análise de dados discretos longitudinais com superdispersão / Models for analysis of longitudinal discrete data in the presence of overdispersion
Fernanda Bührer Rizzato, 08 February 2012 (has links)
Dados longitudinais na forma de contagens e na forma binária são muito comuns, os quais, frequentemente, podem ser analisados por distribuições de Poisson e de Bernoulli, respectivamente, pertencentes à família exponencial. Duas das principais limitações para modelar esse tipo de dados são: (1) a ocorrência de superdispersão, ou seja, quando a variabilidade dos dados não é adequadamente descrita pelos modelos, que muitas vezes apresentam uma relação pré-estabelecida entre a média e a variância, e (2) a correlação existente entre medidas realizadas repetidas vezes na mesma unidade experimental. Uma forma de acomodar a superdispersão é pela utilização das distribuições binomial negativa e beta binomial, ou seja, pela inclusão de um efeito aleatório com distribuição gama quando se considera dados provenientes de contagens e um efeito aleatório com distribuição beta quando se considera dados binários, ambos introduzidos de forma multiplicativa. Para acomodar a correlação entre as medidas realizadas no mesmo indivíduo podem-se incluir efeitos aleatórios com distribuição normal no preditor linear. Essas situações podem ocorrer separada ou simultaneamente. Molenberghs et al. (2010) propuseram modelos que generalizam os modelos lineares generalizados mistos Poisson-normal e Bernoulli-normal, incorporando aos mesmos a superdispersão. Esses modelos foram formulados e ajustados aos dados, usando-se o método da máxima verossimilhança. Entretanto, para um modelo de efeitos aleatórios, é natural pensar em uma abordagem Bayesiana. Neste trabalho, são apresentados modelos Bayesianos hierárquicos para dados longitudinais, na forma de contagens e binários que apresentam superdispersão. A análise Bayesiana hierárquica é baseada no método de Monte Carlo com Cadeias de Markov (MCMC) e para implementação computacional utilizou-se o software WinBUGS. A metodologia para dados na forma de contagens é usada para a análise de dados de um ensaio clínico em pacientes epilépticos e a metodologia para dados binários é usada para a análise de dados de um ensaio clínico para tratamento de dermatite. / Longitudinal count and binary data are very common, and can often be analyzed by Poisson and Bernoulli distributions, respectively, members of the exponential family. Two of the main limitations in modeling these data are: (1) the occurrence of overdispersion, i.e., the phenomenon whereby variability in the data is not adequately captured by the model, and (2) the accommodation of data hierarchies owing to, for example, repeatedly measuring the outcome on the same subject. One way of accommodating overdispersion is by using the negative-binomial and beta-binomial distributions, in other words, by the inclusion of a random, gamma-distributed effect when considering count data and a random, beta-distributed effect when considering binary data, both introduced by multiplication. To accommodate the correlation between measurements made on the same individual, one can include normal random effects in the linear predictor. These situations can occur separately or simultaneously. Molenberghs et al. (2010) proposed models that simultaneously generalize the Poisson-normal and Bernoulli-normal generalized linear mixed models, incorporating the overdispersion. These models were formulated and fitted to the data using maximum likelihood estimation. However, these models lend themselves naturally to a Bayesian approach as well.
In this work, we present Bayesian hierarchical models for longitudinal count and binary data in the presence of overdispersion. The hierarchical Bayesian analysis is based on Markov chain Monte Carlo (MCMC) methods, and the WinBUGS software is used for the computational implementation. The methodology for count data is used to analyse a dataset from a clinical trial in epileptic patients, and the methodology for binary data is used to analyse a dataset from a clinical trial on the toenail infection onychomycosis.
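The combined model referred to above couples a gamma-distributed multiplicative effect (capturing overdispersion) with a normal random effect in the linear predictor (capturing within-subject correlation). The sketch below simulates count data from such a model; it is an illustrative simulation in the spirit of the Poisson case, with placeholder parameter values, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulation from a combined Poisson model: a gamma-distributed
# multiplicative effect per observation (overdispersion) and a normal random
# intercept per subject (within-subject correlation). All values are placeholders.
n_subjects, n_times = 50, 4
beta0, beta1 = 1.0, -0.2          # fixed effects: intercept and time trend
sigma_b = 0.5                     # SD of the normal random intercept
alpha = 2.0                       # Gamma(alpha, 1/alpha): mean 1, variance 1/alpha

b = rng.normal(0.0, sigma_b, size=n_subjects)            # subject-level effects
time = np.arange(n_times)

y = np.empty((n_subjects, n_times), dtype=int)
for i in range(n_subjects):
    kappa = np.exp(beta0 + beta1 * time + b[i])           # log-linear predictor
    theta = rng.gamma(alpha, 1.0 / alpha, size=n_times)   # overdispersion effects
    y[i] = rng.poisson(theta * kappa)                      # observed counts

print(y.mean(), y.var())  # variance exceeds the mean: overdispersion
```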
|
54 |
Modelos para análise de dados discretos longitudinais com superdispersão / Models for analysis of longitudinal discrete data in the presence of overdispersion
Rizzato, Fernanda Bührer, 08 February 2012 (has links)
Dados longitudinais na forma de contagens e na forma binária são muito comuns, os quais, frequentemente, podem ser analisados por distribuições de Poisson e de Bernoulli, respectivamente, pertencentes à família exponencial. Duas das principais limitações para modelar esse tipo de dados são: (1) a ocorrência de superdispersão, ou seja, quando a variabilidade dos dados não é adequadamente descrita pelos modelos, que muitas vezes apresentam uma relação pré-estabelecida entre a média e a variância, e (2) a correlação existente entre medidas realizadas repetidas vezes na mesma unidade experimental. Uma forma de acomodar a superdispersão é pela utilização das distribuições binomial negativa e beta binomial, ou seja, pela inclusão de um efeito aleatório com distribuição gama quando se considera dados provenientes de contagens e um efeito aleatório com distribuição beta quando se considera dados binários, ambos introduzidos de forma multiplicativa. Para acomodar a correlação entre as medidas realizadas no mesmo indivíduo podem-se incluir efeitos aleatórios com distribuição normal no preditor linear. Essas situações podem ocorrer separada ou simultaneamente. Molenberghs et al. (2010) propuseram modelos que generalizam os modelos lineares generalizados mistos Poisson-normal e Bernoulli-normal, incorporando aos mesmos a superdispersão. Esses modelos foram formulados e ajustados aos dados, usando-se o método da máxima verossimilhança. Entretanto, para um modelo de efeitos aleatórios, é natural pensar em uma abordagem Bayesiana. Neste trabalho, são apresentados modelos Bayesianos hierárquicos para dados longitudinais, na forma de contagens e binários que apresentam superdispersão. A análise Bayesiana hierárquica é baseada no método de Monte Carlo com Cadeias de Markov (MCMC) e para implementação computacional utilizou-se o software WinBUGS. A metodologia para dados na forma de contagens é usada para a análise de dados de um ensaio clínico em pacientes epilépticos e a metodologia para dados binários é usada para a análise de dados de um ensaio clínico para tratamento de dermatite. / Longitudinal count and binary data are very common, and can often be analyzed by Poisson and Bernoulli distributions, respectively, members of the exponential family. Two of the main limitations in modeling these data are: (1) the occurrence of overdispersion, i.e., the phenomenon whereby variability in the data is not adequately captured by the model, and (2) the accommodation of data hierarchies owing to, for example, repeatedly measuring the outcome on the same subject. One way of accommodating overdispersion is by using the negative-binomial and beta-binomial distributions, in other words, by the inclusion of a random, gamma-distributed effect when considering count data and a random, beta-distributed effect when considering binary data, both introduced by multiplication. To accommodate the correlation between measurements made on the same individual, one can include normal random effects in the linear predictor. These situations can occur separately or simultaneously. Molenberghs et al. (2010) proposed models that simultaneously generalize the Poisson-normal and Bernoulli-normal generalized linear mixed models, incorporating the overdispersion. These models were formulated and fitted to the data using maximum likelihood estimation. However, these models lend themselves naturally to a Bayesian approach as well.
In this work, we present Bayesian hierarchical models for longitudinal count and binary data in the presence of overdispersion. The hierarchical Bayesian analysis is based on Markov chain Monte Carlo (MCMC) methods, and the WinBUGS software is used for the computational implementation. The methodology for count data is used to analyse a dataset from a clinical trial in epileptic patients, and the methodology for binary data is used to analyse a dataset from a clinical trial on the toenail infection onychomycosis.
|
55 |
Sample Footprints für Data-Warehouse-Datenbanken
Rösch, Philipp; Lehner, Wolfgang, 20 January 2023 (has links)
Durch stetig wachsende Datenmengen in aktuellen Data-Warehouse-Datenbanken erlangen Stichproben eine immer größer werdende Bedeutung. Insbesondere interaktive Analysen können von den signifikant kürzeren Antwortzeiten der approximativen Anfrageverarbeitung erheblich profitieren. Linked-Bernoulli-Synopsen bieten in diesem Szenario speichereffiziente, schemaweite Synopsen, d. h. Synopsen mit Stichproben jeder im Schema enthaltenen Tabelle bei minimalem Mehraufwand für die Erhaltung der referenziellen Integrität innerhalb der Synopse. Dies ermöglicht eine effiziente Unterstützung der näherungsweisen Beantwortung von Anfragen mit beliebigen Fremdschlüsselverbundoperationen. In diesem Artikel wird der Einsatz von Linked-Bernoulli-Synopsen in Data-Warehouse-Umgebungen detaillierter analysiert. Dies beinhaltet zum einen die Konstruktion speicherplatzbeschränkter, schemaweiter Synopsen, wobei unter anderem folgende Fragen adressiert werden: Wie kann der verfügbare Speicherplatz auf die einzelnen Stichproben aufgeteilt werden? Was sind die Auswirkungen auf den Mehraufwand? Zum anderen wird untersucht, wie Linked-Bernoulli-Synopsen für die Verwendung in Data-Warehouse-Datenbanken angepasst werden können. Hierfür werden eine inkrementelle Wartungsstrategie sowie eine Erweiterung um eine Ausreißerbehandlung für die Reduzierung von Schätzfehlern approximativer Antworten von Aggregationsanfragen mit Fremdschlüsselverbundoperationen vorgestellt. Eine Vielzahl von Experimenten zeigt, dass Linked-Bernoulli-Synopsen und die in diesem Artikel präsentierten Verfahren vielversprechend für den Einsatz in Data-Warehouse-Datenbanken sind. / With the amount of data in current data warehouse databases growing steadily, random sampling is continuously gaining in importance. In particular, interactive analyses of large datasets can greatly benefit from the significantly shorter response times of approximate query processing. In this scenario, Linked Bernoulli Synopses provide memory-efficient schema-level synopses, i. e., synopses that consist of random samples of each table in the schema with minimal overhead for retaining foreign-key integrity within the synopsis. This provides efficient support to the approximate answering of queries with arbitrary foreign-key joins. In this article, we focus on the application of Linked Bernoulli Synopses in data warehouse environments. On the one hand, we analyze the instantiation of memory-bounded synopses. Among others, we address the following questions: How can the given space be partitioned among the individual samples? What is the impact on the overhead? On the other hand, we consider further adaptations of Linked Bernoulli Synopses for usage in data warehouse databases. We show how synopses can incrementally be kept up-to-date when the underlying data changes. Further, we suggest additional outlier handling methods to reduce the estimation error of approximate answers of aggregation queries with foreign-key joins. With a variety of experiments, we show that Linked Bernoulli Synopses and the proposed techniques have great potential in the context of data warehouse databases.
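As a much-simplified illustration of the schema-level sampling idea discussed above, the sketch below Bernoulli-samples a fact table and then pulls in every referenced dimension row so that no foreign key in the synopsis dangles. The actual Linked Bernoulli Synopses coordinate the per-table inclusion decisions far more carefully in order to keep this referential overhead minimal; the toy schema and sampling rate here are invented for the example.

```python
import random

random.seed(42)

# Toy schema: a fact table of orders referencing a customer dimension table.
orders = [(order_id, random.randint(1, 20)) for order_id in range(1, 1001)]
customers = {cid: f"customer-{cid}" for cid in range(1, 21)}

q = 0.05  # Bernoulli sampling rate for the fact table
order_sample = [row for row in orders if random.random() < q]

# Referential closure: keep exactly the customers referenced by sampled orders,
# so every foreign-key join on the synopsis can be answered.
customer_sample = {cid: customers[cid] for _, cid in order_sample}

print(len(order_sample), "orders and", len(customer_sample), "customers in the synopsis")
```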
|
56 |
Non-uniformity issues and workarounds in bounded-size sampling
Gemulla, Rainer; Haas, Peter J.; Lehner, Wolfgang, 27 January 2023 (has links)
A variety of schemes have been proposed in the literature to speed up query processing and analytics by incrementally maintaining a bounded-size uniform sample from a dataset in the presence of a sequence of insertion, deletion, and update transactions. These algorithms vary according to whether the dataset is an ordinary set or a multiset and whether the transaction sequence consists only of insertions or can include deletions and updates. We report on subtle non-uniformity issues that we found in a number of these prior bounded-size sampling schemes, including some of our own. We provide workarounds that can avoid the non-uniformity problem; these workarounds are easy to implement and incur negligible additional cost. We also consider the impact of non-uniformity in practice and describe simple statistical tests that can help detect non-uniformity in new algorithms.
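The abstract does not spell out the individual schemes; as background, the classic reservoir-sampling algorithm sketched below maintains a bounded-size uniform sample under insertions only. The subtle non-uniformity issues discussed in the paper arise in more elaborate bounded-size schemes that also have to cope with deletions and updates.

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Maintain a uniform sample of at most k items under insertions only
    (classic reservoir sampling). Supporting deletions and updates is where
    the more elaborate bounded-size schemes, and their pitfalls, come in."""
    rng = rng or random.Random(0)
    sample = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            sample.append(item)
        else:
            j = rng.randrange(n)   # uniform in {0, ..., n-1}
            if j < k:
                sample[j] = item   # replace an item with probability k/n
    return sample

print(reservoir_sample(range(10_000), 5))
```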
|
57 |
Loosely coupled, modular framework for linear static aeroelastic analyses
Dettmann, Aaron, January 2019 (has links)
A computational framework for linear static aeroelastic analyses is presented. The overall aeroelasticity model is applicable to conceptual aircraft design studies and other low-fidelity aero-structural analyses. A partitioned approach is used, i. e. separate solvers for aerodynamics and structure analyses are coupled in a suitable way, together forming a model for aeroelastic simulations. Aerodynamics are modelled using the vortex-lattice method (VLM), a simple computational fluid dynamics (CFD) model based on potential flow. The structure is represented by a three-dimensional (3D) Euler-Bernoulli beam model in a finite element method (FEM) formulation. A particular focus was put on the modularity and loose coupling of the aforementioned models. The core of the aeroelastic framework was abstracted, such that it does not depend on any specific details of the underlying aerodynamics and structure modules. The final aeroelasticity model consists of independent software tools for the VLM and the beam FEM, as well as a framework enabling the aeroelastic coupling. These different tools have been developed as part of this thesis work. A wind tunnel experiment with a simple wing model is presented as a validation test case. An aero-structural analysis of a fully elastic unmanned aerial vehicle (UAV) (OptiMale) is described and the results are compared with an existing higher-fidelity study. / Rapporten beskriver en beräkningsmodell för linjära, statiska aeroelastiska analyser. Modellen kan användas för konceptuella designstudier av flygplan. En partitionerad metod används, d v s separata lösare för aerodynamik- och strukturanalyser kopplas på ett lämpligt sätt, och bildar tillsammans en modell för aeroelastiska simulationer. Aerodynamik modelleras med hjälp av en så kallad vortex-lattice method (VLM), en enkel modell för beräkningsströmningsdynamik (CFD) som är baserad på friktionsfri strömning. Strukturen representeras av en tredimensionell (3D) Euler-Bernoulli-balkmodell implementerad med hjälp av en finita elementmetod (FEM). Ovannämnda modeller har utvecklats med fokus på modularitet och lös koppling. Kärnan i den aeroelastiska modellen har abstraherats så att den inte beror på specifika detaljer i de underliggande aerodynamik- och strukturmodulerna. Aeroelasticitetsmodellen i sin helhet består av separata mjukvaruprogram för VLM och balk-FEM, såväl som ett ramverk som möjliggör den aeroelastiska kopplingen. Dessa olika program har utvecklats som en del av examensarbetet. Ett vindtunnelförsök med en enkel vingmodell presenteras som ett valideringstest. Dessutom beskrivs en analys av ett elastiskt obemannat flygplan (OptiMale) och resultaten jämförs med en befintlig studie som har genomförts med modeller av högre trovärdighet.
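The structural side of such a framework typically assembles standard two-node Euler-Bernoulli beam elements. The sketch below builds the textbook bending stiffness matrix for one element and checks it against the analytical tip deflection of a cantilever; it is a generic illustration, not code from the thesis, and the material and load values are arbitrary.

```python
import numpy as np

def beam_element_stiffness(E, I, L):
    """Two-node Euler-Bernoulli beam element stiffness matrix
    (bending only; DOFs: w1, theta1, w2, theta2)."""
    c = E * I / L**3
    return c * np.array([
        [ 12.0,   6*L,  -12.0,   6*L  ],
        [  6*L, 4*L**2,  -6*L, 2*L**2 ],
        [-12.0,  -6*L,   12.0,  -6*L  ],
        [  6*L, 2*L**2,  -6*L, 4*L**2 ],
    ])

# One-element cantilever with a tip load: the FEM tip deflection should match
# the analytical value w = P * L**3 / (3 * E * I).
E, I, L, P = 70e9, 1e-6, 2.0, 100.0
K = beam_element_stiffness(E, I, L)
K_ff = K[2:, 2:]                         # clamp node 1 (w1 = theta1 = 0)
w_tip, _ = np.linalg.solve(K_ff, np.array([P, 0.0]))
print(w_tip, P * L**3 / (3 * E * I))     # the two numbers should agree
```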
|
58 |
On some queueing systems with server vacations, extended vacations, breakdowns, delayed repairs and stand-bys
Khalaf, Rehab F., January 2012 (has links)
This research investigates a batch arrival queueing system with a Bernoulli scheduled vacation and random system breakdowns. It is assumed that the repair process does not start immediately after the breakdown, so there may be a delay before repairs start. After every service completion the server may go on an optional vacation. When the original vacation is completed, the server has the option to go on an extended vacation. It is assumed that the system is equipped with a stand-by server to serve the customers during the vacation period of the main server as well as during the repair process. The service times, vacation times, repair times, delay times and extended vacation times are assumed to follow different general distributions, while the breakdown times and the service times of the stand-by server follow an exponential distribution. By introducing a supplementary variable we are able to obtain steady-state results in an explicit closed form in terms of the probability generating functions. Some important performance measures are presented, including the average queue length, the average number of customers in the system, the mean response time, and the traffic intensity. The Mathcad 2001 Professional software has been used to illustrate the numerical results in this study.
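The analytical results above come from the supplementary-variable technique; a quick way to sanity-check such formulas is simulation. The sketch below is a heavily simplified Monte Carlo estimate of the mean waiting time in a single-server queue where the server takes one vacation after a service completion with some probability; batch arrivals, breakdowns, delayed repairs, extended vacations and the stand-by server of the full model are deliberately omitted, and all parameter values are placeholders.

```python
import random

rng = random.Random(7)
lam = 0.5             # Poisson arrival rate
mean_service = 1.0    # mean of the (here exponential) service time
mean_vacation = 0.5   # mean of the (here exponential) vacation time
p_vacation = 0.3      # Bernoulli schedule: probability of a vacation after a service

n_customers = 200_000
wait, total_wait = 0.0, 0.0
for _ in range(n_customers):
    total_wait += wait
    service = rng.expovariate(1.0 / mean_service)
    vacation = rng.expovariate(1.0 / mean_vacation) if rng.random() < p_vacation else 0.0
    interarrival = rng.expovariate(lam)
    # Lindley-type recursion: an optional vacation delays the next service start.
    wait = max(0.0, wait + service + vacation - interarrival)

print("estimated mean waiting time:", total_wait / n_customers)
```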
|
59 |
Modos em vigas com secção transversal de variação linear
Juver, Jovita Rasch Bracht, January 2002 (has links)
O objetivo principal deste trabalho é a obtenção dos modos e das freqüências naturais de vigas de variação linear e em forma de cunha, com condições de contorno clássicas e não-clássicas, descritas pelo modelo estrutural de Euler-Bernoulli. A forma dos modos foi determinada com o uso das funções cilíndricas. No caso forçado, considera-se uma força harmônica e resolve-se o problema pelo método espectral, utilizando o software simbólico Maple V5. Realiza-se uma análise comparativa dos resultados obtidos com os resultados existentes na literatura para vigas uniformes. / The main objective of this work is to obtain the mode shapes and natural frequencies of linearly tapered and wedge-shaped beams, with classical and non-classical boundary conditions, described by the Euler-Bernoulli structural model. The mode shapes are determined using cylinder (Bessel) functions. In the forced case, a harmonic force is considered and the problem is solved by the spectral method, using the symbolic software Maple V5. A comparative analysis is carried out between the results obtained and those available in the literature for uniform beams.
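For context, the free-vibration problem behind these results is the Euler-Bernoulli equation with a variable cross-section; the notation below is chosen for illustration and a wedge-shaped beam is taken as an example of the linear taper:

\[
\frac{d^2}{dx^2}\!\left[E\,I(x)\,\frac{d^2 w}{dx^2}\right] = \rho\,A(x)\,\omega^2\,w(x),
\]

where, for instance, a wedge of length \(L\) with linearly varying height has \(A(x) = A_0\,(1 - x/L)\) and \(I(x) = I_0\,(1 - x/L)^3\). After a change of variable, the solutions can be expressed in terms of Bessel (cylinder) functions, which is the route followed in the thesis.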
|
60 |
Modélisation probabiliste d’impression à l’échelle micrométrique / Probabilistic modeling of prints at the microscopic scale
Nguyen, Quoc Thong, 18 May 2015 (has links)
Nous développons des modèles probabilistes pour l'impression à l'échelle micrométrique. Tenant compte de l'aléa de la forme des points qui composent les impressions, les modèles proposés pourront être ultérieurement exploités dans différentes applications dont l'authentification de documents imprimés. Une analyse de l'impression sur différents supports papier et par différentes imprimantes a été effectuée. Cette étude montre que la grande variété de forme dépend de la technologie et du papier. Le modèle proposé tient compte à la fois de la distribution du niveau de gris et de la répartition spatiale de l'encre sur le papier. Concernant le niveau de gris, les modèles des surfaces encrées/vierges sont obtenus en sélectionnant les distributions dans un ensemble de lois de forme similaire aux histogrammes, à l'aide du critère de Kolmogorov-Smirnov. Le modèle de répartition spatiale de l'encre est binaire. Le premier modèle consiste en un champ de variables indépendantes de Bernoulli non-stationnaire dont les paramètres forment un noyau gaussien généralisé. Un second modèle de répartition spatiale des particules d'encre est proposé ; il tient compte de la dépendance des pixels à l'aide d'un modèle de Markov non stationnaire. Deux méthodes d'estimation ont été développées, l'une approchant le maximum de vraisemblance par un algorithme de quasi-Newton, la seconde approchant le critère de l'erreur quadratique moyenne minimale par l'algorithme de Metropolis within Gibbs. Les performances des estimateurs sont évaluées et comparées sur des images simulées. La précision des modélisations est analysée sur des jeux d'images d'impression à l'échelle micrométrique obtenues par différentes imprimantes. / We develop probabilistic models of prints at the microscopic scale. We study the shape randomness of the dots that make up the prints, and the proposed models could improve many applications, such as the authentication of printed documents. An analysis was conducted on various papers and printers. The study shows a large variety of dot shapes, depending on the printing technology and the paper. The digital scan of the microscopic print is modeled by two components: the gray-level distribution and the spatial binary process describing the printed/blank spatial distribution. We seek the best parametric distribution that accounts for the distributions of the blank and printed areas. Parametric distributions are selected from a set of distributions with shapes close to the histograms, using the Kolmogorov-Smirnov divergence. The spatial binary model handles the wide diversity of dot shapes and the range of variation of the spatial density of inked particles. At first, we propose a field of independent, non-stationary Bernoulli variables whose parameters form a generalised Gaussian kernel. The second spatial binary model encompasses, in addition to the first model, the spatial dependence of the inked area through an inhomogeneous Markov model. Two iterative estimation methods are developed: a quasi-Newton algorithm that approximates the maximum likelihood estimator, and a Metropolis-Hastings within Gibbs algorithm that approximates the minimum mean square error estimator. The performances of the algorithms are evaluated and compared on simulated images. The accuracy of the models is analyzed on microscopic-scale prints obtained from various printers. Results show the good behavior of the estimators and the consistency of the models.
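As a toy illustration of the first spatial model (independent, non-stationary Bernoulli variables whose success probability decays from the dot centre following a generalised Gaussian profile), the sketch below simulates a single ink dot. The dot size and profile parameters are invented for the example, not estimated from real scans.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each pixel is an independent Bernoulli variable; its probability of being inked
# decays from the dot centre with a generalised Gaussian profile (placeholder values).
size = 64
cx = cy = size / 2.0
sigma, gamma = 10.0, 1.5   # scale and shape of the generalised Gaussian profile

yy, xx = np.mgrid[0:size, 0:size]
r = np.hypot(xx - cx, yy - cy)
p_inked = np.exp(-(r / sigma) ** gamma)   # Bernoulli parameter at each pixel

dot = rng.random((size, size)) < p_inked  # True where the pixel is inked
print(dot.sum(), "inked pixels out of", size * size)
```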
|