371 |
Optimisation des plans de traitement en radiothérapie grâce aux dernières techniques de calcul de dose rapide / Optimization in radiotherapy treatment planning thanks to a fast dose calculation method. Yang, Ming Chao, 13 March 2014.
This thesis addresses radiotherapy treatment planning, with emphasis on the need for a fast and reliable treatment planning system (TPS). A TPS combines a dose calculation algorithm with an optimization method; its goal is to plan the treatment so that the prescribed dose is delivered to the tumor while sparing the surrounding healthy and sensitive tissues. Treatment planning consists in determining the irradiation parameters best suited to each patient. In this thesis, the parameters of an IMRT (intensity-modulated radiation therapy) treatment are the source position, the beam orientations and, for each beam composed of elementary beamlets, the fluence of those beamlets. The objective function is multicriteria with linear constraints.

The aim of the thesis is to demonstrate the feasibility of a treatment plan optimization method based on the fast dose calculation technique developed by Blanpain (2009). This technique relies on a phantom segmented into homogeneous meshes, and the dose computation proceeds in two steps. The first step operates on the meshes: projections and weights are parameterized there according to physical and geometrical criteria. The second step operates on the voxels: the dose is computed by evaluating the functions previously associated with their mesh. A reformulation of this technique makes it possible to attack the optimization problem with gradient descent, so that the treatment parameters can be optimized continuously. The results obtained in this thesis open many perspectives for treatment plan optimization in radiotherapy.
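To make the optimization step concrete, here is a minimal sketch of fluence optimization by projected gradient descent. It is not Blanpain's mesh-based dose engine: it assumes a precomputed dose-influence matrix A (dose per unit beamlet fluence), a simple quadratic penalty objective, and all names and numbers are illustrative.

```python
import numpy as np

# Hypothetical toy problem: 200 voxels, 40 beamlets.
rng = np.random.default_rng(0)
n_vox, n_blt = 200, 40
A = rng.random((n_vox, n_blt)) * 0.1        # dose-influence matrix (Gy per unit fluence)
tumor = np.zeros(n_vox, dtype=bool); tumor[:60] = True
d_presc = np.where(tumor, 60.0, 0.0)        # prescribed dose in tumor voxels (Gy)
d_max_oar = 20.0                            # tolerance for healthy voxels (Gy)

def objective_grad(x):
    d = A @ x
    # quadratic penalties: deviation from prescription in tumor, overdose elsewhere
    r_tumor = np.where(tumor, d - d_presc, 0.0)
    r_oar = np.where(~tumor, np.maximum(d - d_max_oar, 0.0), 0.0)
    f = np.sum(r_tumor**2) + np.sum(r_oar**2)
    g = 2.0 * A.T @ (r_tumor + r_oar)       # gradient w.r.t. beamlet fluences
    return f, g

x = np.full(n_blt, 10.0)                    # initial fluences
step = 1e-3
for it in range(500):
    f, g = objective_grad(x)
    x = np.maximum(x - step * g, 0.0)       # projected step: fluences stay non-negative
print(f"final objective: {f:.1f}")
```

The projection enforces the physical non-negativity of fluences; beam orientations, which the thesis also optimizes continuously, are omitted here for brevity.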
372 |
Diferentes noções de diferenciabilidade para funções definidas na esfera / Different notions of differentiability for functions defined on the sphere. Castro, Mario Henrique de, 01 March 2007.
In this work we study different notions of differentiability for functions defined on the unit sphere S^{n-1} of R^n, n >= 2. With respect to the usual derivative, we find necessary and/or sufficient conditions for a function to be differentiable up to a fixed order. For the other two notions, the strong Laplace-Beltrami derivative and the weak derivative, we present some of their basic properties and look for conditions that guarantee their equivalence with usual differentiability.
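For orientation, the Laplace-Beltrami operator \Delta_{S^{n-1}} underlying the strong derivative is the angular part of the Euclidean Laplacian; the following standard identities are background facts, not results of the thesis:

\[
\Delta_{\mathbb{R}^n} f = \frac{\partial^2 f}{\partial r^2} + \frac{n-1}{r}\,\frac{\partial f}{\partial r} + \frac{1}{r^2}\,\Delta_{S^{n-1}} f,
\qquad
\Delta_{S^{n-1}} Y_k = -k(k+n-2)\,Y_k,
\]

where Y_k is any spherical harmonic of degree k. The eigenvalue relation is what makes Laplace-Beltrami differentiability natural to express through spherical harmonic expansions.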
373 |
Application de l'assimilation de données à la mécanique des fluides numérique : de la turbulence isotrope aux écoulements urbains / Application of data assimilation to computational fluid dynamics: from isotropic turbulence to urban flows. Mons, Vincent, 18 November 2016.
In this thesis, we investigate the use of various data assimilation (DA) techniques in the context of CFD, with the overall goal of improving the numerical prediction of complex, real-world flows. DA consists in merging numerical predictions with experimental observations in order to improve the estimation of the inputs of the CFD solver. Both methodological aspects of DA and its application to physical studies are explored. First, DA is used for a theoretical analysis of grid turbulence decay, and a spectral model for homogeneous anisotropic turbulent flows is proposed. Various DA methodologies are then implemented in conjunction with a Navier-Stokes solver and assessed for the reconstruction of unsteady compressible flows with high-dimensional control vectors, in order to weigh the respective strengths and weaknesses of these techniques. Sensor placement strategies are developed to enhance the performance of the DA process. Finally, DA is applied to the identification of pollutant sources and the reconstruction of meteorological conditions for full-scale urban flows predicted by Large Eddy Simulation.
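As one concrete example of a DA building block, below is a minimal stochastic ensemble Kalman filter (EnKF) analysis step. The thesis deploys several DA methodologies, so this is a generic sketch rather than the approach used there; the linear observation operator H and all sizes are assumptions.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic EnKF analysis step.
    X : (n, N) forecast ensemble (N state vectors of dimension n)
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    n, N = X.shape
    Xp = X - X.mean(axis=1, keepdims=True)        # ensemble perturbations
    PHt = Xp @ (H @ Xp).T / (N - 1)               # P_f H^T without forming P_f
    S = H @ PHt + R                               # innovation covariance
    K = PHt @ np.linalg.inv(S)                    # Kalman gain
    # perturb the observations so the analysis spread stays statistically consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros_like(y), R, size=N).T
    return X + K @ (Y - H @ X)

# toy usage: 50-dimensional state observed at 10 locations
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 30))                     # 30-member forecast ensemble
H = np.zeros((10, 50)); H[np.arange(10), np.arange(10) * 5] = 1.0  # observe every 5th component
R = 0.1 * np.eye(10)
y = H @ rng.normal(size=50) + rng.normal(scale=0.3, size=10)       # synthetic data
Xa = enkf_analysis(X, y, H, R, rng)
```

Variational methods (adjoint-based 4D-Var) trade the ensemble for gradients of a cost function; both families merge model forecasts and observations in the sense described above.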
374 |
Work Group Composition Effects on Leadership Styles in Aircraft Manufacturing Organizations. Dunnagan, Monica Lynn, 01 January 2014.
Keywords: leadership styles; homogeneous versus heterogeneous; manufacturing leaders; contractor workforce.
375 |
Procédés catalytiques et outils millifluidiques : applications aux réactions de Friedel-Crafts et d'oxydation / Catalytic processes and millifluidic tools: application to Friedel-Crafts and oxidation reactions. Olivon, Kevin, 29 September 2014.
The search for new methods of acquiring physical and chemical reaction data while limiting harmful effects on humans is of great importance for modern chemistry. New miniaturized tools make it possible to limit the quantities of chemicals used while increasing research productivity: studying several simultaneously controlled parameters increases the number of experiments per unit of time. This step must nevertheless be carried out after the key parameters of the reaction have first been determined using high-throughput tools such as robotics. These tools are used here for the optimization of, and the search for a new synthetic route to, a reaction of industrial interest.

In addition, to address the interest of heterogeneous catalysis in industry, where solid catalysts ease separation and recycling, we developed two miniaturized tools that allow the study of, and data acquisition for, chemical reactions catalyzed by solids. The development proceeded in two stages: a physical characterization of the tools, followed by the study of an industrial model reaction, the acylation of anisole over zeolite catalysts.
376 |
Statistical inference for non-homogeneous Poisson process with competing risks: a repairable systems approach under power-law process / Inferência estatística para processo de Poisson não-homogêneo com riscos competitivos: uma abordagem de sistemas reparáveis sob processo de lei de potência. Almeida, Marco Pollo, 30 August 2019.
In this thesis, the main objective is to study certain aspects of modeling failure-time data of repairable systems under a competing risks framework. We consider two different models and propose more efficient Bayesian methods for estimating the parameters. In the first model, we discuss inferential procedures based on an objective Bayesian approach for analyzing failures from a single repairable system under independent competing risks. We examine the scenario where a minimal repair is performed at each failure, so that each failure mode follows a power-law intensity. The power-law intensity is reparametrized in terms of orthogonal parameters, and two objective priors, the Jeffreys prior and the reference prior, are derived. The corresponding posterior distributions are then studied: in several cases we prove that they are proper and that the priors are matching priors, and in some cases unbiased Bayesian estimators with simple closed-form expressions are obtained.

In the second model, we analyze data from multiple repairable systems in the presence of dependent competing risks. To model this dependence structure we adopt the well-known shared frailty model, which provides a suitable theoretical basis for generating dependence between the component failure times in the dependent competing risks model. The dependence effect in this scenario influences the estimates of the model parameters. Hence, under the assumption that the cause-specific intensities follow a PLP, we propose a frailty-induced dependence approach to incorporate the dependence among the cause-specific recurrent processes. Moreover, since misspecification of the frailty distribution may lead to errors in estimating the parameters of interest, we consider a Bayesian nonparametric approach to model the frailty density, which offers more flexibility, provides consistent estimates for the PLP model, and yields insights into the heterogeneity among the systems. Both simulation studies and real case studies are provided to illustrate the proposed approaches and demonstrate their validity.
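For reference, the standard power-law process (PLP) parametrization is the following; the orthogonal reparametrization used in the thesis is not reproduced here:

\[
\lambda(t) = \frac{\beta}{\theta}\left(\frac{t}{\theta}\right)^{\beta - 1},
\qquad
\Lambda(t) = \mathbb{E}\,N(t) = \left(\frac{t}{\theta}\right)^{\beta}, \qquad t > 0,
\]

with scale \theta > 0 and shape \beta > 0; \beta > 1 describes a deteriorating system, \beta < 1 an improving one, and minimal repair leaves the intensity unchanged across failures. A convenient consequence for simulation: if E_1 < E_2 < \dots are the arrival times of a unit-rate Poisson process, then t_k = \theta\, E_k^{1/\beta} are the failure times of a PLP.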
377 |
Inference for Discrete Time Stochastic Processes using Aggregated Survey Data. Davis, Brett Andrew (Brett.Davis@abs.gov.au), January 2003.
We consider a longitudinal system in which transitions between states are governed by a discrete-time, finite-state-space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model-based inference for the underlying process X. We develop inferential techniques for continuing sample surveys of two distinct types: first, longitudinal surveys, in which the same individuals are sampled in each cycle of the survey; second, cross-sectional surveys, which sample the same population in successive cycles but make no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002).

Longitudinal surveys provide data in the form of transition frequencies between the states of X. In Chapter Two we develop a method for modelling and estimating the one-step transition probabilities in the case where X is a non-homogeneous Markov chain and transition frequencies are observed at unit time intervals. However, due to their expense, longitudinal surveys are typically conducted at widely, and sometimes irregularly, spaced time points; that is, the observable frequencies pertain to multi-step transitions. Continuing to assume the Markov property for X, in Chapter Three we show that these multi-step transition frequencies can be stochastically interpolated to provide accurate estimates of the one-step transition probabilities of the underlying process. These unit-increment estimates can be used to calculate expected future occupation times in the different states of X, conditional on an individual's state at the initial point of observation.

For reasons of cost, most statistical collections run by official agencies are cross-sectional sample surveys. The data observed from an ongoing survey of this type are marginal frequencies in the states of X at a sequence of time points. In Chapter Four we develop a model-based technique for estimating the marginal probabilities of X using data of this form. Note that, in contrast to the longitudinal case, the Markov assumption does not simplify inference based on marginal frequencies. The marginal probability estimates enable estimation of future occupation times (in each of the states of X) for an individual of unspecified initial state, although in the applications we discuss (Sections 4.4 and 4.5) the estimated occupation times are conditional on both gender and initial age.

The longitudinal data envisaged in Chapter Two are obtained by surveying the same sample in each cycle of an ongoing survey. In practice, to preserve data quality, respondent burden must be controlled by sample rotation, usually via a mechanism known as rotation group sampling. In Chapter Five we consider the particular form of rotation group sampling used by the Australian Bureau of Statistics in its Monthly Labour Force Survey (from which official estimates of labour force participation rates are produced). We show that our approach to estimating the one-step transition probabilities of X from transition frequencies observed at incremental time intervals, developed in Chapter Two, can be modified to handle data collected under this rotation scheme. Furthermore, we show that valid inference is possible even when the Markov property does not hold for the underlying process.
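A minimal sketch of the simplest case discussed above: the maximum-likelihood estimate of a one-step transition matrix from unit-interval transition counts, with a naive matrix-root shortcut for multi-step data. The thesis's stochastic interpolation is more sophisticated, and the counts below are invented for illustration.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def mle_one_step(counts):
    """MLE of a Markov one-step transition matrix: row-normalized counts.
    counts[i, j] = number of sampled individuals observed moving
    from state i to state j over one time unit."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

# toy 3-state labour-force example (employed, unemployed, not in labour force)
C = np.array([[900,  40,  60],
              [ 80, 150,  70],
              [ 50,  30, 620]])
P = mle_one_step(C)

# if the observed frequencies pertain to k-step transitions, a crude one-step
# estimate is a k-th matrix root -- valid only when the root exists and is
# itself a stochastic matrix, which is not guaranteed in general
P1_approx = fractional_matrix_power(P, 1 / 3).real   # pretend P was a 3-step matrix
```

The fragility of the matrix-root shortcut (roots may fail to be stochastic) is one reason a model-based, stochastic-interpolation treatment of multi-step frequencies is needed.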
378 |
Energetics and Kinetics of Dislocation Initiation in the Stressed Volume at Small Scales. Li, Tianlei, 01 December 2010.
Instrumented nanoindentation techniques have been widely used to characterize the mechanical behavior of materials at small length scales. For defect-free single crystals under nanoindentation, the onset of the elastic-plastic transition often appears as a sudden displacement burst in the measured load-displacement curve. This burst is believed to result from homogeneous dislocation nucleation, because the maximum shear stress at the pop-in load approaches the theoretical strength of the material and because statistical measurements agree with a thermally activated process of homogeneous dislocation nucleation. For single crystals with defects, the pop-in is believed to result from the sudden motion of pre-existing dislocations or from heterogeneous dislocation nucleation. If the sample is prestrained before nanoindentation tests, the measured pop-in load on Ni and Mo single crystals decreases monotonically as the prestrain increases. A similar trend is observed with tip geometry: the pop-in load gradually decreases as the indenter tip radius increases.

This dissertation presents a systematic modeling effort on the energetics and kinetics of dislocation initiation in the stressed volume at small scales. For homogeneous dislocation nucleation, an indentation Schmid factor is defined as the ratio of the maximum resolved shear stress to the maximum contact pressure. The orientation-dependent nanoindentation pop-in loads are then predicted from the indentation Schmid factor, the theoretical strength of the material, the indenter radius, and the effective indentation modulus. Good agreement is found when experimental pop-in loads measured on NiAl, Mo, and Ni along different loading orientations are compared with these predictions. Statistical measurements generally confirm the thermal activation model of homogeneous dislocation nucleation, because the extracted dependence of activation energy on resolved shear stress is almost unique across all indentation directions. For pop-ins due to pre-existing defects, the pop-in load is predicted to depend on the defect density and on the critical strength for heterogeneous dislocation nucleation. The cumulative probability of pop-in loads therefore contains convoluted information from homogeneous dislocation nucleation, which is sensitive to temperature and loading rate, and from heterogeneous dislocation nucleation due to the unstable rearrangement of the existing defect network, which is sensitive to the initial defect distribution.
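The link between pop-in load and maximum shear stress comes from Hertzian contact mechanics. As a reminder, these are standard results for a spherical tip (with \nu \approx 0.3; see e.g. Johnson's Contact Mechanics), not equations quoted from the dissertation:

\[
p_0 = \left(\frac{6 P E_r^2}{\pi^3 R^2}\right)^{1/3},
\qquad
\tau_{\max} \approx 0.31\, p_0,
\]

where P is the indentation load, R the tip radius, and E_r the effective indentation modulus. Writing the indentation Schmid factor as S = \tau_{\mathrm{resolved}} / p_0 and setting the resolved shear stress at pop-in equal to the theoretical strength \tau_{\mathrm{th}} gives the orientation-dependent prediction

\[
P_{\text{pop-in}} \approx \frac{\pi^3 R^2}{6 E_r^2}\left(\frac{\tau_{\mathrm{th}}}{S}\right)^3,
\]

which makes explicit the dependence on tip radius, effective modulus, and loading orientation described above.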
379 |
Immuntechnologische Verfahren zum Aufbau homogener Immunoassays sowie zur Selektion Antikörper produzierender Zellen / Immunotechnological procedures for the development of homogeneous immunoassays and the selection of antibody-producing cells. Sellrie, Frank, January 2007.
Homogeneous immunoassays are immunological test procedures that require no separation or washing steps at all.

The substrate channeling immunoassay is based on the hand-over of a substrate within an immunological complex of two enzymes: the product of the first enzyme serves the second enzyme as substrate for generating a photometrically detectable product. This hand-over requires close spatial proximity of the two enzymes, which is created by the binding between analyte and anti-analyte antibody. Such a substrate channeling immunoassay was set up using the enzymes glucose oxidase and peroxidase. The system was functional, but its sensitivity remained below that of conventional heterogeneous immunoassays.

The fluorescence quenching immunoassay rests on the mutual exclusion of two antibodies competing to bind a dihapten conjugate consisting of the analyte and a fluorophore. The two competing antibodies are an anti-analyte antibody and an anti-fluorophore antibody; the latter, upon binding the fluorophore, also quenches its fluorescence. Addition of free analyte shifts the equilibrium toward fluorophore binding and hence toward fluorescence quenching. The change in fluorescence is directly coupled to the concentration of free analyte and serves to determine it. Such a fluorescence quenching immunoassay was established for determining the concentration of the herbicide diuron, with sensitivities that permit practical immunodiagnostic application of the system.

A dihapten conjugate was likewise employed to build a procedure for selecting antibody-producing cells. The selection uses a toxin conjugate consisting of a ligand and a toxin: antibody binding to the ligand sterically hinders the toxin component of the conjugate from interacting with its target structure in or on the cell. Only cells that secrete a suitable antibody survive the selection and accumulate in the culture. The procedure was successfully applied to the selection of E. coli cells producing a recombinant fluorescein-binding antibody; the toxin conjugate synthesized for this purpose consisted of fluorescein (ligand) and ampicillin (toxin component). This opens the way to replacing the extremely cost- and labour-intensive screening methods used for this task so far.
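Schematically, the quenching assay couples three mass-action equilibria (notation ours, for illustration, not taken from the thesis):

\[
\mathrm{Ab_A} + \mathrm{C} \rightleftharpoons \mathrm{Ab_A C}\ (K_A),
\qquad
\mathrm{Ab_F} + \mathrm{C} \rightleftharpoons \mathrm{Ab_F C}\ (K_F,\ \text{quenched}),
\qquad
\mathrm{Ab_A} + \mathrm{L} \rightleftharpoons \mathrm{Ab_A L}\ (K_L),
\]

where C is the analyte-fluorophore conjugate, L the free analyte, Ab_A the anti-analyte antibody, and Ab_F the anti-fluorophore antibody; the two antibody-conjugate complexes are mutually exclusive. Raising [L] sequesters Ab_A into Ab_A L, shifting C toward the quenching complex Ab_F C, so the fluorescence decrease reports the free-analyte concentration.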
380 |
Asymptotic Problems on Homogeneous Spaces. Södergren, Anders, January 2010.
This PhD thesis consists of a summary and five papers, all dealing with asymptotic problems on certain homogeneous spaces.

In Paper I we prove asymptotic equidistribution results for pieces of large closed horospheres in cofinite hyperbolic manifolds of arbitrary dimension. All our results are given with precise estimates on the rates of convergence to equidistribution.

Papers II and III are concerned with statistical problems on the space of n-dimensional lattices of covolume one. In Paper II we study the distribution of lengths of non-zero lattice vectors in a random lattice of large dimension. We prove that these lengths, when properly normalized, determine a stochastic process that, as the dimension n tends to infinity, converges weakly to a Poisson process on the positive real line with intensity 1/2. In Paper III we complement this result by proving that the asymptotic distribution of the angles between the shortest non-zero vectors in a random lattice is that of a family of independent Gaussians.

In Papers IV and V we investigate the value distribution of the Epstein zeta function along the real axis. In Paper IV we determine the asymptotic value distribution and moments of the Epstein zeta function to the right of the critical strip as the dimension of the underlying space of lattices tends to infinity. In Paper V we determine the asymptotic value distribution of the Epstein zeta function also in the critical strip. As a special case we deduce a result on the asymptotic value distribution of the height function for flat tori. Furthermore, applying our results, we discuss a question posed by Sarnak and Strömbergsson as to whether there exist, in large dimensions, lattices for which the Epstein zeta function has no zeros on the positive real line.
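For reference, the Epstein zeta function of a lattice L in R^n of covolume one is, in one common normalization (conventions differ by s -> 2s):

\[
E_n(L, s) \;=\; \sum_{\mathbf{v} \in L \setminus \{\mathbf{0}\}} |\mathbf{v}|^{-2s},
\qquad \operatorname{Re} s > \frac{n}{2},
\]

which extends meromorphically to the whole complex plane with a single simple pole at s = n/2; the critical strip mentioned above is 0 < Re s < n/2, and the height function for flat tori R^n/L is closely related to the behaviour of E_n(L, s) near s = 0.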