201

Development of Wastewater Collection Network Asset Database, Deterioration Models and Management Framework

Younis, Rizwan January 2010
The dynamics around managing urban infrastructure are changing dramatically. Today's infrastructure management challenges, in the wake of shrinking coffers and stricter stakeholder requirements, include finding better condition assessment tools and prediction models, and making effective and intelligent use of hard-earned data to ensure the sustainability of urban infrastructure systems. Wastewater collection networks, a critical component of urban infrastructure, have been neglected, and as a result municipalities in North America and elsewhere have accrued significant liabilities and infrastructure deficits. To reduce the cost of ownership, cope with heightened accountability, and provide reliable and sustainable service, these systems need to be managed effectively and intelligently. The overall objective of this research is to present a new strategic management framework and related tools to support multi-perspective maintenance, rehabilitation and replacement (M, R&R) planning for wastewater collection networks. The principal objectives are: (1) developing a comprehensive wastewater collection network asset database of high quality condition assessment data to support the work presented in this thesis as well as future research in this area; (2) proposing a framework and related system to aggregate heterogeneous data from municipal wastewater collection networks and develop a better understanding of their historical and future performance; (3) developing statistical models to understand the deterioration of wastewater pipelines; (4) investigating how strategic management principles and theories can be applied to manage wastewater collection networks effectively, and proposing a new management framework and related system; and (5) demonstrating the application of the strategic management framework and economic principles, along with the proposed deterioration model, to develop long-term financial sustainability plans for wastewater collection networks. A relational database application, WatBAMS (Waterloo Buried Asset Management System), containing high quality data from the City of Niagara Falls wastewater collection system, is developed. The pipeline inspections were completed using a relatively new Side Scanner and Evaluation Technology camera that has advantages over traditional Closed Circuit Television cameras. Appropriate quality assurance and quality control procedures were developed and adopted to capture, store and analyze the condition assessment data. To aggregate heterogeneous data from municipal wastewater collection systems, a data integration framework based on a data warehousing approach is proposed. A prototype application, BAMS (Buried Asset Management System), based on XML technologies and specifications, demonstrates the implementation of the proposed framework. Using pipeline condition assessment data from the City of Niagara Falls network, the limitations of ordinary and binary logistic regression methodologies for deterioration modeling of wastewater pipelines are demonstrated, and two new empirical models based on the ordinal regression modeling technique are proposed. A new multi-perspective (operational/technical, social/political, regulatory, and financial) strategic management framework based on a modified balanced-scorecard model is developed.
The proposed framework is based on the findings of the first Canadian National Asset Management workshop held in Hamilton, Ontario in 2007. The application of the balanced-scorecard model, along with additional management tools such as strategy maps, dashboard reports and business intelligence applications, is presented using data from the City of Niagara Falls. Using economic principles and example management scenarios, the Monte Carlo simulation technique is applied along with the proposed deterioration model to forecast the financial requirements of long-term M, R&R plans for wastewater collection networks. A myriad of asset management systems and frameworks exist for transportation infrastructure; however, to date few efforts have focused on understanding the performance behaviour of wastewater collection systems and developing effective and intelligent M, R&R strategies. Incomplete inventories, together with the scarcity and poor quality of existing datasets on wastewater collection systems, were found to be critical limiting issues for research in this field. Existing deterioration models were found either to violate model assumptions or to rely on assumptions that could not be verified because of limited data of questionable quality. The degradation of reinforced concrete pipes was found to be affected by age, whereas for vitrified clay pipes the degradation was not age dependent. The results of the financial simulation model show that the City of Niagara Falls can save millions of dollars in the long term by following a proactive M, R&R strategy. The work presented in this thesis provides insight into how an effective and intelligent management system can be developed for wastewater collection networks. The proposed framework and related system will support the sustainability of these networks and assist municipal public works departments in managing them proactively.
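To give a concrete flavour of the Monte Carlo financial forecasting described in the abstract, the following minimal Python sketch couples an assumed annual probability of a pipe segment dropping one condition grade with repeated simulation of rehabilitation spending over a planning horizon. The grade-transition probabilities, unit cost, network size and horizon are invented placeholders for illustration only; they are not the fitted ordinal-regression deterioration model or the City of Niagara Falls figures used in the thesis.

```python
import random

# Illustrative placeholders -- NOT the thesis's fitted deterioration model.
P_DROP = {1: 0.02, 2: 0.04, 3: 0.07, 4: 0.10}   # annual prob. of dropping one grade
REHAB_COST = 250.0      # cost (k$) to rehabilitate one segment that reaches grade 5
HORIZON = 50            # planning horizon in years
N_TRIALS = 200          # Monte Carlo trials

def simulate_cost(initial_grades):
    """Total rehabilitation cost over the horizon for one Monte Carlo trial."""
    grades = list(initial_grades)
    total = 0.0
    for _ in range(HORIZON):
        for i, g in enumerate(grades):
            if g < 5 and random.random() < P_DROP[g]:
                grades[i] = g + 1
            if grades[i] == 5:              # worst grade: rehabilitate, reset to grade 1
                total += REHAB_COST
                grades[i] = 1
    return total

network = [random.choice([1, 1, 2, 2, 3]) for _ in range(200)]  # 200 pipe segments
costs = sorted(simulate_cost(network) for _ in range(N_TRIALS))
print("median long-term cost (k$):", costs[N_TRIALS // 2])
print("90th percentile (k$):", costs[int(0.9 * N_TRIALS)])
```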
202

Investigation of Buildup Dose for Therapeutic Intensity Modulated Photon Beams in Radiation Therapy

Javedan, Khosrow 14 July 2010
The buildup dose of megavoltage (MV) photon beams can be a limiting factor in intensity-modulated radiation therapy (IMRT) treatments. Excessive doses can cause patient discomfort and treatment interruptions, while underdosing may lead to local failure. Many factors which contribute to buildup dose, including the photon beam energy spectrum, scattered or contaminant radiation, and their angular distributions, are not modeled well in commercial treatment planning systems. The Monte Carlo method, the most accurate dose-calculation approach available, was therefore employed in these studies to estimate the doses. The buildup dose of 6 MV photon beams was investigated for three fundamentally different IMRT modalities, helical TomoTherapy, solid IMRT and multileaf collimator (MLC)-based IMRT: helical TomoTherapy was compared with traditional opposed tangential beams, and solid IMRT was compared with the MLC-based technique. According to our study, solid IMRT, as an alternative to MLC-based delivery, achieves the prescription dose-distribution objectives. Measurements and Monte Carlo calculations of buildup dose in chest wall treatment were compared between TomoTherapy IMRT and the traditional tangential-beam technique, and the effect of bolus in helical delivery was also investigated. In addition, measurements and Monte Carlo calculations of buildup dose in the solid IMRT and MLC-based IMRT treatment modalities were compared. A brass step compensator was designed and built for the solid IMRT, and matching MLC step sequences were used for the MLC IMRT. This dissertation also presents the commissioning of a Monte Carlo code system, BEAMnrc, for a Varian Trilogy linear accelerator (LINAC), and its application to buildup dose calculation. Scattered dose components, the MLC dose component and the mean spectral energy for the IMRT treatment techniques were analyzed. Measured and calculated 6 MV depth-dose curves and beam profiles agreed within ±1% or ±1 mm for 10 × 10 and 40 × 40 cm² fields. The optimum energy and radial spread of the electron beam incident on the tungsten target were found to be 6 MeV and 1 mm, respectively. The helical delivery study concluded that the buildup dose is higher with TomoTherapy than with the opposed tangential technique in chest wall treatment. The solid versus MLC IMRT comparison concluded that the buildup dose was up to 7% lower for solid IMRT than for MLC IMRT because of beam hardening by the brass compensator.
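As a rough illustration of the ±1% / ±1 mm agreement criterion quoted above, the sketch below compares a measured and a calculated depth-dose curve point by point: a point passes if the local dose difference is within 1% of the maximum dose, or if the calculated curve reaches the measured value within 1 mm (a simplified one-dimensional distance-to-agreement test). The example curves are made up, and this is not the full gamma analysis or the BEAMnrc commissioning workflow used in the dissertation.

```python
import numpy as np

def passes_1pct_1mm(depths_mm, measured, calculated, dose_tol=0.01, dist_tol_mm=1.0):
    """Boolean per point: dose difference within 1% of the maximum dose,
    or the calculated curve takes the measured value within 1 mm (simplified DTA)."""
    depths_mm = np.asarray(depths_mm, float)
    measured = np.asarray(measured, float)
    calculated = np.asarray(calculated, float)
    dmax = measured.max()
    results = []
    for d, m in zip(depths_mm, measured):
        # dose-difference criterion at the same depth
        c_here = np.interp(d, depths_mm, calculated)
        if abs(c_here - m) <= dose_tol * dmax:
            results.append(True)
            continue
        # distance-to-agreement: does the calculated curve reach the value m
        # anywhere within +/- 1 mm of this depth?
        nearby = np.linspace(d - dist_tol_mm, d + dist_tol_mm, 21)
        c_near = np.interp(nearby, depths_mm, calculated)
        results.append(bool(c_near.min() <= m <= c_near.max()))
    return np.array(results)

# made-up buildup-region example (depths in mm, doses normalised to ~1.0)
depths = np.arange(0, 31, 1.0)
meas = 1.0 - np.exp(-depths / 6.0)                             # toy "measured" buildup curve
calc = np.clip(1.0 - np.exp(-(depths - 0.3) / 6.0), 0.0, None) # slightly shifted "calculation"
print(f"{passes_1pct_1mm(depths, meas, calc).mean():.0%} of points pass")
```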
203

Diffusion processes in membranes containing coexisting domains investigated by Fluorescence Correlation Spectroscopy / Diffusionsprozesse in Membranen mit koexistierenden Domänen nach Fluoreszenz-Korrelationsspektroskopie Messungen

Hac, Agnieszka 17 December 2003
No description available.
204

Modèles de Potts désordonnés et hors de l'équilibre / Disordered and out-of-equilibrium Potts models

Chatelain, Christophe 17 December 2012
This thesis presents a synthetic overview of my research work, whose two main themes are the critical behaviour of the Potts model in the presence of disorder and the ageing of spin models following a quench.
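As a concrete illustration of the two ingredients named in this abstract, the sketch below runs Metropolis Monte Carlo dynamics for a q-state Potts model with quenched random couplings and follows the energy after a quench from a random (infinite-temperature) configuration to a low temperature. The lattice size, disorder distribution and temperature are arbitrary illustration choices, not the systems studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
L, q, T, sweeps = 24, 3, 0.5, 100          # lattice size, Potts states, temperature, sweeps

# quenched random ferromagnetic couplings on horizontal and vertical bonds
Jx = rng.uniform(0.5, 1.5, size=(L, L))    # bond between (i, j) and (i, j+1)
Jy = rng.uniform(0.5, 1.5, size=(L, L))    # bond between (i, j) and (i+1, j)

spins = rng.integers(0, q, size=(L, L))    # random initial state = quench from T = infinity

def local_energy(s, i, j, value):
    """Energy of site (i, j) holding `value`: E = -J * delta(s_i, s_j) per bond."""
    e = 0.0
    e -= Jx[i, j] * (value == s[i, (j + 1) % L])
    e -= Jx[i, (j - 1) % L] * (value == s[i, (j - 1) % L])
    e -= Jy[i, j] * (value == s[(i + 1) % L, j])
    e -= Jy[(i - 1) % L, j] * (value == s[(i - 1) % L, j])
    return e

def total_energy(s):
    # each bond is counted from both of its endpoints, hence the factor 1/2
    return sum(local_energy(s, i, j, s[i, j]) for i in range(L) for j in range(L)) / 2.0

for sweep in range(sweeps):
    for _ in range(L * L):                 # one Monte Carlo sweep = L*L attempted updates
        i, j = rng.integers(0, L, size=2)
        new = rng.integers(0, q)
        dE = local_energy(spins, i, j, new) - local_energy(spins, i, j, spins[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new
    if sweep % 25 == 0:
        print(f"sweep {sweep:3d}  energy per site = {total_energy(spins) / L**2:.4f}")
```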
205

Étude des transitions de phases dans le modèle de Higgs abélien en (2+1) dimensions et l'effet du terme de Chern-Simons / Study of phase transitions in the (2+1)-dimensional Abelian Higgs model and the effect of the Chern-Simons term

Nebia-Rahal, Faïza 10 1900
We investigate, via Monte Carlo simulations, the non-perturbative properties of a (2+1)-dimensional Abelian Higgs model, without and with the Chern-Simons term, in the symmetry-broken phase, in terms of its topological excitations: vortices and anti-vortices. The aim of the present work is to determine what phases the system exhibits in that sector and the effect of the Chern-Simons term on the confining potential induced between external charges found by Samuel. We formulate a lattice description of the effective model, starting from a tetrahedral tessellation of Euclidean three-space, to generate non-intersecting closed vortex loops. In the presence of the Chern-Simons term, we formulate and compute, for a given configuration, the linking number between the different closed vortex loops. We analyze the properties of the vacuum and compute the expectation values of the Wilson loop operator, of the Polyakov loop operator at different temperatures, and of the 't Hooft loop operator in the presence of the Chern-Simons term. In the absence of the Chern-Simons term, as we vary the mass of the vortex loops, we find two distinct phases in the symmetry-broken sector: the usual Higgs phase and a novel phase heralded by the appearance of so-called infinite loops. We also find that the force between external charges is screened, corresponding to a perimeter law for the Wilson loop and implying no confinement. However, after the transition, small external charges are still screened, but beyond a critical value of the external charge the free energy diverges. In the presence of the Chern-Simons term, and in the weak Chern-Simons coupling limit, we find that the behavior of the Wilson loop does not change: it still follows a perimeter law, implying no confinement. Moreover, the Chern-Simons term does not contribute to the Wilson loop, and the 't Hooft loop also follows a perimeter law.
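To make the linking-number computation mentioned above concrete, here is a minimal numerical sketch: two closed loops are discretized into segments and the Gauss linking integral is approximated by a double sum over segment pairs. The two circles below form a Hopf link, so the result should round to ±1. This is only an illustrative discretization, not the tetrahedral-tessellation lattice formulation used in the thesis.

```python
import numpy as np

def gauss_linking_number(loop1, loop2):
    """Approximate the Gauss linking integral for two closed polygonal loops.
    Each loop is an (N, 3) array of vertices; segments join consecutive vertices."""
    r1 = np.asarray(loop1, float)
    r2 = np.asarray(loop2, float)
    d1 = np.roll(r1, -1, axis=0) - r1          # segment vectors of loop 1
    d2 = np.roll(r2, -1, axis=0) - r2          # segment vectors of loop 2
    m1 = r1 + 0.5 * d1                         # segment midpoints
    m2 = r2 + 0.5 * d2
    total = 0.0
    for a, da in zip(m1, d1):
        diff = a - m2                          # r1 - r2 for every segment of loop 2
        cross = np.cross(da, d2)               # dr1 x dr2 for every segment of loop 2
        dist3 = np.linalg.norm(diff, axis=1) ** 3
        total += np.sum(np.einsum("ij,ij->i", diff, cross) / dist3)
    return total / (4.0 * np.pi)

# Hopf link: two unit circles in orthogonal planes, one shifted so they interlock
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle1 = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
circle2 = np.column_stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)])
print("linking number ~", round(float(gauss_linking_number(circle1, circle2))))
```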
206

Recherche de la production exotique de paires de quarks top de même signe au LHC avec le détecteur ATLAS / Search for same-sign top quark pair exotic production at the LHC with the ATLAS detector

Berlendis, Simon 21 September 2017
The Standard Model, which describes particle interactions at the quantum level, is an imperfect theory. It suffers from several unresolved theoretical problems and cannot explain certain astrophysical observations, such as dark matter and the baryon asymmetry of the universe. Several beyond-the-Standard-Model theories have been proposed to address some of these issues, and many of them predict new-physics phenomena at high energy. The aim of this thesis is to search for such phenomena in proton-proton collisions produced by the Large Hadron Collider at a center-of-mass energy of 13 TeV. Part of the work presented here was dedicated to the search for the production of same-sign top quark pairs, that is, pairs of top quarks with the same electric charge, as predicted by R-parity-violating supersymmetric models. These processes lead to a signature of same-sign leptons accompanied by b-jets, which has the advantage of being only weakly contaminated by Standard Model background. The studies presented in this thesis focus on two analyses, each searching for new-physics phenomena of a different nature in same-sign lepton events recorded by the ATLAS detector. The first analysis searched for supersymmetric processes in the data recorded in 2015 and 2016, corresponding to 36.1 fb$^{-1}$ of integrated luminosity. Same-sign top quark signals were implemented using R-parity-violating supersymmetric processes, and the signal regions associated with these processes were optimized. The second analysis searched for exotic (non-supersymmetric) processes in the data recorded in 2015, corresponding to 3.2 fb$^{-1}$ of integrated luminosity; it was largely motivated by a modest excess over the Standard Model prediction observed in two signal regions of the 8 TeV results. Part of the work for this analysis was dedicated to the development and validation of the background estimation methods. No deviation from the Standard Model predictions was observed in any of the signal regions considered in the two analyses, so the excess seen in the 8 TeV results is not confirmed by the more recent data. Exclusion limits on new-physics models were also extracted from the results, in particular on the R-parity-violating supersymmetric models used to produce the same-sign top quark processes.
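To give a flavour of the counting-experiment statistics behind such signal regions, the sketch below evaluates the widely used Asimov approximation for the expected discovery significance of s signal events over b background events, Z = sqrt(2[(s + b) ln(1 + s/b) - s]). The yields are invented numbers, not those of the ATLAS analyses described here, and the real analyses rely on a full profile-likelihood fit including systematic uncertainties.

```python
import math

def asimov_significance(s, b):
    """Expected discovery significance for s signal events over b background events,
    using the Asimov formula Z = sqrt(2 * ((s + b) * ln(1 + s/b) - s))."""
    if s <= 0 or b <= 0:
        raise ValueError("s and b must be positive")
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# invented yields for illustration only
for s, b in [(3.0, 1.5), (5.0, 2.0), (10.0, 4.0)]:
    print(f"s={s:>4}, b={b:>4}  ->  Z = {asimov_significance(s, b):.2f} sigma")
```

For these invented yields the expected significance stays below the conventional 5 sigma discovery threshold.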
207

Imagerie multimodale et quantitative en TEP/IRM / Multimodal and quantitative imaging in PET/MRI

Monnier, Florian 02 March 2018
The clinical introduction of bimodal imaging combining positron emission tomography (PET) and computed tomography (CT) was a major success in the 2000s. In medical imaging, multimodality usually aims to associate physiological information with anatomical information. Two approaches exist: either the two modalities are acquired separately and later combined by computerized image fusion, or the registration problems are avoided by acquiring both modalities in the same gantry. PET/CT imaging nevertheless has its limits, and combining magnetic resonance imaging (MRI) with PET offers advantages over CT. In particular, MRI provides excellent tissue contrast and gives access to multidimensional functional and morphological information thanks to the modularity of MRI acquisition. This information could improve the understanding of the pathophysiological processes involved in disease. Moreover, MRI is a non-ionizing modality, unlike CT. The introduction in the early 2010s of the first simultaneous PET/MRI systems opens up many possibilities, but challenges remain to be solved before these systems achieve the same clinical spread as PET/CT. In particular, photon attenuation, which must be corrected to make the imaging quantitative, remains an issue. In this work we address this issue by proposing solutions for the different regions of the body, with particular attention to the pelvic region. The state of the art offers few solutions for this region, even though it is rich in attenuating bone and is the site of the second most common cancer in men: prostate cancer. We assess the impact of the proposed solution on the scattered-photon correction, again with the aim of obtaining quantitative imaging. The correction and evaluation methodologies rely on Monte Carlo numerical simulations.
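The photon-attenuation problem can be made concrete with a short sketch: given a map of linear attenuation coefficients (a mu-map, in cm^-1 at 511 keV), the attenuation correction factor for a PET line of response is the exponential of the line integral of mu along that line. The sketch below samples a toy 2D mu-map along a straight line of response; the map values and geometry are illustrative assumptions, not a method from the thesis.

```python
import numpy as np

def attenuation_correction_factor(mu_map, pixel_size_cm, p0, p1, n_samples=500):
    """ACF = exp( integral of mu along the line of response from p0 to p1 ).
    mu_map: 2D array of linear attenuation coefficients (cm^-1);
    p0, p1: (row, col) end points of the LOR in pixel coordinates."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = p0 + np.outer(ts, p1 - p0)               # sample points along the LOR
    rows = np.clip(np.round(pts[:, 0]).astype(int), 0, mu_map.shape[0] - 1)
    cols = np.clip(np.round(pts[:, 1]).astype(int), 0, mu_map.shape[1] - 1)
    length_cm = np.linalg.norm(p1 - p0) * pixel_size_cm
    line_integral = mu_map[rows, cols].mean() * length_cm   # nearest-neighbour quadrature
    return float(np.exp(line_integral))

# toy mu-map: water background (~0.096 cm^-1 at 511 keV) with a block of denser "bone"
mu = np.full((128, 128), 0.096)
mu[40:70, 50:90] = 0.17
acf = attenuation_correction_factor(mu, pixel_size_cm=0.4, p0=(64, 0), p1=(64, 127))
print(f"attenuation correction factor along the central LOR: {acf:.1f}")
```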
208

Essays on Birnbaum-Saunders models

Santos, Helton Saulo Bezerra dos January 2013
In this thesis, we present three different applications of Birnbaum-Saunders models. In Chapter 2, we introduce a new nonparametric kernel method for estimating asymmetric densities, based on generalized skew-Birnbaum-Saunders distributions. Kernels based on these distributions have the advantage of providing flexibility in the levels of asymmetry and kurtosis. In addition, the generalized skew-Birnbaum-Saunders kernel density estimators are free of boundary bias and achieve the optimal rate of convergence for the mean integrated squared error of nonnegative asymmetric kernel density estimators. We carry out a data analysis consisting of two parts: first, a Monte Carlo simulation study to evaluate the performance of the proposed method; second, an application of the method to the estimation of the density of three real air-pollutant concentration data sets. The numerical results favor the proposed nonparametric estimators. In Chapter 3, we propose a new family of autoregressive conditional duration models based on scale-mixture Birnbaum-Saunders (SBS) distributions. The Birnbaum-Saunders (BS) distribution is a model that has recently received considerable attention because of its good properties. An extension of this distribution is the class of SBS distributions, which (i) inherits several of the good properties of the BS distribution, (ii) allows maximum likelihood estimation to be carried out efficiently via the EM algorithm, and (iii) yields a robust estimation procedure, among other properties. The autoregressive conditional duration model is the primary family of models for analyzing high-frequency financial transaction data. The methodology studied here includes parameter estimation by the EM algorithm, inference for these parameters, a predictive model and a residual analysis. We carry out Monte Carlo simulations to evaluate the performance of the proposed methodology, and we assess its practical usefulness using real data on financial transactions from the New York Stock Exchange. Chapter 4 deals with process capability indices (PCIs), tools widely used by companies to determine the quality of a product and the performance of their production processes. These indices were developed for processes whose quality characteristic follows a normal distribution; in practice, many such characteristics do not. In that case the PCIs must be modified to account for the non-normality, since the use of unmodified PCIs can lead to inadequate results. To establish quality policies that resolve this inadequacy, data transformations have been proposed, as well as the use of quantiles of non-normal distributions. An asymmetric non-normal distribution that has become very popular in recent times is the Birnbaum-Saunders (BS) distribution. We propose, develop, implement and apply a methodology based on PCIs for the BS distribution, and we carry out a simulation study to evaluate its performance. The methodology is implemented in the non-commercial, open-source statistical software R. We apply it to a real data set to illustrate its flexibility and potential.
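As a small illustration of the last chapter's theme, the sketch below draws samples from a Birnbaum-Saunders distribution using its standard normal representation, T = beta (alpha Z/2 + sqrt((alpha Z/2)^2 + 1))^2, and computes a Clements-type quantile capability index from the distribution's quantiles. The parameters and specification limits are invented, and this is just one common quantile-based PCI variant, not necessarily the exact indices proposed in the thesis.

```python
import numpy as np
from scipy.stats import norm

def bs_rvs(alpha, beta, size, rng):
    """Sample from a Birnbaum-Saunders(alpha, beta) distribution via Z ~ N(0,1):
    T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)**2 + 1))**2."""
    z = rng.standard_normal(size)
    a = alpha * z / 2.0
    return beta * (a + np.sqrt(a**2 + 1.0))**2

def bs_quantile(p, alpha, beta):
    """Quantile function of the BS distribution (same representation, z -> Phi^{-1}(p))."""
    a = alpha * norm.ppf(p) / 2.0
    return beta * (a + np.sqrt(a**2 + 1.0))**2

def clements_cpk(lsl, usl, alpha, beta):
    """Clements-type capability index: replace mean +/- 3 sigma by the
    0.135% / 50% / 99.865% quantiles of the (non-normal) distribution."""
    q_lo, med, q_hi = (bs_quantile(p, alpha, beta) for p in (0.00135, 0.5, 0.99865))
    return min((usl - med) / (q_hi - med), (med - lsl) / (med - q_lo))

rng = np.random.default_rng(42)
sample = bs_rvs(alpha=0.3, beta=2.0, size=5000, rng=rng)   # invented "process" data
print(f"sample mean = {sample.mean():.3f}, sample median = {np.median(sample):.3f}")
print(f"Cpk-type index for LSL=1.2, USL=3.5: {clements_cpk(1.2, 3.5, 0.3, 2.0):.3f}")
```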
209

Cinq essais dans le domaine monétaire, bancaire et financier / Five essays in money, banking and finance

Mercier, Fabien 12 December 2014
This thesis studies several central and topical questions of modern finance: the bounded rationality of economic agents and their behavioural biases with respect to nominal values, the problem of the fair valuation of equity prices, the reshaping of the European post-trade industry catalysed by the Eurosystem project Target 2 Securities, and models of default together with methods for estimating default cycles for a given sector. The techniques employed are varied: surveys on individual data, econometrics, game theory, graph theory, Monte Carlo simulations and hidden Markov chains. Concerning money illusion, the results confirm the robustness of earlier studies while opening new avenues of research, for example explaining the disparity of responses according to respondents' individual characteristics, in particular their university education. The study of the Fed model shows that the assumed long-term relationship between the nominal government bond yield and the dividend yield is neither robust nor useful for prediction over short time horizons. The conclusions of the Target 2 Securities study have since been confirmed by events. Finally, the default model based on hidden Markov chain estimation performs well in a European context, despite the relative scarcity of data available for its calibration.
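To illustrate the hidden-Markov-chain idea behind the default-cycle model, here is a minimal sketch of the forward (filtering) algorithm for a two-regime chain, a calm regime and a stressed regime, with Poisson-distributed default counts in each regime. The transition matrix, Poisson rates and observed counts are invented; a real application would estimate these parameters, for example by the Baum-Welch (EM) algorithm, rather than fix them by hand.

```python
import numpy as np
from scipy.stats import poisson

# invented two-regime model: state 0 = calm, state 1 = stressed
A = np.array([[0.90, 0.10],      # regime transition probabilities
              [0.30, 0.70]])
rates = np.array([2.0, 8.0])     # expected defaults per period in each regime
pi0 = np.array([0.8, 0.2])       # initial regime probabilities

defaults = [1, 3, 2, 7, 9, 10, 4, 2, 1, 0]   # invented observed default counts

def forward_filter(counts):
    """Scaled forward algorithm: returns P(state_t | counts_1..t) and the log-likelihood."""
    alpha = pi0 * poisson.pmf(counts[0], rates)
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    filtered = [alpha]
    for y in counts[1:]:
        alpha = (alpha @ A) * poisson.pmf(y, rates)   # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        filtered.append(alpha)
    return np.array(filtered), loglik

probs, ll = forward_filter(defaults)
for t, (y, p) in enumerate(zip(defaults, probs)):
    print(f"t={t:2d}  defaults={y:2d}  P(stressed)={p[1]:.2f}")
print(f"log-likelihood = {ll:.2f}")
```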
210

Conception, étude et modélisation d’une nouvelle génération de transistors à nanofils de silicium pour applications biocapteurs / Design, study and modeling of a new generation of silicon nanowire transistors for biosensing applications

Legallais, Maxime 15 November 2017
A nanonet exhibits remarkable properties that arise not only from the intrinsic properties of each nanostructure but also from their assembly into a network, which makes nanonets particularly attractive for many applications, notably in optics, electronics and the biomedical field. In this thesis work, nanonets made of silicon nanowires were integrated for the first time into field-effect transistors with a back-gate configuration. The technological route developed is fully compatible with low-cost, large-scale mass production of the devices within a thermal budget not exceeding 400 °C. Major technological advances were achieved through control of the sintering of the junctions between nanowires, the silicidation of the contacts, and the passivation of the nanowires with alumina. The fabricated nanonet transistors display excellent electrical characteristics that are stable in air and reproducible, and that can compete with those of single-nanowire transistors. An in-depth study of percolation, combining experimental measurements and Monte Carlo simulations, showed that having conduction limited by the junctions between nanowires considerably improves the electrical performance. After integration of the devices into biosensors, the transistors were shown to be electrically sensitive to DNA hybridization. Benefiting from a fabrication process compatible with the microelectronics industry, a 3D integration of these nanonet transistors onto a read-out circuit can be envisaged, opening the way to portable biosensors capable of real-time, label-free DNA detection. Moreover, the mechanical flexibility and optical transparency of the nanonet offer further opportunities in flexible electronics.
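To illustrate the percolation picture behind a nanonet, the following sketch runs a toy Monte Carlo experiment: sticks of fixed length are dropped at random into a unit square, two sticks are connected when they intersect, and a union-find structure checks whether a cluster spans from the left edge to the right edge. The stick length, densities and trial counts are arbitrary illustration choices, and the sketch ignores junction resistance, which the thesis identifies as the factor limiting conduction.

```python
import random, math

STICK_LENGTH = 0.15

def random_stick():
    x, y = random.random(), random.random()
    theta = random.uniform(0.0, math.pi)
    dx, dy = 0.5 * STICK_LENGTH * math.cos(theta), 0.5 * STICK_LENGTH * math.sin(theta)
    return (x - dx, y - dy, x + dx, y + dy)

def segments_intersect(s, t):
    """True if segments s and t cross (collinear overlaps are ignored)."""
    def orient(ax, ay, bx, by, cx, cy):
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    ax, ay, bx, by = s
    cx, cy, dx, dy = t
    d1 = orient(ax, ay, bx, by, cx, cy)
    d2 = orient(ax, ay, bx, by, dx, dy)
    d3 = orient(cx, cy, dx, dy, ax, ay)
    d4 = orient(cx, cy, dx, dy, bx, by)
    return d1 * d2 < 0 and d3 * d4 < 0

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]      # path halving
        i = parent[i]
    return i

def union(parent, i, j):
    parent[find(parent, i)] = find(parent, j)

def spans(n_sticks):
    """One trial: do n_sticks randomly placed sticks connect the left and right edges?"""
    sticks = [random_stick() for _ in range(n_sticks)]
    LEFT, RIGHT = n_sticks, n_sticks + 1          # two virtual electrodes
    parent = list(range(n_sticks + 2))
    for i, s in enumerate(sticks):
        if min(s[0], s[2]) <= 0.0:
            union(parent, i, LEFT)
        if max(s[0], s[2]) >= 1.0:
            union(parent, i, RIGHT)
        for j in range(i):
            if segments_intersect(s, sticks[j]):
                union(parent, i, j)
    return find(parent, LEFT) == find(parent, RIGHT)

for n in (100, 200, 400, 600):
    p = sum(spans(n) for _ in range(20)) / 20
    print(f"{n:4d} sticks -> spanning probability ~ {p:.2f}")
```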
