  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
571

In silico tools in risk assessment : of industrial chemicals in general and non-dioxin-like PCBs in particular

Stenberg, Mia January 2012 (has links)
Industrial chemicals produced or imported in the European Union in volumes above 1 tonne annually must be registered under REACH. A common problem with these chemicals is deficient information and a lack of data for assessing the hazards they pose to human health and the environment. Animal studies that provide the type of toxicological information needed are both expensive and time-consuming, and they also raise ethical concerns. Alternatives to animal testing are therefore requested, and REACH has called for increased use of in silico tools that generate non-testing data, such as structure-activity relationships (SARs), quantitative structure-activity relationships (QSARs), and read-across. The main objective of the studies underlying this thesis is to explore and refine the use of in silico tools in the risk assessment of industrial chemicals, in particular by relating properties of the molecular structure to the toxic effects of a chemical substance using the principles and methods of computational chemistry. The initial study was a survey of all industrial chemicals, from which an industrial chemical map was created, and a part of this map containing chemicals of potential concern was identified. Secondly, the environmental pollutants polychlorinated biphenyls (PCBs) were examined, in particular the non-dioxin-like PCBs (NDL-PCBs). A set of 20 NDL-PCBs was selected to represent the 178 PCB congeners with three to seven chlorine substituents. The selection combined statistical molecular design, for a representative selection, with expert judgement, to include congeners of specific interest. The 20 selected congeners were tested in vitro in as many as 17 different assays. The screening data were turned into interpretable toxicity profiles with multivariate methods and used to investigate potential classes of NDL-PCBs. It was shown that NDL-PCBs cannot be treated as one group of substances with similar mechanisms of action. Two groups of congeners were identified: one, consisting mainly of lower-chlorinated congeners with a higher degree of ortho substitution, showed higher potency in more assays (including all neurotoxicity assays); a second group included abundant congeners with a similar toxic profile that might contribute to a common toxic burden. To investigate the structure-activity pattern of the PCBs' effect on the dopamine transporter (DAT) in rat striatal synaptosomes, ten additional congeners were selected and tested in vitro. NDL-PCBs were shown to be potent inhibitors of DAT binding; the congeners with the highest DAT-inhibiting potency were tetra- and penta-chlorinated with 2-3 chlorine atoms in the ortho position. The model was not able to distinguish among congeners with activities in the lower μM range, which could be explained by a relatively unspecific response of the less ortho-chlorinated PCBs. / The European chemicals legislation REACH stipulates that chemicals produced or imported in quantities above 1 tonne per year must be registered and risk assessed. An estimated 30,000 chemicals are affected. The problem, however, is that the data and information available are often insufficient for a risk assessment. Effect data have largely come from animal testing, but animal studies are both costly and time-consuming, and ethical concerns also come into play. REACH has therefore called for an investigation of the possibility of using in silico tools to contribute the requested data and information.
In silico roughly means "in the computer" and refers to computational models and methods used to obtain information about the properties and toxicity of chemicals. The aim of the thesis is to explore the possibility of, and refine, the use of in silico tools to generate information for the risk assessment of industrial chemicals. The thesis describes quantitative models, developed with chemometric methods, for predicting the toxic effects of specific chemicals. In the first study (I), 56,072 organic industrial chemicals were examined. Using multivariate methods, a map of the industrial chemicals was created describing their chemical and physical properties. The map was used for comparisons with known and potential environmentally hazardous chemicals. The best-known environmental pollutants turned out to have similar principal properties and clustered together in the map; by studying that part of the map in detail, more potentially hazardous chemical substances could be identified. In studies two to four (II-IV), the environmental pollutant PCB was examined in detail. Twenty PCBs were selected so that they structurally and physicochemically represented the 178 PCB congeners with three to seven chlorine substituents. The toxicological effects of these 20 PCBs were examined in 17 different in vitro assays. The toxicity profiles of the 20 tested congeners were established, i.e. which have similar harmful effects and which differ, and these profiles were used to classify the PCBs. Quantitative models were developed to predict the effects of not-yet-tested PCBs and to gain further knowledge of the structural properties that give rise to undesirable effects in humans and the environment, information that can be used in a future risk assessment of non-dioxin-like PCBs. The final study (IV) is a structure-activity study that examines the inhibitory effect of the non-dioxin-like PCBs on the neurotransmitter dopamine in the brain.
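As a rough illustration of the kind of statistical molecular design used to pick a representative congener subset, the sketch below projects a descriptor table onto a few principal components and applies a greedy max-min (space-filling) selection. The file name, descriptor columns and congener count are placeholders, not the thesis data.

```python
# Sketch: select structurally representative congeners from a descriptor table.
# Assumes a CSV with one row per congener and numeric descriptor columns
# (the file name and column names are hypothetical placeholders).
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def maxmin_selection(scores, n_select, start=0):
    """Greedy space-filling design: repeatedly add the point farthest
    from the already-selected set (max-min distance criterion)."""
    selected = [start]
    dist = np.linalg.norm(scores - scores[start], axis=1)
    for _ in range(n_select - 1):
        nxt = int(np.argmax(dist))
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(scores - scores[nxt], axis=1))
    return selected

descriptors = pd.read_csv("pcb_descriptors.csv", index_col="congener")
X = StandardScaler().fit_transform(descriptors.values)

# Compress correlated descriptors into a few principal properties.
pca = PCA(n_components=3)
scores = pca.fit_transform(X)

chosen = maxmin_selection(scores, n_select=20)
print(descriptors.index[chosen].tolist())
```

In a real design, expert-selected congeners of specific interest would be added to the max-min set afterwards, as the abstract describes.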
572

Hybrid 2D and 3D face verification

McCool, Christopher Steven January 2007 (has links)
Face verification is a challenging pattern recognition problem. The face is a biometric that we, as humans, know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data, but the underlying structure of the face, or three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models. This amount of data is usually unavailable, and so this research examines two methods for overcoming this data limitation: 1. using holistic difference vectors of the face, and 2. dividing the 3D face into Free-Parts. Permutations of the holistic difference vectors are formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this approach is referred to as the Free-Parts approach. The extra observations from these two techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling respectively. It is shown that feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms (multi-algorithm fusion) to represent the same face data, for instance the 2D face data, or by capturing the face data with different sensors (multi-modal fusion), for instance capturing 2D and 3D face data. Multi-algorithm fusion is approached by combining verification systems that use holistic features and local features (Free-Parts), while multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the data being fused is collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed.
The consistent fusion framework, developed from the multi-algorithm and multi-modal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
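To make the idea of multi-modal classifier score fusion concrete, here is a minimal sketch that combines 2D and 3D match scores with a weighted sum after z-normalisation and reads off the false acceptance and false rejection rates at a threshold. The scores, normalisation scheme and fusion weight are synthetic, illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """False rejection: genuine scores below threshold.
    False acceptance: impostor scores at or above threshold."""
    frr = np.mean(genuine < threshold)
    far = np.mean(impostor >= threshold)
    return far, frr

# Synthetic match scores for illustration (higher = more similar).
rng = np.random.default_rng(0)
gen_2d, imp_2d = rng.normal(2.0, 1.0, 500), rng.normal(0.0, 1.0, 5000)
gen_3d, imp_3d = rng.normal(2.5, 1.0, 500), rng.normal(0.0, 1.0, 5000)

def znorm(scores, ref):
    # Normalise each modality before fusion so the weights are comparable.
    return (scores - ref.mean()) / ref.std()

w = 0.5  # fusion weight, a tunable assumption
gen_fused = w * znorm(gen_2d, imp_2d) + (1 - w) * znorm(gen_3d, imp_3d)
imp_fused = w * znorm(imp_2d, imp_2d) + (1 - w) * znorm(imp_3d, imp_3d)

# Pick the threshold that gives FAR of roughly 0.1% on the fused impostor scores.
thr = np.quantile(imp_fused, 0.999)
far, frr = far_frr(gen_fused, imp_fused, thr)
print(f"FAR={far:.4f}  FRR={frr:.4f}")
```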
573

The application of time-of-flight secondary ion mass spectrometry (ToF-SIMS) to forensic glass analysis and questioned document examination

Denman, John A January 2007 (has links)
The combination of analytical sensitivity and selectivity provided by time-of-flight secondary ion mass spectrometry (ToF-SIMS), with advanced statistical interrogation by principal component analysis (PCA), has allowed a significant advancement in the forensic discrimination of pen, pencil and glass materials based on trace characterisation.
574

Multi-layer Perceptron Error Surfaces: Visualization, Structure and Modelling

Gallagher, Marcus Reginald Unknown Date (has links)
The Multi-Layer Perceptron (MLP) is one of the most widely applied and researched Artificial Neural Network models. MLP networks are normally applied to supervised learning tasks, which involve iterative training methods that adjust the connection weights within the network. This is commonly formulated as a multivariate non-linear optimization problem over a very high-dimensional space of possible weight configurations. By analogy with the field of mathematical optimization, training an MLP is often described as a search of an error surface for a weight vector which gives the smallest possible error value. Although this presents a useful notion of the training process, there are many problems associated with using the error surface to understand the behaviour of learning algorithms and the properties of MLP mappings themselves. Because of the high dimensionality of the system, many existing methods of analysis are not well suited to this problem. Visualizing and describing the error surface are also nontrivial and problematic. These problems are specific to complex systems such as neural networks, which contain large numbers of adjustable parameters, and the investigation of such systems in this way is largely a developing area of research. In this thesis, the concept of the error surface is explored using three related methods. Firstly, Principal Component Analysis (PCA) is proposed as a method for visualizing the learning trajectory followed by an algorithm on the error surface. It is found that PCA provides an effective method for performing such a visualization, as well as providing an indication of the significance of individual weights to the training process. Secondly, sampling methods are used to explore the error surface and to measure certain of its properties, providing the necessary data for an intuitive description of the error surface. A number of practical MLP error surfaces are found to contain a high degree of ultrametric structure, in common with other known configuration spaces of complex systems. Thirdly, a class of global optimization algorithms is also developed, which is focused on the construction and evolution of a model of the error surface (or search space) as an integral part of the optimization process. The relationships between this algorithm class, the Population-Based Incremental Learning algorithm, evolutionary algorithms and cooperative search are discussed. The work provides important practical techniques for exploring the error surfaces of MLP networks. These techniques can be used to examine the dynamics of different training algorithms and the complexity of MLP mappings, and to give an intuitive description of the nature of the error surface. The configuration spaces of other complex systems are also amenable to many of these techniques. Finally, the algorithmic framework provides a powerful paradigm for visualization of the optimization process and for the development of parallel coupled optimization algorithms which apply knowledge of the error surface to solving the optimization problem.
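A minimal sketch of the first idea, visualizing a learning trajectory on the error surface: train a tiny MLP, record the full weight vector after each epoch, and project the trajectory onto its first two principal components with an SVD. The network size, toy task and learning rate are arbitrary assumptions for illustration, not the thesis's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)  # XOR-like toy task

# Tiny 2-4-1 MLP trained by batch gradient descent.
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)
lr, snapshots = 0.5, []

for epoch in range(200):
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    err = out[:, 0] - y
    # Backpropagation for the mean squared error.
    d_out = err[:, None] * out * (1 - out) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    # Snapshot of the full weight vector after this epoch.
    snapshots.append(np.concatenate([W1.ravel(), b1, W2.ravel(), b2]))

# Project the weight trajectory onto its first two principal components.
traj = np.array(snapshots)
centered = traj - traj.mean(0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T
print(coords[:5])  # 2-D trajectory coordinates, ready for plotting
```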
575

探索性資料分析方法在文本資料中的應用─以「新青年」雜誌為例 / A Study of Exploratory Data Analysis on Text Data ── A Case study based on New Youth Magazine

潘艷艷, Pan, Yan Yan Unknown Date (has links)
With rapid economic growth and the relentless development of the Internet, enormous amounts of data are generated online and offline every moment, of which roughly 80% are unstructured data such as text and images. How to quantify such data and choose suitable analysis methods is the key to extracting and exploiting valuable information. For text data, this thesis proposes an exploratory data analysis approach and uses the language changes in New Youth magazine as an example to show how text features are selected, quantified and analysed. First, taking each volume as the unit of analysis, we quantify the textual structure of each volume of New Youth from multiple angles, including the use of characters, sentences, classical and vernacular function words, and the sharing of common words, and use a combination of charts and tables to examine the course and characteristics of the magazine's language change. This covers both the mechanism of the shift from classical to vernacular Chinese and the evolution of the vernacular itself. Second, based on the results of this initial exploration, we look for text feature variables that can separate the classical and vernacular forms; using the first and seventh volumes of New Youth as training samples, we combine principal components and logistic regression to train a classifier for the two language forms, and then test it on the fourth volume. The results confirm that the extracted text variables can effectively distinguish articles written in classical and vernacular Chinese. In addition, drawing on the earlier exploratory results and the experience of humanities scholars, we explore the change in language form in the magazine's later period, namely from the vernacular of the May Fourth Movement to the vernacular characterised as "red Chinese" (the vernacular used in China after the Second World War). Training on the seventh and eleventh volumes confirms that these two volumes differ clearly in language form; classification predictions for Taiwan's United Daily News and mainland China's People's Daily show that the two newspapers lean in clearly different linguistic directions, a finding that merits further research. / Tremendous amounts of data are produced every day owing to the rapid development of computing technology and the economy. Unstructured data such as text, pictures, and videos account for nearly 80 percent of all data created, and choosing appropriate methods to quantify and analyse this kind of data determines whether useful information can be extracted. To this end, we propose a standard operating process of exploratory data analysis (EDA) and use a case study of language change in New Youth magazine as a demonstration. First, we quantify the texts of New Youth from different perspectives, including the use of characters, sentences, and function words, and the share of common vocabulary. We aim to detect the evolution of modern written Chinese itself as well as the change from classical Chinese to modern Chinese. Then, according to the results of the exploratory analysis, we treat the first and seventh volumes of New Youth as training data to develop a classification model and apply it to the fourth volume as testing data. The results show that classical Chinese and modern Chinese can be successfully classified. Next, we examine the change from the modern Chinese of the May 4th Movement period to the modern Chinese associated with the advocacy of socialism: we treat the seventh and eleventh volumes of New Youth as training data and again develop a classification model, which we then apply to the United Daily News from Taiwan and the People's Daily from mainland China. We find that the two newspapers are very different; the style of the United Daily News is closer to that of the seventh volume, while the style of the People's Daily is more like that of the eleventh volume, which indicates that the People's Daily is likely to have been influenced by the Soviet Union.
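A minimal sketch, under assumed feature and file names, of the modelling step described above: quantify each article with a few simple textual ratios, compress them with principal components, and fit a logistic regression separating classical from vernacular prose. The feature list, CSV files and label column are hypothetical stand-ins for the quantities computed from the magazine.

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Hypothetical per-article features: average sentence length, ratio of
# classical function characters, ratio of vernacular function characters,
# and share of the era's common vocabulary.
columns = ["avg_sentence_len", "classical_func_ratio",
           "vernacular_func_ratio", "common_vocab_share"]

train = pd.read_csv("newyouth_vol1_vol7_features.csv")   # placeholder file
test = pd.read_csv("newyouth_vol4_features.csv")         # placeholder file

model = make_pipeline(StandardScaler(),
                      PCA(n_components=2),
                      LogisticRegression())
model.fit(train[columns], train["is_vernacular"])  # placeholder label column

# Predicted probability that each volume-4 article is vernacular prose.
print(model.predict_proba(test[columns])[:, 1])
```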
576

Ageing assessment of transformer insulation through oil test database analysis

Tee, Sheng Ji January 2016 (has links)
Transformer ageing is inevitable, and it is a challenge for utilities to manage a large fleet of ageing transformers; this creates a need to monitor transformer condition. One of the most widely used methods is oil sampling and testing, so databases of oil test records are a rich source of information for transformer ageing assessment and asset management. In this work, databases from three UK utilities, covering about 4,600 transformers and 65,000 oil test entries, were processed, cleaned and analysed. The procedures used could help asset managers decide how to approach such databases, for example the need to address oil contamination, changes in measurement procedure and discontinuities caused by oil treatment. An early degradation phenomenon was detected in multiple databases/utilities, which was investigated and found to be caused by the adoption of the hydrotreatment oil refining technique in the late 1980s; asset managers may need to monitor the affected units more frequently and restructure long-term plans. The work subsequently focused on population analyses, which indicated that higher-voltage transformers (275 kV and 400 kV) are tested more frequently and for more parameters than lower-voltage units (33 kV and 132 kV). Acidity is the parameter that shows the highest correlation with transformer in-service age. In addition, the influence of the length of oil test records on population ageing trends was studied. It is found that a representative population ageing trend can be obtained even from a short period (e.g. two years) of oil test results, provided the transformer age profile is representative of the whole transformer population. Leading on from the population analyses, the seasonal influence on moisture was investigated, which highlights the importance of incorporating oil sampling temperature for better interpretation of moisture and, indirectly, breakdown voltage records. A condition mismatch between dielectric dissipation factor and resistivity was also discovered, which could mean the current IEC 60422 oil maintenance guide needs revising. Finally, insulation condition ranking was performed through principal component analysis (PCA) and the analytic hierarchy process (AHP). These two techniques were demonstrated to be not just capable alternatives to the traditional empirical formula: PCA allows fast, objective interpretation, while AHP allows flexible and comprehensive analysis that incorporates both objective and subjective inputs.
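As an illustration of the AHP step, the sketch below derives criterion weights from a pairwise comparison matrix via its principal eigenvector, checks the consistency ratio, and ranks transformers by a weighted sum of condition scores. The comparison values, the chosen parameters (acidity, moisture, breakdown voltage) and the condition scores are hypothetical examples, not values from the utilities' databases.

```python
import numpy as np

# Hypothetical pairwise comparison of three oil parameters on Saaty's 1-9 scale:
# acidity vs moisture vs breakdown voltage.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights = principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (0.58 is the standard random index for a 3x3 matrix).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))

# Rank transformers by the weighted sum of their normalised condition scores
# (rows = transformers, columns in the same order as the criteria above).
scores = np.array([[0.8, 0.6, 0.9],   # hypothetical normalised test results
                   [0.4, 0.7, 0.5],
                   [0.9, 0.2, 0.6]])
ranking = np.argsort(scores @ weights)[::-1]
print("best-to-worst transformer index:", ranking)
```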
577

Aide à la conception de chaînes logistiques humanitaires efficientes et résilientes : application au cas des crises récurrentes péruviennes / Resilient and efficient humanitarian supply chain design approach : application to recurrent peruvian disasters

Vargas Florez, Jorge 15 October 2014 (has links)
Every year, more than 400 natural disasters strike the world. To help the affected populations, humanitarian organisations pre-position emergency relief in warehouses. This thesis proposes decision-support tools to help them locate and size these warehouses. Our approach is based on the construction of representative scenarios. A scenario represents the occurrence of a disaster whose epicentre, severity and probability of occurrence are known; this step relies on the exploitation and analysis of databases of past disasters. The second step concerns the geographical propagation of the disaster and determines its impact on the population of the affected territories. This impact is a function of the vulnerability and resilience of the territory. Vulnerability measures the expected value of the damage, whereas resilience estimates the capacity to withstand the shock and recover quickly. Both are largely determined by economic and social factors, either structural (geography, GDP, etc.) or political (existence of relief infrastructure, building standards, etc.). We propose, by means of principal component analysis (PCA), to identify the influential factors of resilience and vulnerability, and then to estimate the number of victims from these factors. Often, infrastructure (water, telecommunications, electricity, transport routes) is destroyed or damaged by the disaster (e.g. Haiti in 2010). The last step aims to evaluate the logistical impacts with regard to restrictions on existing transport capacity and the destruction of all or part of the emergency stocks. The remainder of the study concerns the location and sizing of the warehouse network. Our models have the originality of taking into account the degradation of resources and infrastructure following the disaster (resilience dimension) and of seeking to optimise the ratio between the costs incurred and the result obtained (efficiency dimension). We first consider a single scenario, where the problem is an extension of a classical location problem; we then consider a set of probabilised scenarios, an approach that is essential to account for the highly uncertain character of humanitarian disasters. All of these contributions were confronted with reality in an application to the recurrent crises of Peru. These crises, mainly due to earthquakes and floods (El Niño), require the establishment of a first-aid logistics network that is both resilient and efficient. / Every year, more than 400 natural disasters hit the world. To assist the affected populations, humanitarian organizations store emergency aid in warehouses in advance. This PhD thesis provides tools to support decisions on the location and sizing of humanitarian warehouses. Our approach is based on the design of representative and realistic scenarios. A scenario expresses the occurrence of disasters whose epicenters, severity and frequency are known. This step is based on the exploitation and analysis of databases of past disasters. The second step addresses the possible propagation of a disaster; the objective is to determine its impact on the population of each affected area. This impact depends on the vulnerability and resilience of the territory.
Vulnerability measures the expected damage, while resilience estimates the ability to withstand a shock and recover quickly. Both are largely determined by social and economic factors, either structural (geography, GDP, etc.) or political (the presence or absence of relief infrastructure, the existence and strict enforcement of construction standards, etc.). Through principal component analysis (PCA) we identify, for each territory, the influential factors of resilience and vulnerability and then estimate the number of victims from these factors. Often, infrastructure (water, telecommunications, electricity, communication routes) is destroyed or damaged by the disaster (e.g. Haiti in 2010). The last step assesses the logistics impact of the disaster, specifically the limitations on transportation capacity and the destruction of all or part of the emergency relief inventories. The remainder of our study focuses on the location and allocation of a warehouse network. The proposed models have the originality of considering potential degradation of resources and infrastructure after a disaster (resilience dimension) and of seeking to optimize the balance between costs and results (effectiveness dimension). Initially we consider a single scenario, where the problem is an extension of classical location studies. Then we consider a set of probabilised scenarios; this approach is essential because of the highly uncertain character of humanitarian disasters. All of these contributions have been tested and validated through a real application case, the recurrent Peruvian disasters. These crises, mainly due to earthquakes and floods (El Niño), require the establishment of a first-aid logistics network that is both resilient and efficient.
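A toy sketch of the scenario-based location step: enumerate candidate warehouse subsets and keep the one minimising fixed cost plus the expected penalty for unmet demand, with each scenario degrading the capacity of warehouses near the epicentre (the resilience dimension). All names and numbers (costs, capacities, probabilities) are invented for illustration, not the thesis's data or optimisation model.

```python
from itertools import combinations

# Hypothetical candidate warehouses: (name, fixed cost, nominal capacity).
candidates = [("Lima", 100, 80), ("Cusco", 70, 50),
              ("Piura", 60, 40), ("Arequipa", 80, 60)]

# Hypothetical disaster scenarios: probability, relief demand, and the
# fraction of capacity each warehouse keeps after the disaster.
scenarios = [
    {"p": 0.5, "demand": 60,
     "keep": {"Lima": 1.0, "Cusco": 1.0, "Piura": 1.0, "Arequipa": 1.0}},
    {"p": 0.3, "demand": 90,
     "keep": {"Lima": 0.4, "Cusco": 1.0, "Piura": 0.8, "Arequipa": 1.0}},
    {"p": 0.2, "demand": 120,
     "keep": {"Lima": 0.2, "Cusco": 0.6, "Piura": 1.0, "Arequipa": 0.9}},
]
PENALTY = 5  # cost per unit of unmet demand (assumption)

def expected_cost(subset):
    fixed = sum(cost for name, cost, cap in candidates if name in subset)
    shortage = 0.0
    for s in scenarios:
        usable = sum(cap * s["keep"][name]
                     for name, cost, cap in candidates if name in subset)
        shortage += s["p"] * max(0.0, s["demand"] - usable)
    return fixed + PENALTY * shortage

names = [name for name, _, _ in candidates]
best = min((frozenset(sub) for r in range(1, len(names) + 1)
            for sub in combinations(names, r)), key=expected_cost)
print(sorted(best), round(expected_cost(best), 1))
```

A real instance would replace the enumeration with a stochastic facility-location formulation solved by a MILP solver, but the cost structure being minimised is the same.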
578

Analyse par ToF-SIMS de matériaux fragiles pour les micro/nanotechnologies : évaluation et amplification de l'information chimique / ToF-SIMS characterisation of fragile materials used in microelectronic and microsystem devices : validation and enhancement of the chemical information

Scarazzini, Riccardo 04 July 2016 (has links)
Today, a wide variety of so-called "fragile" materials are integrated into micro- and nanotechnology devices. These materials can be described as "fragile" because of their shape, their dimensions or their density. In this work, three categories of materials, at different levels of industrial and technological maturity, were studied by time-of-flight secondary ion mass spectrometry (ToF-SIMS): mesoporous silicon, polymethacrylates deposited as very thin films by initiated chemical vapour deposition (iCVD), and low-dielectric-constant (low-k) organosilicate (SiOCH) materials. The objective of this work is to verify and validate ToF-SIMS as a reliable technique for meeting the chemical characterisation needs of these materials, and to establish the consistency of the chemical information produced through interpretation of the ion/matter interaction taking place during analysis. For mesoporous silicon, the samples were sputtered with different primary ion sources (caesium, xenon, oxygen) and the secondary information generated, such as the differences in ionisation between the porous layer and the dense material, was analysed, notably as a function of the energy of the sputtering beam and of the porosity of the target material. Significantly different morphological modifications depending on the ion source were also observed and were correlated with different sputtering regimes, mainly induced by the porosity of the target. Regarding the characterisation of thin polymer films, very low-damage abrasion conditions, in particular the use of polyatomic argon cluster ions, were applied with the intention of obtaining secondary chemical information rich in high molecular masses. Discrimination of polymethacrylate films with near-identical chemical structures was achieved and a protocol for quantifying copolymers proposed. Moreover, by applying principal component analysis (PCA) to the spectra, a clear correlation was established between the principal components and the molecular weight of the polymer films. Finally, the impact of integration treatments such as etching or chemical cleaning, which are necessary for the industrial implementation of low-k materials but detrimental to their dielectric properties, was studied. To obtain depth-resolved chemical information, low-energy caesium sputtering was identified as the most sensitive and best-suited strategy. Likewise, PCA significantly amplified the chemical differences between samples, making it possible to relate variations in the dielectric constant to the chemical compositions. / Nowadays, the micro- and nanotechnology field integrates a wide range of materials that can be defined as "fragile" because of their shape, dimensions or density. In this work, three materials of this kind, at different levels of technological and industrial maturity, are studied by time-of-flight secondary ion mass spectrometry (ToF-SIMS): mesoporous silicon, thin polymethacrylate films deposited by initiated chemical vapour deposition (iCVD), and hybrid organosilicate (SiOCH) dielectric materials (low-k).
The objective is to verify and validate ToF-SIMS as a reliable characterisation technique for describing the chemical properties of these materials. Indeed, because of this intrinsic "fragility", the consistency of the chemical information depends on an appropriate interpretation of the specific ion/matter interactions taking place. For mesoporous silicon, a systematic analysis is carried out with various sputtering ion sources (caesium, xenon and oxygen); both sputtering and ionisation behaviours are examined relative to nonporous silicon, taking into account the energy of the sputtering beam and the porosity of the target material. Concerning nanometre-thick polymer films, low-damage analysis conditions based on argon cluster primary ion sources are applied in order to obtain significant molecular secondary ion information. Under these conditions, discrimination of quasi-identical nanometre-thick structures becomes possible and a quantification method for copolymers is proposed. In addition, with the help of principal component analysis (PCA) of the spectra, a significant correlation is obtained between the main principal component and the sample molecular weights. Finally, the effects of several industrial integration processes (such as etching or wet cleaning) applied to low-k materials are studied in order to understand their detrimental impact on the low-k insulating properties. To achieve depth-resolved chemical information, low-energy caesium sputtering is shown to be the most adapted and sensitive strategy. PCA also proves almost essential for amplifying the differences between samples, allowing variations in a physical property (the dielectric constant) to be related to the chemical composition.
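A rough sketch of the spectra-versus-molecular-weight analysis described above: normalise each ToF-SIMS spectrum to its total intensity, run PCA, and correlate the first principal component score with the known molecular weight of each film. The file name and the "Mw" column are placeholders, not the thesis data.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Placeholder input: one row per polymer film, peak-intensity columns plus
# a known molecular weight column "Mw".
data = pd.read_csv("polymethacrylate_tofsims_peaks.csv")
mw = data.pop("Mw").values

# Normalise each spectrum to its total ion count so films are comparable.
spectra = data.values / data.values.sum(axis=1, keepdims=True)

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)

# Correlation between the first principal component and molecular weight.
r = np.corrcoef(scores[:, 0], mw)[0, 1]
print(f"PC1 explains {pca.explained_variance_ratio_[0]:.1%} of variance, "
      f"corr(PC1, Mw) = {r:.2f}")
```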
579

Análise de Sinais Eletrocardiográficos Atriais Utilizando Componentes Principais e Mapas Auto-Organizáveis. / Atrial Eletrocardiographics Signals Analysis Using Principal Components and Self-Organizing Maps.

Coutinho, Paulo Silva 21 November 2008 (has links)
The analysis of electrocardiogram (ECG) signals can be of great importance for evaluating a patient's cardiac behaviour. ECG signals have specific characteristics according to the type of arrhythmia, and their classification depends on the morphology of the signal. This work considers a hybrid approach using principal component analysis (PCA) and self-organizing maps (SOM) to classify clusters arising from arrhythmias such as sinus tachycardia and, above all, atrial fibrillation. In this approach, PCA is used as a preprocessor to suppress signals from ventricular activity, so that the atrial activity present in the ECG is revealed in the form of the f waves. The SOM neural network is then used to classify the atrial fibrillation patterns and their clusters.
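A compact sketch of the second stage: a small self-organizing map written with plain NumPy to cluster feature vectors extracted from the atrial (f-wave) component. The input features, map size and training schedule are arbitrary assumptions for illustration, not the thesis configuration.

```python
import numpy as np

def train_som(data, rows=6, cols=6, epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: for each sample, find the best-matching
    unit and pull its neighbourhood towards the sample."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in rng.permutation(data):
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the best-matching unit.
            d2 = ((grid - np.array(bmu)) ** 2).sum(axis=2)
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

# Synthetic stand-in for PCA-reduced atrial-activity features
# (e.g. dominant f-wave frequency and amplitude per segment).
rng = np.random.default_rng(1)
features = np.vstack([rng.normal([5.0, 0.2], 0.3, (100, 2)),   # one rhythm pattern
                      rng.normal([7.5, 0.6], 0.3, (100, 2))])  # another pattern

som = train_som(features)
bmus = [np.unravel_index(np.argmin(np.linalg.norm(som - x, axis=2)),
                         som.shape[:2]) for x in features]
print(bmus[:5])  # map coordinates: nearby cells indicate similar atrial patterns
```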
580

Modelo HJM com jumps: o caso brasileiro / The HJM model with jumps: the Brazilian case

Suzuki, Fernando Kenji 22 August 2015 (has links)
Using market data obtained from BM&F Bovespa, this work proposes a possible variation of the Heath, Jarrow and Morton model in its discrete, multifactor form, with the insertion of jumps as a way to capture the effect of the meetings held by the Brazilian Monetary Policy Committee (Copom). Principal component analysis (PCA) is used to calibrate the model parameters, allowing the evolution of the fixed-rate (PRE) term structure of interest rates to be simulated via Monte Carlo simulation (MCS). The scenarios generated by simulating the curve at fixed (synthetic) vertices are then compared with the data observed in the market. / Using market data obtained from BM&F Bovespa, this work proposes a possible variation of the Heath, Jarrow and Morton model in its discrete, multifactor form, introducing jumps as a way to account for the effect of the meetings held by the Brazilian Monetary Policy Committee (Copom). Principal component analysis (PCA) is used to calibrate the model parameters, making it possible to simulate the evolution of the term structure of interest rates (ETTJ) of the fixed-rate curve in reais via Monte Carlo simulation (MCS). Using the scenarios of the curve simulated at fixed (synthetic) vertices, the results are compared with the data observed in the market.
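A simplified sketch of the simulation idea: extract factor loadings from daily changes of the fixed-rate curve with PCA, then evolve the curve by Monte Carlo, adding a jump on dates flagged as Copom meetings. The curve history file, vertex grid, jump size and meeting dates are placeholder assumptions, and the drift and no-arbitrage condition of the full HJM model are omitted.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Placeholder input: historical zero rates at fixed (synthetic) vertices,
# one row per business day, columns like "21d", "63d", ..., "756d".
curves = pd.read_csv("pre_curve_history.csv", index_col=0)
changes = curves.diff().dropna()

# Factor loadings and volatilities from PCA of daily curve changes
# (level / slope / curvature are the usual first three factors).
pca = PCA(n_components=3)
factor_scores = pca.fit_transform(changes.values)
loadings = pca.components_            # shape: (3, n_vertices)
vols = factor_scores.std(axis=0)      # daily volatility of each factor

rng = np.random.default_rng(42)
n_days, n_paths = 63, 1000
copom_days = {10, 40}                 # hypothetical meeting dates in the horizon
jump_std = 0.25e-2                    # 25 bp jump standard deviation (assumption)

last_curve = curves.values[-1]
paths = np.tile(last_curve, (n_paths, 1))
for day in range(n_days):
    shocks = rng.normal(size=(n_paths, 3)) * vols
    paths += shocks @ loadings        # diffusion driven by the PCA factors
    if day in copom_days:             # parallel jump at policy-meeting dates
        paths += rng.normal(0.0, jump_std, size=(n_paths, 1))

print("mean simulated curve after the horizon:", paths.mean(axis=0).round(4))
```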
