41

Critical Points of Uncertain Scalar Fields: With Applications to the North Atlantic Oscillation

Vietinghoff, Dominik 29 May 2024 (has links)
In an era of rapidly growing data sets, information reduction techniques, such as extracting and highlighting characteristic features, are becoming increasingly important for efficient data analysis. Particularly relevant features of scalar fields are their critical points since they mark locations in the domain where a field's level set undergoes fundamental topological changes. There are well-established methods for locating and relating such points in a deterministic setting. However, many real-world phenomena studied in the computational sciences today are the result of a chaotic system that cannot be fully described by a single scalar field. Instead, the variability of such systems is typically captured with ensemble simulations, which generate a variety of possible outcomes of the simulated process. The topological analysis of such ensemble data sets, and uncertain data in general, is less well studied. In particular, there is no established definition for critical points of uncertain scalar fields. This thesis therefore aims to generalize the concept of critical points to uncertain scalar fields. While a deterministic field has a single set of critical points, each outcome of an uncertain scalar field has its own set of critical points. A first step towards finding an appropriate analog for critical points in uncertain data is to look at the distribution of all these critical points. In this work, different methods for analyzing this distribution are presented, which identify and track the likely locations of critical points over time, estimate their local occurrence probabilities, and eventually characterize their spatial uncertainty. A driving factor of winter weather in western Europe is the North Atlantic Oscillation (NAO), which is manifested by fluctuations in the sea level pressure difference between the Icelandic Low and the Azores High. Several methods have been developed to describe the strength of this oscillation. Some of them are based on certain assumptions, such as fixed positions of these two pressure systems. It is possible, however, that climate change will affect the locations of the main pressure variations and thus the validity of these descriptive methods. An alternative approach is based on the leading empirical orthogonal function (EOF) computed from the sea level pressure fields over the North Atlantic. The critical points of these fields indicate the actual locations of maximum pressure variations and can thus be used to assess how climate change affects these locations and to evaluate the validity of methods that use fixed locations to characterize the strength of the NAO. Because the climate is described by a chaotic system, such an analysis should incorporate the uncertain nature of climate predictions to produce statistically robust results. Extracting and tracking the positions of the maximum pressure variations that characterize the NAO therefore serves as a motivating practical application for the study of critical points in uncertain data in this work. Because uncertain data tend to be noisy, filtering is often required to separate relevant signals of variation from irrelevant fluctuations. A well-established method for extracting dominant signals from a time series of fields is to compute its empirical orthogonal functions (EOFs).
In the first part of this thesis, this concept is extended to the analysis of spatiotemporal ensemble data sets to decompose their variation into modes describing the variation in the ensemble direction and modes describing the variation in the time direction. An application to different climate data sets revealed that, depending on the way an ensemble has been generated, temporal and ensemble-wise variations are not necessarily independent, making it difficult to separate these signals. Next, a computational pipeline for tracking likely locations of critical points in ensembles of scalar fields is presented. It computes leading EOFs on sliding time windows for all ensemble members, extracts regions where critical points can be expected from the resulting ensembles of EOFs for every time window, and finally tracks the barycenters of these regions over time. An application of this pipeline to sea level pressure fields over the North Atlantic revealed systematic shifts in the locations of the maximum pressure variations that characterize the NAO. These shifts were more pronounced for more extreme climate change scenarios. Existing methods for the identification of critical points in ensembles of scalar fields do not distinguish between uncertainties that are inherent in the analyzed system itself and those that are additionally introduced by using a finite sample of fields to capture these variations. In the next part of this thesis, two approaches for estimating the occurrence probabilities of critical points are presented that explicitly take into account and communicate to the viewer the additional uncertainties caused by estimating these probabilities from finite-sized ensembles. A comparison with existing works on synthetic data demonstrates the added value of the new approaches. The last part of this thesis is devoted to the question of how to characterize the spatial uncertainty of critical points. It provides a sound mathematical formulation of the problem of finding critical points with spatial uncertainty and computing their spatial distribution. This ultimately leads to the notion of uncertain critical points as a generalization of critical points to uncertain scalar fields. An analysis of the theoretical properties of these structures yielded conditions under which well-interpretable results can be obtained and revealed interpretational difficulties when these conditions are not met. / In Zeiten immer größerer Datensätze gewinnen Techniken zur Informationsreduktion, etwa die Extraktion und Hervorhebung charakteristischer Merkmale, zunehmend an Bedeutung für eine effiziente Datenanalyse. Besonders relevante Merkmale von Skalarfeldern sind ihre kritischen Punkte, da sie Orte in der Domäne kennzeichnen, an denen sich die Topologie der Niveaumenge eines Feldes grundlegend verändert. Es existieren etablierte Methoden, um diese Punkte in deterministischen Feldern zu lokalisieren und sie miteinander in Beziehung zu setzen. Viele Alltagsphänomene, die heute untersucht werden, sind jedoch das Ergebnis chaotischer Systeme, die sich nicht vollständig durch ein einzelnes Skalarfeld beschreiben lassen. Stattdessen wird die Variabilität solcher Systeme mit Ensemblesimulationen erfasst, die eine Vielzahl möglicher Ergebnisse des simulierten Prozesses erzeugen. Die topologische Analyse solcher Ensemble-Datensätze und unsicherer Daten im Allgemeinen ist bisher weniger gut erforscht. Insbesondere gibt es noch keine etablierte Definition für die kritischen Punkte von unsicheren Skalarfeldern.
In dieser Dissertation wird daher eine Verallgemeinerung des Konzepts kritischer Punkte auf unsichere Skalarfelder angestrebt. Während ein deterministisches Feld einen einzigen Satz kritischer Punkte hat, hat jede Realisierung eines unsicheren Skalarfeldes ihre eigenen kritischen Punkte. Ein erster Schritt, um ein geeignetes Analogon für kritische Punkte in unsicheren Daten zu finden, besteht darin, die Verteilung all dieser kritischen Punkte zu untersuchen. Zu diesem Zweck werden in dieser Arbeit verschiedene Methoden vorgestellt, die es ermöglichen, die wahrscheinlichen Orte kritischer Punkte zu identifizieren und über die Zeit zu verfolgen, die lokalen Wahrscheinlichkeiten für das Auftreten kritischer Punkte zu schätzen und schließlich die räumliche Unsicherheit von kritischen Punkten zu charakterisieren. Ein bestimmender Faktor für das Winterwetter in Westeuropa ist die Nordatlantische Oszillation (NAO), die sich in Schwankungen des Druckunterschieds auf Meereshöhe zwischen dem Islandtief und dem Azorenhoch äußert. Es existieren unterschiedliche Methoden, um die Stärke dieser Oszillation zu beschreiben, von denen einige auf bestimmten Annahmen beruhen, wie etwa der fixen Position der beiden Drucksysteme. Es ist jedoch möglich, dass der Klimawandel die Lage der Hauptdruckschwankungen und somit die Gültigkeit dieser Beschreibungsmethoden beeinträchtigt. Ein alternativer Ansatz basiert auf der führenden empirischen Orthogonalfunktion (EOF), welche aus den Druckfeldern auf Meereshöhe über dem Nordatlantik berechnet wird. Die kritischen Punkte dieses Feldes entsprechen den tatsächlichen Orten maximaler Druckschwankungen. Sie können daher verwendet werden, um die Auswirkungen des Klimawandels auf diese Orte zu bewerten und dadurch die Gültigkeit von Methoden, die feste Positionen zur Charakterisierung der Stärke der NAO verwenden, zu beurteilen. Da das Klima durch ein chaotisches System beschrieben wird, sollte eine solche Analyse die Unsicherheit von Klimavorhersagen berücksichtigen, um statistisch zuverlässige Ergebnisse zu erhalten. Die Extraktion und Verfolgung der für die NAO charakteristischen Positionen maximaler Druckschwankungen dient daher als motivierende praktische Anwendung für die Untersuchung kritischer Punkte in unsicheren Daten in dieser Arbeit. Da unsichere Daten oft verrauscht sind, ist meist zunächst eine Filterung erforderlich, um relevante Signale von irrelevanten Fluktuationen zu trennen. Ein etabliertes Konzept zur Extraktion dominanter Signale aus Zeitreihen von Skalarfeldern ist die empirische Orthogonalfunktionsanalyse (EOF-Analyse). Im ersten Teil dieser Arbeit wird dieses Konzept auf die Analyse von zeitabhängigen Ensemble-Datensätzen erweitert, um deren Variation in Moden zu zerlegen, die die jeweiligen Schwankungen in Ensemble- und Zeitrichtung beschreiben. Eine Anwendung auf verschiedene Klimadatensätze hat gezeigt, dass je nachdem, wie ein Ensemble generiert wurde, zeitliche und ensemblebezogene Variationen nicht zwangsläufig unabhängig sind, was eine Trennung dieser Signale erschwert. Im weiteren wird eine Berechnungspipeline zur Verfolgung der wahrscheinlichen Positionen kritischer Punkte in Ensemblen von Skalarfeldern vorgestellt. Sie berechnet zunächst die führenden EOFs auf gleitenden Zeitfenstern für jedes Ensemblemitglied, extrahiert dann aus den resultierenden Ensemblen von EOFs an jedem Zeitfenster Regionen, in denen kritische Punkte zu erwarten sind, und verfolgt schließlich die Baryzentren dieser Regionen über die Zeit. 
Die Anwendung dieser Pipeline auf die nordatlantischen Meeresspiegeldruckfelder hat eine systematische Verschiebungen der für die NAO charakteristischen Orte der maximalen Druckvariationen offenbart. Dabei führten extremere Klimawandelszenarien zu stärkeren Verschiebungen. Vorhandene Methoden zur Identifikation von kritischen Punkten in Ensemblen von Skalarfeldern unterscheiden nicht zwischen Unsicherheiten, die dem analysierten System selbst innewohnen, und solchen, die durch die Verwendung einer endlichen Stichprobe von Feldern zur Erfassung dieser Variationen zusätzlich verursacht werden. Im nächsten Teil dieser Arbeit werden daher zwei Ansätze zur Schätzung der Auftrittswahrscheinlichkeiten kritischer Punkte vorgestellt, die explizit auch die zusätzlichen Unsicherheiten berücksichtigen, die durch die Schätzung dieser Wahrscheinlichkeiten aus endlichen Ensemblen entstehen, und diese an den Betrachter kommunizieren. Der Mehrwert der neuen Verfahren wurde in einem Vergleich mit bestehenden Arbeiten auf synthetischen Daten demonstriert. Der letzte Teil dieser Arbeit ist der Frage gewidmet, wie sich die räumliche Unsicherheit kritischer Punkte charakterisieren lässt. Es wird eine fundierte mathematische Formulierung des Problems der Suche nach kritischen Punkten mit räumlicher Unsicherheit und der Berechnung ihrer räumlichen Verteilung erbracht. Das führt schließlich zum Begriff unsicherer kritischer Punkte als Verallgemeinerung von kritischen Punkten auf unsichere Skalarfelder. Eine Analyse der theoretischen Eigenschaften dieser Strukturen hat Bedingungen ergeben, unter denen einfach zu interpretierende Ergebnisse erzielt werden können, und offenbarte Interpretationsschwierigkeiten, die entstehen, wenn diese Bedingungen nicht erfüllt sind.
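As a rough illustration of the EOF-based analysis this abstract refers to, the sketch below computes the leading EOF of a time series of fields via an SVD of the anomaly matrix and marks, for each ensemble member, the grid cell where that EOF attains its largest absolute value. It is a minimal reading of the generic technique only; the array names, grid sizes and random data are placeholders, and the thesis's sliding-window tracking and uncertainty quantification are not reproduced here.

import numpy as np

def leading_eof(fields):
    """Leading empirical orthogonal function of a (time, ny, nx) field stack."""
    t, ny, nx = fields.shape
    x = fields.reshape(t, ny * nx)
    x = x - x.mean(axis=0)                 # anomalies about the temporal mean
    # SVD of the anomaly matrix; rows of vt are the spatial EOF patterns
    _, s, vt = np.linalg.svd(x, full_matrices=False)
    return vt[0].reshape(ny, nx), s[0] ** 2 / np.sum(s ** 2)

# One critical-point candidate per ensemble member: the location where the
# member's leading EOF attains its largest absolute value.
rng = np.random.default_rng(0)
ensemble = rng.standard_normal((5, 120, 30, 40))   # (member, time, ny, nx), synthetic
for m, member in enumerate(ensemble):
    eof1, explained = leading_eof(member)
    iy, ix = np.unravel_index(np.argmax(np.abs(eof1)), eof1.shape)
    print(f"member {m}: extremum of EOF1 at grid cell ({iy}, {ix}), "
          f"explained variance {explained:.2f}")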
42

Toward Error-Statistical Principles of Evidence in Statistical Inference

Jinn, Nicole Mee-Hyaang 02 June 2014 (has links)
The context for this research is statistical inference, the process of making predictions or inferences about a population from observations and analyses of a sample. In this context, many researchers want to grasp what inferences can be made that are valid, in the sense of being able to be upheld or justified by argument or evidence. Another pressing question among users of statistical methods is: how can spurious relationships be distinguished from genuine ones? Underlying both of these issues is the concept of evidence. In response to these (and similar) questions, the two questions I work on in this essay are: (1) what is a genuine principle of evidence? and (2) do error probabilities have more than a long-run role? Concisely, I propose that felicitous genuine principles of evidence should provide concrete guidelines on precisely how to examine error probabilities with respect to a test's aptitude for unmasking pertinent errors, which leads to establishing sound interpretations of results from statistical techniques. The starting point for my definition of genuine principles of evidence is Allan Birnbaum's confidence concept, an attempt to control misleading interpretations. However, Birnbaum's confidence concept is inadequate for interpreting statistical evidence, because using only pre-data error probabilities would not pick up on a test's ability to detect a discrepancy of interest (e.g., "even if the discrepancy exists") with respect to the actual outcome. Instead, I argue that Deborah Mayo's severity assessment is the most suitable characterization of evidence based on my definition of genuine principles of evidence. / Master of Arts
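To make the contrast between purely pre-data error probabilities and a post-data assessment concrete, the following sketch computes the standard severity quantity for the inference "mu > mu1" in a one-sided test of a normal mean with known sigma, SEV(mu > mu1) = P(Xbar <= observed xbar; mu = mu1). The numbers are illustrative only and are not taken from the thesis.

import numpy as np
from scipy.stats import norm

# One-sided test T+: H0: mu <= mu0 vs H1: mu > mu0, with known sigma.
# Severity of the post-data inference "mu > mu1" given the observed mean xbar:
# the probability the test would have produced a result no larger than the one
# observed, were mu1 actually the true value.
def severity(xbar, mu1, sigma, n):
    se = sigma / np.sqrt(n)
    return norm.cdf((xbar - mu1) / se)

mu0, sigma, n, xbar = 0.0, 1.0, 100, 0.2          # illustrative numbers
for mu1 in (0.0, 0.1, 0.2, 0.3):
    print(f"SEV(mu > {mu1:.1f}) = {severity(xbar, mu1, sigma, n):.3f}")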
43

Image texture analysis for inferential sensing in the process industries

Kistner, Melissa 12 1900 (has links)
Thesis (MScEng)-- Stellenbosch University, 2013. / ENGLISH ABSTRACT: The measurement of key process quality variables is important for the efficient and economical operation of many chemical and mineral processing systems, as these variables can be used in process monitoring and control systems to identify and maintain optimal process conditions. However, in many engineering processes the key quality variables cannot be measured directly with standard sensors. Inferential sensing is the real-time prediction of such variables from other, measurable process variables through some form of model. In vision-based inferential sensing, visual process data in the form of images or video frames are used as input variables to the inferential sensor. This is a suitable approach when the desired process quality variable is correlated with the visual appearance of the process. The inferential sensor model is then based on analysis of the image data. Texture feature extraction is an image analysis approach by which the texture or spatial organisation of pixels in an image can be described. Two texture feature extraction methods, namely the use of grey-level co-occurrence matrices (GLCMs) and wavelet analysis, have predominated in applications of texture analysis to engineering processes. While these two baseline methods are still widely considered to be the best available texture analysis methods, several newer and more advanced methods have since been developed, which have properties that should theoretically provide these methods with some advantages over the baseline methods. Specifically, three advanced texture analysis methods have received much attention in recent machine vision literature, but have not yet been applied extensively to process engineering applications: steerable pyramids, textons and local binary patterns (LBPs). The purpose of this study was to compare the use of advanced image texture analysis methods to baseline texture analysis methods for the prediction of key process quality variables in specific process engineering applications. Three case studies, in which texture is thought to play an important role, were considered: (i) the prediction of platinum grade classes from images of platinum flotation froths, (ii) the prediction of fines fraction classes from images of coal particles on a conveyor belt, and (iii) the prediction of mean particle size classes from images of hydrocyclone underflows. Each of the five texture feature sets was used as input to two different classifiers (K-nearest neighbours and discriminant analysis) to predict the output variable classes for each of the three case studies mentioned above. The quality of the features extracted with each method was assessed in a structured manner, based on their classification performance after the optimisation of the hyperparameters associated with each method. In the platinum froth flotation case study, steerable pyramids and LBPs significantly outperformed the GLCM, wavelet and texton methods. In the case study of coal fines fractions, the GLCM method was significantly outperformed by all four other methods. Finally, in the hydrocyclone underflow case study, steerable pyramids and LBPs significantly outperformed GLCM and wavelet methods, while the result for textons was inconclusive.
Considering all of these results together, the overall conclusion was drawn that two of the three advanced texture feature extraction methods, namely steerable pyramids and LBPs, can extract feature sets of superior quality, when compared to the baseline GLCM and wavelet methods in these three case studies. The application of steerable pyramids and LBPs to further image analysis data sets is therefore recommended as a viable alternative to the traditional GLCM and wavelet texture analysis methods. / AFRIKAANSE OPSOMMING: Die meting van sleutelproseskwaliteitsveranderlikes is belangrik vir die doeltreffende en ekono-miese werking van baie chemiese– en mineraalprosesseringsisteme, aangesien hierdie verander-likes gebruik kan word in prosesmonitering– en beheerstelsels om die optimale prosestoestande te identifiseer en te handhaaf. In baie ingenieursprosesse kan die sleutelproseskwaliteits-veranderlikes egter nie direk met standaard sensors gemeet word nie. Inferensiële waarneming is die intydse voorspelling van sulke veranderlikes vanaf ander, meetbare prosesveranderlikes deur van ‘n model gebruik te maak. In beeldgebaseerde inferensiële waarneming word visuele prosesdata, in die vorm van beelde of videogrepe, gebruik as insetveranderlikes vir die inferensiële sensor. Hierdie is ‘n gepaste benadering wanneer die verlangde proseskwaliteitsveranderlike met die visuele voorkoms van die proses gekorreleer is. Die inferensiële sensormodel word dan gebaseer op die analise van die beelddata. Tekstuurkenmerkekstraksie is ‘n beeldanalisebenadering waarmee die tekstuur of ruimtelike organisering van die beeldelemente beskryf kan word. Twee tekstuurkenmerkekstraksiemetodes, naamlik die gebruik van grysskaalmede-aanwesigheidsmatrikse (GSMMs) en golfie-analise, is sterk verteenwoordig in ingenieursprosestoepassings van tekstuuranalise. Alhoewel hierdie twee grondlynmetodes steeds algemeen as die beste beskikbare tekstuuranalisemetodes beskou word, is daar sedertdien verskeie nuwer en meer gevorderde metodes ontwikkel, wat beskik oor eienskappe wat teoreties voordele vir hierdie metodes teenoor die grondlynmetodes behoort te verskaf. Meer spesifiek is daar drie gevorderde tekstuuranalisemetodes wat baie aandag in onlangse masjienvisieliteratuur geniet het, maar wat nog nie baie op ingenieursprosesse toegepas is nie: stuurbare piramiedes, tekstons en lokale binêre patrone (LBPs). Die doel van hierdie studie was om die gebruik van gevorderde tekstuuranalisemetodes te vergelyk met grondlyntekstuuranaliesemetodes vir die voorspelling van sleutelproseskwaliteits-veranderlikes in spesifieke prosesingenieurstoepassings. Drie gevallestudies, waarin tekstuur ‘n belangrike rol behoort te speel, is ondersoek: (i) die voorspelling van platinumgraadklasse vanaf beelde van platinumflottasieskuime, (ii) die voorspelling van fynfraksieklasse vanaf beelde van steenkoolpartikels op ‘n vervoerband, en (iii) die voorspelling van gemiddelde partikelgrootteklasse vanaf beelde van hidrosikloon ondervloeie. Elk van die vyf tekstuurkenmerkstelle is as insette vir twee verskillende klassifiseerders (K-naaste bure en diskriminantanalise) gebruik om die klasse van die uitsetveranderlikes te voorspeel, vir elk van die drie gevallestudies hierbo genoem. Die kwaliteit van die kenmerke wat deur elke metode ge-ekstraheer is, is op ‘n gestruktureerde manier bepaal, gebaseer op hul klassifikasieprestasie na die optimering van die hiperparameters wat verbonde is aan elke metode. 
In die platinumskuimflottasiegevallestudie het stuurbare piramiedes en LBPs betekenisvol beter as die GSMM–, golfie– en tekstonmetodes presteer. In die steenkoolfynfraksiegevallestudie het die GSMM-metode betekenisvol slegter as al vier ander metodes presteer. Laastens, in die hidrosikloon ondervloeigevallestudie het stuurbare piramiedes en LBPs betekenisvol beter as die GSMM– en golfiemetodes presteer, terwyl die resultaat vir tekstons nie beslissend was nie. Deur al hierdie resultate gesamentlik te beskou, is die oorkoepelende gevolgtrekking gemaak dat twee van die drie gevorderde tekstuurkenmerkekstraksiemetodes, naamlik stuurbare piramiedes en LBPs, hoër kwaliteit kenmerkstelle kan ekstraheer in vergelyking met die GSMM– en golfiemetodes, vir hierdie drie gevallestudies. Die toepassing van stuurbare piramiedes en LBPs op verdere beeldanalise-datastelle word dus aanbeveel as ‘n lewensvatbare alternatief tot die tradisionele GSMM– en golfietekstuuranalisemetodes.
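As a hedged sketch of two of the feature families compared in this study, the code below extracts a baseline grey-level co-occurrence matrix (GLCM) descriptor and a local binary pattern (LBP) histogram with scikit-image and feeds their concatenation to a K-nearest-neighbours classifier from scikit-learn. The images and class labels are synthetic placeholders; steerable pyramids, textons, wavelets, discriminant analysis and the hyperparameter optimisation used in the thesis are omitted.

import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(img, levels=64):
    """Baseline GLCM descriptor: contrast/homogeneity/energy/correlation."""
    q = (img.astype(float) / 256 * levels).astype(np.uint8)      # quantise grey levels
    glcm = graycomatrix(q, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def lbp_features(img, p=8, r=1):
    """Histogram of uniform local binary patterns."""
    codes = local_binary_pattern(img, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

# Hypothetical training data: grey-level frames (froth, coal, underflow images)
# with known quality-variable classes.
rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(40, 128, 128), dtype=np.uint8)
labels = rng.integers(0, 3, size=40)

X = np.array([np.hstack([glcm_features(im), lbp_features(im)]) for im in images])
clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print(clf.predict(X[:5]))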
44

Preferências ecológicas e potencial bioindicador das diatomáceas para avaliação ambiental de represas do Estado de São Paulo, Brasil /

Lehmkuhl, Angela Maria da Silva January 2019 (has links)
Orientador: Denise de Campos Bicudo / Resumo: Este estudo baseou-se em um banco de dados limnológicos e biológicos (diatomáceas de sedimento superficial e da coluna da água) de 33 reservatórios com gradiente trófico (ultraoligotrófico a hipereutrófico) distribuídos na região sudeste do Estado de São Paulo. Visou, como primeira etapa, calcular os ótimos e as tolerâncias ecológicas (etapa de regressão) das espécies de diatomáceas com a finalidade de propor um índice de diatomáceas para avaliar o estado trófico de represas, bem como um modelo de função de transferência diatomácea-fósforo (etapa de calibração) para inferir níveis pretéritos de fósforo da água. Além disso, visou avaliar o efeito da eutrofização sobre a homogeneização taxonômica e funcional das comunidades de diatomáceas. As amostras do sedimento superficial (n = 113) e do plâncton (verão e inverno, n = 226) foram obtidas entre 2009 e 2014. O método da média ponderada (WA) foi utilizado para a etapa de regressão (ótimo e tolerância das espécies), e modelos de regressão clássica e inversa foram testados para a etapa de calibração para a proposição do índice trófico de diatomáceas e para o modelo de função de transferência diatomácea-fósforo. Foram calculados os ótimos e as tolerâncias de fósforo total para 58 (sedimento superficial) e 53 (plâncton) espécies de diatomáceas. O modelo proposto com base nas diatomáceas do sedimento superficial apresentou melhor habilidade (r2 0.71, p<0.001, RMSE 49.43 μg L-1) do que as planctônicas para proposição do índice de esta... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: This study was based on a limnological and biological database (surface sediment and water column diatoms) of 33 reservoirs with a trophic gradient (ultraoligo- to hypereutrophic) distributed in the southeastern region of São Paulo State. The first aim was to calculate the optima and ecological tolerances (regression stage) of diatom species in order to propose a diatom index to evaluate the trophic state of reservoirs, as well as a model of diatom-phosphorus transfer function (calibration step) to infer past levels of water phosphorus. In addition, it aimed to evaluate the effect of eutrophication on the taxonomic and functional homogenization of diatom communities. Surface sediment (n = 113) and plankton (summer and winter, n = 226) samples were obtained between 2009 and 2014. The weighted average (WA) method was used for the regression step (species optima and tolerances), and classical and inverse regression models were tested for the calibration step for the proposition of the trophic index of diatoms and for the diatom-phosphorus transfer function model. Optima and tolerances for total phosphorus were calculated for 58 (surface sediment) and 53 (plankton) diatom species. The model based on the surface sediment diatoms presented better ability (r2 0.71, p<0.001, RMSE 49.43 μg L-1) than the plankton diatoms for proposing the trophic diatom index of reservoirs (TDIR). The transfer function model showed high predictive ability (r2 0.80) and was based on 63 diatom species (su... (Complete abstract click electronic access below) / Doutor
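A minimal sketch of the weighted-averaging (WA) step the abstract describes, assuming synthetic arrays: species optima and tolerances are abundance-weighted means and standard deviations along the total-phosphorus gradient, and a sample's phosphorus level is inferred as the abundance-weighted mean of the optima of the taxa it contains. The classical/inverse deshrinking regressions tested in the thesis are omitted.

import numpy as np

def wa_optima_tolerances(abundance, tp):
    """Abundance-weighted optimum and tolerance of each taxon along a TP gradient.

    abundance: (n_sites, n_taxa) relative abundances; tp: (n_sites,) total phosphorus.
    """
    w = abundance.sum(axis=0)
    optima = (abundance * tp[:, None]).sum(axis=0) / w
    tolerances = np.sqrt((abundance * (tp[:, None] - optima) ** 2).sum(axis=0) / w)
    return optima, tolerances

def wa_infer(sample, optima):
    """Infer TP of one sample as the abundance-weighted mean of taxon optima."""
    return (sample * optima).sum() / sample.sum()

rng = np.random.default_rng(2)
abundance = rng.random((33, 58))            # 33 reservoirs x 58 taxa (illustrative shape)
tp = rng.uniform(5, 200, size=33)           # total phosphorus, ug/L (synthetic)
opt, tol = wa_optima_tolerances(abundance, tp)
print(wa_infer(abundance[0], opt))          # inferred TP for the first reservoir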
45

O humor como estratégia de compreensão e produção de charges: um estudo inferencial das charges de Myrria

Lima, Maria Francisca Morais de 25 February 2016 (has links)
Understanding opinion texts such as cartoons requires the reader to develop contextual skills capable of generating meaning. This thesis therefore discusses the importance of the inferential process as a strategy for understanding the humor in political cartoons, taking as its basis of analysis the principles of textuality of Beaugrande and Dressler (1981) and the framework of inferential categorization elaborated by Marcuschi (2012). The research problem consisted in analyzing inferential processes and their importance for the critical analysis of humorous texts. To this end, the following objectives were set: to analyze the inferential procedures that contribute to the understanding of the humor present in the cartoon; to trace a theoretical path covering not only the first studies on laughter but also the perception of humor and its use as a vehicle of social criticism; and to identify how the inferential process may contribute to the perception of the political criticism constituted in the cartoon genre. The study of the inferential process for understanding cartoons is justified because the reader, while reading a cartoon, uses inference to fill the gaps in meaning left in the text, sometimes on purpose, by the author. Such gaps are evidenced by the incongruity intentionally introduced by the cartoonist. This thesis is divided into four chapters: the first three present the theoretical framework that guided the analysis of the research corpus, which consists of cartoons published in the opinion section of the newspaper Acrítica from February to November 2013. As the instrument of analysis, ten (10) cartoons by Myrria were selected and organized into five groups, considering the similarity of the subjects presented. In the methodological field, phenomenology was chosen as the method of investigation, whose premises allow an understanding based on visions of man and world, together with content analysis. As a standard of comprehension for the cartoons presented, the skills of locating and inferring explicit and implicit information in the text and of establishing relationships between expressive resources and effects of meaning were used, thus enabling the reader not only to move beyond the surface structure of the text but also to understand the relationships built in the interdiscourse and intertext of the cartoon texts / A compreensão de textos opinativos como a charge exige do leitor o desenvolvimento de habilidades contextuais capaz de gerar sentido. Para tanto, esta tese discute a importância do processo inferencial como estratégia de compreensão do humor na charge política, tomando como base de análise os princípios de textualidade de Beaugrande e Dresller (1981) e o quadro de categorização inferencial elaborado por Marcuschi (2012). O problema da pesquisa consistiu em analisar os processos inferenciais e sua importância para a análise crítica de textos de humor. Para tanto, elencaram-se os seguintes objetivos: analisar os procedimentos inferenciais que contribuem para a compreensão do humor presente na charge; realizar um trajeto teórico, não só a respeito dos primeiros estudos sobre o riso, como também a respeito da percepção de humor e sua utilização como aporte de crítica social; identificar como o processo inferencial pode contribuir para a percepção da crítica política constituída no gênero charge.
O estudo do processo inferencial para a compreensão de charges se justifica, uma vez que o leitor, ao ler um texto chárgico, utiliza a inferência para preencher as lacunas de sentido deixadas, às vezes de propósito, pelo autor no texto. Tais lacunas são evidenciadas pela incongruência intencionalmente atribuída pelo chargista. Esta tese está dividida em quatro capítulos: nos três primeiros, apresentou-se um aporte teórico que balizou a análise do Corpus da pesquisa constituído por charges publicadas no caderno de opinião do jornal Acrítica no período de fevereiro a novembro de 2013. Como instrumento de análise, escolheram-se dez (10) charges de Myrria, organizadas em cinco grupos, considerando a similaridade dos assuntos apresentados. No campo metodológico, optou-se como método de investigação a fenomenologia, cujos pressupostos permitem realizar uma compreensão a partir das visões de homem e de mundo e a análise de conteúdo. Como padrão de compreensão das charges apresentadas, foram utilizadas as habilidades de localizar e inferir informações explícitas e implícitas no texto e o estabelecimento de relação entre os recursos expressivos e efeitos de sentido, possibilitando assim ao leitor, não só sair da estrutura superficial do texto, como também ser capaz de perceber as relações construídas no interdiscurso e no intertexto dos textos chárgicos
46

Improving process monitoring and modeling of batch-type plasma etching tools

Lu, Bo, active 21st century 01 September 2015 (has links)
Manufacturing equipment in semiconductor factories (fabs) provides abundant data and opportunities for data-driven process monitoring and modeling. In particular, virtual metrology (VM) is an active area of research. Traditional monitoring techniques using univariate statistical process control charts do not provide immediate feedback to quality excursions, hindering the implementation of fab-wide advanced process control initiatives. VM models or inferential sensors aim to bridge this gap by predicting quality measurements instantaneously using tool fault detection and classification (FDC) sensor measurements. The existing research in the field of inferential sensors and VM has focused on comparing regression algorithms to demonstrate their feasibility in various applications. However, two important areas, data pretreatment and post-deployment model maintenance, are usually neglected in these discussions. Since it is well known that the industrial data collected is of poor quality, and that semiconductor processes undergo drifts and periodic disturbances, these two issues are the roadblocks to furthering the adoption of inferential sensors and VM models. In data pretreatment, batch data collected from FDC systems usually contain inconsistent trajectories of various durations. Most analysis techniques require the data from all batches to be of the same duration with similar trajectory patterns. These inconsistencies, if unresolved, will propagate into the developed model, cause challenges in interpreting the modeling results, and degrade model performance. To address this issue, a Constrained selective Derivative Dynamic Time Warping (CsDTW) method was developed to perform automatic alignment of trajectories. CsDTW is designed to preserve the key features that characterize each batch and can be solved efficiently in polynomial time. Variable selection after trajectory alignment is another topic that requires improvement. To this end, the proposed Moving Window Variable Importance in Projection (MW-VIP) method yields a more robust set of variables with demonstrably more long-term correlation with the predicted output. In model maintenance, model adaptation has been the standard solution for dealing with drifting processes. However, most case studies have already preprocessed the model update data offline. This is an implicit assumption that the adaptation data is free of faults and outliers, which is often not true for practical implementations. To this end, a moving window scheme using Total Projection to Latent Structures (T-PLS) decomposition screens incoming updates to separate the harmless process noise from the outliers that negatively affect the model. The integrated approach was demonstrated to be more robust. In addition, model adaptation is very inefficient when there are multiplicities in the process; multiplicities can occur due to process nonlinearity, switches in product grade, or different operating conditions. A growing structure multiple model system using local PLS and PCA models has been proposed to improve model performance around process conditions with multiplicity. The use of local PLS and PCA models allows the method to handle a much larger set of inputs and overcome several challenges in mixture model systems. In addition, fault detection sensitivities are also improved by using the multivariate monitoring statistics of these local PLS/PCA models. These proposed methods are tested on two plasma etch data sets provided by Texas Instruments.
In addition, a proof of concept using virtual metrology in a controller performance assessment application was also tested.
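As background for the trajectory-alignment problem this abstract addresses, the sketch below implements classical dynamic time warping between two batch trajectories of different durations. The constrained, derivative-based CsDTW method proposed in the thesis adds path restrictions and feature preservation that are not reproduced here, and the signal names are synthetic.

import numpy as np

def dtw_path(a, b):
    """Classical dynamic time warping between two 1-D batch trajectories.

    Returns the cumulative alignment cost and the warping path as index pairs.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # trace back the optimal warping path
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        i, j = [(i - 1, j - 1), (i - 1, j), (i, j - 1)][step]
    return cost[n, m], path[::-1]

# Two FDC sensor trajectories of different durations (synthetic).
ref = np.sin(np.linspace(0, 3, 80))
new = np.sin(np.linspace(0, 3, 65)) + 0.05 * np.random.default_rng(3).standard_normal(65)
total_cost, path = dtw_path(ref, new)
print(total_cost, len(path))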
47

The role of doubt in bulimia nervosa

Wilson, Samantha 12 1900 (has links)
No description available.
48

Sistema Híbrido de Inferência Baseado em Análise de Componentes Principais e Redes Neurais Artificiais Aplicado a Plantas de Processamento de Gás Natural

Linhares, Leandro Luttiane da Silva 19 March 2010 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Nowadays, when market competition requires products of better quality and a constant search for cost savings and better use of raw materials, the search for more efficient control strategies becomes vital. In Natural Gas Processing Units (NGPUs), as in most chemical processes, quality control is accomplished through the composition of their products. However, chemical composition analysis has a long measurement time, even when performed by instruments such as gas chromatographs. This fact hinders the development of control strategies that provide a better process yield. Natural gas processing is one of the most important activities in the petroleum industry. The main economic product of an NGPU is liquefied petroleum gas (LPG). LPG is ideally composed of propane and butane; in practice, however, its composition contains contaminants such as ethane and pentane. In this work, an inferential system using neural networks is proposed to estimate the ethane and pentane mole fractions in the LPG and the propane mole fraction in the residual gas. The goal is to provide the values of these estimated variables every minute using a single multilayer neural network, making it possible to apply inferential control techniques in order to monitor the LPG quality and to reduce the propane loss in the process. To develop this work, an NGPU composed of two distillation columns, a deethanizer and a debutanizer, was simulated in the HYSYS software. The inference is performed through the process variables of the PID controllers present in the instrumentation of these columns. To reduce the complexity of the inferential neural network, the statistical technique of principal component analysis is used to decrease the number of network inputs, thus forming a hybrid inferential system. A simple strategy is also proposed in this work to correct the inferential system in real time, based on measurements from the chromatographs which may exist in the process under study. / Nos dias atuais, em que a concorrência de mercado exige produtos de melhor qualidade e a busca constante pela redução de custos e pelo melhor aproveitamento das matérias-primas, a utilização de estratégias de controle mais eficientes torna-se fundamental. Nas Unidades de Processamento de Gás Natural (UPGNs), assim como na maioria dos processos químicos, o controle de qualidade é realizado a partir da composição de seus produtos. Entretanto, a análise de composições químicas, mesmo quando realizada por equipamentos como os cromatógrafos a gás, apresenta longos intervalos de medição. Esse fato dificulta a elaboração de estratégias de controle que proporcionem um melhor rendimento do processo. Geralmente, o principal produto econômico de uma UPGN é o GLP (Gás Liquefeito de Petróleo). Outros produtos comumente obtidos nessas unidades são a gasolina natural e o gás residual. O GLP é formado idealmente por propano e butano. Entretanto, na prática, apresenta em sua composição contaminantes, tais como o etano e o pentano. Neste trabalho é proposto um sistema de inferência utilizando redes neurais para estimar as frações molares de etano e pentano no GLP e a fração molar de propano no gás residual. O objetivo é estimar essas variáveis a cada minuto com uma única rede neural de múltiplas camadas, permitindo a aplicação de técnicas de controle inferencial visando a controlar a qualidade do GLP e reduzir a perda de propano no processo. No desenvolvimento deste trabalho, é simulada no software HYSYS uma UPGN formada por uma coluna de destilação deetanizadora e outra debutanizadora. A inferência é realizada a partir das variáveis de processo de alguns controladores PID presentes na instrumentação das colunas citadas. Com o intuito de reduzir a complexidade da rede neural de inferência, é utilizada a técnica estatística de análise de componentes principais (ACP) para diminuir o número de entradas da rede. Tem-se, portanto, um sistema híbrido de inferência. Também é proposta neste trabalho uma estratégia simples para a correção em tempo real do sistema de inferência, tendo como base as medições dos possíveis cromatógrafos de linha presentes no processo em estudo.
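A rough sketch of the hybrid scheme the abstract describes, assuming scikit-learn stand-ins rather than the author's setup: PID process variables are standardised, reduced with principal component analysis, and fed to a single multilayer neural network that predicts the three mole fractions at every minute. The data, variable counts and network size below are invented for illustration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for one-minute snapshots of PID process variables
# (setpoints, measurements, outputs) from the deethanizer/debutanizer columns.
rng = np.random.default_rng(4)
X = rng.standard_normal((2000, 24))
# Targets: ethane and pentane mole fractions in LPG, propane fraction in residual gas.
Y = np.column_stack([
    0.02 + 0.01 * np.tanh(X[:, :8].sum(axis=1)),
    0.015 + 0.008 * np.tanh(X[:, 8:16].sum(axis=1)),
    0.03 + 0.01 * np.tanh(X[:, 16:].sum(axis=1)),
])

# Hybrid inferential sensor: standardise, keep the first principal components
# (illustrative choice of 10), then fit a single multilayer network.
sensor = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
sensor.fit(X[:1500], Y[:1500])
print(sensor.score(X[1500:], Y[1500:]))    # R^2 on held-out minutes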
49

Aplikace fuzzy logiky při hodnocení dodavatelů firmy / The Application of Fuzzy Logic for Rating of Suppliers for the Firm

Zegzulka, Ivo January 2014 (has links)
This thesis deals with the design of a fuzzy system that can evaluate suppliers of spare parts for a car service. The result should be applicable to the company Iveta Šťastníková - car and tire service. Primarily, it should simplify the operations associated with the selection of appropriate spare parts, tools and other equipment needed to operate the car service station. First, the theoretical basis for the thesis is introduced; then the current state is described and analyzed. The result is a proposed solution that should correspond to the needs of the owner.
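To illustrate the kind of rule-based evaluation such a fuzzy system performs, the toy sketch below fuzzifies three hypothetical supplier criteria (price, delivery time, defect rate) with simple membership functions, fires a few illustrative rules, and defuzzifies them into a single score by a weighted average. The criteria, rules and thresholds are invented and are not taken from the thesis.

import numpy as np

def ramp_down(x, good, bad):
    """Membership degree: 1 at or below `good`, 0 at or beyond `bad`, linear in between."""
    return float(np.clip((bad - x) / (bad - good), 0.0, 1.0))

def rate_supplier(price, delivery_days, defect_rate):
    # Fuzzify the crisp inputs (degree to which each criterion is 'good').
    cheap = ramp_down(price, 0.2, 1.0)          # price normalised to [0, 1]
    fast = ramp_down(delivery_days, 1, 7)       # days until delivery
    reliable = ramp_down(defect_rate, 0.0, 0.05)  # fraction of defective parts

    # Illustrative rule base (min = AND, max = OR), each rule firing toward a verdict value.
    rules = [
        (min(cheap, fast, reliable), 1.0),      # everything good    -> excellent
        (min(cheap, fast), 0.7),                # cheap and fast     -> good
        (max(1 - fast, 1 - reliable), 0.2),     # slow or unreliable -> poor
    ]
    # Weighted-average defuzzification into a single score in [0, 1].
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(rate_supplier(price=0.3, delivery_days=2, defect_rate=0.01))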
50

The effects of types of question on EFL learners' reading comprehension scores

Ehara, Kazuhiro January 2008 (has links)
Little empirical research has been conducted on what effect task-based reading instruction with reading questions will have on reading comprehension, particularly in the domain of second language reading comprehension. The purpose of this research is to investigate which type of questions, textually explicit (TE) or inferential (IF) questions, will best facilitate text comprehension, and which type will have the most beneficial effect on Japanese EFL learners at three proficiency levels (low, intermediate, and high). In the study, two groups of Japanese senior high school students (N = 69) were classified into three different proficiency groups. One group received instruction emphasizing TE questions while the other received instruction emphasizing IF questions. TE questions are text-bound questions whose answers are locally and explicitly stated in the text. In contrast, IF questions are more knowledge-bound questions whose answers largely depend on readers' cognitive resources, such as relevant linguistic knowledge, background knowledge, world knowledge or context. The different treatments lasted five months. The results were statistically analyzed. The study revealed a significant task effect for reading questions on Japanese EFL learners' reading. Although one type of instruction did not have a significantly better effect than the other, the large between-groups gain gap seems to imply that instruction emphasizing IF questions might facilitate text comprehension more. The study also found that the participants who received instruction emphasizing IF questions benefited from their instruction regardless of proficiency level. With regard to instruction emphasizing TE questions, the higher proficiency participants benefited significantly more from their instruction than the lower proficiency students. The study suggests that reading teachers should use a task-based teaching method with reading questions. If the use of reading questions is already a part of reading teachers' methodology, they should include not only commonly used textually explicit reading questions but also inferential ones. The study suggests that implementing these changes might help break the cycle of translation-bound reading instruction with its overemphasis on lower-level processing, and might lead students to read texts in a more meaningful, interactive way. / CITE/Language Arts
