1461

Causal latent space-based models for scientific learning in Industry 4.0

Borràs Ferrís, Joan 30 October 2023 (has links)
This Ph.D. thesis is devoted to studying, developing, and applying data-driven methodologies, based on multivariate latent variable statistical models, to address the scientific learning paradigm in the Industry 4.0 environment. Particular emphasis is placed on causal latent variable-based models using both data from a planned design of experiments and, mainly, data from the daily production process, namely happenstance data. The dissertation is structured in five parts. The first part discusses the scientific learning paradigm in the Industry 4.0 environment. The objectives of the thesis are highlighted, and a comprehensive description is given of the latent variable-based models on which the novel methodologies proposed in this thesis are founded. In the second part, the novel methodological contributions are presented. First, the potential of PLS to analyze data from a DOE, with or without missing runs, is illustrated. Then, causal latent variable-based models are used to define the raw material design space that provides assurance of quality, with a certain confidence level, for the critical-to-quality attributes, jointly with the development of a novel latent space-based multivariate capability index to rank and select suppliers for a particular raw material used in a manufacturing process. The third part addresses novel applications of causal latent variable-based models using happenstance data. First, it concerns a health application: the COVID-19 pandemic. In this context, latent variable-based models are used to develop an alternative to placebo-controlled clinical trials. Then, latent variable-based models are used to optimize processes within the framework of industrial applications. The fourth part introduces a graphical user interface, developed in Python, that integrates the developed methods and aims to be self-explanatory and user-friendly. Finally, the last part discusses the relevance of this dissertation, including proposals that deserve further research. / Borràs Ferrís, J. (2023). Causal latent space-based models for scientific learning in Industry 4.0 [Doctoral thesis]. Universitat Politècnica de València. 
https://doi.org/10.4995/Thesis/10251/198993
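The PLS-on-DOE idea mentioned in the second part can be sketched with a small, hypothetical example; this is not the thesis's methodology or its Python interface, just a minimal illustration using scikit-learn's PLSRegression on an invented two-factor factorial design with one run removed.

```python
# Minimal sketch: PLS regression on a small simulated DOE in which one run is
# missing. Design, response model and noise level are invented for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Full 3x3 factorial design in two coded factors, plus their interaction column.
levels = [-1.0, 0.0, 1.0]
factors = np.array([[a, b] for a in levels for b in levels])
X = np.column_stack([factors, factors[:, 0] * factors[:, 1]])

# Hypothetical quality response: main effects 2.0 and -1.5, interaction 0.5.
y = 5.0 + X @ np.array([2.0, -1.5, 0.5]) + rng.normal(0, 0.1, len(X))

# Drop one run to mimic a missing experiment and fit a two-component PLS model.
keep = np.arange(len(X)) != 4
pls = PLSRegression(n_components=2).fit(X[keep], y[keep])

print("regression coefficients:", np.round(pls.coef_.ravel(), 2))
print("predicted response at the missing run:", pls.predict(X[~keep]).ravel())
```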
1462

An Analysis of Consequences of Land Evaluation and Path Optimization

Murekatete, Rachel Mundeli January 2018 (has links)
Planners who are involved in locational decision making often use raster-based geographic information systems (GIS) to quantify the value of land in terms of suitability or cost for a certain use. From a computational point of view, this process can be seen as a transformation of one or more sets of values associated with a grid of cells into another set of such values through a function reflecting one or more criteria. While it is generally anticipated that different transformations lead to different 'best' locations, little is known about how such differences arise (or do not arise). Examples of such spatial decision problems are easily found in the literature, and many of them concern the selection of a set of cells (to which the land use under consideration is allocated) from a raster surface of suitability or cost, depending on context. To facilitate GIS's algorithmic approach, it is often assumed that the quality of the set of cells can be evaluated as a whole by the sum of their cell values. The validity of this assumption must be questioned, however, if those values are measured on a scale that does not permit arithmetic operations; the ordinal scale of measurement in Stevens's typology is one such example. A question naturally arises: is there a more mathematically sound and consistent approach to evaluating the quality of a path when the quality of each cell of the given grid is measured on an ordinal scale? The thesis attempts to answer these questions in the context of path planning through a series of computational experiments using a number of random landscape grids with a variety of spatial and non-spatial structures. In the first set of experiments, we generated least-cost paths on a number of cost grids transformed from the landscape grids using a variety of transformation parameters and analyzed the locations and (weighted) lengths of those paths. Results show that the same pair of terminal cells may well be connected by different least-cost paths on different cost grids even though they are derived from the same landscape grid, and that the variation among those paths is affected by how given values are distributed in the landscape grid as well as by how derived values are distributed in the cost grids. Most significantly, the variation tends to be smaller when the landscape grid contains more distinct patches of cells potentially attracting or distracting cost-saving passage, or when the cost grid contains a smaller number of low-cost cells. The second set of experiments aims to compare two optimization models, the minisum and minimax (or maximin) path models, which aggregate the values of the cells associated with a path using the sum function and the maximum (or minimum) function, respectively. Results suggest that the minisum path model is effective if the path search can be translated into the conventional least-cost path problem, which aims to find a path with the minimum cost-weighted length between two terminuses on a ratio-scaled raster cost surface, but the minimax (or maximin) path model is mathematically sounder if the cost values are measured on an ordinal scale and practically useful if the problem is concerned not with the minimization of cost but with the maximization of some desirable condition such as suitability.
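The contrast between the minisum (least-cost) and minimax path models can be made concrete with a minimal sketch. The grid values, the 4-neighbour move set, and the cost-on-cells model below are assumptions for illustration, not the experimental setup of the thesis.

```python
# Minimal sketch of minisum vs. minimax path search on a raster grid.
# Cell-based costs and 4-connectivity are simplifying assumptions.
import heapq

def best_path_cost(grid, start, goal, aggregate):
    """Dijkstra-style search over 4-connected cells; 'aggregate' combines the
    path cost so far with the next cell's value (sum -> minisum, max -> minimax)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        cost, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return cost
        if cost > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = aggregate(cost, grid[nr][nc])
                if new_cost < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = new_cost
                    heapq.heappush(heap, (new_cost, (nr, nc)))
    return float("inf")

# Hypothetical ordinal-looking cost surface (1 = low, 5 = high).
grid = [
    [1, 4, 1, 1],
    [1, 5, 2, 1],
    [1, 5, 5, 1],
    [1, 1, 1, 1],
]
start, goal = (0, 0), (0, 3)
print("minisum (total cost):", best_path_cost(grid, start, goal, lambda a, b: a + b))
print("minimax (worst cell):", best_path_cost(grid, start, goal, max))
```

On this particular grid the two criteria select different routes: the minisum path accepts one expensive cell to stay short, while the minimax path detours around every high-valued cell.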
1463

A Wide-Area Perspective on Power System Operation and Dynamics

Gardner, Robert Matthew 23 April 2008 (has links)
Classically, wide-area synchronized power system monitoring has been an expensive task requiring significant investment in utility communications infrastructures for the service of relatively few costly sensors. The purpose of this research is to demonstrate the viability of power system monitoring from very low voltage levels (120 V). Challenging the accepted norms in power system monitoring, the document presents the use of inexpensive GPS time-synchronized sensors in large numbers at the distribution level. In the past, such low-level monitoring has been overlooked due to a perceived imbalance between the required investment and the usefulness of the resulting deluge of information. However, distribution-level monitoring offers several advantages over bulk transmission system monitoring. First, practically everyone with access to electricity also has a measurement port into the electric power system. Second, internet access and GPS availability have become pedestrian commodities, providing a communications and synchronization infrastructure for the transmission of low-voltage measurements. Third, these ubiquitous measurement points exist in an interconnected fashion irrespective of utility boundaries. This work offers insight into which parameters are meaningful to monitor at the distribution level and provides applications that add unprecedented value to the data extracted from this level. System models comprising the entire Eastern Interconnection are exploited in conjunction with a bounty of distribution-level measurement data for the development of wide-area disturbance detection, classification, analysis, and location routines. The main contributions of this work are fivefold: the introduction of a novel power system disturbance detection algorithm; the development of a power system oscillation damping analysis methodology; the development of several parametric and non-parametric power system disturbance location methods; new methods of power system phenomena visualization; and the proposal and mapping of an online power system event reporting scheme. / Ph. D.
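The dissertation's detection algorithm is not reproduced here; as a generic illustration of flagging a wide-area event from distribution-level frequency measurements, the sketch below applies a simple rate-of-change-of-frequency (ROCOF) threshold to simulated data. The reporting rate, noise level, event shape, and threshold are all assumptions.

```python
# Minimal sketch: flag a frequency disturbance from a stream of synchronized
# frequency estimates using a rate-of-change-of-frequency (ROCOF) threshold.
# Reporting rate, threshold and the simulated generation-trip event are assumptions.
import numpy as np

fs = 10.0                      # assumed reporting rate, samples per second
t = np.arange(0, 60, 1 / fs)   # one minute of data
freq = 60.0 + 0.0005 * np.random.default_rng(1).normal(size=t.size)

# Inject a hypothetical generation-trip-like frequency excursion at t = 30 s.
event = t >= 30.0
freq[event] -= 0.05 * (1 - np.exp(-(t[event] - 30.0) / 1.0))

rocof = np.gradient(freq, 1 / fs)   # Hz per second
alarm = np.abs(rocof) > 0.02        # hypothetical detection threshold

if alarm.any():
    print(f"disturbance detected at t = {t[alarm][0]:.1f} s, "
          f"peak |ROCOF| = {np.abs(rocof).max():.4f} Hz/s")
```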
1464

Phytochemical investigation of Acronychia species using NMR and LC-MS based dereplication and metabolomics approaches / Etude phytochimique d’espèces du genre Acronychia en utilisant des approches de déréplication et métabolomique basées sur des techniques RMN et SM

Kouloura, Eirini 28 November 2014 (has links)
Medicinal plants constitute an unfailing source of compounds (natural products, NPs) utilised in medicine for the prevention and treatment of various diseases. The introduction of new technologies and methods in the field of natural products chemistry has enabled the development of high-throughput methodologies for determining the chemical composition of plant extracts, evaluating their properties, and exploring their potential as drug candidates. Lately, metabolomics, an integrated approach incorporating the advantages of modern analytical technologies and the power of bioinformatics, has proven an efficient tool in systems biology. In particular, the application of metabolomics for the discovery of new bioactive compounds constitutes an emerging field in natural products chemistry. In this context, the Acronychia genus of the Rutaceae family was selected based on its well-known traditional use as an antimicrobial, antipyretic, antispasmodic and anti-inflammatory therapeutic agent. Modern chromatographic, spectrometric and spectroscopic methods were utilised to explore its metabolite content along three basic axes, constituting the three chapters of this thesis. Briefly, the first chapter describes the phytochemical investigation of Acronychia pedunculata, the identification of the secondary metabolites contained in this species, and the evaluation of their biological properties. The second chapter concerns the development of analytical methods for the identification of acetophenones (chemotaxonomic markers of the genus) and the dereplication strategies for the chemical characterisation of extracts by UHPLC-HRMSn. The third chapter focuses on the application of metabolomic methodologies (LC-MS & NMR) for comparative analysis (between different species, origins, organs), chemotaxonomic studies (between species), and compound-activity correlations.
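Comparative metabolomic analysis of the kind described in the third chapter typically starts from an unsupervised projection of the sample-by-feature intensity table; the sketch below is a generic, hypothetical PCA example on simulated LC-MS peak intensities, not the workflow or data of the thesis.

```python
# Minimal sketch: PCA of a simulated LC-MS feature table (samples x metabolite
# features) to compare two hypothetical groups of extracts. Illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_per_group, n_features = 10, 200

# Simulated peak intensities: group B is shifted in a subset of features,
# mimicking organ- or species-specific metabolites.
group_a = rng.lognormal(mean=2.0, sigma=0.3, size=(n_per_group, n_features))
group_b = rng.lognormal(mean=2.0, sigma=0.3, size=(n_per_group, n_features))
group_b[:, :20] *= 1.8

X = np.vstack([group_a, group_b])
labels = ["A"] * n_per_group + ["B"] * n_per_group

# Log-transform and autoscale, then project onto two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(np.log(X)))

for lab, (pc1, pc2) in zip(labels, scores):
    print(f"sample {lab}: PC1 = {pc1:6.2f}, PC2 = {pc2:6.2f}")
```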
1465

Network Based Tools and Indicators for Landscape Ecological Assessments, Planning, and Design

Zetterberg, Andreas January 2009 (has links)
Land use change constitutes a primary driving force in shaping social-ecological systems worldwide, and its effects reach far beyond the directly impacted areas. Graph-based landscape ecological tools have become established as a promising way to efficiently explore and analyze the complex, spatial systems dynamics of ecological networks in physical landscapes. However, little attention has been paid to making these approaches operational within ecological assessments, physical planning, and design. This thesis presents a network-based, landscape-ecological tool that can be implemented for effective use by practitioners within physical planning and design, and within ecological assessments related to these activities. The tool is based on an ecological profile system, a common generalized network model of the ecological infrastructure, graph theoretic metrics, and a spatially explicit, geographically defined representation, deployable in a GIS. Graph theoretic metrics and analysis techniques are able to capture the spatio-temporal dynamics of complex systems, and the generalized network model places the graph theoretic toolbox in a geographically defined landscape. This provides completely new insights for physical planning and environmental assessment activities. The design of the model is based on the experience gained through seven real-world cases commissioned by different governmental organizations within Stockholm County. A participatory approach was used in these case studies, involving stakeholders of different backgrounds, in which the tool proved to be flexible and effective in the communication and negotiation of indicators, targets, and impacts. In addition to successful impact predictions for alternative planning scenarios, the tool was able to highlight critical ecological structures within the landscape, both from a system-centric and a site-centric perspective. Already deployed and used in planning, assessments, inventories, and monitoring by several of the involved organizations, the tool has proved to effectively meet some of the challenges of application in a multidisciplinary landscape.
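The graph-based idea behind the tool (habitat patches as nodes, potential dispersal links as edges, graph-theoretic metrics to flag structurally critical elements) can be sketched as follows; the patch coordinates, the dispersal threshold, and the choice of betweenness centrality are illustrative assumptions, not the ecological profile system described in the thesis.

```python
# Minimal sketch: habitat patches as graph nodes, linked when closer than a
# dispersal threshold; betweenness centrality highlights critical "stepping
# stones". Coordinates and threshold are hypothetical.
import math
import networkx as nx

# Hypothetical patch centroids (x, y) in metres.
patches = {
    "A": (0, 0), "B": (300, 100), "C": (600, 0),
    "D": (900, 150), "E": (1200, 0), "F": (600, 350),
}
dispersal_threshold = 400.0  # assumed maximum link distance for the focal profile

G = nx.Graph()
G.add_nodes_from(patches)
for u, (xu, yu) in patches.items():
    for v, (xv, yv) in patches.items():
        d = math.dist((xu, yu), (xv, yv))
        if u < v and d <= dispersal_threshold:
            G.add_edge(u, v, weight=d)

# Patches with high betweenness are critical for connectivity among the rest.
for patch, score in sorted(nx.betweenness_centrality(G, weight="weight").items(),
                           key=lambda kv: -kv[1]):
    print(f"patch {patch}: betweenness = {score:.2f}")
```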
1466

變數轉換之離群值偵測 / Detection of Outliers with Data Transformation

吳秉勳, David Wu Unknown Date (has links)
Detecting regression outliers is not trivial when there are many of them. Methods based on classical diagnostic plots sometimes fail to detect them; this phenomenon is known as the masking effect. To avoid it, we propose to find these multiple outliers by using a highly robust regression estimator called the least median of squares (LMS) estimator, which has the maximal (50%) breakdown point. The algorithm used to search for the LMS estimator is called the forward search algorithm. The estimator found by the forward search is shown to lead to the rapid detection of multiple outliers. Furthermore, 100 repeats of a simple forward search from a random starting subset are shown to provide sufficiently robust parameter estimates to reveal multiple outliers. Finally, the detected outliers are exhibited in a stalactite plot, which shows a highly stable pattern. For multivariate data, the Mahalanobis distance also suffers from the masking effect; this can be remedied by using a highly robust estimator called the minimum volume ellipsoid (MVE) estimator, which can likewise be found with the forward search algorithm and also has the maximal breakdown point. The detected outliers are then displayed in the stalactite plot. The second part of this dissertation concerns the transformation of regression data so that approximate normality and homogeneity of the residuals can be achieved. During the forward search, we monitor the score statistic and some other diagnostic plots. Together they provide a wealth of information about the transformation, along with the effect of individual observations on this statistic.
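The random-subset idea behind the LMS estimator can be sketched as below. This is a simple random elemental-subset search rather than the full forward search of the thesis (which grows the subset and monitors diagnostics at every step); the simulated data, the 100 random starts, and the outlier cut-off are illustrative.

```python
# Minimal sketch: approximate the least median of squares (LMS) regression fit
# by fitting OLS to random elemental subsets and keeping the candidate with the
# smallest median squared residual. Data and settings are illustrative.
import numpy as np

rng = np.random.default_rng(3)

# Simulated simple regression with a cluster of gross outliers (masking scenario).
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 1.5 * x + rng.normal(0, 0.5, n)
y[:10] += 15.0                      # ten gross outliers

X = np.column_stack([np.ones(n), x])

best_coef, best_crit = None, np.inf
for _ in range(100):                # 100 random starting subsets, as in the abstract
    subset = rng.choice(n, size=2, replace=False)   # elemental subset for a line
    coef, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
    crit = np.median((y - X @ coef) ** 2)           # median of squared residuals
    if crit < best_crit:
        best_coef, best_crit = coef, crit

residuals = y - X @ best_coef
outliers = np.abs(residuals) > 5.0 * np.sqrt(best_crit)   # crude cut-off
print("LMS-type fit (intercept, slope):", np.round(best_coef, 2))
print("flagged observations:", np.flatnonzero(outliers))
```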
1467

NaV1.5 Modulation: From Ionic Channels to Cardiac Conduction and Substrate Heterogeneity

Raad, Nour 16 January 2014 (has links)
No description available.
1468

AJUSTAMENTO DE LINHA POLIGONAL NO ELIPSÓIDE / TRAVERSE ADJUSTMENT IN THE ELLIPSOID

Bisognin, Márcio Giovane Trentin 26 April 2006 (has links)
Traverse adjustment on the surface of the ellipsoid, with the objectives of guaranteeing uniqueness of solution in the transport of curvilinear geodetic coordinates (latitude and longitude) and of azimuth, and of obtaining quality estimates. The coordinate transport and the azimuth transport are derived from the Legendre series of the geodesic line. This series is based on the Taylor series, with the length of the geodesic line as the argument. For practical applications the series must be truncated, and error functions for the latitude, the longitude and the azimuth must be computed. In this research, the series are truncated at the third derivative and the error functions are expressed in terms of the fourth derivative. The adjustment models based on the least-squares method are described: the combined model with weighted parameters, the combined (mixed) model, the parametric model (observation equations), and the correlates model (condition equations). The practical application is the adjustment, by the parametric model, of a traverse measured by the Instituto Brasileiro de Geografia e Estatística (IBGE), consisting of 8 vertices and 129.661 km in length. Errors in the observations are located with Baarda's data-snooping test in the last iteration of the adjustment, which flagged some observations as erroneous. The quality estimates are contained in the variance-covariance matrices, and the semi-axes of the error (standard) ellipse of each point are computed by means of the spectral decomposition (Jordan decomposition) of the submatrices of the variance-covariance matrix of the adjusted parameters (the coordinates). The application of the Legendre series is satisfactory for short distances, up to about 40 km. The convergence of the series is fast for the adjusted coordinates: with a stopping criterion of four decimals of an arc second, it is reached at the second iteration of the adjustment.
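The error-ellipse computation described above reduces to an eigendecomposition of a 2x2 covariance submatrix; the sketch below shows only this step, with an invented covariance matrix (the values are not from the IBGE traverse).

```python
# Minimal sketch: semi-axes and orientation of the standard (error) ellipse of an
# adjusted point from the 2x2 submatrix of the variance-covariance matrix of the
# adjusted coordinates. The covariance values are invented for illustration.
import numpy as np

# Hypothetical covariance submatrix of one adjusted point (units: m^2).
Sigma = np.array([[4.0e-4, 1.5e-4],
                  [1.5e-4, 2.0e-4]])

# Spectral (Jordan) decomposition: eigenvalues -> squared semi-axes,
# eigenvectors -> axis directions of the standard ellipse.
eigval, eigvec = np.linalg.eigh(Sigma)
order = np.argsort(eigval)[::-1]            # largest eigenvalue first
semi_major, semi_minor = np.sqrt(eigval[order])
orientation = np.degrees(np.arctan2(eigvec[1, order[0]], eigvec[0, order[0]]))

print(f"semi-major axis: {semi_major:.4f} m")
print(f"semi-minor axis: {semi_minor:.4f} m")
print(f"orientation of major axis: {orientation:.1f} degrees from the first axis")
```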
1469

Essays on Spatial Econometrics

Grahl, Paulo Gustavo de Sampaio 22 December 2012 (has links)
This dissertation focuses on spatial stochastic processes on a lattice (Cliff & Ord-type models). My contribution consists of using Edgeworth and saddlepoint series to investigate the small-sample size and power properties of tests for detecting spatial dependence in spatial autoregressive (SAR) stochastic processes, and of proposing a new class of spatial econometric models in which the spatial dependence parameters that enter the mean structure are different from those in the covariance structure. This allows a clearer interpretation of the models' parameters and generalizes the set of local and global models suggested by Anselin (2003) as an alternative to the traditional Cliff & Ord models. I propose an estimation procedure for the model's parameters and derive the asymptotic distribution of the parameter estimators. The suggested model provides some insights into the structure of the commonly used mixed regressive, spatial autoregressive model with spatial autoregressive disturbances (SARAR). The study of the small-sample properties of tests to detect spatial dependence expands on the existing literature by allowing the neighborhood structure to be a nonlinear function of the spatial dependence parameter. The use of series approximations instead of the often-used Monte Carlo simulation allows a simple way to compare test properties across different neighborhood structures and to correct for size when comparing power. I obtain the power envelope for testing the presence of spatial dependence in the SAR process using the optimal invariant test statistic, which is also locally uniformly most powerful invariant (LUMPI). I find that the LUMPI test is virtually UMP, since its power is very close to the power envelope. I suggest a practical procedure to build a test that, while not UMP, retains good power properties over a wider range of the spatial parameter than the LUMPI test. I find that power increases with sample size and with the spatial dependence parameter, which agrees with the literature. However, I call into question the consensus view that power decreases as the spatial weight matrix becomes more densely connected. This finding in the literature reflects an error of measure, because the hypotheses being compared are at very different statistical distances from the null. After adjusting for this, the power is larger for alternative hypotheses further away from the null, as one would expect.
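As a generic, hypothetical illustration of spatial dependence on a lattice (this is not the optimal invariant/LUMPI test developed in the dissertation), the sketch below simulates a first-order SAR process with a row-normalized rook-contiguity weight matrix and computes Moran's I of the resulting surface; the grid size and the value of rho are arbitrary.

```python
# Minimal sketch: simulate a SAR process y = (I - rho*W)^{-1} eps on a square
# lattice with row-normalized rook contiguity, then compute Moran's I.
# Grid size and rho are arbitrary; this is not the test studied in the thesis.
import numpy as np

rng = np.random.default_rng(4)
side = 10
n = side * side
rho = 0.6

# Row-normalized rook-contiguity weight matrix W.
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < side and 0 <= nj < side:
                W[k, ni * side + nj] = 1.0
W /= W.sum(axis=1, keepdims=True)

# SAR process: y = (I - rho W)^{-1} eps
eps = rng.normal(size=n)
y = np.linalg.solve(np.eye(n) - rho * W, eps)

# Moran's I statistic for the simulated surface (positive under rho > 0).
z = y - y.mean()
moran_I = (n / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I with rho = {rho}: {moran_I:.3f}")
```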
1470

Inégalités de déviations, principe de déviations modérées et théorèmes limites pour des processus indexés par un arbre binaire et pour des modèles markoviens / Deviation inequalities, moderate deviations principle and some limit theorems for binary tree-indexed processes and for Markovian models.

Bitseki Penda, Siméon Valère 20 November 2012 (has links)
The explicit control of the convergence of properly normalized sums of random variables, as well as the study of the moderate deviation principle associated with these sums, constitute the main subjects of this thesis. We mostly study two sorts of processes. First, we are interested in processes indexed by a binary tree, random or not. These processes have been introduced in the literature in order to study the mechanism of cell division. In Chapter 2, we study bifurcating Markov chains. These chains may be seen as an adaptation of "usual" Markov chains to the case where the index set has a binary structure. Under uniform and non-uniform geometric ergodicity assumptions on an embedded Markov chain, we provide deviation inequalities and a moderate deviation principle for bifurcating Markov chains. In Chapter 3, we are interested in p-order bifurcating autoregressive processes. These processes are an adaptation of p-order linear autoregressive processes to the case where the index set has a binary structure. We provide deviation inequalities, as well as a moderate deviation principle, for the least squares estimators of the autoregressive parameters of this model. In Chapter 4, we deal with deviation inequalities for bifurcating Markov chains on a Galton-Watson tree. These chains are a generalization of the notion of bifurcating Markov chains to the case where the index set is a binary Galton-Watson tree. In the case of cell division, they allow cell death to be taken into account. The main hypotheses that we make in this chapter are the uniform geometric ergodicity of an embedded Markov chain and the non-extinction of the associated Galton-Watson process. In Chapter 5, we are interested in first-order linear autoregressive models with correlated errors. More specifically, we focus on the Durbin-Watson statistic, which is at the base of the Durbin-Watson tests used to detect serial correlation in first-order autoregressive models. We provide a moderate deviation principle for this statistic. The proofs of the moderate deviation principles of Chapters 2, 3 and 4 are essentially based on the moderate deviation principle for martingales. To establish the deviation inequalities, we mostly use the Azuma-Bennett-Hoeffding inequality and the binary structure of the processes. Chapter 6 grew out of the importance that explicit ergodicity of Markov chains has in Chapter 2. Since explicit geometric ergodicity of discrete- and continuous-time Markov processes has been well studied in the literature, we focus on the sub-exponential ergodicity of continuous-time Markov processes. We provide explicit rates for the sub-exponential convergence of a continuous-time Markov process to its stationary distribution. The main hypotheses that we use are the existence of a Lyapunov function and of a minorization condition. The proofs are largely based on the coupling construction and the explicit control of the tail of the coupling time.
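The Durbin-Watson statistic studied in Chapter 5 is DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2, computed from the residuals e_t of the fitted autoregression; the sketch below evaluates it for a simulated first-order autoregressive model with AR(1) errors (all parameter values are illustrative).

```python
# Minimal sketch: Durbin-Watson statistic for the OLS residuals of a first-order
# autoregressive model with AR(1) errors. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n, theta, rho = 500, 0.5, 0.3   # sample size, AR coefficient, error autocorrelation

# Simulate X_t = theta * X_{t-1} + eps_t with eps_t = rho * eps_{t-1} + v_t.
eps = np.zeros(n)
x = np.zeros(n)
v = rng.normal(size=n)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + v[t]
    x[t] = theta * x[t - 1] + eps[t]

# OLS fit of X_t on X_{t-1}, then residuals.
y, x_lag = x[1:], x[:-1]
theta_hat = (x_lag @ y) / (x_lag @ x_lag)
resid = y - theta_hat * x_lag

# Durbin-Watson statistic: values well below 2 indicate positive serial correlation.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
print(f"estimated AR coefficient: {theta_hat:.3f}, Durbin-Watson: {dw:.3f}")
```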
