About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Comparison of Different Methods for Estimating Log-normal Means

Tang, Qi 01 May 2014
The log-normal distribution is a popular model in many areas, especially in biostatistics and survival analysis, where data tend to be right-skewed. In our research, a total of ten different estimators of the log-normal mean are compared theoretically. Simulations are run for different parameter values and sample sizes. The comparison shows that the "degree-of-freedom-adjusted" maximum likelihood estimator and the Bayesian estimator under quadratic loss perform best when mean square error (MSE) is used as the criterion. The ten estimators are applied to a real dataset from an environmental study at the Naval Construction Battalion Center (NCBC) Superfund site in Rhode Island.
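As a quick illustration of the kind of comparison the thesis performs, the sketch below contrasts two of the simpler candidates, the plain sample mean and the plug-in maximum likelihood estimator exp(mu_hat + sigma_hat^2/2), by simulated MSE. It is a minimal sketch under assumed parameter values; the thesis's full set of ten estimators is not reproduced.

```python
# Minimal sketch: compare two estimators of a log-normal mean by simulated MSE.
# Assumes X ~ LogNormal(mu, sigma); the true mean is exp(mu + sigma^2 / 2).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 1.0, 1.0, 30, 10_000
true_mean = np.exp(mu + sigma**2 / 2)

est_naive, est_mle = [], []
for _ in range(reps):
    x = rng.lognormal(mu, sigma, n)
    est_naive.append(x.mean())              # naive sample mean
    y = np.log(x)
    m, s2 = y.mean(), y.var(ddof=0)         # MLEs of mu and sigma^2
    est_mle.append(np.exp(m + s2 / 2))      # plug-in MLE of the mean

mse = lambda e: np.mean((np.asarray(e) - true_mean) ** 2)
print(f"MSE sample mean: {mse(est_naive):.4f}, MSE plug-in MLE: {mse(est_mle):.4f}")
```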
82

[en] THE LOG-PERIODIC MODEL FOR FINANCIAL CRASHES FORECASTING: AN ECONOMETRIC INVESTIGATION / [pt] UMA INVESTIGAÇÃO ECONOMÉTRICA DO MODELO LOG-PERIÓDICO PARA PREVISÃO DE CRASHES FINANCEIROS

LUIZA MORAES GAZOLA 04 July 2006
In this work we employ a model based on the theory of critical phenomena to explain asset price formation in the pre-crash period. The evolution of prices is described by a slow power-law growth decorated by oscillations that are periodic on a logarithmic scale, known as the log-periodic model. This growth is eventually interrupted by a collapse of prices that occurs within a short, critical time interval. The purpose of this work is to investigate the log-periodic model from an econometric standpoint, critiquing and improving its specification so that statistical inference on its parameters becomes more reliable. Based on this analysis we propose an extension of the log-periodic model that introduces an autoregressive, conditionally heteroskedastic structure in the error term of the original model. The model is applied to indices of several world markets: HANG SENG (Hong Kong), NASDAQ (USA), IBOVESPA (Brazil), MERVAL (Argentina), INDIA BSE NATIONAL (India) and FTSE100 (United Kingdom). Our results indicate that practical use of these models requires some caution, since their inferential basis is fragile.
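To make the model concrete, the sketch below fits a standard log-periodic power-law (LPPL) specification to synthetic log prices with scipy. The functional form is the commonly cited LPPL equation; the data, starting values and bounds are assumptions, and the thesis's autoregressive, conditionally heteroskedastic extension is not implemented here.

```python
# Minimal sketch: fit a log-periodic power law (LPPL) to synthetic log prices,
# assuming ln p(t) = A + B*(tc-t)^m + C*(tc-t)^m * cos(w*ln(tc-t) + phi), t < tc.
import numpy as np
from scipy.optimize import curve_fit

def lppl(t, tc, A, B, C, m, w, phi):
    dt = tc - t                      # time remaining until the critical (crash) date
    return A + B * dt**m + C * dt**m * np.cos(w * np.log(dt) + phi)

t = np.linspace(0.0, 9.0, 300)                        # hypothetical trading time
true = (10.0, 1.0, -0.5, 0.05, 0.5, 8.0, 1.0)         # synthetic pre-crash regime
y = lppl(t, *true) + 0.01 * np.random.default_rng(1).standard_normal(t.size)

p0 = (10.5, 1.0, -0.4, 0.05, 0.5, 7.0, 0.5)           # LPPL fits are sensitive to p0
lb = (9.05, -np.inf, -np.inf, -np.inf, 0.01, 2.0, -np.pi)   # keep tc > max(t)
ub = (15.0,  np.inf,  np.inf,  np.inf, 0.99, 20.0,  np.pi)
params, _ = curve_fit(lppl, t, y, p0=p0, bounds=(lb, ub))
print("estimated critical time tc ≈", round(params[0], 3))
```

The fragility the abstract warns about shows up here in practice: small changes to p0 or the bounds can move the estimated tc noticeably.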
83

Determining the optimal log position during primary breakdown using internal wood scanning techniques and meta-heuristic algorithms

Van Zyl, Fritz 03 1900
Thesis (MScEng (Industrial Engineering))--University of Stellenbosch, 2011. / During the 2009 financial year, sawlog production from plantations in South Africa amounted to 4.4 million m³, and sawn timber worth R4.2 billion was produced from these logs. At the current average price for structural timber, a 1% increase in volume recovery at a medium-sized South African sawmill with an annual log intake of 100 000 m³ would yield additional profit of about R2.2 million annually. The purpose of this project was to evaluate the potential increase in value recovery at sawmills through optimization of the positioning of a log at the primary workstation, taking the internal knot properties into account. Although not yet commercially available, a high-speed industrial log CT scanner is currently in development and will enable the internal characteristics of a log to be evaluated before processing. The external profiles and the internal knot properties of ten pine logs were measured and each whole log shape was digitally reconstructed. Using the sawmill simulation program Simsaw, explicit enumeration was performed to gather data, including the monetary value that can be earned from sawing the log in a specific position. For every log a total of 808 020 sawing positions were evaluated, each differing in rotation, skew and horizontal offset. In the sawmill production environment only a few seconds are available to decide on the positioning of each log. Meta-heuristic optimization algorithms were therefore developed to reach a near-optimal solution in a much shorter time than simulating all possible log positions requires. The algorithms used in this study include the Genetic Algorithm, Simulated Annealing, Population-Based Incremental Learning and the Cross-Entropy method. A fifth, alternative algorithm was developed specifically for this project to incorporate the trends identified through analysis of the sawmill simulation results. The effectiveness of these meta-heuristic algorithms was evaluated using the sawmill simulation data created. Analysis of the simulation data showed that a maximum increase in product value of 8.23% was possible when internal knot data was considered, compared to using conventional log positioning rules. When only the external shape was considered, a maximum increase in product value of 5% was possible compared to conventional log positioning rules. The efficiency of the meta-heuristic algorithms differed depending on the processing time available; as an example, the Genetic Algorithm increased the mean product value by 6.43% after 200 iterations. Finally, a method to evaluate the investment decision to purchase an internal scanning and log positioning system is illustrated.
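The sketch below illustrates one of the named meta-heuristics, simulated annealing, searching over a discretized (rotation, skew, horizontal offset) space. The value function is a synthetic stand-in, since the thesis evaluates each position with the Simsaw simulator; the discretization and cooling schedule are assumptions.

```python
# Minimal sketch: simulated annealing over discrete log positions
# (rotation in degrees, skew, horizontal offset). value() is a synthetic
# surrogate for a Simsaw evaluation of sawn-timber value.
import math, random

random.seed(0)

def value(pos):
    r, s, o = pos
    d = min((r - 137) % 360, (137 - r) % 360)   # angular distance to 137 degrees
    return -(0.01 * d**2 + (s - 3)**2 + (o + 5)**2)

def neighbour(pos):
    r, s, o = pos
    return ((r + random.choice((-1, 1))) % 360,
            max(-10, min(10, s + random.choice((-1, 1)))),
            max(-20, min(20, o + random.choice((-1, 1)))))

pos, temp = (0, 0, 0), 100.0
best, best_val = pos, value(pos)
for _ in range(5000):
    cand = neighbour(pos)
    delta = value(cand) - value(pos)
    if delta > 0 or random.random() < math.exp(delta / temp):
        pos = cand                              # accept better, sometimes worse
    if value(pos) > best_val:
        best, best_val = pos, value(pos)
    temp *= 0.999                               # geometric cooling schedule

print("best (rotation, skew, offset):", best, "value:", round(best_val, 2))
```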
84

Adubação nitrogenada, fosfatada e potássica na produtividade, ciclagem de nutrientes e no balanço nutricional do eucalipto / Nitrogen, phosphate and potassium fertilization on yield, nutrient cycling and nutritional balance of eucalyptus

Gazola, Rodolfo de Niro 21 December 2017
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES); Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), grant 2014/02641-6. / Eucalyptus cultivation has been expanding into new regions of Brazil, with the state of Mato Grosso do Sul leading this expansion. In these new areas, soil and climate conditions limit crop development: the soils of the Cerrado biome have low natural fertility, with little available phosphorus (P) and potassium (K) and low organic matter (OM) content, and the climate is marked by pronounced, frequent water deficits. The lack of soil nutrients combined with these climatic conditions therefore constrains crop development. In this context, the objective was to evaluate nitrogen, phosphate and potassium fertilization, given the importance of these nutrients for the eucalyptus crop and their limited supply in the soil under study. The experiment was conducted at Fazenda Renascença, a farm managed by Cargill Agrícola S/A in the municipality of Três Lagoas/MS, from September 2011 to July 2017, on an Entisol (NEOSSOLO QUARTZARÊNICO Órtico). A randomized block design with four treatments and five replications was used. The treatments consisted of rates of nitrogen (N) (0, 70, 105 and 140 kg ha⁻¹), P (0, 40, 70 and 100 kg ha⁻¹ of P2O5) and K (0, 90, 135 and 180 kg ha⁻¹ of K2O). Each plot comprised 56 plants, distributed in seven rows of eight plants each, totaling 420 m². In the planting rows, seedlings of clone I144 (Eucalyptus urophylla) were spaced 2.5 m apart, with 3.0 m between rows. The following were evaluated: wood yield; nutrient concentrations in leaves; leaf litter production, nutrient concentrations in litter and nutrient transfer via litterfall; availability of macronutrients in the soil; nutrient balances in leaves and soil (isometric log-ratio); biomass production, nutrient concentrations and accumulation in biomass; and nutrient-use efficiency. Nitrogen, phosphate and potassium fertilization increased biomass production, wood volume and the nutrient stock in biomass. N rates influenced N concentrations in leaves and litter, the leaf balances [N | P], [K | Ca, Mg] and [Ca | Mg], and K cycling, and did not affect litter production, soil P, K, Ca, Mg and S contents, or nutrient-use efficiency. P rates positively influenced the transfer of P and K to the soil, the leaf balance [N | P] and soil P content, and did not affect litter production, soil K, Ca, Mg and S contents, or nutrient-use efficiency. K rates positively influenced K concentrations, and negatively influenced Mg concentrations, in leaves and litter, as well as the transfer of K and Mg to the soil, the leaf balances [K | Ca, Mg] and [Ca | Mg], the soil balances [K, Ca, Mg | H+Al] and [K | Ca, Mg], and N-use efficiency, and did not affect litter production or soil P, Ca, Mg and S contents.
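The nutrient balances in this abstract use the isometric log-ratio (ilr) transform. The sketch below shows how a single balance such as [K | Ca, Mg] is computed from leaf concentrations; the numbers are hypothetical and the function is the generic textbook formula, not the thesis's own code.

```python
# Minimal sketch: an isometric log-ratio (ilr) balance of the form
# [K | Ca, Mg], as used in compositional nutrient diagnosis.
# Concentrations below are hypothetical (g kg^-1 of leaf tissue).
import math

def ilr_balance(numerator, denominator):
    """Balance between two groups of parts: sqrt(rs/(r+s)) * ln(gm_num / gm_den)."""
    r, s = len(numerator), len(denominator)
    gm = lambda xs: math.exp(sum(math.log(x) for x in xs) / len(xs))  # geometric mean
    return math.sqrt(r * s / (r + s)) * math.log(gm(numerator) / gm(denominator))

leaf = {"K": 8.2, "Ca": 6.1, "Mg": 2.3}
print("[K | Ca, Mg] =", round(ilr_balance([leaf["K"]], [leaf["Ca"], leaf["Mg"]]), 3))
```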
85

Similarity-based recommendation of OLAP sessions / Recommandation de sessions OLAP, basé sur des mesures de similarités

Aligon, Julien 13 December 2013
OLAP (On-Line Analytical Processing) is the main paradigm for accessing multidimensional data in data warehouses. To provide high querying expressiveness with little query-formulation effort, OLAP offers a set of operations (such as drill-down and slice-and-dice) that transform one multidimensional query into another, so OLAP queries are normally formulated as sequences called OLAP sessions. During an OLAP session the user analyzes the results of a query and, depending on the specific data she sees, applies one operation to derive a new query that will give her a better understanding of the information. The resulting sequences of queries are strongly tied to the issuing user, to the analyzed phenomenon, and to the current data. While it is universally recognized that OLAP tools play a key role in supporting flexible and effective exploration of multidimensional cubes in data warehouses, it is also commonly agreed that the huge number of possible aggregations and selections that can be applied to the data may make the user experience disorienting.
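A hedged sketch of how session similarity might be operationalized: sessions as sequences of queries, query similarity from overlapping group-by sets and selection predicates, and an edit-distance-style alignment. The data model and costs below are illustrative assumptions, not the measures defined in the thesis.

```python
# Minimal sketch: similarity between OLAP sessions viewed as sequences of
# queries. The query model and costs are illustrative stand-ins.
from dataclasses import dataclass

@dataclass(frozen=True)
class Query:
    group_by: frozenset      # dimension levels, e.g. {"month", "city"}
    selections: frozenset    # predicates, e.g. {"year=2013"}

def query_sim(a: Query, b: Query) -> float:
    """Jaccard similarity averaged over the group-by set and the selection set."""
    j = lambda x, y: len(x & y) / len(x | y) if (x | y) else 1.0
    return 0.5 * (j(a.group_by, b.group_by) + j(a.selections, b.selections))

def session_sim(s: list, t: list) -> float:
    """Edit-distance alignment where substitution cost = 1 - query_sim."""
    n, m = len(s), len(t)
    d = [[float(i + j) if i * j == 0 else 0.0 for j in range(m + 1)]
         for i in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (1 - query_sim(s[i - 1], t[j - 1])))
    return 1 - d[n][m] / max(n, m)

q1 = Query(frozenset({"month"}), frozenset({"year=2013"}))
q2 = Query(frozenset({"month", "city"}), frozenset({"year=2013"}))
print(round(session_sim([q1, q2], [q1]), 3))    # sessions sharing a prefix
```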
86

Factorial structure of Driving Log in a Spanish sample / Estructura factorial del Driving Log en una muestra española

Herrero-Fernández, David, Fonseca-Baeza, Sara, Pla-Sancho, Sara 25 September 2017
The present study aimed to adapt the Driving Log, a questionnaire that assesses aggressive and risky driving behaviors on a day-to-day basis, to a Spanish sample of 395 participants. Confirmatory factor analysis showed that the questionnaire fitted well with two correlated factors, labeled Risky Driving and Aggressive Driving. Subsequent analyses showed that the number of trips is significantly associated with Risky Driving, while the number of occasions on which anger is experienced correlates with both Risky Driving and Aggressive Driving. Further findings suggest that men drive in a riskier and more aggressive manner than women, and that young people follow the same tendency in comparison to their elders.
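For readers unfamiliar with the technique, the sketch below sets up a two-correlated-factor confirmatory factor analysis in the semopy library. The item names (dl1..dl6) and the simulated data are hypothetical; the Driving Log's actual items and the study's estimation details are not reproduced.

```python
# Minimal sketch: a two-correlated-factor CFA in semopy. Item names and data
# are hypothetical stand-ins for the Driving Log items.
import numpy as np, pandas as pd
from semopy import Model

rng = np.random.default_rng(2)
n = 395
risky, aggr = rng.standard_normal(n), rng.standard_normal(n)
df = pd.DataFrame({f"dl{i + 1}": (risky if i < 3 else aggr) * 0.8
                   + rng.standard_normal(n) * 0.6 for i in range(6)})

desc = """
Risky =~ dl1 + dl2 + dl3
Aggressive =~ dl4 + dl5 + dl6
Risky ~~ Aggressive
"""
model = Model(desc)
model.fit(df)
print(model.inspect())     # loadings, factor covariance, residual variances
```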
87

A novel classification method applied to well log data calibrated by ontology based core descriptions

Graciolli, Vinicius Medeiros January 2018
A method for the automatic detection of lithological types and layer contacts was developed through the combined statistical analysis of a suite of conventional wireline logs, calibrated by the systematic description of cores. The intent of this project is to allow the integration of rock data into reservoir models. The cores are described with the support of an ontology-based nomenclature system that extensively formalizes a large set of rock attributes, including lithology, texture, primary and diagenetic composition, and depositional, diagenetic and deformational structures. The descriptions are stored in a relational database along with the records of conventional wireline logs (gamma ray, resistivity, density, neutron, sonic) for each analyzed well. This structure allows prototypes of combined log values to be defined for each recognized lithology, by calculating the mean and the variance-covariance of the values measured by each log tool for each of the lithologies described in the cores. The statistical algorithm is able to learn from each addition of a described and logged core interval, progressively refining the automatic lithological identification. The detection of lithological contacts is performed by smoothing each of the logs with two moving averages of different window sizes. The results of each pair of smoothed logs are compared, and the places where the lines cross mark depths where abrupt shifts in the log values occur, potentially indicating a change of lithology. The results of applying this method to each log are then unified into a single assessment of lithological boundaries. The mean and variance-covariance data derived from the core samples are then used to build an n-dimensional Gaussian distribution for each of the recognized lithologies. At this point, Bayesian priors are also calculated for each lithology. These distributions are checked against each of the previously detected lithological intervals by means of a probability density function, evaluating how close the interval is to each lithology prototype and allowing a lithological type to be assigned to each interval. The developed method was tested on a set of wells in the Sergipe-Alagoas basin, and the prediction accuracy achieved during testing is superior to classic pattern-recognition methods such as neural networks and KNN classifiers. The method was then combined with neural networks and KNN classifiers into a multi-agent system. The results show significant potential for effective operational application to the construction of geological models for the exploration and development of areas with a large volume of conventional wireline log data and representative cored intervals.
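The core classification step can be sketched as follows: one multivariate Gaussian per lithology (mean and covariance of log readings) weighted by a prior, with an interval assigned to the lithology of highest posterior density. The feature choices, the numbers and the interval summary below are assumptions for illustration.

```python
# Minimal sketch of the classification idea: per-lithology multivariate
# Gaussians over log values plus priors; assign an interval to the lithology
# with the highest posterior density. Features and numbers are hypothetical.
import numpy as np
from scipy.stats import multivariate_normal

# Prototypes learned from described cores: (mean, covariance, prior)
# over log features, here (gamma ray, bulk density).
prototypes = {
    "sandstone": (np.array([45.0, 2.35]), np.array([[90.0, 0.1], [0.1, 0.01]]), 0.6),
    "shale":     (np.array([110.0, 2.55]), np.array([[140.0, 0.2], [0.2, 0.02]]), 0.4),
}

def classify(interval_logs: np.ndarray) -> str:
    """interval_logs: (n_samples, n_features) log readings within one interval."""
    x = interval_logs.mean(axis=0)          # summarize the interval by its mean
    score = {lith: prior * multivariate_normal(mean, cov).pdf(x)
             for lith, (mean, cov, prior) in prototypes.items()}
    return max(score, key=score.get)

interval = np.array([[50.0, 2.34], [48.0, 2.36], [52.0, 2.33]])
print(classify(interval))                   # -> "sandstone"
```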
89

Estrutura lagrangiana para fluidos compressíveis não barotrópicos em dimensão dois / Lagrangian structure for a non-barotropic compressible fluid in two dimensions

Maluendas Pardo, Pedro Nel, 1977- 22 August 2018
Advisor: Marcelo Martins dos Santos. Doctoral thesis (2013), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. / In this work we study the Lagrangian structure of weak solutions of the Navier-Stokes equations for a non-barotropic compressible fluid in two dimensions; that is, we prove the uniqueness of particle trajectories for two-dimensional compressible fluids, including the energy equation (temperature variations). This extends previous results in [19] for the barotropic two-dimensional case.
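For orientation, the particle-trajectory (Lagrangian) formulation whose uniqueness is proved can be stated as below. This is the standard textbook formulation, given here as an assumption about the setup rather than text quoted from the thesis.

```latex
% Standard particle-trajectory formulation (a sketch, not quoted from the
% thesis): given a velocity field u solving the compressible Navier-Stokes
% system with the energy equation, the trajectory X(t;x) of the fluid
% particle starting at x satisfies
\[
  \frac{d}{dt}\,X(t;x) = u\bigl(X(t;x),\,t\bigr), \qquad X(0;x) = x,
\]
% and the Lagrangian structure result asserts that these trajectories exist
% and are unique even though u is only a weak solution.
```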
90

Logghantering : En undersökning av logghantering och logghanteringssystem / Log management: A study of log management and log management systems

Flodin, Anton January 2016
This research includes a review of the log management at the company Telia, together with a comparison of two log management systems, Splunk and ELK. The review of the company's log management shows that log messages are stored in files on a hard drive that can be accessed through the network, and that the log messages are system-specific. ELK is able to ingest log messages of different formats simultaneously; this is not possible in Splunk, where the upload process has to be repeated for log messages with different formats. Both systems store log messages through a file system on the hard drive of the server where they are installed. In networks that involve multiple servers, ELK distributes the log messages between the servers, reducing the workload of performing searches and storing large amounts of data. Using Splunk in networks can also reduce the workload for searching and indexing: forwarders send the log messages to one or more central servers that store the messages. Searches of log messages in Splunk are performed through a graphical interface. Searches in ELK are done through a REST API, which external systems can also use to retrieve search results; Splunk likewise has a REST API that external systems can use to obtain search results. The research revealed that ELK had a lower search time than Splunk. However, no method was found for measuring the indexing time of ELK, so no comparison could be made with respect to indexing time. Future work should investigate whether the indexing time of ELK can be measured. Another recommendation is to include more log management systems in the study, to identify further candidates that may suit Telia. A further improvement would be to run performance tests in a network with multiple servers and thereby draw conclusions about performance in practice.
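As an illustration of the REST-style search the thesis compares, the sketch below queries an Elasticsearch index (the "E" in ELK) over HTTP. A running Elasticsearch instance at localhost:9200, the index name and the field names are assumptions.

```python
# Minimal sketch: querying log messages through Elasticsearch's REST search
# API. Host, index name ("telia-logs") and field names are assumptions.
import requests

query = {
    "query": {"match": {"message": "error"}},       # full-text match on a field
    "size": 10,
    "sort": [{"@timestamp": {"order": "desc"}}],    # newest log messages first
}
resp = requests.get("http://localhost:9200/telia-logs/_search", json=query)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))
```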
