111

Statistical inference for non-homogeneous Poisson process with competing risks: a repairable systems approach under power-law process

Almeida, Marco Pollo 30 August 2019 (has links)
In this thesis, the main objective is to study certain aspects of modeling failure-time data from repairable systems under a competing risks framework. We consider two different models and propose more efficient Bayesian methods for estimating their parameters. In the first model, we discuss inferential procedures based on an objective Bayesian approach for analyzing failures from a single repairable system under independent competing risks. We examine the scenario in which a minimal repair is performed at each failure, so that each failure mode follows a power-law process (PLP) intensity. We propose reparametrizing the power-law intensity in terms of orthogonal parameters, and we derive two objective priors, the Jeffreys prior and the reference prior. The posterior distributions based on these priors are then obtained in order to establish desirable properties: in some cases we prove that these posteriors are proper and that the priors are also matching priors. In addition, in some cases, unbiased Bayesian estimators with simple closed-form expressions are derived. In the second model, we analyze data from multiple repairable systems in the presence of dependent competing risks. To model this dependence structure, we adopt the well-known shared frailty model, which provides a suitable theoretical basis for generating dependence between the components' failure times in the dependent competing risks model. The dependence effect in this scenario is known to influence the estimates of the model parameters. Hence, under the assumption that the cause-specific intensities follow a PLP, we propose a frailty-induced dependence approach to incorporate the dependence among the cause-specific recurrent processes. Moreover, misspecification of the frailty distribution may lead to errors when estimating the parameters of interest; because of this, we consider a Bayesian nonparametric approach to model the frailty density, in order to offer more flexibility, to provide consistent estimates for the PLP model, and to give insight into the heterogeneity among the systems. Both simulation studies and real case studies are provided to illustrate the proposed approaches and demonstrate their validity.
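To make the setup concrete, here is a minimal sketch (our illustration, not the author's code) of failure data from a single repairable system under two independent competing risks, each failure mode following a PLP with intensity λ(t) = (β/θ)(t/θ)^(β-1). Under minimal repair each mode is a non-homogeneous Poisson process (NHPP), which can be simulated by inverting the cumulative intensity Λ(t) = (t/θ)^β; all parameter values are invented for the example.

```python
# Sketch: simulate a repairable system under two independent competing risks,
# each failure mode a PLP (NHPP) with cumulative intensity (t/theta)**beta.
import numpy as np

rng = np.random.default_rng(42)

def simulate_plp(beta, theta, t_max, rng):
    """Event times of an NHPP with PLP intensity on (0, t_max]."""
    times, total = [], 0.0
    while True:
        total += rng.exponential(1.0)      # next arrival of a unit-rate Poisson process
        t = theta * total ** (1.0 / beta)  # invert Lambda(t) = (t / theta)**beta
        if t > t_max:
            return np.array(times)
        times.append(t)

# two illustrative failure modes (beta, theta); beta > 1 means deterioration
modes = {"mode 1": (1.8, 10.0), "mode 2": (0.9, 25.0)}
failures = sorted(
    (t, mode)
    for mode, (beta, theta) in modes.items()
    for t in simulate_plp(beta, theta, t_max=100.0, rng=rng)
)
for t, mode in failures[:5]:
    print(f"t = {t:7.2f}  failure due to {mode}")
```

Superposing the two simulated processes yields the system failure history together with the failure cause, which is exactly the data structure the thesis analyzes.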
112

Zipf's Law for Natural Cities Extracted from Location-Based Social Media Data

Wu, Sirui January 2015 (has links)
Zipf’s law is one of the empirical statistical regularities found in many natural systems, ranging from the protein sequences of immune receptors in cells to the intensity of solar flares from the sun. Verifying the universality of Zipf’s law provides opportunities to seek commonalities among phenomena that exhibit power-law behavior. Since power-law-like phenomena, as many studies have indicated, are often interpreted as evidence of complex systems, exploring the universality of Zipf’s law may also help explain underlying generative mechanisms and endogenous processes such as self-organization and chaos. The main purpose of this study was to verify whether Zipf’s law holds for the sizes, numbers, and populations of natural cities. Unlike traditional city boundaries derived from census-based, top-down data, which are arbitrary and subjective, this study established a new kind of city boundary, the natural city, using four location-based social media datasets (from Twitter, Brightkite, Gowalla and Freebase) and the head/tail breaks rule. To capture and quantify hierarchical levels when studying the heterogeneous scales of cities, the ht-index derived from the head/tail breaks rule was employed, and the validity of Zipf’s law was then examined. The results revealed subtle differences in the patterns of natural cities across the different social media datasets. Applying the head/tail breaks method, the study calculated the ht-index and found that hierarchy levels were determined largely by the data themselves rather than by spatio-temporal changes. On the other hand, the study found that Zipf’s law is not universal for location-based social media data: compared with city numbers extracted from nightlight imagery, the deviation is attributable to biased user behavior, which makes natural cities emerge much more frequently in certain regions and countries, so that their emergence is not represented objectively. Finally, the study showed that whether Zipf’s law can be observed well depends not only on the data themselves and man-made limitations but also on calculation methods, data precision and scale, and the idealized status of the observed data.
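For readers unfamiliar with the head/tail breaks rule and the derived ht-index, the following sketch (ours; the 40% head threshold and the synthetic Pareto "city sizes" are assumptions, not the thesis data) shows both in a few lines:

```python
# Sketch: head/tail breaks on a heavy-tailed distribution; ht-index counts levels.
import numpy as np

def head_tail_breaks(values, head_limit=0.40):
    """Return the list of mean break points; ht-index = len(breaks) + 1."""
    values = np.asarray(values, dtype=float)
    breaks = []
    while len(values) > 1:
        mean = values.mean()
        head = values[values > mean]
        # stop when the head is no longer a clear minority of the values
        if len(head) == 0 or len(head) / len(values) > head_limit:
            break
        breaks.append(mean)
        values = head
    return breaks

# synthetic "city sizes" drawn from a heavy-tailed (Pareto) distribution
rng = np.random.default_rng(0)
sizes = (rng.pareto(1.5, size=10_000) + 1.0) * 100.0
breaks = head_tail_breaks(sizes)
print("break points:", np.round(breaks, 1))
print("ht-index:", len(breaks) + 1)
```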
113

Quantitative vulnerability analysis of electric power networks

Holmgren, Åke J. January 2006 (has links)
Disturbances in the supply of electric power can have serious implications for everyday life as well as for national (homeland) security. A power outage can be initiated by natural disasters, adverse weather, technical failures, human errors, sabotage, terrorism, and acts of war. The vulnerability of a system is described as a sensitivity to threats and hazards, and is measured by P(Q(t) > q), i.e. the probability of at least one disturbance with negative societal consequences Q larger than some critical value q during a given period of time (0, t]. The aim of the thesis is to present methods for quantitative vulnerability analysis of electric power delivery networks, so that effective strategies for prevention, mitigation, response, and recovery can be developed. Paper I provides a framework for vulnerability assessment of infrastructure systems; it discusses concepts and perspectives for developing a vulnerability-analysis methodology and gives examples related to power systems. Paper II analyzes the vulnerability of power delivery systems through statistical analysis of Swedish disturbance data, demonstrating that the size of large disturbances follows a power law and that the occurrence of disturbances can be modeled as a Poisson process. Paper III models electric power delivery systems as graphs: statistical measures characterizing the structure of two empirical transmission systems are calculated, and a structural vulnerability analysis is performed, i.e. a study of the connectivity of the graph when vertices and edges are disabled. Paper IV discusses the origin of power laws in complex systems in terms of their structure and the dynamics of disturbance propagation; a branching process is used to model the structure of a power distribution system, and it is shown that the disturbance size in this analytical network model follows a power law. Paper V shows how the interaction between an antagonist and the defender of a power system can be modeled as a game; a numerical example is presented, and the paper studies whether there exists a dominant defense strategy and an optimal allocation of resources between protection of components and recovery.
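The two empirical findings of Paper II combine directly into the vulnerability measure: if disturbances arrive as a Poisson process with rate λ and sizes have the Pareto tail P(Q > q) = (q/q_min)^(-α), then exceedances of q form a thinned Poisson process, so P(Q(t) > q) = 1 - exp(-λt(q/q_min)^(-α)). The sketch below (ours, with invented parameter values, not the Swedish data) checks this closed form by Monte Carlo:

```python
# Sketch: vulnerability P(Q(t) > q) under Poisson occurrence + power-law sizes.
import numpy as np

lam, alpha, q_min = 12.0, 1.6, 1.0   # rate per year, tail exponent, size scale
t, q = 1.0, 50.0                     # time horizon (years) and critical size

p_exceed = (q / q_min) ** (-alpha)            # P(Q > q) for a single disturbance
analytic = 1.0 - np.exp(-lam * t * p_exceed)  # thinned-Poisson closed form

# Monte Carlo check: simulate many periods and count periods with an exceedance
rng = np.random.default_rng(1)
counts = rng.poisson(lam * t, size=100_000)   # disturbances per period
hits = np.array([
    bool(k) and (q_min * (1.0 + rng.pareto(alpha, k)) > q).any()
    for k in counts
])
print(f"analytic  P = {analytic:.4f}")
print(f"simulated P = {hits.mean():.4f}")
```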
114

Probabilistic Performance Forecasting for Unconventional Reservoirs With Stretched-Exponential Model

Can, Bunyamin 2011 May 1900 (has links)
Reserves estimation in an unconventional-reservoir setting is a daunting task because of geologic uncertainty and the complex flow patterns evolving in a long, stimulated horizontal well, among other variables. To tackle this complex problem, we present a reserves-evaluation workflow that couples traditional decline-curve analysis with a probabilistic forecasting framework, with the stretched-exponential production decline (SEPD) model underpinning the production behavior. Our recovery-appraisal workflow has two different applications: forecasting probabilistic future performance of wells that have a production history, and forecasting production from new wells without production data. For the new-field case, numerical model runs are made according to a statistical design of experiments over a range of design variables pertinent to the field of interest. For producing wells, in contrast, the early-time data often need adjustments, owing to restimulation, installation of artificial lift, etc., to isolate the decline trend. Thereafter, production data from either new or existing wells are grouped according to initial rates to obtain common SEPD parameters for similar wells. After determining the distribution of model parameters through well grouping, the methodology establishes a probabilistic forecast for individual wells. We thus present a probabilistic performance-forecasting methodology for unconventional reservoirs that handles wells with and without production history. Unlike other probabilistic forecasting tools, grouping wells with similar production character allows estimation of self-consistent SEPD parameters and alleviates the burden of having to define the uncertainties associated with reservoir and well-completion parameters.
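For reference, the SEPD model underpinning this workflow has rate q(t) = q_i exp(-(t/τ)^n), and its cumulative production has a closed form in the lower incomplete gamma function. A brief sketch (illustrative parameter values, not field data):

```python
# Sketch: SEPD rate and closed-form cumulative production.
import numpy as np
from scipy.special import gamma, gammainc

def sepd_rate(t, qi, tau, n):
    """SEPD rate: q(t) = qi * exp(-(t / tau)**n)."""
    return qi * np.exp(-(t / tau) ** n)

def sepd_cum(t, qi, tau, n):
    """Cumulative: (qi*tau/n) * lower_incomplete_gamma(1/n, (t/tau)**n)."""
    # scipy's gammainc is the *regularized* lower incomplete gamma, so scale by Gamma(1/n)
    return qi * tau / n * gamma(1.0 / n) * gammainc(1.0 / n, (t / tau) ** n)

qi, tau, n = 30_000.0, 14.0, 0.5          # illustrative values only
t = np.array([1.0, 12.0, 120.0, 360.0])   # months on production
print("rate:", sepd_rate(t, qi, tau, n).round(1))
print("cum :", sepd_cum(t, qi, tau, n).round(0))
```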
115

Self-Thinning Boundary Line and Dynamic Thinning Line in Prince Rupprecht's Larch (Larix principis-rupprechtii Mayr) Stands

XUE, Li, 薛, 立, OGAWA, Kazuharu, 小川, 一治, HAGIHARA, Akio, 萩原, 秋男, HUANG, Baolin, 黄, 宝霊, LONG, Xinmao, 竜, 新毛, CHEN, Biao, 陳, 彪 12 1900 (has links) (PDF)
The PDF file was created by the Agriculture, Forestry and Fisheries Research Information Center.
116

Statistical Models for Environmental and Health Sciences

Xu, Yong 01 January 2011 (has links)
Statistical analysis and modeling are useful for understanding the behavior of different phenomena. This study focuses on two areas of application: global warming and cancer research. Global warming is one of the major environmental challenges we face today, and cancer is one of the major health problems we need to solve. For global warming, we study two major attributable variables: carbon dioxide (CO2) and atmospheric temperature. We model atmospheric CO2 data with a system of differential equations, developing one differential equation for each of the six attributable variables that constitute atmospheric CO2 together with a differential system for atmospheric CO2 as a whole. We use real historical data on the subject phenomenon to develop the analytical form of the equations, and we evaluate the quality of the developed model through a retrofitting process. Having such an analytical system, we can obtain good estimates of the rate of change of CO2 in the atmosphere, individually and cumulatively, as a function of time for near and far target times; such information is quite useful in strategic planning of the subject matter. We also develop a statistical model that takes into consideration all the identified attributable variables and their corresponding response, the amount of CO2 in the atmosphere in the continental United States; the model includes interactions and higher-order terms in addition to individual contributions. The proposed model has been statistically evaluated and produces accurate predictions for a given set of the attributable variables. Furthermore, we rank the attributable variables with respect to their significant contribution to CO2 in the atmosphere.
For cancer research, the object of the study is to probabilistically evaluate commonly used methods for survival analysis of medical patients. Our study includes evaluation of parametric, semi-parametric, and nonparametric probability survival models. We evaluate the popular Kaplan-Meier (KM), Cox proportional hazards (Cox PH), and kernel density (KD) models using both Monte Carlo simulation and actual breast cancer data: the first part of the evaluation measures these methods against parametric analysis, and the second part uses actual cancer data. As expected, parametric survival analysis, when applicable, gives the best results in both evaluations, followed by the less commonly used nonparametric kernel density approach. We develop a statistical model for breast cancer tumor size prediction for United States patients based on real uncensored data. When breast cancer tumor sizes are simulated, they are most often randomly generated; we instead construct a statistical model that generates tumor sizes as close as possible to real patients' data given other related information. We accomplish this by developing a high-quality statistical model that identifies the significant attributable variables and interactions, and we rank these contributing entities according to their percentage contribution to breast cancer tumor growth. This proposed statistical model can also be used to conduct response surface analysis to identify the restrictions on the significant attributable variables and their interactions that minimize the size of the breast tumor. We further utilize the power law process, also known as the non-homogeneous Poisson process or Weibull process, to evaluate the effectiveness of a given treatment for Stage I and II ductal breast cancer patients, using the shape parameter of the intensity function to characterize a treatment's effectiveness over time. Finally, we develop a differential equation that characterizes the behavior of the tumor as a function of time; its solution, once plotted, identifies the rate of change of tumor size as a function of age. The structure of the differential equation consists of the significant attributable variables and their interactions with the growth of the breast cancer tumor. Once the differential equation and its solution are developed, we validate the quality of the proposed differential equation and its usefulness.
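As an illustration of the power-law-process diagnostic described above (our sketch, not the thesis code, with invented recurrence times), the shape parameter β can be estimated by maximum likelihood from events t_1 < ... < t_n observed up to a truncation time T, using the standard time-truncated estimates β̂ = n / Σ ln(T/t_i) and θ̂ = T / n^(1/β̂); β̂ < 1 suggests a decreasing recurrence intensity (treatment helping), β̂ > 1 an increasing one.

```python
# Sketch: maximum-likelihood estimates for a time-truncated power-law process.
import numpy as np

def plp_mle(event_times, T):
    """Shape and scale MLEs for a PLP observed on (0, T]."""
    t = np.asarray(event_times, dtype=float)
    n = len(t)
    beta_hat = n / np.log(T / t).sum()
    theta_hat = T / n ** (1.0 / beta_hat)
    return beta_hat, theta_hat

# illustrative recurrence times (months) for one hypothetical patient
events = [3.1, 7.9, 14.2, 22.8, 33.5, 46.0]
beta_hat, theta_hat = plp_mle(events, T=60.0)
print(f"beta_hat  = {beta_hat:.2f}  (< 1 suggests decreasing recurrence intensity)")
print(f"theta_hat = {theta_hat:.1f}")
```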
117

Modeling issues and approximation of the chromatic number in scale-free networks

Δαγκλής, Οδυσσέας 20 October 2009 (has links)
Networks that exhibit a certain property irrespective of their size and density are called scale-free. In many real-life networks this property coincides with a power-law distribution of node degrees, with exponent in the range [2, 4]. This work presents three static models for constructing scale-free networks with this property, based on the dynamic Barabási-Albert model, and attempts to approximate their chromatic number experimentally.
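A compact sketch of the kind of experiment described (ours, assuming the networkx library is available): generate a scale-free graph with the Barabási-Albert model and bound its chromatic number with a greedy coloring heuristic.

```python
# Sketch: greedy upper bound on the chromatic number of a BA scale-free graph.
import networkx as nx

G = nx.barabasi_albert_graph(n=2_000, m=3, seed=7)   # scale-free graph
coloring = nx.greedy_color(G, strategy="largest_first")
n_colors = max(coloring.values()) + 1
print(f"greedy upper bound on the chromatic number: {n_colors}")
```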
118

Comparison of Empirical Decline Curve Analysis for Shale Wells

Kanfar, Mohammed Sami 16 December 2013 (has links)
This study compares four recently developed decline-curve methods with the traditional Arps or Fetkovich approach. The four methods, which were formulated empirically for shale and tight gas wells, are: 1. power-law exponential decline (PLE); 2. stretched-exponential production decline (SEPD); 3. the Duong method; 4. the logistic growth model (LGM). Each method has different tuning parameters and equation forms. The main objective of this work is to determine the best method(s) in terms of estimated ultimate recovery (EUR) accuracy, goodness of fit, and ease of matching. In addition, the methods are compared against each other at different production times in order to understand the effect of production time on forecasts. As part of the validation process, all methods are benchmarked against simulation: the study compares the decline methods against four simulation cases that represent the common shale declines observed in the field. Shale wells, which are completed as horizontal wells with multiple transverse, highly conductive hydraulic fractures, exhibit long transient linear flow. Based on certain models, linear flow is preceded by bilinear flow if natural fractures are present, and it is succeeded by boundary-dominated flow (BDF) once the pressure wave reaches the boundary; four decline behaviors are therefore possible, hence four simulation cases are required for comparison. To facilitate automatic data fitting, a non-linear regression program was developed in Excel VBA. The program minimizes a least-squares (LS) objective function to find the best fit, using the Levenberg-Marquardt algorithm (LMA), chosen for its robustness and ease of use. This work shows that the methods forecast different EURs and that some fit certain simulation cases better than others; moreover, no method can forecast EUR accurately before BDF is reached. Using this work, engineers can choose the best method for forecasting EUR after identifying the simulation case most analogous to their field wells. The VBA program and the matching procedure presented here can help engineers automate these methods in their forecasting sheets.
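The same matching workflow can be sketched outside Excel VBA; the example below (ours, with synthetic data, not the thesis program) fits Duong's rate model q(t) = q1 * t^(-m) * exp[a/(1-m) * (t^(1-m) - 1)] with the Levenberg-Marquardt algorithm via scipy:

```python
# Sketch: Levenberg-Marquardt fit of Duong's decline model to noisy rate data.
import numpy as np
from scipy.optimize import least_squares

def duong_rate(t, q1, a, m):
    # note: m must differ from 1 to avoid division by zero in a/(1-m)
    return q1 * t ** (-m) * np.exp(a / (1.0 - m) * (t ** (1.0 - m) - 1.0))

t = np.arange(1.0, 61.0)                  # months on production
true = (1_000.0, 0.9, 1.2)                # (q1, a, m) used to fabricate the data
rng = np.random.default_rng(3)
q_obs = duong_rate(t, *true) * rng.lognormal(0.0, 0.05, t.size)  # ~5% noise

def residuals(p):
    return duong_rate(t, *p) - q_obs

fit = least_squares(residuals, x0=(800.0, 0.5, 1.1), method="lm")
print("fitted (q1, a, m):", np.round(fit.x, 3))
```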
119

Time-resolved spectroscopy study of charge-separation mechanisms in photovoltaic blends

Gélinas, Simon January 2009 (has links)
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
120

ERF and scale-free analyses of source-reconstructed MEG brain signals during a multisensory learning paradigm

Zilber, Nicolas 10 March 2014 (has links) (PDF)
The analysis of human brain activity in magnetoencephalography (MEG) is generally conducted in one of two ways: either by focusing on the average response evoked by a stimulus repeated over time, more commonly known as an ''event-related field'' (ERF), or by decomposing the signal into functionally relevant oscillatory or frequency bands (such as alpha, beta or gamma). However, the major part of brain activity is arrhythmic, and these approaches fail to describe its complexity, particularly in the resting state. As an alternative, analysis of the 1/f-type power spectrum observed at very low frequencies, a hallmark of scale-free dynamics, can overcome these issues. Yet it remains unclear whether this scale-free property is functionally relevant and whether its fluctuations matter for behavior. To address this question, our first concern was to establish a visual learning paradigm that would entail functional plasticity during an MEG session. In order to optimize the training effects, we developed new audiovisual (AV) stimuli (an acoustic texture paired with a colored visual motion) that induced multisensory integration and indeed improved learning compared with visual training alone (V) or visual training accompanied by acoustic noise (AVn). This led us to investigate the neural correlates of these three types of training, first using a classical method, ERF analysis. After source reconstruction on each individual cortical surface using MNE-dSPM, the network involved in the task was identified at the group level. The selective plasticity observed in the human motion area (hMT+) correlated across individuals with the behavioral improvement and was supported in AV by a larger network comprising multisensory areas. On the basis of these findings, we further explored the links between behavior and the scale-free properties of the same source-reconstructed MEG signals. Although most studies restrict their analysis to the global measure of self-similarity (i.e. long-range fluctuations), we also considered local fluctuations (i.e. multifractality) using the Wavelet-Leader-Based Multifractal Formalism (WLBMF). We found intertwined modulations of self-similarity and multifractality in the same cortical regions as those revealed by the ERF analysis. Most strikingly, the degree of multifractality observed in each individual converged during training towards a single attractor that reflected the asymptotic behavioral performance in hMT+. Finally, these findings and their associated methodological issues are compared with those obtained from the ERF analysis.
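As a pointer for readers, the most basic form of scale-free analysis mentioned here, estimating the 1/f spectral exponent from the slope of the low-frequency power spectrum, can be sketched as follows (our simplification on a synthetic signal; the thesis itself uses the far richer wavelet-leader multifractal formalism):

```python
# Sketch: estimate a 1/f spectral exponent from the log-log Welch periodogram.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
# synthetic 1/f-like signal: a random walk has a ~1/f**2 spectrum;
# a real source-reconstructed MEG time series would replace this
x = np.cumsum(rng.standard_normal(2 ** 14))

f, pxx = welch(x, fs=256.0, nperseg=1024)
band = (f > 0.5) & (f < 20.0)                  # low-frequency scaling range
slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
print(f"spectral exponent: power ~ 1/f**{-slope:.2f}")
```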
