111

Zipf's Law for Natural Cities Extracted from Location-Based Social Media Data

Wu, Sirui January 2015 (has links)
Zipf’s law is one of the empirical statistical regularities found in many natural systems, ranging from the protein sequences of immune receptors in cells to the intensity of solar flares from the sun. Verifying the universality of Zipf’s law provides many opportunities to seek out commonalities among phenomena that exhibit power-law behavior. Since power-law-like phenomena are, as many previous studies have indicated, often interpreted as evidence of complex systems, exploring the universality of Zipf’s law may also help explain underlying generative mechanisms and endogenous processes, e.g. self-organization and chaos theory. The main purpose of this study was to verify whether Zipf’s law holds for the sizes, numbers and populations of natural cities. Unlike traditional city boundaries derived from census-based, top-down data, which are arbitrary and subjective, this study established a new kind of city boundary, the natural city, using four location-based social media datasets from Twitter, Brightkite, Gowalla and Freebase together with the head/tail breaks rule. To capture and quantify the hierarchical levels of cities across heterogeneous scales, the ht-index derived from the head/tail breaks rule was employed, and the validity of Zipf’s law was then examined. The results revealed subtle deviations among natural cities when different social media datasets were examined. Applying the head/tail breaks method, the calculated ht-index showed that hierarchical levels were influenced less by spatio-temporal changes than by the data themselves. On the other hand, the study found that Zipf’s law is not universal when location-based social media data are used. By comparison with city numbers extracted from nightlight imagery, the study identified why Zipf’s law does not hold for location-based social media data: bias in user behavior. This bias made natural cities emerge far more frequently in certain regions and countries than in others, so that the emergence of natural cities was not represented objectively. Furthermore, the study showed that whether Zipf’s law can be observed depends not only on the data themselves and man-made limitations, but also on calculation methods, data precision and scale, and the idealized status of the observed data.
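The head/tail breaks rule and the ht-index used in this abstract can be sketched in a few lines (an illustrative implementation, not the thesis's own code; the `head_limit` of 0.4 is a commonly used threshold, assumed here): the data are split at the mean, the above-mean "head" is kept if it is a minority, and the split is repeated on the head; the ht-index counts how many such hierarchical levels exist.

```python
# Illustrative sketch (not the thesis code): the head/tail breaks rule
# recursively splits a heavy-tailed list at its mean; the ht-index is the
# number of hierarchical levels obtained before the head stops being a minority.

def head_tail_breaks(values, head_limit=0.4):
    """Return the ht-index of a list of positive values."""
    ht = 1
    data = list(values)
    while len(data) > 1:
        mean = sum(data) / len(data)
        head = [v for v in data if v > mean]
        if not head or len(head) / len(data) > head_limit:
            break  # head is no longer a clear minority: stop splitting
        ht += 1
        data = head
    return ht

# A Zipf-like (rank-size) sequence has many small values and few large ones,
# so it supports several rounds of splitting.
zipf_like = [1000 // r for r in range(1, 101)]
print(head_tail_breaks(zipf_like))
```

A uniform list yields an ht-index of 1 (no hierarchy), while heavy-tailed data such as city sizes yield progressively larger values.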
112

Quantitative vulnerability analysis of electric power networks

Holmgren, Åke J. January 2006 (has links)
Disturbances in the supply of electric power can have serious implications for everyday life as well as for national (homeland) security. A power outage can be initiated by natural disasters, adverse weather, technical failures, human errors, sabotage, terrorism, and acts of war. The vulnerability of a system is described as a sensitivity to threats and hazards, and is measured by P (Q(t) > q), i.e. the probability of at least one disturbance with negative societal consequences Q larger than some critical value q, during a given period of time (0,t]. The aim of the thesis is to present methods for quantitative vulnerability analysis of electric power delivery networks to enable effective strategies for prevention, mitigation, response, and recovery to be developed. Paper I provides a framework for vulnerability assessment of infrastructure systems. The paper discusses concepts and perspectives for developing a methodology for vulnerability analysis, and gives examples related to power systems. Paper II analyzes the vulnerability of power delivery systems by means of statistical analysis of Swedish disturbance data. It is demonstrated that the size of large disturbances follows a power law, and that the occurrence of disturbances can be modeled as a Poisson process. Paper III models electric power delivery systems as graphs. Statistical measures for characterizing the structure of two empirical transmission systems are calculated, and a structural vulnerability analysis is performed, i.e. a study of the connectivity of the graph when vertices and edges are disabled. Paper IV discusses the origin of power laws in complex systems in terms of their structure and the dynamics of disturbance propagation. A branching process is used to model the structure of a power distribution system, and it is shown that the disturbance size in this analytical network model follows a power law. 
Paper V shows how the interaction between an antagonist and the defender of a power system can be modeled as a game. A numerical example is presented, and it is studied whether there exists a dominant defense strategy, and whether there is an optimal allocation of resources between protection of components and recovery.
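The vulnerability measure defined in this abstract, P(Q(t) > q), can be made concrete under the findings of Paper II: if disturbances arrive as a Poisson process and their sizes follow a power law, then large disturbances also arrive as a (thinned) Poisson process. The sketch below uses hypothetical numbers, not the Swedish data, and assumes a Pareto tail for the size distribution.

```python
# Hypothetical numbers, not from the thesis: a sketch of the vulnerability
# measure P(Q(t) > q) when disturbances arrive as a Poisson process (rate lam)
# and sizes follow a Pareto (power-law) tail, as Paper II reports.
import math

def vulnerability(lam, t, q, q_min, alpha):
    """P(at least one disturbance with consequence Q > q in (0, t])."""
    p_large = (q / q_min) ** (1.0 - alpha)   # Pareto tail: P(Q > q)
    rate_large = lam * p_large               # Poisson thinning
    return 1.0 - math.exp(-rate_large * t)

# e.g. 10 disturbances/year, tail exponent 2.5, threshold 100x the minimum size
print(vulnerability(lam=10, t=1.0, q=100.0, q_min=1.0, alpha=2.5))
```

Because the tail exponent enters the exponent of the threshold ratio, a slightly heavier tail (smaller alpha) raises the probability of a large outage substantially, which is why estimating the power-law exponent from disturbance data matters.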
113

Probabilistic Performance Forecasting for Unconventional Reservoirs With Stretched-Exponential Model

Can, Bunyamin 2011 May 1900 (has links)
Reserves estimation in an unconventional-reservoir setting is a daunting task because of geologic uncertainty and the complex flow patterns that evolve in a long, stimulated horizontal well, among other variables. To tackle this complex problem, we present a reserves-evaluation workflow that couples traditional decline-curve analysis with a probabilistic forecasting framework. The stretched-exponential production decline (SEPD) model underpins the production behavior. Our recovery-appraisal workflow has two different applications: forecasting the probabilistic future performance of wells that have a production history, and forecasting production from new wells without production data. For the new-field case, numerical model runs are made in accord with a statistical design of experiments for a range of design variables pertinent to the field of interest. In contrast, for producing wells the early-time data often need adjustments owing to restimulation, installation of artificial lift, etc., in order to isolate the decline trend. Thereafter, production data from either new or existing wells are grouped according to initial rates to obtain common SEPD parameters for similar wells. After determining the distribution of model parameters through well grouping, the methodology establishes a probabilistic forecast for individual wells. We thus present a probabilistic performance-forecasting methodology for unconventional reservoirs, for wells with and without production history. Unlike other probabilistic forecasting tools, grouping wells with similar production character allows estimation of self-consistent SEPD parameters and alleviates the burden of having to define the uncertainties associated with reservoir and well-completion parameters.
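The SEPD model named in this abstract has a simple closed rate equation, q(t) = q_i · exp(−(t/τ)^n). The sketch below (parameter values are illustrative, not from the thesis) evaluates the rate and integrates it numerically to get cumulative production.

```python
# A sketch of the stretched-exponential production-decline (SEPD) rate
# equation, q(t) = q_i * exp(-(t/tau)^n). Parameters here are illustrative.
import math

def sepd_rate(t, q_i, tau, n):
    """Production rate at time t under the SEPD model."""
    return q_i * math.exp(-((t / tau) ** n))

def sepd_cumulative(t, q_i, tau, n, steps=10_000):
    """Cumulative production to time t (trapezoidal integration)."""
    dt = t / steps
    total = 0.0
    for i in range(steps):
        total += 0.5 * (sepd_rate(i * dt, q_i, tau, n)
                        + sepd_rate((i + 1) * dt, q_i, tau, n)) * dt
    return total

print(sepd_rate(0.0, 1000.0, 12.0, 0.5))  # initial rate q_i -> 1000.0
```

A convenient property of the SEPD form is that the total recovery has a closed form, EUR = q_i · (τ/n) · Γ(1/n), so the stretched exponent n directly controls how quickly the forecast converges to a finite reserve.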
114

Self-Thinning Boundary Line and Dynamic Thinning Line in Prince Rupprecht's Larch (Larix principis-rupprechtii Mayr) Stands

XUE, Li, 薛, 立, OGAWA, Kazuharu, 小川, 一治, HAGIHARA, Akio, 萩原, 秋男, HUANG, Baolin, 黄, 宝霊, LONG, Xinmao, 竜, 新毛, CHEN, Biao, 陳, 彪 12 1900 (has links) (PDF)
A PDF file produced by the Agriculture, Forestry and Fisheries Research Information Center is used.
115

Statistical Models for Environmental and Health Sciences

Xu, Yong 01 January 2011 (has links)
Statistical analysis and modeling are useful for understanding the behavior of different phenomena. This study focuses on two areas of application: global warming and cancer research. Global warming is one of the major environmental challenges people face today, and cancer is one of the major health problems people need to solve. For global warming, we investigate two major attributable variables: carbon dioxide (CO2) and atmospheric temperature. We model atmospheric carbon dioxide data with a system of differential equations, developing one differential equation for each of the six attributable variables that constitute atmospheric CO2, along with a differential system for atmospheric CO2 as a whole. Real historical data on the phenomenon are used to develop the analytical form of the equations, and the quality of the developed model is evaluated through a retrofitting process. With such an analytical system we can obtain good estimates of the rate of change of CO2 in the atmosphere, individually and cumulatively, as a function of time for near and far target times; such information is quite useful in strategic planning. We also develop a statistical model that takes into consideration all the identified attributable variables and their corresponding responses in the amount of CO2 in the atmosphere over the continental United States; the model includes interactions and higher-order terms in addition to individual contributions. The proposed model has been statistically evaluated and produces accurate predictions for a given set of attributable variables. Furthermore, we rank the attributable variables with respect to the significance of their contribution to atmospheric CO2.
For the cancer research, the objective is to probabilistically evaluate commonly used methods for the survival analysis of medical patients. Our study includes evaluation of parametric, semi-parametric and nonparametric probability survival models. We evaluate the popular Kaplan-Meier (KM), Cox proportional hazards (Cox PH), and kernel density (KD) models using both Monte Carlo simulation and actual breast cancer data. The first part of the evaluation is based on how these methods measure up to parametric analysis, and the second part uses actual cancer data. As expected, parametric survival analysis, when applicable, gives the best results, followed by the less commonly used nonparametric kernel density approach, in both the simulation and the actual-data evaluations. We then develop a statistical model for predicting breast cancer tumor size for United States patients based on real uncensored data. When breast cancer tumor sizes are simulated, they are most often randomly generated; we instead construct a statistical model that generates tumor sizes as close as possible to the real patients' data, given other related information. We accomplish this by developing a high-quality statistical model that identifies the significant attributable variables and interactions, and we rank these contributing entities according to their percentage contribution to breast cancer tumor growth. The proposed model can also be used in response surface analysis to identify the restrictions on the significant attributable variables and their interactions needed to minimize tumor size. We further utilize the power law process, also known as the non-homogeneous Poisson process with Weibull intensity, to evaluate the effectiveness of a given treatment for Stage I & II ductal breast cancer patients, using the shape parameter of the intensity function to characterize a treatment's effectiveness. Finally, we develop a differential equation that characterizes the behavior of the tumor as a function of time; its solution, once plotted, identifies the rate of change of tumor size as a function of age. The structure of the differential equation consists of the significant attributable variables and their interactions with breast cancer tumor growth. Having developed the differential equation and its solution, we proceed to validate the quality and usefulness of the proposed equation.
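The Kaplan-Meier estimator evaluated in this abstract is short enough to sketch directly (an illustrative implementation with toy data, not the thesis's analysis): the survival curve is the product over event times t_i of (1 − d_i/n_i), where d_i events occur among n_i subjects still at risk, and censored observations reduce the risk set without forcing a step.

```python
# A minimal Kaplan-Meier sketch (illustrative, not the thesis code).

def kaplan_meier(times, events):
    """times: observation times; events: 1 = event, 0 = censored.
    Returns the survival curve as a list of (time, S(t)) steps."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= removed  # censored subjects leave the risk set silently
    return curve

# Toy data: events at t=1, 2, 4; one censored observation at t=3.
print(kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1]))
```

The censored subject at t=3 shrinks the risk set from 2 to 1, which is why the final event at t=4 drops the curve all the way to zero.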
116

Modeling issues and approximation of the chromatic number in scale-free networks

Δαγκλής, Οδυσσέας 20 October 2009 (has links)
Networks that exhibit a certain property irrespective of their size and density are called scale-free. In many real-life networks this property coincides with a power-law distribution of the node degrees, with exponent in the range [2..4]. This work presents three static models for constructing scale-free networks with the above property, based on the dynamic Barabási-Albert model, and attempts to approximate their chromatic number experimentally.
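The experiment described here can be illustrated with a minimal sketch (hypothetical, and not one of the thesis's three static models): grow a simplified Barabási-Albert preferential-attachment graph, then upper-bound its chromatic number with a largest-degree-first greedy coloring.

```python
# Illustrative sketch: a simplified Barabási-Albert growth process plus a
# greedy coloring that upper-bounds the chromatic number.
import random

def barabasi_albert(n, m, seed=42):
    """Grow an n-node graph; each new node attaches to ~m existing nodes,
    chosen with probability proportional to degree (via a weighted list)."""
    random.seed(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))   # start from m seed nodes
    repeated = []              # node list weighted by degree
    for v in range(m, n):
        for u in set(targets): # duplicates in the sample collapse, so the
            adj[v].add(u)      # new node may get slightly fewer than m edges
            adj[u].add(v)
            repeated += [u, v]
        targets = random.sample(repeated, m)
    return adj

def greedy_chromatic(adj):
    """Color nodes in descending-degree order; returns the color count."""
    color = {}
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return max(color.values()) + 1

g = barabasi_albert(200, 3)
print(greedy_chromatic(g))  # an upper bound on the chromatic number
```

Greedy coloring only gives an upper bound, which is why experimental studies of the chromatic number on scale-free graphs typically compare several orderings or heuristics.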
117

Comparison of Empirical Decline Curve Analysis for Shale Wells

Kanfar, Mohammed Sami 16 December 2013 (has links)
This study compares four recently developed decline-curve methods and the traditional Arps/Fetkovich approach. The four methods, which are empirically formulated for shale and tight gas wells, are: 1. Power Law Exponential decline (PLE). 2. Stretched Exponential Production Decline (SEPD). 3. The Duong method. 4. The Logistic Growth Model (LGM). Each method has different tuning parameters and equation forms. The main objective of this work is to determine the best method(s) in terms of Estimated Ultimate Recovery (EUR) accuracy, goodness of fit, and ease of matching. In addition, the methods are compared against each other at different production times in order to understand the effect of production time on forecasts. As part of the validation process, all methods are benchmarked against simulation. This study compares the decline methods to four simulation cases that represent the common shale declines observed in the field. Shale wells, which are completed as horizontal wells with multiple transverse, highly conductive hydraulic fractures, exhibit long transient linear flow. In certain models, linear flow is preceded by bilinear flow if natural fractures are present, and linear flow is succeeded by Boundary Dominated Flow (BDF) decline when the pressure wave reaches the boundary. This means four decline behaviors are possible, hence four simulation cases are required for comparison. To facilitate automatic data fitting, a non-linear regression program was developed using Excel VBA. The program minimizes the least-squares (LS) objective function to find the best fit, using the Levenberg-Marquardt algorithm (LMA) for its robustness and ease of use. This work shows that the methods forecast different EURs and that some fit certain simulation cases better than others. In addition, no method can forecast EUR accurately without reaching BDF.
Using this work, engineers can choose the best method to forecast EUR after identifying the simulation case that is most analogous to their field wells. The VBA program and the matching procedure presented here can help engineers automate these methods in their forecasting sheets.
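One of the four methods compared, the Duong method, is convenient to sketch because its diagnostic relation q/Gp = a·t^(−m) becomes linear in log-log space, so the parameters can be fitted with closed-form least squares rather than iterative LMA. The sketch below uses synthetic data, not the thesis's simulation cases.

```python
# Illustrative sketch (not the thesis's VBA program): fitting the Duong
# diagnostic relation q/Gp = a * t^(-m) by log-log ordinary least squares.
import math

def fit_duong(t, q, gp):
    """Fit log(q/Gp) = log(a) - m*log(t); return (a, m)."""
    x = [math.log(ti) for ti in t]
    y = [math.log(qi / gi) for qi, gi in zip(q, gp)]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
            / sum((xi - xbar) ** 2 for xi in x)
    return math.exp(ybar - slope * xbar), -slope

# Synthetic data generated exactly from the model: the fit recovers a and m.
t = [1.0, 2.0, 4.0, 8.0, 16.0]
ratio = [1.5 * ti ** -1.2 for ti in t]      # q/Gp values with a=1.5, m=1.2
q = [100.0] * len(t)                        # arbitrary constant rates
gp = [qi / ri for qi, ri in zip(q, ratio)]  # back out cumulative Gp
a, m = fit_duong(t, q, gp)
print(round(a, 3), round(m, 3))  # -> 1.5 1.2
```

On real well data the log-log points scatter around the line, and the quality of that straight-line fit is itself a diagnostic of whether the well is still in the transient linear-flow regime the Duong method assumes.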
118

Étude par spectroscopie résolue en temps des mécanismes de séparation de charges dans des mélanges photovoltaïques

Gélinas, Simon January 2009 (has links)
Mémoire numérisé par la Division de la gestion de documents et des archives de l'Université de Montréal
119

ERF and scale-free analyses of source-reconstructed MEG brain signals during a multisensory learning paradigm

Zilber, Nicolas 10 March 2014 (has links) (PDF)
The analysis of human brain activity in magnetoencephalography (MEG) is generally conducted in one of two ways: either by focusing on the average response evoked by a stimulus repeated over time, commonly known as an ''event-related field'' (ERF), or by decomposing the signal into functionally relevant oscillatory or frequency bands (such as alpha, beta or gamma). However, the major part of brain activity is arrhythmic, and these approaches fail to describe its complexity, particularly in the resting state. As an alternative, analysis of the 1/f-type power spectrum observed at very low frequencies, a hallmark of scale-free dynamics, can overcome these issues. Yet it remains unclear whether this scale-free property is functionally relevant and whether its fluctuations matter for behavior. To address this question, our first concern was to establish a visual learning paradigm that would entail functional plasticity during an MEG session. To optimize the training effects, we developed new audiovisual (AV) stimuli (an acoustic texture paired with a colored visual motion) that induced multisensory integration and indeed improved learning compared with visual training alone (V) or visual training accompanied by acoustic noise (AVn). This led us to investigate the neural correlates of these three types of training, first with a classical method, the ERF analysis. After source reconstruction on each individual cortical surface using MNE-dSPM, the network involved in the task was identified at the group level. The selective plasticity observed in the human motion area (hMT+) correlated across individuals with the behavioral improvement and was supported in AV by a larger network comprising multisensory areas. On the basis of these findings, we further explored the links between behavior and the scale-free properties of the same source-reconstructed MEG signals. Although most studies restrict their analysis to the global measure of self-similarity (i.e. long-range fluctuations), we also considered local fluctuations (i.e. multifractality) using the Wavelet Leader Based Multifractal Formalism (WLBMF). We found intertwined modulations of self-similarity and multifractality in the same cortical regions as those revealed by the ERF analysis. Most strikingly, the degree of multifractality observed in each individual converged during training towards a single attractor that reflected the asymptotic behavioral performance in hMT+. Finally, these findings and their associated methodological issues are compared with those from the ERF analysis.
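The global self-similarity measure mentioned in this abstract can be illustrated with a classic quick estimator (a sketch, not the WLBMF used in the thesis): in the aggregated-variance method, the variance of block means of a self-similar series scales as m^(2H−2) with block size m, so the Hurst exponent H falls out of a log-log slope.

```python
# Illustrative sketch (not the thesis's wavelet-leader analysis): estimate
# the self-similarity (Hurst) exponent H via the aggregated-variance method,
# where Var(block means of size m) ~ m^(2H - 2).
import math, random

def aggregated_variance_H(x, block_sizes):
    """Estimate H from the log-log slope of block-mean variance vs m."""
    logs_m, logs_v = [], []
    for m in block_sizes:
        means = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        mu = sum(means) / len(means)
        var = sum((b - mu) ** 2 for b in means) / len(means)
        logs_m.append(math.log(m))
        logs_v.append(math.log(var))
    n = len(logs_m)
    xbar, ybar = sum(logs_m) / n, sum(logs_v) / n
    slope = sum((a - xbar) * (b - ybar) for a, b in zip(logs_m, logs_v)) \
            / sum((a - xbar) ** 2 for a in logs_m)
    return 1.0 + slope / 2.0   # slope = 2H - 2

# White noise has no long-range dependence: H should come out near 0.5.
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(20000)]
print(aggregated_variance_H(noise, [10, 20, 40, 80, 160]))
```

For genuinely long-range-dependent signals (H > 0.5), such as scale-free MEG fluctuations, the block-mean variance decays more slowly than 1/m, which is exactly what this slope detects.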
120

Efeitos de topologia em sistemas biológicos / Effects of topology in biological systems

Claudino, Elder de Souza 25 February 2013 (has links)
In this work we analyse two problems from theoretical biology. In the first part we propose a spatially structured population model defined on a continuous lattice. In the model, individuals disperse at a constant rate v, and competition is local, delimited by the competition radius R; due to dispersal, the neighborhood size fluctuates over time. We analyse how these variables affect the adaptive process. While the fixation probabilities of beneficial mutations are roughly the same as in a panmictic population for small and intermediate fitness effects s, a dependence on v and R appears for large s. These quantities also strongly influence fixation times. The model exhibits a dual behavior: it displays power-law growth of the fixation rate and the speed of adaptation with the beneficial mutation rate, as observed in spatially structured population models, while simultaneously showing non-saturating behavior of the speed of adaptation with population size. In the second part we numerically study the dynamics of model immune networks with random and scale-free topologies. We observe that a memory state is reached when the antigen attaches to the most connected sites of the network, whereas a percolation state may occur when the antigen attaches to the less connected sites. For increasing values of the connectivity, the attached population converges exponentially to the asymptotic value of the memory state, while the next-nearest populations evolve slowly, as power laws, towards the virgin-like state. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
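The panmictic baseline against which the abstract compares its fixation probabilities is the classic Moran process, where a single mutant of relative fitness r fixes with probability (1 − 1/r)/(1 − 1/r^N). The sketch below (illustrative, not the thesis's spatial model) checks the formula by simulation.

```python
# Illustrative sketch: fixation probability of a beneficial mutation in the
# panmictic Moran process, p_fix = (1 - 1/r) / (1 - 1/r^N).
import random

def moran_fixation(N, r, trials=10000, seed=1):
    """Fraction of runs in which a single mutant (fitness r) fixes."""
    random.seed(seed)
    fixed = 0
    for _ in range(trials):
        mutants = 1
        while 0 < mutants < N:
            # reproducer chosen proportionally to fitness, dier uniformly
            p_mut_repro = mutants * r / (mutants * r + (N - mutants))
            birth = 1 if random.random() < p_mut_repro else 0
            death = 1 if random.random() < mutants / N else 0
            mutants += birth - death
        fixed += (mutants == N)
    return fixed / trials

N, r = 20, 1.5
analytic = (1 - 1 / r) / (1 - r ** -N)
print(moran_fixation(N, r), analytic)
```

For large s (here, large r − 1), the spatial model in the abstract deviates from this well-mixed benchmark, which is exactly the dependence on v and R that the thesis reports.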
