31

Semi-empirical Probability Distributions and Their Application in Wave-Structure Interaction Problems

Izadparast, Amir Hossein, December 2010
In this study, a semi-empirical approach is introduced to accurately estimate the probability distribution of complex non-linear random variables in the field of wave-structure interaction. The structural form of the semi-empirical distribution is developed from a mathematical representation of the process, and the model parameters are estimated directly from the sample data. Three probability distributions are developed based on the quadratic transformation of a linear random variable. Assuming that the linear process follows a standard Gaussian distribution, the three-parameter Gaussian-Stokes model is derived for second-order variables. Similarly, the three-parameter Rayleigh-Stokes model and the four-parameter Weibull-Stokes model are derived for the crests, troughs, and heights of the non-linear process, assuming that the linear variable has a Rayleigh or a Weibull distribution. The model parameters are estimated empirically with the conventional method of moments and the newer method of L-moments. Furthermore, the application of the semi-empirical models to extreme-value analysis and the estimation of extreme statistics is discussed. As a main part of this research, the sensitivity of the model statistics to variability in the model parameters and in the samples is evaluated, and the effect of sample size on the performance of the parameter estimation methods is studied. Illustrative examples apply the semi-empirical distributions to the probability distribution of: wave elevations and wave crests of ocean waves, both in the open ocean and in the area close to an offshore structure; wave run-up over the vertical columns of an offshore structure; and ocean wave power resources. In each example, the performance of the semi-empirical model is compared with appropriate theoretical and empirical distribution models. The semi-empirical models prove successful in capturing the probability distribution of complex non-linear variables: they are more flexible than the theoretical models and generally more robust than the commonly used empirical models.
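
To make the moment-matching step concrete, here is a minimal Python sketch (not the author's code) that fits a quadratic Stokes-type transform of a Rayleigh linear variable by the conventional method of moments. The parameterization y = a + b·x + c·x² and all numeric values are assumptions chosen for the example; an L-moment variant would simply replace the moment targets with sample L-moments.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy import optimize, stats
from scipy.special import gamma

def rayleigh_raw_moment(k):
    # E[X^k] for X ~ Rayleigh(scale=1): 2**(k/2) * Gamma(1 + k/2)
    return 2 ** (k / 2) * gamma(1 + k / 2)

def expected_poly(coeffs):
    # E[p(X)] for a polynomial p(x) = sum_k coeffs[k] * x**k
    return sum(c * rayleigh_raw_moment(k) for k, c in enumerate(coeffs))

def model_moments(params):
    # closed-form mean, variance and skewness of Y = a + b*X + c*X**2
    p = np.asarray(params)
    m1 = expected_poly(p)
    m2 = expected_poly(P.polymul(p, p))
    m3 = expected_poly(P.polymul(P.polymul(p, p), p))
    var = m2 - m1 ** 2
    skew = (m3 - 3 * m1 * var - m1 ** 3) / var ** 1.5
    return np.array([m1, var, skew])

def fit_method_of_moments(sample):
    # solve the three moment-matching equations numerically
    target = np.array([sample.mean(), sample.var(), stats.skew(sample)])
    sol = optimize.least_squares(lambda p: model_moments(p) - target,
                                 x0=[0.0, 1.0, 0.1])
    return sol.x

rng = np.random.default_rng(0)
x = stats.rayleigh.rvs(size=50_000, random_state=rng)
crests = 0.1 + 1.0 * x + 0.15 * x ** 2      # synthetic non-linear crests
print(fit_method_of_moments(crests))        # should approach (0.1, 1.0, 0.15)
```
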
32

Reliability Cost Model Design and Worth Analysis for Distribution System Planning

Yang, Chin-Der, 29 May 2002
Reliability worth analysis is an important tool for distribution system planning and operations. The interruption cost model used in the analysis directly affects the accuracy of the reliability worth evaluation. In this dissertation, reliability worth analysis is carried out in two phases with two interruption cost models: an average or aggregated model (AAM) and a probabilistic distribution model (PDM). In the first phase, the dissertation presents an AAM-based reliability cost model for distribution system planning, derived as a linear function of line flows for evaluating outages. The objective is to minimize the total cost, including the outage cost, feeder resistive loss, and fixed investment cost. Evolutionary Programming (EP) is used to solve this complicated mixed-integer, highly non-linear, and non-differentiable problem; EP also offers a better chance of reaching the global optimum. A real distribution network is modeled as the sample system for tests. In the second phase, the PDM interruption cost model is built with a radial basis function (RBF) neural network trained by the orthogonal least-squares (OLS) method, which integrates the residential and industrial interruption costs. A Monte Carlo time-sequential simulation technique is adopted for worth assessment and is tested by evaluating the reliability worth of a Taipower system for the installation of disconnect switches, lateral fuses, transformers, and alternative supplies. The results show that the two cost models yield very different interruption costs, and that the PDM may be more realistic in modeling the system.
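
The contrast between the two cost models can be sketched in a few lines of Python. This is a hedged illustration only: the failure rate, repair time, load, and cost figures are invented placeholders rather than Taipower data, and a lognormal damage-cost spread merely stands in for the RBF-based PDM.

```python
import numpy as np

rng = np.random.default_rng(42)
N_YEARS = 10_000        # simulated years (all values are placeholders)
FAIL_RATE = 0.8         # assumed feeder failures per year
MEAN_REPAIR_H = 4.0     # assumed mean repair duration, hours
LOAD_MW = 2.5           # assumed average interrupted load
AVG_COST = 60.0         # AAM: single aggregated damage cost per MWh

def pdm_unit_cost():
    # PDM: damage cost drawn from a spread around the average, mimicking
    # the mix of residential and industrial interruption costs
    return rng.lognormal(mean=np.log(AVG_COST), sigma=0.9)

total = {"AAM": 0.0, "PDM": 0.0}
for _ in range(N_YEARS):
    for _ in range(rng.poisson(FAIL_RATE)):          # outages this year
        hours = rng.exponential(MEAN_REPAIR_H)
        energy = LOAD_MW * hours                     # undelivered MWh
        total["AAM"] += energy * AVG_COST
        total["PDM"] += energy * pdm_unit_cost()

for model, cost in total.items():
    print(f"{model}: mean annual interruption cost = {cost / N_YEARS:,.0f}")
```

Because the lognormal's mean exceeds its median, the two models diverge even in expectation, echoing the finding that the choice of cost model materially changes the evaluated interruption cost.
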
33

Age Dependent Analysis and Modeling of Prostate Cancer Data

Bonsu, Nana Osei Mensa, 1 January 2013
The growth rate of a prostate cancer tumor is an important aspect of understanding the natural history of the disease. Using real prostate cancer data from the SEER database, with tumor size as the response variable, we cluster the cancerous tumor sizes into age groups to improve their analytical behavior. The rate of change of the response variable as a function of age is given for each cluster, and residual analysis attests to the quality of the analytical model and the resulting estimates. In addition, we identify the probability distribution that characterizes the behavior of the response variable and proceed with basic parametric analysis. Several treatment options are available to prostate cancer patients; in this study we consider the three most commonly used: radiation therapy, surgery, and the combination of surgery and radiation therapy. Using SEER data, we evaluate and rank the effectiveness of these treatment options through survival analysis in conjunction with basic parametric analysis, with the evaluation based on the stage classification of the prostate cancer. Improvement in prostate cancer can be measured by improvement in its mortality, and mortality projection is crucial for policy makers and for the financial stability of the insurance business. Our research applies a parametric model proposed by Renshaw et al. (1996) to project the force of mortality for prostate cancer; the proposed modeling structure captures both age and year effects.
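
The distribution-identification step can be illustrated with a short, self-contained sketch: fit a few candidate distributions by maximum likelihood and rank them by AIC. The data below are synthetic stand-ins, not SEER records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic tumor sizes (mm) for one age cluster
tumor_size = stats.lognorm.rvs(s=0.5, scale=20, size=500, random_state=rng)

candidates = {"lognorm": stats.lognorm,
              "gamma": stats.gamma,
              "weibull_min": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(tumor_size)                     # maximum likelihood
    loglik = np.sum(dist.logpdf(tumor_size, *params))
    aic = 2 * len(params) - 2 * loglik                # lower is better
    print(f"{name:12s} AIC = {aic:.1f}")
```
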
34

Development of reliable pavement models

Aguiar Moya, José Pablo, 13 October 2011
As the cost of designing and building new highway pavements increases and the number of new construction and major rehabilitation projects decreases, ensuring that a given pavement design performs as expected in the field becomes vital. In other fields of civil engineering this issue has been addressed extensively with reliability analysis; in pavement structural design, however, the reliability component is usually neglected or overly simplified. To address this need, the current dissertation proposes a framework for estimating the reliability of a given pavement structure regardless of the pavement design or analysis procedure being used. The framework is applied with the Mechanistic-Empirical Pavement Design Guide (MEPDG), with failure defined as a function of rutting of the hot-mix asphalt (HMA) layer. The proposed methodology fits a response surface in place of the time-demanding implicit limit-state functions used within the MEPDG, combined with analytical reliability estimation using second-moment techniques (First-Order and Second-Order Reliability Methods, FORM and SORM) and simulation techniques (Monte Carlo and Latin Hypercube simulation). To demonstrate the methodology, a three-layered pavement structure is selected, consisting of a hot-mix asphalt surface, a base layer, and subgrade. Several pavement design variables are treated as random: HMA and base layer thicknesses, base and subgrade moduli, and HMA binder and air-void content. Information on the variability of and correlation between these variables is obtained from the Long-Term Pavement Performance (LTPP) program, from which likely distributions, coefficients of variation, and correlations are estimated. Additionally, several scenarios are defined to account for climatic differences (cool, warm, and hot climatic regions), truck traffic distributions (mostly single-unit trucks versus mostly single-trailer trucks), and the thickness of the HMA layer (thick versus thin). First- and second-order polynomial HMA rutting failure response surfaces with interaction terms are fit by running the MEPDG under a full factorial experimental design with three levels of the aforementioned design variables; these response surfaces are then used to analyze the reliability of the pavement structures under the different scenarios. To check the accuracy of the proposed framework, direct simulation with the MEPDG was performed for the different scenarios; very small differences were found between the response-surface estimates and direct simulation, confirming the accuracy of the proposed procedure. Finally, a sensitivity analysis on the number of MEPDG runs required to fit the response surfaces showed that reducing the experimental design by one level still yields response surfaces that properly fit the MEPDG, ensuring the method's applicability in practice.
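
A toy version of the response-surface idea, with an invented one-line stand-in for the MEPDG and assumed variable distributions, might look like this:

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(7)

def slow_rutting_model(x):
    # stand-in for a time-consuming MEPDG run (entirely fictitious)
    h_hma, e_base = x
    return 15.0 - 0.8 * h_hma - 0.002 * e_base + 0.01 * h_hma ** 2

def design_matrix(X):
    # full quadratic surface: 1, x1, x2, x1^2, x1*x2, x2^2
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    for i, j in combinations_with_replacement(range(X.shape[1]), 2):
        cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

# three-level full factorial "experiment" over thickness and base modulus
grid = np.array([[h, e] for h in (4, 6, 8) for e in (200, 300, 400)])
y = np.array([slow_rutting_model(x) for x in grid])
beta, *_ = np.linalg.lstsq(design_matrix(grid), y, rcond=None)

# Monte Carlo on the cheap fitted surface instead of the slow model
X = np.column_stack([rng.normal(6, 0.5, 100_000),     # HMA thickness
                     rng.normal(300, 40, 100_000)])   # base modulus
rut = design_matrix(X) @ beta
print("P(rut depth > 12) ~", np.mean(rut > 12.0))
```

FORM/SORM would replace the final Monte Carlo step with an analytical search for the most probable failure point on the same fitted surface.
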
35

Distributed Photovoltaics, Household Electricity Use and Electric Vehicle Charging: Mathematical Modeling and Case Studies

Munkhammar, Joakim, January 2015
Technological improvements along with falling prices of photovoltaic (PV) panels and electric vehicles (EVs) suggest that both may become more common in the future. The introduction of distributed PV power production and EV charging has a considerable impact on the power system, in particular at the end-user level of the electricity grid. In this PhD thesis, PV power production, household electricity use and EV charging are investigated at different system levels. The methodologies used are interdisciplinary, but the main contributions are mathematical modeling, simulation and data analysis of these three components and their interactions. Models for estimating PV power production, household electricity use, EV charging and their combination are developed using measured data together with stochastic modeling based on Markov chains and probability distributions. Additionally, data on PV power production and EV charging from eight solar charging stations are analyzed. Results show that the clear-sky index for PV power production applications can be modeled via a bimodal Normal probability distribution, that household electricity use can be modeled via either Weibull or Log-normal probability distributions, and that EV charging can be modeled by Bernoulli probability distributions. Complete models of PV power production, household electricity use and EV home-charging are developed with both Markov chain and probability distribution modeling, and EV home-charging is shown to work as an extension to the Widén Markov chain model for generating synthetic household electricity use patterns. Analysis of measurements from the solar charging stations shows a wide variety of EV charging patterns. Additionally, an alternative approach to modeling the clear-sky index is introduced and shown to yield a generalized Ångström equation relating solar irradiation to the duration of bright sunshine. Analysis of the total consumption/production patterns of PV power production, household electricity use and EV home-charging at the end-user level highlights the dependency between the components and quantifies the mismatch between distributed intermittent power production and consumption; at an aggregate level of many households, the mismatch is shown to be lower.
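
A minimal sketch of the three modeling ingredients named above, with invented parameter values (mixture weights, load median, charging probability, rated powers), could look as follows:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 24 * 365                       # hourly steps over one year

# PV: bimodal-Normal clear-sky index (cloudy vs. clear mode), clipped
cloudy = rng.random(N) < 0.4
csi = np.where(cloudy, rng.normal(0.35, 0.12, N), rng.normal(0.95, 0.06, N))
csi = np.clip(csi, 0.0, 1.2)
diurnal = np.maximum(0.0, np.sin(np.pi * (np.arange(N) % 24) / 24))  # crude
pv_kw = 3.0 * csi * diurnal

# Household load: Log-normal around a 0.5 kW median
load_kw = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=N)

# EV home-charging: Bernoulli on/off each hour at 3.7 kW rated power
ev_kw = 3.7 * (rng.random(N) < 0.08)

net_kw = load_kw + ev_kw - pv_kw   # positive = import from the grid
print(f"mean net load {net_kw.mean():.2f} kW, "
      f"{int((net_kw < 0).sum())} exporting hours")
```
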
36

Polygon design and ore-block sequencing for short-term mine planning seeking grade stationarity

Toledo, Augusto Andres Torres, January 2018
Short-term planning in open-pit mines requires the definition of polygons representing the successive mining advances. These polygons are traditionally drawn in a labor-intensive process that attempts to delineate ore of the required quality and quantity within established limits. The delineated ore should show the lowest possible quality variability, so as to maximize recovery at the processing plant. This thesis develops a workflow for defining short-term polygons automatically and for sequencing all ore blocks within each polygon, leading to a mineable, interconnected sequence of polygons. The workflow is also tested under grade uncertainty obtained from multiple stochastic simulated models. Genetic algorithms were developed in the Python programming language and implemented as a plug-in to the Ar2GeMS geostatistical software. Multiple iterations are generated for each individual advance, producing candidate regions (polygons), and the region with the lowest grade variability is selected. The probability distribution of block grades within each advance is compared with the global grade distribution computed over all blocks of the ore body. Results show that the polygons generated this way contain block grades similar to the reference distribution, yielding a mining sequence whose grade distribution stays as close as possible to the global one (quasi-stationarity). Equally probable models provide the means to assess the uncertainty of the proposed solution.
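
A stripped-down illustration of the selection idea, not the thesis's Ar2GeMS plug-in: a tiny genetic algorithm searching for a fixed-size advance whose grade distribution is closest to the global block-grade distribution, measured by the two-sample Kolmogorov-Smirnov statistic. Block grades are synthetic, and a real implementation would add geometric mineability constraints.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
grades = rng.lognormal(0.0, 0.5, 400)    # synthetic block-model grades
K, POP, GENS = 60, 40, 150               # blocks per advance, GA settings

def fitness(idx):
    # distance between the advance's grades and the global distribution
    return stats.ks_2samp(grades[idx], grades).statistic

pop = [rng.choice(len(grades), K, replace=False) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    elite = pop[: POP // 2]
    children = []
    for _ in range(POP - len(elite)):
        a, b = rng.choice(len(elite), 2, replace=False)
        pool = np.union1d(elite[a], elite[b])         # crossover
        child = rng.choice(pool, K, replace=False)
        if rng.random() < 0.3:                        # mutation: swap a block
            outside = np.setdiff1d(np.arange(len(grades)), child)
            child[rng.integers(K)] = rng.choice(outside)
        children.append(child)
    pop = elite + children

best = min(pop, key=fitness)
print("best K-S distance to the global grade distribution:", fitness(best))
```
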
37

Mathematical models in finance: historical-scientific development and risks associated with structural premises

Emerson Tadeu Gonçalves Rici, 18 October 2007
This work studies the origins of research on risk management and its applications to capital markets, including the Brazilian market. It highlights important statistical characteristics of these studies and some basic probabilistic premises, and questions the indiscriminate use of the mathematical models developed for finance. Several types of statistical distributions that can be applied to capital markets are presented. The research also covers characteristics of complex systems, Bernoulli's utility theory, the Expected Utility Theory (EUT) of Von Neumann and Morgenstern (1944), the Efficient Markets Hypothesis (EMH) organized and systematized by Eugene Fama (1970), the bounded rationality studied by Simon (1959), the behavioral finance of Kahneman and Tversky (1979), and the use of models as discussed by Merton (1994). As an illustration, an empirical study of the Brazilian market, represented by the BOVESPA index (Ibovespa), is compared with the results obtained by Gabaix (2003) for the American market, in order to verify the probability distribution of returns. This empirical exercise reinforces the importance of reflecting on the indiscriminate use of models and on violations of their premises.
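
The empirical side can be hinted at with a small sketch: excess kurtosis flags departure from the Gaussian premise, and a simple Hill estimator approximates the tail exponent that Gabaix (2003) reports to be near 3 for equity returns. Synthetic Student-t data stand in here for actual Ibovespa returns.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
returns = 0.01 * stats.t.rvs(df=3, size=20_000, random_state=rng)  # fat tails

print("excess kurtosis:", stats.kurtosis(returns))  # ~0 under Gaussianity

def hill_tail_index(x, k=500):
    # Hill estimator over the k largest absolute returns
    tail = np.sort(np.abs(x))[-k:]
    return 1.0 / np.mean(np.log(tail / tail[0]))

print("Hill tail index:", hill_tail_index(returns))  # near df=3 here
```
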
39

Study of the variation of the permanent preservation area of the Orós reservoir (Ceará) associated with changes in the Forest Code

Thomas Lívio Santos Coelho, 18 February 2016
The study of the variation of the permanent preservation area (APP) of the Orós reservoir used the ArcGIS 10.2 software for computational modeling. The delimitation was carried out according to the legal references of the current environmental legislation (Law 12,651/2012) and of the previous one (Law 4,771/65) for comparison. For the altimetric reference established by the current legislation, two elevations were used: the maximum maximorum level of the original 1960 project, and a maximum maximorum level modeled statistically from current rainfall data. For the altimetric reference established by the previous legislation, a 100-meter buffer was modeled from the maximum normal operating level, allowing the restrictiveness or permissiveness of the legislation in force to be compared. The update of the maximum maximorum level for determining the APP of the Orós reservoir was carried out through statistical procedures using probability distributions, of which eight types were selected: three-parameter Weibull, smallest extreme value, Weibull, Logistic, Normal, Gamma, Log-normal, and largest extreme value. Their performance was evaluated at a 95% confidence level, with goodness of fit assessed by the Anderson-Darling method; the three-parameter Weibull distribution achieved the best overall performance, modeling a maximum maximorum level of 208 meters, one meter above that established by the original project.
With the references established, the computational modeling of the APPs showed that the current legislation, referenced by the maximum maximorum level of the original DNOCS project, is less restrictive than the previous legislation: it delimits an area 26% smaller than that set by the previous legislation, changing the territorial definition of the municipalities bordering the reservoir. Gains of APP were thus observed in some municipalities and losses in others, a process that does not occur homogeneously.
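
The distribution-selection step can be sketched as follows, with synthetic level data and only a few of the eight candidate families (the three-parameter Weibull is scipy's weibull_min with a free location parameter):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)
# synthetic annual maximum water levels (m), standing in for Orós data
annual_max = stats.weibull_min.rvs(c=2.2, loc=196, scale=9, size=55,
                                   random_state=rng)

def anderson_darling(dist, params, x):
    # A^2 = -n - (1/n) * sum (2i-1) [ln F(x_i) + ln(1 - F(x_{n-i+1}))]
    x = np.sort(x)
    n = len(x)
    cdf = np.clip(dist.cdf(x, *params), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(cdf) + np.log(1 - cdf[::-1])))

candidates = [stats.weibull_min, stats.gumbel_r, stats.lognorm, stats.gamma]
best = min(candidates,
           key=lambda d: anderson_darling(d, d.fit(annual_max), annual_max))
params = best.fit(annual_max)
print(best.name, "-> design level:", round(best.ppf(0.99, *params), 1), "m")
```
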
40

Monthly precipitation quantiles in the State of Paraná using multivariate clustering techniques

Pansera, Wagner Alessandro, 1 February 2010
Rainfall is the only input of water to river basins, so knowledge of its spatial and temporal behavior, as well as its frequency, is vital for the planning, operation, and design of hydraulic works. To this end, probabilistic models are built from observations made at the site of interest in order to estimate future risks. Often, however, a hydrological monitoring network has no data for the site of interest, and procedures are needed to transfer this information. Hydrological regionalization is a statistical procedure that allows occurrence probabilities at a site of interest to be estimated, within a homogeneous region, from the data of neighboring stations. The problem lies in how to define a homogeneous region. The solution proposed in this work is the use of multivariate clustering methods, k-means and hierarchical clustering, validated by cluster-quality indices. Data from 227 stations located in the state of Paraná, with monthly records for the period 1976-2006, were used. The resulting clusters were then subjected to discordancy and heterogeneity measures to evaluate the homogeneity of the groups. The clustering methodology with the best performance was a hybrid of k-means and Ward's method, with errors of at most 10% in the estimation of dimensionless regional quantiles.
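
One plausible reading of the hybrid approach, sketched with synthetic station features standing in for statistics of the 227 Paraná stations: run Ward's hierarchical clustering first, then use its group centroids to seed k-means.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(10)
# per-station features, e.g. mean, CV, skewness, seasonality index
features = rng.normal(size=(227, 4))

K = 6  # assumed number of homogeneous regions
ward_labels = fcluster(linkage(features, method="ward"), K,
                       criterion="maxclust")
seeds = np.array([features[ward_labels == k].mean(axis=0)
                  for k in range(1, K + 1)])
centroids, labels = kmeans2(features, seeds, minit="matrix")
print("stations per candidate region:", np.bincount(labels))
```

Each candidate region would then be screened with the discordancy and heterogeneity measures before regional dimensionless quantiles are estimated.
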
