21 |
Recovery and Analysis of Regulatory Networks from Expression Data Using Sums of Separable Functions. Botts, Ryan T. 22 September 2010 (has links)
No description available.
|
22 |
Noninformative Prior Bayesian Analysis for Statistical Calibration Problems. Eno, Daniel R. 24 April 1999 (has links)
In simple linear regression, it is assumed that two variables are linearly related, with unknown intercept and slope parameters. In particular, a regressor variable is assumed to be precisely measurable, and a response is assumed to be a random variable whose mean depends on the regressor via a linear function. For the simple linear regression problem, interest typically centers on estimation of the unknown model parameters, and perhaps application of the resulting estimated linear relationship to make predictions about future response values corresponding to given regressor values. The linear statistical calibration problem (or, more precisely, the absolute linear calibration problem) bears a resemblance to simple linear regression. It is still assumed that the two variables are linearly related, with unknown intercept and slope parameters. However, in calibration, interest centers on estimating an unknown value of the regressor, corresponding to an observed value of the response variable.
We consider Bayesian methods of analysis for the linear statistical calibration problem, based on noninformative priors. Posterior analyses are assessed and compared with classical inference procedures. It is shown that noninformative prior Bayesian analysis is a strong competitor, yielding posterior inferences that can, in many cases, be correctly interpreted in a frequentist context.
We also consider extensions of the linear statistical calibration problem to polynomial models and multivariate regression models. For these models, noninformative priors are developed, and posterior inferences are derived. The results are illustrated with analyses of published data sets. In addition, a certain type of heteroscedasticity is considered, which relaxes the traditional assumptions made in the analysis of a statistical calibration problem. It is shown that the resulting analysis can yield more reliable results than an analysis of the homoscedastic model. / Ph. D.
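For orientation (this formulation is standard and is not quoted from the dissertation), the absolute linear calibration model can be written as
$$y_i = \alpha + \beta x_i + \varepsilon_i \quad (i = 1, \dots, n), \qquad y_0 = \alpha + \beta x_0 + \varepsilon_0,$$
where the $x_i$ are known regressor values, $y_0$ is a newly observed response, and $x_0$ is the unknown regressor value of interest. The classical point estimate is $\hat{x}_0 = (y_0 - \hat{\alpha})/\hat{\beta}$, whereas the Bayesian route places a (noninformative) prior on $(\alpha, \beta, \sigma^2, x_0)$ and reports the posterior distribution of $x_0$.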
|
23 |
Comparison of porous media permeability: experimental, analytical and numerical methods. Mahdi, Faiz M. January 2014 (has links)
Permeability is an important property of a porous medium, as it controls the flow of fluid through the medium. Particle characteristics are known to affect the value of the permeability. However, experimental investigation of the effects of these particle characteristics on permeability is time-consuming, while analytical predictions have been reported to overestimate it, leading to inefficient design. To overcome these challenges, new models are needed that can predict permeability from input variables and process conditions. In this research, data from experiments, Computational Fluid Dynamics (CFD) and the literature were employed to develop new models using Multivariate Regression (MVR) and Artificial Neural Networks (ANNs). Experimental measurements of permeability were performed using high- and low-shear separation processes. Particles of talc, calcium carbonate and titanium dioxide (P25) were used in order to study porous media with different particle characteristics and feed concentrations. The effects of particle characteristics and of the initial stages of filtration, as well as the reliability of the filtration techniques (constant pressure filtration, CPF, and constant rate filtration, CRF), were investigated. CFD simulations of porous media with different particle characteristics were also performed to generate additional data. The regression and ANN models also included permeability data taken from reliable literature sources. Particle cluster formation was found only in P25, leading to an increase in permeability, especially in sedimentation. The constant rate filtration technique was found to be more suitable for permeability measurement than constant pressure filtration. Analyses of the data from the experiments, CFD and correlation showed that the Sauter mean diameter (ranging from 0.2 to 168 μm), the fines ratio (x50/x10), the particle shape (following Heywood's approach) and the voidage of the porous medium (ranging from 37.2 to 98.5%) were the significant parameters for permeability prediction. Using these four parameters as inputs, the performance of models based on linear and nonlinear MVR as well as ANN was investigated, together with the existing analytical models (Kozeny-Carman, K-C, and Happel-Brenner, H-B). The coefficient of determination (R2), root mean square error (RMSE) and average absolute error (AAE) were used as performance criteria for the models. K-C and H-B are two-variable models (Sauter mean diameter and voidage), and the two-variable ANN and MVR models showed better predictive performance; the four-variable models (Sauter mean diameter, x50/x10, particle shape and voidage) developed from MVR and ANN exhibit excellent performance. The AAE of the K-C and H-B models was found to be 35% and 40%, respectively, while the two-variable ANN model (ANN2) reduced the AAE to 14%; the four-variable ANN model (ANN4) further decreased the AAE to approximately 9% relative to the measured results. The main reason for this reduced error was the inclusion of a shape coefficient and particle spread (fines ratio) in the ANN4 model, two parameters that are absent from analytical relations such as the K-C and H-B models. Furthermore, it was found that using the ANN4 (4-5-1) model led to an increase in the R2 value from 0.90 to 0.99 and a significant decrease in the RMSE value from 0.121 to 0.054. Finally, the investigations and findings of this work demonstrate that the relationships between permeability and the particle characteristics of a porous medium are highly nonlinear and complex.
The new models possess the capability to predict the permeability of porous media more accurately owing to the incorporation of additional particle characteristics that are missing in the existing models.
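For orientation, the Kozeny-Carman relation cited above ties permeability to the Sauter mean diameter and voidage alone; the short Python sketch below is illustrative only and does not reproduce the thesis's MVR or ANN models:

```python
def kozeny_carman_permeability(d_sauter_m: float, voidage: float) -> float:
    """Kozeny-Carman estimate of packed-bed permeability in m^2.

    d_sauter_m : Sauter mean particle diameter in metres.
    voidage    : porosity of the medium, strictly between 0 and 1.
    The constant 180 is the usual Kozeny-Carman value for beds of spheres.
    """
    if not 0.0 < voidage < 1.0:
        raise ValueError("voidage must lie strictly between 0 and 1")
    return d_sauter_m ** 2 * voidage ** 3 / (180.0 * (1.0 - voidage) ** 2)

# Example: 10 micrometre particles at 40% voidage -> permeability of roughly 1e-13 m^2.
print(kozeny_carman_permeability(10e-6, 0.40))
```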
|
24 |
Applications and optimization of response surface methodologies in high-pressure, high-temperature gauges. Hässig Fonseca, Santiago 05 July 2012 (has links)
High-Pressure, High-Temperature (HPHT) pressure gauges are commonly used in oil wells for pressure transient analysis. Mathematical models are used to relate input perturbations (e.g., flow rate transients) to output responses (e.g., pressure transients) and, subsequently, to solve an inverse problem that infers reservoir parameters. The indispensable use of pressure data in well testing motivates continued improvement in the accuracy (quality), sampling rate (quantity), and autonomy (lifetime) of pressure gauges.
This body of work presents improvements in three areas of high-pressure, high-temperature quartz memory gauge technology: calibration accuracy, multi-tool signal alignment, and tool autonomy estimation. The discussion introduces the response surface methodology used to calibrate gauges, develops accuracy and autonomy estimates based on controlled tests, and where applicable, relies on field gauge drill stem test data to validate accuracy predictions. Specific contributions of this work include:
- Application of the unpaired sample t-test, a first in quartz sensor calibration, which resulted in a reduction of uncertainty in gauge metrology by a factor of 2.25, and an improvement in absolute and relative tool accuracies of 33% and 56%, respectively. Greater accuracy yields more reliable data and a more sensitive characterization of well parameters.
- Post-processing of measurements from 2+ tools using a dynamic time warp algorithm that mitigates gauge clock drifts (a simplified illustration follows this list). Where manual alignment methods account only for linear shifts, the dynamic algorithm elastically corrects nonlinear misalignments accumulated throughout a job, with an accuracy limited only by the clock's time resolution.
- Empirical modeling of tool autonomy based on gauge selection, battery pack, sampling mode, and average well temperature. A first of its kind, the model distills autonomy into two independent parameters, each a function of the same two orthogonal factors: battery power capacity and gauge current consumption as functions of sampling mode and well temperature -- a premise that, for 3+ gauge and battery models, reduces the design of future autonomy experiments by at least a factor of 1.5.
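The sketch below illustrates the dynamic-time-warp idea referred to in the second item; it is a generic textbook implementation, not the author's algorithm, and the synthetic gauge records are invented for the example:

```python
import numpy as np

def dtw_align(a: np.ndarray, b: np.ndarray):
    """Dynamic time warping of two 1-D records; returns (distance, warping path).

    The path is a list of (i, j) pairs that elastically match a[i] to b[j],
    which is what lets nonlinear clock drift be corrected, unlike a single
    linear time shift. O(len(a) * len(b)) -- fine for a sketch.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    # Backtrack the optimal path from the end of both records.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return float(cost[n, m]), path[::-1]

# Two synthetic pressure records whose clocks drift apart nonlinearly.
t = np.linspace(0.0, 10.0, 300)
gauge_a = np.sin(t)
gauge_b = np.sin(t + 0.02 * t ** 1.5)
distance, path = dtw_align(gauge_a, gauge_b)
```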
|
25 |
Efes: An Effort Estimation Methodology. Tunalilar, Seckin 01 October 2011 (has links) (PDF)
The estimation of effort is at the heart of project tasks, since it is used for many purposes such as cost estimation, budgeting, monitoring, project planning, control and software investment decisions. Researchers analyze estimation problems, propose new models and apply new techniques to improve accuracy. However, up to now there has been no comprehensive estimation methodology to guide companies in their effort estimation tasks. The effort estimation problem is not only a computational but also a managerial problem: it requires estimation goals, execution steps, applied measurement methods and updating mechanisms to be properly defined. Project teams must also have the motivation and responsibility to build a reliable database. If no such methodology is defined, a common interpretation is not established among the company's software teams, and variance in measurements and divergence in the collected information prevent the accumulation of sufficient historical data for building accurate models. This thesis proposes a methodology for organizations to manage and execute effort estimation processes. The approach is based on reported best practices, empirical results of previous studies and solutions to problems and conflicts described in the literature. Five integrated processes are developed with their artifacts, procedures, checklists and templates: Data Collection, Size Measurement, Data Analysis, Calibration and Effort Estimation. The validation and applicability of the methodology were checked in a middle-sized software company. During the validation we also evaluated, on a reliable dataset, whether concepts such as Functional Similarity (FS) and the usage of Base Functional Components (BFC) in the effort model should be part of the methodology. In addition, this study is the first in which COSMIC measurements have been used for Artificial Neural Network models.
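For context, a minimal example of the kind of size-based effort model such a methodology would calibrate; this is a generic power-law fit on invented data, not a model validated in the thesis (which works with COSMIC functional size measurements and ANN models):

```python
import numpy as np

# Invented project history: functional size in COSMIC Function Points vs. effort.
size_cfp  = np.array([120, 250, 300, 410, 520, 640, 800, 950])
effort_ph = np.array([900, 2100, 2300, 3500, 4200, 5600, 7400, 8300])  # person-hours

# Classic power-law effort model, effort = a * size^b, fitted in log space.
b, log_a = np.polyfit(np.log(size_cfp), np.log(effort_ph), 1)
a = np.exp(log_a)

predicted = a * size_cfp ** b
mmre = np.mean(np.abs(predicted - effort_ph) / effort_ph)  # mean magnitude of relative error
print(f"effort ~= {a:.2f} * size^{b:.2f}, MMRE = {mmre:.1%}")
```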
|
26 |
Walking in the Land of Cars: Automobile-Pedestrian Accidents in Hillsborough County, Florida. Poling, Marc Aaron 01 January 2012 (links)
Analyses of traffic accidents are often focused on the characteristics of the accident event and hence do not take into account the broader neighborhood contexts in which accidents are located. This thesis seeks to extend empirical analyses of accidents by understanding the link between accidents and their surroundings. The case study for this thesis is Hillsborough County, Florida, within which the city of Tampa is located. The Tampa Bay region ranks very high in terms of accident rates within U.S. metropolitan areas and is also characterized by transport policies which favor private automobiles over mass transit options, making it an especially valuable case study. This thesis seeks explanations for accidents through regression models which relate accident occurrence and accident rates to traffic, roadway and socioeconomic characteristics of census tracts. The overall findings are that socioeconomic variables, especially poverty rates and percent non-white, and transport characteristics, such as density of bus stops, show a significant relationship with both dependent variables. This research provides support for considering the wider urban context of social inequalities in order to understand the complex geographic distribution of accidents.
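As a hedged illustration of the kind of tract-level model described above (the thesis's exact specification and variable names are not reproduced; everything below is invented placeholder data), a Poisson regression of accident counts on tract characteristics could look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for a census-tract table; column names are invented.
rng = np.random.default_rng(0)
n_tracts = 250
tracts = pd.DataFrame({
    "poverty_rate":     rng.uniform(0.02, 0.45, n_tracts),
    "pct_nonwhite":     rng.uniform(0.05, 0.90, n_tracts),
    "bus_stop_density": rng.uniform(0.0, 12.0, n_tracts),
    "road_miles":       rng.uniform(5.0, 60.0, n_tracts),
})
# Fabricated pedestrian-accident counts, only so the example runs end to end.
lam = np.exp(0.5 + 2.0 * tracts["poverty_rate"] + 0.08 * tracts["bus_stop_density"])
tracts["ped_accidents"] = rng.poisson(lam)

# Poisson model of accident counts vs. tract characteristics,
# with road mileage entering as an exposure offset.
X = sm.add_constant(tracts[["poverty_rate", "pct_nonwhite", "bus_stop_density"]])
fit = sm.GLM(tracts["ped_accidents"], X,
             family=sm.families.Poisson(),
             offset=np.log(tracts["road_miles"])).fit()
print(fit.summary())
```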
|
27 |
The Firm-Specific Determinants of Capital Structure in Public Sector and Private Sector Banks in India. Garach, Jatin Bijay 23 April 2020 (has links)
The banking industry in India has undergone many phases in its history, evolving from a regulated, decentralised system in the early 1800s, to a regulated, centralised system during British rule, to a nationalised system following India's independence, and finally to the current combination of nationalised and private banks adopting global standards. This study has two main aims. Firstly, it will assess the relationship between the firm-specific determinants of capital structure, based on the prevailing literature, and the capital structure of public and private sector banks in India. Secondly, it will determine whether there is a difference in the firm-specific factors that contribute to the determination of the capital structure of public sector banks and private sector banks. This study adopts quantitative methods, similar to previous studies on the relationship between capital structure and its firm-specific determinants. The dependent variable, total leverage, is regressed against multiple independent variables, namely profitability, growth, firm size and credit risk (hereinafter referred to as "risk" unless otherwise indicated), in a multivariate linear regression model. This study adds to the current literature by applying the same firm-specific independent variables to the case of private and public sector banks and then evaluating and comparing the similarities and differences between the regression outputs. The results show that for private sector banks, all independent variables are statistically significant in explaining total leverage, and all conform to the current literature on capital structure: profitability (-), firm size (-), growth (+) and credit risk (-). Conversely, for public sector banks, all independent variables except credit risk were statistically significant: profitability (-), firm size (+) and growth (+). These results imply that credit risk is not an important determinant of a nationalised bank's capital structure, thus providing evidence for the moral hazard theory of public sector banks.
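A minimal sketch of the multivariate linear regression described above; the bank-year observations below are synthetic placeholders, not the study's data, and the proxies (ROA, log assets, NPA ratio) are common choices rather than necessarily the ones used:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 40  # synthetic bank-year observations
banks = pd.DataFrame({
    "profitability": rng.normal(0.012, 0.005, n),  # e.g. return on assets
    "firm_size":     rng.normal(13.0, 0.8, n),     # e.g. log of total assets
    "growth":        rng.normal(0.06, 0.02, n),
    "credit_risk":   rng.normal(0.05, 0.015, n),   # e.g. net NPA ratio
})
banks["total_leverage"] = (0.95
                           - 1.5 * banks["profitability"]
                           - 0.004 * banks["firm_size"]
                           + 0.2 * banks["growth"]
                           - 0.3 * banks["credit_risk"]
                           + rng.normal(0.0, 0.005, n))

# Total leverage regressed on the four firm-specific determinants.
X = sm.add_constant(banks[["profitability", "firm_size", "growth", "credit_risk"]])
fit = sm.OLS(banks["total_leverage"], X).fit()
print(fit.params)  # the signs are what get compared against capital-structure theory
```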
|
28 |
The role of video game quality in financial markets. Surminski, Nikolai January 2023 (links)
Product quality is an often-overlooked factor in the financial analysis of video games. Quality measurements have been shown to work as a reliable predictor of sales while also directly influencing performance in financial markets. If markets are efficient in reflecting new information, the perception of video game quality will lead to a rational response. This thesis examines the market reaction to this information set. The release structure in the video game industry allows for a direct observation of the isolated quality effect through third-party reviews. These reviews form an objective measurement of game quality without other revealing characteristics, as all other information is released prior to the reviews. The possibility of exploiting this unique case motivates the analysis through multiple empirical designs. Results from a multivariate regression model show a statistically significant positive effect of higher quality on short-term returns across all models. The release of a lower-quality game reduces returns only for high-profile games. Both of these results are confirmed by a rules-based trading strategy. The effects subside with longer holding periods and higher exposure. This thesis finds sufficient evidence that video game quality should be an important factor in the analysis of video game companies. At the same time, these effects persist only in the short term, validating an efficient response to new information by financial investors.
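A simplified market-model sketch of how a short-term return reaction around a review release can be measured; the function, window lengths and inputs are illustrative assumptions, not the thesis's empirical design:

```python
import numpy as np
import pandas as pd

def cumulative_abnormal_return(stock_ret: pd.Series, market_ret: pd.Series,
                               event_date, est_window: int = 120,
                               event_window: int = 3) -> float:
    """Market-model cumulative abnormal return around an event date.

    stock_ret / market_ret : daily returns indexed by trading date.
    est_window  : trading days before the event used to fit the market model.
    event_window: trading days from the event date over which abnormal
                  returns are summed (the short-term reaction).
    """
    pos = stock_ret.index.get_loc(event_date)
    est = slice(pos - est_window, pos)
    beta, alpha = np.polyfit(market_ret.iloc[est], stock_ret.iloc[est], 1)
    ev = slice(pos, pos + event_window)
    abnormal = stock_ret.iloc[ev] - (alpha + beta * market_ret.iloc[ev])
    return float(abnormal.sum())
```

Comparing such CARs for high-score and low-score releases is one standard way to test whether quality information moves prices.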
|
29 |
Simultaneous determination of valsartan, hydrochlorothiazide and amlodipine besylate in a pharmaceutical formulation by near-infrared spectroscopy and multivariate calibration. Becker, Natana 21 August 2015 (has links)
Valsartan (VAL), hydrochlorothiazide (HCT) and amlodipine besylate (ANL) are used in combination and marketed in Brazil as antihypertensive agents. The simultaneous determination of these drugs is generally carried out by high-performance liquid chromatography (HPLC). This work aimed at the simultaneous determination of VAL, HCT and ANL in a commercial tablet formulation by Fourier-transform near-infrared spectroscopy with an integrating-sphere accessory (FT-NIR) combined with multivariate analysis methods. Calibration models were built using partial least squares (PLS) with variable selection by interval PLS (iPLS) and synergy interval PLS (siPLS). A total of 36 synthetic samples and 1 commercial sample were used (26 samples in the calibration set and 11 in the prediction set), covering concentration ranges of 261.9-500.0 mg g-1 for VAL, 20.2-83.3 mg g-1 for HCT and 11.6-49.6 mg g-1 for ANL. Spectral data were acquired from 4000 to 10000 cm-1 at a resolution of 4 cm-1. The best models were obtained with mean centering as preprocessing and multiplicative scatter correction (MSC). Relative standard errors of prediction (RSEP%) of 1.27% for VAL, 1.92% for HCT and 5.19% for ANL were obtained after selection of the best intervals by siPLS. No significant difference (paired t-test, 95% confidence) was found between the values of the reference method and those of the proposed method. The results show that PLS regression models, combined with variable-selection methods such as iPLS and siPLS and with FT-NIR data, are promising for the development of simpler, faster and non-destructive methods, allowing the simultaneous determination of VAL, HCT and ANL in the pharmaceutical formulation.
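For readers unfamiliar with the chemometric workflow, a minimal PLS calibration sketch in Python; it uses scikit-learn's PLSRegression rather than the iPLS/siPLS interval-selection routines actually employed, and the spectra and concentrations are random placeholders:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Placeholder data: X are FT-NIR spectra (samples x wavenumber points),
# Y are reference concentrations (mg/g) of VAL, HCT and ANL, e.g. from HPLC.
rng = np.random.default_rng(42)
X = rng.normal(size=(26, 1500))
Y = rng.uniform([261.9, 20.2, 11.6], [500.0, 83.3, 49.6], size=(26, 3))

pls = PLSRegression(n_components=5)        # number of latent variables to tune
Y_cv = cross_val_predict(pls, X, Y, cv=5)  # cross-validated predictions

# Relative standard error of prediction per analyte (RSEP%).
rsep = 100.0 * np.sqrt(((Y_cv - Y) ** 2).sum(axis=0) / (Y ** 2).sum(axis=0))
print(dict(zip(["VAL", "HCT", "ANL"], np.round(rsep, 2))))
```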
|
30 |
A monitoring method for product portfolio management. Herzer, Rafael 29 February 2016 (has links)
Product portfolio management has been attracting the interest of corporate managers; given the competitiveness of the market, it is difficult to find an organization that does not have a portfolio of products and projects to manage. Portfolio management deals with resource-allocation decisions and with how the current product portfolio will be composed, making it an extremely important tool for organizational results, especially financial ones. This management and decision-making process is considered complex by company managers, since the portfolio needs to be revised periodically, always seeking value maximization and the right balance of products in the market. Several methods for portfolio management are found in the literature, among which financial models, probabilistic financial models, scoring models and checklists, analytic hierarchy approaches, behavioral approaches, and mapping or bubble-diagram approaches are the most relevant. Although several methods exist, there is no consensus on which method should be used at each specific step. These methods also require the intervention of managers, bearing in mind that the information available for decision-making is generally incomplete or inexact. This work proposes a monitoring method for portfolio management which, through a multi-criteria system containing an econometric model, identifies changes in the economic environment and in company indicators and then, from the monitoring of residuals, determines the exact moment to change the product portfolio. A Monte Carlo simulation study was performed to assess the sensitivity of the various elements that make up the method; the results showed false-alarm rates and mean times to detect a change similar to previous studies. Finally, the application of the model is illustrated with a real case, using data provided by a multinational company in the agricultural segment.
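A toy Monte Carlo sketch of how false-alarm rate and mean time to detect a shift can be estimated for a chart on model residuals; this uses a generic 3-sigma Shewhart-type rule and does not reproduce the multi-criteria system or econometric model developed in the dissertation:

```python
import numpy as np

def run_length(residuals: np.ndarray, limit: float = 3.0) -> int:
    """1-based index of the first residual beyond +/- limit, or len+1 if none signals."""
    hits = np.flatnonzero(np.abs(residuals) > limit)
    return int(hits[0]) + 1 if hits.size else len(residuals) + 1

def simulate_arl(shift: float, n_rep: int = 5000, horizon: int = 2000,
                 limit: float = 3.0, seed: int = 0) -> float:
    """Average run length of the chart on standardized residuals.

    shift = 0 approximates the in-control ARL (inverse of the false-alarm rate);
    shift > 0 gives the mean time to detect a sustained mean shift of that size.
    """
    rng = np.random.default_rng(seed)
    lengths = [run_length(rng.normal(loc=shift, scale=1.0, size=horizon), limit)
               for _ in range(n_rep)]
    return float(np.mean(lengths))

print(simulate_arl(shift=0.0))  # roughly 370 for a 3-sigma limit, finite-horizon truncation aside
print(simulate_arl(shift=1.0))  # far shorter once a one-sigma shift is present
```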
|