21 |
Assessing The Probability Of Fluid Migration Caused By Hydraulic Fracturing; And Investigating Flow And Transport In Porous Media Using MRI / Montague, James (01 January 2017)
Hydraulic fracturing is used to extract oil and natural gas from low-permeability formations. The potential for fluids migrating from depth through adjacent wellbores and through the production wellbore was investigated using statistical modeling and predictive classifiers. The probability of a hydraulic fracturing well becoming hydraulically connected to an adjacent well in the Marcellus shale of New York was determined to be between 0.00% and 3.45% at the time of the study; this range reflects that the chance of an induced fracture intersecting an existing well depends strongly on the area of increased permeability caused by fracturing. Intersecting an existing well does not by itself mean that fluid will flow upwards: for upward migration to occur, a pathway must exist and a pressure gradient is required to drive flow, with the exception of gas flow caused by buoyancy. Predictive classifiers were applied to a dataset of wells in Alberta, Canada to identify the well characteristics most associated with fluid migration along the production well. The models, in particular a random forest, identified pathways better than random guessing, classifying 78% of wells in the dataset correctly.
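The predictive-classifier idea can be sketched roughly as follows. Everything here is a synthetic illustration: the feature names, the leak-probability rule, and all numbers are invented for demonstration and are not the Alberta dataset or the study's actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical well attributes: total depth (m), well age (years),
# and a surface-casing cement flag (all invented for illustration).
X = np.column_stack([
    rng.uniform(500, 3000, n),
    rng.uniform(0, 60, n),
    rng.integers(0, 2, n).astype(float),
])
# Synthetic ground truth: older, shallower, poorly cemented wells leak more.
logit = 0.05 * X[:, 1] - 0.001 * X[:, 0] - X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)  # fraction of held-out wells classified correctly
```

Held-out accuracy above chance, as reported in the abstract, is the relevant check; a random forest also exposes feature importances, which is how "characteristics most associated with fluid migration" can be read off such a model.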
Magnetic resonance imaging (MRI) was used to visualize and quantify contaminant transport in a soil column using a full-body scanner. T1 quantification was used to determine the concentration of a contaminant surrogate, Magnevist, an MRI contrast agent. Imaging showed a strong impact from density-driven convection even when the density difference between the two fluids was small (0.3%). MRI also identified a buildup of contrast agent concentration at the interface between a low-permeability ground silica and a higher-permeability AFS 50-70 testing sand when density-driven convection was eliminated.
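T1 quantification maps to concentration through the standard linear relation for paramagnetic contrast agents, R1 = R1_0 + r1 * C. A minimal sketch, with an assumed illustrative relaxivity value rather than a calibrated figure from the study:

```python
# C = (1/T1 - 1/T1_0) / r1 for a paramagnetic agent with relaxivity r1.
def concentration_mM(t1_s: float, t1_0_s: float, r1: float = 4.0) -> float:
    """Concentration in mmol/L from measured T1 and baseline T1_0 (seconds);
    r1 is an assumed relaxivity in L/(mmol*s), for illustration only."""
    return (1.0 / t1_s - 1.0 / t1_0_s) / r1

# A voxel whose T1 dropped from 2.0 s to 0.5 s:
c = concentration_mM(0.5, 2.0)  # (1/0.5 - 1/2.0) / 4.0 = 0.375 mmol/L
```

Applied voxel-by-voxel to a measured T1 map, this turns the MRI data into the concentration field used to track the surrogate's transport.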
|
22 |
Analysis and Modeling of Pedestrian Walking Behaviors Involving Individuals with Disabilities / Sharifi, Mohammad Sadra (01 May 2016)
The objective of this dissertation was to study the walking behaviors of pedestrian groups involving individuals with disabilities. To this end, large-scale controlled walking experiments were conducted at Utah State University (USU) to examine walking behaviors in various walking facility types, such as passageways, right angles, oblique angles, queuing areas, bottlenecks, and stairs. Walking experiments were conducted over four days involving participants with and without disabilities. Automated video identification and semi-structured questionnaires were used to collect revealed and stated walking data. This study provided statistical analyses and models covering three different aspects of operational walking behavior.
Firstly, walking speed was examined as one of the most important behavioral variables. Differences in crowd walking speeds were analyzed with respect to both the presence of individuals with disabilities and the type of indoor walking facility. Results showed that the presence of individuals with disabilities in a crowd significantly reduces overall crowd speed. Statistical analysis was also provided to compare the walking speeds of pedestrian groups involving individuals with disabilities across different walking environments.
Secondly, the dissertation proposed a framework to study the interactions of different pedestrian groups. Specifically, a mixed time-headway distribution model was used to examine the time headway between followers and different leader types. In addition, the implications of interaction behaviors were studied based on the capacity of the queuing area behind a doorway. Results revealed that: (1) individuals with disabilities had significant effects on capacity reduction; (2) individuals with visual impairments and non-motorized ambulatory devices had the smallest capacity-reduction effects in the queuing area; and (3) individuals with motorized wheelchairs and individuals with mobility canes had the largest capacity-reduction effects in the queuing area.
Lastly, this study explored how a heterogeneous mix of pedestrians, including individuals with disabilities, perceives and evaluates the operational performance of walking facilities. Both trajectory and survey data sources were used, and an ordered statistical approach was applied to analyze pedestrian perceptions. Results indicated that individuals with disabilities were less tolerant of extremely congested environments. Furthermore, the analysis showed that the Level of Service (LOS) criteria provided in the Highway Capacity Manual (HCM) do not follow pedestrians' actual perceptions.
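An "ordered statistical approach" to perception ratings is commonly an ordered-logit model, in which cutpoints partition a latent satisfaction scale into the observed ordered categories. A minimal sketch under that assumption; the coefficient and thresholds below are made-up illustrative values, not estimates from the dissertation:

```python
import math

def ordered_logit_probs(x: float, beta: float, cutpoints: list) -> list:
    """Probabilities over ordered categories: P(y <= j) = sigmoid(c_j - beta*x)."""
    cdf = [1.0 / (1.0 + math.exp(-(c - beta * x))) for c in cutpoints]
    cdf = [0.0] + cdf + [1.0]
    return [hi - lo for lo, hi in zip(cdf, cdf[1:])]

# Three LOS-style rating categories split by two illustrative thresholds;
# a higher congestion score x shifts probability toward the worst category.
probs = ordered_logit_probs(x=1.0, beta=0.8, cutpoints=[-0.5, 1.5])
```

Fitting beta and the cutpoints separately for groups with and without disabilities is one way such a model can reveal the differing congestion tolerance the study reports.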
|
23 |
Blind image and video quality assessment using natural scene and motion models / Saad, Michele Antoine (05 November 2013)
We tackle the problems of no-reference/blind image and video quality evaluation. Our approach is to model the statistical characteristics of natural images and videos, and to use deviations from those natural statistics as indicators of perceived quality. We propose a probabilistic model of natural scenes and a probabilistic model of natural videos to drive our image and video quality assessment (I/VQA) algorithms, respectively. The VQA problem is considerably harder than the IQA problem because it adds further challenges on top of those faced in IQA, namely the challenges arising from the temporal dimension of video, which plays an important role in human perception of quality. We compare our IQA approach to the state of the art in blind, reduced-reference, and full-reference methods, and show that it is top performing. We compare our VQA approach to the state of the art in reduced- and full-reference methods (no blind VQA methods that perform reliably well exist), and show that our algorithm performs as well as the top-performing full- and reduced-reference algorithms in predicting human judgments of quality.
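The "deviation from natural statistics" idea can be illustrated crudely with a single shape feature. Natural-scene-statistics models typically fit a parametric density (e.g., a generalized Gaussian) to normalized image coefficients; here, as a stand-in only, excess kurtosis distinguishes Gaussian-like coefficients from heavy-tailed ones. The data are synthetic and this is not the paper's actual feature set:

```python
import numpy as np

def excess_kurtosis(x: np.ndarray) -> float:
    """Sample excess kurtosis: near 0 for Gaussian data, positive for heavy tails."""
    x = x - x.mean()
    return float((x ** 4).mean() / ((x ** 2).mean() ** 2) - 3.0)

rng = np.random.default_rng(1)
gaussian_like = rng.normal(size=50_000)  # stand-in for "natural" coefficients
heavy_tailed = rng.laplace(size=50_000)  # stand-in for a distortion-shifted law
k_natural = excess_kurtosis(gaussian_like)
k_distorted = excess_kurtosis(heavy_tailed)
```

A blind quality model maps such distributional features to a quality score, with larger departures from the natural regime signaling lower perceived quality.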
|
24 |
Radio frequency interference modeling and mitigation in wireless receivers / Gulati, Kapil (21 October 2011)
In wireless communication systems, receivers have generally been designed under the assumption that the additive noise in the system is Gaussian. Wireless receivers, however, are affected by radio frequency interference (RFI) generated from various sources such as other wireless users, switching electronics, and computational platforms. RFI is well modeled using non-Gaussian impulsive statistics and can severely degrade the communication performance of wireless receivers designed under the assumption of additive Gaussian noise.
Methods to avoid, cancel, or reduce RFI have been an active area of research over the past three decades. In practice, RFI cannot be completely avoided or canceled at the receiver. This dissertation derives the statistics of the residual RFI and utilizes them to analyze and improve the communication performance of wireless receivers. The primary contributions of this dissertation are to (i) derive instantaneous statistics of co-channel interference in a field of Poisson and Poisson-Poisson clustered interferers, (ii) characterize throughput, delay, and reliability of decentralized wireless networks with temporal correlation, and (iii) design pre-filters to mitigate RFI in wireless receivers.
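Why interference from a Poisson field of emitters is impulsive rather than Gaussian can be seen in a small Monte Carlo sketch: each trial draws a Poisson number of interferers uniformly over a disc, attenuates each by a power-law path loss, and sums the in-phase components. The density, radius, and path-loss exponent below are illustrative choices, not values from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(2)

def interference_samples(trials=20_000, density=1e-3, radius=100.0, gamma=4.0):
    """One in-phase interference sample per trial from a Poisson field."""
    out = np.empty(trials)
    area = np.pi * radius ** 2
    for i in range(trials):
        k = rng.poisson(density * area)            # number of interferers
        r = radius * np.sqrt(rng.uniform(size=k))  # uniform over the disc
        r = np.maximum(r, 1.0)                     # guard the near-field singularity
        phase = rng.uniform(0.0, 2.0 * np.pi, size=k)
        out[i] = np.sum(r ** (-gamma / 2.0) * np.cos(phase))
    return out

y = interference_samples()
kurt = float(((y - y.mean()) ** 4).mean() / (y.var() ** 2))  # equals 3 for Gaussian
```

Occasional nearby interferers dominate the sum, so the amplitude kurtosis comes out far above the Gaussian value of 3, which is exactly the heavy-tailed behavior that motivates non-Gaussian receiver design.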
|
25 |
Multivariate Modeling in Chemical Toner Manufacturing Process / Khorami, Hassan (January 2013)
Process control and monitoring is a common problem in high-value-added chemical manufacturing industries where batch processes are used to produce a wide range of products on the same piece of equipment, resulting in frequent adjustments to control and monitoring schemes. A chemical toner manufacturing process is the representative industrial case used in this thesis. The process control and monitoring problem for batch processes has been researched, mostly through simulation, and published in the past. However, applying these ideas to a chemical toner manufacturing process, or using a single indicator for multiple pieces of equipment, has not been addressed previously.
In the case study of this research, many different factors may affect the final quality of the products, including reactor batch temperature, jacket temperature, impeller speed, the rate of addition of material to the reactor, and process variables associated with the pre-weight tank. One of the challenging tasks for engineers is to monitor these process variables, make necessary adjustments during the progression of a batch, and change the control strategy of future batches upon completion of an existing batch. A further objective of the proposed research is the establishment of operational boundaries to monitor the process, using the process trajectories of past successful batches.
In this research, process measurements and product quality values from past successful batches were collected, arranged into a data matrix, and preprocessed through time alignment, centering, and scaling. The preprocessed data were then projected onto a lower-dimensional space to produce latent variables and their trajectories during successful batches. Following the identification of the latent variables, an empirical model representing the operation of a successful batch was built using 4-fold cross-validation.
The behavior of two abnormal batches, batches 517 and 629, was then compared to the model by testing its statistical properties. Once the abnormal batches were flagged, their data sets were folded back to the original dimensions to localize the time of the abnormality and the process variables that contributed to it. In each case, the process measurements were used to establish operational boundaries in the latent variable space.
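A common way to implement this latent-variable monitoring is PCA with Hotelling T^2 and squared prediction error (SPE) statistics: project a batch onto loadings learned from good batches, then flag batches whose score distance or residual is large. The sketch below uses synthetic stand-in data, not the toner process measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
# 50 good batches x 10 aligned/centered/scaled measurements, driven by
# 2 latent directions plus noise (synthetic stand-in for process data).
X = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 10)) \
    + 0.1 * rng.normal(size=(50, 10))

# PCA via SVD on the (approximately centered) data; keep 2 components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:2].T                        # retained loadings
lam = (s[:2] ** 2) / (len(X) - 1)   # score variances

def t2_spe(x):
    """Hotelling T^2 and squared prediction error for one batch profile."""
    t = x @ P
    resid = x - t @ P.T
    return float(np.sum(t ** 2 / lam)), float(resid @ resid)

t2_good, spe_good = t2_spe(X[0])
t2_bad, spe_bad = t2_spe(X[0] + 5.0)  # a grossly shifted "abnormal" batch
```

Folding the flagged batch's residual back onto the original variables (as the thesis describes) then shows which measurements, and at what time, drove the abnormality.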
|
26 |
NOVEL COMPUTATIONAL METHODS FOR TRANSCRIPT RECONSTRUCTION AND QUANTIFICATION USING RNA-SEQ DATA / Huang, Yan (01 January 2015)
The advent of RNA-seq technologies provides an unprecedented opportunity to precisely profile the mRNA transcriptome of a specific cell population, helping to reveal the characteristics of cells under particular conditions such as disease. It is now possible to discover mRNA transcripts not cataloged in existing databases, in addition to assessing the identities and quantities of known transcripts in a given sample or cell. However, the sequence reads obtained from an RNA-seq experiment are only short fragments of the original transcripts, and how to recapitulate the mRNA transcriptome from short RNA-seq reads remains a challenging problem. We have proposed two methods directly addressing this challenge. First, we developed a novel method, MultiSplice, to accurately estimate the abundance of well-annotated transcripts. Driven by the desire to detect novel isoforms, a max-flow-min-cost algorithm named Astroid was designed for simultaneously discovering the presence and quantities of all possible transcripts in the transcriptome. We further extend an ab initio pipeline of transcriptome analysis to large-scale datasets that may contain hundreds of samples. The effectiveness of the proposed methods has been supported by a series of simulation studies, and their application to real datasets suggests a promising opportunity to reconstruct the mRNA transcriptome, which is critical for revealing variations among cells (e.g., disease vs. normal).
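The abundance-estimation step can be illustrated in miniature, though this is a deliberate simplification and not MultiSplice or Astroid: given a matrix whose entries say which sequencing features (exons or junctions) each transcript contributes to, nonnegative least squares recovers transcript abundances from observed feature counts. The matrix and counts below are toy values:

```python
import numpy as np
from scipy.optimize import nnls

# 4 features (exons/junctions) x 3 hypothetical transcripts:
# A[f, t] = 1 if transcript t contributes reads to feature f.
A = np.array([
    [1.0, 1.0, 0.0],
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0],
])
true_abundance = np.array([10.0, 5.0, 2.0])
counts = A @ true_abundance            # noiseless observed feature counts

estimate, residual = nnls(A, counts)   # nonnegative least-squares recovery
```

With noiseless counts and a full-column-rank matrix the true abundances are recovered exactly; real reads add noise and ambiguity, which is what the more sophisticated flow-based formulation addresses.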
|
27 |
Credibility modeling with applications / Khapaeva, Tatiana (16 May 2014)
The purpose of this thesis is to show how the theory and practice of credibility can benefit statistical modeling. The task was, fundamentally, to derive models that could provide the best estimate of the losses for any given class and also to assess the variability of the losses, both from a class perspective and from an aggregate perspective. The model fitting and diagnostic tests will be carried out using standard statistical packages. A case study that predicts the number of deaths due to cancer is considered, utilizing data furnished by the Colorado Department of Public Health and Environment. Several credibility models are used, including Bayesian, Bühlmann and Bühlmann-Straub approaches, which are useful in a wide range of actuarial applications.
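The Bühlmann approach mentioned above reduces to a simple weighted average: the class premium is pulled toward the overall mean by a credibility factor Z = n / (n + k), where k is the ratio of expected process variance to the variance of hypothetical means. A minimal sketch with made-up numbers, just to show the mechanics:

```python
def buhlmann_premium(class_mean, overall_mean, n, epv, vhm):
    """Credibility-weighted premium: Z*class_mean + (1-Z)*overall_mean,
    with Z = n / (n + k) and k = EPV / VHM."""
    k = epv / vhm
    z = n / (n + k)
    return z * class_mean + (1.0 - z) * overall_mean, z

# Five years of class experience averaging 120 against a book mean of 100:
premium, z = buhlmann_premium(class_mean=120.0, overall_mean=100.0,
                              n=5, epv=400.0, vhm=100.0)
# Here k = 4 and Z = 5/9, so the estimate is pulled only part-way toward 120.
```

The more volatile the class experience relative to the spread between classes (larger k), the smaller Z, and the more the estimate leans on the aggregate; Bühlmann-Straub generalizes this to unequal exposure weights.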
|
29 |
EVALUATION OF STATISTICAL METHODS FOR MODELING HISTORICAL RESOURCE PRODUCTION AND FORECASTING / Nanzad, Bolorchimeg (01 August 2017)
This master’s thesis project consists of two parts. Part I compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) against conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas the logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, the methods provide comparable results: despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either method are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality of fit to historical production data, it provides no new information compared to conventional Gaussian or Hubbert-type models and may mask noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II investigates a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping," in which the overlap of multiple cycles is limited.
The model is designed so that each cycle is described by the same three parameters as in the conventional multicyclic Hubbert model, and every two consecutive cycles are connected by a transition that marks the shift from one cycle to the next. Each transition is described as a weighted coaddition of the two neighboring cycles and is determined by three parameters: the transition year, the transition width, and a weighting parameter γ. The cycle-jumping method provides a superior model compared to the conventional multicyclic Hubbert model and reflects historical production behavior more reasonably and practically, better capturing the effects of technological transitions and socioeconomic factors on historical resource production by explicitly considering the form of the transitions between production cycles.
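Conventional single-cycle Hubbert fitting, the baseline both parts build on, can be sketched as nonlinear least squares on a logistic-derivative curve. The production series below is synthetic, with illustrative parameters, not the global coal data analyzed in the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, urr, b, tm):
    """Annual production: logistic-derivative curve with ultimate recovery
    urr, steepness b, and peak year tm."""
    e = np.exp(-b * (t - tm))
    return urr * b * e / (1.0 + e) ** 2

rng = np.random.default_rng(4)
t = np.arange(1900, 2000, dtype=float)
y = hubbert(t, 1000.0, 0.08, 1960.0) + rng.normal(scale=0.2, size=t.size)

popt, _ = curve_fit(hubbert, t, y, p0=(800.0, 0.05, 1950.0))
urr_est, b_est, tm_est = popt  # recovered ultimate recovery, steepness, peak year
```

A multicyclic model sums several such curves; the thesis's cycle-jumping variant instead limits their overlap, joining consecutive cycles through parameterized transitions.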
|
30 |
Estimativa da irradiação solar global pelo método de Angstrom-Prescott e técnicas de aprendizado de máquinas / Estimation of global solar irradiation by the Angstrom-Prescott method and machine learning techniques / Silva, Maurício Bruno Prado da [UNESP] (22 February 2016)
This work describes a comparative study of methods for estimating global solar irradiation (HG) at the daily (HGd) and monthly (HGm) partitions: the Angstrom-Prescott (A-P) technique and two machine learning (ML) techniques, Support Vector Machines (SVM) and Artificial Neural Networks (ANN). The database used was measured from 1996 to 2011 at the solarimetric station in Botucatu. Through regression between atmospheric transmissivity (HG/HO) and insolation ratio (n/N), the statistical (A-P) model was determined, yielding linear equations that estimate HG with high coefficients of determination. The SVM and ANN techniques were trained on the same architecture as A-P (model 1), and on three further models that add, one at a time, the variables air temperature, rainfall, and relative humidity (models 2, 3, and 4). The models were validated using a database of two years, termed typical and atypical, through correlations between estimated and measured values and the statistical indicators rMBE, MBE, rRMSE, RMSE, and Willmott's d. The correlation coefficients (r) showed that the (A-P) model can estimate HG with high coefficients of determination under both validation conditions, and the indicators rMBE, MBE, rRMSE, RMSE, and Willmott's d indicate that it can estimate HGd with accuracy and precision. The indicators obtained by the four models of the SVMd and ANNd (daily) and SVMm and ANNm (monthly) techniques show that these can likewise estimate HGd with high correlation, precision, and accuracy. Among the models, comparison of the statistical indicators selected the networks SVM4d and ANN4d (daily) and SVM1m and ANN1m (monthly). Comparison of the indicators rMBE, MBE, rRMSE, RMSE, Willmott's d, r, and R2 obtained in validation among the (A-P), SVM, and ANN models showed that the SVM technique gave better results than the statistical (A-P) model; that SVM also outperformed ANN; and that the statistical (A-P) model gave, in general, better results than ANN. (Funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, CAPES.)
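The Angstrom-Prescott model itself is a one-line linear regression, HG/HO = a + b * (n/N). A minimal sketch on synthetic stand-in data; the coefficients 0.25 and 0.50 are illustrative values commonly quoted for this model, not the Botucatu fit:

```python
import numpy as np

rng = np.random.default_rng(5)
n_over_N = rng.uniform(0.0, 1.0, 200)   # insolation ratio n/N
a_true, b_true = 0.25, 0.50             # illustrative A-P coefficients
transmissivity = a_true + b_true * n_over_N \
    + rng.normal(scale=0.02, size=200)  # synthetic HG/HO values

b_fit, a_fit = np.polyfit(n_over_N, transmissivity, 1)  # slope, intercept

def estimate_HG(HO, n, N):
    """Estimated global irradiation from extraterrestrial HO and sunshine n/N."""
    return HO * (a_fit + b_fit * n / N)
```

The ML comparison in the study amounts to replacing this linear map with an SVM or ANN regressor over the same input (model 1) or an enlarged input set (models 2 to 4), evaluated with the same error indicators.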
|