121 |
A study of risk index and measurement uncertainty for food surveillance – A case of melamine incident
Lwo, Shih-hsiung 17 July 2012
The 2008 melamine incident was a global food crisis and drew attention to other potential food safety risks. Although regulations and standards for food safety exist, a common problem in food risk management is the lack of hazard indicators, i.e., indicators for ranking food risks and prioritizing controls.
The three algorithms developed in this thesis are:
1. A distribution-fitting algorithm to estimate population parameters for left-censored melamine data under a log-normal assumption.
2. A risk index algorithm to screen out food product categories with higher concentrations, without considering measurement uncertainty.
3. A misjudgment probability algorithm to calculate the probability that a food category contains melamine above the legal limit but is classified as satisfactory, taking measurement uncertainty into account.
Melamine test results collected from the website of the Centre for Food Safety of Hong Kong are empirically analyzed with the proposed algorithms. The risk index (RI) and the consumer's risk (CR) of multiple food categories are discussed and compared in detail. Based on RI and CR, we build a risk assessment process to help assess melamine risk and design sampling strategies in a surveillance programme. The proposed risk assessment process can be applied to other chemical contaminant problems such as plasticizers (phthalate esters) and ractopamine (Paylean).
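The first algorithm above, fitting a log-normal distribution to left-censored data, can be sketched as a censored maximum-likelihood fit. The sketch below is an illustration under assumed conventions (non-detects reported only as a count below a limit of detection `lod`), not the thesis's exact procedure; the function name and interface are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_left_censored_lognormal(detected, n_censored, lod):
    """Maximum-likelihood fit of (mu, sigma) on the log scale when values
    below the limit of detection `lod` are reported only as non-detects."""
    x = np.log(np.asarray(detected, dtype=float))
    log_lod = np.log(lod)

    def neg_log_lik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)             # keeps sigma > 0
        ll = norm.logpdf(x, mu, sigma).sum()  # detected observations
        ll += n_censored * norm.logcdf((log_lod - mu) / sigma)  # non-detects
        return -ll

    res = minimize(neg_log_lik, x0=[x.mean(), np.log(x.std() + 1e-8)],
                   method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])
```

A quick simulation check: draw log-normal data, censor everything below `lod`, and confirm the fit recovers the population parameters rather than the biased mean of the detected values alone.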
|
122 |
A Channel Coding Scheme for Solving Ambiguity in OFDM Systems Using Blind Data Detector
Hong, Guo-fong 31 July 2012
In orthogonal frequency division multiplexing (OFDM) systems, blind estimators have been proposed because they achieve high bandwidth efficiency. However, the blind data detector structure suffers from a serious ambiguity problem. Existing remedies fall into three categories: pilot signals, superimposed training, and channel coding. To achieve totally blind estimation, this thesis uses channel coding to resolve the ambiguity. A previous study used low-density parity-check (LDPC) codes and proposed an encoding method that avoids ambiguity for BPSK. Here we consider generic linear block codes (LBCs) and extend the approach from BPSK to higher-order modulation schemes, including QPSK, 16QAM, and 64QAM. For any constellation with Gray coding, we introduce an inner-product difference that characterizes the ambiguity and derive sufficient conditions on the LBC. If the LBC satisfies these conditions, ambiguity between valid codewords is avoided and totally blind estimation is achieved. In the simulations, we estimate data with two LBCs, one with ambiguity and one without. For a fair comparison, a pilot is inserted to resolve the ambiguity of the ambiguous LBC. The results show that the performance of the two cases is similar at high signal-to-noise ratio (SNR). In other words, a proper channel code satisfying the sufficient conditions improves bandwidth efficiency.
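The BPSK case mentioned above admits a compact illustration: a blind detector cannot distinguish a codeword from its bitwise complement (a 180° phase rotation), and for a linear code the complement of a codeword is a codeword iff the all-ones word is in the code. The sketch below checks this condition by brute force; it covers only the BPSK ambiguity, not the thesis's conditions for QPSK/16QAM/64QAM.

```python
import numpy as np
from itertools import product

def all_codewords(G):
    """Enumerate every codeword of the binary linear block code with
    generator matrix G (k x n), as tuples of bits."""
    k, _ = G.shape
    msgs = np.array(list(product((0, 1), repeat=k)))
    return {tuple(c) for c in (msgs @ G) % 2}

def bpsk_ambiguity_free(G):
    """Under BPSK a blind detector cannot tell a codeword from its bitwise
    complement.  For a linear code, complementing equals adding the all-ones
    vector, so the ambiguity is avoided iff all-ones is not a codeword."""
    return tuple([1] * G.shape[1]) not in all_codewords(G)
```

For example, the (3,1) repetition code contains 111 and is therefore ambiguous, while the (3,2) single-parity-check code contains only even-weight words and is ambiguity-free.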
|
123 |
Net pay evaluation: a comparison of methods to estimate net pay and net-to-gross ratio using surrogate variables
Bouffin, Nicolas 02 June 2009
Net pay (NP) and net-to-gross ratio (NGR) are often crucial quantities to characterize a reservoir and assess the amount of hydrocarbons in place. Numerous methods have been developed in the industry to evaluate NP and NGR, depending on the intended purposes. These methods usually involve the use of cut-off values of one or more surrogate variables to discriminate non-reservoir from reservoir rocks. This study investigates statistical issues related to the selection of such cut-off values by considering the specific case of using porosity (φ) as the surrogate. Four methods are applied to permeability-porosity datasets to estimate porosity cut-off values. All the methods assume that a permeability cut-off value has been previously determined, and each method is based on minimizing the prediction error when particular assumptions are satisfied. The results show that delineating NP and evaluating NGR require different porosity cut-off values. In the case where porosity and the logarithm of permeability are joint normally distributed, NP delineation requires the use of the Y-on-X regression line to estimate the optimal porosity cut-off, while the reduced major axis (RMA) line provides the optimal porosity cut-off value to evaluate NGR. Alternatives to RMA and regression lines are also investigated, such as discriminant analysis and a data-oriented method using a probabilistic analysis of the porosity-permeability crossplots. Joint normal datasets are generated to test the ability of the methods to predict the optimal porosity cut-off value accurately for sampled sub-datasets. The methods have been compared to one another on the basis of the bias, standard error, and robustness of the estimates. A set of field data from the Travis Peak formation has been used to test the performance of the methods.
The conclusions of the study have been confirmed when applied to field data: as long as the initial assumptions concerning the distribution of data are verified, it is recommended to use the Y-on-X regression line to delineate NP while either the RMA line or discriminant analysis should be used for evaluating NGR. In the case where the assumptions on data distribution are not verified, the quadrant method should be used.
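The distinction between the Y-on-X regression line and the RMA line can be made concrete: both are straight lines through the (φ, log k) cloud, but the OLS slope is attenuated by the scatter while the RMA slope is the ratio of standard deviations, so the two lines imply different porosity cut-offs for the same permeability cut-off. A minimal sketch, assuming joint-normal data as in the study (function name and interface are illustrative):

```python
import numpy as np

def porosity_cutoffs(phi, log_k, log_k_cut):
    """Porosity cut-offs implied by a permeability cut-off: one from the
    Y-on-X regression line (recommended above for net-pay delineation) and
    one from the reduced-major-axis line (for net-to-gross evaluation)."""
    phi = np.asarray(phi, float)
    log_k = np.asarray(log_k, float)
    C = np.cov(phi, log_k)
    b_reg = C[0, 1] / C[0, 0]                  # OLS slope of log k on phi
    a_reg = log_k.mean() - b_reg * phi.mean()
    r = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
    b_rma = np.sign(r) * np.sqrt(C[1, 1] / C[0, 0])  # RMA slope = sy/sx
    a_rma = log_k.mean() - b_rma * phi.mean()
    return (log_k_cut - a_reg) / b_reg, (log_k_cut - a_rma) / b_rma
```

Because |r| < 1, the RMA slope is steeper than the OLS slope, which pulls the RMA cut-off toward the porosity mean; this is exactly why the two tasks call for different lines.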
|
124 |
Essays on the Predictability and Volatility of Asset Returns
Jacewitz, Stefan A. August 2009
This dissertation collects two papers regarding the econometric and economic theory and testing of the predictability of asset returns. It is widely accepted that stock returns are not only predictable but highly so. This belief is due to an abundance of existing empirical literature finding often overwhelming evidence in favor of predictability. The common regressors used to test predictability (e.g., the dividend-price ratio for stock returns) are very persistent, and their innovations are highly correlated with returns. Persistence, when combined with a correlation between innovations in the regressor and asset returns, can cause substantial over-rejection of a true null hypothesis. This result is both well documented and well known. On the other hand, stochastic volatility is both broadly accepted as a part of return time series and largely ignored by the existing econometric literature on the predictability of returns. The severe effect that stochastic volatility can have on standard tests is demonstrated here. These deleterious effects render standard tests invalid. However, this problem can be easily corrected using a simple change of chronometer. When a return time series is read in the usual way, at regular intervals of time (e.g., daily observations), the distribution of returns is highly non-normal and displays marked time heterogeneity. If the return time series is instead read according to a clock based on regular intervals of volatility, then returns will be independent and identically normally distributed. This powerful result is utilized in a unique way in each chapter of this dissertation. The time-deformation technique is combined with the Cauchy t-test and the newly introduced martingale estimation technique. This dissertation finds no evidence of predictability in stock returns. Moreover, using martingale estimation, the cause of the forward premium anomaly may be more easily discerned.
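The "change of chronometer" above can be sketched with a crude volatility clock: instead of recording one return per calendar period, record one aggregated return each time a realized-variance proxy advances by a fixed amount. This is an illustrative simplification (using cumulative squared returns as the proxy), not the dissertation's estimator.

```python
import numpy as np

def volatility_clock_returns(returns, dv):
    """Re-read a return series on a volatility clock: emit one aggregated
    return every time the cumulative squared return (a crude realized-
    variance proxy) advances by `dv`, instead of once per calendar tick."""
    out, acc_r, acc_v = [], 0.0, 0.0
    for r in returns:
        acc_r += r
        acc_v += r * r
        if acc_v >= dv:          # one unit of volatility time has elapsed
            out.append(acc_r)
            acc_r, acc_v = 0.0, 0.0
    return np.array(out)
```

On simulated returns with regime-switching volatility, calendar-time returns are fat-tailed (kurtosis well above the normal value of 3), while the volatility-time returns are much closer to normal, which is the property the dissertation's tests exploit.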
|
125 |
Online Auctions: Theoretical and Empirical Investigations
Zhang, Yu August 2010
This dissertation, which consists of three essays, studies online auctions both theoretically and empirically.
The first essay studies a special online auction format used by eBay, "Buy-It-Now" (BIN) auctions, in which bidders are allowed to buy the item at a fixed BIN price set by the seller and end the auction immediately. I construct a two-stage model in which the BIN price is only available to one group of bidders. I find that the bidders' cutoff is lower in this model, meaning bidders are more likely to accept the BIN option, compared with models assuming all bidders are offered the BIN. The results explain the high frequency of bidders accepting the BIN price, and may also help explain the popularity of temporary BIN auctions on online auction sites, such as eBay, where the BIN option is only offered to early bidders.
In the second essay, I study how bidders' risk attitudes and time preferences affect their behavior in Buy-It-Now auctions. I consider two cases: when both bidders enter the auction at the same time (homogeneous bidders), so the BIN option is offered to both of them, and when the two bidders enter the auction at different stages (heterogeneous bidders), so the BIN option is only offered to the early bidder. Bidders' optimal strategies are derived explicitly in both cases. In particular, given bidders' risk attitudes and time preferences, I calculate the cutoff valuation such that a bidder accepts the BIN if his valuation is higher than the cutoff and rejects it otherwise. I find that the cutoff valuation in the heterogeneous case is lower than that in the homogeneous case.
The third essay focuses on the empirical modeling of the price processes of online auctions. I generalize the monotone series estimator to model pooled price processes, then apply the model and the estimator to eBay auction data for a Palm PDA. The results are shown to capture closely the overall pattern of observed price dynamics. In particular, early bidding, the mid-auction drought, and sniping are well approximated by the estimated price curve.
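The core of any monotone estimator of a price path is a monotone least-squares fit, since an ascending auction's price can only rise. The pool-adjacent-violators algorithm below is the textbook building block for such fits; it stands in for, and is much simpler than, the monotone series estimator developed in the essay.

```python
def pava(y):
    """Pool-adjacent-violators: least-squares fit of a non-decreasing
    sequence to y.  Adjacent blocks whose means violate monotonicity are
    merged and replaced by their weighted mean."""
    blocks = []                        # each block: [mean, count]
    for v in y:
        blocks.append([float(v), 1])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    fit = []
    for mean, count in blocks:
        fit.extend([mean] * count)
    return fit
```

For instance, `pava([1.0, 3.0, 2.0, 4.0])` pools the violating pair (3, 2) into two values of 2.5, yielding a non-decreasing fitted path.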
|
126 |
Optimal designs for multivariate calibrations in multiresponse regression models
Guo, Jia-Ming 21 July 2008
Consider a linear regression model with a two-dimensional control vector (x_1, x_2) and an m-dimensional response vector y = (y_1, . . . , y_m). The components of y are correlated with a known covariance matrix. Based on the assumed regression model, two problems are of interest. The first is to estimate the unknown control vector x_c corresponding to an observed y, where x_c is estimated by the classical estimator. The second is to obtain a suitable estimate of the control vector x_T corresponding to a given target T = (T_1, . . . , T_m) on the expected responses. This work considers the deviation of each expected response E(y_i) from its corresponding target value T_i and defines the optimal control vector x, say x_T, as the one that minimizes the weighted sum of squares of standardized deviations within the range of x. The objective of this study is to find c-optimal designs for estimating x_c and x_T, which minimize the mean squared errors of the estimators of x_c and x_T respectively. The difference between the optimal calibration design and the optimal design for estimating x_T is compared. The efficiencies of the optimal calibration design relative to the uniform design are also presented, as are the efficiencies of the optimal design for a given target vector relative to the uniform design.
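The classical estimator mentioned above can be sketched as a generalized-least-squares inversion of the fitted model. The sketch assumes a linear mean structure E(y) = alpha + Beta x with known response covariance Sigma; the function name and interface are illustrative, not the thesis's notation.

```python
import numpy as np

def classical_estimator(alpha, Beta, Sigma, y_obs):
    """Classical estimate of the control vector x for an observed response
    y_obs in the model E(y) = alpha + Beta @ x with known covariance Sigma:
    x_c = (B' S^-1 B)^-1 B' S^-1 (y_obs - alpha)."""
    Si = np.linalg.inv(Sigma)
    A = Beta.T @ Si @ Beta          # 2 x 2 normal-equations matrix
    b = Beta.T @ Si @ (y_obs - alpha)
    return np.linalg.solve(A, b)
```

With a noise-free observation the estimator inverts the model exactly; with noise, its mean squared error is what the c-optimal designs in the study are chosen to minimize.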
|
127 |
Design of the nth Order Adaptive Integral Variable Structure Derivative Estimator
Shih, Wei-Che 17 January 2009
Based on the Lyapunov stability theorem, a methodology for designing an nth order adaptive integral variable structure derivative estimator (AIVSDE) is proposed in this thesis. The proposed derivative estimator is not only an improved version of the existing AIVSDE, but can also be used to estimate the nth derivative of a smooth signal that has continuous and bounded derivatives up to order n+1. Analysis shows that adjusting some of the parameters can facilitate derivative estimation for signals with higher-frequency noise. An adaptive algorithm is incorporated in the estimation scheme to track the unknown upper bound of the input signal and its derivatives. The stability of the proposed derivative estimator is guaranteed, and a comparison between a recently proposed high-order sliding-mode derivative estimator and the AIVSDE is also demonstrated.
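The high-order sliding-mode estimators that the thesis compares against can be illustrated with the standard first-order robust exact differentiator of Levant (1998); this is a related, well-known scheme, not the AIVSDE itself, and the Euler discretization and gain choices below are conventional defaults.

```python
import numpy as np

def levant_differentiator(f, dt, L=1.0):
    """First-order robust exact (sliding-mode) differentiator: estimates the
    derivative of a sampled signal whose second derivative is bounded by L.
    Gains 1.5*sqrt(L) and 1.1*L are commonly quoted default values."""
    lam0, lam1 = 1.5 * np.sqrt(L), 1.1 * L
    z0, z1 = float(f[0]), 0.0        # states: signal and derivative estimates
    d_est = np.empty(len(f))
    for i, fi in enumerate(f):
        e = z0 - fi
        v = z1 - lam0 * np.sqrt(abs(e)) * np.sign(e)
        z0 += dt * v                 # Euler step of the sliding-mode dynamics
        z1 += dt * (-lam1 * np.sign(e))
        d_est[i] = z1
    return d_est
```

Run on a sampled sine wave, the derivative estimate converges to the cosine after a short transient, despite the estimator using only the discontinuous sign of the tracking error.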
|
128 |
Performance Analysis of Parametric Spectral Estimators
Völcker, Björn January 2002
No description available.
|
129 |
Error Estimation for Anisotropic Tetrahedral and Triangular Finite Element Meshes
Kunert, G. 30 October 1998
Some boundary value problems yield anisotropic solutions, e.g. solutions with boundary layers. If such problems are to be solved with the finite element method (FEM), anisotropically refined meshes can be advantageous. In order to construct these meshes or to control the error, one aims at reliable error estimators. For \emph{isotropic} meshes many estimators are known, but they either fail when used on \emph{anisotropic} meshes, or they have not yet been applied there. For rectangular (or cuboidal) anisotropic meshes a modified error estimator has already been found. We investigate error estimators on anisotropic tetrahedral or triangular meshes because such grids offer greater geometrical flexibility. For the Poisson equation we derive a residual error estimator, a local Dirichlet problem error estimator, and an $L_2$ error estimator. Additionally, a residual error estimator is presented for a singularly perturbed reaction-diffusion equation. It is important that the anisotropic mesh correspond to the anisotropic solution. Provided that a certain condition is satisfied, we have proven that all estimators bound the error reliably.
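The residual error estimator mentioned above can be illustrated in the simplest possible setting: linear elements for the 1D Poisson problem, where the element indicator combines the interior residual (just $f$, since $u_h''=0$ on each element) with jumps of the discrete flux. This is a one-dimensional sketch of the idea only; the thesis's anisotropic tetrahedral/triangular estimators are far more involved.

```python
import numpy as np

def poisson_fem_1d(f, n):
    """Linear finite elements for -u'' = f on (0,1), u(0) = u(1) = 0,
    on a uniform mesh with n elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    mid = 0.5 * (x[:-1] + x[1:])
    fm = f(mid)
    b = 0.5 * h * (fm[:-1] + fm[1:])   # load vector, midpoint quadrature
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

def residual_indicators(f, x, u):
    """Element indicators eta_T^2 = h_T^2 ||f||_T^2 + flux-jump terms
    (for linear elements the interior residual is just f)."""
    h = np.diff(x)
    grad = np.diff(u) / h                 # piecewise constant u_h'
    jump = np.abs(np.diff(grad))          # |[u_h']| at interior nodes
    mid = 0.5 * (x[:-1] + x[1:])
    eta2 = h**2 * (f(mid) ** 2 * h)       # h_T^2 * ||f||_{L2(T)}^2
    eta2[:-1] += 0.5 * h[:-1] * jump**2   # share each jump between its
    eta2[1:] += 0.5 * h[1:] * jump**2     # two neighbouring elements
    return np.sqrt(eta2)
```

Refining the mesh shrinks the total estimated error, as a reliable and efficient estimator must, since it bounds the true error from above and below up to constants.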
|
130 |
Robust a posteriori error estimation for a singularly perturbed reaction-diffusion equation on anisotropic tetrahedral meshes
Kunert, Gerd 09 November 2000
We consider a singularly perturbed reaction-diffusion problem and derive and rigorously analyse an a posteriori residual error estimator that can be applied to anisotropic finite element meshes. The quotient of the upper and lower error bounds is the so-called matching function, which depends on the anisotropy (of the mesh and the solution) but not on the small perturbation parameter. This matching function measures how well the anisotropic finite element mesh corresponds to the anisotropic problem. Provided this correspondence is sufficiently good, the matching function is O(1). Hence one obtains tight error bounds, i.e. the error estimator is reliable and efficient as well as robust with respect to the small perturbation parameter. A numerical example supports the anisotropic error analysis.
|