  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Determinations of selected trace minerals in turkey muscles

Zenoble, Oleane Carden January 2011 (has links)
Digitized by Kansas Correctional Industries
492

Benchmarking non-linear series with quasi-linear regression.

January 2012 (has links)
For a target socio-economic variable, data are often collected at two different frequencies. Because the lower-frequency data are usually obtained from a large-scale census, they are more accurate and reliable; the lower-frequency series is therefore generally treated as a benchmark and used to adjust the higher-frequency series. / In the usual benchmarking setting, the survey error is assumed to be independent of the magnitude of the target series (the additive model). In practice, however, the two are usually related: the larger the target variable, the larger the survey error (the multiplicative model). For this problem, Chen and Wu proposed benchmarking the multiplicative model with quasi-linear regression. In this thesis, assuming the survey error follows an AR(1) model, we first demonstrate the benchmarking procedure using quasi-linear regression with a default survey-error model; we then propose a procedure for estimating the survey-error model via benchmark forecasting; finally, we compare the performance of the two approaches and offer guidelines for selecting the error model. / For a target socio-economic variable, two sources of data with different collecting frequencies may be available in survey data analysis. In general, because of differences in sample size or data source, the two sets of data do not agree with each other. Usually, the more frequent observations are less reliable, and the less frequent observations are much more accurate. In the benchmarking problem, the less frequent observations can be treated as benchmarks and used to adjust the more frequent data. / In the common benchmarking setting, the survey error and the target variable are assumed to be independent (the additive case). In reality, however, they are often correlated (the multiplicative case): the larger the variable, the larger the survey error. To deal with this problem, Chen and Wu (2006) proposed a regression method called quasi-linear regression for the multiplicative case. In this thesis, by assuming the survey error to be an AR(1) model, we demonstrate the benchmarking procedure using a default error model for quasi-linear regression. An error-modelling procedure using the benchmark forecast method is also proposed. Finally, we compare the performance of the default error model with that of the fitted error model. / Luk, Wing Pan. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 56-57). / Abstracts also in Chinese.
/ Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Recent Development For Benchmarking Methods --- p.2 / Chapter 1.2 --- Multiplicative Case And Benchmarking Problem --- p.3 / Chapter 2 --- Benchmarking With Quasi-linear Regression --- p.8 / Chapter 2.1 --- Iterative Procedure For Quasi-linear Regression --- p.9 / Chapter 2.2 --- Prediction Using Default Value φ --- p.16 / Chapter 2.3 --- Performance Of Using Default Error Model --- p.17 / Chapter 3 --- Estimation Of φ Via BM Forecasting method --- p.26 / Chapter 3.1 --- Benchmark Forecasting Method --- p.26 / Chapter 3.2 --- Performance Of Benchmark Forecasting Method --- p.28 / Chapter 4 --- Benchmarking By The Estimated Value --- p.34 / Chapter 4.1 --- Benchmarking With The Estimated Error Model --- p.35 / Chapter 4.2 --- Performance Of Using Estimated Error Model --- p.36 / Chapter 4.3 --- Suggestions For Selecting Error Model --- p.45 / Chapter 5 --- Fitting AR(1) Model For Non-AR(1) Error --- p.47 / Chapter 5.1 --- Settings For Non-AR(1) Model --- p.47 / Chapter 5.2 --- Simulation Studies --- p.48 / Chapter 6 --- An Illustrative Example: The Canada Total Retail Trade Series --- p.50 / Chapter 7 --- Conclusion --- p.54 / Bibliography --- p.56
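The abstract above describes adjusting a frequent, error-prone series to agree with accurate, less frequent benchmarks, with a multiplicative AR(1) survey error. As a rough illustration of the setting only — simple pro-rata benchmarking on synthetic data, not the thesis's quasi-linear regression procedure, and with made-up parameter values — the idea can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions, not the thesis's method): a "true"
# monthly series, observed with multiplicative AR(1) survey error, plus
# exact annual benchmarks equal to the annual sums of the truth.
months = 48
true = 100 + np.cumsum(rng.normal(0, 2, months))   # true monthly target

phi, sigma = 0.6, 0.03                             # hypothetical AR(1) parameters
e = np.zeros(months)
for t in range(1, months):
    e[t] = phi * e[t - 1] + rng.normal(0, sigma)
survey = true * (1 + e)                            # multiplicative error

benchmarks = true.reshape(-1, 12).sum(axis=1)      # annual benchmarks

# Simple pro-rata benchmarking: rescale each year of the monthly survey
# so its annual sum matches the benchmark exactly.
adj = survey.reshape(-1, 12)
adjusted = (adj * (benchmarks / adj.sum(axis=1))[:, None]).ravel()

rmse = lambda x: np.sqrt(np.mean((x - true) ** 2))
print(f"RMSE before benchmarking: {rmse(survey):.3f}")
print(f"RMSE after  benchmarking: {rmse(adjusted):.3f}")
```

By construction the adjusted series reproduces every benchmark exactly; the thesis's quasi-linear regression instead exploits the AR(1) error structure to distribute the correction within each year.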
493

Influence measures for Weibull regression in survival analysis.

January 2003 (has links)
Tsui Yuen-Yee. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 53-56). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Parametric Regressions in Survival Analysis --- p.6 / Chapter 2.1 --- Introduction --- p.6 / Chapter 2.2 --- Exponential Regression --- p.7 / Chapter 2.3 --- Weibull Regression --- p.8 / Chapter 2.4 --- Maximum Likelihood Method --- p.9 / Chapter 2.5 --- Diagnostics --- p.10 / Chapter 3 --- Local Influence --- p.13 / Chapter 3.1 --- Introduction --- p.13 / Chapter 3.2 --- Development --- p.14 / Chapter 3.2.1 --- Normal Curvature --- p.14 / Chapter 3.2.2 --- Conformal Normal Curvature --- p.15 / Chapter 3.2.3 --- Q-displacement Function --- p.16 / Chapter 3.3 --- Perturbation Scheme --- p.17 / Chapter 4 --- Examples --- p.21 / Chapter 4.1 --- Halibut Data --- p.21 / Chapter 4.1.1 --- The Data --- p.22 / Chapter 4.1.2 --- Initial Analysis --- p.23 / Chapter 4.1.3 --- Perturbations of σ around 1 --- p.23 / Chapter 4.2 --- Diabetic Data --- p.30 / Chapter 4.2.1 --- The Data --- p.30 / Chapter 4.2.2 --- Initial Analysis --- p.31 / Chapter 4.2.3 --- Perturbations of σ around σ --- p.31 / Chapter 5 --- Concluding Remarks and Further Research Topics --- p.35 / Appendix A --- p.38 / Appendix B --- p.47 / Bibliography --- p.53
494

Development of a sensory lexicon for smoky and applications of that lexicon

Jaffe, Taylor Rae January 1900 (has links)
Master of Science / Department of Food, Nutrition, Dietetics and Health / Edgar Chambers IV / Smoking of food is one of the oldest methods of food preservation and is still widely used to help preserve foods such as meats, fish and cheeses. Apart from its preservation function, the smoking process also has a considerable influence on the sensory characteristics of the products. A highly trained, skilled descriptive sensory panel identified, defined and referenced 14 attributes related to the flavor of food products labeled as smoked or smoky. The lexicon included: Smoky (Overall), Ashy, Woody, Musty/Dusty, Musty/Earthy, Burnt, Acrid, Pungent, Petroleum-Like, Creosote/Tar, Cedar, Bitter, Metallic and Sour. Definitions of these attributes were written and references were found to anchor a 0-15 point scale. This lexicon was used to evaluate differences among smoked products under different circumstances, such as products on the market versus products smoked at home, different woods used to smoke products, and the length of time a product spends in the smoker. Many methods are used to impart smoky flavor; because of health, environmental and economic concerns, many producers use nontraditional methods, while hobbyists thrive on the traditional ones. Descriptive analysis was used to determine whether there are differences between products smoked using an at-home smoker and market products. Using principal component analysis, cluster analysis and analysis of variance, market products were found to be significantly different from products smoked using an at-home smoker: the market products were significantly more Sour and less Smoky, Ashy, Woody, Musty/Dusty and Acrid. Many types of wood are used to smoke products, and many market products distinguish themselves based on the wood used. Six highly trained panelists evaluated pork that was smoked with hickory, mesquite, cherry wood or apple wood for 1, 2 or 4 hours.
The smoke flavor profiles were similar across the types of wood, although as the length of time in the smoker increased and the intensities of most attributes rose, the differences among products smoked with different woods became more pronounced. Apple-wood-smoked products had higher intensities for Overall Smoky, Ashy, Burnt, Pungent, Petroleum-Like, Creosote/Tar and Cedar, while cherry-wood-smoked products had lower intensities for all attributes. Hickory- and mesquite-smoked products were not significantly different from each other and typically scored between the other two woods. Smoking is a slow process, and many popular restaurants that smoke their own products find that claims of smoking for long periods are beneficial to their image. Descriptive analysis was used to see how the flavor changes with the length of time the product (pork) was in the smoker. The pork samples ranged from not smoked to smoked for 15 hours, sampled at every 2.5-hour increment. For most attributes, intensities increased with the time the product was in the smoker; the only exceptions were Musty/Earthy and Sour. The regression analysis revealed that Smoky, Ashy, Acrid, Creosote/Tar and Bitter are all at least moderately correlated with the time the product spent in the smoker.
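The time-in-smoker analysis described above amounts to regressing each attribute's mean panel intensity on smoking time. A minimal sketch, with entirely invented intensity values standing in for the panel's data (the design — a 0-15 scale, times from 0 to 15 hours in 2.5-hour steps — follows the abstract):

```python
import numpy as np

# Hypothetical mean "Smoky" intensities (0-15 scale) at each smoking time;
# these numbers are made up for illustration, not the study's measurements.
hours = np.array([0.0, 2.5, 5.0, 7.5, 10.0, 12.5, 15.0])
smoky = np.array([0.5, 3.1, 5.0, 6.8, 8.2, 9.5, 10.4])

# Ordinary least-squares fit of intensity on time, plus the correlation
# coefficient that the abstract's regression analysis reports per attribute.
slope, intercept = np.polyfit(hours, smoky, 1)
r = np.corrcoef(hours, smoky)[0, 1]
print(f"slope = {slope:.2f} points/hour, r = {r:.3f}")
```

An attribute like Smoky, which rises steadily with time, yields a strongly positive r; attributes such as Musty/Earthy and Sour, which did not increase, would show weak or no correlation under the same fit.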
495

The effect of alien germplasm on 2M urea-soluble protein electrophoresis

Chavez, Elyzabeth January 2011 (has links)
Digitized by Kansas Correctional Industries
496

Electrochemical methods for speciation of inorganic arsenic

D'Arcy, Karen Ann 01 January 1986 (has links)
Arsenic is found in the environment in several oxidation states as well as in a variety of organoarsenic compounds. This places additional demands on the analysis, since it is desirable to measure the amount of each species rather than just the total arsenic: the different species have greatly different toxicities, and of the major inorganic forms As(III) is much more toxic than As(V). The goal of this research was to develop a convenient method for the analysis of mixtures of As(III) and As(V) at trace levels. Electroanalytical methods are inherently sensitive to the oxidation states of elements and are therefore a natural choice for this problem. In fact, a method using differential pulse polarography was developed some years ago for As(III); its detection limit is 0.3 parts per billion (ppb). However, As(V) was not detected, since in its usual form as an oxyanion it is electrochemically inactive. There are coordination compounds formed with catechol, AsLₙ (n = 1-3), that can be reduced at a mercury electrode, but the active species, AsL, is only a small fraction of the major species, AsL₃, so the detection limit is only 500 ppb. Many details of the electrochemistry of this unusual compound were examined in this work. To improve detection limits, a method involving cathodic stripping was developed. It involves codeposition of copper with arsenic on a mercury electrode to effectively concentrate the analyte; the elemental arsenic is then converted to arsine, AsH₃, during a cathodic potential scan. The resulting current peak is proportional to As(III) in the absence of catechol and to the sum of As(III) and As(V) in the presence of catechol. The current peak was considerably larger than expected, and additional experiments revealed that hydrogen evolved during the formation of arsine.
This is rather unusual in electrochemical reactions, so some details of this catalyzed coreaction were examined. The result is a fortunate enhancement of the detection limit, so that As(V) at 40 ppb can be measured.
497

Scale parameter modelling of the t-distribution

Taylor, Julian January 2005 (has links)
This thesis considers location and scale parameter modelling of the heteroscedastic t-distribution. This new distribution extends the heteroscedastic Gaussian: it provides robust analysis in the presence of outliers and accommodates possible heteroscedasticity by flexibly modelling the scale parameter using covariates existing in the data. To motivate components of the work in this thesis, the Gaussian linear mixed model is reviewed. The mixed model equations are derived for the fixed and random location effects, and this model is then used to introduce Restricted Maximum Likelihood (REML). From this, an algorithmic scheme to estimate the scale parameters is developed. A review of location and scale parameter modelling of the heteroscedastic Gaussian distribution is presented. In this thesis, the scale parameters are restricted to be a function of covariates existing in the data. Maximum Likelihood (ML) and REML estimation of the location and scale parameters is derived, and an efficient computational algorithm and software are presented. The Gaussian model is then extended by considering the heteroscedastic t-distribution. Initially, the heteroscedastic t is restricted to known degrees of freedom. Scoring equations for the location and scale parameters are derived, and their intimate connection to the prediction of the random scale effects is discussed. Tools for detecting and testing heteroscedasticity are also derived and a computational algorithm is presented. A mini software package "hett" using this algorithm is also discussed. To derive a REML equivalent for the heteroscedastic t, asymptotic likelihood theory is discussed. An integral approximation, the Laplace approximation, is presented, and two examples, including ML for the heteroscedastic t, are discussed. A new approximate integration technique called the Partial Laplace approximation is also discussed and is exemplified with linear mixed models.
Approximate marginal likelihood techniques using Modified Profile Likelihood (MPL), Conditional Profile Likelihood (CPL) and Stably Adjusted Profile Likelihood (SAPL) are also presented and offer an alternative to the approximate integration techniques. The asymptotic techniques are then applied to the heteroscedastic t with known degrees of freedom to form two distinct REMLs for the scale parameters. The first uses the Partial Laplace approximation to form a REML for the scale parameters, whereas the second uses the approximate marginal likelihood technique MPL. For each, the estimation of the location and scale parameters is discussed and computational algorithms are presented. For comparison, the heteroscedastic t with known degrees of freedom under ML and the two new REML equivalents are illustrated with an example and a comparative simulation study. The model is then extended to incorporate estimation of the degrees of freedom parameter. The estimating equations for the location and scale parameters under ML are preserved, and estimation of the degrees of freedom parameter is integrated into the algorithm. The approximate REML techniques are also extended. For the Partial Laplace approximation, the degrees of freedom parameter is estimated simultaneously with the scale parameters, so the algorithm differs only slightly. The second approximation uses SAPL to estimate the parameters and produces approximate marginal likelihoods for the location, scale and degrees of freedom parameters. Computational algorithms for each technique are also presented. Several extensive examples, as well as a comparative simulation study, illustrate ML and the two REML equivalents for the heteroscedastic t with unknown degrees of freedom. The thesis concludes with a discussion of the new techniques derived for the heteroscedastic t-distribution along with their advantages and disadvantages.
Topics for further research are also discussed. / Thesis (Ph.D.)--School of Agriculture and Wine, 2005.
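The core model the abstract describes — a t-distributed response whose location and log-scale are each linear in covariates, with known degrees of freedom — can be sketched by direct ML with a generic optimiser. This is only a toy illustration on synthetic data with invented parameter values; it is not the scoring, REML, or "hett" algorithms the thesis develops:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as t_dist

rng = np.random.default_rng(1)

# Synthetic data: mu_i = b0 + b1*x_i, log(sigma_i) = l0 + l1*z_i, and a
# t-distributed error with known degrees of freedom nu (all values made up).
n = 400
x, z = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
nu = 5.0
b_true, l_true = (1.0, 2.0), (-1.0, 1.5)
y = b_true[0] + b_true[1] * x \
    + np.exp(l_true[0] + l_true[1] * z) * rng.standard_t(nu, n)

def negloglik(theta):
    # Joint negative log-likelihood in (b0, b1, l0, l1); modelling the
    # log-scale keeps sigma_i positive without constraints.
    b0, b1, l0, l1 = theta
    mu = b0 + b1 * x
    s = np.exp(l0 + l1 * z)
    return -np.sum(t_dist.logpdf(y, df=nu, loc=mu, scale=s))

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
print("estimates (b0, b1, l0, l1):", np.round(fit.x, 2))
```

With heteroscedasticity entering through the log-scale regression, a likelihood-ratio test of l1 = 0 gives one simple version of the heteroscedasticity tests the abstract mentions.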
498

Ranking and Selection Procedures for Bernoulli and Multinomial Data

Malone, Gwendolyn Joy 02 December 2004 (has links)
Ranking and Selection procedures have been designed to select the best system from a number of alternatives, where the best system is defined by the given problem. The primary focus of this thesis is on experiments where the data are from simulated systems. In simulation ranking and selection procedures, four classes of comparison problems are typically encountered. We focus on two of them: Bernoulli and multinomial selection. Therefore, we wish to select the best system from a number of simulated alternatives where the best system is defined as either the one with the largest probability of success (Bernoulli selection) or the one with the greatest probability of being the best performer (multinomial selection). We focus on procedures that are sequential and use an indifference-zone formulation wherein the user specifies the smallest practical difference he wishes to detect between the best system and other contenders. We apply fully sequential procedures due to Kim and Nelson (2004) to Bernoulli data for terminating simulations, employing common random numbers. We find that significant savings in total observations can be realized for two to five systems when we wish to detect small differences between competing systems. We also study the multinomial selection problem. We offer a Monte Carlo simulation of the Bechhofer and Kulkarni (1984) MBK multinomial procedure and provide extended tables of results. In addition, we introduce a multi-factor extension of the MBK procedure. This procedure allows for multiple independent factors of interest to be tested simultaneously from one data source (e.g., one person will answer multiple independent surveys) with significant savings in total observations compared to the factors being tested in independent experiments (each survey is run with separate focus groups and results are combined after the experiment). 
Another multi-factor multinomial procedure is also introduced as an extension of the MBG procedure due to Bechhofer and Goldsman (1985, 1986). This procedure performs better than any other procedure to date for the multi-factor multinomial selection problem and should always be used whenever table values for the truncation point are available.
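The Bernoulli selection goal described above — pick the system with the largest success probability, with an indifference-zone gap the user cares about — can be illustrated with a single-stage Monte Carlo sketch. This is not the sequential Kim and Nelson (2004) procedure the thesis applies; the configuration p, sample size n, and replication count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Slippage configuration: system 0 is best by delta = 0.05 (made-up values).
p = np.array([0.55, 0.50, 0.50])
n, reps = 500, 2000

# Single-stage rule: observe n Bernoulli trials per system and select the
# system with the most successes; estimate P(correct selection) over reps.
wins = 0
for _ in range(reps):
    successes = rng.binomial(n, p)
    best = np.flatnonzero(successes == successes.max())
    wins += rng.choice(best) == 0   # break ties at random

print(f"estimated P(correct selection) = {wins / reps:.3f}")
```

A fully sequential indifference-zone procedure reaches the same guaranteed P(correct selection) with far fewer observations on average, which is the savings the thesis quantifies.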
499

Some stochastic properties of random classical and Carlitz compositions

Kheyfets, Boris Leonid. January 2004 (has links)
Thesis (Ph. D.)--Drexel University, 2004. / Includes abstract and vita.
500

Cure models for univariate and multivariate survival data

Zhou, Feifei, 周飞飞. January 2011 (has links)
published_or_final_version / Statistics and Actuarial Science / Doctoral / Doctor of Philosophy
