  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The use of Raman spectroscopy in pharmaceutical analysis

De Paepe, Anne Therese Gustaaf January 2001 (has links)
No description available.
2

Two Novel Switched Current Circuits

Chang-Chan, Sun-Yu 26 July 2000 (has links)
Two novel clock feedthrough compensation circuits for switched-current (SI) memory cells are proposed to reduce the clock feedthrough error. One is a current-compensation first-generation SI memory cell, and the other is an error-voltage-reduction second-generation SI memory cell. Both circuits are designed using a 0.5 µm UMC CMOS process. In this study, the first circuit achieved an accuracy of about 0.1% error at a frequency of 5 MHz, and the second circuit achieved 0.12% error at 10.5 MHz. These results were obtained from SPICE simulations.
3

Design, fabrication, and implementation of a single-cell capture chamber for a microfluidic impedance sensor

Fadriquela, Joshua-Jed Doria. Clague, David. January 1900 (has links)
Thesis (M.S.)--California Polytechnic State University, 2009. / Title from PDF title page; viewed on December 17, 2009. Major professor: David Clague, Ph.D. "Presented to the faculty of California Polytechnic State University, San Luis Obispo." "In partial fulfillment of the requirements for the degree [of] Master of Science in biomedical Engineering." "June 2009." Also available on microfiche.
4

Sufficient sample sizes for the multivariate multilevel regression model

Chang, Wanchen 08 September 2015 (has links)
The three-level multivariate multilevel model (MVMM) is a multivariate extension of the conventional univariate two-level hierarchical linear model (HLM) and is used for estimating and testing the effects of explanatory variables on a set of correlated continuous outcome measures. Two simulation studies were conducted to investigate the sample size requirements for restricted maximum likelihood (REML) estimation of three-level MVMMs, the effects of sample sizes and other design characteristics on estimation, and the performance of the MVMMs compared to corresponding two-level HLMs. The model for the first study was a random-intercept MVMM, and the model for the second study was a fully conditional MVMM. Study conditions included number of clusters, cluster size, intraclass correlation coefficient, number of outcomes, and correlations between pairs of outcomes. The accuracy and precision of estimates were assessed with parameter bias, relative parameter bias, relative standard error bias, and 95% confidence interval coverage. Empirical power and type I error rates were also calculated. Implications of the results for applied researchers and suggestions for future methodological studies are discussed.
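The abstract names its evaluation criteria without defining them. As a rough illustration only, the sketch below uses the conventional definitions of relative parameter bias, relative standard error bias, and 95% interval coverage from simulation replicates; the function name and inputs are illustrative and not taken from the dissertation, whose exact formulas may differ.

```python
import numpy as np

def simulation_diagnostics(estimates, std_errors, true_value):
    """Summarize recovery of one parameter across simulation replicates.

    estimates  : point estimates, one per replicate
    std_errors : estimated standard errors, one per replicate
    true_value : the generating (population) value of the parameter
    """
    estimates = np.asarray(estimates, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)

    rel_bias = (estimates.mean() - true_value) / true_value   # relative parameter bias
    empirical_sd = estimates.std(ddof=1)                      # empirical sampling variability
    rel_se_bias = (std_errors.mean() - empirical_sd) / empirical_sd

    # 95% Wald interval coverage: proportion of replicates whose interval
    # contains the generating value.
    lower = estimates - 1.96 * std_errors
    upper = estimates + 1.96 * std_errors
    coverage = np.mean((lower <= true_value) & (true_value <= upper))

    return {"relative_bias": rel_bias,
            "relative_se_bias": rel_se_bias,
            "ci_coverage": coverage}
```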
5

An investigation into the use of power transformations to normality

Hayes, M. W. January 1986 (has links)
No description available.
6

Comparison of sample preparation techniques for the detection and quantification of twenty-three drugs in oral fluid

Hwang, Hajin 08 July 2020 (has links)
Forensic toxicology is a branch of science that involves the analysis of drugs and other substances in biological fluids and tissues such as blood, urine, and oral fluid to aid medical or legal investigation of death, poisoning, and drug use. Due to the various components of different matrices, efficient and effective sample preparation techniques are necessary for reliable and accurate analysis. Following sample clean-up, a sensitive, specific, and robust method is ideal for consistent detection, identification, and quantitation of analytes. With the rise of drug abuse, there is a growing need to develop a single method that can target multiple classes of drugs quickly and effectively. This study validated two different sample preparation techniques for the detection and quantitation of six drug classes comprising twenty-three drugs and metabolites in oral fluid. The drug classes were as follows: amphetamines, local anesthetics, opioids, hallucinogens, antidepressants, and novel psychoactive substances (NPS). Amphetamines used were amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine (MDA), 3,4-methylenedioxy-N-ethylamphetamine (MDEA), and 3,4-methylenedioxymethamphetamine (MDMA). Local anesthetics contained benzoylecgonine (BZE), cocaine, and lidocaine. Opioids included codeine, methadone, morphine, 6-monoacetylmorphine (6-MAM), fentanyl, and oxycodone. Hallucinogens included lysergic acid diethylamide (LSD) and phencyclidine (PCP). Antidepressants were amitriptyline, citalopram, fluoxetine, and trazodone. Lastly, NPS included ethylone, α-pyrrolidinopentiophenone (α-PVP), and 2,5-dimethoxy-4-iodophenethylamine N-(2-methoxybenzyl) (25I-NBOMe). Supported liquid extraction (SLE) and solid phase extraction (SPE) were assessed, followed by confirmatory analysis by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Both methods were validated according to guidelines in the Standard Practices for Method Validation in Forensic Toxicology set by the American Academy of Forensic Science (AAFS) Standards Board (ASB). Parameters assessed included the calibration model, bias, precision, limit of detection (LOD), limit of quantitation (LOQ), dilution integrity, ion suppression/enhancement, interference studies, and stability. Matrix recovery was added as another parameter. All calibration models had coefficients of 0.99 or greater, and all compounds were stable for at least 72 hours. Bias, precision, LOD, LOQ, dilution integrity, and interferences were similar between both methods. SLE yielded slightly better LOD and LOQ values. SLE also had greater matrix recovery and lower levels of ionization suppression/enhancement. Overall, SLE was determined to be the better method of sample preparation for this panel of drugs in oral fluid. Not only did it yield better values for several of the parameters assessed, but it was also more efficient (1 hour versus 2 hours) while using less solvent.
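As a minimal sketch only, the following shows how bias and precision are conventionally computed from replicate measurements of a fortified control: bias as percent deviation of the grand mean from the nominal concentration, precision as percent coefficient of variation. The exact acceptance formulas prescribed by the ASB standard and used in this thesis may differ in detail, and the example values are hypothetical.

```python
import numpy as np

def bias_and_precision(measured, nominal):
    """Percent bias and precision (%CV) from replicate measurements
    of a quality-control sample at one nominal concentration."""
    measured = np.asarray(measured, dtype=float)
    grand_mean = measured.mean()
    bias_pct = (grand_mean - nominal) / nominal * 100.0   # percent bias
    cv_pct = measured.std(ddof=1) / grand_mean * 100.0    # percent coefficient of variation
    return bias_pct, cv_pct

# Example: five replicates of a 50 ng/mL control (hypothetical values)
print(bias_and_precision([48.2, 51.0, 49.5, 50.8, 47.9], 50.0))
```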
7

Sample Size Determination in Multivariate Parameters With Applications to Nonuniform Subsampling in Big Data High Dimensional Linear Regression

Yu Wang (11821553) 20 December 2021 (has links)
Subsampling is an important method in the analysis of Big Data. Subsample size determination (SSSD) plays a crucial part in extracting information from data and in addressing the challenges resulting from huge data sizes. In this thesis, (1) sample size determination (SSD) is investigated for multivariate parameters, and sample size formulas are obtained for the multivariate normal distribution. (2) Sample size formulas are obtained based on concentration inequalities. (3) Improved bounds for McDiarmid's inequalities are obtained. (4) The obtained results are applied to nonuniform subsampling in Big Data high dimensional linear regression. (5) Numerical studies are conducted.

The sample size formula for the univariate normal distribution is a melody in elementary statistics. It appears that its generalization to the multivariate normal (or, more generally, to multivariate parameters) has not attracted much attention, to the best of our knowledge. In this thesis, we introduce a definition for SSD and obtain explicit formulas for the multivariate normal distribution, in gratifying analogy to the sample size formula in the univariate normal case.

Commonly used concentration inequalities provide exponential rates, and sample sizes based on these inequalities are often loose. Talagrand (1995) provided the missing factor to sharpen these inequalities. We obtained the numeric values of the constants in the missing factor and slightly improved his results. Furthermore, we provided the missing factor in McDiarmid's inequality. These improved bounds are used to give shrunken sample sizes.
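For reference, the univariate "melody" the abstract alludes to, in its standard textbook form (not quoted from the thesis, whose notation may differ), is the sample size needed to estimate a normal mean with known standard deviation to within a given margin of error:

```latex
% Sample size to estimate a normal mean \mu (known \sigma) to within a
% margin of error E at confidence level 1 - \alpha:
n \;\ge\; \left(\frac{z_{\alpha/2}\,\sigma}{E}\right)^{2}
% Example: \sigma = 10, E = 2, \alpha = 0.05 (z_{0.025} \approx 1.96)
% gives n \ge 96.04, so n = 97 observations suffice.
```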
8

High-Frequency Electronics for Contactless Dielectrophoresis

Caldwell, John Lawrence 16 June 2010 (has links)
The field of sample enrichment is currently receiving a large amount of attention because it is essential to reduce the time required for many laboratory processes. Dielectrophoresis, or the motion of a polarized particle in the presence of a non-uniform electric field, has emerged as a promising method for biological sample concentration. By relying upon electrical properties that are intrinsic to a cell or microparticle, dielectrophoretic concentration avoids the need for sample preparation procedures that can greatly reduce the throughput of a system. Contactless Dielectrophoresis (cDEP) is a promising manifestation of dielectrophoresis in which the electrode structures that provide the non-uniform electric field are physically separated from the sample by a thin dielectric barrier. This work presents two methods for providing the high-voltage, high-frequency signal necessary to generate a non-uniform electric field in the sample channel of a cDEP device. The first method, an oscillator-based system, was able to produce DEP trapping and pearl-chaining of THP-1 and MCF-7 breast cancer cells in a cDEP device. The second method presented here utilizes an amplifier and transformer combination to generate very high voltages over a wide range of frequencies. Finally, electrorotation, or the spin imparted to a particle by a rotating electric field, has proven to be an extremely useful technique for analyzing a cell's dielectric properties. A wideband, computer-controlled function generator outputting four sinusoidal waveforms in quadrature is presented. This device was able to produce outputs with the proper alignment over the range of 10 Hz to 100 MHz.
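The abstract defines dielectrophoresis only verbally. For context, the standard textbook expression for the time-averaged DEP force on a spherical particle is given below; this is the classical Clausius-Mossotti formulation, not a formula taken from the thesis itself:

```latex
% Time-averaged DEP force on a sphere of radius r suspended in a medium of
% permittivity \varepsilon_m, in a non-uniform field of rms magnitude E_rms:
\mathbf{F}_{\mathrm{DEP}}
  = 2\pi\,\varepsilon_m\, r^{3}\,
    \operatorname{Re}\!\left[f_{\mathrm{CM}}(\omega)\right]\,
    \nabla\lvert\mathbf{E}_{\mathrm{rms}}\rvert^{2},
\qquad
f_{\mathrm{CM}}(\omega)
  = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}
         {\varepsilon_p^{*}+2\,\varepsilon_m^{*}},
\qquad
\varepsilon^{*} = \varepsilon - j\,\frac{\sigma}{\omega}.
% The sign of Re[f_CM] determines whether particles are attracted toward
% (positive DEP) or repelled from (negative DEP) field maxima.
```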
9

Nonidentity Matching-to-Sample with Retarded Adolescents: Stimulus Equivalences and Sample-Comparison Control

Stromer, Robert 01 May 1980 (has links)
In Experiment 1, four subjects were trained to match two visual samples (A) and their respective nonidentical visual comparisons (B), i.e., A-B matching. During nonreinforced test trials, all subjects demonstrated stimulus equivalences within the context of sample-comparison reversibility (B-A matching): when B stimuli were used as samples, appropriate responding to A comparisons occurred. A-B and B-A matching persisted given novel stimuli as alternate comparisons. However, the novel comparisons were consistently selected in the presence of nonmatching stimuli, i.e., during trials comprised of a novel comparison, an A or B sample from one stimulus class, and an "incorrect" comparison from the other (B or A stimuli, respectively). In Experiment 2, three groups of subjects were trained under three different mediated transfer paradigms (e.g., A-B, C-B matching). Tests for reversibility (e.g., B-A, B-C matching) and mediated transfer (e.g., A-C, C-A matching) evinced stimulus equivalences for 11 of 12 subjects. The 11 subjects also matched the mediated equivalences given novel comparisons, whereas they selected the novel comparisons when these were combined with nonmatching stimuli. Overall, the demonstrated stimulus equivalences favor a concept-learning interpretation of nonidentity matching-to-sample. Additionally, the trained and mediated matching relations were comprised of complementary sets of S+ and S- rules: any stimulus of a given class used as a sample designated both the "correct" and "incorrect" comparisons.
10

Sample Size Determination in Auditing Accounts Receivable Using a Zero-Inflated Poisson Model

Pedersen, Kristen E 28 April 2010 (has links)
In the practice of auditing, a sample of accounts is chosen to verify whether the accounts are materially misstated, as opposed to auditing all accounts, which would be too expensive. This paper seeks a method for choosing a sample size of accounts that gives a more accurate estimate than the methods currently used for sample size determination. Methods to determine sample size are reviewed under both the frequentist and Bayesian settings, and then our method using the Zero-Inflated Poisson (ZIP) model, which explicitly considers zero versus non-zero errors, is introduced. This model is favorable due to the excess zeros present in auditing data, which the standard Poisson model does not account for, and it could easily be extended to data similar to accounting populations.
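The abstract names the zero-inflated Poisson model without stating it. In its standard textbook form (not a formula quoted from the thesis), the error count Y of an audited account follows:

```latex
% Zero-inflated Poisson: with probability \pi the account contributes a
% structural zero; otherwise the error count is Poisson(\lambda):
P(Y = 0) = \pi + (1-\pi)\,e^{-\lambda},
\qquad
P(Y = k) = (1-\pi)\,\frac{e^{-\lambda}\,\lambda^{k}}{k!},
\quad k = 1, 2, \ldots
```

The extra mass at zero is what lets the model represent audit populations in which most accounts contain no error at all.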
