181

The Relationship between Executive Functioning and Attention in a Clinically Referred Pediatric Sample

Hines, Lindsay June 01 January 2009 (has links)
This study examined the relationship between performance on measures of attention and executive functioning in a clinically referred pediatric sample. The purpose of this research was to determine whether performance on tests of attention is significantly related to performance on measures of inhibition and cognitive shifting above and beyond age, education, and intelligence. The factor structure of attention and executive functioning was also evaluated. Attention was measured by the CPT-II Errors of Omission and Variability scores. Inhibition was measured by the CPT-II Errors of Commission score, and cognitive shifting was measured by the Wisconsin Card Sorting Test (WCST) Perseverative Errors score. These variables were examined in a factor analysis that also included the Category Errors score and the WISC-IV Digit Span and Letter-Number Sequencing subtests. Three hierarchical multiple regressions were conducted, with age, education, and IQ entered in the first block as covariates. Two exploratory factor analyses were performed. Results revealed that performance on measures of attention significantly predicted scores on a measure of inhibition above and beyond age, education, and IQ. Performance on measures of attention did not significantly predict scores on a measure of shifting ability. Results were not significantly different when IQ was not included as a covariate. Factor analysis initially revealed a two-factor model, with measures of sustained attention loading on one factor and measures of executive functioning loading on a separate factor. The three-factor model was less precisely defined; its factors were labeled sustained attention, working memory, and set shifting.
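The hierarchical regression design described above can be illustrated with a short sketch. This is a minimal, hypothetical example (the synthetic data and variable names are assumptions, not the study's dataset): covariates enter in a first block, the attention score is added in a second block, and the incremental R² is tested with an F-test.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data: age, education, IQ (block 1) and an attention score (block 2)
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "age": rng.uniform(6, 16, n),
    "education": rng.integers(1, 10, n),
    "iq": rng.normal(100, 15, n),
})
df["attention"] = rng.normal(50, 10, n)
# Outcome: an inhibition score partly driven by attention (illustrative only)
df["inhibition"] = 0.4 * df["attention"] + 0.1 * df["iq"] + rng.normal(0, 5, n)

# Block 1: covariates only
X1 = sm.add_constant(df[["age", "education", "iq"]])
m1 = sm.OLS(df["inhibition"], X1).fit()

# Block 2: covariates plus the attention measure
X2 = sm.add_constant(df[["age", "education", "iq", "attention"]])
m2 = sm.OLS(df["inhibition"], X2).fit()

# Incremental variance explained by attention, with an F-test of the nested models
delta_r2 = m2.rsquared - m1.rsquared
f_stat, p_value, df_diff = m2.compare_f_test(m1)
print(f"R2 block 1 = {m1.rsquared:.3f}, R2 block 2 = {m2.rsquared:.3f}, delta R2 = {delta_r2:.3f}")
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```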
182

Intra-organ regulation of gene expression responses for the shade avoidance / 避陰応答における遺伝子発現応答の器官内調節

Kim, Sujung 23 May 2018 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Science / 甲第21250号 / 理博第4420号 / 新制||理||1634 (University Library) / Division of Biological Sciences, Graduate School of Science, Kyoto University / (Examiners) Professor 長谷 あきら, Professor 鹿内 利治, Associate Professor 小山 時隆 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DGAM
183

The Effect of Sampling Processing on X-Ray Diffraction Peaks of Dolomite: Implications for Studies of Shock Metamorphosed Materials

Simpson, Emily N. January 2019 (has links)
No description available.
184

Initial Estimation of Forest Inventory Sizes for Timber Sales from Easily Observed Stand Attributes

Skidmore, Joshua Philip 30 April 2011 (has links)
Preliminary plots are required when beginning a cruise for a timber sale in order to gauge how much variation in volume exists within the sale area. This variation, expressed as the coefficient of variation (CV), is subsequently used to estimate the number of plots needed to carry out the cruise to a desired level of accuracy (allowable error). By examining a large number of sale inventories and identifying similarities among key attributes (trees per acre, diameter at breast height, and an estimate of variance), two models based on simple stand observations were derived to help field personnel obtain a more accurate estimate of the CV. The models also estimate the number of 1/10 acre plots needed to sample a stand to within a ±10% allowable error at the 90% confidence level for total tonnage.
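The plot-count step described above follows the standard cruising relationship n = (t · CV / AE)², where AE is the allowable error in percent. The sketch below is a generic illustration of that formula, not the thesis's models; the example values and the degrees of freedom used for t are assumptions.

```python
from scipy import stats

def plots_needed(cv_percent: float, allowable_error_percent: float = 10.0,
                 confidence: float = 0.90, df: int = 30) -> int:
    """Estimate cruise plot count from the standard formula n = (t * CV / AE)^2.

    cv_percent: coefficient of variation of volume, in percent
    allowable_error_percent: target sampling error, in percent
    df: degrees of freedom assumed for the t value on the first pass
    """
    t = stats.t.ppf(1 - (1 - confidence) / 2, df)  # two-sided t value
    n = (t * cv_percent / allowable_error_percent) ** 2
    return int(n) + 1  # round up to a whole plot

# Example: a stand with CV = 40% cruised to +/-10% allowable error at 90% confidence
print(plots_needed(40.0))
```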
185

Estimating wildlife viewing recreational demand and consumer surplus

Mingie, James Cory 06 August 2011 (has links)
Motivated by the increasing popularity of wildlife viewing and a growing emphasis on management for nontimber outputs, wildlife viewing demand was assessed. Specific objectives included determining the factors affecting participation and frequency of use and deriving 2006 nationwide wildlife viewing consumer surplus estimates. With the travel cost method as the theoretical basis, the empirical estimation method employed was a two-step sample selection model with a probit first step and a negative binomial second step. Consumer surplus per trip estimates ranged from $215.23 to $739.07, while aggregate national estimates ranged from $44.5 billion to $185.1 billion. Results reveal that age, race, and urban residence affect participation and frequency similarly. This research can help policymakers better understand the determinants of wildlife viewing participation and frequency. The value of wildlife viewing access can be used to justify funding initiatives aimed at protecting or managing for this use.
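A two-step estimator of the kind described (a probit participation equation followed by a negative binomial trip-frequency equation) can be sketched as follows. The synthetic data, variable names, and the inverse-Mills-ratio correction are illustrative assumptions rather than the study's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical survey data: travel cost, income, age, urban dummy
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "travel_cost": rng.uniform(5, 200, n),
    "income": rng.normal(50, 15, n),
    "age": rng.integers(18, 80, n),
    "urban": rng.integers(0, 2, n),
})
latent = 1.0 - 0.01 * df["travel_cost"] + 0.01 * df["income"] + rng.normal(0, 1, n)
df["participates"] = (latent > 0).astype(int)
df["trips"] = np.where(df["participates"] == 1,
                       rng.poisson(np.exp(1.5 - 0.008 * df["travel_cost"])), 0)

X = sm.add_constant(df[["travel_cost", "income", "age", "urban"]])

# Step 1: probit for the participation decision
probit = sm.Probit(df["participates"], X).fit(disp=False)

# Selection correction: inverse Mills ratio from the probit index
xb = X.dot(probit.params)
imr = pd.Series(norm.pdf(xb) / norm.cdf(xb), index=df.index, name="imr")

# Step 2: negative binomial trip-frequency model for participants only
users = df["participates"] == 1
X2 = sm.add_constant(pd.concat(
    [df.loc[users, ["travel_cost", "income", "age", "urban"]], imr[users]], axis=1))
nb = sm.NegativeBinomial(df.loc[users, "trips"], X2).fit(disp=False)

# In the travel cost framework, consumer surplus per trip is -1 / (travel cost coefficient)
cs_per_trip = -1.0 / nb.params["travel_cost"]
print(f"Consumer surplus per trip (illustrative): {cs_per_trip:.2f}")
```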
186

Estimating the Population Standard Deviation based on the Sample Range for Non-normal Data

Li, Yufeng January 2023 (has links)
Recently, an increasing number of researchers have attempted to overcome the constraints of size and scope in individual medical studies by estimating overall treatment effects from a combination of studies. A commonly used approach is meta-analysis, which combines results from multiple studies. The population standard deviation in primary studies is an essential quantity that is sometimes absent, especially when the outcome has a skewed distribution; instead, the sample size and the sample range of the whole dataset are reported. Several methods estimate the standard deviation of the data from the sample range if the data are assumed to be normally distributed, for example the Tippett [2], Ramirez and Cox [3], Hozo et al. [4], Rychtar and Taylor [5], Mantel [6], Sokal and Rohlf [7], and Chen and Tyler [8] methods. Only a few papers provide a solution for estimating the population standard deviation of non-normally distributed data. In this thesis, other distributions commonly used in clinical studies will be simulated to estimate the population standard deviation using the methods mentioned above. The performance and robustness of those methods for different sample sizes and different distribution parameters will be presented, and the methods will also be evaluated on real-world datasets. The thesis provides guidelines describing which methods perform best with non-normally distributed data. / Thesis / Master of Science (MSc)
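Range-based estimators of this kind typically divide the observed range by the expected range of a standard normal sample of the same size (the idea behind Tippett-style methods). The sketch below illustrates that general idea rather than reproducing any of the cited methods exactly; the example numbers and the Monte Carlo normalization are assumptions.

```python
import numpy as np

def expected_normal_range(n: int, sims: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of E[range] for a sample of n standard normal values."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((sims, n))
    return float((x.max(axis=1) - x.min(axis=1)).mean())

def sd_from_range(sample_min: float, sample_max: float, n: int) -> float:
    """Estimate sigma as the observed range divided by the expected range of n N(0,1) draws."""
    return (sample_max - sample_min) / expected_normal_range(n)

# Example: a primary study reports only n = 40, minimum 12.1, maximum 58.3
print(round(sd_from_range(12.1, 58.3, 40), 2))

# Quick check on skewed (non-normal) data, mirroring the thesis's question
rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.6, size=40)
print("true SD:", round(y.std(ddof=1), 2),
      "range-based estimate:", round(sd_from_range(y.min(), y.max(), len(y)), 2))
```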
187

ACEs and Adult Criminality in a Sample of University Students

Hall, Kelcey L., Stinson, Jill D., Levenson, J. S., Quinn, Megan A., Forgea, Victoria 04 August 2017 (has links)
No description available.
188

Microfluidic Devices with Integrated Sample Preparation for Improved Analysis of Protein Biomarkers

Nge, Pamela Nsang 06 December 2012 (has links) (PDF)
Biomarkers present a non-invasive means of detecting cancer because they can be obtained from body fluids. They can also be used for prognosis and assessing response to treatment. To limit interferences it is essential to pretreat biological samples before analysis. Sample preparation methods include extraction of analyte from an unsuitable matrix, purification, concentration or dilution, and labeling. The many advantages offered by microfluidics include portability, speed, automation, and integration. Because of the difficulties encountered in integrating this step in microfluidic devices, most sample preparation methods are carried out off-chip. In the fabrication of micro-total analysis systems it is important that all steps be integrated in a single platform. To fabricate polymeric microdevices, I prepared templates from silicon wafers by photolithography. The design on the template was transferred to a polymer piece by hot embossing, and a complete device was formed by bonding the imprinted piece with a cover plate. I prepared affinity columns in these devices and used them for protein extraction. The affinity monolith was prepared from reactive monomers to facilitate immobilization of antibodies. Extraction and concentration of biomarkers on this column showed specificity to the target molecule. This shows that biomarkers could be extracted, purified, and concentrated with the use of microfluidic affinity columns.

I prepared negatively charged ion-permeable membranes in poly(methyl methacrylate) microchips by in situ polymerization just beyond the injection intersection. Cancer marker proteins were electrophoretically concentrated at the intersection by exclusion from this membrane on the basis of both size and charge, prior to microchip capillary electrophoresis. I optimized separation conditions to achieve baseline separation of the proteins. Band broadening and peak tailing were limited by controlling the preconcentration time. Under my optimized conditions a 40-fold enrichment of bovine serum albumin was achieved with 4 min of preconcentration, while >10-fold enrichment was obtained for cancer biomarker proteins with just 1 min of preconcentration.

I have also demonstrated that the processes of sample enrichment, on-chip fluorescence labeling, and purification could be automated in a single voltage-driven platform. This required the preparation of a reversed-phase monolithic column, polymerized from butyl methacrylate monomers, in cyclic olefin copolymer microdevices. Samples enriched through solid phase extraction were labeled on the column, and much of the unreacted dye was rinsed off before elution. The retention and elution characteristics of fluorophores, amino acids, and proteins on these columns were investigated. A linear relationship between eluted peak areas and protein concentration demonstrated that this technique could be used to quantify on-chip labeled samples. This approach could also be used to simultaneously concentrate, label, and separate multiple proteins.
189

Modeling Autocorrelation and Sample Weights in Panel Data: A Monte Carlo Simulation Study

Acharya, Parul 01 January 2015 (has links)
This dissertation investigates the interactive or joint influence of autocorrelative processes (autoregressive-AR, moving average-MA, and autoregressive moving average-ARMA) and sample weights present in a longitudinal panel data set: specifically, to what extent sample estimates are influenced when autocorrelation (usually present in panel data with correlated observations and errors) and sample weights (a complex sample design feature of longitudinal data with multi-stage sampling) are both modeled, compared with modeling neither or only one of them. The study used a Monte Carlo simulation design to vary the type and magnitude of the autocorrelative process and the sample weights as factors incorporated in growth or latent curve models, and evaluated their effect on sample latent curve estimates (mean intercept, mean slope, intercept variance, slope variance, and intercept-slope correlation). Latent curve models with and without weights were specified with an autocorrelative process and then fitted to data sets having either an AR, MA, or ARMA process. The relevance and practical importance of the simulation results were ascertained by testing the joint influence of autocorrelation and weights on the Early Childhood Longitudinal Study for Kindergartens (ECLS-K) data set, a panel data set with complex sample design features. The results indicate that autocorrelative processes and weights interact with each other as sources of error to a statistically significant degree. Accounting for only the autocorrelative process without weights, or using weights while ignoring the autocorrelative process, may bias the sample estimates, particularly in large-scale datasets in which both sources of error are inherently embedded. The mean intercept and mean slope of latent curve models without weights were consistently underestimated when fitted to data sets having an AR, MA, or ARMA process. On the other hand, the intercept variance, slope variance, and intercept-slope correlation were overestimated for latent curve models with weights; these three estimates were also imprecise, as their standard errors were high. In addition, fit indices, AR and MA estimates, model parsimony, the behavior of the sample latent curve estimates, and interaction effects between autocorrelative processes and sample weights should be assessed for all candidate models before one is deemed most appropriate. If the AR estimate is high and the MA estimate is low for the LCAR model relative to the other models fitted to a weighted data set, and the fit indices fall within the acceptable cut-off range, the data set more likely contains an AR process between observations. If the MA estimate is high and the AR estimate is low for the LCMA model relative to the other models, and the fit indices are acceptable, the data set more likely contains an MA process. If both the AR and MA estimates are high for the LCARMA model relative to the other models, and the fit indices are acceptable, the data set more likely contains an ARMA process.
The results of the current study recommend that biases from both autocorrelation and sample weights be modeled simultaneously to obtain accurate estimates. The type of autocorrelation (AR, MA, or ARMA), the magnitude of autocorrelation, and the sample weights all influence the behavior of the estimates, and all three facets should be carefully considered to interpret the estimates correctly, especially when measuring growth or change in the variable(s) of interest over time in large-scale longitudinal panel data sets.
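To make the simulation design concrete, a minimal sketch of generating panel growth data with AR(1), MA(1), or ARMA(1,1) residuals is shown below. The linear growth model, five waves, and all parameter values are illustrative assumptions, not the dissertation's actual simulation conditions; the naive per-subject fit at the end only shows where ignoring the error structure would enter.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

def simulate_growth_panel(n_subjects=1000, n_waves=5, ar=0.5, ma=0.0, seed=0):
    """Simulate linear growth trajectories whose residuals follow an ARMA process.

    ar / ma are first-order coefficients; set ma=0 for AR(1), ar=0 for MA(1).
    """
    rng = np.random.default_rng(seed)
    time = np.arange(n_waves)

    # Random intercepts and slopes (illustrative population values)
    intercepts = rng.normal(10.0, 2.0, n_subjects)
    slopes = rng.normal(1.5, 0.5, n_subjects)

    # ARMA residuals for each subject across waves
    process = ArmaProcess(ar=np.r_[1, -ar], ma=np.r_[1, ma])
    errors = np.column_stack([
        process.generate_sample(nsample=n_waves, scale=1.0,
                                distrvs=rng.standard_normal)
        for _ in range(n_subjects)
    ]).T

    return intercepts[:, None] + slopes[:, None] * time[None, :] + errors

# Naive per-subject OLS slopes, ignoring the autocorrelated errors and any weights
y = simulate_growth_panel(ar=0.5, ma=0.3)
time = np.arange(y.shape[1])
naive_slopes = np.polyfit(time, y.T, 1)[0]
print("mean estimated slope:", round(naive_slopes.mean(), 3))
```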
190

Application of GC×GC-MS in VOC analysis of fermented beverages

Zhang, Penghan 20 December 2021 (has links)
GC×GC is an efficient tool for the analysis of volatile compounds. However, improvements are still required in VOC extraction, GC×GC setup, and data processing. Different sample preparation techniques and GC×GC setups were compared based on a literature study and experimental results. Each VOC extraction technology has its own drawbacks and needs further development; no single sample preparation technique was ideal for recovering all the VOCs from a beverage sample, and the VOCs recovered by different techniques differed considerably. The discussion of the pros and cons of the different techniques in our study can serve as a guide for their further development and improvement. Combining the results from different sample preparation techniques is necessary to achieve higher coverage in global VOC profiling. For the known fermentative aromatic compounds, the best coverage was reached by using SPME together with SPE for beer, and VALLME for wine and cider. Developing a good GC×GC method involves modulator selection, column combination, and parameter optimization. A thermal modulator provides high detection sensitivity and allows exceptional trace analysis; since analyte coverage is the most important factor in beverage VOC profiling, thermal modulation is the better choice. In fermented beverages there are more polar compounds than non-polar compounds, so the most suitable column combination is polar-semipolar, and the same column diameters should be used to minimize column overloading. GC×GC parameters must also be optimized; because these parameters interact with each other, a statistical prediction model is required. A response surface model can do this with a small number of experimental tests, and the nearest-neighbor distance proved a suitable measure of peak dispersion. Column and detector saturation are unavoidable if a metabolic sample is measured at a single dilution level, and incorrect peak deconvolution and mass spectrum construction may result. Data processing results can be improved by a two-stage strategy that incorporates targeted data processing and cleaning upstream of the "standard" untargeted analysis. Our experiments show a significant improvement in annotation and quantification for targeted compounds causing instrumental saturation. After subtracting the saturated signal of targeted compounds, the mass spectrum construction was improved for co-eluted compounds. Incomplete signal subtraction may still occur, leading to the detection of false-positive peaks or interfering with the construction of mass spectra of co-eluted peaks. High-resolution MS libraries and more accurate peak area detection methods should be tested for further improvement.
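The nearest-neighbor distance criterion mentioned above can be computed from peaks' first- and second-dimension retention times. The sketch below is one plausible way to score a GC×GC parameter set by peak dispersion; the peak coordinates and the min-max scaling are assumptions for illustration, not the thesis's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nearest_neighbor_distance(rt1, rt2):
    """Mean distance from each peak to its nearest neighbor in the 2D retention plane.

    rt1, rt2: first- and second-dimension retention times of detected peaks.
    Larger values indicate better peak dispersion across the separation space.
    Coordinates are scaled to [0, 1] so the two dimensions weigh equally.
    """
    pts = np.column_stack([rt1, rt2]).astype(float)
    pts = (pts - pts.min(axis=0)) / (pts.max(axis=0) - pts.min(axis=0))
    tree = cKDTree(pts)
    # k=2 because the closest point to each peak is the peak itself (distance 0)
    dist, _ = tree.query(pts, k=2)
    return dist[:, 1].mean()

# Example: compare two hypothetical parameter sets by their peak dispersion
rng = np.random.default_rng(0)
clustered = rng.normal([10, 2], [1.0, 0.2], size=(200, 2))
spread = rng.uniform([5, 1], [25, 5], size=(200, 2))
print("clustered:", round(mean_nearest_neighbor_distance(*clustered.T), 4))
print("spread:   ", round(mean_nearest_neighbor_distance(*spread.T), 4))
```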
