11

The utilisation of quantitative information in groups' capital investment decisions

Ang, Nicole Pamela, Accounting, Australian School of Business, UNSW January 2009 (has links)
One explanation for the increased use of interactive groups in organisations is that benefits are obtained from pooling individuals' differing knowledge and abilities. However, prior experimental research has established that groups often do not discuss and use information effectively, exhibiting a bias toward information that is commonly known by all group members, rather than information that is unique to individual group members (common information bias). This dissertation investigated whether the provision of quantitative information resulted in improved group performance in two respects. First, it investigated whether quantitative information was discussed and used more than qualitative information. Second, it examined whether the quantification of information reduced the common information bias. This is important because a basic purpose of managerial accounting is to provide information that improves employees' abilities to make optimal decisions. This dissertation utilised an experimental task known as a 'hidden profile' to achieve the research objectives. In a hidden profile experiment, each group member receives some information that is common to everyone in the group, and some information that is unique to them. The group must discuss and use members' unique information in order to uncover the optimal task solution. This dissertation examined the effect of information availability (common or unique) and information type (quantitative or qualitative) on information discussion and use. There were two stages to the experiment. First, individual group members had to make a capital investment decision, and write down their reasons for that decision. Second, groups had to discuss the information, come to a group decision, and write down their reasons for that decision. The results confirmed a common information bias at the group decision level, with groups significantly favouring common information over unique information, for all measures of discussion and information use. In contrast, while a preference for quantitative information was found at the individual decision level, at a group decision level there were no significant differences in the discussion or use of quantitative and qualitative information, with only one exception: significantly more statements were made about quantitative information.
12

Uncertainty Quantification of a Large 1-D Dynamic Aircraft System Simulation Model

Karlén, Johan January 2015 (has links)
A 1-D dynamic simulation model of a new cooling system for the upcoming Gripen E aircraft has been developed in the Modelica-based tool Dymola in order to examine the cooling performance. These types of low-dimensioned simulation models, which generally are described by ordinary differential equations or differential-algebraic equations, are often used to describe entire fluid systems. These equations are easier to solve than the partial differential equations used in 2-D and 3-D simulation models. Some approximations and assumptions about the physical system have to be made when developing this type of 1-D dynamic simulation model. The impact of these approximations and assumptions can be examined with an uncertainty analysis in order to increase the understanding of the simulation results. Most uncertainty analysis methods are not practically feasible when analyzing large 1-D dynamic simulation models with many uncertainties, so such methods must be simplified to become practically feasible. This study was aimed at finding a method that is easy to realize with low computational expense and engineering workload. The evaluated simulation model consists of several sub-models that are linked together. These sub-models run much faster when simulated as standalone models, compared to running the total simulation model as a whole. It has been found that this feature of the sub-models can be utilized in an interval-based uncertainty analysis in which the uncertainty parameter settings that give the minimum and maximum simulation model response can be derived. The number of simulations of the total simulation model needed to perform an uncertainty analysis is thereby significantly reduced. The interval-based method has been found to be sufficient for most simulations, since the control software in the simulation model controls the liquid cooling temperature to a specific reference value. The control system might be able to keep this reference value even for the worst-case uncertainty combinations, implying no need to analyze these simulations further with a more refined uncertainty propagation, such as a probabilistic approach in which different uncertainty combinations are examined. While the interval-based uncertainty analysis method lacks probability information, it can still increase the understanding of the simulation results. It is also computationally inexpensive and does not rely on an accurate and time-consuming characterization of the probability distributions of the uncertainties. Uncertainties from all sub-models in the evaluated simulation model have not been included in the uncertainty analysis made in this thesis. These neglected sub-model uncertainties can be included using the interval-based method as future work. Also, a method for combining the interval-based method with aleatory uncertainties is proposed at the end of this thesis and can be examined further.
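As a rough, hypothetical illustration of this interval-based idea (not the thesis's actual implementation), the sketch below runs a stand-alone sub-model at every combination of uncertainty-interval endpoints and records the parameter settings that give the minimum and maximum response; the sub-model equation, parameter names, and intervals are all invented for the example, and the endpoint-only search implicitly assumes a monotonic response.

```python
from itertools import product

def submodel_response(p):
    """Stand-in for a stand-alone sub-model run (hypothetical toy equation);
    in practice this would be a call to the exported sub-model simulation."""
    return 20.0 + 5.0 * p["heat_load"] / p["coolant_flow"] - 0.1 * p["hx_efficiency"]

# Hypothetical uncertainty intervals (lower, upper) for three sub-model parameters.
intervals = {
    "heat_load":     (9.0, 11.0),   # kW
    "coolant_flow":  (0.8, 1.2),    # kg/s
    "hx_efficiency": (70.0, 90.0),  # %
}

# Evaluate the sub-model at every combination of interval endpoints (2**n runs),
# which is cheap because the sub-model is simulated stand-alone.
names = list(intervals)
lo = (None, float("inf"))
hi = (None, float("-inf"))
for corner in product(*(intervals[n] for n in names)):
    params = dict(zip(names, corner))
    y = submodel_response(params)
    if y < lo[1]:
        lo = (params, y)
    if y > hi[1]:
        hi = (params, y)

print("Settings giving the minimum response:", lo)
print("Settings giving the maximum response:", hi)
```

The two extreme parameter settings found this way can then be fed into the full simulation model, so only a handful of runs of the expensive total model are needed.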
14

Development of proteomics strategies for the discovery of novel proteins encoded within non-canonical open reading frames in eukaryotic species

Delcourt, Vivian 14 December 2017 (has links)
The traditional view of protein synthesis in eukaryotic species involves one messenger RNA (mRNA) bearing a single open reading frame (ORF). Thus, each eukaryotic coding gene may produce one canonical protein and possibly one or more of its isoforms. However, numerous recent experimental findings indicate that eukaryotic proteomes may have been under-estimated and that cells are capable of synthesizing proteins that had not been predicted thus far. These novel proteins, termed "alternative proteins" (altProts), may be translated from non-canonical ORFs located in mRNAs or in RNAs annotated as non-coding. These discoveries were made possible by technical progress in analytical chemistry and mass spectrometry-based proteomics. Such analyses rely on two main strategies: the "bottom-up" approach is based on the peptide products of an enzymatic digestion of the proteins, whereas the second, more recent approach, termed "top-down", is based on the analysis of intact proteins by mass spectrometry. The work described in this thesis focuses on the development of experimental strategies that support the discovery and characterization of altProts using bottom-up and top-down proteomic approaches. The findings are described in scientific publications included in this manuscript: a bibliographic review, two publications on the application of the top-down approach to micro-extractions from rat brain tissue and from an ovarian tumor biopsy, and one publication on determining the stoichiometry of two proteins, one alternative and one canonical, both encoded by the same gene.
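As a toy illustration of the bottom-up/top-down distinction described above (not taken from the thesis), the sketch below performs a naive in silico tryptic digestion: bottom-up proteomics identifies proteins from peptide products like these, whereas top-down measures the intact protein. The sequence and the simplified cleavage rule (after K or R, not before P) are assumptions made only for this example.

```python
def tryptic_digest(sequence: str) -> list[str]:
    """Naive in silico tryptic digestion: cleave after K or R, but not before P."""
    peptides, start = [], 0
    for i, residue in enumerate(sequence):
        next_res = sequence[i + 1] if i + 1 < len(sequence) else ""
        if residue in "KR" and next_res != "P":
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

# Hypothetical short protein sequence (one-letter amino-acid codes).
protein = "MKWVTFISLLFLFSSAYSRGVFRRDAHK"
print("Intact protein (top-down view):   ", protein)
print("Tryptic peptides (bottom-up view):", tryptic_digest(protein))
```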
15

Detection, quantification and genetic characterization of six major non-O157 Shiga toxin-producing Escherichia coli serogroups and E. coli O104 in feedlot cattle feces

Belagola Shridhar, Pragathi January 1900 (has links)
Doctor of Philosophy / Department of Diagnostic Medicine/Pathobiology / Jianfa Bai / Cattle feces are a major source of six Shiga toxin-producing E. coli (STEC) serogroups, O26, O45, O103, O111, O121, and O145, called non-O157 STEC, which are responsible for >70% of non-O157 STEC-associated human illnesses. Another E. coli serotype, O104:H4, a hybrid pathotype of enteroaggregative E. coli and STEC, was responsible for a large outbreak of foodborne illness in Germany. Studies were conducted to develop and validate culture- and PCR-based methods to detect and/or quantify six non-O157 E. coli serogroups and E. coli O104 in cattle feces, and to genetically assess their virulence potential based on DNA microarray and whole genome sequencing (WGS). Two multiplex quantitative PCR (mqPCR) assays (assay 1: O26, O103 and O111; assay 2: O45, O121 and O145), targeting serogroup-specific genes, were developed and validated for the detection and quantification of the six non-O157 E. coli serogroups in cattle feces and compared to culture-based and end-point PCR methods. The mqPCR assays detected a higher proportion of fecal samples as positive for one or more non-O157 E. coli serogroups than the culture-based and end-point PCR methods. A spiral plating method was validated to quantify the six non-O157 E. coli serogroups in cattle feces and compared to the mqPCR assays. The mqPCR assays quantified a higher proportion of fecal samples as positive for one or more non-O157 E. coli serogroups than the spiral plating method; however, unlike mqPCR, the spiral plating method quantifies serogroups that are positive for virulence genes. Quantification by either mqPCR or spiral plating identified a subset of cattle that was shedding non-O157 E. coli at high concentrations (≥ 4 log CFU/g of feces), similar to E. coli O157. Identification of Shiga toxin subtypes associated with non-O157 E. coli serogroups isolated from cattle feces revealed a variety of subtypes, with stx1a and stx2a being the most predominant. Microarray-based analysis of six non-O157 E. coli serogroups isolated from cattle feces revealed the presence of stx, LEE-encoded, and other virulence genes associated with human illnesses. Analysis of WGS of STEC O145 strains isolated from cattle feces, hides, and human clinical cases revealed similar virulence gene profiles, suggesting the potential of cattle E. coli O145 strains to cause human illnesses. Shiga toxin 1a was the most common stx subtype, followed by stx2a and stx2c. The strains also carried LEE-encoded and plasmid-encoded virulence genes. Model-adjusted prevalence estimates of E. coli O104 in cattle fecal samples collected from feedlots (n=29) were 0.5% and 25.9% by culture and PCR methods, respectively. Cattle harbor O104 serotypes other than O104:H4, with O104:H7 being the predominant serotype, and only a small proportion of the isolates carried stx. DNA microarray and WGS analysis revealed the absence of LEE-encoded virulence genes in bovine and human O104 strains. Escherichia coli O104:H7 has the potential to be a diarrheagenic foodborne pathogen in humans, since it possesses stx1c and genes that code for enterohemolysin and a variety of adhesins. Data on the prevalence, concentration, and virulence potential of non-O157 E. coli serogroups, including O104, isolated from cattle feces are essential to design effective intervention strategies that reduce their potential to cause human foodborne illness outbreaks.
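For readers unfamiliar with qPCR-based quantification, the sketch below shows the generic standard-curve conversion from a cycle-threshold (Ct) value to log CFU/g that underlies this kind of assay; the slope, intercept, and Ct value are illustrative assumptions, not the calibrated parameters of the mqPCR assays described here.

```python
def log_cfu_per_gram(ct, slope=-3.3, intercept=38.0):
    """Convert a qPCR cycle-threshold (Ct) value to log10 CFU/g using a linear
    standard curve, Ct = slope * log10(CFU/g) + intercept. The slope and
    intercept are generic illustrative values (a slope near -3.3 corresponds
    to roughly 100% amplification efficiency), not the assays' calibrations."""
    return (ct - intercept) / slope

# Hypothetical sample: Ct of 24.8 for a serogroup-specific target.
log_conc = log_cfu_per_gram(24.8)
flag = "high shedder (>= 4 log CFU/g)" if log_conc >= 4 else "below high-shedder level"
print(f"{log_conc:.1f} log10 CFU/g -> {flag}")
```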
16

Kinetic spectroscopic quantification of biomarkers in practical samples

Peng, Weiyu 06 August 2021 (has links)
Kinetic spectroscopic quantification refers to a subset of chromogenic (CG) and fluorogenic (FG) assays that deduce analyte concentration from the UV-vis or fluorescence signal obtained during the CG/FG reaction process. Existing kinetic spectroscopic quantification methods are based predominantly on reactions that can be approximated as a first-order process. Presented in this thesis is a kinetic spectroscopic quantification method that uses higher-order CG/FG reactions in which the overall reaction can be approximated as a combination of two sequential first-order processes. Chapter one presents the theoretical model and several proof-of-concept applications. The model analyte is malondialdehyde (MDA), a lipid peroxidation biomarker of broad interest. Chapter two describes the study of the effects of the reaction solvent, temperature, acid catalyst, and calibration method on the assay performance. The most rapid MDA assay achieved so far takes 3 minutes, 30 times faster than the current equilibrium spectroscopic quantification.
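The "combination of two sequential first-order processes" mentioned above corresponds to the textbook A → B → C kinetic scheme; the equations below are the standard solution of that scheme (assuming the measured signal tracks the final product C and that B and C start at zero) and are shown only to make the model concrete. The thesis's exact rate law, fitting procedure, and notation may differ.

```latex
% Sequential first-order reactions A ->(k_1) B ->(k_2) C, with [B]_0 = [C]_0 = 0.
\begin{align}
[\mathrm{A}](t) &= [\mathrm{A}]_0 \, e^{-k_1 t},\\
[\mathrm{B}](t) &= [\mathrm{A}]_0 \, \frac{k_1}{k_2 - k_1}\left(e^{-k_1 t} - e^{-k_2 t}\right),\\
[\mathrm{C}](t) &= [\mathrm{A}]_0\left(1 + \frac{k_1 e^{-k_2 t} - k_2 e^{-k_1 t}}{k_2 - k_1}\right), \qquad k_1 \neq k_2 .
\end{align}
```

Fitting the early-time signal to the last expression yields [A]_0, the analyte concentration, long before the reaction reaches equilibrium, which is why a kinetic read-out can be much faster than an equilibrium one.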
17

Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty

Whiting, Nolan Wagner 19 July 2019 (has links)
Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the model form uncertainty or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties due to randomness or epistemic uncertainties due to lack of knowledge. Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes. To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory. This simplified model was then assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches was assessed for its ability to tightly encapsulate the true value in nature both at locations where experimental results are provided and at prediction locations where no experimental data are available. Generally, the MAVM performed best in cases with sparse data and/or large extrapolations, while Bayesian calibration outperformed the others where an extensive amount of experimental data covers the application domain. / Master of Science / Uncertainties often exist when conducting physical experiments, and whether this uncertainty exists due to input uncertainty, uncertainty in the environmental conditions in which the experiment takes place, or numerical uncertainty in the model, it can be difficult to validate and compare the results of a model with those of an experiment. Model validation is the process of determining the degree to which a model is an accurate representation of the true value in the real world. The results of a model validation study can be used either to quantify the uncertainty that exists within the model or to improve/calibrate the model. However, the model validation process can become complicated if there is uncertainty in the simulation (model) and/or experimental outcomes. These uncertainties can be in the form of aleatory uncertainties (randomness, for which a probability distribution describes the likelihood of drawing values) or epistemic uncertainties (lack of knowledge, with inputs known only to lie within an interval). Four different approaches are used for addressing model validation and calibration: 1) the area validation metric (AVM), 2) a modified area validation metric (MAVM) with confidence intervals, 3) the standard validation uncertainty from ASME V&V 20, and 4) Bayesian updating of a model discrepancy term. Details are given for the application of the MAVM for accounting for small experimental sample sizes.
To provide an unambiguous assessment of these different approaches, synthetic experimental values were generated from computational fluid dynamics (CFD) simulations of a multi-element airfoil. A simplified model was then developed using thin airfoil theory. This simplified model was then assessed using the synthetic experimental data. The quantities examined include the two-dimensional lift and moment coefficients for the airfoil with varying angles of attack and flap deflection angles. Each of these validation/calibration approaches was assessed for its ability to tightly encapsulate the true value in nature both at locations where experimental results are provided and at prediction locations where no experimental data are available. Also of interest was how well each method could predict the uncertainties in the simulation outside the region in which experimental observations were made, where model form uncertainties could be observed.
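As a hedged illustration of the first of these approaches, the sketch below computes one common formulation of the area validation metric, namely the area between the empirical CDFs of the simulation outcomes and of the experimental observations; the sample values are invented, and the thesis's MAVM confidence-interval machinery and the ASME V&V 20 / Bayesian approaches are not reproduced here.

```python
import numpy as np

def area_validation_metric(model_samples, exp_samples):
    """Area between the empirical CDFs of model and experimental outcomes
    (one common formulation of the AVM)."""
    model = np.sort(np.asarray(model_samples, dtype=float))
    exp = np.sort(np.asarray(exp_samples, dtype=float))
    # Evaluate both empirical (step) CDFs on the pooled set of sample points.
    grid = np.sort(np.concatenate([model, exp]))
    cdf_model = np.searchsorted(model, grid, side="right") / model.size
    cdf_exp = np.searchsorted(exp, grid, side="right") / exp.size
    # Integrate |F_model - F_exp| with the rectangle rule between grid points.
    return float(np.sum(np.abs(cdf_model - cdf_exp)[:-1] * np.diff(grid)))

# Hypothetical lift-coefficient samples: model predictions vs. sparse experiments.
model_cl = np.random.default_rng(0).normal(0.85, 0.02, size=1000)
exp_cl = [0.88, 0.90, 0.87, 0.91]
print("Area validation metric d:", area_validation_metric(model_cl, exp_cl))
```

The resulting area d carries the units of the compared quantity and is often used to inflate the model prediction interval as an estimate of model form uncertainty.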
18

Automatic surface defect quantification in 3D

Tailor, Mitul January 2013 (has links)
Three-dimensional (3D) non-contact optical methods for surface inspection are of significant interest to many industrial sectors. Many aspects of manufacturing processes have become fully automated, resulting in high production volumes. However, this is not necessarily the case for surface defect inspection. Existing human visual analysis of surface defects is qualitative and subject to varying interpretation. Automated 3D non-contact analysis should provide a robust and systematic quantitative approach. However, different 3D optical measurement technologies use different physical principles and interact with surfaces and defects in diverse ways, leading to variation in measurement data. Instruments' native software processing of the data may be non-traceable in nature, leading to significant uncertainty about data quantisation. Sub-millimetric surface defect artefacts have been created using Rockwell and Vickers hardness testing equipment on various substrates. Four different non-contact surface measurement instruments (Alicona InfiniteFocus G4, Zygo NewView 5000, GFM MikroCAD Lite and Heliotis H3) have been utilized to measure the different defect artefacts. The four 3D optical instruments are evaluated using a calibrated step-height created with slip gauges and the reference defect artefacts. The experimental results are compared to select the most suitable instrument capable of measuring surface defects in a robust manner. This research has identified the need for an automatic tool to quantify surface defects, and thus a mathematical solution has been implemented for automatic defect detection and quantification (depth, area and volume) in 3D. A simulated defect softgauge with a known geometry has been developed in order to verify the implemented algorithm and provide mathematical traceability. The implemented algorithm has been identified as a traceable, highly repeatable, and high-speed solution for quantifying surface defects in 3D. Various industrial components with suspicious features and solder joints on PCBs are measured and quantified in order to demonstrate applicability.
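As a simplified, hypothetical sketch of what "automatic defect detection and quantification (depth, area and volume) in 3D" can look like (not the thesis's algorithm), the code below thresholds a height map against its median reference level and reports maximum depth, projected area, and volume for an indentation-type defect; the synthetic Gaussian indent plays the role of a simulated softgauge with known geometry.

```python
import numpy as np

def quantify_defect(height_map, pixel_size, threshold=-0.005):
    """Quantify an indentation-type defect in a height map (units: mm).
    The reference surface is taken as the median height; anything deeper than
    `threshold` below it is treated as defect. A simplified stand-in only."""
    deviation = height_map - np.median(height_map)      # remove nominal surface level
    defect_mask = deviation < threshold                  # pixels belonging to the defect
    depth = float(-deviation[defect_mask].min()) if defect_mask.any() else 0.0
    area = float(defect_mask.sum() * pixel_size**2)               # mm^2
    volume = float(-deviation[defect_mask].sum() * pixel_size**2)  # mm^3
    return {"max_depth_mm": depth, "area_mm2": area, "volume_mm3": volume}

# Synthetic softgauge-style example: a flat 2 mm x 2 mm patch (10 um pixels)
# containing a Gaussian-shaped indent about 40 um deep.
px = 0.01                                    # pixel size in mm
y, x = np.mgrid[-1:1:200j, -1:1:200j]
surface = -0.04 * np.exp(-(x**2 + y**2) / 0.02)
print(quantify_defect(surface, px))
```

Because the softgauge geometry is known analytically, the reported depth, area, and volume can be checked against their exact values, which is the point of using such an artefact for traceability.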
19

Investigation of Improved Quantification Techniques in Dedicated Breast SPECT-CT

Mann, Steve Dean January 2015 (has links)
The work presented in this dissertation focuses on the evaluation of absolute quantification accuracy in dedicated breast SPECT-CT. The overall goal was to investigate, through simulations and measurements, the impact and utilization of various correction methods for scattered and attenuated photons, the characterization of incomplete charge collection in Cadmium Zinc Telluride (CZT) detectors as a surrogate means of improving scatter correction, and resolution recovery methods for modeling collimator blur during image reconstruction. The quantification accuracy of attenuation coefficients in CT reconstructions was evaluated in geometric phantoms, and a slice-by-slice breast segmentation algorithm was developed to separate adipose and glandular tissue. All correction and segmentation methods were then applied to a pilot study imaging parathyroid patients to determine the average uptake of Tc-99m Sestamibi in healthy breast tissue, including tissue-specific uptake in adipose and glandular tissue.
Monte Carlo methods were utilized to examine the changes in the incident scatter energy distribution on the SPECT detector as a function of 3D detector position about a pendant breast geometry. A simulated prone breast geometry with torso, heart, and liver was designed. An ideal detector was positioned at various azimuthal and tilted positions to mimic the capabilities of the breast SPECT subsystem. The limited near-photopeak scatter energy range in simulated spectra was linearly fit, and the slope was used to characterize changes in the scatter distribution as a function of detector position. Results show that the detected scatter distribution changes with detector tilt, with increasing incidence of high-energy scattered photons at larger detector tilts. However, reconstructions of various simulated trajectories show minimal impact on quantification (<5%) compared to a primary-only reconstruction.
Two scatter compensation methods were investigated and compared to a narrow photopeak-only windowing for quantification accuracy in large uniform regions and in small regional uptake areas: 1) a narrow ±4% photopeak energy window to minimize scatter in the photopeak window, 2) the previously calibrated dual-energy window (DEW) scatter correction method, and 3) a modified dual-energy window (mDEW) correction method that attempts to account for the effects of incomplete charge collection in Cadmium Zinc Telluride detectors. Various cylindrical phantoms, including those with embedded hot and cold regions, were evaluated. Results show that the photopeak-only and DEW methods yield reasonable quantification accuracy (within 10%) for a wide range of activity concentrations and phantom configurations. The mDEW demonstrated highly accurate quantification measurements in large, uniform regions with improved uniformity compared to the DEW method. However, the mDEW method is susceptible to the calibration parameters and the activity concentration of the scanned phantom. The sensitivity of the mDEW to these factors makes it a poor choice for robust quantification applications. Thus, the DEW method using a high-performance CZT gamma camera is still a better choice for quantification purposes.
Phantom studies were performed to investigate the application of SPECT- versus CT-based attenuation correction. Minor differences were observed between SPECT and CT attenuation maps when assuming a uniformly filled phantom with the attenuation coefficient of water, except when the SPECT attenuation map volume was significantly larger than the CT volume. Material-specific attenuation coefficients reduce the corresponding measured activity concentrations compared to a water-only correction, but the results do not appear more accurate than with a water-only attenuation map. Investigations of the impact of image registration show that accurate registration is necessary for absolute quantification, with errors up to 14% observed for 1.5 cm shifts.
A method of modeling collimator resolution within the SPECT reconstruction algorithm was investigated for its impact on contrast and quantification accuracy. Three levels of resolution modeling, each with increasing ray-sampling, were investigated. The resolution model was applied to both cylindrical and anthropomorphic breast phantoms with hot and cold regions. Large-volume quantification results (background measurements) are unaffected by the application of resolution modeling. For smaller chambers and simulated lesions, contrast generally increases with resolution modeling. Edges of lesions also appear sharper with resolution modeling. No significant differences were seen between the various levels of resolution modeling. However, Gibbs artifacts are amplified at the boundaries of high-contrast regions, which can significantly affect absolute quantification measurements. Convergence with resolution modeling is also notably slower, requiring more iterations with OSEM to reach a stable mean activity concentration. Additionally, reconstructions require far more computing time with resolution modeling due to the increased number of sampling rays. Thus, while the edge enhancement and contrast improvements may benefit lesion detection, the artifacts, slower convergence, and increased reconstruction time limit the utility of resolution modeling for both absolute quantification and clinical imaging studies.
Finally, a clinical pilot study was initiated to measure the average uptake of Tc-99m Sestamibi in healthy breast tissue. Subjects were consented from those undergoing diagnostic parathyroid studies at Duke. Each subject was injected with 25 mCi of Sestamibi as part of their pre-surgical parathyroid SPECT imaging studies and scanned with the dedicated breast SPECT-CT system before their diagnostic parathyroid SPECT scan. Based on phantom studies of CT-reconstructed attenuation coefficient accuracy, a slice-by-slice segmentation algorithm was developed to separate breast CT data into adipose and glandular tissue. SPECT data were scatter, attenuation, and decay corrected to the time of injection. Segmented CT images were used to measure the average radiotracer concentration in the whole breast, as well as in adipose and glandular tissue. With 8 subjects scanned, the average measured whole-breast activity concentration was found to be 0.10 µCi/mL. No significant differences were seen between adipose and glandular tissue uptake.
In conclusion, the application of various characterization and correction methods for quantitative SPECT imaging was investigated. Changes in the detected scatter distribution appear to have minimal impact on quantification, and characterization of low-energy tailing for a modified scatter subtraction method yields inferior overall quantification results. Comparable quantification accuracy is seen with SPECT- and CT-based attenuation maps, assuming the SPECT-based volume is fairly accurate. In general, resolution recovery within OSEM yields higher contrast, but quantification accuracy appears more susceptible to measurement location. Finally, scatter, attenuation, and resolution recovery methods, along with a breast segmentation algorithm, were implemented in a clinical imaging study for quantifying Tc-99m Sestamibi uptake. While the average whole-breast uptake was measured to be 0.10 µCi/mL, no significant differences were seen between adipose and glandular tissue or when implementing resolution recovery. Thus, for future clinical imaging, it is recommended that the application of the investigated correction methods be limited to the traditional DEW method and CT-based attenuation maps for quantification studies. / Dissertation
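For readers unfamiliar with the dual-energy window idea referred to above, the sketch below shows a generic DEW-style scatter subtraction on projection data plus a decay correction back to injection time; the scatter fraction k, the count values, and the activity numbers are illustrative assumptions, not the dissertation's calibrated parameters.

```python
import numpy as np

def dew_scatter_correct(photopeak_proj, scatter_proj, k=0.5):
    """Dual-energy-window (DEW) scatter correction on SPECT projections: the
    scatter contribution inside the photopeak window is estimated as a
    calibrated fraction k of the counts in an adjacent lower-energy window.
    k = 0.5 is a classic default; a real system uses its own calibrated value."""
    primary_est = photopeak_proj - k * scatter_proj
    return np.clip(primary_est, 0.0, None)   # negative counts are not physical

def decay_correct(activity, minutes_since_injection, half_life_min=360.6):
    """Decay-correct a measured Tc-99m activity concentration back to the
    time of injection (Tc-99m half-life is roughly 6.01 h, i.e. ~360.6 min)."""
    return activity * 2.0 ** (minutes_since_injection / half_life_min)

# Hypothetical example: tiny projection arrays and a scan 90 min post-injection.
pp = np.array([[120.0, 95.0], [80.0, 60.0]])   # photopeak-window counts
sc = np.array([[40.0, 30.0], [28.0, 22.0]])    # lower scatter-window counts
print(dew_scatter_correct(pp, sc))
print(decay_correct(0.085, 90))                 # activity back-corrected to injection
```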
20

Development of methods for the analysis of pharmaceutical standards by capillary electrophoresis

Léonard Charette, Marie-Ève January 2006 (has links)
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
