1

Comparison of the Swedish Reference Group of Antibiotics and Clinical and Laboratory Standards Institute sensitivity testing methods

Neyvaldt, Julianna January 2006 (has links)
The aim of this project was to compare the Swedish Reference Group of Antibiotics (SRGA) and the Clinical and Laboratory Standards Institute (CLSI) disc diffusion methods, using a variety of antibiotics on Klebsiella species and Pseudomonas aeruginosa. After the disc diffusion tests, a Vitek system and Etest were performed on those isolates with differing or abnormal results. Klebsiella spp. and P. aeruginosa were collected at St James's Hospital over a period of four months. These two organisms were chosen because of their resistance to many commonly used antibiotics and the threat of emerging resistance to others. The SRGA method is known to use lower breakpoints in order to catch bacteria with low-level resistance, while the CLSI method is often criticized for lacking species-specific MIC breakpoints. The results of this study showed that the SRGA method often caught more resistant and intermediate isolates than the CLSI method.
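The core of disc-diffusion interpretation — comparing an isolate's inhibition-zone diameter against method-specific breakpoints — can be sketched as follows. This is a minimal illustration only: the breakpoint values are invented, not taken from the actual SRGA or CLSI tables.

```python
# Sketch of disc-diffusion interpretation: an isolate's inhibition-zone
# diameter (mm) is compared against method-specific breakpoints to give a
# susceptible/intermediate/resistant (S/I/R) call. Breakpoint values below
# are invented, not the real SRGA or CLSI tables.

def categorize(zone_mm, s_breakpoint, r_breakpoint):
    """Return "S", "I", or "R" for a zone diameter in mm."""
    if zone_mm >= s_breakpoint:
        return "S"  # susceptible
    if zone_mm <= r_breakpoint:
        return "R"  # resistant
    return "I"      # intermediate

# Hypothetical (S >=, R <=) zone breakpoints for one antibiotic. A method
# with stricter breakpoints flags low-level resistance the other misses.
SRGA_LIKE = (25, 21)
CLSI_LIKE = (21, 17)

zone = 22
print(categorize(zone, *SRGA_LIKE), categorize(zone, *CLSI_LIKE))  # → I S
```

The same isolate is called intermediate under the stricter breakpoint set but susceptible under the other, mirroring the abstract's finding that the SRGA method caught more resistant and intermediate isolates.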
3

Comparison of Antibiotic Sensitivity Profiles, Molecular Typing Patterns, and Attribution of Salmonella Enterica Serotype Newport in the U.S., 2003-2006

Patel, Nehal Jitendralal 26 July 2007 (has links)
Salmonella causes gastrointestinal illness in humans. The purpose of the study was to determine the relative contribution of different food commodities to sporadic cases of salmonellosis (attribution analysis) caused by Salmonella Newport (SN), using Pulsed-Field Gel Electrophoresis (PFGE) patterns and antimicrobial susceptibility testing (AST) data submitted by public health laboratories and regulatory agencies from 2003 to 2006. The genetic relationship between isolates from non-human (348) and human (10,848) sources was studied using two clustering methods: UPGMA and Ward. Results show poultry was the largest contributor to human SN infections, followed by tomatoes and beef. Beef was the largest contributing food commodity for multi-drug resistant (MDR)-AmpC infection patterns. Results from this pilot study show that PFGE and AST can be useful tools for performing attribution analysis at the national level, and that SN MDR-AmpC patterns are decreasing and seem to be restricted to isolates from animal sources.
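The attribution step — matching human isolates to the food source with the most similar PFGE pattern — can be sketched as below. All band patterns are made up, and this simple nearest-pattern rule is a stand-in for the UPGMA/Ward cluster analysis the thesis actually used.

```python
# Illustrative attribution: assign a human isolate to the food source whose
# PFGE band pattern is most similar, using the Dice coefficient on sets of
# band positions. Patterns and sources below are invented for the sketch.

def dice(a, b):
    """Dice similarity between two sets of PFGE band positions (0..1)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

source_patterns = {
    "poultry":  {1, 2, 4, 7, 9},
    "tomatoes": {1, 3, 5, 8},
    "beef":     {2, 4, 6, 7, 10},
}

def attribute(human_pattern):
    """Return the source whose pattern is closest to the human isolate's."""
    return max(source_patterns,
               key=lambda s: dice(human_pattern, source_patterns[s]))

print(attribute({1, 2, 4, 7}))  # → poultry
```

In practice the thesis clustered thousands of patterns jointly rather than assigning each isolate to a single nearest source, but the similarity computation is the same building block.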
4

Highway case study investigation and sensitivity testing using the Project Evaluation Toolkit

Fagnant, Daniel James 29 September 2011 (has links)
As transportation funding becomes increasingly constrained, it is imperative that decision makers invest precious resources wisely and effectively. Transportation planners need effective tools for anticipating outcomes (or ranges of outcomes) in order to select preferred project alternatives and evaluate funding options for competing projects. To this end, this thesis work describes multiple applications of a new Project Evaluation Toolkit (PET) for highway project assessment. The PET itself was developed over a two-year period by the thesis author, in conjunction with Dr. Kara Kockelman, Dr. Chi Xie, and some support by others, as described in Kockelman et al. (2010) and the PET Users Guidebook (Fagnant et al. 2011). Using just link-level traffic counts (and other parameter values, if users wish to change defaults), PET quickly estimates how transportation network changes impact traveler welfare (consisting of travel times and operating costs), travel time reliability, crashes, and emissions. Summary measures (such as net present values and benefit-cost ratios) are developed over multi-year/long-term horizons to quantify the relative merit of project scenarios. This thesis emphasizes three key topics: a background and description of PET, case study evaluations using PET, and sensitivity analysis (under uncertain inputs) using PET. The first section includes a discussion of PET’s purpose, operation and theoretical behavior, much of which is taken from Fagnant et al. (2010). The second section offers case studies on capacity expansion, road pricing, demand management, shoulder lane use, speed harmonization, incident management and work zone timing along key links in the Austin, Texas network. The final section conducts extensive sensitivity testing of results for two competing upgrade scenarios (one tolled, the other not); the work examines how input variations impact PET outputs over hundreds of model applications. 
Taken together, these investigations highlight PET’s capabilities while identifying potential shortcomings. Such findings allow transportation planners to better appreciate the impacts that various projects can have on the traveling public, how project evaluation may best be tackled, and how they may use PET to anticipate impacts of projects they may be considering, before embarking on more detailed analyses and finalizing investment decisions.
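PET's summary measures — net present value and benefit-cost ratio over a multi-year horizon — can be illustrated with a minimal sketch. The annual cash flows and the 5% discount rate here are hypothetical, not PET defaults.

```python
# Minimal sketch of the summary measures PET reports: discount each year's
# benefits and costs to present value, then form the net present value (NPV)
# and benefit-cost ratio (B/C). All figures below are hypothetical.

def present_value(flows, rate):
    """Discount a list of annual amounts (years 1..n) back to year 0."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(flows, start=1))

benefits = [120.0, 125.0, 130.0, 135.0]  # e.g. traveler-welfare gains ($k/yr)
costs    = [300.0, 10.0, 10.0, 10.0]     # construction, then maintenance ($k/yr)
rate = 0.05                              # annual discount rate

pv_b = present_value(benefits, rate)
pv_c = present_value(costs, rate)
npv = pv_b - pv_c
bcr = pv_b / pv_c

print(f"NPV = {npv:.1f}, B/C = {bcr:.2f}")  # → NPV = 139.4, B/C = 1.45
```

A project scenario with B/C above 1 (positive NPV) delivers discounted benefits exceeding discounted costs, which is the basis PET uses to rank competing scenarios.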
5

Modelling container logistics processes in container terminals : a case study in Alexandria

ElMesmary, Hebatallah Mohammed January 2015 (has links)
This study aims to optimize the logistics processes of container terminals. Potentially powerful pipe-flow models of container terminal logistics processes have been neglected to date, and modelling of terminals is rare. Because research adopting a pipe-flow and dynamic operational perspective is scarce, a case application in Alexandria, Egypt collated empirical container and information flows from interviews and company records in order to describe and model the terminal's logistics processes. The methodology combines qualitative and quantitative methods and proceeds sequentially from description to simulation. Primary and secondary data were presented as a pipe-flow model to show the interrelations between the company's resources and to identify bottlenecks. Simulation modelling used Simul8 software. Operational-level modelling of both import and export flows simulated the actual inbound and outbound movement of containers from entry to exit. The import logistics process includes activities such as unloading vessels by quay crane and moving containers by tractor to yard cranes for storage, where customs procedures take place before containers exit the terminal on customers' trucks. The export logistics process includes the activities associated with customers' trucks, lifters, storage yards, tractors and quay cranes. The model takes into account the uncertainties in each activity. This study focuses on operational aspects rather than cost issues, and considers container flows rather than vessel flows. Although the simulated model was not generalized, implementation elsewhere is possible. Following successful validation of a base simulation model reproducing the case company's historical scenario, scenario testing empowered the case company to pro-actively design and test the impact of operational changes on the entire logistics process.
The study evaluates a typical container terminal logistics system including both import and export containers in the presence of multiple uncertainties in terminal operations (e.g. quay crane operations, tractor operations, yard crane operations). Sensitivity testing and scenario analysis can empower terminal managers to make decisions to improve performance, and to guide terminal planners, managers, and operators in testing future investment scenarios before implementation.
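As a rough illustration of the discrete-event logic a Simul8 model of the import chain encodes (quay crane, tractor haul, yard crane, each with uncertain service times), here is a toy Python simulation. All timing parameters are invented, not drawn from the Alexandria case data.

```python
# Toy discrete-event sketch of the import flow described above:
# quay crane -> tractor -> yard crane, each modelled as a single server with
# an exponentially distributed service time. All parameters are invented.
import random

random.seed(42)

def simulate(n_containers, service_means):
    """Push containers through the servers in sequence; return mean dwell time."""
    free_at = [0.0] * len(service_means)   # when each server is next free
    dwell = []
    for i in range(n_containers):
        arrival = i * 2.0                  # a container arrives every 2 minutes
        t = arrival
        for s, mean in enumerate(service_means):
            start = max(t, free_at[s])     # queue if the server is busy
            service = random.expovariate(1.0 / mean)
            free_at[s] = start + service
            t = start + service
        dwell.append(t - arrival)
    return sum(dwell) / n_containers

# quay crane ~3 min, tractor haul ~4 min, yard crane ~3 min per container
print(f"mean dwell: {simulate(200, [3.0, 4.0, 3.0]):.1f} min")
```

Re-running with different service means is the kind of scenario testing the abstract describes: a bottleneck appears wherever a server's mean service time approaches or exceeds the arrival interval.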
6

Using Code Mutation to Study Code Faults in Scientific Software

Hook, Daniel 22 April 2009 (has links)
Code faults can seriously degrade scientific software accuracy. Therefore, it is imperative that scientific software developers scrutinize their codes in an attempt to find these faults. This thesis explores, particularly, the efficacy of code testing as a method of scientific software code fault detection. Software engineers, as experts in code quality, have developed many code testing techniques, but many of these techniques cannot readily be applied to scientific codes for at least two reasons. First, scientific software testers do not usually have access to an ideal oracle. Second, scientific software outputs, by nature, can only be judged for accuracy and not correctness. Testing techniques developed under the assumption that these two problems can be ignored--as most have been--are of questionable value to computational scientists. To demonstrate the reality of these problems and to provide an example of how software engineers and scientists can begin to address them, this thesis discusses the development and application of a novel technique: Mutation Sensitivity Testing (MST). MST is based on traditional mutation testing, but--in place of a focus on mutant "killing"--MST focuses on assessing the mutation sensitivity of a test set. In this thesis, MST experiments are conducted using eight small numerical routines, four classes of mutation operators, and 1155 tests. The results are discussed and some conclusions are drawn. Two observations are of particular interest to computational scientists. First, it is found that oracles that exhibit uncertainties greater than (approximately) 80% of the expected output are of questionable value when they are used in the testing of scientific software. Second, it is found that a small number of carefully selected tests may be sufficient to falsify a code. / Thesis (Master, Computing) -- Queen's University, 2009-04-19
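The MST idea — measuring what fraction of mutants a test set detects, given an oracle with a stated uncertainty (tolerance) — can be sketched as below. The routine, the hand-written mutants, and the tolerances are illustrative only, not the thesis's actual mutation operators or experimental setup.

```python
# Toy sketch of Mutation Sensitivity Testing: run a test set against mutants
# of a small numerical routine and measure the fraction whose output deviates
# from the original by more than the oracle's tolerance on at least one test.

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n panels (the original code)."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

# Mutants: small, fault-like edits to the original routine.
def mutant_off_by_one(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n - 1)) + f(b) / 2)

def mutant_no_halving(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) + sum(f(a + i * h) for i in range(1, n)) + f(b))

def sensitivity(mutants, tests, tolerance):
    """Fraction of mutants the test set detects: output differs from the
    original by more than `tolerance` on at least one test case."""
    detected = sum(
        1 for m in mutants
        if any(abs(m(*t) - trapezoid(*t)) > tolerance for t in tests)
    )
    return detected / len(mutants)

tests = [(lambda x: x * x, 0.0, 1.0, 100), (lambda x: x, 0.0, 2.0, 50)]
mutants = [mutant_off_by_one, mutant_no_halving]
print(sensitivity(mutants, tests, tolerance=1e-6))  # → 1.0
```

Raising the tolerance models a more uncertain oracle: with a large enough tolerance neither mutant is detected, which echoes the thesis's observation that highly uncertain oracles are of questionable value for testing scientific software.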