221

The Generation of a Digital Phantom for Testing of Digitally Reconstructed Radiographs

Mason, Nicholas Andrew, 11 October 2004 (has links)
The construction of phantoms for testing imaging parameters has been well documented in the literature. As computers have been introduced into different areas of medicine, they have increasingly been relied upon to replace conventional technologies. One specific example is plain-film X-rays. Digitally Reconstructed Radiographs (DRRs) are computer-generated images produced from a 3D volume of data, such as CT or MRI axial scans, and can be used in place of conventional X-rays. The computer can generate a DRR image for any position, orientation, and magnification, including geometries not physically achievable in the real world.

In this work a technique is developed to generate phantoms that can be used for testing the accuracy of DRRs. A computer-generated phantom can produce multiple test cases that can be used to test specific variables of the DRRs. A series of 12 standard phantoms was used to test the ability of three commercially available treatment planning or virtual simulation systems to generate DRRs. A virtual simulation system under development by the author and collaborators, and seeking approval from the Food and Drug Administration (FDA), was used as a development platform for this work. Initial evaluation of the digital phantoms showed immediate results: the first virtual simulation system tested revealed a major error in its ability to generate accurate DRRs. Subsequent tests of the three commercially available systems further demonstrated the usefulness of the work; errors were revealed in two of the three systems evaluated, but they were determined not to be clinically significant.

In conclusion, the digital phantoms developed in this work provide a fast, accurate method for testing digitally reconstructed radiographs. The method is extremely versatile, as phantoms can be generated with ease for any geometry without needing access to a CT scanner. It can be used to test a number of different DRR image parameters and, should an error be found, to isolate errors that might exist in the imaging device.
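To make the projection idea concrete, here is a minimal, hypothetical sketch of how a DRR reduces to line integrals of attenuation through a voxel volume. The phantom, sizes, and function names are illustrative assumptions; it uses a simple axis-aligned parallel projection, whereas a clinical DRR engine would cast divergent rays from a virtual source with interpolation.

```python
# Toy DRR: sum attenuation along one axis of a synthetic voxel phantom
# and convert the line integrals to transmission via Beer-Lambert.
import numpy as np

# Hypothetical digital phantom: a 64^3 volume of attenuation coefficients
# containing a dense cube, analogous to a simple geometric test phantom.
volume = np.zeros((64, 64, 64), dtype=np.float32)
volume[24:40, 24:40, 24:40] = 1.0  # uniform high-attenuation insert

def drr_parallel(vol, axis=0):
    """Parallel-projection DRR: ray sums along one axis, then exp(-sum)."""
    line_integrals = vol.sum(axis=axis)   # discrete line integrals
    return np.exp(-line_integrals)        # Beer-Lambert transmission image

image = drr_parallel(volume, axis=0)
print(image.shape, image.min(), image.max())  # cube appears as a dark square
```

Because the phantom geometry is defined analytically, the expected projection is known exactly, which is what makes such phantoms useful for verifying a DRR implementation.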
222

Evaluation of Geometric Accuracy and Image Quality of an On-Board Imager (OBI)

Djordjevic, Milos January 2007 (has links)
In this project several tests were performed to evaluate the performance of an On-Board Imager® (OBI) mounted on a clinical linear accelerator. The measurements were divided into three parts: geometric accuracy, image registration and couch shift accuracy, and image quality. A cube phantom containing a radiation-opaque marker was used to study the agreement with treatment isocenter for both kV images and cone-beam CT (CBCT) images. The long-term stability was investigated by acquiring frontal and lateral kV images twice a week over a 3-month period. Stability in vertical and longitudinal robotic arm motion, as well as the stability of the center-of-rotation, was evaluated. Further, the agreement of the kV-image and CBCT centers with the MV-image center was examined.

A marker seed phantom was used to evaluate and compare the three image registration applications: 2D/2D, 2D/3D and 3D/3D. Image registration using kV-kV image sets was compared with MV-MV and MV-kV image sets. Further, the accuracy of 2D/2D matches with images acquired at non-orthogonal gantry angles was evaluated. The image quality of CBCT images was evaluated using a Catphan® phantom. Hounsfield unit (HU) uniformity and linearity were compared with planning CT; HU accuracy is crucial for dose verification using CBCT data.

The geometric measurements showed good long-term stability and accurate position reproducibility after robotic arm motions. A systematic error of about 1 mm in the lateral direction of the kV-image center was detected. A small difference between the kV and CBCT centers was observed and related to a lateral kV detector offset. The vector disagreement between the kV- and MV-image centers was up to 2 mm at some gantry angles. Image registration with the different match applications worked sufficiently well; the 2D/3D match was seen to correct more accurately than the 2D/2D match for large translational and rotational shifts. CBCT images acquired in full-fan mode showed good HU uniformity, but half-fan images were less uniform. In the soft-tissue region the HU agreement with planning CT was reasonable, while a larger disagreement was observed at higher densities. This work shows that the OBI is robust and stable in its performance. With regular QC and calibrations, the geometric precision of the OBI can be maintained within 1 mm of treatment isocenter.
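As a hedged illustration of the isocenter-agreement test described above, the sketch below locates a bright marker in a synthetic kV image and reports its offset from the image center, taken here as a stand-in for the projected treatment isocenter. The pixel spacing, threshold, and image size are assumptions made for the example, not OBI specifications.

```python
# Measure a marker's displacement from the image center in millimetres.
import numpy as np

PIXEL_SPACING_MM = 0.26  # assumed effective pixel size at isocenter scale

def marker_offset_mm(image, threshold):
    """Centroid of above-threshold pixels as an (dx, dy) offset in mm."""
    ys, xs = np.nonzero(image > threshold)
    cy, cx = ys.mean(), xs.mean()
    center_y, center_x = (np.array(image.shape) - 1) / 2.0
    return ((cx - center_x) * PIXEL_SPACING_MM,
            (cy - center_y) * PIXEL_SPACING_MM)

# Synthetic test image with a bright marker slightly right of center.
img = np.zeros((512, 512))
img[255, 259] = 1.0
dx, dy = marker_offset_mm(img, threshold=0.5)
print(f"offset: dx={dx:.2f} mm, dy={dy:.2f} mm")  # ~0.9 mm lateral offset
```

Repeating such a measurement at many gantry angles, as done in the project, separates systematic detector offsets from angle-dependent flex of the robotic arms.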
223

Heapy: A Memory Profiler and Debugger for Python

Nilsson, Sverker January 2006 (has links)
Excessive memory use may cause severe performance problems and system crashes. Without appropriate tools, it may be difficult or impossible to determine why a program is using too much memory. This applies even though Python provides automatic memory management: garbage collection can help avoid many memory allocation bugs, but only to a certain extent, due to the lack of information during program execution. There is still a need for tools that help the programmer understand the memory behaviour of programs, especially in complicated situations. The primary motivation for Heapy is that there has been a lack of such tools for Python.

The main questions addressed by Heapy are how much memory is used by objects, which objects are of most interest for optimization purposes, and why objects are kept in memory. Memory leaks are often of special interest and may be found by comparing snapshots of the heap population taken at different times. Memory profiles, using different kinds of classifiers that may include retainer information, can provide quick overviews revealing optimization possibilities not thought of beforehand. Reference patterns and shortest reference paths provide different perspectives on object access patterns to help explain why objects are kept in memory.
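A typical session with Heapy (distributed in the guppy package) might look like the sketch below: mark a reference point, run the code under suspicion, then inspect the newly allocated objects by type and by referrers. The cache list is a made-up leak suspect, and exact report formatting varies between versions.

```python
# Snapshot-based leak hunting with Heapy, following the workflow above.
from guppy import hpy

h = hpy()
h.setrelheap()          # reference point: ignore objects that already exist

cache = [str(i) * 100 for i in range(10_000)]  # simulated leak suspect

heap = h.heap()         # population of objects allocated since setrelheap()
print(heap)             # profile: counts and sizes classified by type
print(heap.byrcs)       # same population classified by referrers, which
                        # helps explain why the objects are kept in memory
```

Comparing two such snapshots taken at different times is the comparison of heap populations the abstract describes for finding leaks.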
224

Kvalitetssäkring av tjänsteföretag : en studie av utbildningsföretagens auktorisation / Quality assurance in service companies: a study of the authorisation of education companies

Mirnezami, Soheila, Hedengren, Susanna January 2005 (has links)
No description available.
225

External Quality Assessment of HbA1c for Point of Care Testing

Bjuhr, Mathias, Berne, Christian, Larsson, Anders January 2005 (has links)
Objectives: To evaluate the long-term total imprecision of HbA1c testing within the county of Uppsala in relation to the Swedish analytical goal of a coefficient of variation (CV) <3% for HbA1c, and to study the cost of an external quality assurance program for point-of-care HbA1c testing. The county uses the Bayer DCA 2000™ for point-of-care HbA1c testing and currently has 23 of these instruments.

Methods: Method imprecision was assessed by analysis of patient samples performed as split samples during a 3-year period (2002-2004) as part of the quality assurance program for point-of-care HbA1c testing. The samples were first analysed on a Bayer DCA 2000™ and then sent to the centralised laboratory for reanalysis with an HPLC system (Variant II™, Bio-Rad). The testing was performed approximately 8 times per year with each instrument.

Results: The median CV between the HPLC method and the point-of-care instruments for each unit was slightly higher than 3%.

Conclusion: The DCA 2000™ systems have an acceptable imprecision and agreement with the central laboratory. The test results show acceptable agreement within the county regardless of where the patient is tested. The cost of the external quality assurance program is calculated to be approximately SEK 1340 (EUR 150) per instrument.
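For readers unfamiliar with split-sample imprecision, the sketch below shows one common way to turn paired point-of-care and HPLC results into per-pair duplicate CVs and a median CV. The data values are fabricated for illustration; the 3% limit is the Swedish analytical goal cited above.

```python
# Duplicate-based CV for split samples measured on two systems.
import statistics

poc  = [6.1, 7.4, 5.8, 9.0, 6.7]   # hypothetical DCA 2000 results (% HbA1c)
hplc = [6.3, 7.1, 6.0, 8.7, 6.5]   # hypothetical Variant II results

def pair_cv_percent(a, b):
    """CV of a duplicate pair: SD of the two values over their mean, in %."""
    mean = (a + b) / 2
    sd = abs(a - b) / 2 ** 0.5      # SD of an n=2 duplicate pair
    return 100 * sd / mean

cvs = [pair_cv_percent(a, b) for a, b in zip(poc, hplc)]
print(f"median CV = {statistics.median(cvs):.2f}%  (goal: < 3%)")
```

The median, rather than the mean, is commonly used here because single discordant pairs would otherwise dominate a small sample.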
226

Quality Assurance in Quantitative Microbial Risk Assessment: Application of methods to a model for Salmonella in pork

Boone, Idesbald 31 January 2011 (has links)
Quantitative microbial risk assessment (QMRA) is increasingly used to support decision-making on food safety issues. Decision-makers need to know whether QMRA results can be trusted, especially when urgent and important decisions have to be made. This can be achieved by setting up a quality assurance (QA) framework for QMRA. A Belgian risk assessment project (the METZOON project), aiming to assess the risk of human salmonellosis due to the consumption of fresh minced pork, was used as a case study to develop and implement QA methods for evaluating the quality of input data, expert opinion, model assumptions, and the QMRA model itself (the METZOON model).

The first part of this thesis consists of a literature review of available QA methods of interest for QMRA (chapter 2). In the experimental part that follows, different QA methods were applied to the METZOON model. A structured expert elicitation study (chapter 4) was set up to fill in missing parameters for the METZOON model. Experts' judgements were used to derive subjective probability density functions (PDFs) quantifying the uncertainty on the model input parameters. The elicitation was based on Cooke's classical model (Cooke, 1991), which aims to achieve a rational consensus about the elicitation protocol, and allowed different weighting schemes for the aggregation of the experts' PDFs to be compared. Unique to this method is that the performance of experts as probability assessors was measured by their ability to correctly and precisely estimate a set of seed variables (variables from the experts' area of expertise for which the true values were known to the analyst). The weighting scheme based on the experts' performance on the calibration variables was chosen to obtain the combined uncertainty distributions of the missing parameters for the METZOON model.

A novel method for the assessment of data quality, known as the NUSAP (Numeral Unit Spread Assessment Pedigree) system (chapter 5), was tested to screen the quality of the METZOON input parameters. First, an inventory of the essential characteristics of the parameters, including the source of information, the sampling methodology and the distributional characteristics, was established. Subsequently, the quality of these parameters was evaluated and scored by experts using objective criteria (proxy, empirical basis, methodological rigour and validation). The NUSAP method allowed the members of the risk assessment team to debate the quality of the parameters in a structured format. The quality evaluation was supported by graphical representations, which facilitated decisions on the inclusion or exclusion of inputs into the model.

It is well known that assumptions and subjective choices can have a large impact on the output of a risk assessment. To assess the value-ladenness (degree of subjectivity) of the assumptions in the METZOON model, a structured approach based on the protocol by Kloprogge et al. (2005) was chosen (chapter 6). The key assumptions of the METZOON model were first identified and then evaluated by experts in a workshop using four criteria: the influence of situational limitations, plausibility, choice space and agreement among peers. The quality of the assumptions was represented graphically (using kite diagrams, pedigree charts and diagnostic diagrams), making it possible to identify assumptions characterised by a high degree of subjectivity and a high expected influence on the model results, which can be considered weak links in the model. The quality assessment of the assumptions was taken into account to modify parts of the METZOON model, and it increases the transparency of the QMRA process.

In a last application of a QA method, a quality audit checklist (Paisley, 2007) was used to critically review and score the quality of the METZOON model and to identify its strengths and weaknesses (chapter 7). Reviewing the METZOON model with the Paisley checklist yielded a high total score (87%). A higher score would have been obtained if the model had been subjected to external peer review, and if a sensitivity analysis, a validation of the model with recent data, and an updating or replacement of expert judgement data with empirical data had been carried out. It would also be advisable to repeat the NUSAP/pedigree analysis on the input data and assumptions of the final model. The checklist can be used in its current form to evaluate QMRA models and to support model improvements from the early phases of development up to the finalised model, for internal as well as external peer review of QMRAs.

The applied QA methods were found useful for improving the transparency of the QMRA process and for opening the debate about the relevance (fitness for purpose) of a QMRA. A pragmatic approach combining several QA methods is recommended, as the application of one QA method often facilitates the application of another. Many QA methods (NUSAP, structured expert judgement, checklists) are, however, not yet or only insufficiently described in QMRA-related guidelines (at EFSA and WHO level). The time and resources required are another limiting factor that must be taken into account. To understand the degree of quality required from a QMRA, clear communication with the risk managers is needed. It is therefore necessary to strengthen training in QA methods and in the communication of their results. Understanding of the usefulness of these QA methods among risk analysis actors could improve as the methods are tested in a larger number of QMRAs.
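To illustrate the performance-based weighting at the core of Cooke's classical model, the sketch below pools three hypothetical experts' uncertainty distributions into a weighted mixture, with the weights standing in for the calibration-based scores derived from seed variables. The distributions and weights are invented; the real model computes calibration and information scores from elicited quantiles rather than taking weights as given.

```python
# Performance-weighted pooling of expert uncertainty distributions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experts: (performance weight, sampler for the parameter).
experts = [
    (0.60, lambda n: rng.normal(0.20, 0.05, n)),  # well-calibrated expert
    (0.30, lambda n: rng.normal(0.25, 0.10, n)),
    (0.10, lambda n: rng.normal(0.35, 0.15, n)),  # poorly calibrated expert
]

def pooled_samples(experts, n=100_000):
    """Draw from the weighted mixture of the experts' distributions."""
    weights = np.array([w for w, _ in experts])
    counts = rng.multinomial(n, weights / weights.sum())
    return np.concatenate([draw(k) for (_, draw), k in zip(experts, counts)])

samples = pooled_samples(experts)
print(f"combined median = {np.median(samples):.3f}, "
      f"90% interval = [{np.quantile(samples, 0.05):.3f}, "
      f"{np.quantile(samples, 0.95):.3f}]")
```

The resulting mixture is what feeds the missing METZOON parameters: experts who estimated the seed variables well pull the combined distribution toward their own judgement.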
227

Datenqualität als Schlüsselfrage der Qualitätssicherung an Hochschulen / Data Quality as a key issue of quality assurance in higher education

Pohlenz, Philipp January 2008 (has links)
Universities increasingly face a legitimation problem regarding their use of (publicly provided) resources. The criticism mainly concerns teaching, which is said to be organised ineffectively and, through poor study conditions for which the universities themselves are held responsible, to contribute to long study durations and high drop-out rates. It is asserted that students' time is handled irresponsibly and that neither the university as a whole nor individual teachers adequately fulfil society's educational mandate. In order to satisfy the simultaneously rising demand for academic education, universities are transforming themselves into service enterprises whose performance is measured by the efficiency of what they offer. This model is inspired by the governance principles of New Public Management, under which the state withdraws from its traditionally close relationship with the universities and grants them local autonomy, for example by introducing lump-sum budgets for financial self-management. Universities become market actors that prevail over their competitors in the contest for customers by demonstrating quality and excellence.

Various evaluation procedures are used to carry out such performance comparisons. They commonly draw both on higher-education statistics, for example graduation rates, and increasingly on survey data, mostly from students, capturing their assessments of the quality of teaching and study. The survey data in particular are often said to be unsuitable for adequately reflecting the quality of teaching; rather, their informative value is held to be limited by subjective distortions. An assessment built on student survey data would accordingly lead to misjudgements and, in consequence, to unjust performance sanctions. If evaluation procedures are to be accepted as instruments of internal quality assurance and quality development, it must therefore be examined to what extent impairments of the validity of the data bases used for university governance diminish their informative value. Building on those results, the procedures can be developed further. This question is at the centre of the present work.