101 |
Méthodes et algorithmes de segmentation et déconvolution d'images pour l'analyse quantitative de Tissue Microarrays / Methods and algorithms of image segmentation and deconvolution for quantitative analysis of Tissue Microarrays
Nguyễn, Hoài Nam, 18 December 2017 (has links)
Ce travail de thèse a pour objectif de développer des méthodes originales pour l'analyse quantitative des images de Tissue Microarrays (TMAs) acquises en fluorescence par des scanners dédiés. Nous avons proposé des contributions en traitement d'images portant sur la segmentation des objets d'intérêt (i.e. des échantillons de tissus sur la lame de TMA scannée), la correction des artefacts d'acquisition liés aux scanners en question ainsi que l'amélioration de la résolution spatiale des images acquises en tenant compte des modalités d'acquisition (imagerie en fluorescence) et de la conception des scanners. Ces développements permettent d'envisager une nouvelle plateforme d'analyse de TMAs automatisée, qui répond aujourd'hui à une forte demande dans la recherche contre les cancers. Les TMAs (ou « puces à tissus ») sont des lames histologiques sur lesquelles de nombreux échantillons tissulaires venant de différents donneurs sont déposés selon une structure de grille afin de faciliter leur identification. Pour pouvoir établir le lien entre chaque échantillon et ses données cliniques correspondantes, on s'intéresse non seulement à segmenter ces échantillons mais encore à retrouver leur position théorique (les indices de ligne et de colonne) sur la grille de TMA, car cette dernière est souvent très déformée pendant la fabrication des lames. Au lieu de calculer directement les indices de ligne et de colonne (des échantillons), nous avons reformulé ce problème comme un problème d'estimation de la déformation de la grille de TMA théorique à partir du résultat de segmentation, en utilisant l'interpolation par splines « plaques minces ». Nous avons combiné les ondelettes et un modèle d'ellipses paramétriques pour éliminer les fausses alarmes et donc améliorer les résultats de segmentation. Selon la conception des scanners, les images sont acquises pixel par pixel le long de chaque ligne, avec un changement de direction du balayage entre deux lignes consécutives. Un problème fréquent est le mauvais positionnement des pixels dû à la mauvaise synchronisation des modules mécaniques et électroniques. Nous avons donc proposé une méthode variationnelle pour la correction de ces artefacts en estimant le décalage entre les pixels sur les lignes consécutives. Cette méthode, inspirée du calcul du flot optique, consiste à estimer un champ de vecteurs en minimisant une fonction d'énergie composée d'un terme d'attache aux données non convexe et d'un terme de régularisation convexe. La relaxation quadratique est ainsi utilisée pour découpler le problème original en deux sous-problèmes plus simples à résoudre. Enfin, pour améliorer la résolution spatiale des images acquises, qui dépend de la PSF (point spread function), elle-même variable selon le faisceau laser d'excitation, nous avons introduit une méthode de déconvolution d'images en considérant une famille de régulariseurs convexes. Les régulariseurs considérés généralisent le concept de variation parcimonieuse (Sparse Variation), combinant la norme L1 de l'image et la variation totale (Total Variation) pour rehausser les pixels dont l'intensité et le gradient sont non nuls. Les expériences montrent que l'utilisation de cette régularisation produit des résultats de déconvolution d'images très satisfaisants en comparaison avec d'autres approches telles que la variation totale ou la norme de Schatten de la matrice hessienne. / This thesis aims at developing dedicated methods for quantitative analysis of Tissue Microarray (TMA) images acquired by fluorescence scanners.
We addressed three issues in biomedical image processing: segmentation of objects of interest (i.e. tissue samples), correction of acquisition artifacts arising during the scanning process, and improvement of the acquired image resolution, while taking into account the imaging modality and the scanner design. The developed algorithms make it possible to envisage a novel automated platform for TMA analysis, for which there is a strong demand in cancer research today. On a TMA slide, multiple tissue samples collected from different donors are assembled according to a grid structure to facilitate their identification. In order to establish the link between each sample and its corresponding clinical data, we are interested not only in the localization of these samples but also in the computation of their array (row and column) coordinates according to the design grid, because the latter is often strongly deformed during the manufacturing of TMA slides. However, instead of directly computing the array coordinates as existing approaches do, we proposed to reformulate this problem as the approximation of the deformation of the theoretical TMA grid using “thin plate splines”, given the result of tissue sample localization. We combined a wavelet-based detection and an ellipse-based segmentation to eliminate false alarms and thus improve the localization of tissue samples. According to the scanner design, images are acquired pixel by pixel along each line, with a change of scan direction between two subsequent lines. Such a scanning system often suffers from pixel mis-positioning (jitter) due to imperfect synchronization of its mechanical and electronic components. To correct these scanning artifacts, we proposed a variational method based on the estimation of pixel displacements between subsequent lines. This method, inspired by optical flow estimation, consists in estimating a dense displacement field by minimizing an energy function composed of a non-convex data fidelity term and a convex regularization term. We used a half-quadratic splitting technique to decouple the original problem into two simpler sub-problems: one is convex and can be solved by a standard optimization algorithm, while the other is non-convex but can be solved by an exhaustive search. To improve the resolution of the acquired fluorescence images, we introduced an image deconvolution method based on a family of convex regularizers. The considered regularizers generalize the concept of Sparse Variation, which combines the L1 norm and Total Variation (TV) to favor the co-localization of high-intensity pixels and high-magnitude gradients. The experiments showed that the proposed regularization produces competitive deconvolution results on fluorescence images compared to those obtained with other approaches such as TV or the Schatten norm of the Hessian matrix.
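To make the last point concrete, here is a minimal numpy sketch (illustrative only, not the thesis implementation) of a Sparse-Variation-type penalty that couples pixel intensity with gradient magnitude; the coupling weight rho, the smoothing constant eps and the forward-difference gradients are assumptions made for this example.

```python
import numpy as np

def sparse_variation(img, rho=0.5, eps=1e-12):
    """Sparse-Variation-type penalty: sum over pixels of
    sqrt(rho^2 * |gradient|^2 + (1 - rho)^2 * intensity^2)."""
    gx = np.diff(img, axis=0, append=img[-1:, :])  # forward difference along rows
    gy = np.diff(img, axis=1, append=img[:, -1:])  # forward difference along columns
    per_pixel = np.sqrt(rho**2 * (gx**2 + gy**2) + (1.0 - rho)**2 * img**2 + eps)
    return float(per_pixel.sum())

rng = np.random.default_rng(0)
print(sparse_variation(rng.random((64, 64))))  # large: many non-zero intensities and gradients
print(sparse_variation(np.zeros((64, 64))))    # ~0: a zero image is not penalized
```

In this sketch, rho = 1 recovers isotropic Total Variation and rho = 0 the L1 norm, which is the sense in which such a family interpolates between the two base regularizers.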
|
102 |
Analyse de documents et du comportement des utilisateurs pour améliorer l'accès à l'information / Analysis of documents and user behavior to improve information access
Jean-Caurant, Axel, 08 October 2018 (has links)
L'augmentation constante du nombre de documents disponibles et des moyens d'accès transforme les pratiques de recherche d'information. Depuis quelques années, de plus en plus de plateformes de recherche d'information à destination des chercheurs ou du grand public font leur apparition sur la toile. Ce flot d'information est bien évidemment une opportunité pour les utilisateurs mais ils sont maintenant confrontés à de nouveaux problèmes. Auparavant, la principale problématique des chercheurs était de savoir si une information existait. Aujourd'hui, il est plutôt question de savoir comment accéder à une information pertinente. Pour résoudre ce problème, deux leviers d'action seront étudiés dans cette thèse. Nous pensons qu'il est avant tout important d'identifier l'usage qui est fait des principaux moyens d'accès à l'information. Être capable d'interpréter le comportement des utilisateurs est une étape nécessaire pour d'abord identifier ce que ces derniers comprennent des systèmes de recherche, et ensuite ce qui doit être approfondi. En effet, la plupart de ces systèmes agissent comme des boîtes noires qui masquent les différents processus sous-jacents. Si ces mécanismes n'ont pas besoin d'être entièrement maitrisés par les utilisateurs, ils ont cependant un impact majeur qui doit être pris en compte dans l'exploitation des résultats. Pourquoi le moteur de recherche me renvoie-t-il ces résultats ? Pourquoi ce document est-il plus pertinent qu'un autre ? Ces questions apparemment banales sont pourtant essentielles à une recherche d'information critique. Nous pensons que les utilisateurs ont le droit et le devoir de s'interroger sur la pertinence des outils informatiques mis à leur disposition. Pour les aider dans cette tâche, nous avons développé une plateforme de recherche d'information en ligne à double usage. Elle peut tout d'abord être utilisée pour l'observation et la compréhension du comportement des utilisateurs. De plus, elle peut aussi être utilisée comme support pédagogique, pour mettre en évidence les différents biais de recherche auxquels les utilisateurs sont confrontés. Dans le même temps, ces outils doivent être améliorés. Nous prenons dans cette thèse l'exemple de la qualité des documents qui a un impact certain sur leur accessibilité. La quantité de documents disponibles ne cessant d'augmenter, les opérateurs humains sont de moins en moins capables de les corriger manuellement et de s'assurer de leur qualité. Il est donc nécessaire de mettre en place de nouvelles stratégies pour améliorer le fonctionnement des systèmes de recherche. Nous proposons dans cette thèse une méthode pour automatiquement identifier et corriger certaines erreurs générées par les processus automatiques d'extraction d'information (en particulier l'OCR). / The constant increase of available documents and tools to access them has led to a change of research practices. For a few years now, more and more information retrieval platforms are made available online to the scientific community or the public. This data deluge is a great opportunity for users seeking information. However, it comes with new problems and new challenges to overcome. Formerly, the main issue for researchers was to identify if a particular resource existed. Today, the challenge is more about finding how to access pertinent information. We have identified two distinct levers to limit the impact of this new search paradigm. First, we believe that it is necessary to analyze how the different search platforms are used. 
Being able to understand and interpret users' behavior is a necessary step to identify, first, what users understand of the search systems and, second, what needs further clarification. Indeed, most systems act as black boxes which conceal the underlying transformations applied to the data. Users do not need to understand in detail how those algorithms work. However, those algorithms have a major impact on the accessibility of information and need to be taken into account when exploiting search results. Why is the search engine returning these particular results? Why is this document more pertinent than another? Such seemingly naive questions are nonetheless essential to an analytical approach to the information search and retrieval task. We think that users have a right and a duty to question the relevance of the tools at their disposal. To help them cope with these issues, we developed a dual-use information search platform. On the one hand, it can be used to observe and understand user behavior. On the other hand, it can be used as a pedagogical medium to highlight the search biases users can be exposed to. At the same time, we believe that the tools themselves must be improved. In the second part of this thesis, we study the impact that the quality of documents can have on their accessibility. Because of the increase in the number of documents available online, human operators are less and less able to ensure their quality manually. Thus, there is a need for new strategies to improve the way search platforms operate and process documents. We propose a new method to automatically identify and correct errors generated by information extraction processes such as OCR.
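As a purely generic illustration of post-OCR cleanup of the kind mentioned above (the abstract does not detail the actual method, so the tiny vocabulary and the similarity threshold below are made-up assumptions), suspicious tokens can be checked against a lexicon and replaced by their closest match:

```python
import difflib

vocabulary = ["information", "retrieval", "platform", "document", "search"]

def correct_token(token, vocab=vocabulary, cutoff=0.8):
    """Return the token unchanged if it is known, else its closest vocabulary match."""
    if token.lower() in vocab:
        return token
    match = difflib.get_close_matches(token.lower(), vocab, n=1, cutoff=cutoff)
    return match[0] if match else token

# Typical OCR confusions (l/1, t/l) on hypothetical tokens:
print([correct_token(t) for t in ["informatlon", "retrieva1", "search"]])
# expected: ['information', 'retrieval', 'search']
```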
|
103 |
Feedback and Error Corrections: on Swedish Students' Written English Assignments
Eriksson, Maria, January 2006 (has links)
It is important to think about how to correct an essay and what the students should learn from it. My aim in this paper is to look into what different researchers have said about feedback on written assignments and to carry out a study of the kind of feedback that is actually used in secondary school today, and of what students and teachers think about it. The results show that underlining is the marking technique most used in the secondary school where I did my investigation. This technique was also the one most preferred by the students. Two teachers were interviewed and both said that they used underlining because experience has shown that this marking technique is the most effective one. Furthermore, the results from the essays differed when analyzing errors corrected with complete underlining, partial underlining, crossing out and giving the right answer. One marking technique got good results when dealing with one kind of error and worse results with others. My conclusion is that teachers need to vary their marking technique depending on the specific kind of error. Also, the results from a questionnaire showed that most of the students would like to get feedback on every written assignment. Not many of them said that they were already getting it, although this was what both teachers claimed. To conclude, there are many different ways to deal with marking and feedback. The key word seems to be variation. As long as teachers vary their ways of dealing with marking and giving feedback, they will eventually find one or two that are most effective. Involving the students in this decision can also be a good idea, if they are interested.
|
104 |
Characterization and Correction of Analog-to-Digital Converters
Lundin, Henrik, January 2005 (has links)
Denna avhandling behandlar analog-digitalomvandling. I synnerhet behandlas postkorrektion av analog-digitalomvandlare (A/D-omvandlare). A/D-omvandlare är i praktiken behäftade med vissa fel som i sin tur ger upphov till distorsion i omvandlarens utsignal. Om felen har ett systematiskt samband med utsignalen kan de avhjälpas genom att korrigera utsignalen i efterhand. Detta verk behandlar den form av postkorrektion som implementeras med hjälp av en tabell ur vilken korrektionsvärden hämtas. Innan en A/D-omvandlare kan korrigeras måste felen i den mätas upp. Detta görs genom att estimera omvandlarens överföringsfunktion. I detta arbete behandlas speciellt problemet att skatta kvantiseringsintervallens mittpunkter. Det antas härvid att en referenssignal finns tillgänglig som grund för skattningen. En skattare som baseras på sorterade data visas vara bättre än den vanligtvis använda skattaren baserad på sampelmedelvärde. Nästa huvudbidrag visar hur resultatet efter korrigering av en A/D-omvandlare kan predikteras. Omvandlaren antas här ha en viss differentiell olinjäritet och insignalen antas påverkad av ett slumpmässigt brus. Ett postkorrektionssystem, implementerat med begränsad precision, korrigerar utsignalen från A/D-omvandlaren. Ett uttryck härleds som beskriver signal-brusförhållandet efter postkorrektion. Förhållandet visar sig bero på den differentiella olinjäritetens varians, det slumpmässiga brusets varians, omvandlarens upplösning samt precisionen med vilken korrektionstermerna beskrivs. Till sist behandlas indexering av korrektionstabeller. Valet av metod för att indexera en korrektionstabell påverkar såväl tabellens storlek som förmågan att beskriva och korrigera dynamiska fel. I avhandlingen behandlas i synnerhet tillståndsmodellbaserade metoder, det vill säga metoder där tabellindex bildas som en funktion av flera på varandra följande sampel. Allmänt gäller att ju fler sampel som används för att bilda ett tabellindex, desto större blir tabellen, samtidigt som förmågan att beskriva dynamiska fel ökar. En indexeringsmetod som endast använder en delmängd av bitarna i varje sampel föreslås här. Vidare så påvisas hur valet av indexeringsbitar kan göras optimalt, och experimentella utvärderingar åskådliggör att tabellstorleken kan reduceras avsevärt utan att fördenskull minska prestanda mer än marginellt. De teorier och resultat som framförs här har utvärderats med experimentella A/D-omvandlardata eller genom datorsimuleringar. / Analog-to-digital conversion and quantization constitute the topic of this thesis. Post-correction of analog-to-digital converters (ADCs) is considered in particular. ADCs usually exhibit non-ideal behavior in practice. These non-idealities spawn distortions in the converter's output. Whenever the errors are systematic, it is possible to mitigate them by mapping the output into a corrected value. The work herein is focused on problems associated with post-correction using look-up tables. All results presented are supported by experiments or simulations. The first problem considered is characterization of the ADC. This is in fact an estimation problem, where the transfer function of the converter should be determined. This thesis deals with estimation of quantization region midpoints, aided by a reference signal. A novel estimator based on order statistics is proposed, and is shown to have superior performance compared with the sample mean traditionally used. The second major area deals with predicting the performance of an ADC after post-correction.
A converter with static differential nonlinearities and random input noise is considered. A post-correction is applied, but with limited (fixed-point) resolution in the corrected values. An expression for the signal-to-noise and distortion ratio after post-correction is provided. It is shown that the performance depends on the variance of the differential nonlinearity, the variance of the random noise, the resolution of the converter and the precision of the correction values. Finally, the problem of addressing, or indexing, the correction look-up table is dealt with. The indexing method determines both the memory requirements of the table and the ability to describe and correct dynamically dependent error effects. The work here is devoted to state-space-type indexing schemes, which determine the index from a number of consecutive samples. There is a tradeoff between table size and dynamics: using more samples for indexing captures more of the dynamic behavior, but also gives a larger table. An indexing scheme that uses only a subset of the bits in each sample is proposed. It is shown how the selection of bits can be optimized, and experimental evaluations show that a substantial reduction in memory size is possible with only a marginal reduction in performance. / QC 20101019
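A schematic sketch of the bit-subset indexing idea is given below, assuming 8-bit samples and a two-sample state; the particular bit masks and the zero-filled table are placeholders, not the optimized selection or calibrated correction values from the thesis.

```python
import numpy as np

BITS_CURRENT = 0b11110000   # keep the 4 most significant bits of the current sample
BITS_PREVIOUS = 0b11000000  # keep the 2 most significant bits of the previous sample

def table_index(curr, prev):
    """Pack the selected bits of (current, previous) samples into a 6-bit address."""
    hi = (curr & BITS_CURRENT) >> 4   # 4 bits
    lo = (prev & BITS_PREVIOUS) >> 6  # 2 bits
    return (hi << 2) | lo             # 64 possible indices

correction_table = np.zeros(64, dtype=np.int16)  # would be filled by calibration

def correct(samples):
    raw = np.asarray(samples, dtype=np.int16)
    out = raw.copy()
    for i in range(1, len(raw)):
        out[i] = raw[i] + correction_table[table_index(int(raw[i]), int(raw[i - 1]))]
    return out

print(correct([10, 200, 130, 45]))  # unchanged here because the table is all zeros
```

Using 4 + 2 bits gives a 64-entry table instead of the 2^16 entries that a full two-sample index would require, which is the memory-versus-dynamics trade-off described above.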
|
105 |
Evaluating forecast accuracy for Error Correction constraints and Intercept Correction
Eidestedt, Richard; Ekberg, Stefan, January 2013 (has links)
This paper examines the forecast accuracy of an unrestricted Vector Autoregressive (VAR) model for GDP, relative to a comparable Vector Error Correction (VEC) model that recognizes that the data is characterized by co-integration. In addition, an alternative forecast method, Intercept Correction (IC), is considered for further comparison. Recursive out-of-sample forecasts are generated for both models and both forecast techniques. The generated forecasts for each model are objectively evaluated by a selection of evaluation measures and tests of equal forecast accuracy. The results show that the VEC models consistently outperform the VAR models. Further, IC enhances forecast accuracy when applied to the VEC model, while there is no such indication when applied to the VAR model. For certain forecast horizons there is a significant difference in forecast ability between the VEC IC model and the VAR model.
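A hedged sketch of the recursive out-of-sample comparison is given below; the synthetic cointegrated series, lag orders and estimation/evaluation split are assumptions for illustration and not the paper's GDP data or model specification.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(1)
n = 200
trend = np.cumsum(rng.normal(size=n))  # shared stochastic trend => co-integration
y = np.column_stack([trend + rng.normal(scale=0.5, size=n),
                     0.8 * trend + rng.normal(scale=0.5, size=n)])

split = 150
err_var, err_vec = [], []
for t in range(split, n):  # recursive one-step-ahead forecasts
    train = y[:t]
    fc_var = VAR(train).fit(2).forecast(train[-2:], steps=1)[0]
    fc_vec = VECM(train, k_ar_diff=1, coint_rank=1).fit().predict(steps=1)[0]
    err_var.append(fc_var - y[t])
    err_vec.append(fc_vec - y[t])

def rmse(errors):
    return float(np.sqrt(np.mean(np.square(errors))))

print("VAR  RMSE:", rmse(err_var))
print("VECM RMSE:", rmse(err_vec))
```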
|
106 |
Long-term matric suction measurements in highway subgrades
Nguyen, Quan, 17 May 2006
The performance of Thin Membrane Surface (TMS) highways is largely controlled by the strength of the subgrade soil, which in turn is a function of the soil suction (Fredlund and Morgenstern, 1977). Thermal conductivity suction sensors can be used to indirectly measure in situ matric suction. Thirty-two (32) thermal conductivity sensors were installed under Thin Membrane Surface (TMS) highways at two locations, namely Bethune and Torquay, Saskatchewan, in September 2000. The sensors were installed beneath the pavement, shoulder and side-slope to monitor matric suction and temperature changes with time. The monitoring system at Bethune was damaged after two years of operation. The thermal conductivity sensors at Torquay all appear to have been working well and data are still being collected. Other attempts had been made in the past to use thermal conductivity sensors for field suction measurement, but all were terminated within a short period of time due to limitations associated with the equipment. The long-term suction measurement at the Torquay site is unique and provides valuable field data. This research project presents and interprets the long-term matric suction measurements made between 2000 and 2005 at the Torquay site and from 2000 to 2002 at the Bethune site. To help in the interpretation of the data, a site investigation was undertaken along with a laboratory testing program that included the measurement of Soil-Water Characteristic Curves (SWCCs). As well, a limited laboratory study was undertaken on several new thermal conductivity matric suction sensors. The matric suction readings in the field showed a direct relationship to rainfall and regional evaporation conditions at the test sites. At the Bethune and Torquay test sites, the changes in matric suction appeared to be mainly due to the movement of moisture through the edge of the road. Relatively constant equilibrium suctions were encountered under the driving lanes. Conversely, matric suctions under the side-slopes were found to vary considerably with time and depth. Matric suctions under the driving lanes ranged from 20 to 60 kPa throughout the years, while matric suctions on the side-slopes varied from 100 to 1500 kPa. The greatest variation in soil suction from location to location in the subgrade occurred in the month of April. The soil suctions became less variable in June, while larger variations again occurred from July to October. The matric suction measurements obtained from the thermal conductivity sensors showed a general agreement with the values estimated using the soil-water characteristic curves (SWCCs) measured in the laboratory.
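For reference, laboratory SWCCs of the kind mentioned above are commonly summarized with a closed-form model such as van Genuchten's; the sketch below evaluates one such curve at the reported field suction levels, with placeholder fitting parameters that are not values for the Bethune or Torquay soils.

```python
import numpy as np

def van_genuchten(suction_kpa, theta_r=0.05, theta_s=0.45, a=0.01, n=1.6):
    """Volumetric water content as a function of matric suction (kPa)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (a * suction_kpa) ** n) ** m

for psi in (20, 60, 100, 1500):  # driving-lane versus side-slope suction levels
    print(f"suction {psi:>5} kPa -> volumetric water content = {van_genuchten(psi):.3f}")
```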
|
108 |
"How mean can you be?" : A study of teacher trainee and teacher views on error correctionJakobsson, Sofie January 2010 (has links)
The present study investigates three teacher trainees' and three teachers' views on error correction during oral communication, and the similarities and differences between them. These six people were interviewed separately and asked six questions: the first five questions were the same for everyone, while the last question differed between the teacher trainees and the teachers. My results show that the teacher trainees are insecure when it comes to error correction, whereas the teachers see it as a part of their job, and that is the biggest difference between them. The teacher trainees and the teachers focus on the same types of errors, namely those that can cause problems in communication, such as pronunciation errors, grammatical errors or vocabulary errors.
|
110 |
Improving attenuation corrections obtained using singles-mode transmission data in small-animal PET
Vandervoort, Eric, 05 1900 (has links)
The images in positron emission tomography (PET) represent three-dimensional dynamic distributions of biologically interesting molecules labelled with positron-emitting radionuclides (radiotracers). Spatial localisation of the radiotracers is achieved by detecting in coincidence two collinear photons which are emitted when the positron annihilates with an ordinary electron. In order to obtain quantitatively accurate images in PET, it is necessary to correct for the effects of photon attenuation within the subject being imaged. These corrections can be obtained using singles-mode photon transmission scanning. Although suitable for small-animal PET, these scans are subject to high levels of contamination from scattered photons. Currently, no accurate correction exists to account for scatter in these data. The primary purpose of this work was to implement and validate an analytical scatter correction for PET transmission scanning. In order to isolate the effects of scatter, we developed a simulation tool which was validated using experimental transmission data. We then presented an analytical scatter correction for singles-mode transmission data in PET. We compared our scatter correction data with the previously validated simulation data for uniform and non-uniform phantoms and for two different transmission source radionuclides. Our scatter calculation correctly predicted the contribution from scattered photons to the simulated data for all phantoms and both transmission sources. We then applied our scatter correction as part of an iterative reconstruction algorithm for simulated and experimental PET transmission data for uniform and non-uniform phantoms. We also tested our reconstruction and scatter correction procedure using transmission data for several animal studies (mice, rats and primates). For all studies considered, we found that the average reconstructed linear attenuation coefficients for water or soft-tissue regions of interest agreed with the expected values to within 4%. Using a 2.2 GHz processor, the scatter correction required between 6 and 27 minutes of CPU time (without any code optimisation), depending on the phantom size and source used. This extra calculation time does not seem unreasonable considering that, without scatter corrections, errors in the reconstructed attenuation coefficients were between 18 and 45%, depending on the phantom size and transmission source used.
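As a simplified illustration of how a reconstructed attenuation map is used, the attenuation correction factor for a line of response is the exponential of the line integral of the linear attenuation coefficients along it; the water-cylinder phantom, pixel size and horizontal-line geometry below are assumptions, not the scanner or animal data from this work.

```python
import numpy as np

pixel_cm = 0.05  # 0.5 mm pixels, plausible for small-animal imaging
size = 200
yy, xx = np.mgrid[:size, :size]
mu_map = np.where((xx - size / 2) ** 2 + (yy - size / 2) ** 2 < 60 ** 2,
                  0.096, 0.0)  # water cylinder (mu ~ 0.096 cm^-1 at 511 keV) in air

def acf_horizontal(mu, row, pixel_size_cm):
    """Attenuation correction factor along one horizontal line of response."""
    line_integral = mu[row].sum() * pixel_size_cm  # integral of mu dl (dimensionless)
    return float(np.exp(line_integral))

print(acf_horizontal(mu_map, size // 2, pixel_cm))  # central LOR: ~exp(0.096 * 6) ≈ 1.78
```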
|