11

Droplet Growth in Moist Turbulent Natural Convection in a Tube

Madival, Deepak Govind January 2017 (has links) (PDF)
Droplet growth in a cumulus cloud, beginning from inception at sub-micron scale up to drizzle drops of a few hundred microns within an average duration of about half an hour, has been a topic of intense research. In particular, the role of turbulence in aiding droplet growth in clouds has been of immense interest. Motivated by this question, we have performed experiments in which turbulent natural convection coupled with phase change is set up inside a tall vertical insulated tube, by heating water at the tube bottom and circulating cold air at the tube top. The resulting moist turbulent natural convection flow in the tube is expected to be axially homogeneous. Mixing of air masses of differing temperature and moisture content leads to condensation of water vapor into droplets on aerosols available inside the tube. We therefore have droplets in a turbulent flow in which phase change is coupled to turbulence dynamics, just as in clouds. We obtain a linear mean-temperature profile in the tube away from its ends. Because there is a net flux of water vapor through the tube, there is a weak mean axial flow, but it is small compared to the turbulent velocity fluctuations. We have experimented with two setups. The major difference between them is that one setup, called the AC setup, is open to the atmosphere at its top and hence has a higher aerosol concentration inside the tube, while the other, called the RINAC setup, is closed to the atmosphere and, due to the presence of aerosol filters, has a lower aerosol concentration inside the tube. In the latter setup, the cold air temperature at the tube top can also be reduced to sub-zero levels. In both setups, turbulence attains a stationary state and is characterized by a Rayleigh number, based on the temperature gradient inside the tube away from its ends, of about 10^7. A significant result from our experiments is that in the RINAC setup we obtain a broadened droplet size distribution at mid-height of the tube, which includes a few droplets of size 36 μm, a size which in real clouds marks the beginning of rapid droplet growth through collisions driven by their interaction with turbulence. This shows that for broadening of the droplet size distribution, the high turbulence levels prevalent in clouds are not strictly necessary. The second part of our study comprises two pieces of theoretical work. First, we deal with the problem of a large collector drop settling amidst a population of smaller droplets whose spatial distribution is homogeneous in the direction of fall. This problem is relevant to the last stage of droplet growth in clouds, when the droplets have grown large enough that they interact weakly with turbulence and begin to settle under gravity. We propose a new method to solve this problem in which the collision process is treated as a discrete stochastic process, and we reproduce Telford's solution in which collision is treated as a homogeneous Poisson process. We then show how our method may be easily generalized to non-Poisson collision processes. Second, we propose a new method to detect droplet clusters in images. This method is based on the nearest-neighbor relationship between droplets and does not employ arbitrary numerical criteria. It also has desirable invariance properties, in particular under uniform scaling of all distances and addition/deletion of empty space in an image, which renders it robust. This method has an advantage in dealing with highly clustered distributions, where cluster properties vary over the image and an average of properties computed over the entire image could therefore be misleading.
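As an illustration of the collector-drop picture described above, the sketch below treats collisions as a homogeneous Poisson process in the fall distance, so gaps between collisions are exponentially distributed and the collector grows by volume conservation at each coalescence. This is a toy Monte Carlo with assumed parameter values (droplet radius, number density, collection efficiency), not the thesis' actual method.

```python
import numpy as np

def simulate_collector_growth(r0=20e-6, r_drop=10e-6, n_drop=1e8,
                              efficiency=1.0, n_collisions=200, seed=0):
    """Toy Monte Carlo of a collector drop sweeping up smaller droplets,
    treating collisions as a homogeneous Poisson process in fall distance
    (illustrative only; all parameter values are assumptions)."""
    rng = np.random.default_rng(seed)
    r = r0
    fall = 0.0
    for _ in range(n_collisions):
        # Collision rate per metre fallen: E * pi * (r + r_drop)^2 * n
        lam = efficiency * np.pi * (r + r_drop) ** 2 * n_drop
        fall += rng.exponential(1.0 / lam)          # Poisson => exponential gaps
        r = (r ** 3 + r_drop ** 3) ** (1.0 / 3.0)   # volume conservation on coalescence
    return r, fall

final_r, distance = simulate_collector_growth()
print(f"collector radius after 200 collisions: {final_r*1e6:.1f} um "
      f"over {distance:.1f} m of fall")
```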
12

Investigating the Nature of Relationship between Software Size and Development Effort

Bajwa, Sohaib-Shahid January 2008 (has links)
Software effort estimation remains a challenging and debatable research area. Most software effort estimation models take software size as the base input. Among others, the Constructive Cost Model (COCOMO II) is a widely known effort estimation model; it uses Source Lines of Code (SLOC) as the software size to estimate effort. However, many problems arise when using SLOC as a size measure due to its late availability in the software life cycle. Therefore, much research has been devoted to identifying the nature of the relationship between software functional size and effort, since functional size can be measured very early, when the functional user requirements are available. Many other project-related factors have been found to affect effort estimation based on software size; application type, programming language, and development type are some of them. This thesis aims to investigate the nature of the relationship between software size and development effort. It explains known effort estimation models and gives an understanding of Function Points and the Functional Size Measurement (FSM) method. Factors affecting the relationship between software size and development effort are also identified. In the end, an effort estimation model is developed after statistical analyses. We present the results of an empirical study which we conducted to investigate the significance of different project-related factors on the relationship between functional size and effort. We used the project data in the International Software Benchmarking Standards Group (ISBSG) dataset and selected the projects that were measured using Common Software Measurement International Consortium (COSMIC) Function Points. For the statistical analyses, we performed stepwise Analysis of Variance (ANOVA) and Analysis of Covariance (ANCOVA) to build the multivariable models. We also performed multiple regression analysis to formalize the relation.
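To make the modelling step concrete, here is a minimal sketch of a multiple regression relating effort to functional size and one project factor on a log-log scale. The data are synthetic and the variable names are placeholders; the actual models in the thesis were built from ISBSG data with stepwise ANOVA/ANCOVA.

```python
import numpy as np

# Illustrative only: a log-log effort model of the form
#   log(effort) = b0 + b1*log(size) + b2*is_new_development
# fitted by ordinary least squares. The data below are synthetic, not ISBSG values.
rng = np.random.default_rng(1)
size = rng.uniform(50, 2000, 120)                 # COSMIC Function Points (assumed range)
new_dev = rng.integers(0, 2, 120)                 # development type as a dummy variable
effort = np.exp(1.2 + 0.85 * np.log(size) + 0.3 * new_dev
                + rng.normal(0, 0.4, 120))        # person-hours, synthetic

X = np.column_stack([np.ones_like(size), np.log(size), new_dev])
coef, *_ = np.linalg.lstsq(X, np.log(effort), rcond=None)
print("intercept, log-size exponent, new-development effect:", coef.round(3))
```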
13

A Comprehensive Evaluation of Conversion Approaches for Different Function Points

Amiri, Javad Mohammadian, Padmanabhuni, Venkata Vinod Kumar January 2011 (has links)
Context: Software cost and effort estimation are important activities for the planning and estimation of software projects. One major input for cost and effort estimation is the functional size of software, which can be measured with a variety of methods. Since several methods exist for measuring the same entity, converting between their outputs becomes important. Objectives: In this study we investigate different techniques that have been proposed for conversion between different Functional Size Measurement (FSM) methods. We address conceptual similarities and differences between methods, empirical approaches proposed for conversion, evaluation of the proposed approaches, and improvement opportunities available for current approaches. Finally, we propose a new conversion model based on accumulated data. Methods: We conducted a systematic literature review to investigate the similarities and differences between FSM methods and the approaches proposed for conversion. We also identified some improvement opportunities for the current conversion approaches. Sources for articles were IEEE Xplore, Engineering Village, Science Direct, ISI, and Scopus. We also performed snowball sampling to decrease the chance of missing relevant papers. We evaluated the existing models for conversion after merging the data from publicly available datasets. Building on suggestions for improvement, we developed a new model and then validated it. Results: Conceptual similarities and differences between methods are presented, along with the methods and models that exist for conversion between different FSM methods. We also made three major contributions regarding existing empirical methods: for one existing method (piecewise linear regression) we used a systematic and rigorous way of finding the discontinuity point; we evaluated several existing models to test their reliability on a merged dataset; and we accumulated all data from the literature in order to find the nature of the relation between IFPUG and COSMIC using the LOESS regression technique. Conclusions: We conclude that many concepts used by different FSM methods are common, which enables conversion. In addition, statistical results show that the proposed enhancement of the piecewise linear regression model slightly improves the model's test results; even this small improvement can affect project costs considerably. The evaluation of the models shows that it is not possible to say which method predicts unseen data better than the others; which model should be used depends on the concerns of the practitioner. The accumulated data confirm that the empirical relation between IFPUG and COSMIC is not linear and is represented better by two separate lines than by the other models. We also note that, unlike the COSMIC manual's claim that the discontinuity point should be around 200 FP, in the merged dataset the discontinuity point is around 300 to 400 FP. Finally, we propose a new conversion approach using a systematic procedure and piecewise linear regression. Tested on new data, this model shows improvement in MMRE and Pred(25).
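To illustrate the piecewise linear regression idea, the sketch below chooses the discontinuity point by a simple grid search that minimises the total squared error of two independently fitted line segments. The data are synthetic and the candidate breakpoints are arbitrary; the thesis' systematic procedure for locating the discontinuity may differ.

```python
import numpy as np

def fit_piecewise(x, y, candidates):
    """Fit y = a1 + b1*x below a breakpoint and y = a2 + b2*x above it,
    choosing the breakpoint that minimises total squared error.
    A sketch of the idea only, not the thesis' exact procedure."""
    best = None
    for c in candidates:
        lo, hi = x <= c, x > c
        if lo.sum() < 3 or hi.sum() < 3:
            continue
        sse, fits = 0.0, []
        for mask in (lo, hi):
            A = np.column_stack([np.ones(mask.sum()), x[mask]])
            coef, res, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
            sse += float(res[0]) if res.size else 0.0
            fits.append(coef)
        if best is None or sse < best[0]:
            best = (sse, c, fits)
    return best

# Synthetic IFPUG (x) vs COSMIC (y) sizes for illustration only.
rng = np.random.default_rng(2)
x = rng.uniform(20, 800, 150)
y = np.where(x < 350, 0.9 * x, 1.3 * x - 140) + rng.normal(0, 15, 150)
sse, breakpoint, (low_fit, high_fit) = fit_piecewise(x, y, np.arange(100, 700, 25))
print(f"estimated discontinuity point around {breakpoint} FP")
```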
14

Nano-pipette as nanoparticle analyzer and capillary gated ion transistor

Rudzevich, Yauheni 01 January 2014 (has links)
The ability to precisely count inorganic and organic nanoparticles and to measure their size distribution plays a major role in various applications such as drug delivery and nanoparticle counting, among many others. In this work I present a simple resistive pulse method that allows the translocation, counting, and measurement of the size and velocity distributions of silica nanoparticles and liposomes with diameters from 50 nm to 250 nm. The technique is based on the Coulter counter principle but uses nanometer-sized pores. It was found that the ionic current drops when nanoparticles enter the nanopore of a pulled micropipette, producing a clear translocation signal. Pulled borosilicate micropipettes with openings of 50–350 nm were used as the detecting instrument. This method provides a direct, fast, and cost-effective way to characterize inorganic and organic nanoparticles in solution. In this work I also introduce a newly developed Capillary Ionic Transistor (CIT), presented as a nanodevice that provides control of ionic transport through a nanochannel by a gate voltage. The CIT is an ionic transistor that employs a pulled capillary as the nanochannel, with a tip diameter smaller than 100 nm. We observed that a gate voltage applied to a gate electrode deposited on the outer wall of the capillary affects the conductance of the nanochannel, due to a change of surface charge at the solution/capillary interface. A negative gate voltage lowers the conductance and a positive gate voltage increases it. This effect strongly depends on the size of the channel; in general, at least one dimension of the channel has to be small enough for the electrical double layers to overlap.
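For context, the classical Coulter relation links the relative resistance (or current) change to particle and pore geometry; a minimal sketch under the small-particle approximation ΔR/R ≈ d³/(D²·L) is given below. The numerical values are assumptions for illustration, and a pulled pipette's conical geometry would in practice require a more detailed model.

```python
def particle_diameter(delta_i_over_i, pore_diameter, pore_length):
    """Estimate particle diameter from the relative resistive-pulse amplitude
    using the small-particle Coulter approximation dR/R ~ d^3 / (D^2 * L).
    Illustrative only; a pulled pipette's conical tip needs a conical-pore model."""
    d_cubed = delta_i_over_i * pore_diameter ** 2 * pore_length
    return d_cubed ** (1.0 / 3.0)

# Example with assumed numbers: a 1% current drop in a 300 nm pore, 1 um long.
print(particle_diameter(0.01, 300e-9, 1e-6))  # ~9.7e-8 m, i.e. roughly 100 nm
```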
15

The flow of lubricant as a mist in the piston assembly and crankcase of a fired gasoline engine

Dyson, C.J., Priest, Martin, Lee, P.M. 09 December 2022 (has links)
The tribological performance of the piston assembly of an automotive engine is highly influenced by the complex flow mechanisms that supply lubricant to the upper piston rings. As well as affecting friction and wear, the oil consumption and emissions of the engine are strongly influenced by these mechanisms. There is a significant body of work that seeks to model these flows effectively. However, these models are not able to fully describe the flow of lubricant through the piston assembly. Some experimental studies indicate that droplets of lubricant carried in the gas flows through the piston assembly may account for some of this. This work describes an investigation into the nature of lubricant misting in a fired gasoline engine. Previous work in a laboratory simulator showed that the tendency of a lubricant to form mist depends on the viscosity of the lubricant and the type and concentration of viscosity modifier. The higher surface area-to-volume ratio of the lubricant when more droplets are formed, or when the droplets are smaller, is hypothesised to increase the degradation rate of the lubricant. The key work in the investigation was to measure the size distribution of the droplets in the crankcase of a fired gasoline engine. Droplets were extracted from the crankcase and passed through a laser diffraction particle sizer. Three characteristic droplet size ranges were observed: spray sized (250–1000 μm), major mist (30–250 μm), and minor mist (0.1–30 μm). Higher base oil viscosity tended to reduce the proportion of mist-sized droplets. The viscoelasticity contributed by a polymeric viscosity modifier reduced the proportion of mist droplets, especially at high load.
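As a small illustration of how such a distribution can be summarised, the sketch below bins a synthetic set of droplet diameters into the three size ranges named in the abstract and reports the volume-weighted share of each. The diameters are randomly generated, not measured data.

```python
import numpy as np

# Bin droplet diameters (in micrometres) into the three ranges named above
# and report volume-weighted proportions. The diameters below are synthetic.
rng = np.random.default_rng(3)
diam_um = rng.lognormal(mean=3.5, sigma=1.2, size=5000)

bins = {"minor mist (0.1-30 um)": (0.1, 30),
        "major mist (30-250 um)": (30, 250),
        "spray (250-1000 um)": (250, 1000)}
volume = diam_um ** 3                      # droplet volume is proportional to d^3
for name, (lo, hi) in bins.items():
    mask = (diam_um >= lo) & (diam_um < hi)
    share = volume[mask].sum() / volume.sum()
    print(f"{name}: {100 * share:.1f}% of sampled volume")
```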
16

Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach

Marín Campusano, Beatriz Mariela 20 July 2011 (has links)
Historically, software production methods and tools have had a single goal: to produce high quality software. Since the goal of Model-Driven Development (MDD) methods is no different, MDD methods have emerged to take advantage of the benefits of using conceptual models to produce high quality software. In such MDD contexts, conceptual models are used as input to automatically generate the final applications. We therefore advocate that there is a relation between the quality of the final software product and the quality of the models used to generate it. The quality of conceptual models can be influenced by many factors. In this thesis, we focus on the accuracy of the techniques used to predict the characteristics of the development process and the generated products. In terms of prediction techniques for software development processes, it is widely accepted that knowing the functional size of applications is essential in order to successfully apply effort and budget models. In order to evaluate the quality of the generated applications, defect detection is considered to be the most suitable technique. The research goal of this thesis is to provide an accurate measurement procedure based on COSMIC for the automatic sizing of object-oriented OO-Method MDD applications. To achieve this goal, it is necessary to accurately measure the conceptual models used in the generation of object-oriented applications. It is also very important for these models to be free of defects so that the applications to be measured are correctly represented. In this thesis, we present the OOmCFP (OO-Method COSMIC Function Points) measurement procedure. This procedure makes a twofold contribution: the accurate measurement of object-oriented applications generated in MDD environments from the conceptual models involved, and the verification of the conceptual models to allow the complete generation of correct final applications. The OOmCFP procedure has been systematically designed, applied, and automated. The measurement procedure has been validated for conformance to the ISO 14143 standard and the metrology concepts defined in the ISO VIM, and the accuracy of the measurements obtained has been validated according to ISO 5725. The procedure has also been validated through empirical studies. The results of the empirical studies demonstrate that OOmCFP can obtain accurate measures of the functional size of applications generated in MDD environments from the corresponding conceptual models. / Marín Campusano, BM. (2011). Functional Size Measurement and Model Verification for Software Model-Driven Developments: A COSMIC-based Approach [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/11237
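For readers unfamiliar with COSMIC, functional size is obtained by counting data movements (Entry, Exit, Read, Write), each worth 1 CFP, across the functional processes of an application. The sketch below shows that aggregation on hypothetical processes; it is not the OOmCFP mapping from OO-Method conceptual models.

```python
from dataclasses import dataclass

@dataclass
class FunctionalProcess:
    """A functional process and its COSMIC data movements.
    In COSMIC, each Entry, Exit, Read or Write counts as 1 CFP."""
    name: str
    entries: int = 0
    exits: int = 0
    reads: int = 0
    writes: int = 0

    def cfp(self) -> int:
        return self.entries + self.exits + self.reads + self.writes

# Hypothetical functional processes of a generated application (names are made up).
processes = [
    FunctionalProcess("create customer", entries=1, exits=1, reads=1, writes=1),
    FunctionalProcess("list orders", entries=1, exits=2, reads=2),
]
print("functional size:", sum(p.cfp() for p in processes), "CFP")
```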
17

Learning to Measure Invisible Fish

Gustafsson, Stina January 2022 (has links)
In recent years, the EU has observed a decrease in the stocks of certain fish species due to unrestricted fishing. To combat the problem, many fisheries are investigating how to automatically estimate the catch size and composition using sensors onboard the vessels. Yet measuring the size of fish in marine imagery is a difficult task. The images generally suffer from complex conditions caused by cluttered fish, motion blur, and dirty sensors. In this thesis, we propose a novel method for automatic measurement of fish size that can enable measuring both visible and occluded fish. We use a Mask R-CNN to segment the visible regions of the fish, and then fill in the shape of the occluded fish using a U-Net. We train the U-Net to perform shape completion in a semi-supervised manner by simulating occlusions on an open-source fish dataset. Unlike previous shape completion work, we teach the U-Net when to fill in the shape and when not to, by including a small portion of fully visible fish in the input training data. Our results show that the proposed method succeeds in filling in the shape of the synthetically occluded fish as well as of some of the cluttered fish in real marine imagery. We achieve an mIoU score of 93.9% on 1 000 synthetic test images and present qualitative results on real images captured onboard a fishing vessel. The qualitative results show that the U-Net can fill in the shapes of lightly occluded fish, but struggles when the tail fin is hidden and only parts of the fish body are visible. This task is difficult even for a human, and the performance could perhaps be increased by including the fish appearance in the shape completion task. The simulation-to-reality gap could perhaps also be reduced by fine-tuning the U-Net on some real occlusions, which could increase the performance on the heavy occlusions in the real marine imagery.
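The headline metric above, mIoU, is simply the Intersection over Union averaged over the evaluated masks; a minimal sketch on toy boolean masks is shown below (it assumes binary shape-completion masks, as in the synthetic evaluation).

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def mean_iou(preds, targets) -> float:
    """Mean IoU over a set of predicted and ground-truth masks."""
    return float(np.mean([iou(p, t) for p, t in zip(preds, targets)]))

# Toy example with two 4x4 masks.
a = np.zeros((4, 4), bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), bool); b[1:4, 1:3] = True
print(mean_iou([a], [b]))  # 4 / 6 = 0.667
```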
18

Transverse electron beam dynamics in the beam loading regime

Köhler, Alexander 11 July 2019 (has links)
GeV electron bunches accelerated on a centimeter-scale device exemplify the extraordinary advances of laser-plasma acceleration. The combination of high charges from optimized injection schemes and the intrinsically femtosecond-short bunch duration yields kiloampere peak currents. Further enhancing the current while reducing the energy spread will pave the way for future applications, e.g. as the driver for compact secondary radiation sources such as high-field THz, high-brightness x-ray, or gamma-ray sources. One essential requirement for beam transport to a specific application is an electron bunch with high-quality beam parameters, such as low energy spread as well as small divergence and spot size. The inherent micrometer size at the plasma exit is typically sufficient for efficient coupling into a conventional beamline. However, energy spread and beam divergence require optimization before the beam can be transported efficiently. Induced by the high peak current, the beam loading regime can be used to achieve optimized beam parameters for beam transport. / In this thesis, the impact of beam loading on the transverse electron dynamics is systematically studied by investigating betatron radiation and electron beam divergence. For this purpose, the bubble regime with self-truncated ionization injection (STII) is applied to set up a nanocoulomb-class laser wakefield accelerator. The accelerator is driven by 150 TW laser pulses from the DRACO high-power laser system. A supersonic gas jet provides a 3 mm long acceleration medium with electron densities from 3 × 10^18 cm^−3 to 5 × 10^18 cm^−3. The STII scheme together with the employed setup yields highly reproducible injection with bunch charges of up to 0.5 nC. The recorded betatron radius at the accelerator exit is about one micron and reveals that the beam size stays at this value. The optimal beam loading, observed at around 250 pC to 300 pC, leads to a minimum energy spread of ~40 MeV and a 20% smaller divergence. It is demonstrated that incomplete betatron phase mixing due to the small energy spread can explain the experimentally observed minimum beam divergence.
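For orientation, the short calculation below evaluates the standard cold-plasma expressions for the plasma wavelength and the betatron frequency at the quoted electron densities; the electron energy (hence gamma) is an assumed value, and none of the numbers come from the thesis itself.

```python
import numpy as np

# Order-of-magnitude check (not from the thesis): plasma wavelength and betatron
# frequency for the quoted densities, using the standard cold-plasma formulas
#   w_p = sqrt(n_e e^2 / (eps0 m_e)),  lambda_p = 2*pi*c / w_p,  w_beta = w_p / sqrt(2*gamma).
e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

for n_cm3 in (3e18, 5e18):
    n_e = n_cm3 * 1e6                      # convert cm^-3 to m^-3
    w_p = np.sqrt(n_e * e**2 / (eps0 * m_e))
    lam_p = 2 * np.pi * c / w_p
    gamma = 500                            # assumed, roughly a 250 MeV electron
    w_beta = w_p / np.sqrt(2 * gamma)
    print(f"n_e = {n_cm3:.0e} cm^-3: lambda_p ~ {lam_p*1e6:.1f} um, "
          f"omega_beta ~ {w_beta:.2e} rad/s")
```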
19

Error-Aware Density-Based Clustering of Imprecise Measurement Values

Lehner, Wolfgang, Habich, Dirk, Volk, Peter B., Dittmann, Ralf, Utzny, Clemens 15 June 2022 (has links)
Manufacturing process development is under constant pressure to achieve a good yield for stable processes. The development of new technologies, especially in the field of photomask and semiconductor development, is at its physical limits. In this area, data, e.g. sensor data, has to be collected and analyzed for each process in order to ensure process quality. With increasing complexity of manufacturing processes, the volume of data that has to be evaluated rises accordingly. The complexity and data volume exceed the possibility of a manual data analysis. At this point, data mining techniques become interesting. The application of current techniques is complex because most of the data is captured with sensor measurement tools; therefore, every measured value contains a specific error. In this paper we propose an error-aware extension of the density-based algorithm DBSCAN. Furthermore, we present some quality measures which could be utilized for further interpretation of the determined clustering results. With this new cluster algorithm, we can ensure that masks are classified into the correct cluster with respect to the measurement errors, thus ensuring a more likely correlation between the masks.
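To make the idea concrete, here is a minimal sketch of a DBSCAN variant whose neighbourhood test accounts for per-point measurement errors by shrinking each pairwise distance by the two error radii. This is one plausible reading of an error-aware neighbourhood; the paper's actual definition and quality measures may differ, and all parameter values are made up.

```python
import numpy as np

def error_aware_neighbors(points, errors, i, eps):
    """Indices j whose worst-case distance to point i is within eps.
    One possible error-aware test (the paper's exact definition may differ):
    shrink each distance by both points' error radii."""
    d = np.linalg.norm(points - points[i], axis=1)
    return np.where(np.maximum(d - errors - errors[i], 0.0) <= eps)[0]

def dbscan(points, errors, eps=0.5, min_pts=4):
    """Minimal DBSCAN using the error-aware neighbourhood above. Label -1 means noise."""
    labels = np.full(len(points), -1)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        seeds = list(error_aware_neighbors(points, errors, i, eps))
        if len(seeds) < min_pts:
            continue                          # not a core point (for now)
        labels[i] = cluster
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                nbrs = error_aware_neighbors(points, errors, j, eps)
                if len(nbrs) >= min_pts:      # expand only from core points
                    seeds.extend(nbrs)
        cluster += 1
    return labels

rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])
errs = rng.uniform(0.0, 0.05, 60)             # per-measurement error radii
print(np.bincount(dbscan(pts, errs) + 1))     # counts of noise, cluster 0, cluster 1
```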
20

Optimizing endoscopic strategies for colorectal cancer screening : improving colonoscopy effectiveness by optical, non-optical, and computer-based models

Taghiakbari, Mahsa 12 1900 (has links)
Introduction: Colorectal cancer remains a critical public health concern in Canada. Screening programs could reduce the incidence of colorectal cancer and its associated mortality. A high-quality colonoscopy is considered a cost-effective means of cancer prevention through identifying and removing cancer precursor lesions. Although colonoscopy can serve as a preventative measure against cancer, the procedure can impose an additional burden on public health by removing and histologically evaluating insignificant diminutive colorectal polyps, which pose a minimal risk of advanced histology or cancer. Image-enhancement technology would enable physicians to resect and discard diminutive polyps, or to diagnose and leave diminutive rectosigmoid polyps, without histopathology examination.
Despite the availability of computer-based polyp characterization systems, the practice of optical diagnosis remains limited due to the fear of cancer misdiagnosis, patient mismanagement, and the related medicolegal issues. Thus, alternative non-optical resection and discard strategies are imperative for improving the accuracy and safety of optical diagnosis for adaptation to clinical practice. These strategies should follow simple clinical criteria and should not require additional education or image-enhanced devices. Furthermore, the safe practice of optical diagnosis and adequate decision-making regarding the polypectomy technique or surveillance interval depend on accurate polyp size estimation. The inter-endoscopist variability in polyp sizing necessitates the development of reliable and validated methods to enhance the accuracy of size measurement. A virtual scale integrated into a high-definition endoscope is currently available for automated polyp sizing, but its clinical feasibility has not yet been demonstrated. In addition to the points mentioned above, a high-quality colonoscopy requires the complete examination of the entire colonic mucosa, as well as the visualization of the ileocecal valve and appendiceal orifice. To date, no computer-based solution has been able to support endoscopists during live colonoscopies by automatically detecting and differentiating cecal landmarks. Aims: The aims of this thesis are: 1) to investigate the effect of limiting optical diagnosis to polyps of 1–3 mm on the safety of optical diagnosis for the management of diminutive polyps and on the acceptance by endoscopists of its use in real-time practice, while preserving its time- and cost-effectiveness potential; 2) to develop and examine non-optical resect and discard strategies that can replace optical diagnosis while offering the same time- and cost-saving potential; 3) to examine the relative accuracy of a virtual scale endoscope for measuring polyp size; and 4) to train, validate, and test an artificial-intelligence-empowered model that can predict the completeness of a colonoscopy procedure by identifying cecal anatomical landmarks (i.e., the ileocecal valve and appendiceal orifice) and differentiating them from one another, from polyps, and from normal mucosa. Methods: To achieve the first aim of this thesis, a post-hoc analysis of three prospective studies was performed to evaluate the proportion of patients in whom advanced adenomas were found and optical diagnosis resulted in delayed surveillance, in three polyp size groups: 1‒3, 1‒5, and 1‒10 mm. To achieve the second aim of this thesis, two non-optical strategies were developed and tested in two prospective studies: a location-based resect and discard strategy that uses anatomical polyp location to classify colon polyps as non-neoplastic or low-risk neoplastic, and a polyp-based resect and discard strategy that assigns surveillance intervals based on polyp number and size. In all three studies, the agreement of surveillance interval assignment based on high-confidence optical diagnosis or non-optical strategies with pathology-based recommendations was evaluated, along with the proportion of avoided pathology examinations and the proportion of immediate surveillance interval communications.
The third aim of this thesis was addressed through a prospective pilot feasibility study that used the measurement of polyp specimens with a Vernier caliper, immediately after retrieval following polypectomy, as the reference for comparing the relative accuracy of polyp size measurements between endoscopists and a virtual scale endoscope. Finally, the fourth aim of this thesis was assessed through prospective recording and annotation of colonoscopy videos. Unaltered images of polyps, the ileocecal valve, the appendiceal orifice, and normal mucosa were extracted and used to develop and test a deep convolutional neural network model for classifying images according to the landmarks they contain. Results: Reducing the size threshold of optical diagnosis would promote the safety of optical diagnosis by significantly decreasing the risk of discarding a polyp with advanced histology or mismanaging a patient with such polyps. Additionally, the non-optical resect and discard strategies could surpass the benchmark of at least 90% agreement in the assignment of post-polypectomy surveillance intervals compared with decisions based on pathologic assessment. Moreover, the virtual scale endoscope was demonstrated to be more accurate than visual estimation of polyp size in real time. Finally, a deep learning model proved to be highly effective in detecting cecal landmarks, polyps, and normal mucosa, both individually and in combination. Discussion: Optical histology prediction of polyps 1‒3 mm in size is an effective approach to enhance the safety and feasibility of the resect and discard strategy in practice. Non-optical resect and discard approaches also offer feasible alternatives to optical diagnosis when endoscopists are unable to meet the conditions for routine implementation of optical diagnosis, or when image-enhanced technology is not accessible. Both optical and non-optical resect and discard strategies could reduce the additional costs related to histopathology examinations and facilitate the communication of the next surveillance interval on the same day as the index colonoscopy. A virtual scale endoscope would facilitate the use of optical diagnosis for the detection of diminutive polyps and allows for appropriate decision-making during and after colonoscopy. Additionally, the deep learning model may be useful in promoting and monitoring the quality of colonoscopies through the prediction of a complete colonoscopy. This technology may be incorporated as part of a platform for auditing and report generation that eliminates the need for human intervention. Conclusion: The results presented in this thesis will contribute to the current state of knowledge in colonoscopy practice regarding strategies for improving the efficacy of colonoscopy in the prevention of colorectal cancer. This study will provide valuable insights for future researchers interested in developing effective methods for managing diminutive colorectal polyps. Optical diagnosis requires further training and implementation using computer-based characterization modules. Furthermore, despite the slow adoption of computer-based solutions in clinical practice, AI-empowered colonoscopy will eventually pave the way for automatic detection, characterization, and semi-automated completion of procedure reports in the future.
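As a rough illustration of the landmark-classification component, the sketch below fine-tunes a small off-the-shelf convolutional network to separate frames into the four classes discussed above. The architecture, hyperparameters, and the random tensors standing in for annotated frames are all assumptions; the thesis' actual model and training data are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Four classes discussed above; the class order here is arbitrary.
CLASSES = ["appendiceal_orifice", "ileocecal_valve", "polyp", "normal_mucosa"]

# Randomly initialised ResNet-18 backbone (pretrained weights would normally be used).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of (N, 3, 224, 224) colonoscopy frames."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for annotated video frames.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, len(CLASSES), (4,))))
```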
