  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
771

Ska företagens ansvar bli vårt? : en studie om CSR-kommunikation på hemsidor / Will the companies' responsibility become ours? : a study of CSR communication on websites

Olsson, Helen, Larsson, Anton January 2015 (has links)
Studiens syfte var att synliggöra och analysera hur företag som av konsumenter uppfattas som hållbara kommunicerar kring hållbarhet på sina hemsidor. En ökad förståelse inom området för CSR-kommunikation kan dels bidra till akademisk kunskap, men också ligga till grund för mer normativa modeller som kan skapa praktiska fördelar för företag. Internet har utvecklats till en av de mest betydande kommunikationskanalerna inom CSR. Hemsidor är ett dynamiskt medium för kommunikation som erbjuder både flexibilitet och enkel uppdatering. För genomförandet av studien har vi använt oss av två analysmodeller för att tolka budskap, en diskursanalys och en bildanalys. Tidigare studier om CSR-kommunikation har studerat text, men språket är inte det enda att ta hänsyn till vid tolkning av budskap; bilder och illustrationer har också en central roll. Genom att använda två analysmodeller kunde vi studera företagens CSR-budskap. Studien omfattar fem företag som representerar olika branscher och begränsas till företagens hemsidor. Företagen som ingår i studien uppfattas som branschbäst inom hållbarhet av konsumenter. Resultatet visade att CSR-kommunikation på hemsidor är adresserad till konsumenter, vilket är rimligt. Det vi fann särskilt intressant var hur den är det. Kommunikationen har en inkluderande diskurs där konsumenten tas in i kontexten och blir en del av helheten. Tidigare var det företagen som stod i kontext, nu är det ”vi”, dvs. företag och konsumenter tillsammans. Studien har bidragit med ett nytt akademiskt fokus som problematiserar hur företagens ansvar har gått till att bli konsumenters och företags gemensamma ansvar. / The purpose of this study is to visualize and analyze how companies perceived as sustainable by consumers communicate sustainability on their websites. The study contributes to increased academic knowledge in the field of CSR communication and to business economic theory of sustainability. 
Internet has become one of the most significant communication channels within CSR, and websites are a dynamic medium of communication that offers both flexibility and easy updating. Previous studies of CSR communication have studied either text or image. Language is not the only thing to consider when interpreting messages; images and illustrations also have a central role. CSR messages from five companies in different industries have been analysed with the help of two analytical models. The companies were chosen based on their high sustainability ranking among consumers. The results showed that CSR communication on websites is addressed to consumers, which is reasonable. What we found particularly interesting was that the communication has an inclusive discourse in which the consumer is taken into context. Earlier the companies were in context; now it is "we", i.e. businesses and consumers together. The study has contributed a new academic focus that problematizes how corporate responsibility has come to be the joint responsibility of consumers and companies. This thesis is written in Swedish.
772

Genetic approaches to the analysis of body colouration in Nile tilapia (Oreochromis niloticus)

Rajaee, Amy H. January 2011 (has links)
Body colouration in tilapia is an important trait affecting consumer preference. In the Nile tilapia (Oreochromis niloticus), there are three colour variants: normal (wild type), red and blond. In some countries, the red variant is important and reaches higher prices in the market. However, one major problem regarding red tilapia culture is their body colouration, which is often associated with blotching (mainly black but also red) that is undesirable for the consumer. The overall aim of this work was to expand knowledge on various aspects of body colouration in Nile tilapia using genetic approaches. The results of this research are presented as four different manuscripts. The manuscripts (here referred to as Papers) have either been published (Paper IV) or are to be submitted (Papers I, II and III) to relevant peer-reviewed journals. Papers I and II investigated the inheritance of black blotching and other components of the red body colour. Specifically, Paper I consisted of two preliminary trials (Trials 1 and 2) to look at the ontogeny of black blotching and body colour components over a period of six months. Trial 1 investigated the effect of tank background colour (light vs dark) on black blotching and other body colour components and was carried out using a fully inbred (all female) clonal red line. Trial 2 was carried out using mixed-sex fish and aimed to investigate the association of black blotching with the sex of the fish. The results from this study were used to guide the experiment described in Paper II. Sixteen red sires with various levels of black and red blotching were crossed to clonal females and the inheritance of blotching and other body colour components was investigated using parent-offspring regressions. 
The results showed no significant heritability for black blotching and body redness, but a significant correlation between body redness and black blotching was found in female offspring at one sampling point, suggesting that attempts to increase body redness may increase black blotching, as had been hypothesized. Paper III was divided into two parts. The first objective was to map the blond locus onto the tilapia linkage map and the second was to investigate the interaction of the blond and red genes on black blotching, using the blond-linked markers to distinguish different blond genotypes in heterozygous red fish (i.e. RrBlbl or Rrblbl). In blond fish, the formation of melanin is almost blocked via much-reduced melanophores, and this feature may help to reduce black blotching in red tilapia. Two intraspecific families (O. niloticus) and one interspecific family (O. aureus and O. niloticus) were used as mapping families and the blond locus was located on LG5. Four out of eight markers were successfully used to assess the interaction of blond in red blotched fish. The blond gene did not significantly reduce the area of blotching but did reduce its saturation (paler blotching) and enhanced the redness of body colour in the Rrblbl fish compared to the RrBlbl group. Finally, Paper IV aimed to determine the effect of male colouration on reproductive success in Nile tilapia. A choice of one wild type male and one red male was presented to red or wild type females and these fish were allowed to spawn under semi-natural spawning conditions. Eggs were collected from the female’s mouth after spawning and paternity was assessed using microsatellite genotyping and phenotype scoring. No significant departures from equal mating success were observed between the red and wild type males; however, there was a significant difference between the red and wild type females in the frequency of secondary paternal contribution to egg batches. 
The results suggest that the mating success of wild type and red tilapia is approximately equal. The results from this research help to broaden our knowledge and understanding of body colouration in Nile tilapia and provide fundamental information for further research.
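The parent-offspring regression mentioned in this abstract can be sketched numerically. The following is a minimal illustration with synthetic numbers standing in for the sire and offspring colour measurements; the scaling h² ≈ 2 × slope applies to a single-parent regression, and the actual analysis in the thesis may differ in detail:

```python
import numpy as np

def heritability_parent_offspring(sire_values, offspring_means):
    """Narrow-sense heritability from a single-parent (sire) vs.
    mean-offspring regression: h^2 is roughly twice the slope."""
    slope, _intercept = np.polyfit(sire_values, offspring_means, 1)
    return 2.0 * slope

# Synthetic stand-in for the 16 sires crossed to clonal dams:
# offspring resemble their sires only weakly (low heritability).
rng = np.random.default_rng(0)
sires = rng.normal(50.0, 10.0, 16)                 # e.g. % body redness
offspring = 0.15 * (sires - 50.0) + 50.0 + rng.normal(0.0, 2.0, 16)
h2 = heritability_parent_offspring(sires, offspring)
```

A near-zero slope, as reported for black blotching, would translate into a heritability estimate not significantly different from zero.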
773

Computer aided characterization of degenerative disk disease employing digital image texture analysis and pattern recognition algorithms

Μιχοπούλου, Σοφία 19 November 2007 (has links)
Introduction: A computer-based classification system is proposed for the characterization of cervical intervertebral disc degeneration from sagittal magnetic resonance images. Materials and methods: Cervical intervertebral discs in sagittal magnetic resonance images were assessed by an experienced orthopaedist as normal or degenerated (narrowed) employing Matsumoto’s classification scheme. The digital images were enhanced and the intervertebral discs, which comprised the regions of interest, were segmented. First- and second-order statistical textural features extracted from thirty-four discs (16 normal and 16 degenerated) were used to design and test the classification system. In addition, textural features were calculated employing Laws' texture energy measure (TEM) images. The existence of statistically significant differences between the textural feature values generated from normal and degenerated discs was verified employing Student’s paired t-test. A subset with the most discriminating features (p < 0.01) was selected, and the Exhaustive Search and Leave-One-Out methods were used to find the best feature combination and validate the classification accuracy of the system. The proposed system used the Least Squares Minimum Distance Classifier in combination with the four textural features which comprised the best feature combination in order to classify the discs as normal or degenerated. Results: The overall classification accuracy was 93.8%, misdiagnosing 2 discs. The system’s sensitivity in detecting a narrow disc was 93.8% and its specificity was also 93.8%. Conclusion: Further investigation and the use of a larger sample for validation could make the proposed system a trustworthy and useful tool for physicians in the evaluation of degenerative disc disease in the cervical spine. 
/ Σκοπός: Η στένωση των μεσοσπονδύλιων δίσκων της αυχενικής μοίρας, ως κύρια έκφραση εκφυλιστικής νόσου, είναι μια από τις σημαντικότερες αιτίες πρόκλησης πόνου στην περιοχή του αυχένα. Στην κλινική πράξη η αξιολόγηση της στένωσης γίνεται μέσω μέτρησης του μεσοσπονδύλιου διαστήματος, σε διάφορες απεικονίσεις της αυχενικής μοίρας του ασθενούς. Στην παρούσα εργασία προτείνεται μια υπολογιστική μέθοδος ανάλυσης εικόνας, για την αυτοματοποιημένη εκτίμηση της στένωσης από εικόνες μαγνητικής τομογραφίας. Υλικό και Μέθοδος: Μελετήθηκαν 34 μεσοσπονδύλιοι δίσκοι από οβελιαίες τομές μαγνητικής τομογραφίας της αυχενικής μοίρας, οι οποίες ελήφθησαν με χρήση Τ2 ακολουθίας. Η στένωση των μεσοσπονδύλιων δίσκων αξιολογήθηκε από έμπειρο ορθοπαιδικό βάσει της κλίμακας Matsumoto. Οι δίσκοι χωρίστηκαν σε δύο κατηγορίες: (α) 16 φυσιολογικοί και (β) 16 δίσκοι που παρουσίαζαν στένωση. Με χρήση διαδραστικού περιβάλλοντος επεξεργασίας εικόνας καθορίστηκε το περίγραμμα των μεσοσπονδύλιων δίσκων οι οποίοι αποτελούν τις προς ανάλυση περιοχές ενδιαφέροντος (Π.Ε.). Σε κάθε Π.Ε. εφαρμόστηκαν αλγόριθμοι εξαγωγής χαρακτηριστικών υφής. Συγκεκριμένα υπολογίστηκαν χαρακτηριστικά υφής από στατιστικά πρώτης και δεύτερης τάξης καθώς και χαρακτηριστικά από τα μέτρα ενέργειας υφής κατά Laws. Τα παραπάνω χαρακτηριστικά ποσοτικοποιούν διαγνωστικές πληροφορίες της έντασης του σήματος της Π.Ε. και συσχετίζονται με τη βιοχημική σύσταση των απεικονιζόμενων δομών. Τα εξαχθέντα χαρακτηριστικά υφής αξιοποιήθηκαν για τη σχεδίαση του ταξινομητή ελάχιστης απόστασης ελαχίστων τετραγώνων, ο οποίος χρησιμοποιήθηκε για το διαχωρισμό μεταξύ φυσιολογικών δίσκων και δίσκων που παρουσίαζαν στένωση (εκφυλισμένων). Αποτελέσματα: Η ακρίβεια της ταξινόμησης φυσιολογικών και εκφυλισμένων μεσοσπονδύλιων δίσκων ανήλθε σε 93.8%. Η ευαισθησία καθώς και η ειδικότητα της μεθόδου, σε ό,τι αφορά την ανίχνευση εκφυλισμένων δίσκων, είναι επίσης 93.8%. 
Συμπέρασμα: Με δεδομένο το μικρό μέγεθος του δείγματος που χρησιμοποιήθηκε για το σχεδιασμό της μεθόδου, απαιτούνται περαιτέρω εργασίες πιστοποίησης της ακρίβειας ταξινόμησης, προκειμένου η μέθοδος αυτή να αξιοποιηθεί από ακτινολόγους και ορθοπαιδικούς, ως βοηθητικό διαγνωστικό εργαλείο.
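The Leave-One-Out validation of a minimum-distance classifier described in this abstract can be sketched as follows. The feature values are synthetic placeholders, and the nearest-class-mean rule is a simplification of the Least Squares Minimum Distance Classifier actually used in the study:

```python
import numpy as np

def loo_nearest_mean_accuracy(X, y):
    """Leave-one-out accuracy of a minimum-distance (nearest class
    mean) classifier: each sample is held out, class means are fitted
    on the rest, and the held-out sample is assigned to the closest mean."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xtr, ytr = X[mask], y[mask]
        means = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(means, key=lambda c: np.linalg.norm(X[i] - means[c]))
        correct += pred == y[i]
    return correct / len(X)

# Synthetic stand-in for the 16 normal vs 16 degenerated discs,
# with 4 texture features per disc (hypothetical values).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (16, 4)), rng.normal(3, 1, (16, 4))])
y = np.array([0] * 16 + [1] * 16)
acc = loo_nearest_mean_accuracy(X, y)
```

With n = 32, Leave-One-Out uses every sample for testing exactly once, which is why it suits the small sample size discussed in the conclusion.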
774

A Supervised Approach For The Estimation Of Parameters Of Multiresolution Segmentation And Its Application In Building Feature Extraction From VHR Imagery

Dey, Vivek 28 September 2011 (has links)
With the advent of very high spatial resolution (VHR) satellites, spatial details within the image scene have increased considerably. This has led to the development of object-based image analysis (OBIA) for the analysis of VHR satellite images. Image segmentation is the fundamental step for OBIA. However, a large number of techniques exist for remote sensing image segmentation. To identify the best ones for VHR imagery, a comprehensive literature review on image segmentation was performed. Based on that review, it was found that multiresolution segmentation, as implemented in the commercial software eCognition, is the most widely used technique and has been successfully applied to a wide variety of VHR images. However, multiresolution segmentation suffers from the parameter estimation problem. Therefore, this study proposes a solution to the problem of parameter estimation to improve its efficiency in VHR image segmentation. The solution aims to identify the optimal parameters, which correspond to optimal segmentation. It is drawn from the equations governing the merging of any two adjacent objects in multiresolution segmentation, and utilizes spectral, shape, size, and neighbourhood relationships in a supervised manner. In order to justify the results of the solution, a global segmentation accuracy evaluation technique is also proposed. The solution performs excellently with VHR images of different sensors, scenes, and land cover classes. In order to demonstrate the applicability of the solution to a real-life problem, a building detection application based on multiresolution segmentation with the estimated parameters is carried out. The accuracy of the building detection is found to be nearly eighty percent. Finally, it can be concluded that the proposed solution is fast, easy to implement and effective for the intended applications.
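The merging equations referred to above are, in the published multiresolution-segmentation formulation, based on the change in size-weighted heterogeneity when two adjacent objects fuse. Below is a sketch of the spectral part only; the shape term and the scale-parameter threshold are omitted, and the pixel values are hypothetical (this follows the published formulation, not eCognition's internal code):

```python
import numpy as np

def spectral_merge_cost(obj1, obj2, band_weights=None):
    """Increase in size-weighted spectral heterogeneity caused by
    merging two image objects (pixels as rows, bands as columns):
    cost = sum_c w_c * (n_m * sd_m - n_1 * sd_1 - n_2 * sd_2)."""
    merged = np.vstack([obj1, obj2])
    n1, n2, nm = len(obj1), len(obj2), len(merged)
    if band_weights is None:
        band_weights = np.ones(obj1.shape[1])
    cost = 0.0
    for c, wc in enumerate(band_weights):
        cost += wc * (nm * merged[:, c].std()
                      - n1 * obj1[:, c].std() - n2 * obj2[:, c].std())
    return cost

# Two hypothetical cases: spectrally similar vs. distinct neighbours.
rng = np.random.default_rng(2)
similar = spectral_merge_cost(rng.normal(100, 5, (40, 3)),
                              rng.normal(101, 5, (40, 3)))
distinct = spectral_merge_cost(rng.normal(100, 5, (40, 3)),
                               rng.normal(160, 5, (40, 3)))
```

A merge is accepted only while this cost stays below a threshold derived from the scale parameter, which is exactly the parameter the thesis sets out to estimate.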
775

Characteristics of Concrete Containing Fly Ash With Hg-Adsorbent

Mahoutian, Mehrdad Unknown Date
No description available.
776

Computer-Assisted Coronary CT Angiography Analysis : From Software Development to Clinical Application

Wang, Chunliang January 2011 (has links)
Advances in coronary Computed Tomography Angiography (CTA) have resulted in a boost in the use of this new technique in recent years, creating a challenge for radiologists due to the increasing number of exams and the large amount of data for each patient. The main goal of this study was to develop a computer tool to facilitate coronary CTA analysis by combining knowledge of medicine and image processing, and to evaluate its performance in clinical settings. Firstly, a competing fuzzy connectedness tree algorithm was developed to segment the coronary arteries and extract centerlines for each branch. The new algorithm, which is an extension of the “virtual contrast injection” (VC) method, preserves the low-density soft tissue around the artery, and thus reduces the possibility of introducing false positive stenoses during segmentation. Visually reasonable results were obtained in clinical cases. Secondly, this algorithm was implemented in open source software in which multiple visualization techniques were integrated into an intuitive user interface to facilitate user interaction and provide good overviews of the processing results. An automatic seeding method was introduced into the interactive segmentation workflow to eliminate the requirement of user initialization during post-processing. In 42 clinical cases, all main arteries and more than 85% of visible branches were identified, and testing the centerline extraction on a reference database gave results in good agreement with the gold standard. Thirdly, the diagnostic accuracy of coronary CTA using the segmented 3D data from the VC method was evaluated on 30 clinical coronary CTA datasets and compared with the conventional reading method and a different 3D reading method, region growing (RG), from commercial software. As a reference method, catheter angiography was used. 
The percentage of evaluable arteries, accuracy and negative predictive value (NPV) for detecting stenosis were, respectively, 86%, 74% and 93% for the conventional method, 83%, 71% and 92% for VC, and 64%, 56% and 93% for RG. Accuracy was significantly lower for the RG method than for the other two methods (p < 0.01), whereas there was no significant difference in accuracy between the VC method and the conventional method (p = 0.22). Furthermore, we developed a fast, level set-based algorithm for vessel segmentation, which is 10-20 times faster than the conventional methods without losing segmentation accuracy. It enables quantitative stenosis analysis at interactive speed. In conclusion, the presented software provides fast and automatic coronary artery segmentation and visualization. The NPV of using only segmented 3D data is as good as that of conventional 2D viewing techniques, which suggests the potential of using the segmented data as an initial step, with access to 2D reviewing techniques for suspected lesions and cases with heavy calcification. Combining the 3D visualization of segmentation data with the clinical workflow could shorten reading time.
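The per-artery figures above come from standard confusion-matrix arithmetic, which can be sketched as follows (the counts below are hypothetical, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity and negative predictive
    value (NPV) from per-artery true/false positive/negative counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for illustration only.
m = diagnostic_metrics(tp=20, fp=8, tn=70, fn=5)
```

A high NPV with moderate accuracy, as reported for the VC method, means a negative 3D reading reliably rules out stenosis even when positive readings still require 2D confirmation.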
777

An integrated experimental and finite element study to understand the mechanical behavior of carbon reinforced polymer nanocomposites

Bhuiyan, Md Atiqur Rahman 27 August 2014 (has links)
The exceptional properties of carbon nanomaterials make them ideal reinforcements for polymers. However, the main challenges in utilizing their unique properties are their tendency to form agglomerates, their non-controlled orientation, their non-homogeneous distribution and, finally, the change in their shape/size due to processing. All of the above are the result of the nanomaterial/polymer interfacial interactions, which dictate the overall performance of the composites, including the mechanical properties. The aforementioned uncertainties are the reason for the deviation observed between the experimentally determined properties and the theoretically expected ones. The focus of this study is to understand the reinforcing efficiency of carbon nanomaterials in polymers through finite element modeling that captures the effect of the interfacial interactions on the tensile modulus of polymer nanocomposites (PNCs). The novelty of this work is that the probability distribution functions of the nanomaterials' dispersion, distribution, orientation and waviness, determined through image analysis by extracting 3-D information from 2-D scanning electron micrographs, are incorporated into the finite element model, thus allowing for a fundamental understanding of how the nanostructure parameters affect the tensile modulus of the PNCs. The nanocomposites are made using melt mixing followed by either injection molding or melt spinning of fibers. Polypropylene (PP) is used as the polymer and carbon nanotubes (CNT) or exfoliated graphite nanoplatelets (xGnP) are used as nanoreinforcements. The presence of an interphase, confirmed and characterized in terms of stiffness and width using atomic force microscopy, is also accounted for in the model. 
The dispersion and distribution of CNT within the polymer are experimentally altered by using a surfactant and by forcing the molten material to flow through a narrow orifice (melt spinning), which promotes alignment of the CNT and even of the polymer chains along the flow/drawing direction. The effect of the nanomaterials' geometry on the mechanical behavior of PNCs is also studied by comparing the properties of CNT/PP to those of xGnP/PP composites. Finally, the reinforcing efficiency of CNT is determined independently of the viscoelastic behavior of the polymer by conducting tensile testing at temperatures below the glass transition temperature of PP. The finite element model with the incorporated image analysis subroutine has sufficient resolution to distinguish among the different cases (dispersion, distribution, geometry and alignment of nanomaterials), and the predicted tensile modulus is in agreement with the experimentally determined one. In conclusion, this study provides a tool that integrates finite element modeling and thorough experiments, enabling the design of polymer nanocomposites with engineered mechanical properties.
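The thesis captures orientation effects through finite elements driven by image-derived distributions. A much simpler analytic sketch of the same idea combines the well-known Halpin-Tsai estimate for aligned short fibres with a Monte-Carlo orientation-efficiency factor; the cos⁴θ weighting and all numerical values below are illustrative assumptions, not the thesis's model:

```python
import numpy as np

def halpin_tsai(E_f, E_m, aspect, v_f):
    """Halpin-Tsai longitudinal modulus for aligned short fibres:
    E = E_m * (1 + zeta*eta*v_f) / (1 - eta*v_f), zeta = 2*(l/d)."""
    zeta = 2.0 * aspect
    eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
    return E_m * (1.0 + zeta * eta * v_f) / (1.0 - eta * v_f)

# Monte-Carlo sketch: sample misalignment angles from an assumed
# distribution (as one might fit to the micrograph data) and scale
# the aligned modulus by a simple cos^4(theta) efficiency factor.
rng = np.random.default_rng(3)
E_m = 1.5                                  # GPa, PP-like matrix (assumed)
E_aligned = halpin_tsai(E_f=1000.0, E_m=E_m, aspect=100.0, v_f=0.03)
theta = rng.normal(0.0, 0.4, 10000)        # radians about the draw axis
E_effective = float(np.mean(np.cos(theta) ** 4) * E_aligned)
```

Tightening the orientation distribution (smaller spread in theta), as melt spinning does, pushes the effective modulus toward the aligned limit.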
778

Statistical and geometric methods for shape-driven segmentation and tracking

Dambreville, Samuel 05 March 2008 (has links)
Computer Vision aims at developing techniques to extract and exploit information from images. The successful applications of computer vision approaches are multiple and have benefited diverse fields such as manufacturing, medicine and defense. Some of the most challenging tasks performed by computer vision systems are arguably segmentation and tracking. Segmentation can be defined as the partitioning of an image into homogeneous or meaningful regions. Tracking also aims at extracting meaning or information from images; however, it is a dynamic task that operates on temporal (video) sequences. Active contours have proven to be quite valuable at performing the two aforementioned tasks. The active contours framework is an example of variational approaches, in which a problem is compactly (and elegantly) described and solved in terms of energy functionals. The objective of the proposed research is to develop statistical and shape-based tools inspired from or completing the geometric active contours methodology. These tools are designed to perform segmentation and tracking. The approaches developed in the thesis make extensive use of partial differential equations and differential geometry to address the problems at hand. Most of the proposed approaches are cast into a variational framework. The contributions of the thesis can be summarized as follows: 1. An algorithm is presented that allows one to robustly track the position and the shape of a deformable object. 2. A variational segmentation algorithm is proposed that adopts a shape-driven point of view. 3. Diverse frameworks are introduced for including prior knowledge of shapes in the geometric active contour framework. 4. A framework is proposed that combines statistical information extracted from images with shape information learned a priori from examples. 5. A technique is developed to jointly segment a 3D object of arbitrary shape in a 2D image and estimate its 3D pose with respect to a referential attached to a single calibrated camera. 6. A methodology for the non-deterministic evolution of curves is presented, based on the theory of interacting particle systems.
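The curve evolutions behind geometric active contours are typically implemented on a level-set grid. A minimal sketch of one such flow, pure curvature motion, which is simpler than any of the energies used in the thesis, is:

```python
import numpy as np

def curvature_flow_step(phi, dt=0.2):
    """One explicit step of mean-curvature motion on a level-set
    function phi: phi_t = kappa * |grad phi|, with kappa the
    curvature of the level sets computed by finite differences."""
    phi_y, phi_x = np.gradient(phi)          # axis 0 = y, axis 1 = x
    phi_yy, _phi_yx = np.gradient(phi_y)
    phi_xy, phi_xx = np.gradient(phi_x)
    grad2 = phi_x ** 2 + phi_y ** 2 + 1e-12
    kappa = (phi_xx * phi_y ** 2 - 2.0 * phi_x * phi_y * phi_xy
             + phi_yy * phi_x ** 2) / grad2 ** 1.5
    return phi + dt * kappa * np.sqrt(grad2)

# A circle of radius 10, embedded as a signed distance function,
# shrinks under curvature flow (the classic sanity check).
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2) - 10.0
area_before = int((phi < 0).sum())
for _ in range(20):
    phi = curvature_flow_step(phi)
area_after = int((phi < 0).sum())
```

Segmentation energies such as those in the thesis add image-dependent terms to this geometric motion, so the contour stops on object boundaries instead of collapsing.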
779

Computer Aided Analysis of Dynamic Contrast Enhanced MRI of Breast Cancer

Yaniv Gal Unknown Date (has links)
This thesis presents a novel set of image analysis tools developed for the purpose of assisting radiologists with the task of detecting and characterizing breast lesions in image data acquired using magnetic resonance imaging (MRI). MRI is increasingly being used in the clinical setting as an adjunct to x-ray mammography (which is, itself, the basis of breast cancer screening programs worldwide) and ultrasound. Of these imaging modalities, MRI has the highest sensitivity to invasive cancer and to multifocal disease. MRI is the most reliable method for assessing tumour size and extent compared to the gold standard, histopathology. It also shows great promise for the improved screening of younger women (with denser, more radio-opaque breasts) and, potentially, for women at high risk. Breast MRI presently has two major shortcomings. First, although its sensitivity is high, its specificity is relatively poor; i.e. the method detects many false positives. Second, the method involves acquiring several high-resolution image volumes before, during and after the injection of a contrast agent. The large volume of data makes the task of interpretation by the radiologist both complex and time-consuming. These shortcomings have motivated the research and development of computer-aided detection systems designed to improve the efficiency and accuracy of interpretation by the radiologist. Whilst such systems have helped to improve the sensitivity/specificity of interpretation, it is the premise of this thesis that further gains are possible through automated image analysis. However, the automated analysis of breast MRI presents several technical challenges. This thesis investigates several of these: noise filtering, parametric modelling of contrast enhancement, segmentation of suspicious tissue, and quantitative characterisation and classification of suspicious lesions. 
In relation to noise filtering, a new denoising algorithm for dynamic contrast-enhanced MRI (DCE-MRI) data is presented, called the Dynamic Non-Local Means (DNLM). The DCE-MR image data is inherently contaminated by Rician noise and, additionally, the limited acquisition time per volume and the use of fat suppression diminish the signal-to-noise ratio. The DNLM algorithm, specifically designed for DCE-MRI, is able to attenuate this noise by exploiting the redundancy of the information between the different temporal volumes, while taking into account the contrast enhancement of the tissue. Empirical results show that the algorithm attenuates noise in the DCE-MRI data more effectively than any of the previously proposed algorithms. In relation to parametric modelling of contrast enhancement, a new empirical model of contrast enhancement has been developed that is parsimonious in form. The proposed model serves as the basis for the segmentation and feature extraction algorithms presented in the thesis. In contrast to pharmacokinetic models, the proposed model does not rely on measured parameters or constants relating to the type or density of the tissue. It also does not assume a particular relationship between the observed changes in signal intensity and the concentration of the contrast agent. Empirical results demonstrate that the proposed model fits real data better than either the Tofts or Brix models and equally as well as the more complicated Hayton model. In relation to the automatic segmentation of suspicious lesions, a novel method is presented, based on seeded region growing and merging, using criteria based on both the original MR image values and the fitted parameters of the proposed model of contrast enhancement. Empirical results demonstrate the efficacy of the method, both as a tool to assist the clinician with the task of locating suspicious tissue and for extracting quantitative features. 
Finally, in relation to the quantitative characterisation and classification of suspicious lesions, a novel classifier (i.e. a set of features together with a classification method) is presented. Features were extracted from noise-filtered and segmented image volumes and were based both on well-known features and on several new ones (principally, the proposed model of contrast enhancement). Empirical results, based on routine clinical breast MRI data, show that the resulting classifier performs better than other such classifiers reported in the literature. Therefore, this thesis demonstrates that improvements in both sensitivity and specificity are possible through automated image analysis.
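The non-local means principle underlying the DNLM can be sketched in its basic single-frame form; the thesis's algorithm additionally exploits redundancy across the temporal volumes, and the patch size, search window and filtering parameter h below are arbitrary choices for illustration:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.4):
    """Simplified single-frame non-local means: each pixel becomes a
    weighted average of nearby pixels, weighted by the similarity of
    the patches surrounding them."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            p = padded[i:i + patch, j:j + patch]
            weights, values = [], []
            for di in range(-(search // 2), search // 2 + 1):
                for dj in range(-(search // 2), search // 2 + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        q = padded[ii:ii + patch, jj:jj + patch]
                        weights.append(np.exp(-np.sum((p - q) ** 2) / h ** 2))
                        values.append(img[ii, jj])
            out[i, j] = np.average(values, weights=weights)
    return out

# Noisy piecewise-constant test image: averaging over similar patches
# should reduce the noise without blurring the edge away.
rng = np.random.default_rng(4)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + rng.normal(0.0, 0.2, clean.shape)
denoised = nlm_denoise(noisy)
```

Because patches straddling the edge match poorly with flat patches, their weights collapse, which is how the method smooths noise while preserving structure.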
780

Image transition techniques using projective geometry

Wong, Tzu Yen January 2009 (has links)
[Truncated abstract] Image transition effects are commonly used on television and in human computer interfaces. The transition between images creates a perception of continuity, which has aesthetic value in special effects and practical value in visualisation. The work in this thesis demonstrates that better image transition effects are obtained by incorporating properties of projective geometry into image transition algorithms. Current state-of-the-art techniques can be classified into two main categories, namely shape interpolation and warp generation. Many shape interpolation algorithms aim to preserve rigidity, but none preserve it with perspective effects. Most warp generation techniques focus on smoothness and lack the rigidity of perspective mapping. The affine transformation, a commonly used mapping between triangular patches, is rigid but not able to model perspective effects. Image transition techniques from the view interpolation community are effective in creating transitions with the correct perspective effect; however, those techniques usually require more feature points and algorithms of higher complexity. The motivation of this thesis is to enable different views of a planar surface to be interpolated with an appropriate perspective effect. The projective geometric relationship which produces the perspective effect can be specified by two quadrilaterals. This problem is equivalent to finding a perspectively appropriate interpolation for projective transformation matrices. I present two algorithms that enable smooth perspective transition between planar surfaces. The algorithms only require four point correspondences on two input images. ...The second algorithm generates transitions between shapes that lie on the same plane which exhibits a strong perspective effect. It recovers the perspective transformation which produces the perspective effect and constrains the transition so that the in-between shapes also lie on the same plane. 
For general image pairs with multiple quadrilateral patches, I present a novel algorithm that is transitionally symmetrical and exhibits good rigidity. The use of quadrilaterals, rather than triangles, allows an image to be represented by a small number of primitives. This algorithm uses a closed-form force equilibrium scheme to correct the misalignment of the multiple transitional quadrilaterals. I also present an application of my quadrilateral interpolation algorithm to Seitz and Dyer's view morphing technique. This application automates and improves the calculation of the reprojection homography in the postwarping stage of their technique. Finally, I unify different image transition research areas into a common framework; this enables analysis and comparison of the techniques and the quality of their results. I highlight that quantitative measures can greatly facilitate comparisons among different techniques and present a quantitative measure based on epipolar geometry. This novel quantitative measure enables the quality of transitions between images of a scene from different viewpoints to be quantified via the estimated camera path.
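The projective relationship specified by two quadrilaterals can be recovered from four point correspondences with the standard direct linear transform. The sketch below also shows the naive elementwise blend whose lack of perspective correctness motivates the thesis's interpolation algorithms; all coordinates are hypothetical:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: the 3x3 projective matrix mapping four
    source points onto four destination points (the minimal four-point
    case; the null vector of the 8x9 system gives H up to scale)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Unit square mapped onto a mildly perspective quadrilateral.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0.1, 0.0), (0.9, 0.1), (1.0, 0.8), (0.0, 1.0)]
H = homography(src, dst)
mapped = apply_h(H, (1, 1))

# Naive elementwise blend of the identity and H: smooth, but the
# in-between shapes are not perspectively correct -- the defect the
# thesis's matrix interpolation algorithms are designed to avoid.
H_half = 0.5 * np.eye(3) + 0.5 * H
corner_halfway = apply_h(H_half, (1, 1))
```

A perspectively appropriate interpolation would instead interpolate the projective transformation itself so that every in-between quadrilateral remains the image of a rigidly moving plane.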
