WebGL is a standard for drawing graphics in a web browser. Currently it is not widely understood how consistently WebGL performs across the majority of the devices that support it. Determining whether an image looks correct to a human observer is an interesting problem, and a solution is useful when developing WebGL applications, since a developer could make better informed decisions during development.

The differences in capability between WebGL implementations are studied, and a few factors are selected that are likely to contribute to variations in the rendered output. These factors are found by studying the WebGL specification documentation and, in the cases where it is ambiguous, further authoritative sources have contributed to the choice of factors studied.

A prototype testing system is developed, including a tool for simulating implementation differences. Two image processing algorithms are evaluated for their suitability in an automatic testing system. For testing, four test cases are developed. The testing system is run with the test cases on a wide range of devices, both mobile and desktop.

The results show that image processing alone is not a suitable basis for determining test success or failure. However, some promise is shown in using image processing as one component in a fully automatic testing system. Furthermore, developing test cases that perform as the test constructor intends proves to be a challenge in itself.
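To illustrate the kind of image comparison such an automatic testing system might rely on, the following is a minimal sketch in Python. The abstract does not name the two algorithms evaluated; root-mean-square error (RMSE) with a tolerance threshold is used here purely as a hypothetical example of tolerating small per-implementation rendering variations while flagging gross differences. The function names and the threshold value are assumptions, not taken from the thesis.

```python
import math

def rmse(reference, rendered):
    """Root-mean-square error between two equal-length pixel buffers
    (flat lists of 0-255 channel values)."""
    if len(reference) != len(rendered):
        raise ValueError("images must have the same dimensions")
    total = sum((a - b) ** 2 for a, b in zip(reference, rendered))
    return math.sqrt(total / len(reference))

def images_match(reference, rendered, threshold=2.0):
    # A small RMSE tolerance absorbs benign implementation differences
    # (e.g. rounding in rasterization), while a large error suggests a
    # genuine rendering fault. The threshold here is illustrative.
    return rmse(reference, rendered) <= threshold

# Identical buffers produce zero error.
ref = [0, 128, 255, 64]
print(images_match(ref, ref))            # within tolerance
print(images_match(ref, [255, 0, 0, 0])) # gross difference, rejected
```

As the abstract notes, a single pixel-level metric like this is unlikely to suffice on its own; in practice it would be one component among several in a fully automatic system.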
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-113460 |
Date | January 2014 |
Creators | Stenbeck, Marcus |
Publisher | Linköpings universitet, Medie- och Informationsteknik, Linköpings universitet, Tekniska högskolan |
Source Sets | DiVA Archive at Upsalla University |
Language | Swedish |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |