11. Least-squares optimal interpolation for direct image super-resolution : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Palmerston North, New Zealand. Gilman, Andrew, January 2009.
Image super-resolution aims to produce a higher-resolution representation of a scene from an ensemble of low-resolution images that may be warped, aliased, blurred and degraded by noise. A variety of methods for performing super-resolution are described in the literature, and in general they consist of three major steps: image registration, fusion and deblurring. This thesis proposes a novel method of performing the first two of these steps. The ultimate aim of image super-resolution is to produce a higher-quality image that is visually clearer, sharper and contains more detail than the individual input images. Machine algorithms cannot assess images qualitatively and typically use a quantitative error criterion, often least-squares. This thesis aims to optimise least-squares directly using a fast method, in particular one that can be implemented using linear filters; hence, a closed-form solution is required. The concepts of optimal interpolation and resampling are derived and demonstrated in practice. Optimal filters optimised on one image are shown to perform near-optimally on other images, suggesting that common image features, such as step-edges, can be used to optimise a near-optimal filter without requiring knowledge of the ground-truth output. This leads to the construction of a pulse model, which is used to derive filters for resampling the non-uniformly sampled images that result from the fusion of registered input images. An experimental comparison shows that a 10th-order pulse-model-based filter outperforms a number of methods common in the literature. The use of optimal interpolation for image registration linearises an otherwise nonlinear problem, resulting in a direct solution. Experimental analysis shows that optimal interpolation-based registration outperforms a number of existing methods, both iterative and direct, across a range of noise levels and for both heavily aliased images and images with a limited degree of aliasing. The proposed method offers flexibility in the size of the region of support, giving a good trade-off between computational complexity and registration accuracy. Together, optimal interpolation-based registration and fusion are shown to perform fast, direct and effective super-resolution.
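The closed-form least-squares machinery this abstract alludes to can be made concrete with a small sketch. The following 1-D illustration is not the thesis's actual method or data: filter coefficients for interpolating missing samples are fitted by least squares against known ground truth, with the filter length (region of support) as a free parameter.

```python
# Minimal sketch of least-squares optimal interpolation in 1-D: choose
# filter weights w minimising ||A w - b||^2, which has the closed-form
# normal-equation solution w = (A^T A)^{-1} A^T b. The training signal
# and the 8-tap filter length are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# "Ground truth" signal on a fine grid (a stand-in for image rows).
x = np.cumsum(rng.standard_normal(4096))   # smooth-ish random walk
even, odd = x[0::2], x[1::2]               # low-res samples / targets

taps = 8                                   # region of support
half = taps // 2

# Each row of A holds the low-res neighbours straddling one missing
# (odd-indexed) sample; b holds the true value of that sample.
rows, targets = [], []
for i in range(half, len(odd) - half):
    rows.append(even[i - half + 1 : i + half + 1])
    targets.append(odd[i])
A, b = np.asarray(rows), np.asarray(targets)

# Closed-form least-squares solution (lstsq used for numerical stability).
w, *_ = np.linalg.lstsq(A, b, rcond=None)

rmse = np.sqrt(np.mean((A @ w - b) ** 2))
print("filter:", np.round(w, 3), " RMSE:", rmse)
```

A filter fitted this way on one signal can then be applied unchanged to another, which is the behaviour the abstract reports for optimal filters transferred between images.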
13. Vliv rozlišení obrázku na přesnost vyhledávání podle obsahu / The Impact of Image Resolution on the Precision of Content-based Retrieval. Navrátil, Lukáš, January 2015.
This thesis focuses on comparing methods for similarity-based image retrieval. Common techniques are introduced, along with the test sets used to measure the accuracy of retrieval systems based on image similarity. Measurements are performed on models implemented on the basis of the presented techniques, examining how their results depend on the input data, the components used and the parameter settings; in particular, the impact of image resolution on retrieval precision is examined. The results are analysed and the models compared.
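As a hedged illustration of the kind of experiment this abstract describes (not the thesis's actual systems or data), the sketch below indexes synthetic images with colour-histogram features and checks how nearest-neighbour retrieval degrades as the query resolution drops.

```python
# Illustrative resolution-vs-precision experiment: the colour classes,
# histogram features and nearest-neighbour matcher are assumptions.
import numpy as np
from PIL import Image

rng = np.random.default_rng(1)

def make_image(hue, size=256):
    """Synthetic 'class' image: noise around a class-specific colour."""
    base = np.array([hue, 255 - hue, (hue * 2) % 256], dtype=float)
    noise = rng.normal(0.0, 40.0, (size, size, 3))
    return Image.fromarray(np.clip(base + noise, 0, 255).astype(np.uint8))

def feature(img, bins=8):
    """L1-normalised 3-D colour histogram as a crude similarity feature."""
    arr = np.asarray(img.convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(arr, bins=(bins,) * 3, range=[(0, 256)] * 3)
    h = hist.ravel()
    return h / h.sum()

# Database: 5 colour classes, 10 images each, indexed at full resolution.
hues = [20, 70, 120, 170, 220]
labels = np.repeat(np.arange(5), 10)
db = np.stack([feature(make_image(h)) for h in hues for _ in range(10)])

# Queries: fresh images from each class, downsampled to various sizes.
for res in (256, 64, 16):
    correct = 0
    for cls, h in enumerate(hues):
        q = make_image(h).resize((res, res), Image.BILINEAR)
        nearest = np.argmin(np.linalg.norm(db - feature(q), axis=1))
        correct += int(labels[nearest] == cls)
    print(f"{res:3d}px queries: {correct}/5 nearest-neighbour hits")
```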
14. Adaptiv bildladdning i en kontextmedveten webbtjänst / Adaptive image loading in a context-aware web service. Halldén, Albin; Schönemann, Madeleine, January 2014.
Today, information on the web is consumed via a variety of heterogeneous devices. Factors such as network connection and screen resolution affect which image is most suitable to deliver to the client: an image in its original form takes a long time to download on a technically limited device and requires a large amount of data. Since browsing on mobile devices via mobile networks is expected to increase, a solution for adaptive image loading is relevant. The aim of this thesis is to explore whether a web service, consisting of a client and a server, can determine the best-suited image quality to deliver to the client, based on the client's current network performance and screen resolution. A device with a lower screen resolution and a slower network warrants an image of lower quality and lower resolution; the download time is thereby shortened and the data volume reduced, contributing to an improved user experience. The thesis presents and evaluates several solutions for adaptive image loading. The solutions are based on two parameters, measured with JavaScript: the width of the client's browser window and the latency between client and server. These parameters form the basis for the scaling of size and quality that is then applied to the image. The image is provided to the client by one of two delivery methods: "predefined images", where several different versions of the image are stored on the server, and "dynamic images", where the images are rendered on the server in real time from the original image using the GD library in PHP. Three types of adaptive image loading (quality adaptation, size adaptation, and a combination of both) are investigated with respect to delivery time and the amount of data delivered, and are evaluated against a base case consisting of the original images. Using some type of adaptation method is better in 14 out of 15 cases than simply delivering the original images. The best results are given by combined adaptation on devices with smaller screen resolutions and slower networks, but it is also beneficial for devices with medium-speed networks and devices supporting higher screen resolutions. Both the predefined and the dynamic delivery methods give good results, but since the dynamic delivery method's scalability with multiple concurrent connections is not known, predefined images are recommended.
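The dynamic-delivery idea lends itself to a short server-side sketch. The thesis's implementation used PHP's GD library; the Python version below, with Pillow, is a hedged stand-in, and the latency breakpoints and JPEG quality levels are illustrative assumptions.

```python
# "Dynamic images" sketch: pick a scale and JPEG quality from the
# client's reported viewport width and measured latency, then render
# the variant on the fly. All thresholds are assumptions.
from io import BytesIO
from PIL import Image

def choose_variant(viewport_px: int, latency_ms: float) -> tuple[int, int]:
    """Return (target_width, jpeg_quality) for the client's conditions."""
    width = min(viewport_px, 1920)        # never upscale past the original
    if latency_ms > 300:                  # slow network: shrink harder
        return max(width // 2, 320), 55
    if latency_ms > 100:
        return width, 70
    return width, 85                      # fast network: near-original

def render(original: Image.Image, viewport_px: int, latency_ms: float) -> bytes:
    w, quality = choose_variant(viewport_px, latency_ms)
    h = round(original.height * w / original.width)
    buf = BytesIO()
    original.resize((w, h), Image.LANCZOS).save(buf, "JPEG", quality=quality)
    return buf.getvalue()

# Example: a phone on a slow connection vs. a desktop on a fast one.
img = Image.new("RGB", (1920, 1080), (200, 120, 40))   # stand-in original
print(len(render(img, 360, 400)), "bytes vs", len(render(img, 1920, 30)), "bytes")
```

The "predefined images" alternative would precompute `render` for a fixed grid of widths and qualities and serve the closest match, trading storage for per-request CPU, which is why the thesis recommends it when concurrent load is unknown.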
15. Detection of Rail Clip with YOLO on Raspberry Pi. Shahi, Jonathan, January 2024.
In a modern world where artificial intelligence (AI) is becoming increasingly integrated into our daily lives, one of the most fundamental and essential skills for an AI is to learn and process information, especially through object detection. There are many algorithms that could be used for this task, but the main focus here is on "You Only Look Once", the YOLO algorithm. This study examines the use of YOLO within embedded systems, specifically for detecting train-related objects on a Raspberry Pi. The aim is to overcome the limitations in processing power and memory typical of small-scale computing platforms like the Raspberry Pi, while maintaining high detection accuracy, fast processing and low energy consumption. This is achieved by training the YOLO model with different image resolutions and different hyperparameter settings, then running inference so that energy consumption can be calculated. The results indicate that while lower resolutions yield lower accuracy, they significantly reduce the computational demands on the Raspberry Pi, making this a viable solution for real-time applications in environments where power availability is limited.
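A hedged sketch of that experimental loop is given below, assuming the Ultralytics YOLO package; the dataset name, sample image, epoch count and the Raspberry Pi's assumed ~5 W load power are stand-ins, not the study's measured setup.

```python
# Train/evaluate YOLO at several input resolutions and estimate
# per-image energy as (inference time) x (assumed device power).
# Requires 'pip install ultralytics'; all numbers are assumptions.
import time
from ultralytics import YOLO

ASSUMED_POWER_W = 5.0      # rough Raspberry Pi 4 load power (assumption)

for imgsz in (640, 416, 320, 224):
    model = YOLO("yolov8n.pt")                 # small model suits the Pi
    model.train(data="coco128.yaml", epochs=10, imgsz=imgsz, verbose=False)
    metrics = model.val(imgsz=imgsz)           # accuracy at this resolution

    start = time.perf_counter()
    model.predict("bus.jpg", imgsz=imgsz, verbose=False)
    seconds = time.perf_counter() - start
    print(f"{imgsz}px  mAP50-95={metrics.box.map:.3f}  "
          f"inference={seconds * 1000:.0f} ms  "
          f"~energy={seconds * ASSUMED_POWER_W:.2f} J/image")
```

The trade-off the abstract reports falls out of this loop directly: smaller `imgsz` lowers mAP but cuts inference time, and with power roughly constant the energy per image drops with it.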
16. Spatial-Spectral Feature Extraction on Pansharpened Hyperspectral Imagery. Kaufman, Jason R., January 2014.
No description available.
17. Detecting near-UV and near-IR wavelengths with the FOVEON image sensor. Cheak, Seck Fai, 12 1900.
Approved for public release; distribution is unlimited. / Traditionally, digital imaging systems rely on dedicated photodetectors to capture specific wavelengths in the visible spectrum. These photodetectors, commonly made of silicon, are arranged in arrays to capture the red, green and blue wavelengths. The signals captured by the individual photodetectors must then be interpolated and integrated to obtain the closest color match and the finest possible resolution with reference to the actual object. The use of spatially separated detectors to sense the primary colors reduces the resolution by a factor of three compared to black-and-white imaging. The FOVEON detector technology greatly improves the color and resolution of the image through its vertically arranged, triple-well photodetector. This is achieved by exploiting the variation of the absorption coefficient of silicon with wavelength in the visible spectrum: in a silicon detector, a shorter wavelength (e.g. blue) is mainly absorbed at a shallow depth, while a longer wavelength (e.g. red) penetrates deeper into the material before being absorbed. By producing a layered silicon detector, all three primary color wavelengths (red, green and blue) can be captured simultaneously. This thesis studies the FOVEON camera's ability to image light from the near-infrared (NIR) to the ultraviolet (UV) range of the electromagnetic spectrum. The images obtained using a set of bandpass filters show that the camera has response in both the UV and NIR regions. / Major, Singapore Armed Forces
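The wavelength-dependent absorption that the layered detector exploits follows the Beer-Lambert law, I(z) = I0 exp(-alpha z), so the mean absorption depth is roughly 1/alpha. The sketch below prints that depth for a few wavelengths; the alpha values are rough room-temperature literature figures for silicon, quoted as assumptions rather than taken from the thesis.

```python
# Beer-Lambert illustration: blue light is absorbed near the silicon
# surface, red and NIR penetrate far deeper, which is what lets a
# vertically layered detector separate colours by depth.
# Absorption coefficients are approximate literature values (assumed).

alpha_per_cm = {
    "blue (450 nm)": 2.5e4,
    "green (550 nm)": 7.0e3,
    "red (650 nm)": 2.5e3,
    "NIR (850 nm)": 5.0e2,
}

for name, alpha in alpha_per_cm.items():
    depth_um = 1e4 / alpha      # 1/alpha in cm, converted to micrometres
    print(f"{name:15s} mean absorption depth ~ {depth_um:6.1f} um")
```

The printed depths (sub-micrometre for blue, tens of micrometres for NIR) show why the blue-sensitive well sits shallowest and why NIR response extends beyond the visible stack.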
18. Effekten av att balansera bildkvalitet och prestanda : för ökad upplevd säkerhet och en bättre användarupplevelse inom e-handel / The impact of balancing image quality and performance : for increased perceived security and a better user experience in e-trade. Anderson, Maria; Skudrickiene, Jurate, January 2022.
There are several areas of research concerning image quality in relation to performance, as well as the feeling of perceived security on a web page. The main purpose of this study is to find out to what extent image quality and performance (web-page load time) affect how secure and reliable a web page feels, and how the user experience is affected when a good balance between image quality and performance is lacking. We also investigate whether any of these factors are of gender-related significance. We conducted a survey in which respondents were asked to visit three similar example websites that differ only in image resolution. The results suggest that a good balance between image quality and performance both increases trust and provides a better user experience, although in general people seem to put more emphasis on the images being of satisfying quality than on the page's loading time being short. We conclude that users then regard such web pages as more reliable, which increases their tendency to trust the page and complete a purchase, as well as to return to the page in the future.
19. Using blue light reflectance from high-resolution images (6000 dpi) of Scots pine tree rings to reconstruct three centuries of Scottish summer temperatures / Temperaturrekonstruktion av skotska sommartemperaturer med hjälp av blå ljusreflektion från högupplösta skotska tallprover (6000 dpi). De Schutter, Alice; Markendahl, Karin, January 2021.
Advances in scanner technology have made it possible to obtain high-resolution (6000 dpi) images of tree samples. Because such images can better resolve anatomical wood structures, the new technology could benefit dendroclimatology. This study attempts to expand on Rydval et al.'s (2017) previous 800-year reconstruction of Scottish summer temperatures by assessing whether a higher image resolution of samples can improve the accuracy of the region's temperature reconstruction. Two independent blue intensity (BI) chronologies, based on differing image resolutions (6000 dpi and 2400 dpi) of Scots pine samples, were developed and subjected to standard detrending procedures. Raw data from Rydval et al.'s (2017) study was used to develop the chronology based on the 2400 dpi images, while newly acquired data was used for the other chronology. To resolve the primary question of this paper, the characteristics and strength of the two BI chronologies' climatic signals were compared. In addition, the newly acquired data was used to develop a 318-year reconstruction of mean July/August temperatures for Scotland. Calibrations against meteorological data indicated that the improved image resolution did not have a positive effect on the chronology's ability to retain a reliable climatic signal. The study's findings were thus inconclusive in showing that a higher image resolution of Scots pine samples improves the accuracy of temperature reconstructions for Scotland. Future studies are encouraged to investigate the applicability of dendroclimatic software (e.g. CooRecorder) to high image resolutions. From a broader perspective, this study contributes to setting climate change in a more accurate long-term spatiotemporal context, which is crucial for predicting future climate variability and for understanding the role and extent of anthropogenic forcing.
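The core BI measurement reduces to extracting blue-channel reflectance from a calibrated scan and summarising the darkest latewood pixels in each annual ring. The sketch below illustrates that idea on a synthetic scan; the ring boundaries, the latewood percentile and the fabricated image are assumptions, and production workflows use dedicated software such as CooRecorder.

```python
# Blue intensity (BI) sketch: per ring, take the mean blue value of the
# darkest pixels (a latewood proxy). Everything synthetic is an assumption.
import numpy as np
from PIL import Image

def ring_blue_intensity(scan: Image.Image, ring_slices, percentile=15):
    """Mean blue value of the darkest `percentile`% pixels per ring."""
    blue = np.asarray(scan.convert("RGB"))[:, :, 2].astype(float)
    values = []
    for sl in ring_slices:                  # each ring = a column band
        band = blue[:, sl]
        cutoff = np.percentile(band, percentile)
        values.append(band[band <= cutoff].mean())
    return values

# Synthetic 'scan': brightness ramps within each 60-px-wide annual ring
# (earlywood bright -> latewood dark), plus scanner noise.
rng = np.random.default_rng(2)
ramp = np.tile(np.linspace(200, 80, 60), 5)
img_arr = np.clip(ramp[None, :] + rng.normal(0, 8, (100, 300)), 0, 255)
scan = Image.fromarray(np.stack([img_arr] * 3, axis=-1).astype(np.uint8))

rings = [slice(i * 60, (i + 1) * 60) for i in range(5)]
print([round(v, 1) for v in ring_blue_intensity(scan, rings)])
```

Comparing 6000 dpi against 2400 dpi amounts to rerunning this measurement on the same wood at two pixel densities and testing which series calibrates better against instrumental temperatures.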
20. Bewertung, Verarbeitung und segmentbasierte Auswertung sehr hoch auflösender Satellitenbilddaten vor dem Hintergrund landschaftsplanerischer und landschaftsökologischer Anwendungen / Evaluation, processing and segment-based analysis of very high resolution satellite imagery against the background of applications in landscape planning and landscape ecology. Neubert, Marco, 3 March 2006.
In recent years remote sensing has been characterised by dramatic changes, reflected especially in the greatly increased geometric ground resolution of imaging sensors and, as a consequence, in the development of processing and analysis methods. Very high resolution (VHR) satellite imagery, defined by a resolution between 0.5 and 1 m, has existed since the launch of IKONOS at the end of 1999. At about the same time, extremely high resolution digital airborne sensors (0.1 to 0.5 m) were developed. This dissertation is based on IKONOS imagery with a resolution of one metre (panchromatic) and four metres (multispectral). Due to the characteristics of such high-resolution data (e.g. level of detail, high spectral variability, data volume), previously available standard methods of image processing are of limited use. The results of the procedures tested within this work demonstrate that method and software development has not kept pace with the technical innovations; some procedures are only gradually becoming usable for VHR data (e.g. atmospheric-topographic correction). This work also shows that VHR imagery can be analysed only inadequately with traditional pixel-based statistical classifiers. The application of image segmentation methods researched here helps to overcome the drawbacks of pixel-wise procedures, as demonstrated by a comparison of pixel- and segment-based classification. Within a segmentation, homogeneous image areas are merged into regions which form the basis for the subsequent classification; beyond spectral properties, shape, texture and context features are available for this purpose. In the software used, eCognition, these classification features can additionally be expressed in a knowledge base (decision tree) based on fuzzy logic. An evaluation of different, currently available segmentation approaches also shows that a high segmentation quality is achievable with this software. The increasing demand for up-to-date geospatial base data offers an important field of application for VHR remote sensing data; a targeted classification of the imagery can create working bases for the application fields considered here, landscape planning and landscape ecology. The examples given of landscape analyses using segment-based processing of IKONOS data show that a classification accuracy of 90 % and more is achievable. The landscape units delineated by image segmentation can also serve as a basis for calculating landscape metrics. National nature-conservation aims as well as international agreements call for a continuous survey of the landscape inventory and the monitoring of its changes, and remote sensing imagery can support the establishment of automated and operational methods in this field. The example of biotope and land-use mapping illustrates that land-use units can be detected with high precision, although, depending on the analysis method and the data characteristics, the quality of the results does not yet fully meet users' demands, especially concerning the achievable depth of classification. The quality of the results can be further enhanced by using additional data (e.g. GIS data, object elevation models). In summary, this dissertation underlines the trend towards very high resolution digital Earth observation; for a wide use of this kind of data it is essential to further develop automated and operationally usable processing and analysis methods.
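A compact sketch of the segment-based workflow contrasted above (segment, compute per-region features, classify regions rather than pixels) is given below. SLIC superpixels and a random forest stand in for eCognition's proprietary segmentation and fuzzy rule base, and the two-class synthetic scene is an assumption.

```python
# Object-based classification sketch: segmentation -> per-segment
# features (spectral means + a simple shape cue) -> segment classifier.
# Requires scikit-image and scikit-learn; the scene is synthetic.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Synthetic 4-band "scene": left half = class 0, right half = class 1.
scene = rng.normal(0.3, 0.05, (128, 128, 4))
scene[:, 64:, :] += 0.3
truth = np.zeros((128, 128), dtype=int)
truth[:, 64:] = 1

# 1) Segmentation: merge homogeneous pixels into regions.
segments = slic(scene, n_segments=200, compactness=10, channel_axis=-1)

# 2) Per-segment features: mean band values plus region size.
ids = np.unique(segments)
feats = np.array([
    np.r_[scene[segments == i].mean(axis=0), (segments == i).sum()]
    for i in ids
])
labels = np.array([np.bincount(truth[segments == i]).argmax() for i in ids])

# 3) Train on half the segments, test on the rest.
train = rng.random(len(ids)) < 0.5
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(feats[train], labels[train])
print("segment accuracy:", clf.score(feats[~train], labels[~train]))
```

The design choice mirrors the dissertation's argument: averaging within regions suppresses the per-pixel spectral variability of VHR data, and region-level shape and context features become available that a pixel-wise classifier cannot use.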