1.
Forest characterization with high resolution satellite data for a regional inventory. Kelly, Tabatha Rae, 02 May 2009.
QuickBird satellite data were used to examine stem density, basal area, and crown density as potential forest strata to aid volume estimation in a regional inventory program. The classes used for analysis were pine pole, pine sawtimber, hardwood pole, and hardwood sawtimber. Total height, height to live crown, diameter at breast height (dbh), and crown class were measured on 129 field plots used for image classification and accuracy assessment. Supervised classification produced an overall accuracy of 85% with a Kappa of 0.8065. The classification was used to extract mean band values and the percentage of forested pixels. These satellite-derived variables were combined with field measurements such as average basal area and stem density in regression analyses to predict forest characteristics, such as stem density and crown closure, that are indicators of volume variability. The R2 values ranged from 0.0005 to 0.2815 for hardwoods and from 0.0001 to 0.6174 for pines.
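The reported overall accuracy and Kappa follow directly from a classification's confusion matrix. A minimal sketch of that computation (the 4x4 matrix below is illustrative only, not the thesis's actual error matrix):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Compute overall accuracy and Cohen's kappa from a square confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement (overall accuracy)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Hypothetical 4-class confusion matrix (pine pole, pine sawtimber,
# hardwood pole, hardwood sawtimber); counts are illustrative only.
cm = [[30, 2, 1, 0],
      [3, 28, 0, 2],
      [1, 0, 31, 3],
      [0, 1, 2, 25]]
acc, kappa = overall_accuracy_and_kappa(cm)
```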
2.
Deep Learning Approach for Cell Nuclear Pore Detection and Quantification over High Resolution 3D Data. He, Chongyu, 21 December 2023.
The intricate task of segmenting and quantifying cell nuclear pores in high-resolution 3D microscopy data is critical for cellular biology and disease research. This thesis introduces a deep learning pipeline crafted to automate the segmentation and quantification of nuclear pores from high-resolution 3D cell organelle images. Our aim is to refine computational methods capable of handling the data's complexity and size, thus improving accuracy and reducing manual labor in biological image analysis. The developed pipeline incorporates data preprocessing, augmentation strategies, random block sampling, and a three-stage post-processing algorithm. It utilizes a 3D U-Net with a VGG-16 backbone, optimized through cyclical data augmentation and random block sampling to tackle the challenges posed by limited labeled data and the processing of large-scale 3D images. The pipeline has demonstrated its capability to effectively learn and predict nuclear pore structures, achieving improvements in validation metrics compared to baseline models. Our experiments suggest that cyclical augmentation helps prevent overfitting, and random block sampling contributes to managing data imbalance. The post-processing phase successfully automates the quantification of nuclear pores without the need for manual intervention. The proposed pipeline offers an efficient and scalable approach to segmenting and quantifying nuclear pores in 3D microscopy images. Despite the ongoing challenges of computational intensity and data volume, the techniques developed in this study provide insights into the automation of complex biological image analysis tasks, with potential applications extending beyond the detection of nuclear pores. / Master of Science / This thesis outlines a computer program developed to automatically segment and count nuclear pores in 3D cell images, aiding cell and disease research. 
This program aims to handle large, complex image data more effectively, boost accuracy, and cut down the need for manual labor. We created a system that prepares data, applies a technique called augmentation to enrich it, selects specific image sections, and carries out a three-step final analysis. At the core of our program is a 3D U-Net model, a type of deep learning network, that has been enhanced to address the challenges of scarce labeled data and the processing of very large images. The system developed is capable of learning and identifying the structure of nuclear pores in cell images. Our experiments indicate that using augmentation in a cyclical manner during training can prevent overfitting, which is when a model learns the training data too well, and cannot suitably generalize. Selecting certain parts of the images for processing proves helpful with imbalanced data. Additionally, the program can automatically count nuclear pores in the final step. The proposed program is effective for analyzing and counting nuclear pores in 3D cell images and has the potential for broader applications in cell analysis. Despite the challenges of managing large datasets and the significant computational power required, our methods open new possibilities for automating cell studies, with uses extending beyond just nuclear pores.
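Random block sampling of the kind described, cropping fixed-size 3D patches and optionally biasing crops toward labeled voxels to counter class imbalance, can be sketched as follows. The function name, block size, and foreground-biasing scheme are assumptions for illustration, not the thesis implementation:

```python
import numpy as np

def sample_block(volume, labels, block=(64, 64, 64), fg_bias=0.5, rng=None):
    """Randomly crop a 3D block from a volume and its label mask.

    With probability fg_bias, the crop is centered on a labeled (foreground)
    voxel so that rare structures such as nuclear pores appear more often in
    training batches. Illustrative sketch, not the thesis code.
    """
    if rng is None:
        rng = np.random.default_rng()
    shape = np.array(volume.shape)
    bsize = np.array(block)
    fg = np.argwhere(labels > 0)
    if fg.size and rng.random() < fg_bias:
        center = fg[rng.integers(len(fg))]
        # Clamp so the block stays inside the volume.
        start = np.clip(center - bsize // 2, 0, shape - bsize)
    else:
        start = rng.integers(0, shape - bsize + 1)        # uniform random corner
    z, y, x = start
    dz, dy, dx = bsize
    return (volume[z:z+dz, y:y+dy, x:x+dx],
            labels[z:z+dz, y:y+dy, x:x+dx])
```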
3.
Neue Ansätze zur Auswertung und Klassifizierung von sehr hochauflösenden Daten / New approaches to the analysis and classification of very high resolution data. Hoffmann, Andrea, 10 May 2001.
The aerial imaging sector has been undergoing fundamental change for several years. Digital airborne camera systems and high-resolution satellite systems offer new potential for data acquisition and analysis. In the foreseeable future these digital data sets will replace the conventional aerial photograph and substantially change cartography, photogrammetry and remote sensing. The new generation of digital cameras will decisively influence two central areas of cartography: orthomap production and map updating. The demand for up-to-date geospatial base data makes orthoimages particularly interesting for geographic information systems. Until now, only aerial photographs were available as base data for large-scale orthoimage maps (scales > 1:10,000). It is shown that the digital data of the new camera generation can be used operationally for orthomap production. Automated processing lets them meet the demand for fast, up-to-date map products, and their high-precision navigation allows the automated generation of geometrically very accurate data sets that could be achieved only with great effort by conventional means. A comparison with aerial photographs shows and evaluates the differences between the two acquisition systems. Data sets of the digital camera HRSC-A of DLR Adlershof were examined. With the HRSC-A (High Resolution Stereo Camera - Airborne), a pushbroom instrument, and the software developed specifically for processing its data, geoinformation users have for the first time an operational system that produces high-resolution orthoimage data fully digitally and fully automatically. The pixel resolution lies between 10 and 40 cm (flight altitudes of 2,500 to 10,000 m). The simultaneous availability of high-resolution panchromatic and multispectral data, the availability of a high-resolution surface model (x, y: 50 cm or 1 m; z: 10 cm) and the high accuracy of the data sets proved advantageous for the analysis.
The thesis discusses the problems of automated interpretation of high-resolution data, which places new demands on interpretation methods. The richness of detail complicates interpretation; coarser spatial resolutions smooth out the complexity within heterogeneous land cover (especially in urban areas) and thus make automated interpretation easier. It is shown that "classical" methods such as pixel-based classification (supervised or unsupervised) are only of limited suitability for these data. Two new approaches were therefore developed and examined that operate not on single pixels but on regions and objects. A per-parcel approach shows good results: an unsupervised classification first identifies scene components within defined sub-units (parcels) that represent the content of the data set; the classified pixels within each parcel are then extracted and their proportions analyzed. The result is the percentage distribution of scene components per parcel, from which rules are derived that relate the mixtures of components to land cover types, including types not directly represented by any single scene component. The second, object-oriented approach allows the interpretation of individual objects. The image is segmented into homogeneous objects that form the basis for further analysis. The approach combines two strategies: multi-scale segmentation first structures the image into units, with several scale levels available simultaneously, the basic idea being a hierarchical network of image objects; these units are then classified spectrally with a nearest-neighbour classifier or knowledge-based with fuzzy-logic operators operating on the semantic network. The approach shows convincing results for automated building detection and for updating existing vector data sets. Abstracting the information from single pixels to larger semantic units and processing these segments further proved useful. It was also shown that the analysis of urban areas requires the inclusion of surface information: because many image elements are spectrally similar, the height information of the Digital Surface Model makes it possible to separate such classes.
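The per-parcel step, tallying the fraction of each unsupervised class inside each parcel before rules map those mixtures to land cover types, can be sketched in a few lines; the array inputs and function name are illustrative:

```python
import numpy as np

def parcel_fractions(class_map, parcel_map, n_classes):
    """For each parcel id, return the fraction of pixels in each spectral class.

    class_map:  per-pixel cluster labels from an unsupervised classification.
    parcel_map: per-pixel parcel ids (e.g. rasterized cadastral units).
    Illustrative sketch of the per-parcel tallying idea.
    """
    fractions = {}
    for p in np.unique(parcel_map):
        mask = parcel_map == p
        counts = np.bincount(class_map[mask], minlength=n_classes)
        fractions[p] = counts / counts.sum()
    return fractions

# Toy example: two parcels, three scene components.
class_map = np.array([[0, 0, 1],
                      [2, 1, 1]])
parcel_map = np.array([[1, 1, 2],
                       [1, 2, 2]])
fr = parcel_fractions(class_map, parcel_map, 3)
```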
4.
Prise en compte des fluctuations spatio-temporelles pluies-débits pour une meilleure gestion de la ressource en eau et une meilleure évaluation des risques / Taking into account the space-time rainfall-discharge fluctuations to improve water resource management and risk assessment. Hoang, Cong Tuan, 30 November 2011.
Reducing the vulnerability and increasing the resilience of today's societies to heavy precipitation and floods requires a better characterization of their very strong spatio-temporal variability, observable over a wide range of scales. Throughout this thesis we therefore highlight the methodological interest of a multifractal approach as the most appropriate way to analyze and simulate this variability. The thesis first addresses the problem of data quality, which depends closely on the effective temporal resolution of the measurements, and its influence on multifractal analysis and on the determination of the scaling laws of precipitation processes; the consequences for operational hydrology are emphasized. We present the SERQUAL procedure, which quantifies this quality and selects the periods meeting the required quality criteria. A surprising result is that long rainfall time series often have an effective resolution of one hour, and rarely the announced 5 minutes. The thesis then uses the selected data to characterize the temporal structure and extreme behaviour of rainfall. We analyze the sources of uncertainty in the "classical" multifractal parameter-estimation methods and derive improvements that account for, e.g., the finite size of the samples and the limited dynamic range of the sensors. These improvements are used to obtain the multifractal characteristics of high-resolution (5-minute) rainfall for several French departments (namely 38, 78, 83 and 94) and to address the question of precipitation evolution over the last decades in the context of climate change. This study is reinforced by the analysis of radar mosaics for three major events in the Paris region. Finally, the thesis highlights another application of the developed methods: karst hydrology. We discuss the multifractal characteristics of rainfall and discharge processes at different resolutions in two karst watersheds in the south of France, and we analyze, using daily, 30-minute and 3-minute measurements, the rainfall-discharge relationship within the multifractal framework. This is a major step towards defining a multi-scale rainfall-discharge model of karst watershed functioning.
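The trace-moment analysis underlying such multifractal characterizations estimates the scaling exponents K(q) by degrading the series to coarser resolutions and regressing log-moments against log-resolution. A didactic sketch of the idea (not the thesis code; scales and moment orders are arbitrary choices):

```python
import numpy as np

def moment_scaling(series, qs=(0.5, 1.5, 2.0), scales=(1, 2, 4, 8, 16, 32)):
    """Estimate moment scaling exponents K(q) of a nonnegative series.

    The normalized field is averaged over boxes of increasing size s, and
    log <eps^q> is regressed against log(lambda), lambda = n / s, so the
    slope estimates K(q). Didactic sketch of trace-moment analysis.
    """
    x = np.asarray(series, dtype=float)
    x = x / x.mean()                    # normalize so the mean field is 1
    n = len(x)
    K = {}
    for q in qs:
        logm, logl = [], []
        for s in scales:
            m = n // s
            eps = x[:m * s].reshape(m, s).mean(axis=1)   # field at resolution n/s
            logm.append(np.log((eps ** q).mean()))
            logl.append(np.log(n / s))
        K[q] = np.polyfit(logl, logm, 1)[0]              # slope = K(q)
    return K
```

For a homogeneous (non-intermittent) series the moments do not depend on scale, so every K(q) is zero; intermittent rain series give convex, nonzero K(q).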
5.
Utilizing High Resolution Data to Identify Minimum Vehicle Emissions Cases Considering Platoons and EVP. Morozova, Nadezhda S., 22 March 2016.
This paper describes efforts to optimize the parameters for a platoon identification and accommodation algorithm that minimizes vehicle emissions. The algorithm was developed and implemented in the AnyLogic framework, and was validated by comparing it to the results of prior research. A four-module flowchart was developed to analyze the traffic data and to identify platoons. The platoon end time was obtained from the simulation and used to calculate the offset of the downstream intersection. The simulation calculates vehicle emissions with the aid of the VT-Micro microscopic emission model. Optimization experiments were run to determine the relationship between platoon parameters and minimum- and maximum-emission scenarios. Optimal platoon identification parameters were found from these experiments, and the simulation was run with these parameters. The total time of all vehicles in the simulation was also found for minimum and maximum emissions scenarios. Time-space diagrams obtained from the simulations demonstrate that optimized parameters allow all cars to travel through the downstream intersection without waiting, and therefore cause a decrease in emissions by as much as 15.5%.
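VT-Micro models the logarithm of an instantaneous emission rate as a third-degree polynomial in speed and acceleration, with separate calibrated coefficient sets per pollutant and per sign of acceleration. A sketch of the functional form (the coefficient matrix below is a placeholder, not a calibrated VT-Micro set):

```python
import numpy as np

def vt_micro(speed, accel, K):
    """VT-Micro functional form:
        ln(MOE) = sum_{i=0..3} sum_{j=0..3} K[i][j] * v**i * a**j
    K is a 4x4 coefficient matrix; in the real model K is calibrated per
    pollutant (CO, HC, NOx, ...) and per acceleration regime. The values
    passed here are placeholders for illustration.
    """
    v, a = float(speed), float(accel)
    log_moe = sum(K[i][j] * v**i * a**j for i in range(4) for j in range(4))
    return np.exp(log_moe)

# Placeholder coefficients: only the constant term is nonzero, so the
# emission rate is exp(K[0][0]) regardless of speed and acceleration.
K = np.zeros((4, 4))
K[0, 0] = -1.0
rate = vt_micro(10.0, 0.5, K)
```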
This paper also discusses the outcome of efforts to leverage high resolution data (HRD) obtained from the WV-705 corridor in Morgantown, WV. The proposed model was implemented in the AnyLogic framework to simulate this road network of four coordinated signal-controlled intersections. The simulation was also used to calculate vehicle CO, HC, and NOx emissions with the aid of the VT-Micro microscopic emission model. Offsets were varied to determine the optimal values for this network given its traffic volume, signal phase diagram, and vehicle characteristics. A classifier was developed by discriminant analysis based on significant attributes of the HRD; its equation distinguishes the set of timing plans that produce minimum emissions from the set that produce maximum emissions. The work also investigates the potential use of GPS-based and similar priority systems to grant preemption through signalized intersections. Two flowcharts, an EV life cycle and EV preemption (EVP), are developed to model the presence of an emergency vehicle (EV) in the system. Three scenarios are implemented: a base case with no EV, a scenario in which the EV receives EVP only, and a scenario in which the EV receives both signal preemption and right-of-way from other vehicles. The research compares the emissions of these scenarios to determine whether, and to what extent, an EV affects vehicle emissions in the road network. / Master of Science
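A two-class linear discriminant of the kind obtainable from discriminant analysis can be sketched as follows; the feature data, function names, and class labeling are illustrative, not the HRD attributes selected in the thesis:

```python
import numpy as np

def fit_lda_2class(X0, X1):
    """Two-class linear discriminant: w = Sw^{-1} (mu1 - mu0), with the
    decision threshold at the midpoint of the projected class means.
    X0, X1 are (samples x features) arrays for the two classes.
    Illustrative sketch of the discriminant-analysis idea."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    c = w @ (mu0 + mu1) / 2
    return w, c

def classify(x, w, c):
    """Return 1 (e.g. high-emission timing plan) if x projects above the
    threshold, else 0 (low-emission plan)."""
    return int(x @ w > c)
```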