161 |
Region-based classification potential for land-cover classification with Very High spatial Resolution satellite data. Carleer, Alexandre A.P. 14 February 2006 (has links)
Abstract
Since 1999, Very High spatial Resolution (VHR) satellite data (Ikonos-2, QuickBird and OrbView-3) have represented the surface of the Earth in greater detail. However, information extraction by multispectral pixel-based classification has become more complex, owing to the increased internal variability within land-cover units and to the limited spectral resolution of these data.
One possibility is therefore to treat the internal spectral variability of land-cover classes as a valuable source of spatial information that can serve as an additional clue in characterizing and identifying land cover. Moreover, the spatial resolution gap that used to separate satellite images from aerial photographs has narrowed considerably, so the features used in visual interpretation (texture, morphology and context), once transposed to digital analysis, can supplement the spectral features for land-cover classification.
The difficulty of this approach often lies precisely in transposing these visual features to digital analysis.
To overcome this problem, region-based classification can be used. Segmentation, performed before classification, produces regions that are more homogeneous internally than with respect to nearby regions and that represent discrete objects or areas in the image. Each region then becomes a unit of analysis, which avoids much of the structural clutter and makes it possible to measure and use a number of features in addition to the spectral ones. These features can include the area, the perimeter, the compactness, and the degree and kind of texture. Segmentation is one of the only methods that allows morphological features (area, perimeter...) and textural features to be measured over a non-arbitrary neighbourhood. In pixel-based methods, texture is calculated with moving windows that smooth the boundaries between discrete land-cover regions and create between-class texture; this between-class texture can cause an edge effect in the classification.
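As a purely illustrative sketch of this idea (not code from the thesis), per-region features of the kind listed above could be computed from a label image as follows; the use of scikit-image, the single-band intensity image and the exact feature set are assumptions made for the example only.

```python
# Illustrative sketch: per-region features after segmentation (not from the thesis).
# Assumes an integer label image (0 = background) and a single-band intensity image.
import numpy as np
from skimage import measure

def region_features(label_image, intensity_image):
    """Return spectral, morphological and simple textural features per region."""
    feats = []
    for r in measure.regionprops(label_image, intensity_image=intensity_image):
        area = r.area                                        # "surface"
        perimeter = r.perimeter
        compactness = perimeter ** 2 / (4.0 * np.pi * area)  # 1.0 for a disc
        pixels = intensity_image[label_image == r.label]
        feats.append({
            "label": r.label,
            "area": area,
            "perimeter": perimeter,
            "compactness": compactness,
            "mean": float(pixels.mean()),   # spectral feature
            "std": float(pixels.std()),     # within-region texture proxy
        })
    return feats
```

Classifying such per-region feature vectors, rather than per-pixel values, is what avoids the between-class texture created by moving windows.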
In this context, our research focuses on the potential of region-based land-cover classification of VHR satellite data, through the study of the object-extraction capacity of segmentation processes and of the relevance of region features for classifying the land-cover classes in different kinds of Belgian landscapes, always keeping in mind the parallel with visual interpretation, which remains the reference.
Firstly, the assessment of four segmentation algorithms belonging to the two main segmentation categories (contour-based and region-based methods) shows that the contour detection methods are sensitive to local variability, which is precisely the problem we want to overcome. A pre-processing step such as filtering may then be applied, at the risk of losing part of the information. The “region-growing” segmentation, which exploits local variability within the segmentation process itself, appears to be the best compromise for segmenting different kinds of landscape.
Secondly, the features made available by segmentation prove relevant for identifying some land-cover classes in urban/suburban and rural areas. These relevant features are of the same type as those selected visually, which shows that region-based classification comes close to visual interpretation.
The research shows the real usefulness of region-based classification for classifying land cover with VHR satellite data. Even in cases where the features derived from segmentation prove to be of little use, region-based classification retains other advantages: working with regions instead of pixels avoids the salt-and-pepper effect and makes GIS integration easier.
The research also highlights some problems that are independent of region-based classification and recur with VHR satellite data, such as shadows and the limits of the spatial resolution for identifying some land-cover classes.
|
162 |
S-parameter VLSI transmission line analysis. Cooke, Bradly James. January 1989 (has links)
This dissertation investigates the implementation of S-parameter-based network techniques for the analysis of multiconductor, high-speed VLSI integrated circuit and packaging interconnects. The S-parameters can be derived from three categories of input parameters: (1) lossy quasi-static R, L, C and G; (2) lossy frequency-dependent (dispersive) R, L, C and G; and (3) the propagation constants Γ, the characteristic impedance Z_c and the conductor eigencurrents I derived from full-wave analysis. The S-parameter network techniques developed allow for the analysis of periodic waveform excitation, the incorporation of externally measured or calculated scattering-parameter data, and large-system analysis through macro decomposition. The inclusion of non-linear terminations has also been developed.
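For background only (this is the standard network relation, not a result of the dissertation), the scattering parameters of an n-port with a common real reference impedance Z_0 follow from its impedance matrix Z as:

```latex
% Standard Z-to-S conversion for an n-port with reference impedance Z_0 (background only)
S = (Z - Z_0 I)\,(Z + Z_0 I)^{-1}
```

How the line description (R, L, C, G, or Γ, Z_c and the eigencurrents) is assembled into such a matrix is specific to the dissertation and is not reproduced here.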
|
163 |
VLSI REALIZATION OF AHPL DESCRIPTIONS AS STORAGE LOGIC ARRAY. CHIANG, CHEN HUEI. January 1982 (has links)
A methodology for the automatic translation of a Hardware Description Language (HDL) formulation of a VLSI system into a structured, array-type target realization is the subject of this investigation. A particular combination of input HDL and target technology has been implemented as part of the exercise, and a detailed evaluation of the result is presented. The HDL used in the study is AHPL, a synchronous clock-mode language which accepts the description of the hardware at the register transfer level. The target technology selected is the Storage Logic Array (SLA), an evolution of the PLA concept. Use of the SLA has a distinct advantage, notably the ability to sidestep the interconnection routing problem, an expensive and time-consuming process in conventional IC design. Over the past years, an enormous amount of effort has gone into generating layout from an interconnection list; this conventional approach tends to complicate the placement and routing processes in later stages. In this research project the major emphasis has therefore been on extracting relevant global information from the higher-level description to guide the subsequent placement and routing algorithms, effectively generating the lower-level layout directly from the higher-level description. A special version of the AHPL compiler (stage 3) has been developed as part of the project. The SLA data structure formats and the implementation of the Data and Control Sections of the target are described in detail. The evaluation of the results and possibilities for future research are also discussed.
|
164 |
VLSI REALIZATION OF AHPL DESCRIPTION AS SLA, PPLA, & ULA AND THEIR COMPARISONS (CAD). CHEN, DUAN-PING. January 1984 (has links)
Reducing circuit complexity to minimize design turnaround time and maximize chip area utilization is the most evident problem in dealing with VLSI layout. Three approaches have been recommended to reduce circuit complexity: using regular modules as design targets, using hierarchical top-down design as a design methodology, and using CAD as a design tool. These three approaches are the basis of this dissertation project. In this dissertation, three silicon compilers were implemented which take a universal AHPL circuit description as input and automatically translate it into SLA (Storage Logic Array), PPLA (Path Programmable Logic Array), and ULA (Uncommitted Logic Array) chip layouts. The goal is to study different layout algorithms and to derive better algorithms for alternative VLSI structures. In order to make a precise chip-area comparison of these three silicon compilers, real SLA and ULA circuits have been designed. Four typical AHPL descriptions of different circuits of varying complexity were chosen as comparison examples. The results show that the SLA layout generally requires the least area for circuit realization. The PPLA approach is the worst for large-scale circuit realization, while the ULA lies in between.
|
165 |
QUALIFICATION RESEARCH FOR RELIABLE, CUSTOM LSI/VLSI ELECTRONICS. Matsumori, Barry Alan. January 1985 (has links)
No description available.
|
166 |
Thermal characterization of VLSI packaging. Shope, David Allen, 1958- January 1988 (has links)
With electronic packaging becoming more complex, simple hand methods for modeling the thermal performance of a package are insufficient. As computer-aided modeling methods came into use, a test system was developed to verify the predictions produced by such methods. The test system is evaluated for operation and performance. Further, the premise of this type of test (that packaged temperature-sensitive-parameter devices can be accurately calibrated) is investigated using a series of comparative tests. From this information, causes of possible or probable calibration errors are identified and related to the different methodologies and devices used. Finally, conclusions are presented regarding further improvement of the test system and of the methodologies used in this type of testing.
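As a generic illustration of the temperature-sensitive-parameter (TSP) approach referred to above (not the thesis's actual test system), a calibration step might be sketched as follows; the linear model and all numeric values are assumptions for the example.

```python
# Generic TSP calibration sketch (illustrative only, not the thesis's test system).
# A temperature-sensitive parameter (e.g. diode forward voltage at fixed current)
# is calibrated against known oven temperatures, then inverted to estimate
# junction temperature during powered operation.
import numpy as np

# Hypothetical calibration data: oven temperature (degC) vs. measured TSP (V)
t_cal = np.array([25.0, 50.0, 75.0, 100.0, 125.0])
v_cal = np.array([0.650, 0.600, 0.550, 0.500, 0.450])  # ~ -2 mV/degC slope

slope, intercept = np.polyfit(t_cal, v_cal, 1)  # linear model: V = slope*T + intercept

def junction_temperature(v_measured):
    """Estimate junction temperature from a TSP reading via the calibration line."""
    return (v_measured - intercept) / slope

print(junction_temperature(0.52))  # ~90 degC for this assumed calibration
```

Errors in this calibration step are precisely what the comparative tests mentioned above are meant to expose.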
|
167 |
The evaluation of the PODEM algorithm as a mechanism to generate goal states for a sequential circuit test search. Lee, Hoi-Ming Bonny, 1961- January 1988 (has links)
In a VLSI design environment, a more efficient test generation algorithm is definitely needed. This thesis evaluates a test generation algorithm, PODEM, as a mechanism to generate the goal states in a sequential circuit test search system, SCIRTSS. First, a hardware description language, AHPL, is used to describe the behavior of a sequential circuit. Next, SCIRTSS is used to generate the test vectors. Several circuits are evaluated and the experimental results are compared with data from a previous version of SCIRTSS that was implemented with the D-Algorithm. Depending on the number of reconvergent fanouts in a circuit, PODEM is found to be 1 to 23 times faster than the D-Algorithm.
|
168 |
Effect of dissolved carbon dioxide on very-high-gravity fermentation. August 2012 (has links)
The stoichiometric relationship between the carbon dioxide (CO2) generated and the glucose consumed during fermentation can be used to predict glucose consumption, as well as yeast growth, from the measured CO2 concentration. Dissolved CO2 was chosen over off-gas CO2 because of the high solubility of CO2 in the fermentation broth and because it reflects yeast growth more accurately than off-gas CO2. Typical very-high-gravity (VHG) ethanol fermentation is plagued by incomplete glucose utilization and long fermentation times. Aiming to improve substrate utilization and enhance VHG fermentation performance, the characteristics of the dissolved CO2 concentration in fermentation broths of Saccharomyces cerevisiae were studied under batch conditions. Based on this study, a novel control methodology based on dissolved CO2 was developed, and its effectiveness in enhancing VHG fermentation was evaluated in terms of fermentation duration, glucose conversion efficiency and ethanol productivity.
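For illustration (the material balance actually developed in the thesis is not reproduced here), the underlying stoichiometry is that of ethanol fermentation, C6H12O6 -> 2 C2H5OH + 2 CO2; a minimal sketch of converting evolved CO2 to glucose consumed follows. Note that dissolved CO2 is only part of the CO2 generated, which is one reason a fuller material balance is needed.

```python
# Minimal stoichiometric sketch (not the thesis's material balance):
# estimate glucose consumed and ethanol formed from a mass of CO2 evolved,
# using C6H12O6 -> 2 C2H5OH + 2 CO2.
M_GLUCOSE = 180.16  # g/mol
M_ETHANOL = 46.07   # g/mol
M_CO2 = 44.01       # g/mol

def glucose_consumed(co2_grams):
    """Theoretical grams of glucose consumed for a given mass of CO2 evolved."""
    return co2_grams / (2 * M_CO2) * M_GLUCOSE

def ethanol_formed(co2_grams):
    """Theoretical grams of ethanol formed for a given mass of CO2 evolved."""
    return co2_grams / (2 * M_CO2) * 2 * M_ETHANOL

print(glucose_consumed(1.0))  # ~2.05 g glucose per g CO2
print(ethanol_formed(1.0))    # ~1.05 g ethanol per g CO2
```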
Four different initial concentrations (150, 200.05±0.21, 250.32±0.12 and 300.24±0.28 g glucose/L) were used for batch ethanol fermentation without control. Complete substrate depletion was indicated for 150 and 200.05±0.21 g glucose/L by a characteristic abrupt drop in dissolved CO2 concentration. On the other hand, sluggish fermentation and incomplete substrate utilization were observed for 250.32±0.12 and 300.24±0.28 g glucose/L. A material balance equation was developed to compensate for the inability of the dissolved CO2 profiles alone to accurately predict the different growth phases of the yeast.
Dissolved CO2 was controlled at three distinct levels (500, 750 and 1000 mg/L) using aeration rates of 820 and 1300 mL/min for initial concentrations of 259.72±7.96 and 303.92±10.66 g glucose/L. Enhancement of VHG fermentation was achieved in the form of complete glucose utilization, higher ethanol productivities and shorter fermentation durations in comparison to the batches without control. Complete glucose utilization was achieved at ~250 and ~300 g glucose/L in 27.02±0.91 and 36.8±3.56 h, respectively. Irrespective of the control set points and aeration rates, ethanol productivities of 3.98±0.28 g/L-h and 3.44±0.32 g/L-h were obtained for 259.72±7.96 and 303.92±10.66 g glucose/L, respectively. The glucose conversion efficiencies for both 259.85±9.02 and 299.36±6.66 g glucose/L under dissolved CO2 control were on par with those of the batches without control.
|
169 |
Optimal Path Queries in Very Large Spatial Databases. Zhang, Jie. January 2005 (has links)
Researchers have been investigating the optimal route query problem for a long time. Optimal route queries are categorized as either unconstrained or constrained queries. Many main-memory-based algorithms have been developed to deal with the optimal route query problem. Among these, Dijkstra's shortest path algorithm is one of the most popular algorithms for the unconstrained route query problem. The constrained route query problem is more complicated than the unconstrained one, and some constrained route query problems, such as the Traveling Salesman Problem and the Hamiltonian Path Problem, are NP-hard. There are many algorithms dealing with the constrained route query problem, but most of them only solve a specific case. In addition, all of them require that the entire graph reside in main memory. Recently, driven by applications on very large graphs, such as the digital maps managed by Geographic Information Systems (GIS), several disk-based algorithms have been derived using divide-and-conquer techniques to solve the shortest path problem in a very large graph. However, until now little research has been conducted on the disk-based constrained problem.

This thesis presents two algorithms: 1) a new disk-based shortest path algorithm (DiskSPNN), and 2) a new disk-based optimal path algorithm (DiskOP) that answers an optimal route query without passing through a set of forbidden edges in a very large graph. Both algorithms fit within the same divide-and-conquer framework as the existing disk-based shortest path algorithms proposed by Ning Zhang and Heechul Lim. Several techniques, including the query super graph, successor fragments and open boundary node pruning, are proposed to improve the performance of the previous disk-based shortest path algorithms. These techniques are then applied to the DiskOP algorithm with minor changes. The proposed DiskOP algorithm relies on collecting a set of boundary vertices and simultaneously relaxing their adjacent super edges. Even if the forbidden edges are distributed across all the fragments of a graph, the DiskOP algorithm requires little memory. Our experimental results indicate that the DiskSPNN algorithm performs better than the original algorithms with respect to both I/O cost and running time, and that the DiskOP algorithm successfully solves a specific constrained route query problem in a very large graph.
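For background on the unconstrained case, a textbook in-memory Dijkstra sketch is shown below; it is the baseline that disk-based divide-and-conquer methods extend, not the DiskSPNN or DiskOP algorithm itself.

```python
# Textbook in-memory Dijkstra sketch (background only; DiskSPNN/DiskOP not shown).
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]}; returns shortest distances from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny example graph
g = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```

The disk-based setting differs in that the graph is partitioned into fragments on disk and only summaries (super edges between boundary vertices) are relaxed in memory.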
|
170 |
Synaptic rewiring in neuromorphic VLSI for topographic map formation. Bamford, Simeon A. January 2009 (has links)
A generalised model of biological topographic map development is presented which combines both weight plasticity and the formation and elimination of synapses (synaptic rewiring), as well as both activity-dependent and activity-independent processes. The question of whether an activity-dependent process can refine a mapping created by an activity-independent process is investigated using a statistical approach to analysing mapping quality. The model is then implemented in custom mixed-signal VLSI. Novel aspects of this implementation include: (1) a distributed and locally reprogrammable address-event receiver, with which large axonal fan-out does not reduce channel capacity; (2) an analogue current-mode circuit for Euclidean distance calculation which is suitable for operation across multiple chips; (3) slow probabilistic synaptic rewiring driven by (pseudo-)random noise; (4) the application of a very-low-current design technique to improving the stability of weights stored on capacitors; (5) the exploitation of transistor non-ideality to implement partially weight-dependent spike-timing-dependent plasticity; (6) the use of the non-linear capacitance of MOSCAP devices to compensate for other non-linearities. The performance of the chip is characterised, and it is shown that the fabricated chips are capable of implementing the model, resulting in biologically relevant behaviours such as the activity-dependent reduction of the spatial variance of receptive fields. Complementing a fast synaptic weight-change mechanism with a slow synapse-rewiring mechanism is suggested as a method of increasing the stability of learned patterns.
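As a generic reference for the plasticity rule mentioned in point (5) above (the chip's exact partially weight-dependent variant and its parameters are not reproduced here), a standard pair-based, weight-dependent STDP update might be sketched as:

```python
# Generic pair-based STDP sketch (illustrative; parameter values are assumptions,
# not the chip's circuit behaviour).
import math

def stdp_dw(dt, w, w_max=1.0, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Weight change for a spike pair separated by dt = t_post - t_pre (seconds)."""
    if dt > 0:   # pre before post: potentiate, scaled by remaining headroom
        return a_plus * (w_max - w) * math.exp(-dt / tau)
    else:        # post before pre: depress, scaled by the current weight
        return -a_minus * w * math.exp(dt / tau)

w = 0.5
w += stdp_dw(5e-3, w)    # potentiation for a causal pair
w += stdp_dw(-5e-3, w)   # depression for an anti-causal pair
print(w)
```

Scaling the update by the current weight (or its headroom) is one common way of making STDP weight-dependent; the thesis exploits transistor non-ideality to obtain a partial dependence of this kind in hardware.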
|