131

A unified architecture for adaption of mobile system behaviour : implementation of the vertical handoff paradigm

Hatzikonstantinou, Antonios C. January 2009
Independent attempts by the computing and telecommunications worlds to provide wireless access have resulted in networks with very different characteristics. This has given rise to a number of issues, such as the management of user mobility, power consumption and contingency when a communication session is interrupted, the result being significant Quality of Service (QoS) fluctuation. A particular issue arises in an environment where more than one wireless network offers coverage over the same area, making the decision about which network to connect to rather complex. The thesis offers the hypothesis that a unified architecture can meet the current QoS demands of QoS-sensitive applications. It provides a framework that can effectively include, activate, configure and orchestrate disparate communication enhancement tools. The architecture must be extensible and flexible, and must provide means for the incorporated modules to interoperate. Finally, it must allow QoS requirements to be specified in platform-independent ways, and QoS adaptation to be effected under application, application-manager or user control in a continuous and dynamic way. The Vertical Handoff paradigm is studied as an illustration of a communication enhancement module of this architecture. Consider a two-layer wireless network: layer one offers high bandwidth but short-range coverage, while layer two offers low bandwidth but wide-range coverage. A mobile host moving fast will experience high bandwidth with frequent handovers when connected to layer one, or low bandwidth with fewer handover interruptions when connected to the larger cells of layer two. This and other scenarios are studied in this research in order to test the hypothesis that the use of a local and/or network cache can result in significant performance enhancement, in terms of the frequency of interruptions due to handovers and the overall quality of the communication service, depending on factors such as the host's mobility and the population density.
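The bandwidth/handover trade-off in the two-layer scenario can be made concrete with a back-of-the-envelope model. The sketch below is illustrative only, not the thesis's simulator; the cell radii, bandwidths and per-handover outage are assumed values.

```python
# Illustrative sketch (not the thesis code): expected handover rate and
# effective throughput for a mobile host crossing circular cells.
# All parameter values are assumed for illustration.

def handover_rate(speed_mps, cell_radius_m):
    """Approximate handovers per second for straight-line motion:
    one handover each time the host crosses a cell diameter."""
    return speed_mps / (2.0 * cell_radius_m)

def effective_throughput(bandwidth_bps, speed_mps, cell_radius_m,
                         handover_outage_s):
    """Throughput after discounting the outage incurred at each handover."""
    rate = handover_rate(speed_mps, cell_radius_m)
    outage_fraction = min(1.0, rate * handover_outage_s)
    return bandwidth_bps * (1.0 - outage_fraction)

# Layer 1: small, high-bandwidth cells; layer 2: large, low-bandwidth cells.
for name, bw, radius in [("layer 1 (WLAN-like)", 54e6, 100.0),
                         ("layer 2 (cellular-like)", 2e6, 2000.0)]:
    for v in (1.0, 15.0, 30.0):  # pedestrian, urban vehicle, highway (m/s)
        tput = effective_throughput(bw, v, radius, handover_outage_s=1.0)
        print(f"{name}, v={v:5.1f} m/s: {tput/1e6:7.2f} Mb/s effective")
```

Under these assumptions the small high-bandwidth cells win for slow hosts, while fast hosts lose so much time to handover outages that the larger, slower cells deliver better effective throughput; this is exactly the situation a vertical handoff policy must detect.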
132

The relationship between management and control planes for delivering quality of service in multi-service networks

Griffin, David Philip January 2009
The management and control planes are intrinsic components of communications networks. Control plane functions, such as routing, are on-line mechanisms responsible for establishing and maintaining the way data is routed and forwarded through the network and, as such, are usually embedded within network elements. Management plane functions, on the other hand, include tasks such as network design, planning, configuration and fault handling. Management algorithms, such as traffic engineering logic for link-weight optimisation, are off-line components operating over longer time-scales with a broader scope, and are usually housed in separate management workstations. Delivering Quality of Service (QoS) in multi-service networks is a non-trivial task that involves both on-line mechanisms, for traffic differentiation and resource scheduling, and off-line functions, for service definition, resource planning and traffic engineering. This thesis addresses the question of the degree to which off-line management plane functions and on-line control plane functions should cooperate in order to provide the benefits of network-wide optimisation while remaining responsive to actual network conditions, without burdening the network equipment with computationally expensive logic. The approach taken is to design an architecture encompassing all functions involved in QoS delivery, ranging from service definition, through network planning and provisioning, to network configuration, routing and forwarding. Interactions in the form of provisioning cycles are defined to facilitate the necessary cooperation between off-line management plane algorithms and on-line control plane mechanisms. Two specific mechanisms are designed and evaluated. Firstly, Dynamic Resource Management (DRsM) configures the bandwidth allocated to Per Hop Behaviours in Differentiated Services networks. This is achieved in cooperation with off-line traffic engineering functions that identify the required resources according to predicted traffic levels. Because actual demand may differ significantly from predicted demand, DRsM is responsible for tuning the allocation of resources according to actual conditions. Secondly, QoS enhancements to the Border Gateway Protocol (q-BGP) include QoS attribute values in BGP UPDATE messages, as well as extensions to the route selection process to make it QoS-aware. Off-line management plane algorithms define the attributes to be propagated and the route selection policies to be adopted by the control plane.
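A minimal sketch of the DRsM idea — tuning per-PHB bandwidth between provisioning cycles — is given below. The update rule, gain and minimum-share floor are assumptions for illustration, not the algorithm specified in the thesis.

```python
# A minimal sketch of DRsM-style tuning (assumed logic, not the thesis code):
# start from the off-line provisioned allocation per PHB, then nudge each
# allocation toward measured demand while respecting total link capacity.

def retune(provisioned, measured, capacity, gain=0.5, floor=0.05):
    """provisioned/measured: dict PHB -> Mb/s; returns a new allocation."""
    target = {phb: provisioned[phb] + gain * (measured[phb] - provisioned[phb])
              for phb in provisioned}
    # Guarantee a minimum share, then scale down proportionally if needed.
    target = {phb: max(v, floor * capacity) for phb, v in target.items()}
    total = sum(target.values())
    if total > capacity:
        target = {phb: v * capacity / total for phb, v in target.items()}
    return target

provisioned = {"EF": 30.0, "AF1": 40.0, "BE": 30.0}   # from off-line TE
measured = {"EF": 45.0, "AF1": 25.0, "BE": 35.0}      # actual conditions
print(retune(provisioned, measured, capacity=100.0))
```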
133

Towards automatic traffic classification and estimation for available bandwidth in IP networks

Lai, Zhaohong January 2008
Growing rapidly, today's Internet is becoming more difficult to manage. A good understanding of which traffic classes are consuming network resources, as well as how much network resource is available, is important for many management tasks, such as QoS provisioning and traffic engineering. In the light of these objectives, two measurement mechanisms are explored in this thesis. First, the thesis explores a new type of traffic classification scheme with automatic and accurate identification capability. The novel concept of the IP flow profile, a unique identifier of the associated traffic class, is proposed, and a model using five IP-header-based contexts is presented. The thesis then shows that the key statistical features of each context in the IP flow profile follow a Gaussian distribution, and explores how to use a Kohonen Neural Network (KNN) to automatically produce an IP flow profile map. In order to improve the classification accuracy, the thesis investigates and evaluates the use of Principal Component Analysis (PCA) for feature selection, which enables the produced patterns to be as tight as possible, since tight patterns lead to fewer overlaps among patterns. In addition, the use of Linear Discriminant Analysis and alternative KNN maps is investigated to deal with the overlap between produced patterns. The entirety of this process represents a novel addition to the quest for automatic traffic classification in IP networks. The thesis also develops a fast available bandwidth measurement scheme. It first addresses the dynamic problem of one-way delay (OWD) trend detection, proposing a novel model, the Asymptotic OWD Comparison (AOC) model. Three statistical metrics, SOT (Sum of Trend), PTC (Positive Trend Checking) and CTC (Complete Trend Comparison), are then proposed to develop the AOC algorithms. To validate the proposed AOC model, an available bandwidth estimation tool called Pathpair has been developed and evaluated in the PlanetLab environment.
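To make the OWD trend idea concrete, the sketch below computes simple trend statistics over a probe train's one-way delays. The abstract does not define SOT, PTC and CTC precisely, so these are plausible stand-ins (PTC here resembles the pairwise-comparison tests used by tools such as pathload), and the 0.6 decision threshold is an assumption.

```python
# Hedged sketch: simple one-way-delay (OWD) trend statistics over a probe
# train. The exact SOT/PTC/CTC formulas are not given in the abstract, so
# these are illustrative interpretations only.

def trend_stats(owd):
    n = len(owd)
    inc = [owd[i + 1] - owd[i] for i in range(n - 1)]
    sot = sum(inc)                                       # net delay drift
    ptc = sum(d > 0 for d in inc) / len(inc)             # positive-step fraction
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    ctc = sum(owd[j] > owd[i] for i, j in pairs) / len(pairs)
    return sot, ptc, ctc

# A train whose OWDs rise suggests the probing rate exceeds the avail-bw.
rising = [10.0, 10.4, 10.3, 11.0, 11.6, 11.5, 12.2, 12.8]
sot, ptc, ctc = trend_stats(rising)
print(f"SOT={sot:.2f} ms, PTC={ptc:.2f}, CTC={ctc:.2f}")
if ptc > 0.6 and ctc > 0.6:   # assumed thresholds
    print("increasing OWD trend -> probing rate above available bandwidth")
```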
134

Random access MAC protocols and system monitoring methodology in wireless mesh networks

Huang, Feiyi January 2008
As an extension of wireless ad hoc and sensor networks, wireless mesh networks (WMNs) have recently been developed as a key solution to providing high-quality multimedia services and applications, such as voice, data and video, over wireless personal area networks (WPANs), wireless local area networks (WLANs) and wireless metropolitan area networks (WMANs). A WMN usually has a hierarchical network infrastructure, with backbone and access networks operated in both ad hoc and centralised modes, with self-organisation and self-configuration capabilities. Along with this flexibility, WMNs bring several problems and requirements. In this thesis, problems and challenges such as packet collisions, interference and security issues are first discussed and existing solutions reviewed. Three innovative random access MAC protocols are then proposed for wireless mesh access networks, followed by comprehensive analysis and discussion. Moreover, in order to detect misbehaviour of wireless terminals and abnormal performance of applications, the network traffic flow concept from wired IP networks is extended to WMNs, with "Meshflow" defined. Based on this new concept, a comprehensive framework is designed for the wireless mesh backbone network to monitor users, routers, applications and services, so as to achieve anomaly and intrusion detection, malicious user identification and traceback.
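As an illustration of the flow-monitoring idea, the sketch below aggregates per-flow counters keyed on an extended tuple and flags flows that exceed a crude volume threshold. The field names, the extra mesh-router key and the threshold are assumptions; the thesis's "Meshflow" definition is richer than this.

```python
# Illustrative flow-record aggregation in the spirit of "Meshflow"
# (keys and thresholds are assumed, not the thesis definition).
from collections import defaultdict

flows = defaultdict(lambda: {"pkts": 0, "bytes": 0})

def account(src, dst, sport, dport, proto, length, mesh_router):
    # Extend the classic 5-tuple flow key with the observing mesh router.
    key = (src, dst, sport, dport, proto, mesh_router)
    flows[key]["pkts"] += 1
    flows[key]["bytes"] += length

def suspicious(max_pkts=10_000):
    """Flag flows whose packet count exceeds a crude volume threshold."""
    return [key for key, c in flows.items() if c["pkts"] > max_pkts]

account("10.0.0.5", "10.0.1.9", 5353, 80, "TCP", 1460, mesh_router="MR-3")
print(suspicious())  # [] until some flow crosses the threshold
```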
135

Sigma delta modulation of a chaotic signal

Ushaw, Gary January 1996
Sigma delta modulation (SDM) has become a widespread method of analogue-to-digital conversion; however, its operation has not been completely defined. The majority of the analysis carried out on the circuit has been from a linear standpoint, with non-linear analysis hinting at hidden complexities in the modulator's operation. The sigma delta modulator itself is a non-linear system, consisting as it does of a number of integrators and a one-bit quantiser in a feedback loop. This configuration can be generalised as a non-linearity within a feedback path, which is a classic route to chaotic behaviour. This initially raises the prospect that a sigma delta modulator may be capable of chaotic modes of operation with a non-chaotic input. In fact, the problem does not arise, and we show why not. To facilitate this investigation, a set of differential equations is formulated to represent SDM; these equations are subsequently utilised in a stability study of the sigma delta modulator. Of more interest, and more uncertainty, is the effect sigma delta modulation may have on a chaotic signal. If SDM makes a chaotic signal more chaotic, this will have serious repercussions on the predictability of that signal. In the past, analysis of the circuit has tended to be based around a steady-state input or a slowly moving non-chaotic input such as a low-frequency sine wave. This has greatly eased the complexity of such analyses, but it does not address the problem at hand. In this thesis we present the results of comparing the sigma delta modulation of a chaotic signal to a direct quantisation of the same signal. The tool we use to investigate this is the Lyapunov spectrum of the time series, measured using an algorithm developed at Edinburgh University. The Lyapunov exponents of a chaotic signal are presented before and after both SDM and direct quantisation, and it is shown that SDM does not increase the chaos of the signal. Indeed, it is shown that SDM has no more effect on the predictability of the signal, as measured by the Lyapunov spectrum, than direct quantisation. As such, we conclude that sigma delta modulation provides a reliable method for analogue-to-digital conversion of chaotic signals. It should be pointed out that, due to the incompleteness of rigorous analysis of SDM and the complex processes involved in applying such analysis to a chaotic signal, the results of this thesis are largely based upon experimentation and observation from a simulation of a sigma delta modulator.
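The basic experiment is easy to reproduce in miniature: drive a first-order sigma delta loop with a chaotic signal and compare against direct quantisation. The sketch below is a didactic toy, not the thesis simulator (which used higher-order loops and a Lyapunov-spectrum estimator); the logistic-map parameters and oversampling ratio are assumed.

```python
# A minimal first-order sigma-delta modulator driven by a chaotic
# logistic-map signal, compared with direct quantisation (illustrative only).

def logistic_signal(n, r=3.9, x0=0.4):
    """Chaotic logistic-map series mapped from [0,1] to [-1,1]."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(2.0 * x - 1.0)
    return xs

def sigma_delta(signal, oversample=16):
    """First-order SDM: integrate the error, quantise to +/-1, feed back."""
    acc, y, bits = 0.0, 0.0, []
    for x in signal:
        for _ in range(oversample):      # hold each sample (oversampling)
            acc += x - y
            y = 1.0 if acc >= 0.0 else -1.0
            bits.append(y)
    return bits

def direct_quantise(signal, levels=256):
    """Uniform mid-rise quantisation of the same signal for comparison."""
    step = 2.0 / levels
    return [step * (int(x / step) + 0.5) for x in signal]

sig = logistic_signal(200)
bits = sigma_delta(sig)
# Decimate the bitstream back to one value per input sample by averaging.
recon = [sum(bits[i * 16:(i + 1) * 16]) / 16.0 for i in range(len(sig))]
err = sum(abs(a - b) for a, b in zip(sig, recon)) / len(sig)
print("mean SDM reconstruction error:", err)
print("first directly quantised samples:", direct_quantise(sig)[:3])
```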
136

Near maximum likelihood multiuser receivers for direct sequence code division multiple access

Sim, Hak Keong January 2000
Wideband wireless access based on direct-sequence code-division multiple access (DS-CDMA) has been adopted for third-generation mobile communications systems; DS-CDMA downlink communications therefore form the platform for the work in this thesis. The principles of the spread spectrum concept and DS-CDMA technology are first outlined, including a description of the system model and the conventional receiver. The two classes of codes used in such systems, namely spreading codes and forward error correction codes (including Turbo codes), are discussed. Because practical communications channels are non-ideal, the performance of an individual user is interference-limited, and the capacity of the system is greatly restricted as a result. Fortunately, multiuser detection is a scheme that can effectively counteract this multiple access interference. However, the optimum multiuser detection scheme is far too computationally intensive for practical use. Hence, the fundamental interest here is to retain the advantages of multiuser detection while simplifying its implementation. The objective of the thesis is to investigate the optimum multiuser receiver, considered on a chip-level sampling basis. The aim is to reduce the complexity of the optimum receiver to a practical and implementable level while retaining its good performance. The thesis first reviews various existing multiuser receivers. The chip-based maximum likelihood sequence estimation (CBMLSE) detector is formulated and implemented. However, the number of states in the state-transition trellis is exponential in the number of users, and complexity cannot be reduced substantially without changing the structure of the trellis. A new detector is therefore proposed which folds up the original state-transition trellis so that the number of states involved is greatly reduced; its performance is close to that of the CBMLSE. The folded trellis detector (FTD) can also be used as a preselection stage for the CBMLSE: the FTD selects, with high accuracy, the few symbol vectors most likely to have been transmitted, and the CBMLSE then determines the most likely symbol vector from this small subset. The performance of this scheme is as good as that of the CBMLSE. The FTD is also applied in an iterative multiuser receiver that exploits the powerful iterative algorithm of Turbo codes.
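The exponential complexity that motivates the folded trellis detector is visible in the brute-force baseline: optimum joint detection must search all 2^K bit hypotheses. The sketch below shows that baseline for a small synchronous DS-CDMA system; spreading codes, noise level and user count are assumed illustrative values, and it is not the thesis's (trellis-based) implementation.

```python
# Hedged sketch: exhaustive maximum-likelihood multiuser detection for a
# synchronous DS-CDMA link -- the 2^K search the folded-trellis detector
# is designed to avoid. All parameters are illustrative.
import itertools
import numpy as np

def ml_detect(r, S):
    """Choose the bit vector b in {-1,+1}^K minimising ||r - S b||^2."""
    K = S.shape[1]
    best_b, best_cost = None, float("inf")
    for b in itertools.product((-1.0, 1.0), repeat=K):   # 2^K hypotheses
        cost = np.sum((r - S @ np.array(b)) ** 2)
        if cost < best_cost:
            best_b, best_cost = b, cost
    return np.array(best_b)

rng = np.random.default_rng(1)
K, N = 3, 8                                # users, chips per symbol
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # spreading matrix
b_true = rng.choice([-1.0, 1.0], size=K)
r = S @ b_true + 0.3 * rng.standard_normal(N)           # noisy received chips
print("sent:", b_true, " detected:", ml_detect(r, S))
```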
137

A security protocol for authentication of binding updates in Mobile IPv6

Georgiades, Andrew January 2011
Wireless communication technologies have come a long way, improving with every generational leap, and as communications evolve so do the system architectures, models and paradigms. Improvements in security have been seen in the jump from 2G to 3G networks, yet security issues persist and will continue to plague mobile communications in the leap towards 4G networks if not addressed. 4G will be based on the transmission of Internet packets only, using an architecture known as Mobile IP. This will offer many advantages; however, security is still a fundamental issue to be resolved. One particular security issue involves the route optimisation technique, which deals with binding updates. Route optimisation allows the correspondent node to bypass the home agent router and communicate directly with the mobile node. There are a variety of security vulnerabilities associated with binding updates, including the interception of data packets, which would allow an attacker to eavesdrop on their contents, breaching the user's confidentiality, or to modify transmitted packets for the attacker's own malicious purposes. Other possible vulnerabilities of Mobile IP include address spoofing, redirection and denial-of-service attacks. For many of these attacks, all the attacker needs to know is the IPv6 addresses of the mobile node's home agent and the correspondent node. A variety of security solutions exist to prevent these attacks; two of the main ones are cryptography and authentication. Cryptography scrambles the transmitted data in an undecipherable way, so that any intercepted packets are illegible to the attacker; only the party possessing the relevant key can decrypt the message. Authentication is the process of verifying the identity of the user or device one is communicating with. Different authentication architectures exist, but many of them rely on a central server to verify users, creating a possible single point of attack. Decentralised authentication mechanisms are more appropriate to the nature of Mobile IP, and several protocols are discussed. However, they all possess flaws, whether they are overly resource-intensive or give away vital address data that can be used to mount an attack. As a result, location privacy is investigated as a possible means of hiding this sensitive data.
Finally, a security solution is proposed that addresses the security vulnerabilities found in binding updates and attempts to overcome the weaknesses of the examined security solutions. The security protocol proposed in this research involves three new security techniques. The first is a combined solution using Cryptographically Generated Addresses and Return Routability, which are already established solutions, together with a new authentication procedure, to create the Distributed Authentication Protocol, which aids privacy, integrity and authentication. The second is an enhancement to Return Routability called Dual Identity Return Routability, which provides location-verification authentication for multiple identities on the same device. The third security technique is called Mobile Home Agents, which provides device and user authentication while introducing location privacy and optimised communication routing. The three techniques can be used together or individually, and each must be passed before the binding update is accepted.
Cryptographically Generated Addresses assert the user's ownership of the IPv6 address by generating the interface identifier from a cryptographic one-way hash of the user's public key and auxiliary parameters. The binding between the public key and the address can be verified by recomputing the hash value and comparing it with the interface identifier. This method proves ownership of the address; however, it does not prove that the address is reachable. After establishing address ownership, Return Routability sends two security tokens to the mobile node, one directly and one via the home agent. The mobile node combines them to create an encryption key, called the binding key, allowing the binding update to be sent securely to the correspondent node. This technique validates the mobile node's location and proves its ownership of the home agent. Return Routability thus provides a test that the node is reachable, but it does not verify that the IPv6 address is owned by the user; combining it with Cryptographically Generated Addresses provides the best of both worlds. The third aspect of the first security solution introduces a decentralised authentication mechanism. The correspondent node requests the authentication data from both the mobile node and the home agent. The mobile node sends the data in plain text (which could be encrypted with the binding key) and the home agent sends a hash of the data. The correspondent node then converts the data so that both are hashes and compares them; if they match, authentication is successful. This provides device and user authentication which, when combined with Cryptographically Generated Addresses and Return Routability, creates a robust security solution called the Distributed Authentication Protocol. The second new technique is designed as an enhancement to an existing security solution. Dual Identity Return Routability builds on the concept of Return Routability by providing two Mobile IPv6 addresses on a mobile device, giving the user two separate identities. After establishing address ownership with Cryptographically Generated Addresses, Dual Identity Return Routability sends security data to both identities, each on a separate network and each having its own home agent; the mobile node then combines them to create the binding key, allowing the binding update to be sent securely to the correspondent node. This technique protects against address spoofing, as an attacker needs two separate IP addresses that are linked together; spoofing only a single address will not pass this security check. One drawback of the techniques described so far, however, is that none of them provides location privacy to hide the user's IP address from attackers, and an attacker cannot mount a direct attack if the user is invisible. The third new security solution is therefore Mobile Home Agents: software agents that provide location privacy to the mobile node by acting as a proxy between it and the network. The Mobile Home Agent resides on the point of attachment and migrates to a new point of attachment at the same time as the mobile node, providing reduced-latency communication and a secure environment for the mobile node.
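The two building blocks just described can be sketched in a few lines. The sketch below is a drastic simplification for illustration: real CGA (RFC 3972) and Return Routability (as standardised for Mobile IPv6) involve more fields, security parameters and checks, and the hash choices and truncation here are assumptions.

```python
# Simplified sketch of CGA address-ownership verification and the
# Return Routability binding key (illustrative, not standards-compliant).
import hashlib
import os

def cga_interface_id(public_key: bytes, modifier: bytes) -> bytes:
    """Interface identifier derived from a one-way hash of the public key."""
    return hashlib.sha256(modifier + public_key).digest()[:8]  # 64-bit IID

def verify_cga(iid: bytes, public_key: bytes, modifier: bytes) -> bool:
    """Recompute the hash and compare with the claimed interface identifier."""
    return iid == cga_interface_id(public_key, modifier)

def binding_key(home_token: bytes, careof_token: bytes) -> bytes:
    """Combine the two Return Routability tokens into the binding key."""
    return hashlib.sha256(home_token + careof_token).digest()

pub, mod = os.urandom(32), os.urandom(16)
iid = cga_interface_id(pub, mod)
assert verify_cga(iid, pub, mod)                  # address ownership check
kbm = binding_key(os.urandom(8), os.urandom(8))   # tokens sent via two paths
print("interface id:", iid.hex(), " binding key:", kbm.hex()[:16], "...")
```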
These solutions can be used separately or combined to form a comprehensive security solution, demonstrated in this thesis, which attempts to provide proof of address ownership, reachability, user and device authentication, location privacy and a reduction in communication latency. All these security features are designed to protect against one of the most devastating attacks in Mobile IPv6, the false binding update, which can allow an attacker to impersonate the mobile node and deny it service by redirecting all data packets to itself. The solutions are simulated with different scenarios and network configurations and with a variety of attacks that attempt to send a false binding update to the correspondent node. The results were collected and analysed to provide conclusive evidence that the proposed solutions are effective and robust in protecting against false binding updates, creating a safe and secure network for all.
138

Development of security strategies using Kerberos in wireless networks

Ever, Yoney Kirsal January 2011
Authentication is the primary function used to reduce the risk of illegitimate access to the IT services of an organisation, and Kerberos is a widely used protocol for authentication and access control. This thesis presents the development of security strategies using the Kerberos authentication protocol in wireless networks: a Kerberos key-exchange protocol; Kerberos with timed delay; Kerberos with timed delay and delayed decryption; and Kerberos with timed delay, delayed decryption and password-encryption properties. The thesis also includes a number of related contributions, such as frequent key renewal under pseudo-secure conditions and temporarily shutting the authentication server off from external access to allow for secure key exchange. A general approach for the analysis and verification of authentication properties, and of the Kerberos authentication protocol itself, is presented. Existing authentication mechanisms coupled with strong encryption techniques are considered, investigated and analysed in detail; the IEEE 802.1X standard and IEEE 802.11 wireless communication networks are also considered. First, existing security and authentication approaches for Kerberos are critically analysed, with discussion of their merits and weaknesses, and the relevant terminology is defined and explained. Kerberos exhibits some vulnerabilities, and existing solutions have not treated the possibility of more than one authentication server in a strict sense; a three-way authentication mechanism offers a possible solution to this problem, and an authentication protocol has been developed to improve the three-way authentication mechanism for Kerberos. Dynamically renewing keys under pseudo-secure situations involves a temporary interruption to link/server access. After describing and analysing a protocol to achieve improved security for authentication, an analytical method is used to evaluate the cost in terms of the degradation of system performability, and various results are presented. An approach involving a new authentication protocol is then proposed. This approach combines delayed decryption with timed authentication, using passwords and session keys for authentication purposes, together with frequent key renewal under secure conditions. The analysis and verification of the authentication properties and the results of the designed protocol are presented and discussed. Protocols often fail when they are analysed critically, and formal approaches have emerged to analyse such failures. Abstract languages are designed especially for the description of communication patterns, and a notion of rank functions is introduced for analysis purposes. An application of this formal approach to a newly designed authentication protocol that combines delayed decryption with timed authentication is presented. Formal methods for verifying cryptographic protocols help to ensure that authentication protocols meet their specifications. Model checking techniques, such as Communicating Sequential Processes (CSP) with the Failures-Divergences Refinement (FDR) checker, are widely acknowledged as revealing flaws in protocols more effectively and efficiently than most alternatives. Essentially, model checking involves a detailed search of all the states reachable by the components of a protocol model.
In the models that describe authentication protocols, the components, regarded as processes, are the principals, including the intruder (attacker), and the parameters for authentication such as keys, nonces, tickets and certificates. In this research, an automated generation tool, Casper, is used to produce CSP descriptions. The proposed protocol models rely on trusted third parties in authentication transactions, while intruder capabilities are based on possible inductions and deductions. This research attempts to combine the two methods of model checking in order to realise an abstract description of an intruder with enhanced capabilities; the target protocol of interest is the Kerberos authentication protocol. The process of strengthening security mechanisms usually impacts performance thresholds. In recognition of this fact, the research adopts an analytical method known as spectral expansion to ascertain the level of impact that the resulting protocol amendments have on performance. Spectral expansion is based on state exploration, which implies that it is subject, like model checking, to the state explosion problem. The performance characteristics of the amended protocols are examined relative to the existing protocols, and numerical solutions are presented for all models developed.
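As a toy illustration of the timed-authentication idea discussed above — a verifier accepting a credential only while it is fresh — the sketch below checks a message authentication code and a freshness window. It is an assumption-laden simplification, not the thesis protocol and not a real Kerberos exchange; the token format, hash choice and window are invented for illustration.

```python
# Toy timed-authentication check: MAC integrity plus a freshness window
# (illustrative only; real Kerberos tickets and the thesis protocol differ).
import hashlib
import hmac
import time

def make_token(key: bytes, principal: str, now: float):
    msg = f"{principal}|{now:.3f}".encode()
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def verify_token(key: bytes, msg: bytes, tag: bytes,
                 window_s: float = 5.0) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False                                 # MAC check failed
    sent = float(msg.rsplit(b"|", 1)[1].decode())
    return abs(time.time() - sent) <= window_s       # freshness window

key = b"shared-session-key-0123456789abcd"
msg, tag = make_token(key, "client@REALM", time.time())
print("fresh token accepted:", verify_token(key, msg, tag))          # True
old_msg, old_tag = make_token(key, "client@REALM", time.time() - 60)
print("stale token accepted:", verify_token(key, old_msg, old_tag))  # False
```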
139

Green radio communication networks applying radio-over-fibre technology for wireless access

Al Noor, Mazin January 2012
Wireless communication is increasingly becoming the first-choice link into the global information society. It is an essential part of broadband communication networks, due to its capacity to cover the end-user domain, outdoors or indoors. The use of mobile phones and broadband has already exceeded that of fixed telephones and has caused tremendous changes in people's lives, not least as recognised in recent political upheavals. The ubiquity of wireless communication links, combined with functions that support mobility, will make a roaming person-bound communication network possible in the near future. This idea of a personal network, in which users have their own communication environment available everywhere, necessitates immense numbers of radio access points to maintain the wireless links and support mobility. The progress towards "all-around wireless" needs low-budget, easily maintainable radio access points, with simplified signal processing and consolidation of the radio network functions in a central station. The RF energy consumption of mobile base stations is one of the main problems in wireless communication systems and has led to worldwide research into so-called green communication, which offers an environmentally friendly and cost-effective solution. In order to extend networks and mobility support, the simplification of antenna stations and broadband communication capacity become increasingly urgent demands, as does the extension of the wireless signal transmission distance so that signal processing can be consolidated at a centralised site. Radio-over-Fibre (RoF) technology was considered and found to be the most promising solution for the effective delivery of wireless and baseband signals, and for reducing RF energy consumption. The overall aim of this research project was to simulate the transmission of wireless and baseband RF signals via fibre over long distances at high quality, while consuming a low power budget. The thesis therefore demonstrates a green radio communication network and the advantage of transmitting signals via fibre rather than via air.
The contributions of this research are as follows. Firstly, a comparison of the power consumption of WiMAX via air and via fibre is presented. As shown in the simulation results, the power budget for the transmission of 64-QAM WiMAX IEEE 802.16-2005 via air over a distance of 5 km lies at -189.67 dB, whereas for transmission via RoF over a distance of 140 km the power budget is 65 dB. Through the deployment of a triple symmetrical compensator technique, consisting of single-mode fibre (SMF), dispersion-compensating fibre (DCF) and a fibre Bragg grating (FBG), the transmission distance of the 54 Mbps WiMAX signal can be increased to 410 km without increasing the 65 dB power budget. Amending the triple compensator technique to SMF, DCF and a chirped fibre Bragg grating (CFBG) allows 120 Mbps WiMAX transmission with a clean 3.5 GHz RF spectrum and constellation diagram over a fibre length of 792 km, using a power budget of 192 dB. Secondly, the thesis demonstrates a simulation setup for the deployment of more than one wireless system, namely 64-QAM WiMAX IEEE 802.16-2005 and LTE, at a data bit rate of 1 Gbps via Wavelength Division Multiplexing (WDM) RoF over a transmission distance of 1800 km. The RoF system includes two triple symmetrical compensator techniques (DCF, SMF and CFBG) to obtain a large bandwidth, a power budget of 393.6 dB and high signal quality over the long transmission distance.
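The air-versus-fibre gap can be illustrated with textbook figures: free-space path loss grows with 20 log10 of distance and carrier frequency, while standard single-mode fibre attenuates roughly 0.2 dB/km at 1550 nm. The sketch below uses these standard formulas for illustration only; it does not reproduce the thesis's simulated budgets.

```python
# Back-of-the-envelope loss comparison between a radio link and a fibre
# link, illustrating why RoF extends reach. Standard formulas and typical
# attenuation figures; not the thesis's simulation results.
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss: 20 log10(d_km) + 20 log10(f_MHz) + 32.44 dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def fibre_loss_db(distance_km, atten_db_per_km=0.2):
    """Typical attenuation for single-mode fibre at 1550 nm."""
    return atten_db_per_km * distance_km

print(f"air, 5 km @ 3.5 GHz : {fspl_db(5, 3500):6.1f} dB path loss")
print(f"fibre, 140 km       : {fibre_loss_db(140):6.1f} dB attenuation")
print(f"fibre, 792 km       : {fibre_loss_db(792):6.1f} dB attenuation")
```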
Finally, the thesis proposes a high-data-rate, energy-efficient simulation architecture applying passive optical components for a transmission span of up to 600 km. A Gigabit Passive Optical Network (GPON) based on RoF, with a 2.5 Gbps downlink and a 1.25 Gbps uplink, is employed to carry LTE and WiMAX, as well as 18 digital channels, by utilising Coarse Wavelength Division Multiplexing (CWDM). The setup achieved high data speed, a low power budget of 151.2 dB and an increased service length of up to 600 km.
140

An intelligent classification system for land use and land cover mapping using spaceborne remote sensing and GIS

Kamal, Mohammad Mostafa January 2006
The objectives of this study were to experiment with and extend current methods of Synthetic Aperture Radar (SAR) image classification, and to design and implement a prototype intelligent remote sensing image processing and classification system for land use and land cover mapping in wet-season conditions in Bangladesh, incorporating SAR images and other geodata. To meet these objectives, the problem of classifying spaceborne SAR images and integrating Geographic Information System (GIS) data and ground truth data was studied first. In this phase of the study, traditional techniques were extended by applying a Self-Organizing feature Map (SOM) to include GIS data with the remote sensing data during image segmentation. The experimental results were compared with those of traditional statistical classifiers, such as the Maximum Likelihood, Mahalanobis Distance and Minimum Distance classifiers. The performance of the classifiers was evaluated in terms of classification accuracy with respect to the collected real-time ground truth data. The SOM neural network provided the highest overall accuracy when a GIS layer of land type classification (with respect to the period of inundation by regular flooding) was used in the network. Using this method, the overall accuracy was around 15% higher than that of the previously mentioned traditional classifiers, and higher accuracies were achieved for more classes than with the other classifiers. However, it was also observed that different classifiers produced better accuracy for different classes. The investigation was therefore extended to Multiple Classifier Combination (MCC) techniques, a recently emerging research area in pattern recognition. The study tested several of these techniques to improve the classification accuracy by harnessing the strengths of the constituent classifiers. A Rule-based Contention Resolution method of combination was developed, which exhibited an improvement in overall accuracy of about 2% over its best constituent (SOM) classifier. The next phase of the study involved the design of an architecture for an intelligent image processing and classification system (named ISRIPaC) that could integrate the extended methodologies mentioned above. Finally, the architecture was implemented in a prototype and its viability was evaluated using a set of real data. The originality of the ISRIPaC architecture lies in the realisation of the concept of a complete system that can intelligently cover all the steps of image processing and classification, and that utilises standardised metadata, in addition to a knowledge base, in determining the appropriate methods and course of action for a given task. The implemented prototype of the ISRIPaC architecture is a federated system that integrates the CLIPS expert system shell, the IDRISI Kilimanjaro image processing and GIS software, and domain experts' knowledge via a control agent written in Visual C++. It starts with data assessment and pre-processing and ends with image classification and accuracy assessment. The system is designed to run automatically: the user merely provides initial information regarding the intended task and the source of available data, and the system itself acquires the necessary information about the data from metadata files in order to make decisions and perform tasks.
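To illustrate the core SOM mechanism used in the classification phase — competitive learning that pulls a grid of weight vectors toward the input distribution — here is a hand-rolled miniature SOM over synthetic "SAR + GIS" feature vectors. It is a didactic sketch under assumed parameters; the thesis system works on real SAR bands with a GIS land-type layer and a full accuracy assessment.

```python
# Miniature Self-Organizing Map over synthetic feature vectors
# (illustrative only; grid size, rates and features are assumptions).
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0):
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the node whose weight vector is closest to x.
        dist = np.linalg.norm(weights - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(dist), dist.shape)
        lr = lr0 * (1.0 - t / iters)                      # decaying learning rate
        sigma = sigma0 * (1.0 - t / iters) + 0.5          # shrinking neighbourhood
        nbhd = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2.0 * sigma ** 2))
        weights += lr * nbhd[..., None] * (x - weights)   # pull nodes toward x
    return weights

# Each row stands for a pixel: [SAR backscatter, texture, GIS land-type code].
features = rng.random((300, 3))
som = train_som(features)
print("trained SOM grid:", som.shape)  # (5, 5, 3) codebook vectors
```

After training, each map node can be labelled with the majority ground-truth class of the pixels it wins, which is the usual way a SOM is turned into a classifier.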
The test and evaluation of the prototype demonstrates the viability of the proposed architecture and the possibility of extending the system to perform other image processing tasks and to use different sources of data. The system design presented in this study thus suggests some directions for the development of the next generation of remote sensing image processing and classification systems.
