71 |
Monitoring land use and land cover change: a combining approach of change detection to analyze urbanization in Shijiazhuang, China / Liu, Qingling; Gong, Fanting. January 2013 (has links)
Detecting changes in the land use and land cover of the earth's surface is extremely important for obtaining continuous and precise information about a study area for any kind of development planning. Geographic information systems and remote sensing technologies have shown great capability for studying issues such as land use and land cover change. The aim of this thesis is to produce land use and land cover maps of Shijiazhuang for the years 1993, 2000 and 2009 in order to monitor the changes that may have occurred, particularly in agricultural land and urban or built-up land, and to detect the process of urbanization in this city. Three multi-temporal satellite images were used in this thesis: a Thematic Mapper image from 1993, an Enhanced Thematic Mapper image from 2000 and a China-Brazil Earth Resources Satellite image from 2009. Supervised classification was the main classification approach used to produce the classified maps, and five land use and land cover categories were identified and mapped. A post-classification approach was used to improve the quality of the classified maps by removing noise. The normalized difference vegetation index was used to detect changes between vegetated and non-vegetated land. The change detection function in Erdas Imagine was used to detect urban growth and the intensity of change surrounding the urban areas. A cellular automata Markov model was used to simulate the trends of land use and cover change during the periods 1993 to 2000 and 2000 to 2009, and a future land use map was simulated based on the land use maps of 2000 and 2009. From this analysis, cross-tabulation matrices between the different periods were produced to analyze the trends of land use and cover change, and these statistics directly express the change of land use and land cover. The results show that agricultural land and urban or built-up land changed considerably: approximately half of the agricultural land was converted into urban or built-up land. This indicates that the loss of agricultural land is associated with the growth of urban or built-up land. Thus, urbanization took place in Shijiazhuang, and this urban expansion led to the loss of agricultural land and to environmental problems. During the process of detecting land use and cover change, obtaining high-precision classified maps was the main difficulty.
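As a rough illustration of the cross-tabulation matrices mentioned in the abstract, the following Python sketch (not from the thesis) tabulates class transitions between two classified maps; the class labels, map size and random data are assumptions for the example.

```python
import numpy as np

def cross_tabulation(map_t1, map_t2, n_classes):
    """Pixel counts of transitions from class i in the earlier map to class j in the later map."""
    matrix = np.zeros((n_classes, n_classes), dtype=np.int64)
    for i in range(n_classes):
        for j in range(n_classes):
            matrix[i, j] = np.sum((map_t1 == i) & (map_t2 == j))
    return matrix

# Hypothetical classified maps with five classes, e.g.
# 0 = agricultural land, 1 = urban or built-up land, 2 = forest, 3 = water, 4 = bare land.
rng = np.random.default_rng(0)
lulc_1993 = rng.integers(0, 5, size=(300, 300))
lulc_2009 = rng.integers(0, 5, size=(300, 300))

matrix = cross_tabulation(lulc_1993, lulc_2009, n_classes=5)
agri_total = matrix[0].sum()
agri_to_urban = matrix[0, 1]
print(matrix)
print("share of 1993 agricultural land converted to built-up land:",
      round(agri_to_urban / agri_total, 3))
```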
|
72 |
A Comparison of Change Detection Methods in an Urban Environment Using LANDSAT TM and ETM+ Satellite Imagery: A Multi-Temporal, Multi-Spectral Analysis of Gwinnett County, GA 1991-2000 / DiGirolamo, Paul Alrik. 03 August 2006 (has links)
Land cover change detection in urban areas provides valuable data on the loss of forest and agricultural land to residential and commercial development. Using Landsat 5 Thematic Mapper (1991) and Landsat 7 ETM+ (2000) imagery of Gwinnett County, GA, change images were obtained using image differencing of Normalized Difference Vegetation Index (NDVI), principal components analysis (PCA), and Tasseled Cap-transformed images. Ground truthing and accuracy assessment determined that land cover change detection using the NDVI and Tasseled Cap image transformation methods performed best in the study area, while PCA performed the worst of the three methods assessed. Analyses of vegetation change from 1991 to 2000 revealed that these methods perform well for detecting changes in vegetation and vegetative characteristics, but such changes do not always correspond with changes in land use. Gwinnett County lost an estimated 13,500 hectares of vegetation cover during the study period to urban sprawl, with the majority of the loss coming from forested areas.
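A minimal sketch of NDVI image differencing with a simple statistical threshold is given below; the mean plus or minus k standard deviations rule, the value of k and the synthetic rasters are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

def ndvi_difference_change(ndvi_t1, ndvi_t2, k=1.5):
    """Flag pixels whose NDVI difference falls outside mean +/- k*std of the difference image."""
    diff = ndvi_t2 - ndvi_t1
    mu, sigma = diff.mean(), diff.std()
    change = (diff < mu - k * sigma) | (diff > mu + k * sigma)
    return diff, change

# Hypothetical co-registered NDVI rasters with values in [-1, 1].
rng = np.random.default_rng(1)
ndvi_1991 = rng.uniform(-0.2, 0.8, size=(200, 200))
ndvi_2000 = ndvi_1991 + rng.normal(0.0, 0.05, size=(200, 200))

diff, change_mask = ndvi_difference_change(ndvi_1991, ndvi_2000)
print("changed pixels:", int(change_mask.sum()))
```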
|
73 |
ROBUST SPEAKER DIARIZATION FOR MEETINGS / Anguera Miró, Xavier. 21 December 2006 (has links)
This thesis presents research into the topic of speaker diarization for meeting rooms. It covers the algorithms and the implementation of an offline speaker segmentation and clustering system for meeting recordings where more than one microphone is usually available.
The main research and system implementation were done while visiting the International Computer Science Institute (ICSI, Berkeley, California) for a period of two years. Speaker diarization is a well-studied topic in the domain of broadcast news recordings. Most of the proposed systems involve some sort of hierarchical clustering of the data into acoustic clusters, where neither the optimum number of speakers nor their identities are known a priori. A very commonly used method is called bottom-up clustering, where multiple initial clusters are iteratively merged until the optimum number of clusters is reached, according to some stopping criterion. Such systems are based on a single input channel, which does not allow a direct application to the meetings domain. Although some efforts have been made to adapt such systems to multichannel data, at the start of this thesis no effective implementation had been proposed. Furthermore, many of these speaker diarization algorithms involve some sort of model training or parameter tuning using external data, which impedes their usability with data different from what they have been adapted to. The implementation proposed in this thesis works towards solving the aforementioned problems. Taking the existing hierarchical bottom-up mono-channel speaker diarization system from ICSI as a starting point, it first uses flexible acoustic beamforming to extract speaker location information and to obtain a single enhanced signal from all available microphones. It then applies a train-free speech/non-speech detection to this signal and processes the resulting speech segments with an improved version of the mono-channel speaker diarization system. This system has been modified to use speaker location information (when available), and several algorithms have been adapted or newly created so that the system adapts its behavior to each particular recording by obtaining information directly from the acoustics, making it less dependent on the development data. The resulting system is flexible with respect to the meeting room layout, both in the number of microphones and in their placement. It is train-free, making it easy to adapt to different sorts of data and domains of application. Finally, it takes a step forward towards the use of parameters that are more robust to changes in the acoustic data. Two versions of the system were submitted, with excellent results, to the RT05s and RT06s NIST Rich Transcription evaluations for meetings, where data from two different subdomains (lectures and conferences) was evaluated. In addition, experiments using the RT datasets from all the meetings evaluations were run to test the proposed algorithms, proving their suitability to the task.
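The bottom-up clustering idea can be sketched as follows, using a textbook BIC-style merge criterion over single-Gaussian cluster models; this is a simplified stand-in rather than ICSI's actual system, and the feature dimensionality, penalty weight and synthetic segments are assumptions.

```python
import numpy as np

def gauss_loglik(x):
    """Log-likelihood of frames x under a single full-covariance Gaussian fit to x."""
    n, d = x.shape
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(d)
    xc = x - mu
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', xc, np.linalg.inv(cov), xc)
    return -0.5 * (n * (d * np.log(2 * np.pi) + logdet) + quad.sum())

def merge_score(a, b, lam=1.0):
    """BIC-style score: positive values favour merging clusters a and b into one model."""
    d = a.shape[1]
    p = d + d * (d + 1) / 2                      # parameters of one full-covariance Gaussian
    penalty = 0.5 * lam * p * np.log(len(a) + len(b))
    return gauss_loglik(np.vstack([a, b])) - gauss_loglik(a) - gauss_loglik(b) + penalty

def bottom_up_diarization(segments):
    """Greedily merge the best pair of clusters until no merge improves the criterion."""
    clusters = list(segments)
    while len(clusters) > 1:
        scores = [(merge_score(clusters[i], clusters[j]), i, j)
                  for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
        best, i, j = max(scores)
        if best < 0:
            break
        clusters[i] = np.vstack([clusters[i], clusters[j]])
        del clusters[j]
    return clusters

# Hypothetical initial clusters: short MFCC-like segments (frames x features) from two speakers.
rng = np.random.default_rng(2)
segs = [rng.normal(loc=m, scale=1.0, size=(200, 12)) for m in (0.0, 0.1, 3.0, 3.1)]
print("final number of clusters:", len(bottom_up_diarization(segs)))
```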
|
74 |
Moving object detection in urban environments / Gillsjö, David. January 2012 (has links)
Successful and high precision localization is an important feature for autonomous vehicles in an urban environment. GPS solutions are not good on their own and laser, sonar and radar are often used as complementary sensors. Localization with these sensors requires the use of techniques grouped under the acronym SLAM (Simultaneous Localization And Mapping). These techniques work by comparing the current sensor inputs to either an incrementally built or known map, also adding the information to the map. Most of the SLAM techniques assume the environment to be static, which means that dynamics and clutter in the environment might cause SLAM to fail. To obtain a more robust algorithm, the dynamics need to be dealt with. This study seeks a solution where measurements from different points in time can be used in pairwise comparisons to detect non-static content in the mapped area. Parked cars could for example be detected at a parking lot by using measurements from several different days. The method successfully detects most non-static objects in the different test datasets from the sensor. The algorithm can be used in conjunction with Pose-SLAM to get a better localization estimate and a map for later use. This map is good for localization with SLAM or other techniques since only static objects are left in it.
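One way to picture the pairwise comparison of measurements from different times is the occupancy-grid sketch below; the thesis works with laser data and Pose-SLAM, so the probability grids, thresholds and the injected "parked car" are purely illustrative assumptions.

```python
import numpy as np

def non_static_cells(grid_a, grid_b, occ_thresh=0.65, free_thresh=0.35):
    """Cells confidently occupied in one occupancy grid but confidently free in the other."""
    occupied_a, free_a = grid_a > occ_thresh, grid_a < free_thresh
    occupied_b, free_b = grid_b > occ_thresh, grid_b < free_thresh
    return (occupied_a & free_b) | (free_a & occupied_b)

# Hypothetical occupancy grids (probability of occupancy) measured on two different days.
rng = np.random.default_rng(3)
day1 = rng.uniform(0, 1, size=(50, 50))
day2 = day1.copy()
day2[10:15, 10:15] = 0.9      # e.g. a parked car present only on the second day

mask = non_static_cells(day1, day2)
print("non-static cells flagged:", int(mask.sum()))
```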
|
75 |
Yaw Rate and Lateral Acceleration Sensor Plausibilisation in an Active Front Steering Vehicle / Wikström, Anders. January 2007 (has links)
Accurate measurements from sensors measuring the vehicle's lateral behavior are vital in today's vehicle dynamic control systems such as the Electronic Stability Program (ESP). This thesis concerns accurate plausibilisation of two of these sensors, namely the yaw rate sensor and the lateral acceleration sensor. The estimation is based on Kalman filtering and culminates in the use of a 2 degree-of-freedom nonlinear two-track model describing the vehicle lateral dynamics. The unknown and time-varying cornering stiffnesses are adapted while the unknown yaw moment of inertia is estimated. The Kalman filter transforms the measured signals into a sequence of residuals that are then investigated with the aid of various change detection methods such as the CuSum algorithm. An investigation into the area of adaptive thresholding has also been made. The change detection methods investigated successfully detect faults in both the yaw rate and the lateral acceleration sensor. It is also shown that adaptive thresholding can be used to improve the diagnosis system. All of the results have been evaluated on-line in a prototype vehicle with real-time fault injection.
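A minimal two-sided CUSUM test over a residual sequence, of the kind produced by the Kalman filter, might look as follows; the drift and threshold values and the injected sensor fault are assumptions for illustration, and the adaptive thresholding studied in the thesis is not reproduced here.

```python
import numpy as np

def cusum_detect(residuals, drift=0.05, threshold=5.0):
    """Two-sided CUSUM on a residual sequence; returns indices where an alarm is raised."""
    g_pos, g_neg, alarms = 0.0, 0.0, []
    for k, r in enumerate(residuals):
        g_pos = max(0.0, g_pos + r - drift)
        g_neg = max(0.0, g_neg - r - drift)
        if g_pos > threshold or g_neg > threshold:
            alarms.append(k)
            g_pos, g_neg = 0.0, 0.0      # restart the test after an alarm
    return alarms

# Hypothetical residuals: zero-mean noise with an injected sensor offset after sample 300.
rng = np.random.default_rng(4)
res = rng.normal(0.0, 0.2, size=600)
res[300:] += 1.0
print("first alarm at sample:", cusum_detect(res)[0])
```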
|
76 |
Observer for a vehicle longitudinal controller / Observatör för en längsregulator i fordon / Rytterstedt, Peter. January 2007 (has links)
The longitudinal controller at DaimlerChrysler AG consists of two cascaded controllers. The outer control loop contains the driver assistance functions such as speed limiter, cruise control, etc. The inner control loop consists of a PID controller and an observer. The task of the observer is to estimate the part of the vehicle's acceleration caused by large disturbances, for example a changed vehicle mass or the slope of the road. The Kalman filter is selected as the observer. It is the optimal filter when the process model is linear and the process noise and measurement noise can be modeled as Gaussian noise. In this Master's thesis the theory for the Kalman filter is presented and it is shown how to choose the filter parameters. Simulated annealing is a global optimization technique which can be used for autotuning, i.e., automatically finding the optimal parameter settings. To be able to perform autotuning for the longitudinal controller one has to model the environment and driving situations. In this Master's thesis it is verified that the parameter choice is a compromise between a fast but jerky estimate and a slow but smooth one. As the output from the Kalman filter is directly added to the control value for the engine and brakes, it is important that the output is smooth. It is shown that the Kalman filter implemented in the test vehicles today can be exchanged for a first-order lag function without loss in performance. This makes the filter tuning easier, as there is only one parameter to choose. Change detection is a method that can be used to detect large changes in the signal and react accordingly, for example by making the filter faster. A filter using change detection is implemented and simulations show that it is possible to improve the estimate using this method. It is suggested to implement the change detection algorithm in a test vehicle and evaluate it further.
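A first-order lag function of the kind referred to above can be sketched in a few lines; the smoothing constant and the step-shaped disturbance signal are assumptions for the example.

```python
def first_order_lag(measurements, alpha=0.05, x0=0.0):
    """Discrete first-order lag (exponential smoothing): x[k] = x[k-1] + alpha * (z[k] - x[k-1])."""
    x, estimates = x0, []
    for z in measurements:
        x = x + alpha * (z - x)
        estimates.append(x)
    return estimates

# Hypothetical disturbance-acceleration measurements: a step when the road slope changes.
signal = [0.0] * 50 + [0.8] * 150
smooth = first_order_lag(signal, alpha=0.05)
print(round(smooth[60], 3), round(smooth[-1], 3))
```

Combined with a change detection alarm, the single smoothing parameter could temporarily be increased so that the estimate tracks a detected large change faster, which corresponds to the "faster filter" reaction described in the abstract.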
|
77 |
The Video Object Segmentation Method for Mpeg-4 / Huang, Jen-Chi. 23 September 2004 (links)
In this thesis, we propose a series of methods for moving object segmentation and their applications. These methods are: moving object segmentation in the wavelet domain, a double change detection method, a global motion estimation method, and moving object segmentation against a moving background.
First, we propose a video object segmentation method in the wavelet domain. We apply change detection with different thresholds in the four wavelet sub-bands. The experimental results show that we obtain additional object shape information and extract the moving object more accurately.
In the double change detection method, we propose segmenting the moving object using three successive frames. Change detection is applied twice in the wavelet domain, and after intersecting the two resulting change maps we obtain an accurate moving object edge map and additional object shape information.
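The intersection step of the double change detection method can be illustrated in the pixel domain as follows; the thesis applies it to wavelet sub-bands with different thresholds, so the plain frame differencing, the single threshold value and the synthetic frames below are simplifying assumptions.

```python
import numpy as np

def change_mask(frame_a, frame_b, threshold=15):
    """Binary change map from the absolute frame difference."""
    return np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32)) > threshold

def double_change_detection(f_prev, f_curr, f_next, threshold=15):
    """Intersect the change maps of (prev, curr) and (curr, next) to localise the moving object."""
    return change_mask(f_prev, f_curr, threshold) & change_mask(f_curr, f_next, threshold)

# Hypothetical 8-bit frames with a small bright block moving to the right.
frames = [np.zeros((64, 64), dtype=np.uint8) for _ in range(3)]
for i, f in enumerate(frames):
    f[20:30, 10 + 10 * i: 20 + 10 * i] = 200

mask = double_change_detection(*frames)
print("object pixels localised in the middle frame:", int(mask.sum()))
```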
In addition, we propose a global motion estimation method for scenes with camera motion: a novel global motion estimation using cross points to reconstruct the background scene of a video sequence. Due to the robustness and limited number of cross points, we can obtain the affine parameters of the global motion efficiently.
Finally, we propose an object segmentation method for scenes with camera motion. We use the global motion estimation method to estimate the motion between consecutive frames and reconstruct a wide background scene without moving objects from those frames. The moving objects can then be segmented easily by comparing each frame with the corresponding part of the wide background scene.
The proposed methods perform well on different types of video sequences. Hence, this thesis contributes to video coding in Mpeg-4 and to multimedia technology.
|
78 |
Geoarchaeological Investigations in Zeugma, Turkey / Karaca, Ceren. 01 August 2008 (has links) (PDF)
The purpose of this study is to investigate the geological and morphological features around the ancient city of Zeugma. To achieve this, a geological map of the Zeugma excavation site is prepared, and an aerial photographic survey and morphological analyses are conducted over a broader area. Additionally, the biggest ancient quarry in the study area is investigated.
In the close vicinity of Zeugma, four lithologies, which are, from bottom to top, clayey limestone, thick-bedded limestone, chalky limestone and cherty limestone, are identified. A major fault with a vertical throw of 80 m is mapped in the area.
The geological survey reveals that the excavation site is located within the chalky limestone and that the rock tombs are carved within the thick-bedded limestone.
In the aerial photographic survey, the Firat River is classified into four morphological classes: river, island, flood plain and basement. The change among these classes is investigated between 1953 and 1992. The results reveal that there is no considerable variation in the position of the river channel or the margins of the flood plain over these 39 years. The major change is observed in the islands that form within the flood plain.
Testing the elevation of the boundary between the Gaziantep and Firat formations using the relief map, investigating the visibility of selected points in the area, predicting the source area for the water supply, and evaluating the nature of the ancient route constitute the morphological analyses carried out in this study. However, these analyses are not treated in detail and should be considered first attempts towards more detailed morphological analyses.
|
79 |
Data Warehouse Change Management Based on Ontology / Tsai, Cheng-Sheng. 12 July 2003 (has links)
In this thesis, we provide a solution to the schema change problem. In a data warehouse system, if schema changes occur in a data source, the overall system loses consistency between the data sources and the data warehouse, and these schema changes render the data warehouse obsolete. We have developed three stages to handle schema changes occurring in databases: change detection, diagnosis, and handling. Recommendations are generated by the DB-agent and sent to the DW-agent to notify the DBA of what a schema change affects in the star schema and where. In this study, we mainly handle seven kinds of schema change in a relational database, covering not only non-adding schema changes but also adding schema changes. In our experiments, a non-adding schema change has a high correct mapping rate when the traditional mappings between the data warehouse and the database are used. An adding schema change, on the other hand, involves many uncertainties in diagnosis and handling. For this reason, we compare the similarity between an added relation or attribute and the ontology concepts or concept attributes to generate a good recommendation. The evaluation results show that the proposed approach is capable of detecting these schema changes correctly and of giving the DBA appropriate recommendations about the changes.
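The similarity comparison for an adding schema change might be sketched as below; the abstract does not specify the actual similarity measure used, so plain string similarity and the example ontology concepts are stand-in assumptions chosen only to show the shape of such a recommendation.

```python
from difflib import SequenceMatcher

def best_concept_match(new_name, ontology_concepts):
    """Rank ontology concepts by string similarity to a newly added relation or attribute name."""
    scored = [(SequenceMatcher(None, new_name.lower(), c.lower()).ratio(), c)
              for c in ontology_concepts]
    return max(scored)

# Hypothetical ontology concepts for a sales data warehouse.
concepts = ["Customer", "Product", "SalesRegion", "TimeDimension"]
score, concept = best_concept_match("cust_region", concepts)
print("recommend mapping the new attribute to:", concept, "with similarity", round(score, 2))
```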
|
80 |
Wrapping XML-Sources to Support Update Awareness / Thuresson, Marcus. January 2000 (has links)
Data warehousing is a generally accepted method of providing corporate decision support. Today, the majority of information in these warehouses originates from sources within a company, although changes often occur from the outside. Companies need to look outside their enterprises for valuable information, increasing their knowledge of customers, suppliers, competitors etc.
The largest and most frequently accessed information source today is the Web, which holds more and more useful business information. Today, the Web primarily relies on HTML, making mechanical extraction of information a difficult task. In the near future, XML is expected to replace HTML as the language of the Web, bringing more structure and content focus.
One problem when considering XML-sources in a data warehouse context is their lack of update awareness capabilities, which restricts eligible data warehouse maintenance policies. In this work, we wrap XML-sources in order to provide update awareness capabilities.
We have implemented a wrapper prototype that provides update awareness capabilities for autonomous XML-sources, especially change awareness, change activeness, and delta awareness. The prototype wrapper complies with recommendations and working drafts proposed by W3C, thereby being compliant with most off-the-shelf XML tools. In particular, change information produced by the wrapper is based on methods defined by the DOM, implying that any DOM-compliant software, including most off-the-shelf XML processing tools, can be used to incorporate identified changes in a source into an older version of it.
For the delta awareness capability we have investigated the possibility of using change detection algorithms proposed for semi-structured data. We have identified similarities and differences between XML and semi-structured data, which affect delta awareness for XML-sources. As a result of this effort, we propose an algorithm for change detection in XML-sources. We also propose matching criteria for XML-documents, to which the documents have to conform to be subject to change awareness extension.
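The DOM defines how documents are represented rather than how they are compared, so the sketch below is only a simplified, order-sensitive element and text comparison written with Python's xml.dom.minidom; it illustrates the kind of insert, delete and update information that delta awareness concerns, not the change detection algorithm proposed in the thesis.

```python
from xml.dom.minidom import parseString

def diff_nodes(old, new, path="/"):
    """Recursively compare two DOM element trees and report simple insert/delete/update changes."""
    changes = []
    old_children = [n for n in old.childNodes if n.nodeType == n.ELEMENT_NODE]
    new_children = [n for n in new.childNodes if n.nodeType == n.ELEMENT_NODE]
    for i in range(max(len(old_children), len(new_children))):
        if i >= len(old_children):
            changes.append(("insert", path + new_children[i].tagName))
        elif i >= len(new_children):
            changes.append(("delete", path + old_children[i].tagName))
        elif old_children[i].tagName != new_children[i].tagName:
            changes.append(("update", path + old_children[i].tagName))
        else:
            changes.extend(diff_nodes(old_children[i], new_children[i],
                                      path + old_children[i].tagName + "/"))
    # Compare leading text content when both elements carry a text node.
    if old.firstChild and new.firstChild and \
       old.firstChild.nodeType == old.TEXT_NODE and new.firstChild.nodeType == new.TEXT_NODE and \
       old.firstChild.data.strip() != new.firstChild.data.strip():
        changes.append(("text-update", path))
    return changes

old_doc = parseString("<order><item>pen</item><qty>2</qty></order>")
new_doc = parseString("<order><item>pencil</item><qty>2</qty><price>3</price></order>")
print(diff_nodes(old_doc.documentElement, new_doc.documentElement))
```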
|