1.
COTS GIS Integration and its SOAP-Based Web Services. Wu, Ying. 21 May 2004.
In modern geographic information systems (GIS), commercial off-the-shelf (COTS) software plays a major role. However, deploying heterogeneous GIS software tends to produce fragmented data sets and inconsistency. To accomplish data consolidation, we must achieve interoperability between different GIS tools. In my thesis project, I developed Vector and Raster Data Adapters to implement spatial data consolidation. I deployed ArcIMS to publish the spatial data and metadata onto the Internet. Furthermore, SOAP-based GIS web services are implemented to achieve enterprise information system integration. Our contribution in this project is the streamlining of the COTS GIS server, the J2EE coordinator server, the web service provider components, and the COTS web publishing tools into a hybrid web service architecture, in which enterprise information system integration, web publishing, and business-to-business online services are unified.
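As a concrete illustration of the kind of request a SOAP-based GIS service consumes, here is a minimal sketch in Python; the host, path, namespace, and operation names are hypothetical placeholders, not details taken from the thesis.

```python
# Minimal sketch of calling a SOAP-based GIS web service from Python.
# The endpoint URL, namespace, and operation name below are hypothetical
# placeholders, not taken from the thesis.
import http.client

HOST = "gis.example.com"          # hypothetical service host
PATH = "/services/SpatialQuery"   # hypothetical service path

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetFeature xmlns="http://gis.example.com/ns">
      <LayerName>parcels</LayerName>
      <BoundingBox>-77.1,38.8,-77.0,38.9</BoundingBox>
    </GetFeature>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPConnection(HOST)
conn.request("POST", PATH, body=envelope.encode("utf-8"),
             headers={"Content-Type": "text/xml; charset=utf-8",
                      "SOAPAction": "GetFeature"})
response = conn.getresponse()
print(response.status, response.read()[:200])
```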
2.
Modeling Error in Geographic Information Systems. Love, Kimberly R. 09 January 2008.
Geographic information systems (GISs) are a highly influential tool in today's society and are used in a growing number of applications, including planning, engineering, land management, and environmental study. As the field of GISs continues to expand, it is very important to observe and account for the error that is unavoidable in computerized maps. Currently, both statistical and non-statistical models are available to do so, although there is very little implementation of these methods.
In this dissertation, I have focused on improving the methods available for analyzing error in GIS vector data. In particular, I incorporate Bayesian methodology into the currently popular G-band error model through the inclusion of a prior distribution on point locations. This has the advantage of working well with a small number of points and of being able to synthesize information from multiple sources. I have also calculated the boundary of the confidence region explicitly, which has not been done before; this will aid in the eventual inclusion of these methods in GIS software. Finally, I have included a statistical point-deletion algorithm, designed for use in situations where map precision has surpassed map accuracy. It is very similar to the Douglas-Peucker algorithm and can be used in general line simplification, but it has the advantage of working with the error information already known about a map rather than adding unknown error. These contributions will make it more realistic for GIS users to implement techniques for error analysis. / Ph.D.
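For reference, here is a minimal sketch of the classic Douglas-Peucker recursion that the dissertation's point-deletion method is compared against; this is the textbook algorithm, not the author's error-aware variant.

```python
# Textbook Douglas-Peucker line simplification (not the dissertation's
# error-aware variant): points closer than `tolerance` to the chord
# between segment endpoints are dropped recursively.
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through segment endpoints a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len = math.hypot(dx, dy)
    if seg_len == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / seg_len

def douglas_peucker(points, tolerance):
    if len(points) < 3:
        return points
    # Find the point farthest from the chord joining the endpoints.
    dists = [perpendicular_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > tolerance:
        left = douglas_peucker(points[:idx + 1], tolerance)
        right = douglas_peucker(points[idx:], tolerance)
        return left[:-1] + right   # avoid duplicating the split point
    return [points[0], points[-1]]

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, 1.0))
```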
3.
Support Vector Machine and Application in Seizure Prediction. Qiu, Simeng. 04 1900.
Machine learning (ML) is now applied in areas ranging from engineering to business. In this thesis, we first present several kernel-based machine learning methods for solving classification, regression, and clustering problems. These methods perform well but also have limitations. We give examples of each method and analyze its advantages and disadvantages in different scenarios. We then focus on one of the most popular classification methods, the Support Vector Machine (SVM).
We introduce the basic theory of the SVM, its advantages, and the scenarios in which it is suited to classification problems. We also explain a convenient approach to training SVMs called Sequential Minimal Optimization (SMO). Moreover, the one-class SVM can be understood in a different way, as Support Vector Data Description (SVDD). SVDD is a well-known non-linear counterpart of the SVM and can be solved by combining a Gaussian RBF kernel with SMO. Finally, we compare the behavior and performance of the SVM-SMO and SVM-SVDD implementations.
For the application, we used the SVM to forecast seizures in canine epilepsy, comparing results from several methods, including random forests, extremely randomized trees, and SVMs, for classifying preictal (pre-seizure) and interictal (between-seizure) binary data. We conclude that the SVM performs best.
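A hedged sketch of this kind of comparison, using scikit-learn with synthetic stand-in features rather than the actual canine-EEG dataset:

```python
# Sketch: classifying preictal vs. interictal segments with an RBF-kernel
# SVM and a random forest. Features and data are synthetic stand-ins,
# not the canine-EEG dataset used in the thesis.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic features: 400 segments x 20 spectral-power features.
X_interictal = rng.normal(0.0, 1.0, size=(200, 20))
X_preictal = rng.normal(0.7, 1.0, size=(200, 20))   # shifted class
X = np.vstack([X_interictal, X_preictal])
y = np.array([0] * 200 + [1] * 200)                 # 1 = preictal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("SVM accuracy:          ", svm.score(X_te, y_te))
print("Random forest accuracy:", rf.score(X_te, y_te))
```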
4.
Positional Uncertainty Analysis Using Data Uncertainty Engine: A Case Study on Agricultural Land Parcels. Urganci, Ilksen. 01 December 2009.
Most spatial data extraction and updating procedures require digitization of geographical entities from satellite imagery. During digitization, errors are introduced by factors such as instrument deficiencies or user error. In this study, the positional uncertainty of geographical objects digitized from high-resolution Quickbird satellite imagery is assessed using the Data Uncertainty Engine (DUE), a software tool for assessing uncertainty in environmental data and for generating realisations of uncertain data for use in uncertainty propagation analyses. A case study area in Kocaeli, Turkey, consisting mostly of agricultural land parcels, is selected in order to evaluate positional uncertainty and obtain uncertainty boundaries for manually digitized fields. A geostatistical evaluation of the discrepancy between reference data and digitized polygons is undertaken to analyse the auto- and cross-correlation structures of the errors. This process is used to estimate the error-model parameters that define an uncertainty model within DUE. Error-model parameters obtained from training data are used to generate simulations for test data. Realisations of the data, derived via Monte Carlo simulation using DUE, are evaluated to generate uncertainty boundaries for each object, guiding the user in further analyses with predefined information on the accuracy of spatial entities. The study also assesses area uncertainties affected by the position of spatial entities. Across all correlation structures and object models, the weighted average positional error is between 2.66 and 2.91 meters. The deformable object model, by modelling cross-correlation, produced the smallest uncertainty bandwidth.
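A minimal sketch of the Monte Carlo step described above, perturbing digitized parcel vertices with spatially correlated Gaussian error and observing the spread of parcel areas; the error magnitude and correlation range are illustrative assumptions, not the study's fitted parameters:

```python
# Monte Carlo positional-uncertainty sketch: perturb parcel vertices with
# spatially correlated Gaussian noise and measure area spread. The sigma
# (2.8 m) and correlation range (50 m) are illustrative values only.
import numpy as np

rng = np.random.default_rng(1)

# A digitized parcel boundary (metres, closed ring).
parcel = np.array([(0, 0), (100, 0), (100, 80), (0, 80), (0, 0)], float)

def polygon_area(ring):
    x, y = ring[:, 0], ring[:, 1]
    return 0.5 * abs(np.dot(x[:-1], y[1:]) - np.dot(x[1:], y[:-1]))

def correlated_noise(n, sigma, corr_range, coords):
    """Gaussian noise with exponential spatial autocorrelation."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sigma ** 2 * np.exp(-d / corr_range)
    return rng.multivariate_normal(np.zeros(n), cov)

n = len(parcel)
areas = []
for _ in range(1000):                       # Monte Carlo realisations
    dx = correlated_noise(n, 2.8, 50.0, parcel)
    dy = correlated_noise(n, 2.8, 50.0, parcel)
    realisation = parcel + np.column_stack([dx, dy])
    realisation[-1] = realisation[0]        # keep the ring closed
    areas.append(polygon_area(realisation))

print(f"area: {np.mean(areas):.1f} +/- {np.std(areas):.1f} m^2")
```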
5.
Non-linear statistical models for shape analysis: application to brain imaging. Sfikas, Giorgos. 07 September 2012.
This thesis addresses statistical shape analysis in the context of medical imaging, where shape analysis is used to describe the morphological variability of various organs and tissues. Our focus is on the construction of a generative and discriminative, compact and non-linear model suitable for the representation of shapes. The model is evaluated in a study of a population of Alzheimer's disease patients and a population of healthy controls. Our principal interest is in using the discriminative model to discover the morphological differences that are most characteristic of a given shape class and that best discriminate it from shapes outside that class. The theoretical innovation of our work lies in two principal points: first, we propose a tool to extract discriminative differences within the Support Vector Data Description (SVDD) framework; second, all generated reconstructions are anatomically correct. This latter point is due to the non-linear and compact character of the model, which rests on the hypothesis that the data (the shapes) lie on a low-dimensional, non-linear manifold. Application of the model to real medical data yields results coherent with well-known findings in related research.
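As background, SVDD with a Gaussian kernel is equivalent to the one-class SVM, so a minimal illustration of learning a shape-class description can be sketched with scikit-learn; the 2-D data are synthetic stand-ins for the thesis's shape representations:

```python
# Illustration of the SVDD idea: with a Gaussian (RBF) kernel,
# scikit-learn's OneClassSVM is equivalent to SVDD. The 2-D "shape
# descriptors" below are synthetic stand-ins.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, size=(150, 2))    # in-class shapes
atypical = rng.normal(4.0, 1.0, size=(10, 2))    # shapes outside the class

svdd = OneClassSVM(kernel="rbf", gamma=0.2, nu=0.05).fit(healthy)

# +1 = inside the learned description, -1 = outside.
print("healthy :", svdd.predict(healthy[:5]))
print("atypical:", svdd.predict(atypical[:5]))
```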
6.
Historical aerial photographs and digital photogrammetry for landslide assessment. Walstra, Jan. January 2006.
This study demonstrates the value of historical aerial photographs as a source for monitoring long-term landslide evolution, a value that can be unlocked by using appropriate photogrammetric methods. The understanding of landslide mechanisms requires extensive data records; a literature review identified quantitative data on surface movements as a key element for their analysis. It is generally acknowledged that, owing to the flexibility and high degree of automation of modern digital photogrammetric techniques, it is possible to derive detailed quantitative data from aerial photographs. In spite of the relative ease of such techniques, there is only scarce research on the data quality achievable with commonly available material; hence the motivation for this study.
In two landslide case studies (the Mam Tor and East Pentwyn landslides), the different types of products that can be derived from historical aerial photographs were explored. These products comprised geomorphological maps, automatically derived digital elevation models (DEMs), and displacement vectors. They proved to be useful and sufficiently accurate for monitoring landslide evolution. Comparison with independent survey data showed good consistency, validating the techniques used. A wide range of imagery was used in terms of quality, media, and format. Analysis of the combined datasets resulted in improvements to the stochastic model and the establishment of a relationship between image ground resolution and data accuracy. Undetected systematic effects limited the accuracy of the derived data, but the datasets proved insufficient to quantify each factor individually.
An important advancement in digital photogrammetry is image matching, which allows automation of various stages of the working chain. However, the radiometric quality of historical images may not always assure good results, both for extracting DEMs and for extracting vectors using automatic methods. It can be concluded that the photographic archive can provide invaluable data for landslide studies when modern photogrammetric techniques are used. As ever, independent and appropriate checks should always be included in any photogrammetric design.
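A minimal sketch of the image-matching idea underlying the displacement vectors: locating a patch from one epoch in a later image by normalized cross-correlation. The data are synthetic, and production photogrammetric matchers are considerably more sophisticated:

```python
# Normalized cross-correlation (NCC) template matching: find a patch from
# epoch 1 inside the epoch-2 image; the offset gives a displacement vector.
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match_template(image, template):
    """Return the (row, col) offset of the best NCC match."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(3)
epoch1 = rng.random((60, 60))
template = epoch1[20:30, 20:30]                   # patch around a feature
epoch2 = np.roll(epoch1, (4, -3), axis=(0, 1))    # feature shifted 4 px down, 3 px left

pos, score = match_template(epoch2, template)
print("matched at", pos, "-> displacement", (pos[0] - 20, pos[1] - 20),
      "score", round(score, 3))
```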
7.
Techniques for the Visualization of Positional Geospatial Uncertainty. Barré, Brent A. 20 December 2013.
Geospatial data almost always contains some amount of uncertainty due to inaccuracies in its acquisition and transformation. While such data is commonly visualized (e.g., on digital maps), there are unanswered needs for visualizing its uncertainty alongside it. Most research on doing this effectively addresses uncertainty in data values at geospatial positions, e.g., water depth, human population, or land-cover classification. Uncertainty in the data's geospatial positions themselves (positional uncertainty) has not previously been a focus in this regard. In this thesis, techniques were created for visualizing positional uncertainty, using World Vector Shoreline as an example dataset. The techniques consist of a shoreline buffer zone to which visual effects such as gradients, transparency, and randomized dots are applied, viewed interactively via a Web Map Service (WMS). In clutter testing with human subjects, a transparency-gradient technique performed best, followed by a solid-fill technique; a dots-density-gradient technique performed worst.
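A small sketch of the transparency-gradient idea, drawn with shapely and matplotlib: nested buffers around a nominal shoreline are stacked with low opacity so that apparent certainty fades with distance. The coordinates and buffer width are invented for illustration:

```python
# Transparency-gradient buffer sketch: nested buffers rendered with low
# alpha accumulate opacity near the line, fading outwards.
import matplotlib.pyplot as plt
from shapely.geometry import LineString

shoreline = LineString([(0, 0), (2, 1), (4, 0.5), (6, 1.5), (8, 1)])
max_uncertainty = 0.6          # buffer half-width in map units (assumed)
steps = 12                     # gradient resolution

fig, ax = plt.subplots(figsize=(8, 3))
for i in range(steps, 0, -1):
    dist = max_uncertainty * i / steps
    ring = shoreline.buffer(dist)
    x, y = ring.exterior.xy
    # Stacked translucent fills: the centre is covered by all buffers,
    # so opacity (confidence) is highest near the nominal line.
    ax.fill(x, y, color="steelblue", alpha=0.08, linewidth=0)

x, y = shoreline.xy
ax.plot(x, y, color="black", linewidth=1.5, label="nominal shoreline")
ax.set_aspect("equal")
ax.legend()
plt.show()
```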
8.
GIS of Dolní Újezd municipality. Klusoňová, Pavla. January 2014.
The thesis deals with geographical information systems (GIS), which are nowadays common tools in civil service and local administration. The theoretical part focuses on a more detailed characterization of these systems, including issues concerning geographical data and database systems. The main content of the practical section is the description of a GIS solution for the selected village of Dolní Újezd, built with ArcInfo. Part of the project is a spatial analysis aimed at finding suitable locations for public benches. The resulting project, called "GIS Dolní Újezd", will be used by the local council in Dolní Újezd.
9.
Novel Pattern Recognition Techniques for Improved Target Detection in Hyperspectral Imagery. Sakla, Wesam Adel. December 2009.
A fundamental challenge in target detection in hyperspectral imagery is spectral variability. In target detection applications, we are provided with a pure target signature; we do not have a collection of samples that characterize the spectral variability of the target. Another problem is that the performance of stochastic detection algorithms such as the spectral matched filter can be detrimentally affected by the assumption of multivariate normality of the data, which is often violated in practice.
We address the lack of training samples by creating two models of the target class's spectral variability: the first makes no assumptions regarding inter-band correlation, while the second uses a first-order Markov-based scheme to exploit correlation between bands. Using these models, we present two techniques for meeting these challenges: the kernel-based support vector data description (SVDD) and spectral fringe-adjusted joint transform correlation (SFJTC).
We have developed an algorithm that uses the kernel-based SVDD for full-pixel target detection scenarios. We optimize the SVDD kernel-width parameter using the golden-section search algorithm for unconstrained optimization. We investigated the number of signatures N to generate for the SVDD target class and found that only a small number of training samples is required relative to the dimensionality (number of bands). We have extended decision-level fusion techniques using the majority-vote rule to alleviate the problem of selecting a proper value of σ² for either of our target variability models. We have shown that heavy spectral variability may degrade SFJTC-based detection and have addressed this by developing an algorithm that selects an optimal combination of the discrete wavelet transform (DWT) coefficients of the signatures as features for detection.
For most scenarios, our results show that our SVDD-based detection scheme provides low false-positive rates while maintaining higher true-positive rates than popular stochastic detection algorithms. Our results also show that our SFJTC-based detection scheme using the DWT coefficients can yield significant detection improvement compared with SFJTC on the original signatures and with traditional stochastic and deterministic algorithms.
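A sketch of golden-section search as used above for tuning the kernel width; the objective below is a stand-in with a single minimum, whereas the thesis would score detection performance as a function of the Gaussian kernel width:

```python
# Golden-section search for a 1-D unimodal minimum, as used to tune the
# SVDD kernel width. The objective is a stand-in, not the thesis's metric.
import math

def golden_section_search(f, lo, hi, tol=1e-5):
    """Minimize a unimodal f on [lo, hi] by golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                            # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                  # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Stand-in objective with a single minimum at sigma = 1.3.
objective = lambda sigma: (sigma - 1.3) ** 2 + 0.1
print("best kernel width:", round(golden_section_search(objective, 0.01, 10.0), 4))
```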
10.
Acceleration of Vector and Cryptographic Operations on the x86-64 Platform. Šlenker, Samuel. January 2017.
The aim of this thesis was to study and compare older and newer SIMD processing units of modern microprocessors on the x86-64 platform. The thesis provides an overview of fast SIMD implementations of vector and matrix operations, including the corresponding source code. It further focuses on authenticated encryption, specifically the AES block cipher operating in Galois/Counter Mode (GCM), and discusses the instruction-set extensions available for cryptographic support.
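A minimal sketch of the authenticated-encryption mode discussed above, AES in Galois/Counter Mode via the Python cryptography package; the thesis benchmarks SIMD-accelerated implementations, and this only demonstrates the mode's encrypt-then-verify interface:

```python
# AES-GCM authenticated encryption via the `cryptography` package:
# encrypts the plaintext and authenticates both it and the associated data.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                    # 96-bit nonce; never reuse per key
plaintext = b"vector payload"
aad = b"header: authenticated but not encrypted"

ciphertext = aesgcm.encrypt(nonce, plaintext, aad)   # ciphertext || 16-byte tag
recovered = aesgcm.decrypt(nonce, ciphertext, aad)   # raises on tampering
assert recovered == plaintext
print("ciphertext+tag length:", len(ciphertext))
```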