221

Machine Learning for Spatial Positioning for XR Environments

Alraas, Khaled January 2024 (has links)
This bachelor's thesis explores the integration of machine learning (ML) with sensor fusion techniques to enhance spatial data accuracy in Extended Reality (XR) environments. As XR reshapes a growing range of sectors, accurate localization within virtual environments becomes imperative. The thesis conducts a comprehensive literature review, highlighting advances in indoor positioning technologies and the pivotal role of machine learning in refining sensor fusion for precise localization, and it underscores the challenges in the XR field, such as signal interference, device heterogeneity, and data processing complexity. Through critical analysis, the study aims to bridge the gap between ML theory and practical application, offering insights into how advanced machine learning techniques can be integrated into XR applications and into the development of scalable solutions for immersive virtual productions, with implications for technology development and user experience in XR. The contribution is not merely theoretical: it showcases practical applications and advances in real-time processing and adaptability in complex environments, aligning with existing research while extending it to address scalability and practical implementation challenges in XR. Key themes identified include the enhancement of spatial data accuracy, the challenges of real-time processing, and the need for scalable solutions. The thesis concludes that fusing ML with sensor technologies not only improves the accuracy of XR environments but also paves the way for more immersive and realistic virtual experiences.
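To make the sensor-fusion idea concrete, the sketch below shows a minimal complementary filter that blends an IMU dead-reckoning estimate with an absolute (e.g. radio- or vision-based) position fix. It is an illustrative toy under assumed values for the blending weight, not the method developed in the thesis.

```python
import numpy as np

def fuse_position(imu_position, fix_position, alpha=0.9):
    """Complementary filter: trust the smooth IMU estimate short-term and
    pull it toward the absolute fix to cancel drift (alpha is an assumed weight)."""
    return alpha * np.asarray(imu_position) + (1.0 - alpha) * np.asarray(fix_position)

# Toy example: a drifting dead-reckoned position corrected by a noisy radio fix.
imu_track = np.array([1.02, 2.05, 0.31])   # metres, IMU dead reckoning
radio_fix = np.array([0.95, 1.98, 0.30])   # metres, e.g. a UWB/Wi-Fi estimate
print(fuse_position(imu_track, radio_fix))
```

In practice the weight would be tuned (or learned) per sensor quality, which is where ML-driven fusion enters the picture.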
222

Estimating Per-pixel Classification Confidence of Remote Sensing Images

Jiang, Shiguo 19 December 2012 (has links)
No description available.
223

Spatially Correlated Data Accuracy Estimation Models in Wireless Sensor Networks

Karjee, Jyotirmoy January 2013 (has links) (PDF)
One of the major applications of wireless sensor networks is to sense accurate and reliable data from the physical environment, with or without a priori knowledge of the data statistics. To extract accurate data from the physical environment, we investigate spatial data correlation among sensor nodes and develop data accuracy models. We propose three such models with a priori knowledge of data statistics: the Estimated Data Accuracy (EDA) model, the Cluster based Data Accuracy (CDA) model and the Distributed Cluster based Data Accuracy (DCDA) model. Because sensor nodes are deployed at high density, the observed data are highly correlated among nodes, which form distributed clusters in space. We describe two clustering algorithms, the Deterministic Distributed Clustering (DDC) algorithm and the Spatial Data Correlation based Distributed Clustering (SDCDC) algorithm, implemented under the CDA and DCDA models respectively. Moreover, because of this correlation, the data collected by sensor nodes are redundant, so it is not necessary for all sensor nodes to transmit their highly correlated data to the central node (sink node or cluster head node). An optimal subset of sensor nodes is sufficient to measure and transmit accurate, precise data to the central node, which reduces data redundancy, energy consumption and data transmission cost, and thereby increases the lifetime of the sensor network. Finally, we propose a fourth accuracy model, the Adaptive Data Accuracy (ADA) model, which requires no a priori knowledge of data statistics. The ADA model senses a continuous data stream at regular time intervals, estimates accurate data from the environment and selects an optimal set of sensor nodes for data transmission. Data transmission can be reduced further for these optimal nodes by transmitting only a subset of the sensor data, using a methodology called the Spatio-Temporal Data Prediction (STDP) model under data reduction strategies. Furthermore, we implement the data accuracy model for the case where the network is under threat of malicious attack.
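As an illustration of the kind of correlation-based node selection described above (not the EDA/CDA/DCDA algorithms themselves), the hedged sketch below greedily keeps one representative node out of each group of highly correlated readings; the correlation threshold is an assumed value.

```python
import numpy as np

def select_representative_nodes(readings, threshold=0.95):
    """readings: array of shape (n_nodes, n_samples) of sensor time series.
    Greedily keep a node only if its readings are not highly correlated
    with any node already selected (threshold is an assumption)."""
    corr = np.corrcoef(readings)
    selected = []
    for node in range(readings.shape[0]):
        if all(abs(corr[node, kept]) < threshold for kept in selected):
            selected.append(node)
    return selected

# Toy example: nodes 0 and 1 report nearly identical data, node 2 differs.
data = np.array([[1.0, 2.0, 3.0, 4.0],
                 [1.1, 2.1, 3.0, 4.1],
                 [4.0, 1.0, 0.5, 2.0]])
print(select_representative_nodes(data))  # -> [0, 2]
```

Only the selected nodes would transmit to the sink or cluster head, mirroring the redundancy-reduction goal of the abstract.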
224

The state of spatial information for land reform in South Africa : a case study of the Amantungwa Land Reform project.

Kubheka, Sipho. January 2006 (has links)
Many authors and practitioners involved in rural or local development agree that co-operation and the integration of efforts by the delivery agents are crucial for sustainable development programmes. The delivery of Land Reform as initiated by the new government in South Africa (SA) is one programme that has faced a number of challenges, including the slow pace of delivery, lack of support and co-operation from key stakeholders, and negligible impact on the improvement of the lives of its beneficiaries, among others. Many Land Reform participants, including the government, argue that among the challenges facing this programme is a lack of co-operation between the key stakeholders, including the different spheres of government involved in or affected by the delivery of the Land Reform programme. The Department of Land Affairs (DLA), which is responsible for Land Reform delivery, faces challenges in integrating Land Reform with the rural or local-level development that is facilitated by local and district municipalities through the Integrated Development Planning (IDP) process. This thesis examines how the Land Reform planning process and the internal spatial data systems within the DLA can be used to integrate Land Reform delivery with the municipal IDP processes to attain integrated rural development. There is a growing realization that the development of integrated spatial data is critical for sustainable development in SA. A number of initiatives have been embarked upon by various organizations to establish a spatial data infrastructure; however, these efforts are often reported to be fragmented and isolated in their areas of operation and focus. The challenge, then, is to devise a strategy for building an integrated spatial data infrastructure that can support sustainable development programmes such as the Land Reform programme. This thesis therefore examines the various data sources, particularly within the DLA and from other organs of state involved in Land Reform and local development, with a view to highlighting the limitations and shortcomings that can be addressed by an integrated spatial data infrastructure. To assess the current status of spatial data sources and their usage for Land Reform implementation, the spatial data sources within the DLA were analysed to determine their suitability for the development of an integrated spatial data infrastructure. Different sections of the DLA responsible for acquiring and providing spatial data were assessed to ascertain whether their data can be shared, transferred or integrated to support Land Reform implementation. An integrated spatial data infrastructure is then proposed as a solution to forge co-operation and collaboration among all users involved in Land Reform implementation. / Thesis (M.Sc.) - University of KwaZulu-Natal, Pietermaritzburg, 2006.
225

Fusion de données géoréférencées et développement de services interopérables pour l’estimation des besoins en eau à l’échelle des bassins versants / Geospatial data fusion and development of interoperable services to assess water needs at watershed scale

Beaufils, Mickaël 04 December 2012 (has links)
Nowadays, preservation of the environment is a main priority. Understanding environmental phenomena requires the study and combination of an increasing number of heterogeneous data. Several international initiatives (INSPIRE, GEOSS) aim to encourage the sharing and exchange of those data. In this thesis, the interest of making scientific models available on the web is discussed. The value of using applications based on geospatial data is demonstrated, and methods and means that satisfy the requirements of interoperability are proposed. Our approach is illustrated by the implementation of models for estimating agricultural and domestic water requirements; those models can be used at different spatial scales and temporal granularities. A prototype based on a fully web-service-oriented architecture was developed. The tool is based on the OGC standards Web Feature Service (WFS), Sensor Observation Service (SOS) and Web Processing Service (WPS). Finally, taking into account the imperfections of the data is also addressed, with the integration of methods for sensitivity analysis and uncertainty propagation.
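For readers unfamiliar with the OGC services named above, the hedged sketch below issues a standard WFS 2.0 GetFeature request using key-value-pair parameters; the endpoint URL and feature type name are placeholders, not services from the thesis.

```python
import requests  # third-party HTTP client

# Hypothetical WFS endpoint and feature type; replace with a real service.
WFS_URL = "https://example.org/geoserver/wfs"

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "demo:watersheds",   # placeholder feature type
    "outputFormat": "application/json",
    "count": 10,                      # limit the number of returned features
}

response = requests.get(WFS_URL, params=params, timeout=30)
response.raise_for_status()
features = response.json()["features"]
print(f"Fetched {len(features)} watershed features")
```

SOS and WPS requests follow the same key-value pattern (GetObservation and Execute respectively), which is what makes a fully service-oriented prototype practical to chain together.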
226

Une approche automatisée basée sur des contraintes d’intégrité définies en UML et OCL pour la vérification de la cohérence logique dans les systèmes SOLAP : applications dans le domaine agri-environnemental / An automated approach based on integrity constraints defined in UML and OCL for the verification of logical consistency in SOLAP systems : applications in the agri-environmental field

Boulil, Kamal 26 October 2012 (has links)
Spatial Data Warehouse (SDW) and Spatial OLAP (SOLAP) systems are Business Intelligence (BI) technologies allowing interactive multidimensional analysis of huge volumes of spatial data. In such systems the quality of analysis mainly depends on three components: the quality of the warehoused data, the quality of data aggregation, and the quality of data exploration. Warehoused data quality depends on criteria such as accuracy, completeness and logical consistency. Data aggregation quality is affected by structural problems (e.g., non-strict dimension hierarchies that may cause double-counting of measure values) and semantic problems (e.g., summing temperature values does not make sense in many applications). Data exploration quality is mainly affected by inconsistent user queries (e.g., what were the temperature values in the USSR in 2010?), which can lead to meaningless interpretations of query results. This thesis addresses the problems of logical inconsistency that may affect data, aggregation and exploration quality in SOLAP systems.
Logical inconsistency is usually defined as the presence of incoherencies (contradictions) in the data; it is typically controlled by means of Integrity Constraints (IC). In this thesis, we extend the notion of IC in the SOLAP domain in order to take aggregation and query incoherencies into account. To overcome the limitations of existing approaches to the definition of SOLAP IC, we propose a framework based on the standard languages UML and OCL. Our framework permits platform-independent conceptual design and automatic implementation of SOLAP IC. It consists of three parts: (1) a classification of SOLAP IC; (2) a UML profile implemented in the CASE tool MagicDraw, allowing conceptual design of SOLAP models and their IC; (3) an automatic implementation based on the code generators Spatial OCL2SQL and UML2MDX, which translates the conceptual specifications into code at the SDW and SOLAP server layers. Finally, the contributions of this thesis have been applied and validated in the context of French national projects aimed at developing (S)OLAP applications for agriculture and the environment.
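To illustrate one of the structural problems mentioned above, the sketch below checks a dimension hierarchy for non-strictness (a member with more than one parent), the situation that can lead to double-counting during aggregation; it is a simplified illustration, not the UML/OCL framework proposed in the thesis.

```python
from collections import defaultdict

def non_strict_members(child_parent_links):
    """child_parent_links: iterable of (child, parent) pairs in a hierarchy.
    Returns the members with more than one parent, i.e. the members that
    make the hierarchy non-strict and risk double-counting measures."""
    parents = defaultdict(set)
    for child, parent in child_parent_links:
        parents[child].add(parent)
    return {child for child, ps in parents.items() if len(ps) > 1}

# Toy geography hierarchy: a district assigned to two regions at once.
links = [("District A", "Region 1"),
         ("District A", "Region 2"),   # second parent -> non-strict
         ("District B", "Region 1")]
print(non_strict_members(links))  # -> {'District A'}
```

In the thesis such rules are expressed declaratively as integrity constraints and generated into the warehouse and OLAP layers rather than hand-coded.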
227

Designing conventional, spatial, and temporal data warehouses: concepts and methodological framework

Malinowski Gajda, Elzbieta 02 October 2006 (has links)
Decision support systems are interactive, computer-based information systems that provide data and analysis tools to assist managers at different levels of an organization in the process of decision making. Data warehouses (DWs) have been developed and deployed as an integral part of decision support systems.

A data warehouse is a database that stores the high volume of historical data required for analytical purposes. This data is extracted from operational databases, transformed into a coherent whole, and loaded into a DW during the extraction-transformation-loading (ETL) process.

DW data can be dynamically manipulated using on-line analytical processing (OLAP) systems. DW and OLAP systems rely on a multidimensional model that includes measures, dimensions, and hierarchies. Measures are usually numeric additive values used for quantitative evaluation of different aspects of an organization. Dimensions provide different analysis perspectives, while hierarchies allow measures to be analyzed at different levels of detail.

Nevertheless, designers as well as users currently find it difficult to specify the multidimensional elements required for analysis. One reason is the lack of conceptual models for DW and OLAP system design that would allow data requirements to be expressed at an abstract level without considering implementation details. Another problem is that many kinds of complex hierarchies arising in real-world situations are not addressed by current DW and OLAP systems.

To help designers build conceptual models for decision-support systems and to help users better understand the data to be analyzed, this thesis proposes the MultiDimER model, a conceptual model for representing multidimensional data for DW and OLAP applications. The model is mainly based on existing ER constructs, for example entity types, attributes and relationship types with their usual semantics, allowing the common concepts of dimensions, hierarchies, and measures to be represented. It also includes a conceptual classification of the different kinds of hierarchies existing in real-world situations and proposes graphical notations for them.

On the other hand, users of DW and OLAP systems increasingly demand the inclusion of spatial data, whose visualization allows patterns to be revealed that are difficult to discover otherwise. However, although DWs typically include a spatial or location dimension, this dimension is usually represented in an alphanumeric format. Furthermore, there is still a lack of systematic study of the inclusion and management of hierarchies and measures represented using spatial data.

With the aim of satisfying the growing requirements of decision-making users, we extend the MultiDimER model by allowing spatial data in the different elements composing the multidimensional model. The novelty of our contribution lies in the fact that a multidimensional model is seldom used for representing spatial data. To succeed with our proposal, we applied research achievements in the field of spatial databases to the specific features of a multidimensional model. The spatial extension of a multidimensional model raises several issues addressed in this thesis, such as the influence of the different topological relationships between the spatial objects forming a hierarchy on the procedures required for measure aggregation, the aggregation of spatial measures, and the inclusion of spatial measures without the presence of spatial dimensions, among others.

Moreover, one of the important characteristics of multidimensional models is the presence of a time dimension for keeping track of changes in measures. However, this dimension cannot be used to model changes in other dimensions; usual multidimensional models are therefore not symmetric in the way they represent changes to measures and dimensions. Further, there is still a lack of analysis indicating which concepts already developed for providing temporal support in conventional databases can be applied to, and be useful for, the different elements composing a multidimensional model.

In order to handle temporal changes to all elements of a multidimensional model in a similar manner, we introduce a temporal extension of the MultiDimER model. This extension is based on research in the area of temporal databases, which have been successfully used for modeling time-varying information for several decades. We propose the inclusion of different temporal types, such as valid and transaction time, which are obtained from source systems, in addition to the DW loading time generated in DWs. We use this temporal support for a conceptual representation of time-varying dimensions, hierarchies, and measures. We also refer to specific constraints that should be imposed on time-varying hierarchies and to the problem of handling multiple time granularities between source systems and DWs.

Furthermore, the design of DWs is not an easy task. It requires considering all phases from requirements specification to final implementation, including the ETL process. It should also take into account that the inclusion of different data items in a DW depends both on users' needs and on data availability in source systems. Currently, however, designers must rely on their experience, due to the lack of a methodological framework that considers the above-mentioned aspects.

In order to assist developers during the DW design process, we propose a methodology for the design of conventional, spatial, and temporal DWs. We cover the phases of requirements specification and conceptual, logical, and physical modeling. We include three different methods for requirements specification, depending on whether users, operational data sources, or both are the driving force in the process of requirements gathering, and we show how each method leads to the creation of a conceptual multidimensional model. We also present the logical and physical design phases covering DW structures and the ETL process.

To ensure the correctness of the proposed conceptual models, i.e. with conventional data, with spatial data, and with time-varying data, we formally define them, providing their syntax and semantics. With the aim of assessing the usability of our conceptual model, including the representation of different kinds of hierarchies as well as spatial and temporal support, we present real-world examples. Pursuing the goal that the proposed conceptual solutions can be implemented, we include their logical representations using relational and object-relational databases. / Doctorate in applied sciences / info:eu-repo/semantics/nonPublished
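As a rough illustration of the multidimensional concepts discussed above (dimensions, hierarchies, and measures with their aggregation functions), the sketch below models a tiny star-schema-like structure in plain Python; the schema and names are invented for illustration and are unrelated to the MultiDimER notation itself.

```python
from dataclasses import dataclass

@dataclass
class Level:
    name: str                      # e.g. "Store", "City", "Region"

@dataclass
class Dimension:
    name: str
    hierarchy: list                # ordered list of Level, finest to coarsest

@dataclass
class Measure:
    name: str
    aggregation: str               # "sum" suits sales; temperature needs "avg"

# Hypothetical schema: a store dimension with a geographic hierarchy.
store_dim = Dimension("Store", [Level("Store"), Level("City"), Level("Region")])
sales = Measure("SalesAmount", aggregation="sum")
temperature = Measure("Temperature", aggregation="avg")  # summing would be meaningless

print([lvl.name for lvl in store_dim.hierarchy], sales.aggregation, temperature.aggregation)
```

The spatial and temporal extensions described in the abstract would enrich such elements with geometry types and valid/transaction time rather than change this basic structure.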
228

Optical Remote Measurements of Particles in Emission Gas Plumes

Wang, Weihua January 2017 (has links)
This project uses temporal and spatial data analysis to relate measured air pollutant concentrations to aerosol traces, indicated by the intensity ratio (IR) between selected wavelengths. The surveys were performed during a field campaign of several days of mobile optical remote sensing measurements around industrial areas and the port of Tianjin, China. The spectroscopic data were recorded by different spectrometers, of which the two most mature techniques are Differential Optical Absorption Spectroscopy (DOAS) and the Solar Occultation Flux (SOF) method. In addition, a third measuring approach, using a compact 'Flame' spectrometer, was explored to detect sulphur dioxide (SO2) traces. Besides the spectrometers, auxiliary data were taken from a wind meter and a GPS tracker. To construct integrated geographic information, data inspection, cleaning and merging were applied extensively, based on physical modeling.
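As a simple illustration of the intensity-ratio idea mentioned above, the sketch below computes the ratio of mean measured intensity in two wavelength bands of a recorded spectrum; the band limits and the synthetic spectrum are placeholder values, not the ones used in the campaign.

```python
import numpy as np

def intensity_ratio(wavelengths, intensities, band_a, band_b):
    """Mean intensity in band_a divided by mean intensity in band_b.
    Bands are (min_nm, max_nm) tuples; the values here are assumptions."""
    wavelengths = np.asarray(wavelengths)
    intensities = np.asarray(intensities)
    in_a = (wavelengths >= band_a[0]) & (wavelengths <= band_a[1])
    in_b = (wavelengths >= band_b[0]) & (wavelengths <= band_b[1])
    return intensities[in_a].mean() / intensities[in_b].mean()

# Synthetic spectrum between 300 and 500 nm.
wl = np.linspace(300, 500, 1000)
spectrum = 1.0 + 0.1 * np.sin(wl / 10.0)
print(intensity_ratio(wl, spectrum, band_a=(330, 340), band_b=(440, 450)))
```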
229

Classification de données multivariées multitypes basée sur des modèles de mélange : application à l'étude d'assemblages d'espèces en écologie / Model-based clustering for multivariate and mixed-mode data : application to multi-species spatial ecological data

Georgescu, Vera 17 December 2010 (has links)
In population ecology, the spatial distributions of species are studied in order to infer the existence of underlying processes, such as intra- and interspecific interactions and species responses to environmental heterogeneity. We propose to analyze spatial multi-species data in terms of species assemblages, which we consider in terms of absolute abundances rather than species diversity. Species assemblages are one of the signatures of the local spatial interactions of species with each other and with their environment, and their study can reveal several types of spatial equilibria and associate them with the effect of environmental variables. Species assemblages are defined here by a non-spatial classification of the multivariate observations of species abundances. Model-based clustering procedures using mixture models were chosen in order to have a measure of the classification uncertainty and to model an assemblage by a multivariate probability distribution. In this framework, we propose: 1. An exploratory method for the analysis of spatial multivariate observations of species abundances, which detects species assemblages by model-based clustering, maps them, and analyzes their spatial structure; common distributions, such as the multivariate Gaussian, are used to model the assemblages. 2. A hierarchical model for abundance assemblages that cannot be modeled with common distributions; this model can easily be adapted to mixed-mode data (variables of different types), which are frequent in ecology. 3. A clustering procedure for mixed-mode data based on mixtures of these hierarchical models.
Two ecological case studies guided and illustrated this work: the small-scale study of the assemblages of two aphid species on leaves of clementine (Citrus) trees, and the large-scale study of the assemblages of a host plant, Plantago lanceolata, and its pathogen, the powdery mildew, on the Aland islands in south-west Finland.
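Since the abstract describes model-based clustering with multivariate Gaussian mixtures, a minimal hedged sketch of that general approach (not the thesis's hierarchical models) is shown below, using scikit-learn on made-up abundance data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Made-up abundance counts of two species at 200 sampling units,
# drawn from two synthetic "assemblages".
low = rng.poisson(lam=[2, 15], size=(100, 2))
high = rng.poisson(lam=[20, 3], size=(100, 2))
abundances = np.vstack([low, high]).astype(float)

# Fit a two-component Gaussian mixture; each component plays the role of
# an assemblage, and predict_proba exposes the classification uncertainty.
gmm = GaussianMixture(n_components=2, random_state=0).fit(abundances)
labels = gmm.predict(abundances)
uncertainty = 1.0 - gmm.predict_proba(abundances).max(axis=1)
print(labels[:5], uncertainty[:5].round(3))
```

The per-observation uncertainty is exactly what motivates the mixture-model choice over hard clustering methods.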
230

Client-Server Communications Efficiency in GIS/NIS Applications : An evaluation of communications protocols and serialization formats / Kommunikationseffektivitet mellan klient och server i GIS/NIS-applikationer : En utvärdering av kommunikationsprotokoll och serialiseringsformat

Klingestedt, Kashmir January 2018 (has links)
Geographic Information Systems and Network Information Systems are important tools for our society, used for handling geographic spatial data and large information networks. It is therefore important to make sure such tools are of high quality. GIS/NIS applications typically deal with a lot of data, possibly resulting in heavy loads of network traffic. This work aims to evaluate two different communications protocols and serialization formats for client-server communications efficiency in GIS/NIS applications. Specifically, these are HTTP/1.1, HTTP/2, Java Object Serialization and Google's Protocol Buffers. They were each implemented directly into a commercial GIS/NIS environment and evaluated by measuring two signature server calls in the system. The metrics examined are call duration, HTTP overhead size and HTTP payload size. The results suggest that HTTP/2 and Google's Protocol Buffers outperform HTTP/1.1 and Java Object Serialization respectively. An 87% decrease in HTTP overhead size was achieved when switching from HTTP/1.1 to HTTP/2. The HTTP payload size is also shown to decrease with the use of Protocol Buffers rather than Java Object Serialization, especially for communications where data consist of many different object types. Concerning call duration, the results suggest that the choice of communications protocol is more significant than the choice of serialization format for communications containing little data, while the opposite is true for communications containing much data.
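The kind of measurement the abstract reports (call duration and transferred bytes per server call) can be sketched roughly as below; this is a generic illustration with a placeholder URL, not the instrumented Java GIS/NIS environment used in the thesis, and the header-size estimate ignores framing differences between HTTP/1.1 and HTTP/2.

```python
import time
import requests

def measure_call(url, params=None):
    """Time one HTTP request and report how many payload bytes came back.
    The endpoint is a placeholder; header overhead is approximated from
    the decoded response headers only."""
    start = time.perf_counter()
    response = requests.get(url, params=params, timeout=30)
    duration = time.perf_counter() - start
    payload_bytes = len(response.content)
    header_bytes = sum(len(k) + len(v) for k, v in response.headers.items())
    return duration, payload_bytes, header_bytes

duration, payload, overhead = measure_call("https://example.org/api/features")
print(f"{duration:.3f} s, {payload} payload bytes, ~{overhead} header bytes")
```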
