11

A new approach for Enterprise Application Architecture for Financial Information Systems : An investigation of the architectural implications of adopting serialization and RPC frameworks, NoSQL/hybrid data stores and heterogeneous computing in Financial Information Systems

Eriksson, Peter January 2015 (has links)
This thesis investigates the architectural implications of adopting serialisation and remote procedure call (RPC) frameworks, NoSQL/hybrid data stores and heterogeneous computing in financial information systems. Each technology and its implications are analysed separately, together with its benefits and drawbacks for the system implementor. The investigation shows that all three technologies can help alleviate technical challenges facing financial enterprises, but they all come at a cost of complexity.
12

Enabling Digital Twins : A comparative study on messaging protocols and serialization formats for Digital Twins in IoV / Att möjliggöra digitala tvillingar

Persson Proos, Daniel January 2019 (has links)
In this thesis, the trade-offs between latency and transmitted data volume in vehicle-to-cloud communication for different choices of application layer messaging protocols and binary serialization formats are studied. This is done with the purpose of achieving enough performance improvement to enable delay-sensitive Intelligent Transport System (ITS) features, and to reduce data usage in mobile networks. The studied protocols are Constrained Application Protocol (CoAP), Advanced Message Queuing Protocol (AMQP) and Message Queuing Telemetry Transport (MQTT), and the serialization formats studied are Protobuf and Flatbuffers. The results show that CoAP — the only User Datagram Protocol (UDP) based protocol — has the lowest latency and overhead, but cannot guarantee reliable transfer. The best performer that can guarantee reliable transfer is MQTT. For the serialization formats, Protobuf is shown to produce serialized messages three times smaller than Flatbuffers, and to serialize faster. Flatbuffers is the winner in terms of memory use and deserialization time, which could make up for the poorer performance in other aspects of data processing in the cloud. Further, the implications of these results for ITS communication are discussed, and suggestions are made for future research topics.
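To make the size trade-off concrete, the sketch below (not code from the thesis) encodes one hypothetical vehicle-telemetry sample both as JSON text and as a fixed-layout binary record — roughly the kind of gap that motivates binary serialization formats such as Protobuf and Flatbuffers. All field names and values are invented for illustration.

```java
// Illustrative sketch only: one hypothetical telemetry sample encoded as JSON text
// and as a fixed-layout binary record, to show the kind of payload-size difference
// the thesis measures between text and binary serialization formats.
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Locale;

public class PayloadSizeSketch {
    public static void main(String[] args) {
        long vehicleId = 4711L;                       // invented values
        double lat = 58.4108, lon = 15.6214, speedKmh = 87.5;
        long timestampMs = System.currentTimeMillis();

        // Text encoding: carries the field names in every single message.
        String json = String.format(Locale.ROOT,
            "{\"vehicleId\":%d,\"lat\":%.6f,\"lon\":%.6f,\"speed\":%.1f,\"ts\":%d}",
            vehicleId, lat, lon, speedKmh, timestampMs);
        int jsonBytes = json.getBytes(StandardCharsets.UTF_8).length;

        // Binary encoding: sender and receiver agree on the field layout up front,
        // so only the values travel over the network (the idea behind Protobuf/Flatbuffers).
        ByteBuffer binary = ByteBuffer.allocate(2 * Long.BYTES + 3 * Double.BYTES);
        binary.putLong(vehicleId).putDouble(lat).putDouble(lon)
              .putDouble(speedKmh).putLong(timestampMs);

        System.out.printf("JSON: %d bytes, fixed-layout binary: %d bytes%n",
                jsonBytes, binary.position());
    }
}
```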
13

Design and Implementation of Domain Modeling Language using Object Oriented Requirements : Case Study on Jeppesen's Modeling Language

Gangarapu, Rohan January 2022 (has links)
Background: Despite rapid technological progress, data models still have to be developed and maintained separately every time a system is extended. Current tooling cannot keep itself up to date; it needs regular manual intervention to keep working. In addition, considerable storage is needed to hold new designs and data because no single source manages them. Every organization faces this problem, and large, distributed organizations are affected the most. Companies have a lot of data to handle and keep track of, so the data modeling process should be quick, error-free and low-risk. Objectives: The aim of this thesis is to provide a way to solve this problem through automation. The outcome of the research not only helps to automate data modeling, it also stores the data in a way that does not waste memory on unnecessary data. Methods: This research employs a case study and a literature review. The case study was done at Jeppesen on the Dave modeling language. A literature review was undertaken to select an approach for extending the data model. A survey was carried out among Jeppesen employees (developers and users in the Dave teams) to identify Dave's limitations and areas for improvement. The findings were then implemented as a program that upgrades the Dave modeling language to meet new object-oriented demands. Results: The findings identify limitations of the existing Dave language and present an approach for automating its data modeling abilities by incorporating new features such as abstraction and inheritance, so that it can keep up with real-time environments. The data model is then created and maintained automatically, delivering rapid and correct modeling. The implementation strategy is drawn from the findings of the literature review. Conclusions: The existing data model requires extensive manual labour. By adding abstraction and inheritance to the data model, the new data model automates the process, reduces staffing needs, and runs with fewer risks.
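As a rough illustration of the abstraction-and-inheritance idea mentioned in the results, the sketch below shows how shared fields of a data model can be declared once in an abstract base type so that each concrete entity only adds what is specific to it. The entity names are hypothetical and are not taken from Jeppesen's Dave modeling language.

```java
// Minimal sketch, assuming invented entity types: shared model fields live in an
// abstract base class, concrete entities add only their own fields, and generic
// tooling (loaders, serializers) can work against the base type.
import java.time.Instant;

abstract class ModelEntity {
    // Fields shared by every entity are declared once instead of per entity.
    final String id;
    final Instant createdAt;

    ModelEntity(String id, Instant createdAt) {
        this.id = id;
        this.createdAt = createdAt;
    }
}

class CrewMember extends ModelEntity {
    final String rank;
    CrewMember(String id, Instant createdAt, String rank) {
        super(id, createdAt);
        this.rank = rank;
    }
}

class Aircraft extends ModelEntity {
    final String type;
    Aircraft(String id, Instant createdAt, String type) {
        super(id, createdAt);
        this.type = type;
    }
}

public class InheritanceSketch {
    public static void main(String[] args) {
        ModelEntity[] entities = {
            new CrewMember("c-1", Instant.now(), "captain"),
            new Aircraft("a-7", Instant.now(), "737-800"),
        };
        for (ModelEntity e : entities) {
            // Shared fields can be handled generically, e.g. by a serializer or loader.
            System.out.println(e.getClass().getSimpleName() + " " + e.id + " " + e.createdAt);
        }
    }
}
```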
14

Effective construction of data aggregation services in Java

Andersson, Fredrik, Cedergren Malmqvist, Simon January 2015 (has links)
Large quantities of data are generated daily by the end users of various services. This data is often provided by different providers, which creates a fragmented market where end users have to utilize multiple applications in order to access all of their data. This can be counteracted by the development of aggregation services that gather data from multiple services at a combined endpoint. The development of these kinds of services does, however, run the risk of becoming costly and time-consuming, since new code is written for several projects even though large portions of the functionality are similar. To avoid this, established technologies and frameworks can be utilized, thereby reusing the more general components. Which of these technologies is best suited, and thereby can be considered the most effective from a development perspective, can however be difficult to determine. This essay is therefore based on what can be considered an academic consensus, established through analysis of literature on earlier research on the subject. Before the writing of the essay began, a Java-based data aggregation service was developed, based on requirements from the company ÅF in Malmö. The purpose of this experimental implementation is to gather data from two separate services and make it accessible on a unified endpoint. After the implementation was finished, work on the essay began. It consists of a literature review investigating which technologies and frameworks academic research has found best suited for this area of application. The results from this study are also used to analyze the extent to which they correlate with the requirements presented by ÅF for the experimental implementation. The literature review shows that the choices made by the company largely correlate with the technologies that academic research has found best suited for this area of application. This includes OAuth 2.0 for authentication, JSON as a serialization format and REST as a communications architecture. The literature review also indicates a possible gap within the available academic literature, since searches regarding specific pieces of software related to the subject yield only a small number of articles.
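A minimal sketch of the aggregation pattern described above, assuming two hypothetical REST services and a placeholder OAuth 2.0 access token; it is not the ÅF implementation, error handling is reduced to a bare minimum, and a real service would parse and merge the JSON with a proper library.

```java
// Hedged sketch of a data aggregation service: fetch JSON from two separate REST
// services and combine the responses into one document. URLs and token are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AggregationSketch {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    static String fetchJson(String url, String bearerToken) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + bearerToken) // OAuth 2.0 access token
                .header("Accept", "application/json")
                .GET()
                .build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        String token = "example-access-token";                                // placeholder
        String a = fetchJson("https://service-a.example/api/data", token);    // hypothetical endpoint
        String b = fetchJson("https://service-b.example/api/data", token);    // hypothetical endpoint

        // Naive aggregation: wrap both payloads in a single JSON object exposed
        // at the unified endpoint. A real service would merge them with e.g. Jackson.
        String combined = "{\"serviceA\":" + a + ",\"serviceB\":" + b + "}";
        System.out.println(combined);
    }
}
```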
15

Contributions to the security of mobile agent systems / Contributions à la sécurité des systèmes d’agents mobiles

Idrissi, Hind 15 July 2016 (has links)
Recently, distributed computing has witnessed a great evolution due to the use of the mobile agent paradigm, endowed with innovative capabilities, instead of the client-server model where applications are bound to particular nodes in the network. Having captured the interest of researchers and industry, mobile agents are able to autonomously migrate from one node to another across the network, transferring their code and data, which allows them to efficiently perform computations, gather information and accomplish tasks. However, despite its significant benefits, this paradigm still suffers from limitations that obstruct its expansion, primarily in the area of security. According to the current efforts to investigate the security of mobile agents, two categories of threats are considered. The first concerns attacks carried out on the mobile agent during its travel or stay by malicious hosts or entities, while the second deals with attacks performed by a malicious mobile agent in order to affect the hosting platform and consume its resources. Thus, a complete security infrastructure for mobile agent systems is substantially needed, including methodology, techniques and validation. The aim of this thesis is to propose approaches that provide this technology with security features fitting its overall structure without compromising its mobility, interoperability and autonomy capabilities. Our first approach is based on XML serialization and cryptographic primitives, in order to ensure persistent mobility of the agent as well as secure communication with hosting platforms. In the second approach, we conceived an alternative to the first using binary serialization and identity-based cryptography. Our third approach introduces anonymity for the mobile agent and provides it with a tracing mechanism to detect intrusions along its trip. The fourth approach restricts access to the resources of the agent platform, using a well-defined access control policy based on threshold cryptography. At this stage, we investigated the utility of mobile agents with security features in preserving the security of other technologies, such as cloud computing. Thus, we developed an innovative cloud architecture using mobile agents endowed with cryptographic traces for intrusion detection and a revocation protocol based on a trust threshold for prevention.
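The sketch below only illustrates the building blocks named in the first approach — serializing an agent's state and protecting it with cryptographic primitives — using standard Java serialization and AES-GCM as stand-ins. It is not the protocol proposed in the thesis, and the agent state fields are hypothetical.

```java
// Illustrative sketch: serialize an agent's state, then AES/GCM-encrypt the bytes
// so a hosting platform cannot read or silently alter the state the agent carries.
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.security.SecureRandom;

public class AgentStateSketch {
    // Hypothetical agent state; any Serializable structure works the same way.
    record AgentState(String agentId, int hopCount, String collectedData) implements Serializable {}

    public static void main(String[] args) throws Exception {
        AgentState state = new AgentState("agent-42", 3, "partial results");

        // 1. Serialize the state the agent carries between hosts.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(state);
        }

        // 2. Encrypt the serialized state; GCM also provides integrity protection.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] protectedState = cipher.doFinal(bytes.toByteArray());

        System.out.println("Serialized: " + bytes.size() + " bytes, protected: "
                + protectedState.length + " bytes");
    }
}
```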
16

Towards RDF normalization / Vers une normalisation RDF

Ticona Herrera, Regina Paola 06 July 2016 (has links)
Over the past three decades, millions of people have been producing and sharing information on the Web. This information can be structured, semi-structured, and/or non-structured, such as blogs, comments, Web pages, and multimedia data, and requires a formal description to help its publication and/or exchange on the Web. To help address this problem, the World Wide Web Consortium (W3C) introduced in 1999 the RDF standard as a data model designed to standardize the definition and use of metadata, in order to better describe and handle data semantics, thus improving interoperability and scalability, and promoting the deployment of new Web applications. Currently, billions of RDF descriptions are available on the Web through the Linked Open Data cloud projects (e.g., DBpedia and LinkedGeoData). Also, several data providers have adopted the principles and practices of Linked Data to share, connect, enrich and publish their information using the RDF standard, e.g., governments (e.g., the Canadian government), universities (e.g., the Open University) and companies (e.g., BBC and CNN). As a result, both individuals and organizations are increasingly producing huge collections of RDF descriptions and exchanging them through different serialization formats (e.g., RDF/XML, Turtle, N-Triples, etc.). However, many available RDF descriptions (i.e., graphs and serializations) are noisy in terms of structure, syntax, and semantics, and thus may present problems when exploited (e.g., more storage, processing time, and loading time). In this study, we propose to clean RDF descriptions of redundancies and unused information, which we consider an essential and required stepping stone toward performing advanced RDF processing as well as developing RDF databases and related applications (e.g., similarity computation, mapping, alignment, integration, versioning, clustering, and classification). For that purpose, we have defined a framework entitled R2NR which normalizes different RDF descriptions pertaining to the same information into one normalized representation, which can then be tuned both at the graph level and at the serialization level, depending on the target application and user requirements. We illustrate this approach by introducing use cases (real and synthetic) that need to be normalized. The contributions of the thesis can be summarized as follows: i. producing a normalized (output) RDF representation that preserves all the information in the source (input) RDF descriptions; ii. eliminating redundancies and disparities in the normalized RDF descriptions, both at the logical (graph) and physical (serialization) levels; iii. computing an RDF serialization output adapted to the target application requirements (faster loading, better storage, etc.); iv. providing a mathematical formalization of the normalization process with dedicated normalization functions, operators, and rules with provable properties; and v. providing a prototype tool called RDF2NormRDF (desktop and online versions) in order to test and evaluate the approach's efficiency. In order to validate our framework, the prototype RDF2NormRDF has been tested through extensive experimentation. Experimental results are satisfactory and show significant improvements over existing approaches, notably regarding loading time and file size, while preserving all the information from the original description.
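As a toy illustration of what normalization removes, the sketch below deduplicates a handful of subject/predicate/object triples and puts them in a canonical order, so that equivalent inputs produce identical output. The vocabulary is invented; the actual R2NR framework operates on full RDF graphs and serialization formats.

```java
// Small sketch of the core idea: reduce redundant triples describing the same
// information to one canonical, duplicate-free, stably ordered representation.
import java.util.Comparator;
import java.util.List;
import java.util.TreeSet;

public class RdfNormalizationSketch {
    record Triple(String subject, String predicate, String object) {}

    public static void main(String[] args) {
        // Invented example data, with one redundant duplicate triple.
        List<Triple> input = List.of(
            new Triple("ex:Alice", "ex:worksFor", "ex:AcmeCorp"),
            new Triple("ex:Alice", "foaf:name", "\"Alice\""),
            new Triple("ex:Alice", "ex:worksFor", "ex:AcmeCorp")   // redundant
        );

        // Canonical form: duplicates removed, triples kept in a stable order so
        // that any two equivalent inputs serialize to identical output.
        TreeSet<Triple> normalized = new TreeSet<>(
            Comparator.comparing((Triple t) -> t.subject())
                      .thenComparing((Triple t) -> t.predicate())
                      .thenComparing((Triple t) -> t.object()));
        normalized.addAll(input);

        normalized.forEach(t ->
            System.out.println(t.subject() + " " + t.predicate() + " " + t.object() + " ."));
    }
}
```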
17

Aspects de l'identité narrative chez Witold Gombrowicz et Virginia Woolf

Nasr, Marwan 08 1900 (has links)
Drawing on Paul Ricœur's theory of narrative identity, this thesis examines the configuration of identity in the works Cosmos and Les envoûtés by W. Gombrowicz, as well as To the Lighthouse and The Waves by V. Woolf. On the one hand, we analyze the obsessive, alienating serialization that Witold manifests in Cosmos, followed by a case of doubling and loss of the self in the Other in Walczak (Les envoûtés). On the other hand, we examine the family turmoil between James Ramsay (To the Lighthouse) and his family members, as well as the adherence and anchoring of three protagonists in the development of a unique perception of their environment (The Waves). Ultimately, these characters' identities are shaped by individual stories and events that craft a singular narrative path and sense of being.
18

Polymediated Narrative: The Case of the Supernatural Episode "Fan Fiction"

Herbig, Art, Herrmann, Andrew F. 29 January 2016 (has links)
Modern stories are the product of a recursive process influenced by elements of genre, outside content, medium, and more. These stories exist in a multitude of forms and are transmitted across multiple media. This article examines how those stories function as pieces of a broader narrative, as well as how that narrative acts as a world for the creation of stories. Through an examination of the polymediated nature of modern narratives, we explore the complicated nature of modern storytelling.
20

JavaFX Scene Graph Object Serialization

Khodabandehloo, Elmira January 2013 (has links)
Data visualization is used in order to analyze and perceive patterns in data. One of the use cases of visualization is to graphically represent and compare simulation results. At Ericsson Research, a visualization platform based on JavaFX 2 is used to visualize simulation results. Three configuration files are required in order to create an application based on the visualization tool: XML, FXML, and CSS. The current problem is that, in order to set up a visualization application, the three configuration files must be written by hand, which is a very tedious task. The purpose of this study is to reduce the amount of work required to construct a visualization application by providing a serialization function which makes it possible to save the layout (FXML) of the application at run-time based solely on the scene graph. In this master's thesis, possible frameworks that might ease the implementation of a generic FXML serialization have been investigated, and the most promising alternative according to a number of evaluation metrics has been identified. Then, using a design science research method, an algorithm is proposed which is capable of generic object/bean serialization to FXML based on a number of features or requirements. Finally, the implementation results are evaluated through a set of test cases. The evaluation is composed of an analysis of the serialization results and tests, and a comparison of the expected results and the actual results using unit testing and test coverage measurements. Evaluation results for each serialization function show that the results of the serialization are similar to the original files, and hence the proposed algorithm provides the desired serialization functionality for the specific features of FXML needed for this platform, provided that the tests considered every aspect of the serialization functionality.
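A heavily reduced sketch of the bean-introspection idea behind such a serializer: readable and writable properties are written out as attributes on an element named after the class. It uses a plain stand-in bean instead of a real JavaFX node so that it runs without JavaFX, and it ignores child nodes, namespaces, default values and every other FXML feature the thesis algorithm has to handle.

```java
// Hedged sketch of bean-to-FXML-style serialization via java.beans introspection.
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class FxmlSerializationSketch {
    // Stand-in for a scene-graph node such as javafx.scene.control.Button.
    public static class ButtonLikeBean {
        private String text = "OK";
        private double prefWidth = 80.0;
        public String getText() { return text; }
        public void setText(String text) { this.text = text; }
        public double getPrefWidth() { return prefWidth; }
        public void setPrefWidth(double prefWidth) { this.prefWidth = prefWidth; }
    }

    static String toFxmlLikeElement(Object bean) throws Exception {
        StringBuilder sb = new StringBuilder("<" + bean.getClass().getSimpleName());
        // Inspect only properties declared below Object, with both getter and setter.
        BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getReadMethod() != null && pd.getWriteMethod() != null) {
                Object value = pd.getReadMethod().invoke(bean);
                sb.append(' ').append(pd.getName()).append("=\"").append(value).append('"');
            }
        }
        return sb.append("/>").toString();
    }

    public static void main(String[] args) throws Exception {
        // Prints something like: <ButtonLikeBean prefWidth="80.0" text="OK"/>
        System.out.println(toFxmlLikeElement(new ButtonLikeBean()));
    }
}
```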
