The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

An Improved Density-Based Clustering Algorithm Using Gravity and Aging Approaches

Al-Azab, Fadwa Gamal Mohammed January 2015
Density-based clustering is a well-known family of algorithms that groups samples according to their densities. In existing density-based clustering algorithms, samples are clustered according to the total number of points within the radius of the defined dense region. This way of determining density, however, provides little knowledge about the similarities among points. Additionally, these algorithms are not flexible enough to deal with dynamic data that changes over time. The current study addresses these challenges by proposing a new approach that incorporates new measures to evaluate attribute similarities while clustering incoming samples, rather than considering only the total number of points within a radius. The new approach is developed based on the notion of Gravity, where incoming samples are clustered according to the force exerted by their neighbouring samples. The Mass (density) of a cluster is measured using various approaches, including the number of neighbouring samples and the Silhouette measure. The neighbouring sample with the highest force is the one that pulls the new incoming sample into its cluster. Taking the attribute similarities of points into account provides more information by accurately defining the dense regions around incoming samples, and it determines the best neighbourhood to which a new sample belongs. In addition, the proposed algorithm introduces an approach, called Aging, that utilizes memory efficiently and forms clusters with different shapes over time when dealing with dynamic data: aged points that no longer participate in clustering incoming samples are removed, and the shapes of the clusters consequently change incrementally. Four experiments are conducted in this study to evaluate the performance of the proposed algorithm. Its performance and effectiveness are validated on a synthetic dataset (to visualize the changes of the clusters’ shapes over time) as well as on real datasets. The experimental results confirm that the proposed algorithm improves on performance measures including the Dunn Index and the SD Index, utilizes less memory, and can form clusters with arbitrary shapes that change over time.
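As a rough illustration of the gravity idea described above: each neighbouring point exerts a "force" on an incoming sample, here taken as cluster mass over squared distance, and the sample joins the cluster of its strongest attractor, while aging discards points that stop attracting new samples. The force formula, the mass measure, and the aging rule below are illustrative assumptions, not the thesis's exact definitions.

```rust
// Illustrative sketch of the gravity idea: an incoming sample is pulled toward
// the cluster whose member exerts the strongest "force" (cluster mass divided
// by squared distance), and points that stop attracting new samples are aged
// out. The force formula and the aging rule are assumptions for illustration.

struct Point {
    coords: Vec<f64>,
    last_used: u64, // last time this point attracted an incoming sample
}

struct Cluster {
    points: Vec<Point>,
}

fn sq_dist(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum()
}

/// Assign an incoming sample to the cluster pulling it in hardest.
fn assign(clusters: &mut [Cluster], sample: &[f64], now: u64) -> Option<usize> {
    let mut best: Option<(usize, usize, f64)> = None; // (cluster, point, force)
    for (ci, c) in clusters.iter().enumerate() {
        let mass = c.points.len() as f64; // stand-in for the thesis's Mass/density measure
        for (pi, p) in c.points.iter().enumerate() {
            let force = mass / (sq_dist(&p.coords, sample) + 1e-9);
            if best.map_or(true, |(_, _, f)| force > f) {
                best = Some((ci, pi, force));
            }
        }
    }
    let (ci, pi, _) = best?;
    clusters[ci].points[pi].last_used = now; // this neighbour participated in clustering
    clusters[ci].points.push(Point { coords: sample.to_vec(), last_used: now });
    Some(ci)
}

/// Aging: drop points that have not attracted any incoming sample recently,
/// so cluster shapes can change incrementally over time.
fn age(clusters: &mut [Cluster], now: u64, max_age: u64) {
    for c in clusters.iter_mut() {
        c.points.retain(|p| now.saturating_sub(p.last_used) <= max_age);
    }
}
```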
2

Storage Efficient Code on Microcontrollers

Tågerud, Hampus January 2018
In this paper, a more storage-efficient way of running code on microcontrollers is presented, implemented, and compared against the conventional method. The method involves utilising a jump table, and the objective is to be able to execute larger amounts of code than fits into the program memory of the microcontroller without losing too much performance. In conclusion, there is no obvious answer to whether the implemented system is a viable alternative to traditional applications or not; more variables than just performance are brought up and must be considered when a system is implemented. However, the developed prototype introduced a minor overhead of about 1%. It could therefore be concluded that the prototype is, performance-wise, a viable alternative to the conventional way of running applications.
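The abstract does not detail the mechanism, but the core ingredient is a jump table: callers refer to routines by an index, and a table of entry points decides what actually runs. The sketch below is a plain Rust illustration of such a table; treating entries as stubs that page code in from external storage is an assumption on our part, not something the abstract states.

```rust
// A minimal jump-table dispatcher: callers refer to routines by index, and the
// table decides what actually runs. On a microcontroller this indirection is
// what would allow entries to point at stubs that page code in from external
// storage (an assumption here, not detailed in the abstract).

type Handler = fn(u32) -> u32;

fn op_add_one(x: u32) -> u32 { x + 1 }
fn op_double(x: u32) -> u32 { x * 2 }
fn op_square(x: u32) -> u32 { x * x }

// The jump table itself: a fixed array of function pointers.
static JUMP_TABLE: [Handler; 3] = [op_add_one, op_double, op_square];

fn dispatch(op: usize, arg: u32) -> Option<u32> {
    JUMP_TABLE.get(op).map(|&h| h(arg))
}

fn main() {
    assert_eq!(dispatch(1, 21), Some(42)); // op 1 = double
    assert_eq!(dispatch(7, 1), None);      // unknown op: outside the table
}
```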
3

A performance investigation into JavaScript visualization libraries with the focus on render time and memory usage : A performance measurement of different libraries and statistical charts

Boström, Fredrik, Dahlberg, Alexander, Linderoth, William January 2022
Visualizing data is important to make it easier to understand. Due to the broad accessibility and popularity of the web, web-based visualization is growing. One common and easy way to do this is by using an existing JavaScript library. When developing such a website, some important properties to keep in mind are the render time and the memory usage. This study measured the render time with different sizes of the dataset, the render time when rendering different numbers of charts with the same dataset size, and the memory usage with different sizes of the dataset. Six libraries were chosen to be included in this study: D3, Echarts, CanvasJS, Chartist, Highcharts, and Plotly. An experiment was performed to test the libraries' render times and memory usage. The results of the experiment show that D3 had the overall lowest render times, whilst CanvasJS had the lowest memory usage.
4

Lyra: uma função de derivação de chaves com custos de memória e processamento configuráveis. / Lyra: password-based key derivation with tunable memory and processing costs.

Almeida, Leonardo de Campos 16 March 2016
This document presents Lyra, a password-based key derivation scheme based on cryptographic sponges. Lyra was designed to be strictly sequential, providing strong security even against attackers that use multiple processing cores, such as FPGAs or GPUs. At the same time, it is very simple to implement in software and allows legitimate users to tune its memory and processing costs according to the desired level of security. We compare Lyra with scrypt, showing how this proposal provides a higher security level and overcomes limitations of scrypt. If the attacker wishes to perform a low-memory attack against the algorithm, the processing cost grows exponentially, whereas in scrypt this growth is only quadratic. In addition, for an identical processing time, Lyra allows for a higher memory usage than its counterparts, further increasing the cost of brute-force attacks.
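To make the tunable-cost idea concrete, here is a toy, generic memory-hard derivation loop. It is not the Lyra construction (Lyra is built on a cryptographic sponge) and it is not secure; it only shows how a memory parameter and a time parameter drive the work, and why an attacker who refuses to keep the buffer in memory is forced into recomputation.

```rust
// Toy memory-hard derivation loop, NOT the actual Lyra algorithm and NOT
// cryptographically secure. It only illustrates how a memory cost (buffer
// size) and a time cost (number of passes) shape the work.

fn mix(a: u64, b: u64) -> u64 {
    // Placeholder mixing step; a real scheme would use a cryptographic sponge.
    let x = a.wrapping_mul(0x9E37_79B9_7F4A_7C15) ^ b.rotate_left(31);
    x ^ (x >> 29)
}

fn derive_key(password: &[u8], salt: &[u8], memory_cost: usize, time_cost: usize) -> u64 {
    // Seed from password and salt (again a placeholder, not a real hash).
    let mut seed = 0xCBF2_9CE4_8422_2325u64;
    for &byte in password.iter().chain(salt) {
        seed = mix(seed, byte as u64);
    }

    // Sequentially fill a large buffer; each cell depends on the previous one.
    let mut mem = vec![0u64; memory_cost.max(2)];
    mem[0] = seed;
    for i in 1..mem.len() {
        mem[i] = mix(mem[i - 1], i as u64);
    }

    // time_cost passes of data-dependent revisits: which earlier cell is read
    // depends on the current state, so a low-memory attacker must recompute it.
    let mut state = mem[mem.len() - 1];
    for _ in 0..time_cost {
        for i in 0..mem.len() {
            let j = (state as usize) % mem.len();
            state = mix(state, mem[j]);
            mem[i] = mix(mem[i], state);
        }
    }
    state
}

fn main() {
    let key = derive_key(b"correct horse", b"salt", 1 << 16, 3);
    println!("derived (toy) key: {key:016x}");
}
```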
5

Evaluation of CPU and Memory performance between Object-oriented Design and Data-oriented Design in Mobile games

Eriksson, Björn, Tatarian, Maria January 2021
The recent rise in popularity of mobile games creates opportunities to develop more of them. Because resources on mobile phones are scarce, developing good games requires careful optimization, starting with the choice of design approach. Object-oriented Design (OOD) and Data-oriented Design (DOD) are two programming paradigms that define and structure data in different ways. The purpose of this thesis is to investigate the differences in CPU and memory performance between the two approaches. To answer the research questions, an experiment was conducted in which two identical mobile games were built, one according to OOD and the other according to DOD, to collect empirical quantitative data and compare the results. The study limits its scope to running the games on Android mobile phones. Comparing the CPU usage shows significant differences, especially when the amount of data is large. For instance, in the DOD version of the game the CPU spends 20.9% of its time updating data, while it spends 69.2% of its time on the same action in the OOD version. No significant differences are observed regarding the total memory allocated for the two versions of the game. It can therefore be concluded that when the number of objects is large, more optimized code can be written by following the Data-oriented Design approach, giving better CPU and memory usage and better game performance.
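The layout difference driving such results is easy to sketch: object-oriented code typically stores each entity as one struct (an array of structs), while data-oriented code stores each field in its own contiguous array (a struct of arrays), so a hot loop touching only positions and velocities streams through exactly that data. The thesis's games were built in Unity; the sketch below uses plain Rust purely to make the layout contrast concrete.

```rust
// Object-oriented-style layout: one struct per entity (array of structs).
// A loop that only updates positions still drags velocities and health
// through the cache.
struct EntityObj {
    position: [f32; 3],
    velocity: [f32; 3],
    health: f32,
}

fn update_positions_oop(entities: &mut [EntityObj], dt: f32) {
    for e in entities.iter_mut() {
        for i in 0..3 {
            e.position[i] += e.velocity[i] * dt;
        }
    }
}

// Data-oriented-style layout: one contiguous array per field (struct of
// arrays). The same update streams through exactly the data it needs.
struct EntitiesSoA {
    positions: Vec<[f32; 3]>,
    velocities: Vec<[f32; 3]>,
    healths: Vec<f32>,
}

fn update_positions_dod(e: &mut EntitiesSoA, dt: f32) {
    for (pos, vel) in e.positions.iter_mut().zip(&e.velocities) {
        for i in 0..3 {
            pos[i] += vel[i] * dt;
        }
    }
}
```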
6

Evaluation of Rust and WebAssembly when building a Progressive Web Application : An analysis of performance and memory usage / Evaluering av Rust och WebAssembly vid utveckling av en Progressiv Webapplikation : En analys av prestanda och minnesanvändning

Asegehegn, Natan Teferi January 2022
One problem that has been plaguing software development is the multitude of platforms available to users. Consequently, a company needs to provide its service on multiple devices, running different operating systems, in order to reach as many end-users as possible. This leads to increased development complexity and costs. To solve this issue, multiple cross-platform solutions have been proposed throughout the years. One such solution is the Progressive Web Application, a set of techniques that aim to create web applications with features that have traditionally only been available to native applications. In recent years WebAssembly, a compilation target that allows languages other than JavaScript to run in the browser, has been introduced. With its compact binary format and compiled nature, its goal is to bring speed and performance enhancements to web applications. This thesis analyzes WebAssembly in the context of building a Progressive Web Application, particularly its impact on performance and memory usage. A comparison is made with the JavaScript library ReactJS. The results indicate that a Progressive Web Application built with WebAssembly achieves performance similar to one built using ReactJS on computers, but performs worse on mobile platforms. The results also indicate that using a programming language such as Rust, although still introducing memory overhead, minimizes the bundle size and runtime memory consumption of the application.
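For background on how a Rust function typically becomes callable from such a web application: the crate is compiled to the wasm32 target and exposed through wasm-bindgen, which generates the JavaScript glue. A minimal, generic example (hypothetical, not taken from the thesis):

```rust
// Minimal wasm-bindgen example: compiled for the wasm32 target (e.g. with
// `wasm-pack build --target web`), this exposes `sum_of_squares` to JavaScript
// running in the browser. Generic illustration, not code from the thesis.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn sum_of_squares(values: &[f64]) -> f64 {
    // Heavy numeric work is the kind of task typically offloaded to WebAssembly.
    values.iter().map(|v| v * v).sum()
}
```

On the JavaScript side, the generated module is then imported like any ES module and `sum_of_squares` is called with a `Float64Array`.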
7

Memory Measurement and Message Usage Improvement on an Elevator Embedded System

Arleklint, Tomas January 2019
All embedded systems are unique; a design that is suitable for one system can rarely be copied over to another. This inherently makes designing embedded systems difficult, and the difficulty is only amplified by the uncertainty about future requirements as the system is developed over time. Being able to continuously validate performance and reliability is of great importance for ensuring fault-free execution. This thesis explores two areas. First, a method of tracking the static and dynamic memory usage of a system is crucial to ensure correct functionality under all conditions and to verify that the implemented hardware will suffice. Multiple possible tools, each functioning differently, were developed and tested to find the most suitable one for measuring the memory usage of the elevator system. Second, the message usage, i.e. the way the different units within the studied system communicate with each other, was scrutinized for possible performance and reliability enhancements. A study was made of the most suitable communication protocol and of how the transmissions could be improved. The results show that, for this specific system, the best way of calculating the memory usage is with a tool developed within this project. Using this tool, it was found that none of the modules in the elevator system use more than 30 % of the available memory during execution. The message usage study shows that the most suitable protocol is CAN with the ISO 15765-2 upper-level protocol, which is the one currently in use. However, improvements to the message transmissions are suggested, such as taking full advantage of the CAN protocol and implementing message buffers on the receiving end. An important conclusion is that, just as there is no single system design that fits all, there is no memory measurement tool or message usage implementation that fits all systems. Each system has to be analyzed to find the most suitable solution for that particular system.
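The abstract does not describe the measurement tools' internals, but a common embedded technique for tracking dynamic (stack) usage is "painting": fill the stack region with a known pattern at startup and later count how much of it has been overwritten. A sketch of that general idea, run here on a simulated buffer rather than a real task stack:

```rust
// Stack "painting" high-water-mark measurement, sketched on a simulated stack.
// This is the general embedded technique, not necessarily the tool built in
// the thesis. On real hardware the buffer would be the task's actual stack
// region, painted by startup code.

const PAINT: u8 = 0xA5;

/// Paint the whole stack region with a known pattern (done once at startup).
fn paint(stack: &mut [u8]) {
    stack.fill(PAINT);
}

/// Count the untouched (still painted) bytes at the low end of the region;
/// everything else has been used at some point. Assumes the stack grows
/// downward, i.e. from the end of the slice toward index 0.
fn high_water_mark(stack: &[u8]) -> usize {
    let untouched = stack.iter().take_while(|&&b| b == PAINT).count();
    stack.len() - untouched
}

fn main() {
    let mut stack = vec![0u8; 1024];
    paint(&mut stack);

    // Simulate a task that dirtied the top 300 bytes of its stack.
    for b in stack.iter_mut().rev().take(300) {
        *b = 0;
    }

    let used = high_water_mark(&stack);
    println!("peak stack usage: {used} / {} bytes ({:.0} %)",
             stack.len(), 100.0 * used as f64 / stack.len() as f64);
}
```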
8

Low-Level Static Analysis for Memory Usage and Control Flow Recovery

Bockenek, Joshua Alexander 07 March 2023
Formal characterization of the memory used by a program is an important basis for security analyses, compositional verification, and identification of noninterference. However, soundly proving memory usage requires operating on the assembly level due to the semantic gap between high-level languages and the code that processors actually execute. Automated methods, such as model checking, would not be able to handle many interesting functions due to the undecidability of memory usage. Fully-interactive methods do not scale well either. Sound control flow recovery (CFR) is also important for binary decompilation, verification, patching, and security analysis. It lifts raw unstructured data into a form that allows reasoning over behavior and semantics. However, doing so requires interpreting the behavior of the program when indirect or dynamic control flow exists, creating a recursive dependency. This dissertation tackles the first property with two contributions that perform proof generation combined with interactive theorem proving in a semi-automated manner: an untrusted tool extracts as much information as it can from the functions under test and then generates all the necessary proofs to be completed in a theorem prover. The first, Floyd-style approach still requires significant manual effort but provides good flexibility and ensures no paths are analyzed more than once. In contrast, the second, Hoare-style approach sacrifices some flexibility and avoidance of repeated path evaluation in order to achieve much greater automation. However, neither approach can handle the dynamic control flow caused by indirect branching. The second property is handled by the second set of contributions of this dissertation. These two contributions provide fully-automated methods of recovering control flow from binaries even in the presence of indirect branching. When such dynamic control flow cannot be overapproximatively resolved, it is clearly noted in the resultant output. In the first approach to control flow recovery, a structured memory representation allows for general analysis of control flow in the presence of indirection, gaining scalability by utilizing context-free function analysis. It supports various aliasing conditions via the usage of nondeterminism, with multiple output states potentially being produced from a given input state. The second approach adds function context and abstract interpretation-inspired modeling of the C++ exception handling (EH) application binary interface (ABI), allowing for the discovery of previously-unknown paths while maintaining or increasing automation. / Doctor of Philosophy / Modern computer programs are so complicated that individual humans cannot manually check all but the smallest programs to make sure they are correct and secure. This is even worse if you want to reduce the trusted computing base (TCB), the stuff that you have to assume is working right in order to say a program will execute correctly. The TCB includes your computer itself, but also whatever tools were used to take the programs written by programmers and transform them into a form suitable for running on a computer. Such tools are often called compilers. One method of reducing the TCB is to examine the lowest-level representation of that program, the assembly or even machine code that is actually run by your computer. This poses unique challenges, because operating on such a low level means you do not have a lot of the structure that a more abstract, higher-level representation provides. 
Also, sometimes you want to formally state things about a program's behavior; that is, say things about what it does with a high degree of confidence based on mathematical principles. You may also want to verify that one or more of those statements are true. If you want to be detailed about that behavior, you may need to know all of the chunks, or regions, in random-access memory (RAM) that are used by that program. RAM, henceforth referred to as just "memory", is your computer's first place of storage for the information used by running programs. This is distinct from long-term storage devices like hard disk drives (HDDs) or solid-state drives (SSDs), which programs do not normally have direct access to. Unfortunately, there is no one single approach that can automatically determine with absolute certainty for all cases the exact regions of memory that are read or written. This is called undecidability, and means that you need to approximate those memory regions a lot of the time if you want to have a significant degree of automation. An underapproximation, an approach that only gives you some of the regions, is not useful for formal statements as it might miss out on some behavior; it is unsound. This means that you need an overapproximation, an approach that is guaranteed to give you at least the regions read or written. Therefore, the first contribution of this dissertation is a preliminary approach to such an overapproximation. This approach is based on the work of Robert L. Floyd, focusing on the direct control flow (where the steps of a program go) in an individual function (structured program component). It still requires a lot of user effort, including having to manually specify the regions in memory that were possibly used and do a lot of work to prove that those regions are (overapproximatively) correct, so our tests were limited in scope.
The tool also cannot deal with aliasing, which is when different state parts (value-holding components) of a program contain the same value and that value is used as the numeric address or identifier of a location in memory. Specifically, it cannot deal with potential aliasing, when there is not enough information available to determine if the state parts alias or not. Because of that, we had to add extra assumptions to the FMUCs that limited them to those cases where ambiguous memory-referencing state parts referred to separate memory locations. Finally, it specifically requires assembly as input; you cannot directly supply a binary to it. This is also true of the first contribution. Because of this, we were able to test on more functions than before, but not a lot more. Not being able to deal with dynamic control flow is a big problem, as almost all programs use it. For example, when a function reaches its end, it has to figure out where to return to based on the current state of the program (in the previous contribution, this was done manually). This means that control flow recovery (CFR) is very important for many applications, including decompilation (converting a program back into a higher-level form), patching (updating a program in place without modifying the original code and recompiling it), and low-level analysis or verification in general. However, as you may have noticed from earlier in this paragraph, in order to deal with such dynamic control flow you need to figure out what the possible destinations are for the individual control flow transfers. That can require knowing where you came from in the program, which means that analysis of dynamic control flow requires context (here, information previously obtained in the program). Even worse, it is another undecidable problem that requires overapproximation. To soundly recover control flow, we developed Hoare graphs (HGs), the third contribution of this dissertation. HGs use memory models that take the form of forests, or collections of tree data structures. A single tree represents a region in memory that may have multiple symbolic references, or abstract representations of a value. The children of the tree represent regions used in the program that are enclosed within their parent tree elements. Now, instead of assuming that all ambiguous memory regions are separate, we can use them under various aliasing conditions. We have also implemented support for some forms of dynamic control flow. Those that are not supported are clearly marked in the resultant HG. No user interaction is required even when loops are present thanks to a methodology that automatically reduces the amount of information present at a re-executed instruction until the information stabilizes. Function composition is also automatic now thanks to a method that treats each function as its own context in a safe and automated way, reducing memory consumption of our tool and allowing larger programs to be examined. In the process we did lose the ability to deal with recursion (functions that call themselves or call other functions that call back to the original), though. Lastly, we provided the ability to directly load binaries into the tool, no external disassembly (converting machine code into human-readable instructions) needed. This all allowed much greater testing than before, with applications to multiple programs and program libraries.
The fourth and final contribution of this dissertation iterates on the HG work by narrowing focus to the concept of exceptional control flow. Specifically, it models the kind of exception handling used by C++ programs. This is important as, if you want to explore a program's behavior, you need to know all the places it goes to. If you use a tool that does not model exception handling, you may end up missing paths of execution caused by unwinding. This is when an exception is thrown and propagates up through the program's current stack of function calls, potentially reaching programmer-supplied handling for that exception. Despite this, commonplace tools for static, low-level program analysis do not model such unwinding. The control flow graphs (CFGs) produced by our exception-aware tool are called exceptional interprocedural control flow graphs (EICFGs). These provide information about the exceptions being thrown and what paths they take in the program when they are thrown. Additional improvements are a better methodology for handling dynamic control flow as well as adding back support for recursion. All told, this allowed us to explore even more programs than ever before.
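To make the control flow recovery problem concrete, here is a toy worklist-style pass over a made-up instruction set: direct jumps, branches, and fall-throughs are followed, while an indirect jump whose target cannot be resolved is recorded as unresolved rather than guessed at, in the spirit of the dissertation's requirement that unsupported dynamic control flow be clearly noted. This is only an illustration of the general idea, not the Hoare graph construction itself.

```rust
// Toy control flow recovery: explore reachable instructions from an entry
// point, following direct edges and flagging indirect jumps that cannot be
// resolved. Illustrative only; not the dissertation's Hoare graph algorithm.
use std::collections::{BTreeMap, BTreeSet, VecDeque};

#[derive(Clone, Copy)]
enum Insn {
    Fallthrough,    // continue to the next address
    Jump(usize),    // direct jump to a known address
    Branch(usize),  // conditional: next address or the target
    JumpIndirect,   // target computed at run time (unknown here)
    Ret,            // no successors within this function
}

struct Recovered {
    edges: BTreeMap<usize, Vec<usize>>, // address -> successor addresses
    unresolved: BTreeSet<usize>,        // addresses with unknown indirect targets
}

fn recover(program: &[Insn], entry: usize) -> Recovered {
    let mut out = Recovered { edges: BTreeMap::new(), unresolved: BTreeSet::new() };
    let mut seen = BTreeSet::new();
    let mut work = VecDeque::new();
    work.push_back(entry);

    while let Some(addr) = work.pop_front() {
        if !seen.insert(addr) || addr >= program.len() {
            continue;
        }
        let succs = match program[addr] {
            Insn::Fallthrough => vec![addr + 1],
            Insn::Jump(t) => vec![t],
            Insn::Branch(t) => vec![addr + 1, t],
            Insn::Ret => vec![],
            Insn::JumpIndirect => {
                // Cannot soundly resolve the target: record it instead of guessing.
                out.unresolved.insert(addr);
                vec![]
            }
        };
        work.extend(succs.iter().copied());
        out.edges.insert(addr, succs);
    }
    out
}

fn main() {
    use Insn::*;
    let program = [Fallthrough, Branch(4), JumpIndirect, Ret, Jump(3)];
    let r = recover(&program, 0);
    println!("edges: {:?}", r.edges);
    println!("unresolved indirect jumps at: {:?}", r.unresolved);
}
```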
9

En jämförelse mellan dataorienterad design och objektorienterad design / A Comparison Between Data-Oriented Design and Object-Oriented Design

Westerberg, Charlotte January 2020
Today, applications handle more and more data, which results in them becoming increasingly resource-intensive and requiring more of the hardware. In the long run, this may mean that the hardware must be replaced at regular intervals to be able to run the software in a way that is satisfactory for the user. This thesis investigates whether it is possible to obtain less resource-intensive applications by changing the design technique. It presents a comparison between object-oriented design (also known as object-oriented programming, OOP) and data-oriented design (DOD), both by addressing the known advantages and disadvantages of each design technique and by measuring the performance of each. The main advantages attributed to OOP are reuse of code, code that is easy to maintain, security in the form of encapsulation, and objects that reflect human reality. These advantages, however, also contribute to what is considered the main disadvantage of OOP, namely that it is performance-intensive. For DOD, the main advantages are considered to be cache-friendlier code that leads to fewer cache misses, and code that is easier to parallelize compared to OOP. The disadvantage raised for DOD is that it takes time to learn and requires some practice; DOD is also relatively unknown, which resulted in a narrow basis of sources. Two simulations were developed in Unity, one of which uses the new data-oriented technology stack DOTS. The results of the measurements indicate that DOD uses less of the hardware resources in performance-intensive applications. If the application is not performance-intensive, however, no difference between the two techniques is noticeable in terms of CPU usage.
