1 |
Measuring and Analysing Execution Time in an Automotive Real-Time Application / Exekveringstid i ett Realtidssystem för Fordon
Liljeroth, Henrik January 2009 (has links)

Autoliv has developed the Night Vision system, a safety system for use in cars to improve the driver's situational awareness during night conditions. It is a real-time system that is able to detect pedestrians in the traffic environment and issue warnings when there is a risk of collision. The timing behaviour of programs running on real-time systems is vital information when developing and optimising both hardware and software. As part of further developing their Night Vision system, Autoliv wanted to examine the detailed timing behaviour of a specific part of the Night Vision algorithm, namely the Tracking module, which tracks detected pedestrians. In parallel, they also wanted a reliable method of obtaining timing data that would work for other parts of that system, or even for other applications.

A preliminary study was conducted to determine the most suitable method of obtaining the desired timing data. This resulted in a measurement-based approach using software profiling, in which the Tracking module was measured using various input data. The measurements were performed on simulated hardware using both a cycle-accurate simulator and measurement tools from the system CPU manufacturer, as well as tools implemented specifically to handle input and output data.

The measurements resulted in large amounts of data used to compile performance statistics. Using different scenarios in the input data, we were able to obtain timing characteristics for several typical situations the system may encounter during operation. By manipulating the input data we were also able to observe general behaviour and achieve artificially high execution times, which serve as indications of how the system responds to irregular and unexpected input data.

The method used for collecting timing information was well suited for this particular project. It made it possible to analyse behaviour in more detail than other, more theoretical, approaches would have. The method is also easily adaptable to other parts of the Night Vision system, or to other systems, with only minor adjustments to the measurement environment and tools.
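The measurement-based approach described above can be illustrated in miniature: run the module under test on different input scenarios, collect per-run execution-time samples, and compile statistics. The sketch below is a hedged illustration only; the `track` workload and the scenario sizes are invented stand-ins for the Tracking module and its input data, not Autoliv's code.

```python
import statistics
import time

def track(num_pedestrians):
    # Invented stand-in workload: cost grows with the number of
    # tracked pedestrians, as the real Tracking module's would.
    total = 0
    for i in range(2000 * num_pedestrians):
        total += i
    return total

def measure(num_pedestrians, runs=50):
    """Collect execution-time samples for one input scenario."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        track(num_pedestrians)
        samples.append(time.perf_counter() - start)
    # Note: the observed maximum is only a lower bound on the true
    # worst-case execution time; measurement alone cannot prove a WCET.
    return {"min": min(samples),
            "mean": statistics.mean(samples),
            "max": max(samples)}

typical = measure(2)    # a typical traffic scene
stress = measure(20)    # an artificially dense scene
```

Comparing the statistics across scenarios is what yields the timing characteristics the abstract refers to; manipulating the input (the `stress` scenario) probes the system's behaviour under irregular load.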
|
3 |
CPU Load Control of LTE Radio Base Station
Larsson, Joachim January 2015 (has links)
A radio base station (RBS) may become overloaded if too many mobile devices communicate with it at the same time. This could happen at, for instance, sporting events or in the case of accidents. To prevent CPU overload, the RBS is provided with a controller that adjusts the acceptance rate, the maximum number of connection requests that can be accepted per time interval. The current controller is tuned in real radio base stations, a procedure that is both time consuming and expensive. This, combined with the fact that mobile data usage is predicted to increase, puts more pressure on today's system. Thus, there is a need to be able to simulate the system in order to suggest an alternative controller. In this thesis, an implementation of the system is developed in Matlab in order to simulate the load control behaviour of the RBS. A CPU load model is estimated using system identification. The current version of the CPU load controller and an alternative PI CPU load controller are implemented. Both are evaluated on different test cases, and this shows that it is possible to increase the performance of the system with the alternative CPU load controller, both in terms of a lower number of rejected connection requests and a decreased CPU load overshoot.
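The kind of acceptance-rate control evaluated in the thesis can be sketched with a toy simulation. The first-order load model, the gains, and the set-point below are illustrative assumptions, not the model identified in the thesis or the actual RBS controller:

```python
# Toy simulation of PI-based CPU load control: the controller adjusts the
# acceptance rate (accepted connection requests per interval) to keep the
# CPU load near a set-point. Model, gains, and set-point are invented.

def simulate(kp=0.5, ki=0.2, setpoint=0.7, steps=500):
    load = 0.0        # CPU load, 0..1
    rate = 10.0       # acceptance rate (requests per interval)
    integral = 0.0
    loads = []
    for _ in range(steps):
        # First-order load model: the load follows the accepted rate
        # with inertia; each accepted request costs ~0.05 load units.
        load += 0.3 * (0.05 * rate - load)
        error = setpoint - load
        integral += error
        # PI law around a nominal rate of 10 requests per interval.
        rate = max(0.0, 10.0 + kp * error + ki * integral)
        loads.append(load)
    return loads

loads = simulate()
```

The integral term is what drives the steady-state error to zero; tuning `kp` and `ki` in simulation, rather than on live base stations, is exactly the cost saving the thesis targets.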
|
4 |
Deduplicerings påverkan på effektförbrukningen : en studie av deduplicering i ZFS
Andersson, Tommy, Carlsson, Marcus January 2011 (has links)
Uppsatsen beskriver arbetet och undersökningen av hur deduplicering i filsystemet ZFS påverkar effektförbrukningen. En större mängd redundant data förekommer i centraliserade lagringssystem som förser virtualiserade servrar med lagringsutrymme. Deduplicering kan för den typen av lagringsmiljö eliminera redundant data och ger en stor besparing av lagringsutrymme. Frågan som undersökningen avsåg att besvara var hur ett lagringssystem påverkas av det extra arbete som det innebär att deduplicera data i realtid. Metoden för att undersöka problemet var att utföra fem experiment med olika typer av scenarion. Varje scenario innebar att filer kopierades till ett lagringssystem med eller utan deduplicering för att senare kunna analysera skillnaden. Dessutom varierades mängden deduplicerbar data under experimenten, vilket skulle visa om belastningen på hårddiskarna förändrades. Resultatet av experimenten visar att deduplicering ökar effektförbrukning och processorbelastning medan antalet I/O-operationer minskar. Analysen av resultatet visar att med en stigande andel deduplicerbar data som skrivs till hårddiskarna så stiger också effektförbrukning och processorbelastning. / This report describes the process and outcome of the research on how power consumption is affected by deduplication in a ZFS file system. A large amount of redundant data exists in centralized storage systems that provide virtualized servers with storage space. Deduplication can be used to eliminate redundant data and give an improved utilization of available space in this kind of storage environment. The question that the study sought to answer was how a storage system's power consumption is affected by the extra workload deduplication introduces. The method used to investigate the problem was to perform five experiments with different types of scenarios. The difference in each scenario was that the data was written to a storage system with or without deduplication, in order to later analyze the difference. The amount of deduplicatable data was also varied during the experiments, which would show whether the load on the disks changed. The results show that deduplication increases power consumption and CPU load while the number of I/O operations decreases. The analysis of the results shows that increasing the share of deduplicatable data written to the disks also increases power consumption and CPU load.
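The trade-off the study measures, extra CPU work on every written block in exchange for fewer stored blocks, can be sketched as follows. This is a generic illustration of block-level deduplication with an invented block size and data, not ZFS's implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size, not the ZFS recordsize used

def deduplicate(data):
    """Store each distinct block once; return the bytes actually stored."""
    seen = set()
    stored = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        # Hashing every block is the extra CPU work that real-time
        # deduplication adds on each write, which is what drives the
        # measured increase in load and power consumption.
        digest = hashlib.sha256(block).digest()
        if digest not in seen:
            seen.add(digest)
            stored += len(block)
    return stored

# Fully redundant input: 100 copies of the same zero-filled block.
redundant = bytes(BLOCK_SIZE) * 100
# Fully unique input: 100 blocks, each tagged with a distinct counter.
unique = b"".join(i.to_bytes(4, "big") * (BLOCK_SIZE // 4)
                  for i in range(100))

stored_redundant = deduplicate(redundant)   # only one block survives
stored_unique = deduplicate(unique)         # nothing can be eliminated
```

Varying the share of redundant blocks in the input, as the experiments did, changes how often the `stored` branch is taken while the hashing cost stays constant per block.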
|
5 |
Environnements pour l'analyse expérimentale d'applications de calcul haute performance / Environments for the experimental analysis of HPC applications
Perarnau, Swann 01 December 2011 (has links)
Les machines du domaine du calcul haute performance (HPC) gagnent régulièrement en complexité. De nos jours, chaque nœud de calcul peut être constitué de plusieurs puces ou de plusieurs cœurs se partageant divers caches mémoire de façon hiérarchique. Que ce soit pour comprendre les performances obtenues par une application sur ces architectures ou pour développer de nouveaux algorithmes et valider leur performance, une phase d'expérimentation est souvent nécessaire. Dans cette thèse, nous nous intéressons à deux formes d'analyse expérimentale : l'exécution sur machines réelles et la simulation d'algorithmes sur des jeux de données aléatoires. Dans un cas comme dans l'autre, le contrôle des paramètres de l'environnement (matériel ou données en entrée) permet une meilleure analyse des performances de l'application étudiée. Ainsi, nous proposons deux méthodes pour contrôler l'utilisation par une application des ressources matérielles d'une machine : l'une pour le temps processeur alloué et l'autre pour la quantité de cache mémoire disponible. Ces deux méthodes nous permettent notamment d'étudier les changements de comportement d'une application en fonction de la quantité de ressources allouées. Basées sur une modification du comportement du système d'exploitation, nous avons implémenté ces méthodes pour un système Linux et démontré leur utilité dans l'analyse de plusieurs applications parallèles. Du point de vue de la simulation, nous avons étudié le problème de la génération aléatoire de graphes orientés acycliques (DAG) pour la simulation d'algorithmes d'ordonnancement. Bien qu'un grand nombre d'algorithmes de génération existent dans ce domaine, la plupart des publications reposent sur des implémentations ad hoc et peu validées de ces derniers. Pour pallier ce problème, nous proposons un environnement de génération comprenant la majorité des méthodes rencontrées dans la littérature. Pour valider cet environnement, nous avons réalisé de grandes campagnes d'analyse à l'aide de Grid'5000, notamment du point de vue des propriétés statistiques connues de certaines méthodes. Nous montrons aussi que la performance d'un algorithme est fortement influencée par la méthode de génération des entrées choisie, au point de rencontrer des phénomènes d'inversion : un changement d'algorithme de génération inverse le résultat d'une comparaison entre deux ordonnanceurs. / High performance computing systems are increasingly complex. Nowadays, each compute node can contain several sockets or several cores and share multiple memory caches in a hierarchical way. To understand an application's performance on such systems, or to develop new algorithms and validate their behavior, an experimental study is often required. In this thesis, we consider two types of experimental analysis: execution on real systems and simulation using randomly generated inputs. In both cases, a scientist can improve the quality of their performance analysis by controlling the environment (hardware or input data) used. Therefore, we discuss two methods to control the allocation of hardware resources inside a system: one for the processing time given to an application, the other for the amount of cache memory available to it. Both methods allow us to study how an application's behavior changes according to the amount of resources allocated. Based on modifications of the operating system, we implemented these methods for Linux and demonstrated their use in the analysis of several parallel applications. Regarding simulation, we studied the issue of the random generation of directed acyclic graphs (DAGs) for scheduler simulations. While numerous algorithms can be found for this problem, most papers in the field rely on ad hoc implementations and provide little validation of their generators. To tackle this issue, we propose a complete environment providing most of the classical generation methods. We validated this environment with large analysis campaigns on Grid'5000, in particular by verifying the known statistical properties of several methods. We also demonstrated that the measured performance of a scheduler can depend on the generation method used, to the point of observing a reversal phenomenon: changing the generation algorithm can reverse the outcome of a comparison between two schedulers.
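One of the classical generation methods such an environment would include can be sketched in a few lines: order the tasks and add each forward edge independently with probability p, which guarantees acyclicity by construction. The sizes and probabilities below are illustrative; the point is that the generator's parameters already shape scheduler-relevant properties such as the critical path:

```python
import random

def random_dag(n, p, rng):
    """Erdos-Renyi-style DAG: edge i -> j (i < j) with probability p."""
    return {i: [j for j in range(i + 1, n) if rng.random() < p]
            for i in range(n)}

def critical_path(dag):
    """Length in edges of the longest path, by dynamic programming in
    reverse topological order (node order is already topological)."""
    n = len(dag)
    depth = [0] * n
    for i in reversed(range(n)):
        depth[i] = max((1 + depth[j] for j in dag[i]), default=0)
    return max(depth)

rng = random.Random(42)
sparse = random_dag(50, 0.05, rng)
dense = random_dag(50, 0.5, rng)
# A denser generator yields longer critical paths, so two schedulers may
# compare differently depending on which generator produced their inputs.
```

This is one concrete mechanism behind the reversal phenomenon described above: properties the scheduler is sensitive to are set by the generator, not the scheduler.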
|
6 |
Prediction of 5G system latency contribution for 5GC network functions / Förutsägelse av 5G-systemets latensbidrag för 5GC-nätverksfunktioner
Cheng, Ziyu January 2023 (has links)
End-to-end delay measurement is crucial for network models, as it acts as a pivotal metric of a model's effectiveness, helps delineate its performance ceiling, and stimulates further refinement and enhancement. This premise holds true for 5G Core Network (5GC) models as well. Commercial 5G networks, with their intricate topological structures and requirement for reduced latencies, necessitate an effective model to anticipate each server's current latency and load levels. Consequently, the introduction of a model for estimating the present latency and load level of each network element server would be advantageous. The central work of this thesis is to record and analyze the packet data and CPU load data of network functions running at different user counts as operational data, with the data from each successful operation of a service used as model data for analyzing the relationship between latency and CPU load. Particular emphasis is placed on the end-to-end latency of the PDU session establishment scenario for two core functions: the Access and Mobility Management Function (AMF) and the Session Management Function (SMF). Through this methodology, a more accurate model has been developed to review the latency of servers and nodes when used by up to 650,000 end users. This approach has provided new insights for network-level testing, paving the way for a comprehensive understanding of network performance under various conditions. These conditions include flow-control strategies such as TCP slow start and delayed acknowledgement, as well as overload situations where the load of a network function exceeds 80%. It also identifies the optimal performance range. / Latensmätningar för slutanvändare anses vara viktiga för nätverksmodeller eftersom de fungerar som en måttstock för modellens effektivitet, hjälper till att definiera dess prestandatak samt bidrar till vidare förfining och förbättring. Detta antagande gäller även för 5G-kärnnätverk (5GC). Kommersiella 5G-nätverk, med sin komplexa topologi och krav på låg latens, kräver en effektiv modell för att prediktera varje servers aktuella last och latensbidrag. Följaktligen behövs en modell som beskriver den aktuella latensen och dess beroende av lastnivå hos respektive nätverkselement. Arbetet består i att samla in och analysera paketdata och CPU-last för nätverksfunktioner i drift med olika antal slutanvändare. Fokus ligger på tjänster som används som modelldata för att analysera förhållandet mellan latens och CPU-last. Särskilt fokus läggs på latensen för slutanvändarna vid PDU-sessionsetablering för två kärnfunktioner: Åtkomst- och mobilitetshanteringsfunktionen (AMF) samt Sessionshanteringsfunktionen (SMF). Genom denna metodik har en mer exakt modell tagits fram för att granska latensen för servrar och noder vid användning av upp till 650 000 slutanvändare. Detta tillvägagångssätt har gett nya insikter för testning på nätverksnivå, vilket banar väg för en omfattande förståelse av nätverksprestanda under olika förhållanden. Dessa förhållanden inkluderar strategier som "trög start" och "fördröjd TCP-bekräftelse" för flödeskontroll, eller överlastsituationer där lasten hos nätverksfunktionerna överstiger 80 %. Det identifierar också det optimala prestandaområdet.
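A toy version of a load-to-latency model of this kind can be sketched as follows. The queueing-style 1/(1 − load) shape and the base latency are assumptions for illustration only; the thesis fits its model to measured AMF/SMF data rather than assuming a closed form:

```python
def predict_latency_ms(load, base_ms=2.0):
    """Predicted latency contribution of one network function at a given
    CPU load, using an M/M/1-style 1/(1 - load) growth. Illustrative
    shape and base latency, not the fitted model from the thesis."""
    if not 0.0 <= load < 1.0:
        raise ValueError("load must be in [0, 1)")
    return base_ms / (1.0 - load)

def is_overloaded(load, threshold=0.8):
    # Mirrors the observation that behaviour degrades once a network
    # function exceeds roughly 80% load.
    return load >= threshold

low = predict_latency_ms(0.2)   # inside the comfortable operating range
high = predict_latency_ms(0.9)  # past the 80% threshold: latency blows up
```

The nonlinear blow-up near full load is why a fixed latency budget implies an optimal operating range well below 100% CPU load.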
|
7 |
Application development of 3D LiDAR sensor for display computers
Ekstrand, Oskar January 2023 (has links)
"Light Detection And Ranging" (LiDAR) technology provides highly accurate distance measurements and is used for creating high-resolution 3D maps of the environment. This degree project investigates the integration of 3D LiDAR sensors into off-highway vehicle display computers, called CCpilots. This involves a study of available low-cost 3D LiDAR sensors on the market and the development of an application for visualizing real-time data graphically, with room for optimization algorithms. The selected LiDAR sensor is the Livox Mid-360, a hybrid solid-state unit with a field of view of 360° horizontally and 59° vertically. The LiDAR application was developed using Livox SDK2 combined with a C++ back-end, with Qt QML as the graphical user interface design tool. A voxel grid filter from the Point Cloud Library (PCL) was used for optimization purposes. Real-time 3D LiDAR sensor data was graphically visualized on the CCpilot X900 display computer. The voxel grid filter had a few visual advantages, although it consumed more processor power than when no filter was used. Whether a filter was used or not, all points generated by the LiDAR sensor could be processed and visualized by the developed application without any latency.
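The voxel grid filter used for downsampling can be sketched in a few lines: space is divided into cubes of a given leaf size, and all points falling in the same cube are replaced by their centroid. This is a minimal Python illustration of the idea only (PCL's actual VoxelGrid is C++, and the leaf size here is an arbitrary choice):

```python
from collections import defaultdict

def voxel_grid_filter(points, leaf=0.1):
    """Downsample a point cloud: one centroid per occupied voxel."""
    voxels = defaultdict(list)
    for x, y, z in points:
        # Snap each point to the cube of side `leaf` that contains it.
        key = (int(x // leaf), int(y // leaf), int(z // leaf))
        voxels[key].append((x, y, z))
    # Replace each voxel's points by their centroid.
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for pts in voxels.values()]

# Four nearby points share a voxel and collapse to their centroid,
# while the distant point survives on its own: five points become two.
cloud = [(0.01, 0.01, 0.0), (0.02, 0.03, 0.0),
         (0.03, 0.02, 0.0), (0.04, 0.04, 0.0),
         (1.0, 1.0, 1.0)]
filtered = voxel_grid_filter(cloud)
```

The hashing and averaging per point is the extra processor work the abstract notes, paid in exchange for fewer points to render.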
|