  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Model of Computation for Object Circuits

MATHEUS COSTA LEITE 19 September 2003 (has links)
Object Oriented Programming is a mature, well-established software modeling technique. Nevertheless, its importance is matched by the consensus regarding its weaknesses and limitations. OO is not a panacea, and, should it fail, alternatives must be sought - some hybrid, others entirely new.
In this work, we argue that the parallel between OO and electric circuits is an interesting hybrid solution, since some of the basic features of such circuits - concurrency, modularity, robustness, scalability, and so on - are the very ones sought as the Holy Grail of Software Engineering, and are not always achieved with the traditional OO approach alone. Hence, we propose establishing a correlation between electric circuits and object-oriented programs. From the former comes the circuit: a closed path through which information flows and is processed. From the latter comes the object: the abstract entity constituting the information that flows within the circuit. Finally, from their union arises a new model of computation - the object circuit - in which the benefits brought by each part are expected to be complementary. We motivate our discussion with a series of simple yet elucidative examples, followed by a case study in the simulation field. To validate the functioning of these circuits, an implementation of object circuits was built on top of the Java programming language.
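To make the circuit metaphor concrete, here is a minimal sketch of how an object circuit might be expressed. The `Node` and `ObjectCircuit` names are hypothetical illustrations of the idea (objects travelling a closed path of processing elements), not the thesis's actual Java API.

```python
class Node:
    """A circuit element: processes each object that passes through it."""
    def __init__(self, fn):
        self.fn = fn

    def process(self, obj):
        return self.fn(obj)


class ObjectCircuit:
    """A closed path of nodes; an injected object is processed by each
    node in turn, once per lap around the circuit."""
    def __init__(self, nodes):
        self.nodes = nodes

    def run(self, obj, laps=1):
        for _ in range(laps):
            for node in self.nodes:
                obj = node.process(obj)
        return obj


# A two-node circuit: increment, then double, run for two laps.
circuit = ObjectCircuit([Node(lambda x: x + 1), Node(lambda x: x * 2)])
result = circuit.run(3, laps=2)  # lap 1: 3 -> 4 -> 8; lap 2: 8 -> 9 -> 18
```

In a fuller model, nodes would run concurrently and objects would queue between them, which is where the circuit analogy buys concurrency and modularity.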
2

Queue Streaming Model: Theory, Algorithms, and Implementation

Zope, Anup D 03 May 2019 (has links)
In this work, a model of computation for shared memory parallelism is presented. To address fundamental constraints of modern memory systems, the model constrains how parallelism interacts with memory access patterns, and in doing so provides a method for the design and analysis of algorithms that yields reliable execution time estimates based on a few architectural parameters. The model is presented as an alternative to modern thread-based models that focus on computational concurrency but rely on reactive hardware policies to hide and amortize memory latency. Since modern processors use reactive mechanisms and heuristics to deduce the data access requirements of computations, the memory access costs of such threaded programs may be difficult to predict reliably. This research presents the Queue Streaming Model (QSM), which aims to address these shortcomings by providing a prescriptive mechanism for latency-amortized, predictable-cost data access. The work further applies the QSM to algorithms common to a number of applications: structured regular computations represented by merge sort, unstructured irregular computations represented by sparse matrix dense vector multiplication, and dynamic computations represented by MapReduce. The analysis of these algorithms reveals architectural tradeoffs between memory system bottlenecks and algorithm design. The techniques described in this dissertation suggest a general software approach that could be used to construct more general irregular applications, provided they can be transformed into a relational query form. They demonstrate that the QSM can be used to design algorithms that enhance utilization of memory system resources by structuring concurrency and memory accesses so that system bandwidths are balanced and latency is amortized.
Finally, the benefit of applying the QSM to an Euler inviscid flow solver is demonstrated through experiments on an Intel(R) Xeon(R) E5-2680 v2 processor using ten cores. The transformation produced a speed-up of 25% over an optimized OpenMP implementation with identical computational structure.
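As an illustration of the access pattern the QSM prescribes, the sketch below implements bottom-up merge sort so that every pass reads its input runs strictly front to back - the sequential, queue-like behavior that lets memory latency be amortized. This is an illustration of the idea, not the dissertation's implementation.

```python
def stream_merge(run_a, run_b):
    """Merge two sorted runs using only sequential, queue-like reads -
    the prescriptive access pattern the QSM uses to amortize latency."""
    out, i, j = [], 0, 0
    while i < len(run_a) and j < len(run_b):
        if run_a[i] <= run_b[j]:
            out.append(run_a[i]); i += 1
        else:
            out.append(run_b[j]); j += 1
    out.extend(run_a[i:]); out.extend(run_b[j:])
    return out


def queue_merge_sort(data):
    """Bottom-up merge sort: every pass streams its runs sequentially,
    so each element is read and written exactly once per pass."""
    runs = [[x] for x in data]
    while len(runs) > 1:
        merged = []
        for k in range(0, len(runs), 2):
            if k + 1 < len(runs):
                merged.append(stream_merge(runs[k], runs[k + 1]))
            else:
                merged.append(runs[k])       # odd run carried forward
        runs = merged
    return runs[0] if runs else []
```

Because each pass touches memory only through a handful of sequential streams, its cost can be estimated from bandwidth alone, which is the predictability the QSM is after.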
3

Explicit modeling of the semantic adaptation between models of computation

Dogui, Ayman 18 December 2013 (has links)
This work takes place in the context of hierarchical heterogeneous modeling using the model-of-computation approach, in order to model complex systems that include components from several different technical fields. Each of these components is usually designed according to a modeling paradigm that suits its technical domain and is based on specific semantics. The overall system, which integrates the heterogeneous models of these components, therefore requires semantic adaptation to ensure proper communication between its various sub-models. In this context, this thesis proposes a new approach to modeling semantic adaptation in which the semantics of time and control are specified explicitly by the designer, by defining relations on the occurrences of events on the one hand and on the time tags of those occurrences on the other.
The approach was integrated into the ModHel'X platform and tested on a case study: the model of a power window system.
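The two kinds of relations the thesis describes can be sketched as a tiny adapter at the boundary between two models of computation. The function names and the millisecond-to-second example are hypothetical illustrations, not the ModHel'X API.

```python
def adapt_events(events, retag, keep):
    """Hypothetical semantic adapter at a model-of-computation boundary:
    'keep' encodes the relation on event occurrences (which events cross
    the boundary) and 'retag' the relation on their time tags."""
    return [(retag(tag), value) for tag, value in events if keep(value)]


# Occurrences tagged in milliseconds, adapted to a model whose time
# base is seconds, with internal 'tick' events kept local.
src = [(100, "start"), (250, "tick"), (400, "stop")]
adapted = adapt_events(src, retag=lambda t: t / 1000,
                       keep=lambda v: v != "tick")
# adapted == [(0.1, "start"), (0.4, "stop")]
```

In the thesis's setting, such relations are declared by the designer rather than hard-coded, so the same adaptation can be reused wherever the two semantics meet.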
4

Geometrical models of computation: fractals and complexity gaps

Senot, Maxime 27 June 2013 (has links)
Les modèles géométriques de calcul permettent d’effectuer des calculs à l’aide de primitives géométriques. Parmi eux, le modèle des machines à signaux se distingue par sa simplicité, ainsi que par sa puissance à réaliser efficacement de nombreux calculs. Nous nous proposons ici d’illustrer et de démontrer cette aptitude, en particulier dans le cas de processus massivement parallèles. Nous montrons d’abord à travers l’étude de fractales que les machines à signaux sont capables d’une utilisation massive et parallèle de l’espace. Une méthode de programmation géométrique modulaire est ensuite proposée pour construire des machines à partir de composants géométriques de base les modules munis de certaines fonctionnalités. Cette méthode est particulièrement adaptée pour la conception de calculs géométriques parallèles. Enfin, l’application de cette méthode et l’utilisation de certaines des structures fractales résultent en une résolution géométrique de problèmes difficiles comme les problèmes de satisfaisabilité booléenne SAT et Q-SAT. Ceux-ci, ainsi que plusieurs de leurs variantes, sont résolus par machines à signaux avec une complexité en temps intrinsèque au modèle, appelée profondeur de collisions, qui est polynomiale, illustrant ainsi l’efficacité et le pouvoir de calcul parallèle des machines a signaux. / Geometrical models of computation allow to compute by using geometrical elementary operations. Among them, the signal machines model distinguishes itself by its simplicity, along with its power to realize efficiently various computations. We propose here an illustration and a study of this ability, especially in the case of massively parallel processes. We show first, through a study of fractals, that signal machines are able to make a massive and parallel use of space. Then, a framework of geometrical modular programmation is proposed for designing machines from basic geometrical components —called modules— supplied with given functionnalities. 
This method fits particulary with the conception of geometrical parallel computations. Finally, the joint use of this method and of fractal structures provides a geometrical resolution of difficult problems such as the boolean satisfiability problems SAT and Q-SAT. These ones, as well as several variants, are solved by signal machines with a model-specific time complexity, called collisions depth, which is polynomial, illustrating thus the efficiency and the parallel computational abilities of signal machines.
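The basic mechanics of a signal machine can be sketched numerically: signals are points moving at constant speed on a line, and the computation advances from collision to collision. The sketch below only finds the next collision time; it is a toy illustration under that assumption, not the thesis's model, which also applies rewriting rules at each collision.

```python
def first_collision(signals):
    """Signals are (position, speed) pairs on a one-dimensional line.
    Returns the earliest future time at which two signals meet, or
    None. In the signal machine model, such collisions fire rules
    that replace the colliding signals with new ones."""
    best = None
    for i, (xi, vi) in enumerate(signals):
        for xj, vj in signals[i + 1:]:
            if vi != vj:                      # parallel signals never meet
                t = (xj - xi) / (vi - vj)
                if t > 0 and (best is None or t < best):
                    best = t
    return best


# Two signals approaching from positions 0 and 4 at speeds +1 and -1
# meet at time 2; two parallel signals yield no collision.
assert first_collision([(0, 1), (4, -1)]) == 2.0
assert first_collision([(0, 1), (4, 1)]) is None
```

The "collision depth" complexity mentioned above counts the longest chain of causally dependent collisions rather than wall-clock steps, which is what makes the model's parallelism explicit.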
5

Designing and sharing data analysis workflows: application to intensive processing in bioinformatics

Moreews, François 11 December 2015 (has links)
As part of an Open Science initiative, we are interested in scientific Workflow Management Systems (WfMS) and their applications to intensive data analysis in bioinformatics. We start from the hypothesis that WfMS can evolve into pivotal platforms able to speed up the development and dissemination of innovative analysis methods. Around a disciplinary theme, such platforms could rally and unite not only the current audience of service consumers but also that of service producers. To this end, we consider that these environments must both be adapted to the practices of the scientists who design the methods and provide a productivity gain during design and processing. These constraints lead us to study the rapid capture of workflows, the simplified integration of technical tasks such as the parallelism required for high throughput, and the customization of deployment. First, we define an expressive graphical DataFlow language suited to the rapid capture of workflows. It is interpreted by a workflow engine based on a new model of computation that achieves high performance by exploiting multiple levels of parallelism. We then present a model-driven design approach that facilitates the generation of data parallelism and the production of implementations adapted to different execution contexts. In particular, we describe the integration of a meta-model of components and platforms, used to automate the configuration of workflow dependencies. Finally, for the Container as a Service (CaaS) model, we develop a workflow specification that is intrinsically disseminable and re-executable. The adoption of this kind of model could accelerate exchanges and improve the availability of data analysis pipelines.
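The core of any such engine is executing a dataflow graph in dependency order. The sketch below is a minimal sequential illustration, not the thesis's engine; task names and the toy pipeline are invented, and a real engine would run independent tasks in parallel.

```python
def run_workflow(tasks, deps):
    """Minimal dataflow-style workflow runner: 'tasks' maps a name to
    a function taking the dict of its dependencies' results; 'deps'
    maps a name to the names of its inputs. Tasks run as soon as all
    of their inputs are available."""
    results = {}
    pending = dict(deps)
    while pending:
        ready = [t for t, d in pending.items()
                 if all(x in results for x in d)]
        if not ready:
            raise ValueError("cyclic workflow")
        for t in ready:
            results[t] = tasks[t]({d: results[d] for d in pending[t]})
            del pending[t]
    return results


# Toy three-stage pipeline: load -> clean -> stats.
tasks = {
    "load":  lambda _: [3, 1, 2],
    "clean": lambda r: sorted(r["load"]),
    "stats": lambda r: sum(r["clean"]),
}
out = run_workflow(tasks, {"load": [], "clean": ["load"],
                           "stats": ["clean"]})
# out["clean"] == [1, 2, 3]; out["stats"] == 6
```

The 'ready' set computed on each iteration is exactly where task- and data-level parallelism would be exploited by a high-throughput engine.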
6

Catalog of models of computation for the development of domain-specific modeling languages

Fernandes, Sergio Martins 13 June 2013 (has links)
Esta tese apresenta um processo para a criação de um catálogo de modelos de computação para apoiar o design de DSMLs, e a primeira versão do catálogo, com atributos que ajudam a selecionar os modelos de computação mais adequados para cada desenvolvimento de DSML, e as características dos sistemas de software para os quais esses modelos de computação são mais adequados. O contexto de aplicação desse catálogo é o Model-Driven Development (MDD desenvolvimento dirigido por modelos) a abordagem em que o desenvolvimento de software é baseado em modelos gráficos que são posteriormente traduzidos (transformados) em modelos de nível mais baixo e, no final, em código de linguagens de programação, tais como Java ou C#. A aplicação do processo gerou uma versão inicial do catálogo com os seguintes modelos de computação: diagramas BPMN, diagramas de classe da UML e regras de negócio. Visa-se contribuir para popularizar a abordagem de MDD com base em DSMLs e, em particular, a elaboração do design das DSMLs a partir de modelos de domínio, para o que o uso do catálogo efetivamente contribui. / This thesis presents a process for the creation of a catalog of models of computation to support the design of Domain-Specific Modeling Languages (DSMLs), and the first version of the catalog, which comprises attributes that aim to help the selection of the most suitable models of computation for each DSML development, and characteristics of software systems for which these models of computation are more appropriate. The context for the use of the catalog is the Model-Driven Development (MDD) - the approach where software development is based on graphical models that are subsequently translated (transformed) into lower-level models and, in the end, in source code in programming languages, such as Java or C #. The process was applied to generate an initial version of the catalog with the following models of computation: BPMN diagrams, UML class diagrams and business rules. 
It aims to contribute to popularize the MDD approach based in DSMLs, and in particular, the development of the DSMLs design from domain models, for which the use of the catalog effectively contributes.
8

A modeling and refinement method for heterogeneous systems, illustrated with the SystemC-AMS language / Study and development of an AMS design flow in SystemC: semantics, refinement and validation

Paugnat, Franck 25 October 2012 (has links)
Systems on Chip (SoC) integrate analogue parts and digital processing units on the same substrate. While their complexity keeps increasing, their time to market is becoming shorter. A global, coordinated top-down design approach for the whole system has become crucial in order to take the interactions between the analogue and digital parts into account from the beginning of the development. This thesis presents a systematic and gradual refinement process for the analogue parts, comparable to what exists for the digital parts. Special attention has been paid to defining the most abstract analogue levels and to the correspondence between analogue and digital abstraction levels. Consistent analogue refinement requires detecting the abstraction level at which a too-idealised model leads to unrealistic behaviour, and hence identifying the refinement step at which the limitations and non-linearities with the strongest impact on the behaviour must be introduced. This step can occur at a relatively high level of abstraction. Choosing the modelling style best suited to each abstraction level is crucial to obtain the best trade-off between simulation speed and accuracy.
The possible modelling styles at each abstraction level have been examined to evaluate their impact on simulation, and the SystemC-AMS models of computation have been classified for this purpose. SystemC-AMS simulation times have been compared with those obtained with Matlab Simulink. The interface between the still rather abstract models arising from architectural exploration and the more detailed models required for implementation remains an open question. A library of complex electronic components described in SystemC-AMS with its most accurate model of computation (ELN modelling) could be a way to achieve such an interface. To illustrate what an element of such a library might look like, and thus demonstrate the feasibility of the concept, a model of an operational amplifier has been elaborated. It is detailed enough to take into account output voltage saturation and the finite slew rate, while remaining abstract enough to stay independent of any assumption about the amplifier's internal structure or the technology to be employed.
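The two non-idealities retained in the op-amp model (output saturation and finite slew rate) can be sketched as a discrete-time behavioural step. This is an illustration of that kind of abstraction in plain Python, not the thesis's SystemC-AMS code, and all parameter values are illustrative.

```python
def opamp_step(v_in, v_out, gain=1e5, v_sat=12.0, slew=1e6, dt=1e-6):
    """One time step of a behavioural op-amp model: the ideal output
    is limited first by the finite slew rate, then clipped by output
    saturation. Parameter values are illustrative assumptions."""
    target = gain * v_in                      # ideal (unlimited) output
    max_step = slew * dt                      # largest change per step
    delta = max(-max_step, min(max_step, target - v_out))
    return max(-v_sat, min(v_sat, v_out + delta))


# A large differential input drives the output up at the slew limit
# (about 1 V per microsecond step) until it clips near +12 V.
v, trace = 0.0, []
for _ in range(15):
    v = opamp_step(0.001, v)
    trace.append(v)
```

An ideal model would jump straight to the saturated value; the refinement step described above is precisely the point where this ramp-and-clip behaviour must be introduced to keep simulations realistic.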
9

Programming models for signal and image processing on parallel and heterogeneous architectures

Mansouri, Farouk 14 October 2015 (has links)
Over the last decade, computing systems have evolved toward parallel and heterogeneous architectures. Composed of several nodes connected via a network, each including heterogeneous processing units, such clusters achieve high performance. To program these architectures, the user must rely on programming models such as MPI, OpenMP, or CUDA.
However, it is still difficult to reconcile programmer productivity, which comes from abstracting away the specifics of the architecture, with performance. In this thesis, we exploit the idea that a programming model specific to a particular application domain can reconcile these two antagonistic goals. Indeed, by characterizing a family of applications, it is possible to identify high-level abstractions to model them efficiently. We propose two models specific to the implementation of signal and image processing applications on heterogeneous clusters. The first model is static; we enrich it with a task migration feature. The second is dynamic, based on the StarPU runtime. Both models offer a high level of abstraction by modeling signal and image processing applications as data flow graphs, and they efficiently exploit task, data, and graph parallelism. We validate these models with several implementations and comparisons, including two real-world image processing applications on a CPU-GPU cluster.
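The data parallelism these models exploit can be sketched in a few lines: an image-processing kernel applied to independent rows concurrently. The kernel and names are invented for illustration; a runtime such as StarPU would additionally schedule the tasks across CPUs and GPUs.

```python
from concurrent.futures import ThreadPoolExecutor


def smooth_row(row):
    """Toy 3-tap averaging filter standing in for an image kernel."""
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
            for i in range(n)]


def process_image(image, workers=4):
    """Data parallelism over rows: each row is independent, so the
    rows are filtered concurrently by a pool of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(smooth_row, image))


image = [[0, 3, 6], [9, 9, 9]]
result = process_image(image)  # [[1.0, 3.0, 5.0], [9.0, 9.0, 9.0]]
```

Expressing the whole application as a graph of such kernels is what lets the runtime also exploit task- and graph-level parallelism on top of this row-level data parallelism.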
10

Online Sample Selection for Resource Constrained Networked Systems

Sjösvärd, Philip, Miksits, Samuel January 2022 (has links)
As more devices with different service requirements become connected to networked systems, such as Internet of Things (IoT) devices, maintaining quality of service becomes increasingly difficult. Large data sets can be collected ahead of time to train prediction models offline; this, however, incurs high computational costs. Online learning is an alternative approach in which a smaller cache of fixed size is maintained for training using sample selection algorithms, allowing for lower computational costs and real-time model re-computation.
This project has resulted in two newly designed sample selection algorithms: Binned Relevance and Redundancy Sample Selection (BRR-SS) and the Autoregressive First In, First Out buffer (AR-FIFO). The algorithms are evaluated on data traces retrieved from a Key Value store and a Video on Demand service. The prediction accuracy of the resulting model under each sample selection algorithm, together with the time to process a received sample, is evaluated and compared against the pre-existing Reservoir Sampling (RS) and Relevance and Redundancy Sample Selection (RR-SS), with and without model re-computation. The results show that, while RS maintains the lowest computational overhead, BRR-SS outperforms both RS and RR-SS in prediction accuracy on the investigated traces. AR-FIFO, with its low computational cost, outperforms offline learning for larger cache sizes on the Key Value data set but shows inconsistencies on the Video on Demand trace. Model re-computation reduces error rates and significantly lowers variance on the investigated data traces, with periodic model re-computation overall outperforming change detection in practicality, prediction accuracy, and computational overhead. / Bachelor's thesis in electrical engineering, 2022, KTH, Stockholm
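The Reservoir Sampling baseline mentioned above is a classic algorithm and can be sketched directly: it maintains a uniform random sample of fixed size over an unbounded stream, which is exactly the fixed-cache constraint of the online setting.

```python
import random


def reservoir_sample(stream, k, rng=random):
    """Classic Reservoir Sampling (RS): maintains a uniform random
    sample of size k over a stream using O(k) memory and constant
    work per item."""
    cache = []
    for n, item in enumerate(stream):
        if n < k:
            cache.append(item)           # fill the cache first
        else:
            j = rng.randrange(n + 1)     # item survives with prob. k/(n+1)
            if j < k:
                cache[j] = item          # evict a uniformly chosen slot
    return cache


random.seed(0)
sample = reservoir_sample(range(10_000), k=32)
```

RS treats every sample as equally valuable; the BRR-SS and RR-SS algorithms above instead weigh relevance and redundancy when deciding which cached sample to evict, trading extra per-sample computation for better prediction accuracy.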
