81. Quantitative Analysis of Configurable and Reconfigurable Systems. Dubslaff, Clemens, 21 March 2022.
The often huge configuration spaces of modern software systems render the detection, prediction, and explanation of defects and inadvertent behaviors challenging tasks. Besides configurability, a further source of complexity is the integration of cyber-physical systems (CPSs). Behaviors in CPSs depend on quantitative aspects such as throughput, energy consumption, and probability of failure, all of which play a central role in new technologies like 5G networks, the tactile internet, autonomous driving, and the internet of things. The manifold environmental influences and human interactions within CPSs might also trigger reconfigurations, e.g., to ensure quality of service through adaptivity or to fulfill users' wishes by adjusting program settings and performing software updates. Such reconfigurations add yet another source of complexity to the quest of modeling and analyzing modern software systems.
The main contribution of this thesis is a formal compositional modeling and analysis framework for systems that involve configurability, adaptivity through reconfiguration, and quantitative aspects. Existing modeling approaches for configurable systems are commonly divided into annotative and compositional approaches, which have complementary strengths and weaknesses. It has been a well-known open problem in the configurable systems community whether there is a hybrid approach that combines the strengths of both specification styles. We provide a formal solution to this problem, prove its correctness, and show practical applicability to actual configurable systems by introducing a formal analysis framework and its implementation. While existing family-based analysis approaches for configurable systems have mainly focused on software systems, we show the effectiveness of such approaches in the hardware domain as well. To explicate the impact of configuration options on analysis results, we introduce the notion of feature causality, inspired by the seminal counterfactual definition of causality by Halpern and Pearl. By means of several experimental studies, including a velocity controller of an aircraft system whose analysis already required new techniques, we show how our notion of causality facilitates identifying root causes, estimating the effects of features, and detecting feature interactions.

Contents:
1 Introduction
2 Foundations
3 Probabilistic Configurable Systems
4 Analysis and Synthesis in Reconfigurable Systems
5 Experimental Studies
6 Causality in Configurable Systems
7 Conclusion
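The counterfactual reading of feature causality described in the abstract can be illustrated in a few lines: a feature's value is a candidate cause of a defect if the defect occurs as configured but disappears when that single value is flipped. This is only a toy sketch of the intuition, not the thesis's formal definition; the option names and the `defect` predicate are invented.

```python
# Illustrative sketch of the counterfactual intuition behind feature
# causality: a feature is a candidate cause of a defect in a given
# configuration if the defect occurs with the feature's current value
# but disappears when that single value is flipped (all else fixed).
# The predicate `defect` and the option names are hypothetical.

def candidate_causes(config, defect):
    """Return options whose flip alone makes the defect vanish."""
    causes = []
    for option, value in config.items():
        counterfactual = dict(config)
        counterfactual[option] = not value   # flip exactly one option
        if defect(config) and not defect(counterfactual):
            causes.append(option)
    return causes

# Toy system: the defect manifests only when ENCRYPTION is on while
# COMPRESSION is off (a simple two-feature interaction).
def defect(cfg):
    return cfg["ENCRYPTION"] and not cfg["COMPRESSION"]

cfg = {"ENCRYPTION": True, "COMPRESSION": False, "LOGGING": True}
print(candidate_causes(cfg, defect))  # ['ENCRYPTION', 'COMPRESSION']
```

Note that `LOGGING` is not reported: flipping it leaves the defect in place, which is exactly the counterfactual test for (non-)causality.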

82. Automated support of the variability in configurable process models. Assy, Nour, 28 September 2015.
Today's fast-changing environment imposes new challenges for the effective management of business processes.
In such a highly dynamic environment, business process design becomes time-consuming, error-prone, and costly. Seeking reuse and adaptability is therefore a pressing need for successful business process design. Recently introduced configurable reference models were a step toward enabling process design by reuse while providing flexibility. A configurable process model is a generic model that integrates multiple variants of the same business process in a given domain through variation points. These variation points are referred to as configurable elements and allow for multiple design options in the process model. A configurable process model needs to be configured to a specific requirement by selecting one design option for each configurable element. Recent research on configurable process models has led to the specification of configurable process modeling notations, for example configurable Event-Driven Process Chains (C-EPC), which extend the EPC notation with configurable elements. Since then, the question of how to build and configure configurable process models has been investigated. On the one hand, as configurable process models tend to be very complex, with a large number of configurable elements, many automated approaches have been proposed to assist their design. Existing approaches, however, recommend entire configurable process models, which are difficult to reuse, computationally expensive to produce, and liable to confuse the process designer. On the other hand, research results on configurable process model design highlight the need for support in configuring the process. Many approaches therefore propose building a configuration support system to assist end users in selecting desirable configuration choices according to their requirements.
However, these systems are currently created manually by domain experts, which is undoubtedly a time-consuming and error-prone task. In this thesis, we aim to automate the support of variability in configurable process models. Our objective is twofold: (i) assisting configurable process design in a fine-grained way, using configurable process fragments that are close to the designer's interests, and (ii) automating the creation of configuration support systems in order to release process analysts from the burden of building them manually. To achieve the first objective, we propose to learn from the experience gained through past process modeling in order to assist process designers with configurable process fragments. The proposed fragments inspire the process designer to complete the design of the ongoing process. To achieve the second objective, we observe that previously designed and configured process models contain implicit knowledge that is useful for process configuration. We therefore propose to benefit from the experience gained through past process modeling and configuration in order to assist process analysts in building their configuration support systems. Such systems assist end users in interactively configuring the process by recommending suitable configuration decisions.
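The idea of mining past configurations to drive a configuration support system can be sketched as a simple conditional-frequency recommender: for each configurable element, recommend the option most often chosen in past configurations that match the choices already made. Everything below (element names, option values, the past configurations) is invented for illustration and far simpler than the systems the thesis builds.

```python
# Minimal sketch of learning configuration support from past process
# configurations: for each configurable element, count which design
# option past tenants chose, and recommend the most frequent option
# among past configurations compatible with the choices made so far.
# Element and option names are illustrative, not a real notation.
from collections import Counter

past_configs = [
    {"payment": "credit_card", "shipping": "express"},
    {"payment": "credit_card", "shipping": "standard"},
    {"payment": "invoice",     "shipping": "standard"},
    {"payment": "credit_card", "shipping": "express"},
]

def recommend(element, partial_config):
    """Most frequent past choice for `element`, given the decisions
    already recorded in `partial_config`."""
    counts = Counter(
        cfg[element]
        for cfg in past_configs
        if element in cfg
        and all(cfg.get(k) == v for k, v in partial_config.items())
    )
    return counts.most_common(1)[0][0] if counts else None

print(recommend("shipping", {"payment": "credit_card"}))  # express
```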

83. Proactive Mitigation of Vulnerabilities in Plugin-Based Web Systems. Mesa Rodriguez, Oslien, 12 May 2020.
A common software product line strategy involves plug-in-based web systems that support the simple and rapid incorporation of custom behaviors; such systems are widely adopted for building web-based applications. The popularity of ecosystems that support plug-in-based development (such as WordPress) is largely due to the number of customization options available as community-contributed plugins. However, plug-in-related vulnerabilities tend to be recurring, exploitable, and difficult to detect, and they can lead to serious consequences for the custom product. Therefore, these vulnerabilities must be understood to enable the prevention of relevant security threats. In this paper, we conduct an exploratory study to characterize plug-in vulnerabilities in web-based systems by examining the WordPress vulnerability bulletins cataloged by the National Vulnerability Database and the associated patches maintained by the WordPress plugin repository. We identify the main types of vulnerabilities, their impact, and the size of the patch needed to address each vulnerability. We also identify the most common security-related topics discussed among WordPress developers. We note that while vulnerabilities can have serious consequences and remain unnoticed for a long time, they can often be mitigated with minor changes to the source code. This characterization helps provide an understanding of how such vulnerabilities manifest themselves in practice and contributes to new generations of vulnerability-testing tools that can anticipate their potential occurrence. This research proposes a support tool to mitigate the occurrence of vulnerabilities in plugin-based web systems, facilitating the discovery and anticipation of the possible occurrence of vulnerabilities.
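The kind of characterization the study performs, grouping vulnerability reports by type and summarizing patch size, can be sketched as follows. The records below are invented placeholders, not actual NVD entries.

```python
# Sketch of characterizing plugin vulnerabilities: group reports by
# vulnerability type and summarize the patch size (lines changed).
# The records are invented for illustration, not real NVD data.
from statistics import median
from collections import defaultdict

reports = [
    {"type": "XSS",  "patch_lines": 4},
    {"type": "XSS",  "patch_lines": 12},
    {"type": "SQLi", "patch_lines": 7},
    {"type": "CSRF", "patch_lines": 3},
    {"type": "XSS",  "patch_lines": 5},
]

by_type = defaultdict(list)
for r in reports:
    by_type[r["type"]].append(r["patch_lines"])

for vuln_type, sizes in sorted(by_type.items()):
    print(f"{vuln_type}: {len(sizes)} report(s), "
          f"median patch size {median(sizes)} lines")
```

Even this toy summary reflects the study's observation that patches tend to be small relative to the severity of the vulnerabilities they fix.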

84. A 5.5–7.5-GHz band-configurable wake-up receiver fully integrated in 45-nm RF-SOI CMOS. Ma, Rui; Protze, Florian; Ellinger, Frank, 30 May 2024.
This work investigates a 5.5–7.5-GHz band-configurable duty-cycled wake-up receiver (WuRX) fully implemented in a 45-nm radio-frequency (RF) silicon-on-insulator (SOI) complementary metal-oxide-semiconductor (CMOS) technology. Based on an uncertain-intermediate-frequency (IF) super-heterodyne receiver (RX) topology, the WuRX analogue front-end (AFE) incorporates a 5.5–7.5-GHz band-tunable low-power low-noise amplifier, a low-power Gilbert mixer, a digitally controlled oscillator (DCO), a 100-MHz IF band-pass filter (BPF), an envelope detector, a comparator, a pulse generator, and a current reference. By applying duty cycling with a duty cycle below 1%, the power consumption of the AFE is significantly reduced. In addition, the on-chip digital back-end consists of a frequency divider, a phase corrector, a 31-bit correlator, and a serial peripheral interface. A proof-of-concept WuRX circuit occupying an area of 1200 μm by 900 μm has been fabricated in a GlobalFoundries 45-nm RF-SOI CMOS technology. Measurement results show that at a data rate of 64 bps, the entire WuRX consumes only 2.3 μW. Tested at 8 operation bands covering 5.5–7.7 GHz, the WuRX has a measured sensitivity between −67.5 dBm and −72.4 dBm at a wake-up error rate of 10⁻³. With the sensitivity unchanged, the data rate of the WuRX can be scaled up to 8.2 kbps. To the authors' best knowledge, this work offers the largest RF bandwidth, from 5.5 to 7.5 GHz, the most operation channels (≥8), and the fastest settling time (<115 ns) among the WuRXs reported to date.
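The power saving attributed to duty cycling follows from a simple weighted average: average power is the on-state power weighted by the duty cycle plus the sleep power for the rest of the period. The on/sleep power figures below are assumptions for illustration, not measurements from the paper.

```python
# Back-of-envelope model of duty cycling: the average power of the
# analogue front-end is the on-state power weighted by the duty
# cycle plus the sleep power for the remainder of the period.
# The power numbers are assumed for illustration only.

def average_power_uw(p_on_uw, p_sleep_uw, duty_cycle):
    """Average power in microwatts for a given duty cycle (0..1)."""
    return duty_cycle * p_on_uw + (1.0 - duty_cycle) * p_sleep_uw

p_on = 200.0    # assumed active AFE power, in microwatts
p_sleep = 0.5   # assumed sleep power, in microwatts
print(average_power_uw(p_on, p_sleep, 0.01))  # 2.495
```

With these assumed numbers, a sub-1% duty cycle cuts average power by roughly two orders of magnitude relative to continuous operation, which is the effect the abstract describes qualitatively.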

85. Software Performance Modeling for Multi-Factor System Variability. Mühlbauer, Stefan, 13 January 2025.
Modern software systems offer numerous configuration options to optimize performance, often using machine learning-based models. However, these models frequently overlook external factors such as software versions, usage scenarios, and hardware setups, raising concerns about their real-world applicability.
This research expands performance modeling to include system variability from both software evolution and workload variations. It empirically analyzes how these factors influence performance and develops methods to adapt models accordingly.
Experiments reveal that software performance evolves with abrupt changes linked to code revisions or merges. An active learning strategy efficiently detects performance change points, aiding in identifying significant shifts. Additionally, large-scale empirical analysis shows how varying configurations and workloads affect performance, identifying significant data shifts that question the accuracy of traditional models. Correlating performance data with code coverage information reveals that code coverage testing can identify workload-sensitive configuration options.
Based on these comprehensive empirical results, this thesis proposes integrating environmental factors into performance modeling. A combined coarse-grained screening strategy and stepwise feature selection increases the information gained from a sample of measurements. Compared to a Lasso baseline, this method more accurately identifies performance-relevant factors, particularly in feature-rich scenarios.
This research provides a foundational understanding of multi-factor system variability in performance modeling, offering efficient strategies for managing software evolution and workload variability.
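The active-learning idea for locating performance change points can be sketched, under the simplifying assumption of a single abrupt shift, as a bisection over the revision history: whenever performance differs across an interval, measure its midpoint, so only O(log n) revisions are benchmarked. `measure` stands in for running a real benchmark on a checked-out revision; the thesis's actual strategy handles noise and multiple change points.

```python
# Minimal sketch of actively locating a single abrupt performance
# change point in a revision history: when performance before and
# after an interval differs, bisect the interval, measuring only
# O(log n) revisions. `measure` is a stand-in for benchmarking a
# checked-out revision.

def find_change_point(revisions, measure, tol=1.0):
    """Return the first revision index whose performance differs
    from revision 0 by more than `tol` (assumes one change point)."""
    lo, hi = 0, len(revisions) - 1
    base = measure(revisions[lo])
    if abs(measure(revisions[hi]) - base) <= tol:
        return None  # no change across the whole history
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if abs(measure(revisions[mid]) - base) > tol:
            hi = mid   # change lies at or before mid
        else:
            lo = mid   # change lies after mid
    return hi

# Toy history: runtime jumps from ~10 s to ~15 s at revision 6.
history = list(range(10))
runtime = lambda rev: 10.0 if rev < 6 else 15.0
print(find_change_point(history, runtime))  # 6
```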

86. Implementation Strategies for Particle Filter-based Target Tracking. Velmurugan, Rajbabu, 03 April 2007.
This thesis contributes new algorithms and implementations for particle filter-based target tracking. From an algorithmic perspective, modifications that improve a batch-based acoustic direction-of-arrival (DOA), multi-target, particle filter tracker are presented. The main improvements are reduced execution time and increased robustness to target maneuvers. The key feature of the batch-based tracker is an image template-matching approach that handles data association and clutter in measurements. The particle filter tracker is compared to an extended Kalman filter (EKF) and a Laplacian filter and is shown to perform better for maneuvering targets. Using an approach similar to the acoustic tracker, a radar range-only tracker is also developed. This includes developing the state update and observation models, and proving observability for a batch of range measurements.
From an implementation perspective, this thesis provides new low-power and real-time implementations for particle filters. First, to achieve a very low-power implementation, two mixed-mode implementation strategies that use analog and digital components are developed. The mixed-mode implementations use analog, multiple-input translinear element (MITE) networks to realize nonlinear functions. The power dissipated in the mixed-mode implementation of a particle filter-based, bearings-only tracker is compared to a digital implementation that uses the CORDIC algorithm to realize the nonlinear functions. The mixed-mode method that uses predominantly analog components is shown to provide a factor of twenty improvement in power savings compared to a digital implementation. Next, real-time implementation strategies for the batch-based acoustic DOA tracker are developed. The characteristics of the digital implementation of the tracker are quantified using digital signal processor (DSP) and field-programmable gate array (FPGA) implementations. The FPGA implementation uses a soft-core or hard-core processor to implement the Newton search in the particle proposal stage. A MITE implementation of the nonlinear DOA update function in the tracker is also presented.
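As a generic illustration of the propagate/weight/resample cycle at the heart of any particle filter (not the thesis's batch-based DOA tracker), a minimal 1-D bootstrap filter for a random-walk state with noisy observations:

```python
# Generic bootstrap particle filter for a 1-D random-walk state with
# noisy observations: a minimal illustration of the propagate /
# weight / resample cycle, not the thesis's DOA tracker.
import math
import random

def particle_filter(observations, n_particles=500,
                    process_std=0.5, obs_std=1.0, seed=7):
    rng = random.Random(seed)
    # Initialize particles from a broad prior around zero.
    particles = [rng.gauss(0.0, 2.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Propagate: random-walk process model.
        particles = [p + rng.gauss(0.0, process_std) for p in particles]
        # Weight: Gaussian likelihood of the observation.
        weights = [math.exp(-((z - p) ** 2) / (2.0 * obs_std ** 2))
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Estimate: weighted mean of the particle cloud.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample: multinomial resampling proportional to weight.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Track a slowly drifting state observed in noise.
true_states = [0.1 * t for t in range(30)]
obs_rng = random.Random(1)
observations = [s + obs_rng.gauss(0.0, 1.0) for s in true_states]
estimates = particle_filter(observations)
print(round(estimates[-1], 2))  # roughly tracks the final true state (2.9)
```

The nonlinear steps here (the exponential in the weighting, and any nonlinear observation model) are exactly the functions the thesis maps onto MITE networks or CORDIC hardware.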

87. Automatic non-functional testing and tuning of configurable generators. Boussaa, Mohamed, 06 September 2017.
Generative software development has paved the way for the creation of multiple generators (code generators and compilers) that serve as a basis for automatically producing code for a broad range of software and hardware platforms. With fully automatic code generation, users are able to rapidly synthesize software artifacts for various software platforms. In addition, they can easily customize the generated code for the target hardware platform, since modern generators (e.g., C compilers) have become highly configurable, offering numerous configuration options that the user can apply. Consequently, the quality of generated software becomes highly correlated with the configuration settings as well as with the generator itself. In this context, it is crucial to verify the correct behavior of generators. Numerous approaches have been proposed to verify the functional outcome of generated code, but few of them evaluate the non-functional properties of automatically generated code, namely its performance and resource usage. This thesis addresses three problems: (1) Non-functional testing of generators: We benefit from the existence of multiple code generators with comparable functionality (i.e., code generator families) to automatically test the generated code. We leverage metamorphic testing to detect non-functional inconsistencies in code generator families by defining metamorphic relations as test oracles. We define the metamorphic relation as a comparison between the variations in performance and resource usage of code generated from the same code generator family. We evaluate our approach by analyzing the performance of Haxe, a popular code generator family. Experimental results show that our approach is able to automatically detect several inconsistencies that reveal real issues in this family of code generators.
(2) Generator auto-tuning: We exploit recent advances in search-based software engineering in order to provide an effective approach to tune generators (i.e., through optimizations) according to the user's non-functional requirements (i.e., performance and resource usage). We also demonstrate that our approach can be used to automatically construct optimization levels that represent optimal trade-offs between multiple non-functional properties such as execution time and resource usage requirements. We evaluate our approach by verifying the optimizations performed by the GCC compiler. Our experimental results show that our approach is able to auto-tune compilers and construct optimizations that yield better performance than the standard optimization levels. (3) Handling the diversity of software and hardware platforms in software testing: Running tests and evaluating resource usage in heterogeneous environments is tedious. To handle this problem, we benefit from recent advances in lightweight system virtualization, in particular container-based virtualization, in order to offer effective support for automatically deploying, executing, and monitoring code in heterogeneous environments, and for collecting non-functional metrics (e.g., memory and CPU consumption). This testing infrastructure serves as a basis for evaluating the experiments conducted in the first two contributions.
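The metamorphic oracle for code generator families can be sketched as an outlier check: code generated by every backend from the same source should behave comparably, so a backend whose measured execution time deviates far from the family median is flagged as an inconsistency. Backend names and timings below are invented for illustration.

```python
# Sketch of a metamorphic test oracle for code generator families:
# the same source compiled by every backend should behave comparably,
# so a backend whose execution time exceeds a chosen factor of the
# family median is flagged. Names and timings are invented.
from statistics import median

def inconsistencies(timings, factor=3.0):
    """Flag backends whose time exceeds `factor` x the family median."""
    m = median(timings.values())
    return sorted(b for b, t in timings.items() if t > factor * m)

family_timings = {          # seconds to run the same generated program
    "cpp_backend": 1.1,
    "java_backend": 1.4,
    "js_backend": 1.2,
    "php_backend": 9.8,     # suspiciously slow: candidate inconsistency
}
print(inconsistencies(family_timings))  # ['php_backend']
```

The point of the metamorphic relation is that no reference output is needed: the family members serve as each other's oracle.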

88. Supporting cloud resource allocation in configurable business process models. Hachicha Belghith, Emna, 22 September 2017.
Organizations are increasingly adopting Process-Aware Information Systems (PAIS) to manage their service-based business processes using process models, referred to as business process models. Motivated by the need to adapt to changing business requirements and to reduce maintenance costs, organizations outsource their processes to Cloud Computing. According to the NIST Institute, Cloud Computing is a model that enables providers to share their resources and users to access them in a convenient, on-demand way. In such a multi-tenant environment, configurable process models allow a Cloud process provider to deliver a customizable process that different tenants can configure according to their needs. A business process can be specified from several perspectives, such as the control-flow perspective, the resource perspective, and so on. Several approaches have been proposed for the first perspectives, notably the control-flow. The resource perspective, however, which is of equal importance, has been neglected and not explicitly defined. On the one hand, managing the resource perspective, specifically Cloud resource allocation, is a topical subject involving much research: modeling and configuring resources is a sensitive, labor-intensive task, and although different approaches exist, they mainly address human resources rather than Cloud resources. On the other hand, although configurable process models are highly complementary to the Cloud, how resources are configured and integrated is barely addressed. The approaches that do extend resource configuration do not cover Cloud properties, notably elasticity and sharing. To address these gaps, we propose an approach to support the modeling and configuration of Cloud resource allocation in configurable process models. We aim to (1) define a unified and formal description of the resource perspective, (2) ensure correct, conflict-free, and optimized resource allocation, (3) help process providers design their configurable resource allocation in a fine-grained way so as to avoid large and complex results, and (4) optimize the selection of Cloud resources with respect to requirements on Cloud properties (elasticity and sharing) and QoS properties. To this end, we first propose a semantic framework for semantically enriched resource descriptions in business processes, formalizing the consumed Cloud resources using a shared knowledge base. We then build on social business processes to provide strategies that ensure controlled, conflict-free resource allocation. Next, we propose a novel approach that extends configurable process models to enable configurable Cloud resource allocation; our goal is to move Cloud resource allocation from the tenant side to the Cloud process provider side for centralized resource management.
Après, nous proposons des approches génétiques qui visent à choisir une configuration optimale des ressources d'une manière efficace sur le plan énergétique en améliorant les propriétés QoS.Afin de montrer l'efficacité de nos propositions, nous avons développé concrètement (1) une série de preuves de concepts, en tant que partie de validation, pour aider à concevoir des modèles de processus et remplir une base de connaissances de modèles de processus hétérogènes avec des ressources Cloud et (2) ont effectué des expériences sur des modèles de processus réels à partir de grands ensembles de données / Organizations are recently more and more adopting Process-Aware Information Systems (PAIS) for managing their service-based processes using process models referred to as business process models. Motivated by adapting to the rapid changing business requirements and reducing maintenance costs, organizations are outsourcing their processes in an important infrastructure which is Cloud Computing. According to the NIST Institute, Cloud Computing is a model that enables providers sharing their computing resources (e.g., networks, applications, and storage) and users accessing them in convenient and on-demand way with a minimal management effort. In such a multi-tenant environment, using configurable process models allows a Cloud process provider to deliver a customizable process that can be configured by different tenants according to their needs.A business process could be specified from various perspectives such as the control-flow perspective, the organizational perspective, the resource perspective, etc. Several approaches have been correctly proposed at the level of the first perspectives, in particular the control-flow, i.e., the temporal ordering of the process activities. Nevertheless, the resource perspective, which is of equal importance, has been neglected and poorly operated. 
The management of the resource perspective especially the Cloud resource allocation in business processes is a current interesting topic that increasingly involves many researches in both academics and industry. The design and configuration of resources are undoubtedly sensitive and labor-intensive task. On the one hand, the resource perspective in process models is not explicitly defined. Although many proposals exist in the literature, they all targeted human resources rather than Cloud resources. On the other hand, despite of the fact that the concept of configurable process models is highly complementary to Cloud Computing, the way in how resources can be configured and integrated is hardly handled. The few proposals, which have been suggested on extending configuration to resources, do not cover required Cloud properties such as elasticity or multi-tenancy.To address these limitations, we propose an approach for supporting the design and configuration of Cloud resource Allocation in configurable business process models. We target to (1) define a unified and formal description for the resource perspective, (2) ensure a correct, free-of-conflict and optimized use of Cloud resource consumption, (3) assist process providers to design their configurable resource allocation in a fine-grained way to avoid complex and large results, and (4) optimize the selection of Cloud resources with respect to the requirements related to Cloud properties (elasticity and shareability) and QoS properties.To do so, we first suggest a semantic framework for a semantically-enriched resource description in business processes aiming at formalizing the consumed Cloud resources using a shared knowledge base. Then, we build upon social business processes to provide strategies in order to ensure a controlled resource allocation without conflicts in terms of resources. 
Next, we propose a novel approach that extends configurable process models to permit a configurable Cloud resource allocation. Our purpose is to shift the Cloud resource allocation from the tenant side to the Cloud process provider side for a centralized resource management. Afterwards, we propose genetic-based approaches that aim at selecting optimal resource configuration in an energy efficient manner and to improve non-functional properties.In order to show the effectiveness of our proposals, we concretely developed (i) a set of proof of concepts, as a validation part, to assist the design of process models and populate a knowledge base of heterogeneous process models with Cloud resources, and (ii) performed experiments on real process models from large datasets
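The genetic-based selection described above can be illustrated with a minimal sketch. The VM types, their QoS scores and energy costs, and the fitness weights below are all hypothetical placeholders, not the thesis's actual model; the sketch only shows the shape of a genetic search over resource configurations.

```python
import random

# Each gene assigns one of several hypothetical VM types to a process
# activity; fitness trades off QoS (higher is better) against energy cost.
VM_TYPES = [  # (name, qos_score, energy_cost) -- made-up values
    ("small", 1.0, 1.0),
    ("medium", 2.0, 2.5),
    ("large", 3.5, 5.0),
]

def fitness(config, qos_weight=1.0, energy_weight=0.6):
    qos = sum(VM_TYPES[g][1] for g in config)
    energy = sum(VM_TYPES[g][2] for g in config)
    return qos_weight * qos - energy_weight * energy

def evolve(n_activities=6, pop_size=20, generations=40, seed=42):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(VM_TYPES)) for _ in range(n_activities)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_activities)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # random mutation
                child[rng.randrange(n_activities)] = rng.randrange(len(VM_TYPES))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

A real energy-aware configurator would replace the toy fitness function with measured QoS and consumption models and would additionally enforce elasticity and shareability constraints on the candidate configurations.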
|
89 |
[pt] AGENTES EMBARCADOS DE IOT AUTO-CONFIGURÁVEIS CONTROLADOS POR REDES NEURAIS / [en] SELF-CONFIGURABLE IOT EMBEDDED AGENTS CONTROLLED BY NEURAL NETWORKSNATHALIA MORAES DO NASCIMENTO 12 May 2020 (has links)
[en] Agent-based Internet of Things (IoT) applications have recently emerged as applications that involve sensors, wireless devices, machines, and software that exchange data and can be accessed remotely. Such applications have been proposed in several domains, including health care, smart cities, and agriculture. Embodied agents is a term used to denote intelligent embedded agents, which we use to design agents for the IoT domain. Each agent is provided with a body, which has sensors to collect data from the environment and actuators to interact with it, and a controller, usually represented by an artificial neural network.
Because reconfigurable behavior is key for autonomous embodied agents, a spectrum of approaches exists to support system reconfiguration. However, new approaches are needed to handle agent and environment variability, together with procedures to investigate the relationship between the body and the controller of an embodied agent as the interaction between the agent and the environment changes. In addition to the variability of the body and the controller of these agents, such as the number and types of sensors or the number of layers and types of activation function of the neural network, it is also necessary to deal with the variability of the environment in which these agents are situated.
To give the discussion of embodied agents a formal basis and clarify these concepts, this thesis presents a reference model for self-configurable IoT embodied agents. Based on this reference model, we have created three approaches to design and test self-configurable IoT embodied agents: i) a software framework for the development of embodied agents for Internet of Things (IoT) applications; ii) an architecture to configure the body and controller of the agents according to environment variants; and iii) a tool for testing embodied agents. To evaluate these approaches, we conducted case studies and experiments in different application domains.
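The body/controller split described above can be sketched in a few lines: the body fixes the sensor and actuator interface, and the controller is a neural network whose layer sizes are derived from that body, so reconfiguring the body implies rebuilding a matching controller. The class names, sensor names, and dimensions are illustrative assumptions, not the thesis's actual framework.

```python
import math
import random

class Controller:
    """Tiny feed-forward network; weights are random for illustration."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

    def act(self, inputs):
        hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in self.w1]
        return [math.tanh(sum(w * h for w, h in zip(row, hidden))) for row in self.w2]

class EmbodiedAgent:
    def __init__(self, sensors, actuators, n_hidden=4):
        self.sensors = sensors        # e.g. ["temperature", "light"]
        self.actuators = actuators    # e.g. ["fan"]
        # The controller's input/output layers are sized from the body, so
        # adding a sensor means instantiating a new, matching controller.
        self.controller = Controller(len(sensors), n_hidden, len(actuators))

    def step(self, readings):
        """Map one vector of sensor readings to named actuator commands."""
        return dict(zip(self.actuators, self.controller.act(readings)))

agent = EmbodiedAgent(["temperature", "light"], ["fan"])
command = agent.step([0.8, 0.1])   # one actuator command in [-1, 1]
```

In a self-configuring setting, an architecture such as the one the thesis proposes would decide, based on environment variants, which sensors to include and would then retrain or regenerate the controller for the new body.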
|
90 |
A Trusted Autonomic Architecture to Safeguard Cyber-Physical Control Leaf Nodes and Protect Process IntegrityChiluvuri, Nayana Teja 16 September 2015 (has links)
Cyber-physical systems are networked through IT infrastructure and are susceptible to malware. Threats targeting process control are far more safety-critical than those against traditional computing systems, since they jeopardize the integrity of physical infrastructure. Existing defence mechanisms address security at the network nodes but do not protect the physical infrastructure once network integrity is compromised. An interface guardian architecture is implemented on cyber-physical control leaf nodes to maintain process integrity by enforcing high-level safety and stability policies.
Preemptive detection schemes monitor process behavior and anticipate malicious activity before process safety and stability are compromised. Autonomic properties automatically protect process integrity by initiating switch-over to a verified backup controller. Subsystems adhere to strict trust requirements that safeguard them from adversarial intrusion. The preemptive detection schemes, switch-over logic, backup controller, and process communication are all trusted components, separated from the untrusted production controller.
The proposed architecture is applied to a rotary inverted pendulum experiment and implemented on a Xilinx Zynq-7000 configurable SoC. The leaf node implementation is integrated into a cyber-physical control topology. Simulated attack scenarios show strengthened resilience to both network integrity and reconfiguration attacks. Threats attempting to disrupt process behavior are successfully thwarted by having a backup controller maintain process stability. The system ensures both safety and liveness properties even under adversarial conditions. / Master of Science
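The preemptive switch-over logic described above can be sketched as a simple control loop: a monitor checks the plant state against a warning threshold tighter than the actual safety bound and, before the (possibly compromised) production controller can violate the policy, hands control permanently to a trusted backup. The thresholds, controllers, and dynamics below are toy assumptions, not the pendulum experiment's actual parameters.

```python
SAFE_LIMIT = 1.0   # hypothetical safety bound on the plant state
WARN_LIMIT = 0.8   # preemptive threshold: act before safety is violated

def production_controller(state):
    return state + 0.3   # misbehaving: steadily drives the state upward

def backup_controller(state):
    return state * 0.5   # trusted: pulls the state back toward zero

def run(state=0.0, steps=10):
    use_backup = False
    trace = []
    for _ in range(steps):
        if abs(state) > WARN_LIMIT:  # preemptive detection
            use_backup = True        # switch-over is one-way (fail-secure)
        ctrl = backup_controller if use_backup else production_controller
        state = ctrl(state)
        assert abs(state) <= SAFE_LIMIT, "safety policy violated"
        trace.append(round(state, 3))
    return use_backup, trace

switched, trace = run()
```

Because the warning threshold sits inside the safety bound, the guardian takes over while the policy still holds, which mirrors the architecture's goal of maintaining process stability even when the production controller is compromised.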
|