111 |
Contribution à l'évaluation de sûreté de fonctionnement des architectures de surveillance/diagnostic embarquées. Application au transport ferroviaire / Contribution to the dependability assessment of embedded monitoring/diagnosis architectures. Application to railway transport. Gandibleux, Jean (6 December 2013)
Dans le transport ferroviaire, le coût et la disponibilité du matériel roulant sont des questions majeures. Pour optimiser le coût de maintenance du système de transport ferroviaire, une solution consiste à mieux détecter et diagnostiquer les défaillances. Actuellement, les architectures de surveillance/diagnostic centralisées atteignent leurs limites et imposent d'innover. Cette innovation technologique peut se matérialiser par la mise en oeuvre d'architectures embarquées de surveillance/diagnostic distribuées et communicantes, afin de détecter et localiser plus rapidement les défaillances et de les valider dans le contexte opérationnel du train. Les présents travaux de doctorat, menés dans le cadre du FUI SURFER (SURveillance active Ferroviaire) coordonné par Bombardier, visent à proposer une démarche méthodologique d'évaluation de la sûreté de fonctionnement d'architectures de surveillance/diagnostic. Pour ce faire, une caractérisation et une modélisation génériques des architectures de surveillance/diagnostic, basées sur le formalisme des réseaux de Petri stochastiques, ont été proposées. Ces modèles génériques intègrent les réseaux de communication (et les modes de défaillance associés), qui constituent un point dur des architectures de surveillance/diagnostic retenues. Les modèles proposés ont été implantés et validés théoriquement par simulation, et une étude de sensibilité de ces architectures de surveillance/diagnostic à certains paramètres influents a été menée. Enfin, ces modèles génériques sont appliqués à un cas réel du domaine ferroviaire, les systèmes d'accès voyageurs des trains, qui sont critiques en matière de disponibilité et de diagnosticabilité. / In railway transport, rolling stock cost and availability are major concerns. To optimise the maintenance cost of the railway transport system, one solution consists in better detecting and diagnosing failures. Today, centralized monitoring/diagnosis architectures are reaching their limits, and innovation is therefore necessary. This technological innovation may take the form of embedded, distributed and communicating monitoring/diagnosis architectures, in order to detect and localize failures faster and to validate them within the train's operational context. The present research work, carried out as part of the SURFER FUI project (French acronym standing for active railway monitoring) led by Bombardier, aims to propose a methodology for assessing the dependability of monitoring/diagnosis architectures. To this end, a generic characterization and modeling of monitoring/diagnosis architectures, based on the stochastic Petri net formalism, have been proposed. These generic models take into account the communication networks (and their associated failure modes), which constitute a central point of the studied monitoring/diagnosis architectures. The proposed models have been implemented and theoretically validated by simulation, and the sensitivity of these monitoring/diagnosis architectures to influential parameters has been studied. Finally, the generic models have been applied to a real case from the railway domain, the train passenger access systems, which are critical in terms of availability and diagnosability.
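The abstract names stochastic Petri nets but gives no model details. Purely as a hedged illustration of the underlying idea (Monte Carlo estimation of availability, with the communication network treated as one failing component in a series structure), one might write the following sketch; all component names, MTBF and MTTR values are invented, not taken from the thesis:

```python
import random

def simulate_availability(mtbf, mttr, horizon, runs=2000, seed=1):
    """Estimate the availability of one repairable component by alternating
    exponentially distributed up-times (mean mtbf) and repair times (mean mttr)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        t, up = 0.0, 0.0
        while t < horizon:
            d = rng.expovariate(1.0 / mtbf)   # time to next failure
            up += min(d, horizon - t)
            t += d
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)  # time under repair
        total += up / horizon
    return total / runs

# The monitoring chain delivers a diagnosis only if the sensor node, the
# communication network and the diagnosis unit are all up (series structure).
avail = [simulate_availability(m, r, horizon=10_000.0)
         for m, r in [(500.0, 5.0), (200.0, 2.0), (1000.0, 8.0)]]
system_availability = avail[0] * avail[1] * avail[2]
```

Each component's steady availability is MTBF/(MTBF+MTTR), which the simulated estimate approaches as the number of runs grows; the series-system availability is the product of the three.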
|
112 |
Évaluation par simulation de la sûreté de fonctionnement de systèmes en contexte dynamique hybride / Evaluation by simulation of the dependability of systems in a hybrid dynamic context. Perez Castaneda, Gabriel Antonio (30 March 2009)
La recherche de solutions analytiques pour l'évaluation de la fiabilité en contexte dynamique n'est pas résolue dans le cas général. Un état de l'art présenté dans le chapitre 1 montre que des approches partielles relatives à des hypothèses particulières existent. La simulation de Monte Carlo serait le seul recours, mais il n'existait pas d'outils performants permettant la simulation simultanée de l'évolution discrète du système et de son évolution continue prenant en compte les aspects probabilistes. Dans ce contexte, dans le chapitre 2, nous introduisons le concept d'automate stochastique hybride, capable de prendre en compte tous les problèmes posés par la fiabilité dynamique et d'accéder à l'évaluation des grandeurs de la sûreté de fonctionnement par une simulation de Monte Carlo implémentée dans l'environnement Scilab-Scicos. Dans le chapitre 3, nous montrons l'efficacité de notre approche de simulation pour l'évaluation de la sûreté de fonctionnement en contexte dynamique sur deux cas test, dont un est un benchmark de la communauté de la sûreté de fonctionnement. Notre approche permet de répondre aux problèmes posés, notamment celui de la prise en compte de l'influence de l'état discret, de l'état continu et de leur interaction dans l'évaluation probabiliste des performances d'un système dans lequel, en outre, les caractéristiques fiabilistes des composants dépendent elles-mêmes des états continu et discret. Dans le chapitre 4, nous donnons une idée de l'intérêt du contrôle par supervision comme moyen d'amélioration de la sûreté de fonctionnement. Les concepts d'automate observateur et de contrôleur ont été introduits et illustrés sur notre cas test afin de montrer leur potentialité. / The search for analytical solutions for reliability assessment in a dynamic context remains unsolved in the general case. A state of the art presented in Chapter 1 shows that partial approaches exist under particular hypotheses. Monte Carlo simulation appears to be the only recourse, but no efficient tools existed for simultaneously simulating the discrete evolution of the system and its continuous evolution while taking the probabilistic aspects into account. In this context, in Chapter 2, we introduce the concept of a hybrid stochastic automaton, capable of taking into account all the problems posed by dynamic reliability and of giving access to the assessment of dependability measures through a Monte Carlo simulation implemented in the Scilab-Scicos environment. In Chapter 3, we show the effectiveness of our simulation approach for dependability assessment in a dynamic context on two test cases, one of which is a benchmark of the dependability community. Our approach answers the posed problems, notably taking into account the influence of the discrete state, the continuous state and their interaction in the probabilistic assessment of the performance of a system in which, moreover, the reliability characteristics of the components themselves depend on the continuous and discrete states. In Chapter 4, we give an idea of the interest of control by supervision as a means of achieving dependability. The concepts of observer automaton and controller are introduced and illustrated on our test case in order to show their potential.
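A hybrid stochastic automaton couples continuous dynamics, discrete modes and random mode changes. The toy sketch below is not the benchmark used in the thesis; its dynamics, rates and threshold are invented, and it only illustrates the kind of Monte Carlo estimate involved, where the unreliability depends jointly on a continuous level and a random discrete transition:

```python
import random

def mc_unreliability(t_mission=10.0, dt=0.02, runs=4000,
                     fail_rate=0.2, threshold=12.0, seed=7):
    """Monte Carlo over a toy hybrid stochastic automaton: a continuous
    level x(t) rises at +1/s in the nominal mode and at +3/s after a
    random controller failure (exponential, rate fail_rate).  Returns the
    estimated probability that x crosses `threshold` before t_mission,
    a quantity that depends on the interaction between the discrete mode
    and the continuous state."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(runs):
        t_fail = rng.expovariate(fail_rate)    # random discrete transition time
        x, t = 0.0, 0.0
        while t < t_mission:
            rate = 1.0 if t < t_fail else 3.0  # dynamics depend on the mode
            x += rate * dt                     # Euler step of the ODE
            t += dt
            if x >= threshold:                 # continuous state hits the barrier
                crossings += 1
                break
    return crossings / runs

p_fail = mc_unreliability()  # analytic value here: 1 - exp(-0.2 * 9) ~ 0.835
```

For this toy model the crossing happens exactly when the failure occurs before t = 9 s, so the estimate can be checked against the closed form; in realistic dynamic-reliability problems no such closed form exists, which is the motivation for the simulation approach.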
|
113 |
Vers les applications fiables basées sur des composants dynamiques / Towards Dependable Dynamic Component-based Applications. Santos da Gama, Kiev (6 October 2011)
Les logiciels s'orientent de plus en plus vers des architectures évolutives, capables de s'adapter facilement aux changements et d'intégrer de nouvelles fonctionnalités. Ceci est important pour plusieurs classes d'applications qui ont besoin d'évoluer sans que cela implique d'interrompre leur exécution. Des plateformes dynamiques à composants autorisent ce type d'évolution à l'exécution, en permettant aux composants d'être chargés et exécutés sans requérir le redémarrage complet de l'application en service. Toutefois, la flexibilité d'un tel mécanisme introduit de nouveaux défis, qui exigent de gérer les possibles erreurs dues à des incohérences dans le processus de mise à jour, ou au comportement défectueux de composants survenant pendant l'exécution de l'application. Des composants tiers dont l'origine ou la qualité sont inconnues peuvent être considérés a priori comme peu fiables, car ils peuvent potentiellement introduire des défauts dans les applications lorsqu'ils sont combinés avec d'autres composants. Nous nous intéressons à la réduction de l'impact de ces composants considérés comme non fiables, qui sont susceptibles de compromettre la fiabilité de l'application en cours d'exécution. Cette thèse porte sur l'application de techniques pour améliorer la fiabilité des applications dynamiques à composants. Pour cela, nous proposons l'utilisation de frontières d'isolation fournissant un confinement des fautes : le composant ainsi isolé ne perturbe pas le reste de l'application quand il est défaillant. Une telle approche peut être vue sous trois perspectives : (i) l'isolement des composants dynamiques, régi par une politique d'exécution reconfigurable ; (ii) l'autoréparation des conteneurs d'isolement ; et (iii) l'utilisation des aspects pour séparer les préoccupations de fiabilité du code fonctionnel. / Software is moving towards evolutionary architectures that are able to easily accommodate changes and integrate new functionality. This is important in a wide range of applications, from plugin-based end-user applications to critical applications with high availability requirements. Dynamic component-based platforms allow software to evolve at runtime by allowing components to be loaded and executed without forcing applications to be restarted. However, the flexibility of such a mechanism requires applications to cope with errors due to inconsistencies in the update process, or due to faulty behavior of components introduced during execution. This is especially true when dealing with third-party components, making it harder to predict the impacts (e.g., runtime incompatibilities, application crashes) and to maintain application dependability when integrating such third-party code into the application. Components whose origin or quality attributes are unknown can be considered untrustworthy, since they can potentially, even if unintentionally, introduce faults into applications when combined with other components. The quality of components is harder to evaluate when components are combined, especially if this happens on the fly. We are interested in reducing the impact of untrustworthy components deployed at runtime that could potentially compromise application dependability. This thesis focuses on applying techniques for moving a step closer towards dependable dynamic component-based applications by addressing different dependability attributes, namely reliability, maintainability and availability. We propose the use of strong component isolation boundaries, providing a fault-contained environment for running untrustworthy components separately. Our solution combines three approaches: (i) the dynamic isolation of components, governed by a runtime-reconfigurable policy; (ii) a self-healing component isolation container; and (iii) the use of aspects to separate dependability concerns from functional code.
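The thesis builds its isolation boundaries on a dynamic component platform (the Java/OSGi world). Purely as a hypothetical sketch of the fault-containment idea in a much simpler setting, one might wrap an untrusted component so that its failures are trapped and the component is quarantined instead of crashing the whole application; all class and method names here are invented:

```python
class IsolationContainer:
    """Wraps an untrusted component behind a fault-containment boundary:
    any exception the component raises is trapped and the component is
    quarantined instead of propagating the failure to the application."""
    def __init__(self, component):
        self._component = component
        self.quarantined = False

    def call(self, method, *args, default=None):
        if self.quarantined:
            return default                  # fail silently once quarantined
        try:
            return getattr(self._component, method)(*args)
        except Exception:
            self.quarantined = True         # contain the fault at the boundary
            return default

class FlakySensor:                          # hypothetical untrustworthy component
    def read(self):
        raise RuntimeError("internal fault")

class GoodSensor:                           # hypothetical well-behaved component
    def read(self):
        return 42

flaky = IsolationContainer(FlakySensor())
good = IsolationContainer(GoodSensor())
values = [flaky.call("read", default=-1), good.call("read", default=-1)]
```

A self-healing container, as in perspective (ii), would additionally try to reload or replace the quarantined component rather than leaving it disabled.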
|
114 |
The management of security officer's performance within a private security company in Gauteng. Horn, Heather Elizabeth (01 1900)
This study was undertaken to investigate whether a performance management system exists within the security industry, applicable specifically to security officers; which performance factors apply to security officers; and how security officers perceive performance management.
The management of security officers' performance is an aspect of management which has not garnered much interest compared to other operational and management areas, hence the paucity of research on the performance management of security officers. Security officers make a major contribution to the labour market, with 7 949 security companies listed on the Private Security Industry Regulatory Authority (PSIRA) website and 2 973 of them (37%) based in Gauteng alone. However, despite the high number of companies, the industry has attracted little attention in terms of performance management.
The overall research purpose of this study was to explore the management of security officers' performance in a private security company operating in South Africa, focusing specifically on a company based in the Gauteng Province.
The study investigated security officers' perceptions of performance management and the link between perceived and actual job performance. The researcher also investigated whether biographical factors influenced security officers' performance.
A quantitative research methodology was used to conduct the study. The main research instruments were primary data, comprising a self-developed questionnaire, and secondary data, comprising company records. The respondents were security officers in the region who had been subjected to the Dependability and Safety Instrument (DSI) between 2013 and 2015 and who were still employed at the company at the time of the study.
The findings identified 11 performance management factors and indicated links between self-reported and actual work performance. Biographical characteristics did not appear to influence the work performance of the security officers. However, the results did indicate that employees with shorter tenure were more prone to disciplinary action by the company, while those with higher levels of education faced fewer disciplinary actions and dismissals for absence without leave (AWOL).
The study identified the areas that play a significant role in the management of security officers’ performance. The identification of performance management factors in the security industry and security officers’ perceptions about performance management should enable HR officers to develop and implement a performance management system that will contribute to better service delivery to both internal and external clients in this industry. / Business Management / M. Com. (Business Management)
|
115 |
Model-based Evaluation: from Dependability Theory to Security. Alaboodi, Saad Saleh (21 June 2013)
How to quantify security is a classic question in the security community that to date has had no plausible answer. Unfortunately, current security evaluation models are often either quantitative but too specific (i.e., of limited applicability), or comprehensive (i.e., system-level) but qualitative. The importance of quantifying security cannot be overstated, but doing so is difficult and complex, for many reasons: the “physics” of the amount of security is ambiguous; the operational state is defined by two confronting parties; protecting and breaking systems is a cross-disciplinary endeavour; security is achieved through comparable security strength yet broken at the weakest link; and the human factor is unavoidable, among others. Thus, security engineers face great challenges in defending the principles of information security and privacy. This thesis addresses model-based system-level security quantification and argues that properly addressing the quantification problem of security first requires a paradigm shift in security modeling, addressing the problem at the abstraction level of what defines a computing system and its failure model, before any system-level analysis can be established. Consequently, we present a candidate computing-systems abstraction and failure model, then propose two failure-centric model-based quantification approaches, each including a bounding system model, performance measures, and evaluation techniques. The first approach addresses the problem by considering the set of controls. To bound and build the logical network of a security system, we extend our original work on the Information Security Maturity Model (ISMM) with Reliability Block Diagrams (RBDs), state vectors, and structure functions from reliability engineering. We then present two different groups of evaluation methods.
The first mainly addresses binary systems, by extending minimal path sets, minimal cut sets, and reliability analysis based on both random events and random variables. The second group addresses multi-state security systems with multiple performance measures, by extending Multi-state Systems (MSSs) representation and the Universal Generating Function (UGF) method. The second approach addresses the quantification problem when the two sets of a computing system, i.e., assets and controls, are considered. We adopt a graph-theoretic approach using Bayesian Networks (BNs) to build an asset-control graph as the candidate bounding system model, then demonstrate its application in a novel risk assessment method with various diagnosis and prediction inferences. This work, however, is multidisciplinary, involving foundations from many fields, including security engineering; maturity models; dependability theory, particularly reliability engineering; graph theory, particularly BNs; and probability and stochastic models.
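As a hedged illustration of the minimal-cut-set machinery named above (the example system, control names and failure probabilities are invented, not from the thesis), the failure probability of a system of independent components can be computed exactly from its minimal cut sets by inclusion-exclusion:

```python
from itertools import combinations

def system_unreliability(cut_sets, p_fail):
    """Exact system failure probability from minimal cut sets of
    independent components, by inclusion-exclusion: the system fails
    iff every component of at least one cut set fails."""
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            union = set().union(*combo)      # components in the union of cuts
            prob = 1.0
            for c in union:
                prob *= p_fail[c]
            total += (-1) ** (k + 1) * prob
    return total

# Hypothetical security system: two redundant firewalls in parallel,
# followed by a single IDS in series.  The minimal cut sets are
# {fw1, fw2} (both firewalls down) and {ids} (the IDS down).
p_fail = {"fw1": 0.1, "fw2": 0.1, "ids": 0.05}
q = system_unreliability([{"fw1", "fw2"}, {"ids"}], p_fail)
reliability = 1.0 - q
```

Here q = 0.01 + 0.05 - 0.0005 = 0.0595, showing how the single-point IDS ("the weakest link") dominates the redundant firewall pair.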
|
117 |
Automatic Hardening against Dependability and Security Software Bugs / Automatisches Härten gegen Zuverlässigkeits- und Sicherheitssoftwarefehler. Süßkraut, Martin (15 June 2010)
It is a fact that software has bugs, and these bugs can lead to failures. Dependability and security failures in particular are a great threat to software users. This thesis introduces four novel approaches that can be used to automatically harden software at the user's site; automatic hardening removes bugs from already deployed software. All four approaches are automated, i.e., they require little support from the end user, although two of them need some support from the software developer. The presented approaches can be grouped into error toleration and bug removal. The two error-toleration approaches focus primarily on the fast detection of security errors; once an error is detected, it can be tolerated with well-known existing approaches. The other two approaches remove dependability bugs from already deployed software. We tested all approaches with existing benchmarks and applications, such as the Apache web server.
|
118 |
On quantifying military strategy. Engelbrecht, Gerhard Nieuwoudt (30 June 2003)
Military strategy is defined as a plan at the military-strategic level of war that consists of a set of military-strategic ends, ways and means and the relationships between them. This definition leads to the following research questions:
1. How can the extent of the many-to-many relationships that exist between a military strategy, its ends, ways and means be quantified?
2. If the relationships between a military strategy, its ends, ways and means are quantified, and if the effectiveness of the force design elements is known, how does that enable the quantification of the state's ability to execute its military strategy?
3. If the relationships between a military strategy, its ends, ways and means are quantified, and if the effectiveness of the force design elements is known, how will this aid decision-making about the acquisition of the future force design?
The first research question is answered by mapping a military strategy, complete with its ends, ways and means, to a ranked tree in which the entities of the strategy correspond to vertices of different rank. The tree representation is used to define and determine the contribution of entities in a military strategy to entities at the next higher level. It is explained how analytical, heuristic and judgement methods can be employed to find the relative and real contribution values. A military strategy for South Africa is also developed to demonstrate the concept.
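A minimal sketch of such a ranked-tree rollup, with an invented ends-ways-means tree and invented contribution weights (not the South African strategy developed in the thesis): each non-terminal vertex aggregates the weighted support of its children, so leaf effectiveness scores propagate up to the strategy level.

```python
def rollup(tree, scores):
    """Propagate support values up a ranked tree: a node's support is the
    weighted sum of its children's support, the weights being each child's
    relative contribution to its parent.  Terminal vertices (means) carry
    measured effectiveness scores in [0, 1]."""
    def value(node):
        children = tree.get(node)
        if not children:                 # terminal vertex: a force-design element
            return scores[node]
        return sum(w * value(child) for child, w in children.items())
    return value

# Hypothetical strategy: one end supported by two ways, each by some means.
tree = {
    "end":  {"way1": 0.6, "way2": 0.4},
    "way1": {"means_a": 0.7, "means_b": 0.3},
    "way2": {"means_c": 1.0},
}
scores = {"means_a": 0.9, "means_b": 0.5, "means_c": 0.8}
support = rollup(tree, scores)("end")    # 0.6*(0.7*0.9 + 0.3*0.5) + 0.4*0.8
```

With these numbers the end is supported at 0.788 of its ideal value; the same rollup applies at every rank of a deeper tree.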
The second research question is answered by developing measures of effectiveness that take the interdependence of entities at the terminal vertices of the ranked tree into account. Thereafter, the degree to which the force design supports the higher-order entities, up to and including the military strategy itself, can be calculated.
The third research question is answered by developing a cost-benefit analysis method and a distance indicator from an optimal point to aid in deciding between supplier options for acquisition. The knapsack problem is then extended to allow acquisition projects to be scheduled while optimising the force design's support of the military strategy.
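The knapsack formulation can be sketched as follows; the project names, costs and benefit values are invented, and the thesis' amended scheduling variant is reduced here to the plain 0/1 selection problem solved by dynamic programming:

```python
def select_projects(projects, budget):
    """0/1 knapsack by dynamic programming: choose acquisition projects
    maximising total contribution to the military strategy within a fixed
    budget.  projects is a list of (name, cost, benefit) with integer costs."""
    best = [0] * (budget + 1)                  # best[b] = max benefit with budget b
    chosen = [set() for _ in range(budget + 1)]
    for name, cost, benefit in projects:
        for b in range(budget, cost - 1, -1):  # descend so each project is used once
            if best[b - cost] + benefit > best[b]:
                best[b] = best[b - cost] + benefit
                chosen[b] = chosen[b - cost] | {name}
    return best[budget], chosen[budget]

# Hypothetical candidate projects: (name, cost, contribution to the strategy).
projects = [("frigate", 6, 9), ("radar", 3, 5), ("uav", 4, 6), ("sims", 2, 2)]
value, picked = select_projects(projects, budget=9)   # -> 14, {"frigate", "radar"}
```

The scheduling extension described in the abstract would add a time dimension (a budget per period), but the optimisation core is this same trade-off between cost and contribution.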
Finally, the model is validated and put into a contextual framework for use in the military. / Operations Management / D.Phil.
|
119 |
Modeling of Secure Dependable (S&D) applications based on patterns for Resource-Constrained Embedded Systems (RCES) / Modélisation des applications « sécurisées et sûres » (S&D) à base de patrons pour des systèmes embarqués contraints en ressources (RCES). Ziani, Adel (19 September 2013)
La complexité croissante du matériel et du logiciel dans le développement des applications pour les systèmes embarqués induit de nouveaux besoins et de nouvelles contraintes en termes de fonctionnalités, de capacité de stockage, de calcul et de consommation d'énergie. Cela entraîne des difficultés accrues pour les concepteurs, alors que les contraintes commerciales liées au développement et à la production de ces systèmes demeurent fortes. Un autre défi, qui s'ajoute à cette complexité, est le développement d'applications avec de fortes exigences de sécurité et de fiabilité (S&D) pour des systèmes embarqués contraints en ressources (RCES). De ce fait, nous recommandons d'aborder cette complexité via la réutilisation d'un ensemble d'artefacts de modélisation dédiés. Le « patron » constitue l'artefact de base pour représenter des solutions S&D, à spécifier par les experts de ces aspects et à réutiliser par les ingénieurs pour résoudre les problèmes d'ingénierie système/logicielle du domaine confrontée à ces aspects. Dans le cadre de cette thèse, nous proposons une approche d'ingénierie à base de modèles pour la spécification, le packaging et la réutilisation de ces artefacts afin de modéliser et d'analyser ces systèmes. Le fondement de notre approche est un ensemble de langages de modélisation et de règles de transformation, couplés à un référentiel orienté modèles et à des moteurs de recherche et d'instanciation. Ces langages de modélisation permettent de spécifier les patrons, les systèmes de patrons et un ensemble de modèles de propriétés et de ressources. Ces modèles permettent de gouverner l'utilisation des patrons, leur organisation et leur analyse en vue de leur réutilisation. Pour les transformations, nous avons conçu un ensemble de règles pour l'analyse de la satisfiabilité d'architectures logicielles à base de patrons S&D sur des plateformes matérielles décrites par des modèles de ressources.
Pour le développement du référentiel, nous proposons un processus de spécification et de génération basé sur les langages de modélisation décrits au préalable. Les moteurs permettent de retrouver puis d'instancier ces artefacts vers des environnements de développement spécifiques. Dans le cadre de l'assistance au développement des applications S&D pour les RCES, nous avons implémenté une suite d'outils, basée sur Eclipse EMF/QVTO, pour supporter la spécification de ces artefacts et l'analyse des applications S&D autour d'un référentiel. Afin de faciliter l'utilisation de l'approche proposée, nous préconisons un ensemble de méthodes pour accompagner cette suite d'outils tout au long du processus de développement. En outre, nous avons abordé la construction, par génération automatique, de référentiels orientés modèles, basée sur Eclipse EMF/QVTO et la plateforme Eclipse CDO, accompagnée d'un ensemble d'outils pour le peuplement, l'accès et la gestion de ce référentiel. Les problèmes étudiés dans ce travail ont été identifiés dans le contexte du projet européen FP7 TERESA. Les solutions proposées ont ainsi été démontrées et évaluées dans le cadre de ce projet, à travers le cas d'étude d'une application de contrôle, présentant des exigences de fiabilité et de sécurité, pour les systèmes ferroviaires. / Non-functional requirements such as security and dependability (S&D) are becoming more and more important, as well as more and more difficult to achieve, particularly in embedded systems development. Such systems come with a large number of common characteristics, including real-time and temperature constraints, security and dependability, as well as efficiency requirements. In particular, the development of Resource-Constrained Embedded Systems (RCES) has to address constraints regarding memory, computational processing power and/or energy consumption.
In this work, we propose a modeling environment which combines the model-driven paradigm and a model-based repository to support the design and packaging of S&D patterns, resource models and their property models. The approach is based on a set of modeling languages coupled with a model repository, and on search and instantiation engines targeting specific development environments. These modeling languages allow patterns, resources and a set of property models to be specified. The property models govern the use of patterns and their analysis for reuse. In addition, we propose a specification and generation process for such repositories. As part of the assistance for the development of S&D applications, we have implemented a tool chain based on the Eclipse platform to support the different activities around the repository, including the analysis activities. The proposed solutions were evaluated in the TERESA project through a case study from the railway domain.
|
120 |
Gerenciamento de Faltas em Computação em Nuvem: Um Mapeamento Sistemático de Literatura / Fault Management in Cloud Computing: A Systematic Mapping Study. Leite Neto, Clodoaldo Brasilino (22 January 2014)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Background: With the growing popularity of cloud computing, a challenge in this discipline is the management of faults that may occur in such big infrastructures, which, because of their size, have a greater chance of faults, errors and failures occurring. A work that efficiently maps the solutions already created by researchers should help to visualize gaps in these research fields.
Aims: This work aims to find research gaps in the cloud computing and fault management domains, besides building a social network of researchers in the area.
Method: We conducted a systematic mapping study to collect, filter and classify scientific works in this area. The 4535 scientific papers found on the major search engines were filtered, and the remaining 166 papers were classified according to a taxonomy described in this work.
Results: We found that IaaS is the most explored service model in the selected studies. The main dependability functions explored were tolerance and removal, and the main attributes were reliability and availability. Most papers were classified by research type as solution proposals.
Conclusion: This work summarizes and classifies the research effort conducted on fault management in cloud computing, providing a good starting point for further research in this area. / Fundamentação: Com o grande crescimento da popularidade da computação em nuvem, observa-se que um desafio dessa área é gerenciar falhas que possam ocorrer nas grandes infraestruturas computacionais construídas para dar suporte à computação como serviço. Por serem extensas, possuem maior ocorrência de faltas, falhas e erros. Um trabalho que mapeie as soluções já criadas por pesquisadores de maneira simples e eficiente pode ajudar a visualizar oportunidades e saturações nesta área de pesquisa.
Objetivos: Este trabalho visa mapear de forma sistemática todo o esforço de pesquisa aplicado ao gerenciamento de faltas em computação em nuvem, de forma a facilitar a identificação de áreas pouco exploradas que podem eventualmente representar novas oportunidades de pesquisa.
Metodologia: Utiliza-se a metodologia de pesquisa baseada em evidências, através do método de mapeamento sistemático, sendo a pesquisa construída em três etapas de seleção de estudos.
Método: Conduzimos um mapeamento sistemático para coletar, filtrar e classificar trabalhos científicos na área. Foram inicialmente coletados 4535 artigos científicos nos grandes engenhos de busca que, após três etapas de filtragem, acabaram sendo reduzidos a 166 artigos. Estes artigos restantes foram classificados de acordo com a taxonomia definida neste trabalho.
Resultados: Observa-se que IaaS é a área mais explorada nos estudos selecionados. As funções de gerência de falhas mais exploradas são Tolerância e Remoção, e os atributos são Confiabilidade e Disponibilidade. A maioria dos trabalhos foi classificada como tipo de pesquisa de Proposta de Solução.
Conclusão: Este trabalho sumariza e classifica o esforço de pesquisa conduzido em Gerenciamento de Faltas em Computação em Nuvem, provendo um ponto de partida para pesquisas futuras nesta área.
|