1.
Modèles, méthodes et outils pour les systèmes répartis multiéchelles / Models, methods and tools for multiscale distributed systems. Rottenberg, Sam, 27 April 2015.
Computer systems are becoming more and more complex, and most of them are distributed over several levels of Information and Communication Technology (ICT) infrastructure. Such systems are sometimes referred to as multiscale distributed systems. The term "multiscale" may describe very different distributed systems depending on the viewpoints from which they are characterized, such as the geographic dispersion of the entities, the nature of the devices that host them, the networks they are deployed on, or the organization of their users. For a given entity of a multiscale system, the communication technologies, the non-functional properties (in terms of persistence or security), and the architectures to be favored may vary according to the relevant multiscale characterization defined for the system and to the scale associated with the entity. Moreover, ad hoc architectures for such complex systems are costly and not sustainable. In this thesis, we propose a multiscale characterization framework called MuSCa. The framework includes a characterization process based on the concepts of viewpoints, dimensions, and scales, which brings out the multiscale characteristics of each system under study. These concepts form the core of a dedicated metamodel. The proposed framework allows designers of multiscale distributed systems to share a taxonomy for qualifying each system. The result of a characterization is a model from which the framework produces software artifacts that provide scale awareness to the system's entities at runtime.
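The abstract presents the viewpoint, dimension, and scale concepts only informally, so here is a minimal sketch, in Python, of how a characterization model built on these concepts might be represented in code. All class names, example viewpoints, and scale values below are hypothetical illustrations, not the actual MuSCa metamodel.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified rendering of the characterization concepts
# (viewpoints, dimensions, scales); not the actual MuSCa metamodel.

@dataclass
class Scale:
    name: str   # e.g. "building", "city", "world"
    order: int  # position of the scale along its dimension

@dataclass
class Dimension:
    name: str                                        # measurable axis of a viewpoint, e.g. "distance"
    scales: List[Scale] = field(default_factory=list)

@dataclass
class Viewpoint:
    name: str                                        # e.g. "Geography", "Device", "Network", "User"
    dimensions: List[Dimension] = field(default_factory=list)

@dataclass
class Characterization:
    """Result of characterizing one multiscale distributed system."""
    system: str
    viewpoints: List[Viewpoint] = field(default_factory=list)

    def scales_of(self, viewpoint: str, dimension: str) -> List[str]:
        # Ordered scale names for one dimension, usable at runtime for scale awareness.
        for vp in self.viewpoints:
            if vp.name != viewpoint:
                continue
            for dim in vp.dimensions:
                if dim.name == dimension:
                    return [s.name for s in sorted(dim.scales, key=lambda s: s.order)]
        return []

# Example: a geography viewpoint with a distance dimension and three scales.
geo = Viewpoint("Geography", [Dimension("distance",
    [Scale("building", 0), Scale("city", 1), Scale("world", 2)])])
model = Characterization("smart-city demo", [geo])
print(model.scales_of("Geography", "distance"))  # ['building', 'city', 'world']
```

A scale-aware entity could, for instance, query such a model at runtime to decide which communication technology or persistence policy fits its own scale, which is the kind of scale awareness the abstract attributes to the generated artifacts.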
2.
Detecção de anomalias em aplicações Web utilizando filtros baseados em coeficiente de correlação parcial / Anomaly detection in web applications using filters based on partial correlation coefficient. Silva, Otto Julio Ahlert Pinno da, 31 October 2014.
Finding faults or the causes of performance problems in modern Web computer systems is an arduous task that involves many hours of monitoring system metrics and analyzing logs. To aid administrators in this task, many anomaly detection mechanisms have been proposed that analyze the behavior of the system by collecting a large volume of statistical information about the condition and performance of the computer system. One approach adopted by these mechanisms is monitoring through strong correlations found in the system. In this approach, collecting large amounts of data creates drawbacks related to communication, storage, and especially the processing of the collected information. Nevertheless, few anomaly detection mechanisms have a strategy for selecting the statistical information to be collected, that is, for selecting the monitored metrics. This work presents three metric selection filters for anomaly detection mechanisms based on correlation monitoring. The filters build on partial correlation, a technique capable of providing information that common correlation methods cannot reveal. The filters were validated in a Web application scenario; to simulate this environment we used TPC-W, an e-commerce Web transaction benchmark. The results of our evaluation show that one of the filters allowed the construction of a monitoring network with 8% fewer metrics than state-of-the-art filters, while achieving fault coverage up to 10% more efficient.
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
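The abstract does not detail the three filters, so the following is a hedged sketch, in Python with NumPy, of the general idea behind a partial-correlation-based metric selection filter: compute first-order partial correlations between pairs of metrics while controlling for a third metric, and keep in the monitoring network only the strongly correlated pairs whose relation is not explained away by another metric. Function names and thresholds are assumptions for illustration, not the filters evaluated in the dissertation.

```python
import numpy as np

def partial_correlation(x, y, z):
    """First-order partial correlation of x and y controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2)).
    Perfect collinearity with z is not handled in this sketch."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def select_metric_pairs(data, strong=0.8, direct=0.5):
    """Keep only metric pairs whose strong correlation survives controlling
    for every other metric. data: samples-by-metrics array."""
    n = data.shape[1]
    corr = np.corrcoef(data, rowvar=False)
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) < strong:
                continue  # not a strong correlation, nothing to monitor
            # Drop the pair if some third metric k explains the correlation away.
            mediated = any(
                abs(partial_correlation(data[:, i], data[:, j], data[:, k])) < direct
                for k in range(n) if k not in (i, j)
            )
            if not mediated:
                pairs.append((i, j))
    return pairs
```

Applied to a samples-by-metrics matrix collected from a TPC-W run, such a filter could return the pairs of metrics whose correlations are worth monitoring directly, which is the kind of reduction of the monitoring network the abstract refers to.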