241 |
A Study of Scalability and Performance of Solaris Zones
Xu, Yuan, January 2007 (has links)
This thesis presents a quantitative evaluation of an operating-system virtualization technology known as Solaris Containers or Solaris Zones, with a special emphasis on measuring the influence of a security technology known as Solaris Trusted Extensions. Solaris Zones is an operating-system-level (OS-level) virtualization technology embedded in the Solaris OS that primarily provides containment of processes within the abstraction of a complete operating system environment. Solaris Trusted Extensions is a specific configuration of the Solaris operating system designed to offer multi-level security functionality. Firstly, we examine the scalability of the OS with respect to an increasing number of zones. Secondly, we evaluate the performance of zones in three scenarios. In the first scenario we measure, as a baseline, the performance of Solaris Zones on a 2-CPU-core machine in the standard configuration that is distributed as part of the Solaris OS. In the second scenario we investigate the influence of the number of CPU cores. In the third scenario we evaluate performance in the presence of the security configuration known as Solaris Trusted Extensions. To evaluate performance, we compute a number of metrics using the AIM benchmark, for the global zone, a non-global zone, and increasing numbers of concurrently running non-global zones. We aggregate the results of the latter to compare aggregate system performance against single-zone performance. The results of this study demonstrate the scalability and performance impact of Solaris Zones in the Solaris OS. On our chosen hardware platform, Solaris Zones scales to about 110 zones within a short creation time (i.e., less than 13 minutes per zone for installation, configuration, and boot).
As the number of zones increases, the measured virtualization overhead amounts to a performance decrease of less than 2% for most benchmarks, with one exception: the memory and process management benchmarks typically show performance decreases of 5-12%, depending on the sub-benchmark. When evaluating the Trusted Extensions-based security configuration, additional small performance penalties were measured in the areas of disk/filesystem I/O and inter-process communication. Most benchmarks show that aggregate system performance is higher when distributing system load across multiple zones than when running the same load in a single zone.
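The aggregate-versus-single-zone comparison above can be sketched numerically. A minimal sketch, assuming invented AIM-style scores; the numbers below are illustrative, not the thesis's measured results:

```python
# Hypothetical AIM-style benchmark scores in jobs/minute; the values are
# invented for illustration only.
def overhead_pct(baseline, virtualized):
    """Relative performance decrease of a run versus its baseline, in percent."""
    return (baseline - virtualized) / baseline * 100.0

single_zone = {"disk_io": 1000.0, "memory_mgmt": 900.0}
aggregate_four_zones = {"disk_io": 1120.0, "memory_mgmt": 820.0}

overheads = {bench: overhead_pct(single_zone[bench], aggregate_four_zones[bench])
             for bench in single_zone}
# A negative overhead means the aggregate of several zones outperformed a
# single zone under the same total load, the effect most benchmarks showed.
```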
|
242 |
Directory scalability in multi-agent based systems
Hussain, Shahid; Shabbir, Hassan, January 2008 (has links)
Simulation is one approach to analyzing and modeling complex real-world problems. Multi-agent based systems provide a platform to develop simulations based on the concept of agent-oriented programming. In multi-agent systems, the local interaction between agents contributes to the emergence of global phenomena, which is observed in the results of simulation runs. In such systems, interaction is a common requirement for all agents to perform their tasks. To interact with each other, agents require yellow-page services from the platform to search for other agents. As more and more agents perform searches on this yellow-page directory, performance decreases due to a central bottleneck. In this thesis, we have investigated multiple solutions to this problem. The most promising solution is to integrate distributed shared memory with the directory system. With our proposed solution, empirical analysis shows a statistically significant increase in the performance of the directory service. We expect this result to make a considerable contribution to the state of the art in multi-agent platforms.
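The central yellow-page bottleneck, and the flavour of remedy the thesis investigates, can be illustrated with a hedged sketch; the classes and methods below are hypothetical, not the API of any real multi-agent platform:

```python
# Sketch of the central-directory bottleneck and a replicated alternative.
class CentralDirectory:
    """A single shared yellow-page registry that every agent queries."""
    def __init__(self):
        self._services = {}                   # service name -> list of agent ids
    def register(self, service, agent_id):
        self._services.setdefault(service, []).append(agent_id)
    def search(self, service):
        return self._services.get(service, [])

class ReplicatedDirectory:
    """Each node keeps a full replica, so searches stay node-local
    (in the spirit of backing the directory with distributed shared memory)."""
    def __init__(self, n_nodes):
        self._replicas = [CentralDirectory() for _ in range(n_nodes)]
    def register(self, service, agent_id):
        for replica in self._replicas:        # writes are broadcast to all replicas
            replica.register(service, agent_id)
    def search(self, service, node):
        return self._replicas[node].search(service)  # reads avoid the central copy
```

Searches no longer contend on one shared structure, at the cost of broadcasting registrations.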
|
243 |
Efficient persistence, query, and transformation of large models
Daniel, Gwendal, 14 November 2017 (has links)
The Model Driven Engineering (MDE) paradigm is a software development method that aims to improve productivity and software quality by using models as primary artifacts in all aspects of software engineering processes. In this approach, models are typically used to represent abstract views of a system, manipulate data, validate properties, and are finally transformed into application artifacts (code, documentation, tests, etc.). Among other MDE-based approaches, automatic model generation processes such as Model Driven Reverse Engineering are a family of approaches that rely on existing modeling techniques and languages to automatically create and validate models representing existing artifacts. Model extraction tasks are typically performed by a modeler, and produce a set of views that ease the understanding of the system under study. While MDE techniques have shown positive results when integrated into industrial processes, the existing studies also report that the scalability of current solutions is one of the key issues preventing a wider adoption of MDE techniques in industry.
This is particularly true in the context of generative approaches, which require efficient techniques to store, query, and transform very large models typically built in a single-user context. Several persistence, query, and transformation solutions based on relational and NoSQL databases have been proposed to achieve scalability, but they often rely on a single model-to-database mapping, which suits a specific modeling activity but may not be optimized for other use cases. For example, a graph-based representation is optimized to compute complex navigation paths, but may not be the best solution for repeated atomic accesses. In addition, low-level modeling frameworks were originally developed to handle simple modeling activities (such as manual model edition), and their APIs have not evolved to handle large models, limiting the benefits of advanced storage mechanisms. In this thesis we present a novel modeling infrastructure that aims to tackle scalability issues by providing (i) a new persistence framework that allows choosing the appropriate model-to-database mapping according to a given modeling scenario, (ii) an efficient query approach that delegates complex computations to the underlying database, benefiting from its native optimizations and drastically reducing memory consumption and execution time, and (iii) a model transformation solution that computes transformations directly in the database. Our solutions are built on top of OMG standards such as UML and OCL, and are integrated with de facto standard modeling solutions such as EMF and ATL.
|
244 |
Assessment of the ability of the health management information system in India to use information for action
Sahay, Sundeep, January 2011 (has links)
Magister Public Health - MPH / This thesis explores the interconnected problems of "why is health information not used in practice?" and "what can be done to address this problem?" The primary aim of the thesis was to assess the existing Health Management Information System (HMIS) in India with respect to its ability to support the use of information for action in priority areas identified by the national and state governments. The problem of the lack of effective information use in health management has been fairly well documented in the literature, but much less has been said about what can be done about it, other than the rather superficial advice of increasing the levels of training. The empirical setting for the examination of these research questions was the public sector in India, where the research took place within an action research framework. The author was actively engaged as a participant with national and state authorities in the process of redesigning the HMIS, building and deploying various HMIS reform systems to the states, including the software, capacity building, and making the systems sustainable and scalable. A key focus of the action research was enabling systems that would promote the utilization of the routine data collected through the HMIS, and integrating that data with action areas such as planning, monitoring, and evaluation. Data collection was carried out through various methods, including interviews with key stakeholders, observations, formal and informal discussions carried out face to face and through email or telephone communication, and the writing of various reports which were then commented on by various people, including the state- and national-level user departments. Both quantitative and qualitative data were collected and analyzed.
Quantitative data collected through the "Readiness Matrix for Information for Action" across the three dimensions of human resources, technical infrastructure, and institutional conditions helped to see how states performed individually and how they ranked compared to each other on information generation and use. The matrix also helped to diagnose the dimensions to strengthen in order to improve the states' overall readiness to use information for action. This diagnosis was supplemented through qualitative analysis to probe further into the "why" of the states' performance at various rankings and what could be done to improve matters. The readiness matrix, arguably, could be used by researchers in other settings to help diagnose key areas that need to be strengthened in order to improve information use, and also to evaluate where a state stands in terms of its maturity towards the same. While progress was noted in areas of data coverage, in that some sporadic examples of information use were present and enhancements in capacity and infrastructure were accumulating, challenges remained. Key ones included poor data quality, the unfulfilled promise of integration, and a continuing weak culture of information use. Some key strategies identified to address these challenges included promoting the decentralization of information to support decentralized action, adopting a data warehouse approach, and strengthening collaborative networks. Achieving this, however, requires structural interventions such as broad-basing education in public health informatics, institutionalizing a cadre of public health informatics staff within the Ministry of Health, and promoting the use of software that is open source and based on open standards, such that widespread local use is supported.
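How a readiness matrix of this kind might be scored can be sketched as follows; the dimension names come from the abstract, but the states, ratings, and equal weighting are invented for illustration:

```python
# Hedged sketch of a readiness-matrix style score: each state is rated on the
# three dimensions named in the abstract and ranked by the mean rating.
DIMENSIONS = ("human_resources", "technical_infrastructure",
              "institutional_conditions")

def readiness(ratings):
    """Mean rating across the three dimensions (equal weights assumed)."""
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

states = {
    "State A": {"human_resources": 3, "technical_infrastructure": 4,
                "institutional_conditions": 2},
    "State B": {"human_resources": 4, "technical_infrastructure": 2,
                "institutional_conditions": 4},
}
# Rank states from most to least ready; per-dimension ratings also point at
# which dimension a given state should strengthen first.
ranking = sorted(states, key=lambda s: readiness(states[s]), reverse=True)
```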
|
245 |
Towards ensuring scalability, interoperability and efficient access control in a triple-domain grid-based environment
Nureni Ayofe, Azeez, January 2012 (has links)
Philosophiae Doctor - PhD / The high rate of grid computing adoption, in both academia and industry, has posed challenges regarding efficient access control, interoperability, and scalability. Although several methods have been proposed to address these grid computing challenges, none has proven to be completely efficient and dependable. To tackle these challenges, a novel access control architecture framework, a triple-domain grid-based environment modelled on role-based access control, was developed. The architecture's framework assumes three domains, each with an independent Local Security Monitoring Unit, and a Central Security Monitoring Unit that monitors security for the entire grid. The architecture was evaluated and implemented using the G3S grid security services simulator, a meta-query language for "cross-domain" queries, and Java Runtime Environment 1.7.0.5 for implementing the workflows that define the model's tasks. The simulation results show that the developed architecture is reliable and efficient when measured against the observed parameters and entities. The proposed framework for access control also proved to be interoperable and scalable within the parameters tested.
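The role-based, three-domain access check can be sketched minimally. The unit names echo the abstract's Local and Central Security Monitoring Units, but the roles, permissions, and the rule that both domains must grant a cross-domain permission are illustrative assumptions, not the thesis's actual policy:

```python
# Minimal role-based access sketch in the spirit of the triple-domain model.
class LocalSecurityMonitoringUnit:
    def __init__(self, role_perms):
        self._role_perms = role_perms          # role -> set of permissions
    def allows(self, role, perm):
        return perm in self._role_perms.get(role, set())

class CentralSecurityMonitoringUnit:
    """Oversees cross-domain requests across all local units."""
    def __init__(self, domains):
        self._domains = domains                # domain name -> local unit
    def cross_domain_allows(self, src, dst, role, perm):
        # Assumed rule: both source and destination domains must grant it.
        return (self._domains[src].allows(role, perm)
                and self._domains[dst].allows(role, perm))

domains = {
    "domain_a": LocalSecurityMonitoringUnit({"researcher": {"read"}}),
    "domain_b": LocalSecurityMonitoringUnit({"researcher": {"read", "write"}}),
}
central = CentralSecurityMonitoringUnit(domains)
```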
|
246 |
Scaling out-of-core k-nearest neighbors computation on single machines
Olivares, Javier, 19 December 2016 (has links)
The K-Nearest Neighbors (KNN) algorithm is an efficient method for finding similar items within a large dataset. Over the years, a huge number of applications have used KNN's capabilities to discover similarities within data generated in areas as diverse as business, medicine, music, and computer science. Although years of research have produced several approaches to this algorithm, its implementation remains a challenge, particularly today when data is growing at unthinkable rates. In this context, running KNN on large datasets raises two major issues: huge memory footprints and very long runtimes. Because of these high costs in terms of computational resources and time, state-of-the-art KNN works do not consider the fact that data can change over time, always assuming that the data remains static throughout the computation, which unfortunately does not conform to reality at all. Our contributions in this thesis address these challenges. Firstly, we propose an out-of-core approach to compute KNN on large datasets using a single commodity PC. We advocate this approach as an inexpensive way to scale the KNN computation compared to the high cost of a distributed algorithm, both in terms of computational resources and of coding, debugging, and deployment effort. Secondly, we propose a multithreaded out-of-core approach to face the challenges of computing KNN on data that changes rapidly and continuously over time.
After a thorough evaluation, we observe that our main contributions address the challenges of computing KNN on large datasets, leveraging the restricted resources of a single machine, decreasing runtimes compared to those of the baselines, and scaling the computation both on static and dynamic datasets.
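A minimal sketch of the out-of-core idea: scan the dataset in blocks so that only one block is resident in memory at a time, keeping a running top-k per query point. This is plain brute force under that constraint, not the thesis's actual algorithm:

```python
# Out-of-core flavoured brute-force KNN: only one block of points is "in
# memory" at a time; a bounded heap keeps the current k nearest candidates.
import heapq

def knn_out_of_core(query, blocks, k):
    """blocks: iterable yielding lists of (point_id, vector) pairs,
    e.g. read from disk one block at a time."""
    heap = []                                  # max-heap via negated distances
    for block in blocks:
        for pid, vec in block:
            dist = sum((q - v) ** 2 for q, v in zip(query, vec))
            if len(heap) < k:
                heapq.heappush(heap, (-dist, pid))
            elif -dist > heap[0][0]:           # closer than the current worst
                heapq.heapreplace(heap, (-dist, pid))
    return sorted(pid for _, pid in heap)      # ids of the k nearest points
```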
|
247 |
Representation and compression at high semantic level of 3D images
Samrouth, Khouloud, 19 December 2014 (has links)
The dissemination of multimedia data, and of images in particular, continues to grow very significantly. Developing effective image coding schemes therefore remains a very active research area. Today, one of the most innovative technologies in this area is 3D technology, which is widely used in many domains such as entertainment, medical imaging, education, and, very recently, criminal investigations. There are different ways of representing 3D information. One of the most common representations is to associate a depth image with a classic colour image called texture. This joint representation allows a good 3D reconstruction, as the two images are well correlated, especially along the contours of the depth image.
Therefore, in comparison with conventional 2D images, knowledge of the depth of field for 3D images provides important semantic information about the composition of the scene. In this thesis, we propose a scalable 3D image coding scheme for the 2D + depth representation with advanced functionalities, which preserves all the semantics present in the images while maintaining significant coding efficiency. The concept of preserving the semantics can be translated into features such as the automatic extraction of regions of interest, the ability to encode regions of interest at higher quality than the background, the post-production of the scene, and indexing. Thus, we first introduce a joint and scalable 2D plus depth coding scheme: texture is coded jointly with depth at low resolution, and a method of depth data compression well suited to the characteristics of depth maps is proposed. This method exploits the strong correlation between the depth map and the texture to better encode the depth map. Then, a high resolution coding scheme is proposed in order to refine the texture quality. Next, we present a global fine representation and content-based coding scheme based on "Depth of Interest", called "3D Autofocus". It consists in a fine extraction of objects, while preserving the contours in the depth map, and it makes it possible to automatically focus on a particular depth zone for a high rendering quality. Finally, we propose a 3D image segmentation providing high consistency between colour, depth, and the regions of the scene. Based on a joint exploitation of the colour and depth information, this algorithm allows the segmentation of the scene with a level of granularity depending on the intended application. Based on such a representation of the scene, it is possible to simply apply the same 3D Autofocus principle for Depth of Interest extraction and coding.
It is remarkable that both approaches ensure high spatial coherence between texture, depth, and regions, making it possible to minimize distortions along the contours of objects of interest and thus to achieve higher quality in the synthesized views.
|
248 |
Comparison of distributed "NoSQL" databases with focus on performance and scalability
Vrbík, Tomáš, January 2011 (has links)
This paper focuses on NoSQL database systems, which currently serve as a supplement to rather than a replacement for relational database systems. The aim of this paper is to compare four selected NoSQL database systems (MongoDB, Apache Cassandra, Apache HBase, and Redis) with a main focus on performance and scalability. The performance comparison is done using a simulated workload in a four-node cluster environment. One relational SQL database is also benchmarked to provide a comparison between the classic and modern ways of maintaining structured data. As a result of the comparison, I found that none of these database systems can be labeled "the best", as each of the compared systems is suitable for a different production deployment.
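A harness for this kind of throughput comparison can be sketched as follows; the in-memory DictStore stands in for a real client (e.g. a Redis or HBase driver), and the workload is illustrative:

```python
# Hedged sketch of a throughput benchmark: run an identical write-then-read
# workload against any store exposing put/get, and report operations/second.
import time

class DictStore:
    """In-memory stand-in for a real key-value client."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def benchmark(store, n_ops):
    start = time.perf_counter()
    for i in range(n_ops):
        store.put(f"key-{i}", i)
    for i in range(n_ops):
        assert store.get(f"key-{i}") == i      # verify reads as part of the run
    elapsed = max(time.perf_counter() - start, 1e-9)
    return (2 * n_ops) / elapsed               # total operations per second
```

Running the same harness against each system's client, on the same cluster and workload, is what makes the resulting numbers comparable.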
|
249 |
CLUE: A Cluster Evaluation Tool
Parker, Brandon S., 12 1900 (has links)
Modern high performance computing depends on parallel processing systems. Most current benchmarks reveal only high-level computational throughput metrics, which may be sufficient for single-processor systems but can misrepresent the true capability of parallel systems. A new benchmark is therefore proposed. CLUE (Cluster Evaluator) uses a cellular automata algorithm to evaluate the scalability of parallel processing machines. The benchmark also uses algorithmic variations to evaluate the impact of individual system components on the overall serial fraction and efficiency. CLUE is not a replacement for other performance-centric benchmarks; rather, it shows the scalability of a system and provides metrics that reveal where overall performance can be improved. CLUE is a new benchmark which enables a better comparison among different parallel systems than existing benchmarks and can diagnose where a particular parallel system can be optimized.
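A serial fraction of the kind CLUE reports can be estimated from measured runtimes. This sketch uses the Karp-Flatt metric, a standard way to derive an experimentally determined serial fraction; whether CLUE computes exactly this quantity is an assumption, and the timings are invented:

```python
# Karp-Flatt metric: experimentally determined serial fraction e, from a
# serial runtime t1 and a parallel runtime tp measured on p processors.
def karp_flatt(t1, tp, p):
    speedup = t1 / tp
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# 100 s serially vs. 40 s on 4 processors (speedup 2.5) implies e = 0.2;
# a perfectly scaling run (speedup == p) implies e = 0.
example = karp_flatt(100.0, 40.0, 4)
```

A serial fraction that grows with p hints at a scalability bottleneck beyond the algorithm's inherent serial work, which is the kind of diagnosis the abstract describes.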
|
250 |
Scalability study of event-driven servers in multi-processed systems: a complete case study
Daniel de Angelis Cordeiro, 27 October 2006 (has links)
The explosive growth in the number of Internet users has led software architects to reevaluate issues related to the scalability of services deployed on a large scale. It is still challenging to design software architectures that do not experience performance degradation when concurrent access increases. In this work, we investigate the impact of the operating system on issues related to the performance, parallelization, and scalability of interactive multiplayer games. In particular, we study and extend the interactive multiplayer game QuakeWorld, made publicly available by id Software under the GPL license. We have created a new parallelization model for Quake's distributed simulation and implemented it in the QuakeWorld server, with adaptations that allow the operating system to manage the execution of the generated workload in a more convenient way.
|