141 |
Análise da escalabilidade de aplicações em computadores multicore [Scalability analysis of applications on multicore computers]. Silva, Samuel Reghim, 14 June 2013
Funding: Financiadora de Estudos e Projetos (FINEP). / Multicore processors let applications exploit thread-level parallelism to reduce execution time. The sharing of the memory subsystem and the gap between processor speed and memory-access latency, however, can limit scalability as threads compete for resources. Automatically determining the number of threads that yields efficient executions for an application, although widely desired, is a non-trivial problem. This work evaluated the factors, related to contention for shared resources in multicore processors, that limit the scalability of OpenMP parallel applications, with the goal of identifying the application characteristics that constrain scalability. Memory accesses were found to be the main limitation on performance gains from parallelism. Granularity, the ratio of memory accesses to computation, proved to be an important indicator of parallel performance, and estimates of it can be obtained from an application's source code. Different data-access patterns, however, indicate that granularity estimates must be combined with information about data-access locality to determine an application's scalability correctly.
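To make the granularity notion above concrete, here is a minimal OpenMP sketch (illustrative only, not code from the thesis): two parallel loops over the same data, one memory-bound and one compute-bound. On a typical multicore machine the second loop scales much closer to the core count, because it performs more computation per memory access.

```cpp
// Illustrative only, not from the thesis: two OpenMP loops with the same
// iteration count but different granularity. The first is memory-bound
// (one load/store per multiply); the second is compute-bound (many
// arithmetic operations per load). Build with: g++ -O2 -fopenmp
#include <omp.h>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 1 << 24;
    std::vector<double> a(n, 1.0), b(n, 2.0);

    double t0 = omp_get_wtime();
    #pragma omp parallel for              // memory-bound: limited by bandwidth
    for (std::size_t i = 0; i < n; ++i)
        a[i] = 2.0 * b[i];
    double t1 = omp_get_wtime();

    #pragma omp parallel for              // compute-bound: scales with cores
    for (std::size_t i = 0; i < n; ++i) {
        double x = b[i];
        for (int k = 0; k < 64; ++k)      // extra arithmetic per memory access
            x = std::sqrt(x) + 1.0;
        a[i] = x;
    }
    double t2 = omp_get_wtime();

    std::printf("memory-bound: %.3fs  compute-bound: %.3fs (threads=%d)\n",
                t1 - t0, t2 - t1, omp_get_max_threads());
    return 0;
}
```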
|
142 |
Análise de escalabilidade de aplicações Hadoop/Mapreduce por meio de simulação [Scalability analysis of Hadoop/MapReduce applications through simulation]. Rocha, Fabiano da Guia, 04 February 2013
In recent years the amount of data processed daily by companies, universities, and other institutions has grown significantly; many use cases report a single application processing petabytes of data on thousands of cores. MapReduce is a programming model, and a framework, for executing applications that manipulate large data volumes on machines composed of thousands of processors or cores, and Hadoop is currently its most widely adopted free implementation. Although the literature reports MapReduce applications running on platforms with thousands of cores, their scalability limits have not been fully explored and much remains to be studied. One of the main challenges in studying the scalability of MapReduce applications is Hadoop's large number of configuration parameters: the literature mentions more than 190, of which 25 are known to affect application performance significantly. This work studies the scalability of MapReduce applications running on Hadoop. Because only a limited number of processors was available, it combines experimentation with simulation: experiments ran on a local cluster of 32 nodes (64 processors), and simulations used MRSG (MapReduce Over SimGrid). A first set of simulations identified the parameters with the greatest impact on application performance and scalability. The work then presents a method for calibrating the simulator against a representative benchmark application; with the calibrated simulator, the scalability of a well-optimized application was predicted for clusters of up to 10,000 nodes.
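As a rough illustration of the extrapolation idea, the sketch below (a hypothetical first-order model, not MRSG) predicts a map-phase makespan from a per-task time calibrated on a small cluster. A real simulator additionally models the network, the shuffle phase, stragglers, and Hadoop's configuration parameters.

```cpp
// Hypothetical back-of-the-envelope model, not MRSG: map tasks run in
// "waves" over the available slots, so the phase time is the number of
// waves times the calibrated per-task time. All numbers are assumptions.
#include <cstdio>

double mapPhaseSeconds(long tasks, int nodes, int slotsPerNode, double secsPerTask) {
    long slots = static_cast<long>(nodes) * slotsPerNode;
    long waves = (tasks + slots - 1) / slots;   // ceiling division: task waves
    return static_cast<double>(waves) * secsPerTask;
}

int main() {
    const long tasks = 100000;        // hypothetical job size
    const double secsPerTask = 12.0;  // per-task time calibrated on a small cluster (assumed)
    for (int nodes : {32, 256, 1024, 10000})
        std::printf("%6d nodes -> %8.0f s\n", nodes,
                    mapPhaseSeconds(tasks, nodes, 2, secsPerTask));
    return 0;
}
```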
|
143 |
Eficiência e auto-escalabilidade na virtualização do serviço de tradução de endereços [Efficiency and auto-scalability in virtualizing the address translation service]. Barea, Emerson Rogério Alves, 22 February 2016
No funding was received. / This work presents a novel architecture for the network address translation (NAT) service that scales efficiently by applying Network Functions Virtualization (NFV) in low-cost computing environments. A virtualized network function (VNF) implementing NAT is instantiated on the minimalist ClickOS operating system, and a feedback control loop smooths the rate control. The results indicate that the proposed architecture meets several relevant NFV requirements, including high efficiency, achieving up to 900% higher throughput than Linux NAT. In addition, the Proportional-Integral (PI) controller estimates the target rate with 85% accuracy using only 3 samples.
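The rate smoothing described above relies on a Proportional-Integral feedback loop. The following is a generic discrete PI controller sketch, with hypothetical gains, setpoint, and sample period; it is not the thesis implementation.

```cpp
// Generic discrete PI controller for rate control: an illustration of the
// control scheme named in the abstract, not the thesis code. The gains
// kp/ki, the setpoint, and the sample period are hypothetical.
#include <cstdio>

struct PiController {
    double kp, ki;
    double integral = 0.0;
    double step(double setpoint, double measured, double dt) {
        double error = setpoint - measured;
        integral += error * dt;             // accumulated error removes steady-state offset
        return kp * error + ki * integral;  // correction applied to the current rate
    }
};

int main() {
    PiController pi{0.4, 0.1};              // hypothetical gains
    double rate = 0.0;
    const double target = 1000.0;           // e.g. packets per second
    for (int k = 0; k < 12; ++k) {
        rate += pi.step(target, rate, 1.0); // settles at the target after a brief overshoot
        std::printf("sample %2d: rate = %7.1f\n", k, rate);
    }
    return 0;
}
```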
|
144 |
Unveiling the interplay between timeliness and scalability in cloud monitoring systems. Rodrigues, Guilherme da Cunha, January 2016
Cloud computing suits professionals, companies, research centres, and institutions that need on-demand access to computational resources. Clouds must manage their structure properly to deliver those resources with the quality of service established in Service Level Agreements (SLAs), and cloud monitoring is a critical management function for achieving this. Cloud monitoring requirements are properties that a monitoring system must meet to perform its functions properly; several have been defined, such as timeliness, elasticity, and scalability. These requirements, however, usually influence one another, positively or negatively, which has prevented the development of complete cloud monitoring solutions. This thesis therefore investigates the mutual influence between timeliness and scalability and proposes a mathematical model to estimate it, with the aim of enhancing cloud monitoring systems. The model is built from monitoring parameters such as the monitoring topology, the amount of monitoring data, and the sampling frequency, and it treats network bandwidth and response time as key metrics; it is evaluated by comparing its results with outcomes obtained via simulation. The contributions fall into two groups, basic and key. Basic contributions: (i) a discussion of the cloud monitoring structure that introduces the concept of monitoring focus; (ii) an examination of the concept of monitoring requirement, proposing a division into cloud monitoring requirements and cloud monitoring abilities; (iii) an analysis of challenges and trends in cloud monitoring, pointing out research gaps that include the mutual influence between requirements, which is core to the key contributions. Key contributions: (i) a discussion of timeliness and scalability covering the methods currently used to cope with their mutual influence and the relation between these requirements and monitoring parameters; (ii) the identification of the monitoring parameters essential to the relation between timeliness and scalability; (iii) a mathematical model, based on those parameters, that estimates the mutual influence between timeliness and scalability.
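As a hedged illustration of the kind of model described, a first-order formulation (an assumption for exposition, not the thesis's actual equations) might relate monitoring bandwidth and response time to the parameters listed above:

```latex
% Illustrative first-order formulation, not the thesis's actual model.
% n    -- number of monitored resources (grows with scale)
% d    -- monitoring data emitted per resource per sample (bytes)
% f    -- sampling frequency (samples per second)
% h    -- hops imposed by the monitoring topology
% C    -- link capacity; \ell -- per-hop latency
% B    -- bandwidth consumed by monitoring; t -- monitoring response time
\[
  B(n, d, f) = n \cdot d \cdot f
  \qquad
  t \approx h \left( \frac{d}{C} + \ell \right)
\]
% Raising f yields fresher data (better timeliness) but inflates B; adding
% aggregation hops h reduces B but delays delivery. This is precisely the
% timeliness/scalability trade-off the thesis quantifies.
```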
|
145 |
Tile-based methods for online choropleth mapping: a scalability evaluation. January 2013
Choropleth maps are a common form of online cartographic visualization. They reveal patterns in spatial distributions of a variable by associating colors with data values measured at areal units. Although this capability of pattern revelation has popularized the use of choropleth maps, existing methods for their online delivery are limited in supporting dynamic map generation from large areal data. This limitation has become increasingly problematic in online choropleth mapping as access to small-area statistics, such as high-resolution census data and real-time aggregates of geospatial data streams, has never been easier due to advances in geospatial web technologies. The current literature shows that the challenge of large areal data can be mitigated through tiled maps, where pre-processed map data are hierarchically partitioned into tiny rectangular images or map chunks for efficient data transmission. Various approaches have emerged lately to enable this tile-based choropleth mapping, yet little empirical evidence exists on their ability to handle spatial data with large numbers of areal units, complicating technical decision making in the development of online choropleth mapping applications. To fill this knowledge gap, this dissertation conducts a scalability evaluation of three tile-based methods discussed in the literature: raster, scalable vector graphics (SVG), and HTML5 Canvas. For the evaluation, the study develops two test applications, generates map tiles from five different boundaries of the United States, and measures the response times of the applications under multiple test operations. While specific to the experimental setups of the study, the evaluation results show that the raster method scales better across various types of user interaction than the other methods. Empirical evidence also points to the superior scalability of Canvas over SVG in dynamic rendering of vector tiles, but not necessarily for partial updates of the tiles. These findings indicate that the raster method is better suited for dynamic choropleth rendering from large areal data, while Canvas would be more suitable than SVG when such rendering frequently involves complete updates of vector shapes. / Dissertation/Thesis / Ph.D. Geography 2013
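For context on why tile pre-processing strains large areal datasets, the sketch below computes the growth of a standard XYZ tile pyramid (a common web-mapping convention, not code from the dissertation): the tile count quadruples with every zoom level, so pre-rendering choropleth tiles for many areal units gets expensive quickly.

```cpp
// Standard XYZ tile-pyramid arithmetic (web-mapping convention): at zoom
// level z the map is split into 4^z tiles. Illustrative, not dissertation code.
#include <cstdint>
#include <cstdio>

int main() {
    std::uint64_t total = 0;
    for (int z = 0; z <= 8; ++z) {
        std::uint64_t tiles = 1ULL << (2 * z);   // 4^z tiles at zoom z
        total += tiles;
        std::printf("zoom %d: %8llu tiles (cumulative %llu)\n", z,
                    static_cast<unsigned long long>(tiles),
                    static_cast<unsigned long long>(total));
    }
    return 0;
}
```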
|
146 |
Upper: uma ferramenta para escolha de servidor e estimação de gatilhos de escalabilidade de banco de dados relacionais na plataforma Amazon AWS [Upper: a tool for server selection and estimation of relational database scalability triggers on the Amazon AWS platform]. RODRIGUES JUNIOR, Paulo Lins, 09 December 2013
The scalability of an application is vital to the success of a business and is considered one of the most important attributes of modern applications.
Many applications today are data-centric, which makes the database a critical layer of the whole system. Among existing database types, relational databases stand out for providing the level of consistency that most of these applications require.
Projecting infrastructure and scalability triggers is complex even for experienced professionals, and errors in these tasks can cause significant business losses.
Cloud computing, in particular the infrastructure-as-a-service model, is attractive because it requires a low initial investment and scales on demand. To benefit from the platform, however, system administrators still face the difficult task of choosing the appropriate server and estimating the right moment to scale, meeting the application's performance needs while allocating resources efficiently.
This work proposes a simulation environment to help define the appropriate server and the scalability triggers for a database server on Amazon Web Services, a leading cloud computing platform. The main contribution of the tool, called Upper, is to ease the system administrator's work by making the estimation task faster and more accurate.
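As an illustration of what a scalability trigger can look like, the sketch below implements a simple sustained-threshold rule, with hypothetical threshold and window values; it is not Upper's actual estimation algorithm.

```cpp
// Illustrative trigger rule, not Upper's algorithm: scale up only when a
// monitored metric stays above a threshold for w consecutive samples,
// so a momentary spike does not trigger scaling. Values are hypothetical.
#include <cstdio>
#include <deque>

bool shouldScaleUp(std::deque<double>& window, double sample,
                   std::size_t w = 5, double threshold = 0.75) {
    window.push_back(sample);
    if (window.size() > w) window.pop_front();
    if (window.size() < w) return false;     // not enough history yet
    for (double s : window)
        if (s < threshold) return false;     // require sustained load
    return true;
}

int main() {
    std::deque<double> win;
    const double cpu[] = {0.60, 0.80, 0.82, 0.78, 0.90, 0.88, 0.79};
    for (double s : cpu)
        std::printf("cpu=%.2f -> %s\n", s,
                    shouldScaleUp(win, s) ? "SCALE UP" : "hold");
    return 0;
}
```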
|
147 |
On the Scalability of Four Multi-Agent Architectures for Load Control Management in Intelligent Networks. Ahmad, Raheel, January 2003
The rapid evolution of communication networks has been paralleled by the need for advanced network traffic management and surveillance. The increasing number and variety of services offered by communication networks has fuelled the demand for optimized load management strategies. The problem of load control management in Intelligent Networks has been studied previously, and four multi-agent architectures have been proposed. The objective of this thesis is to investigate one quality attribute of these architectures, namely scalability. The research resizes the network and studies the performance of the different architectures in terms of load control management across several scalability attributes. The analysis is based on experimentation through simulation. The results reveal that the architectures exhibit different performance behaviours for the various scalability attributes at different network sizes, and that trade-offs between scalability attributes emerge as the network grows. The factors affecting network performance at different network settings were identified. Based on these results, similar networks can be designed for optimal performance by controlling the influencing factors and considering the trade-offs involved.
|
148 |
An experimental comparison of five prioritization methods: Investigating ease of use, accuracy and scalability. Ahl, Viggo, January 2005
Requirements prioritization is an important part of developing the right product at the right time. Opinions differ on which method is best for prioritizing requirements. This thesis examines five methods and puts them into a controlled experiment in order to determine which is the most suitable. The experiment was designed to find out which method yields the most accurate results, how well each method scales up to many more requirements, how long prioritization takes with each method, and how easy each method is to use. Combined, these four criteria indicate which method is most suitable for prioritizing requirements. The chosen methods are the well-known analytic hierarchy process, the binary search tree algorithm, planning game (which originates in extreme programming), the old but widely used 100-points method, and a new method that combines planning game with the analytic hierarchy process. Analysis of the experimental data suggests that planning game combined with the analytic hierarchy process could be a good candidate. However, the results clearly indicate that the binary search tree yields accurate results, scales up well, and was the easiest method to use. For these three reasons, the binary search tree is the better method for prioritizing requirements.
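The binary search tree method works by pairwise comparisons: each new requirement is placed in the tree by repeatedly asking whether it outranks the node at hand, costing O(log n) comparisons per requirement on average, and an in-order traversal yields the final priority order. Below is a minimal sketch; the comparison function is a stand-in for asking a stakeholder, and the requirement names are hypothetical.

```cpp
// Sketch of binary-search-tree prioritization. 'prefers(a, b)' would ask a
// stakeholder whether requirement a outranks b; here a lexicographic
// comparison stands in for that judgment.
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

struct Node {
    std::string req;
    std::unique_ptr<Node> lo, hi;   // lo = less important, hi = more important
};

void insert(std::unique_ptr<Node>& root, const std::string& req,
            bool (*prefers)(const std::string&, const std::string&)) {
    if (!root) { root = std::make_unique<Node>(); root->req = req; return; }
    insert(prefers(req, root->req) ? root->hi : root->lo, req, prefers);
}

void inorder(const Node* n, std::vector<std::string>& out) {
    if (!n) return;
    inorder(n->lo.get(), out);
    out.push_back(n->req);          // visit between subtrees: sorted order
    inorder(n->hi.get(), out);
}

int main() {
    auto prefers = [](const std::string& a, const std::string& b) { return a > b; };
    std::unique_ptr<Node> root;
    for (const char* r : {"R3", "R1", "R4", "R2"})
        insert(root, r, prefers);
    std::vector<std::string> order;
    inorder(root.get(), order);
    for (const auto& r : order)
        std::printf("%s ", r.c_str());  // printed most -> least important
    std::printf("\n");
    return 0;
}
```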
|
149 |
Systematic Overview of Savings versus Quality for H.264/SVC. Varisetty, Tilak; Edara, Praveen, January 2012
The demand for efficient video coding techniques has increased in the recent past, resulting in the evolution of various video compression techniques. SVC (Scalable Video Coding) is a recent amendment to H.264/AVC (Advanced Video Coding) that adds a new dimension: a video stream can be encoded as a combination of sub-streams that are scalable in spatial resolution, temporal resolution, and quality. Scalability is effective in a network scenario because the client can decode the sub-stream that suits the bandwidth available in the network. A graceful degradation in video quality is expected when a spatial, temporal, or quality layer is removed, but the amount of degradation still has to be measured in terms of Quality of Experience (QoE) from the user's perspective. To measure it, video streams consisting of different spatial and temporal layers were extracted, removing one layer at a time, starting from the highest dependency (enhancement) layer and ending with the lowest dependency (base) layer. Extracting a temporally downsampled layer posed challenges with frame interpolation, which were overcome with temporal interpolation; similarly, a spatially downsampled layer was upsampled in the spatial domain for comparison with the original stream. An objective video quality assessment was then made by comparing each extracted sub-stream, containing fewer spatially and temporally downsampled layers, against the original stream containing all layers. Mean Opinion Scores (MOS) were obtained with the objective tool Perceptual Evaluation of Video Quality (PEVQ); the experiment was carried out for each layer and for different test videos, and subjective tests were also performed to evaluate the user experience. The results provide recommendations to an SVC-capable router about the video quality available at each layer, so that a network transcoder can transmit a specific layer depending on network conditions and the capabilities of the decoding device.
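The layer-selection decision that such recommendations enable can be sketched as follows (the rule and the bitrates are illustrative assumptions, not the thesis experiment): forward the highest layer combination whose cumulative bitrate fits the available bandwidth.

```cpp
// Illustrative SVC layer selection, not thesis code: layers are ordered
// from base to highest enhancement by cumulative bitrate (hypothetical
// values); pick the highest layer that fits the measured bandwidth.
#include <cstdio>
#include <string>
#include <vector>

struct Layer { std::string name; double kbps; };  // cumulative bitrate

std::size_t pickLayer(const std::vector<Layer>& layers, double bandwidthKbps) {
    std::size_t best = 0;                          // base layer is always kept
    for (std::size_t i = 0; i < layers.size(); ++i)
        if (layers[i].kbps <= bandwidthKbps) best = i;
    return best;
}

int main() {
    const std::vector<Layer> layers = {
        {"base (QCIF, 15fps)", 200}, {"+temporal (30fps)", 350},
        {"+spatial (CIF)", 700}, {"+quality (high SNR)", 1200}};
    for (double bw : {250.0, 800.0, 1500.0})
        std::printf("%6.0f kbps -> %s\n", bw,
                    layers[pickLayer(layers, bw)].name.c_str());
    return 0;
}
```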
|
150 |
Compiling an Interpreted Processing Language: Improving Performance in a Large Telecommunication System. Mejstad, Valdemar; Tångby, Karl-Johan, January 2001
In this report we evaluate different techniques for increasing the performance of an interpreted processing language in a telecommunication system called Billing Gateway R8. We implemented a prototype in which we first translate the language into C++ code and then compile it with a C++ compiler. Running on a symmetric multiprocessor with four CPUs under full load, the prototype achieved a threefold increase in processing throughput compared to the original system. The prototype also scaled better than Billing Gateway R8, owing to less use of dynamic memory management.
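A toy example of the translate-then-compile idea (not actual Billing Gateway code): once a processing rule is emitted as C++, applying it costs a direct function call per record instead of interpreter dispatch. The rule and record fields below are hypothetical.

```cpp
// Hypothetical illustration of the compile-the-rules idea: instead of
// walking an interpreter's AST for every record, the rule
// "charge = duration * rate" has been emitted as C++ and compiled.
#include <cstdio>

struct Record { double duration; double rate; };

// What generated code for the rule might look like; a real system would
// emit this from its processing-language source.
inline double charge(const Record& r) { return r.duration * r.rate; }

int main() {
    const Record batch[] = {{60.0, 0.02}, {125.0, 0.015}, {30.0, 0.05}};
    double total = 0.0;
    for (const Record& r : batch)
        total += charge(r);          // no per-record interpreter dispatch
    std::printf("total charge: %.3f\n", total);
    return 0;
}
```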
|