101 |
A scalable microservice-based open source platform for smart cities / Uma plataforma escalável de código aberto baseada em microsserviços para cidades inteligentes
Esposte, Arthur de Moura Del, 18 June 2018 (has links)
Smart City technologies emerge as a potential solution to tackle common problems in large urban centers by using city resources efficiently and providing quality services for citizens. Despite the various advances in middleware technologies to support future smart cities, there are yet no widely accepted platforms. Most of the existing solutions do not provide the required flexibility to be shared across cities. Moreover, the extensive use and development of non-open-source software leads to interoperability issues and limits the collaboration among R&D groups. Our research explores the use of a microservices architecture to address key practical challenges in smart city platforms. More specifically, we are concerned with the impact of microservices on addressing the key non-functional requirements to enable the development of smart cities, such as supporting different scalability demands and providing a flexible architecture which can easily evolve over time. To this end, we are developing InterSCity, a microservice-based open source smart city platform that aims at supporting the development of sophisticated, cross-domain applications and services. Our early experience shows that microservices can be properly used as building blocks to achieve a loosely coupled, flexible architecture. Experimental results point towards the applicability of our approach in the context of smart cities since the platform can support multiple scalability demands. We expect to enable collaborative, novel smart city research, development, and deployment initiatives through the InterSCity platform. The full validation of the platform will be conducted using different smart city scenarios and workloads. Future work comprises the ongoing design and development effort on data processing services as well as a more comprehensive evaluation of the proposed platform through scalability experiments. / As tecnologias de Cidades Inteligentes surgem como uma potencial solução para lidar com problemas comuns em grandes centros urbanos, utilizando os recursos da cidade de maneira eficiente e fornecendo serviços de qualidade para os cidadãos. Apesar dos vários avanços nas tecnologias de middleware para suporte às cidades inteligentes do futuro, ainda não existem plataformas amplamente aceitas. A maioria das soluções existentes não oferece a flexibilidade necessária para ser compartilhada entre as cidades. Além disso, o vasto uso e desenvolvimento de software proprietário levam a problemas de interoperabilidade e limitam a colaboração entre grupos de P&D. Nesta dissertação, exploramos o uso de uma arquitetura de microsserviços para abordar os principais desafios práticos em plataformas de cidades inteligentes. Mais especificamente, estamos preocupados com o impacto dos microsserviços sobre requisitos não-funcionais para permitir o desenvolvimento de cidades inteligentes, tais como o suporte a diferentes demandas de escalabilidade e o fornecimento de uma arquitetura flexível que pode evoluir facilmente. Para esse fim, criamos a InterSCity, uma plataforma para cidades inteligentes de código aberto baseada em microsserviços que visa apoiar o desenvolvimento de aplicativos e serviços sofisticados em múltiplos domínios. Nossa experiência inicial mostra que os microsserviços podem ser usados adequadamente como blocos de construção para obter uma arquitetura flexível e fracamente acoplada.
Resultados experimentais apontam para a aplicabilidade de nossa abordagem no contexto de cidades inteligentes, já que a plataforma pode suportar diferentes demandas de escalabilidade. Esperamos permitir pesquisas colaborativas e inovadoras em cidades inteligentes, assim como o desenvolvimento e iniciativas de implantações reais através da plataforma InterSCity. A validação completa da plataforma será realizada usando diferentes cenários de cidades inteligentes e cargas de trabalho. Os trabalhos futuros compreendem o esforço contínuo de projetar e desenvolver novos serviços de processamento de dados, bem como a realização de avaliações mais abrangentes da plataforma proposta por meio de experimentos de escalabilidade.
|
102 |
[en] WATER AND OIL FLOW SIMULATION IN POROUS MEDIA / [pt] SIMULAÇÃO DO ESCOAMENTO DE ÁGUA E ÓLEO EM MEIOS POROSOS
MARCOS AURELIO CITELI DA SILVA, 14 April 2004 (has links)
[pt] Muitos problemas provenientes do mundo real podem ser modelados por sistemas de equações diferenciais parciais (EDPs). No entanto, as equações resultantes da discretização produzem matrizes grandes e freqüentemente mal condicionadas. Este trabalho implementa o método de elementos finitos mistos para resolver numericamente um sistema de EDPs oriundo de um modelo de escoamento de fluidos em meios porosos e melhora sua performance usando precondicionadores e processamento paralelo. / [en] Many problems arising from the real world can be represented by systems of partial differential equations (PDEs). However, the resulting discrete equations produce large and frequently ill-conditioned matrices. This work implements the mixed finite element method to numerically solve a system of PDEs arising from a model of multiphase flow in porous media and improves its performance using preconditioners and parallel processing.
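The English abstract above names the whole numerical pipeline: discretizing a PDE system yields large, frequently ill-conditioned sparse matrices, whose iterative solution is then accelerated with preconditioners and parallel processing. The sketch below illustrates only the preconditioning step, assuming a Python/SciPy environment and using a small 2D Laplacian as a stand-in for the discretized pressure equation; it is not the thesis's mixed finite element or parallel implementation.

```python
# Hedged illustration, not the thesis's code: a small 2D finite-difference
# Laplacian stands in for the discretized pressure system; we compare plain
# conjugate gradient with ILU-preconditioned CG to show why preconditioning
# matters for large, ill-conditioned sparse matrices.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64                                            # grid points per direction (assumption)
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()       # 2D Laplacian, symmetric positive definite
b = np.ones(A.shape[0])

def cg_iterations(A, b, M=None):
    count = [0]
    def callback(xk):
        count[0] += 1
    x, info = spla.cg(A, b, M=M, callback=callback)
    assert info == 0, "CG did not converge"
    return count[0]

plain = cg_iterations(A, b)

ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)        # incomplete LU factorization
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
preconditioned = cg_iterations(A, b, M=M)

print(f"CG iterations: plain={plain}, ILU-preconditioned={preconditioned}")
```

The drop in iteration count is the practical payoff the abstract alludes to: a cheaper preconditioned solve per time step of the flow simulation.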
|
103 |
On the access pricing and network scaling issues of wireless mesh networks. / On the access pricing & network scaling issues of wireless mesh networks
January 2006 (has links)
Lam Kong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 84-85). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Related Work and Background --- p.7 / Chapter 2.1 --- Competition-free Unlimited Capacity Model - One-hop Case --- p.9 / Chapter 2.2 --- Competition-free Unlimited Capacity Model - Two-hop Case --- p.11 / Chapter 3 --- Extensions to Competition-free Unlimited Capacity Model --- p.13 / Chapter 3.1 --- Optimal Pricing for the One-hop Case under Various Utility Distributions --- p.13 / Chapter 3.2 --- Optimal Pricing for Competition-free Multi-hop Wireless Mesh Networks --- p.16 / Chapter 3.3 --- The Issue on Network Scaling --- p.22 / Chapter 4 --- Competition-free Limited Capacity Model --- p.28 / Chapter 4.1 --- One-hop Case --- p.28 / Chapter 4.2 --- Multi-hop Case --- p.36 / Chapter 5 --- Unlimited Capacity Model with Price Competition --- p.42 / Chapter 5.1 --- Renewed Game Model for Networks with Price Competition --- p.43 / Chapter 5.2 --- Pricing Equilibriums in Different Network Topologies --- p.46 / Chapter 5.2.1 --- Case A: Two Access Points Competing in a One-hop Network --- p.47 / Chapter 5.2.2 --- Case B: Two Access Points Competing in a Two-hop Network --- p.51 / Chapter 5.2.3 --- Case C: Two Resellers Competing in a Two-hop Network --- p.54 / Chapter 5.2.4 --- Case D: Extending Case A into a Multi-hop Network --- p.60 / Chapter 5.2.5 --- Case E: Extending Case C into a Multi-hop Network. --- p.66 / Chapter 5.2.6 --- The Unified Pricing Equilibrium --- p.68 / Chapter 5.2.7 --- Case F: The Characterizing Multi-hop Network --- p.75 / Chapter 5.3 --- Revisiting the Network Scaling Issue --- p.80 / Chapter 6 --- Conclusion --- p.82 / Bibliography --- p.84 / Chapter A --- Proof of the PBE for Competition-free Multi-hop Wireless Mesh Networks --- p.86 / Chapter B --- Proof of the Unified Pricing Equilibrium --- p.92
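The record above is essentially a table of contents, but its recurring theme is choosing an access price under an assumed distribution of user utilities. The sketch below is a generic, hedged illustration of that kind of calculation, not taken from the thesis: with unit demand, a user buys only if her utility is at least the price, so expected revenue per user is p(1 - F(p)) and the one-hop price is chosen to maximize it.

```python
# Hedged illustration, not the thesis's model: revenue-maximizing one-hop access
# price when each user buys at most one unit and buys iff utility >= price.
# Utilities follow a distribution F; expected revenue per user is p * (1 - F(p)).
import numpy as np

def optimal_price(price_grid, F):
    """Grid-search the price p maximizing p * (1 - F(p))."""
    revenue = price_grid * (1.0 - F(price_grid))
    best = np.argmax(revenue)
    return price_grid[best], revenue[best]

prices = np.linspace(0.0, 1.0, 10001)

# Uniform(0, 1) utilities: F(p) = p, so the optimum is p = 1/2 with revenue 1/4.
p_star, rev = optimal_price(prices, lambda p: p)
print(f"uniform utilities: p* = {p_star:.3f}, revenue = {rev:.3f}")

# A different utility distribution shifts the optimum, e.g. F(p) = p**2 on [0, 1].
p_star, rev = optimal_price(prices, lambda p: p ** 2)
print(f"F(p) = p^2:        p* = {p_star:.3f}, revenue = {rev:.3f}")
```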
|
104 |
Energy efficient channel access mechanism for IEEE 802.11ah based networks
Wang, Yanru, January 2018
IEEE 802.11ah is designed to support battery-powered devices that are required to operate for several years in Internet of Things networks. The Restricted Access Window (RAW) has been introduced in IEEE 802.11ah to address the scalability of thousands of densely deployed devices. Since the RAW sizes determine the energy consumed to support the transmitting devices in the network, the RAW control mechanism should be carefully devised to improve the overall energy efficiency of IEEE 802.11ah. This thesis presents a two-stage adaptive RAW scheme for IEEE 802.11ah to optimise the energy efficiency of massive channel access and transmission in uplink communications for highly dense networks. The proposed scheme adaptively controls the RAW sizes and device transmission access by taking into account the number of devices per RAW, the retransmission mechanism, harvested energy, and prioritised access. The scheme has four novel control blocks: (i) RAW size control, which adaptively adjusts the RAW sizes according to the number of devices and application types in the network; (ii) RAW retransmission control, which improves channel utilisation by retransmitting collided packets in the subsequent slot of the same RAW; (iii) harvested-energy powered access control, which adjusts the RAW sizes considering the uncertain amount of harvested energy in each device and the channel conditions; and (iv) priority-aware channel access control, which reduces collisions of high-priority packets in time-critical networks. The performance of the proposed controls is evaluated in Matlab under different network scenarios. Simulation results show that the proposed controls improve network performance in terms of energy efficiency, packet delivery ratio, and delay compared to the existing window control.
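As a rough illustration of the RAW size control idea only (the thesis's actual algorithms, retransmission logic, and energy-harvesting control are not reproduced here), the sketch below assumes a deliberately simplified collision model: devices are spread uniformly over RAW slots and a transmission succeeds only if no other device shares its slot. The target success probability and device counts are assumptions.

```python
# Hedged sketch, not the thesis's scheme: choose a RAW size (number of slots)
# for N devices under a toy model where each device is assigned one slot
# uniformly at random and succeeds only if no other device shares that slot.
def success_probability(n_devices: int, n_slots: int) -> float:
    # P(no other device lands in my slot) under uniform random assignment.
    return (1.0 - 1.0 / n_slots) ** (n_devices - 1)

def choose_raw_slots(n_devices: int, target: float = 0.9, max_slots: int = 65536) -> int:
    """Smallest slot count whose per-device success probability meets the target."""
    for n_slots in range(1, max_slots + 1):
        if success_probability(n_devices, n_slots) >= target:
            return n_slots
    return max_slots

for n in (50, 500, 5000):                     # device counts are illustrative
    s = choose_raw_slots(n)
    print(f"{n:>5} devices -> {s:>6} slots (success ~ {success_probability(n, s):.2f})")
```

Even this toy model shows the trade-off the abstract points at: larger RAWs reduce collisions (and wasted retransmission energy) but keep the radio contention window open longer, which is why the RAW size must adapt to the device count.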
|
105 |
Unveiling the interplay between timeliness and scalability in cloud monitoring systems / Desvelando a relação mútua entre escalabilidade e oportunidade em sistemas de monitoramento de nuvens computacionais
Rodrigues, Guilherme da Cunha, January 2016 (has links)
Computação em nuvem é uma solução adequada para profissionais, empresas, centros de pesquisa e instituições que necessitam de acesso a recursos computacionais sob demanda. Atualmente, nuvens computacionais confiam no gerenciamento de sua estrutura para fornecer recursos computacionais com qualidade de serviço adequada às expectativas de seus clientes; tal qualidade de serviço é estabelecida através de acordos de nível de serviço. Nesse contexto, o monitoramento é uma função crítica de gerenciamento para se prover tal qualidade de serviço. Requisitos de monitoramento em nuvens computacionais são propriedades que um sistema de monitoramento de nuvem precisa reunir para executar suas funções de modo adequado e atualmente existem diversos requisitos definidos pela literatura, tais como: oportunidade, elasticidade e escalabilidade. Entretanto, tais requisitos geralmente possuem influência mútua entre eles, que pode ser positiva ou negativa, e isso impossibilita o desenvolvimento de soluções de monitoramento completas. Dado o cenário descrito acima, essa tese tem como objetivo investigar a influência mútua entre escalabilidade e oportunidade. Especificamente, essa tese propõe um modelo matemático para estimar a influência mútua entre tais requisitos de monitoramento. A metodologia utilizada por essa tese para construir tal modelo matemático baseia-se em parâmetros de monitoramento tais como: topologia de monitoramento, quantidade de dados de monitoramento e frequência de amostragem. Além destes, a largura de banda de rede e o tempo de resposta também são importantes métricas do modelo matemático. A avaliação dos resultados obtidos foi realizada através da comparação entre os resultados do modelo matemático e de uma simulação. As maiores contribuições dessa tese são divididas em dois eixos, denominados Básico e Chave. As contribuições do eixo básico são: (i) a discussão a respeito da estrutura de monitoramento de nuvem e a introdução do conceito de foco de monitoramento; (ii) o exame do conceito de requisito de monitoramento e a proposição do conceito de habilidade de monitoramento; (iii) a análise dos desafios e tendências a respeito de monitoramento de nuvens computacionais. As contribuições do eixo chave são: (i) a discussão a respeito de oportunidade e escalabilidade, incluindo métodos para lidar com a mútua influência entre tais requisitos e a relação desses requisitos com parâmetros de monitoramento; (ii) a identificação dos parâmetros de monitoramento que são essenciais na relação entre oportunidade e escalabilidade; (iii) a proposição de um modelo matemático baseado em parâmetros de monitoramento que visa estimar a relação mútua entre oportunidade e escalabilidade. / Cloud computing is a suitable solution for professionals, companies, research centres, and institutions that need to have access to computational resources on demand. Nowadays, clouds have to rely on proper management of its structure to provide such computational resources with adequate quality of service, which is established by Service Level Agreements (SLAs), to customers. In this context, cloud monitoring is a critical management function to achieve it. Cloud monitoring requirements are properties that a cloud monitoring system need to meet to perform its functions properly, and currently there are several of them such as timeliness, elasticity and scalability.
However, such requirements usually have mutual influence, either positive or negative, among themselves, and this has prevented the development of complete cloud monitoring solutions. Given the above, this thesis investigates the mutual influence between timeliness and scalability and proposes a mathematical model to estimate it, in order to enhance cloud monitoring systems. The methodology is based on monitoring parameters such as the monitoring topology, the amount of monitoring data, and the sampling frequency; in addition, it considers network bandwidth and response time as important metrics. Finally, the evaluation is based on a comparison between the mathematical model results and outcomes obtained via simulation. The main contributions of this thesis are divided into two axes, namely, basic and key. The basic contributions are: (i) it discusses the cloud monitoring structure and introduces the concept of cloud monitoring focus; (ii) it examines the concept of cloud monitoring requirement and proposes to divide such properties into two groups, defined as cloud monitoring requirements and cloud monitoring abilities; (iii) it analyses challenges and trends in cloud monitoring, pointing out research gaps that include the mutual influence between cloud monitoring requirements, which is the core of the key contributions. The key contributions are: (i) it presents a discussion of timeliness and scalability that includes the methods currently used to cope with the mutual influence between them and the relation between such requirements and monitoring parameters; (ii) it identifies the monitoring parameters that are essential in the relation between timeliness and scalability; (iii) it proposes a mathematical model based on monitoring parameters to estimate the mutual influence between timeliness and scalability.
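The following sketch illustrates the kind of parameter-based estimate the abstract describes, for a centralized monitoring topology: monitoring traffic grows with the number of monitored resources, the data per sample, and the sampling frequency, and once that traffic approaches the available bandwidth the delivery delay (timeliness) degrades. The queueing-style delay term and all numbers are illustrative assumptions, not the thesis's actual model.

```python
# Hedged sketch, not the thesis's model: trade-off between scalability (number
# of monitored resources) and timeliness (per-sample delivery delay) for a
# centralized monitoring topology. Parameter values and the M/M/1-style delay
# term are illustrative assumptions.
def monitoring_delay(resources: int, sample_bytes: float,
                     samples_per_sec: float, bandwidth_bps: float) -> float:
    """Approximate per-sample delivery delay in seconds (inf if the link saturates)."""
    offered_bps = resources * sample_bytes * 8 * samples_per_sec
    if offered_bps >= bandwidth_bps:
        return float("inf")                       # saturated link: timeliness is lost
    service_time = sample_bytes * 8 / bandwidth_bps
    utilization = offered_bps / bandwidth_bps
    return service_time / (1.0 - utilization)     # queueing-style delay growth

for n in (1_000, 10_000, 20_000, 24_000, 25_000):
    d = monitoring_delay(n, sample_bytes=512, samples_per_sec=1.0, bandwidth_bps=100e6)
    print(f"{n:>6} resources -> per-sample delay ~ {d * 1e3:.3f} ms")
```

The sharp knee near saturation is precisely the interplay the thesis studies: scaling the number of monitored resources (or the sampling frequency) eventually destroys timeliness unless the topology or the amount of data per sample changes.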
|
106 |
Scalable state machine replication / Replicação escalável de máquina de estados
Bezerra, Carlos Eduardo Benevides, January 2016 (has links)
Redundância provê tolerância a falhas. Um serviço pode ser executado em múltiplos servidores que se replicam uns aos outros, de maneira a prover disponibilidade do serviço em caso de falhas. Uma maneira de implementar tal serviço replicado é através de técnicas como replicação de máquina de estados (SMR). SMR provê tolerância a falhas, ao mesmo tempo que é linearizável, isto é, clientes não são capazes de distinguir o comportamento do sistema replicado daquele de um sistema não replicado. No entanto, ter um sistema completamente replicado e linearizável vem com um custo, que é escalabilidade – por escalabilidade, queremos dizer que adicionar servidores ao sistema aumenta a sua vazão, pelo menos para algumas cargas de trabalho. Mesmo com uma configuração cuidadosa e usando otimizações que evitam que os servidores executem ações redundantes desnecessárias, em um determinado ponto a vazão de um sistema replicado com SMR não pode ser mais aumentada acrescentando-se servidores; na verdade, adicionar réplicas pode até degradar a sua performance. Uma maneira de conseguir escalabilidade é particionar o serviço e então permitir que partições trabalhem independentemente. Por outro lado, ter um sistema particionado, porém linearizável e com razoavelmente boa performance não é trivial, e esse é o tópico de pesquisa tratado aqui. Para permitir que sistemas escalem, ao mesmo tempo que se garante linearizabilidade, nós propomos as seguintes ideias: (i) Replicação Escalável de Máquina de Estados (S-SMR), (ii) Multicast Atômico Otimista (Opt-amcast) e (iii) S-SMR Rápido (Fast-SSMR). S-SMR é um modelo de execução que permite que a vazão do sistema escale de maneira linear com o número de servidores, sem sacrificar consistência. Para reduzir o tempo de resposta dos comandos, nós definimos o conceito de Opt-amcast, que permite que mensagens sejam entregues duas vezes: uma entrega garante ordem atômica (entrega atômica), enquanto a outra é mais rápida, mas nem sempre garante ordem atômica (entrega otimista). A implementação de Opt-amcast que nós propomos nessa tese se chama Ridge, um protocolo que combina baixa latência com alta vazão. Fast-SSMR é uma extensão do S-SMR que utiliza a entrega otimista do Opt-amcast: enquanto um comando é ordenado de maneira atômica, pode-se fazer alguma pré-computação baseada na entrega otimista, reduzindo assim o tempo de resposta. / Redundancy provides fault tolerance. A service can run on multiple servers that replicate each other, in order to provide service availability even in the case of crashes. A way to implement such a replicated service is by using techniques like state machine replication (SMR). SMR provides fault tolerance, while being linearizable, that is, clients cannot distinguish the behaviour of the replicated system from that of a single-site, unreplicated one. However, having a fully replicated, linearizable system comes at a cost, namely, scalability—by scalability we mean that adding servers will always increase the maximum system throughput, at least for some workloads. Even with a careful setup and using optimizations that avoid unnecessary redundant actions to be taken by servers, at some point the throughput of a system replicated with SMR cannot be increased by additional servers; in fact, adding replicas may even degrade performance. A way to achieve scalability is by partitioning the service state and then allowing partitions to work independently.
On the other hand, having a partitioned, yet linearizable and reasonably performant service is not trivial, and this is the topic of research addressed here. To allow systems to scale while ensuring linearizability, we propose and implement the following ideas: (i) Scalable State Machine Replication (S-SMR), (ii) Optimistic Atomic Multicast (Opt-amcast), and (iii) Fast S-SMR (Fast-SSMR). S-SMR is an execution model that allows the throughput of the system to scale linearly with the number of servers without sacrificing consistency. To provide faster responses for commands, we developed Opt-amcast, which allows messages to be delivered twice: one delivery guarantees atomic order (conservative delivery), while the other is fast but does not always guarantee atomic order (optimistic delivery). The implementation of Opt-amcast that we propose is called Ridge, a protocol that combines low latency with high throughput. Fast-SSMR is an extension of S-SMR that uses the optimistic delivery of Opt-amcast: while a command is being atomically ordered, some precomputation can be done based on its fast, optimistically ordered delivery, improving response time.
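The execution pattern described above can be sketched as follows: a replica speculatively executes a command when it arrives via the optimistic delivery and exposes the result only after the conservative (atomic-order) delivery confirms that the speculation used the right command. This is an illustrative Python sketch with invented names, not the S-SMR or Ridge implementation (which also involves partitioning, multicast groups, and cross-partition coordination).

```python
# Hedged sketch, not the S-SMR/Ridge code: precompute on optimistic delivery,
# commit only when the conservative (atomic-order) delivery confirms the order.
class SpeculativeReplica:
    def __init__(self, apply_fn):
        self.apply_fn = apply_fn            # deterministic state transition
        self.state = {}                     # committed state
        self.speculative = None             # (command, state copy with command applied)

    def on_optimistic_delivery(self, command):
        # Precompute on a copy so a wrong speculation never corrupts committed state.
        snapshot = dict(self.state)
        self.apply_fn(snapshot, command)
        self.speculative = (command, snapshot)

    def on_conservative_delivery(self, command):
        if self.speculative and self.speculative[0] == command:
            self.state = self.speculative[1]        # speculation matched: commit it
        else:
            self.apply_fn(self.state, command)      # mismatch: fall back and re-execute
        self.speculative = None
        return self.state

def apply_kv(state, command):
    op, key, value = command                        # e.g. ("put", "x", 1)
    if op == "put":
        state[key] = value

replica = SpeculativeReplica(apply_kv)
replica.on_optimistic_delivery(("put", "x", 1))     # fast, tentative order
print(replica.on_conservative_delivery(("put", "x", 1)))   # atomic order confirms: {'x': 1}
```

When the optimistic and conservative orders usually agree, the response-time gain is roughly the gap between the two deliveries, which is the motivation behind Fast-SSMR.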
|
107 |
Ranking for Scalable Information Extraction
Barrio Gonzalez, Pablo Javier, January 2015
Information extraction systems are complex software tools that discover structured information in natural language text. For instance, an information extraction system trained to extract tuples for an Occurs-in(Natural Disaster, Location) relation may extract the tuple <tsunami, Hawaii> from the sentence: "A tsunami swept the coast of Hawaii." Having information in structured form enables more sophisticated querying and data mining than what is possible over the natural language text. Unfortunately, information extraction is a time-consuming task. For example, a state-of-the-art information extraction system to extract Occurs-in tuples may take up to two hours to process only 1,000 text documents. Since document collections routinely contain millions of documents or more, improving the efficiency and scalability of the information extraction process over these collections is critical. As a significant step towards this goal, this dissertation presents approaches for (i) enabling the deployment of efficient information extraction systems and (ii) scaling the information extraction process to large volumes of text.
To enable the deployment of efficient information extraction systems, we have developed two crucial building blocks for this task. As a first contribution, we have created REEL, a toolkit to easily implement, evaluate, and deploy full-fledged relation extraction systems. REEL, in contrast to existing toolkits, effectively modularizes the key components involved in relation extraction systems and can integrate other long-established text processing and machine learning toolkits. To define a relation extraction system for a new relation and text collection, users only need to specify the desired configuration, which makes REEL a powerful framework for both research and application building. As a second contribution, we have addressed the problem of building representative extraction task-specific document samples from collections, a step often required by approaches for efficient information extraction. Specifically, we devised fully automatic document sampling techniques for information extraction that can produce better-quality document samples than the state-of-the-art sampling strategies; furthermore, our techniques are substantially more efficient than the existing alternative approaches.
To scale the information extraction process to large volumes of text, we have developed approaches that address the efficiency and scalability of the extraction process by focusing the extraction effort on the collections, documents, and sentences worth processing for a given extraction task. For collections, we have studied both (adaptations of) state-of-the-art approaches for estimating the number of documents in a collection that lead to the extraction of tuples and information extraction-specific approaches. Using these estimations we can identify the collections worth processing and ignore the rest, for efficiency. For documents, we have developed an adaptive document ranking approach that relies on learning-to-rank techniques to prioritize the documents that are likely to produce tuples for an extraction task of choice. Our approach revises the (learned) ranking decisions periodically as the extraction process progresses and new characteristics of the useful documents are revealed. Finally, for sentences, we have developed an approach based on the sparse group selection problem that identifies sentences, modeled as groups of words, that best characterize the extraction task. Beyond identifying sentences worth processing, our approach aims at selecting sentences that lead to the extraction of unseen, novel tuples. Our approaches are lightweight and efficient, and dramatically improve the efficiency and scalability of the information extraction process. We can often complete the extraction task by focusing on just a very small fraction of the available text, namely, the text that contains relevant information for the extraction task at hand. Our approaches therefore constitute a substantial step towards efficient and scalable information extraction over large volumes of text.
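The adaptive document ranking idea can be sketched in a few lines: score unprocessed documents with a lightweight model, process them in score order, and periodically refit the model on feedback about which processed documents actually produced tuples. The snippet below assumes scikit-learn, a keyword-based stand-in for the (expensive) extraction system, and a toy corpus; it is not the dissertation's REEL components or learning-to-rank features.

```python
# Hedged sketch, not the dissertation's system: adaptive document ranking for
# information extraction. A simple bag-of-words model scores the unprocessed
# documents, the top one is processed next, and the model is refit on which
# processed documents actually yielded tuples. toy_extractor() stands in for a
# real, expensive extraction system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "A tsunami swept the coast of Hawaii.",
    "The committee met to discuss the annual budget.",
    "The earthquake struck near Tokyo at dawn.",
    "The museum opened a new exhibit on modern art.",
    "Floods hit Jakarta after days of heavy rain.",
    "A hurricane made landfall in Florida last night.",
]

def toy_extractor(text):                  # stand-in for an Occurs-in extractor
    return any(w in text.lower() for w in ("tsunami", "earthquake", "floods", "hurricane"))

X = CountVectorizer().fit_transform(documents)
remaining = list(range(len(documents)))
processed, labels, found = [], [], 0

while remaining:
    doc_id = remaining.pop(0)                         # process the current top document
    useful = toy_extractor(documents[doc_id])
    found += int(useful)
    processed.append(doc_id)
    labels.append(int(useful))
    if remaining and 0 < sum(labels) < len(labels):   # refit once both classes are seen
        model = LogisticRegression().fit(X[processed], labels)
        scores = dict(zip(remaining, model.predict_proba(X[remaining])[:, 1]))
        remaining.sort(key=lambda i: -scores[i])      # re-rank the unprocessed documents

print(f"useful documents found: {found} of {len(processed)} processed")
```

The periodic re-ranking is the key design choice: as the set of processed documents grows, the model's notion of a "useful" document sharpens, so most of the tuples are extracted long before the whole collection is processed.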
|
108 |
On SIP Server Clusters and the Migration to Cloud Computing Platforms
Kim, Jong Yul, January 2016
This thesis looks in depth at telephony server clusters, the modern switchboards at the core of a packet-based telephony service. The most widely used de facto standard protocols for telecommunications are the Session Initiation Protocol (SIP) and the Real-time Transport Protocol (RTP). SIP is a signaling protocol used to establish, maintain, and tear down a communication channel between two or more parties. RTP is a media delivery protocol whose packets carry digitized voice, video, or text.
SIP telephony server clusters that provide communications services, such as an emergency calling service, must be scalable and highly available. We evaluate existing commercial and open source telephony server clusters to see how they differ in scalability and high availability.
We also investigate how a scalable SIP server cluster can be built on a cloud computing platform. Elasticity of resources is an attractive property for SIP server clusters because it allows the cluster to grow or shrink organically based on traffic load. However, simply deploying existing clusters to cloud computing platforms is not good enough to take full advantage of elasticity. We explore the design and implementation of clusters that scale in real time. The database tier of our cluster was modified to use a scalable key-value store so that both the SIP proxy tier and the database tier can scale separately. Load monitoring and reactive threshold-based scaling logic are presented and evaluated.
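A hedged sketch of what reactive threshold-based scaling for the proxy tier typically looks like follows; the thresholds, cooldown, and load metric (calls per second per proxy) are illustrative assumptions, not the values or controller used in the thesis.

```python
# Hedged sketch, not the thesis's controller: reactive threshold-based scaling
# for a SIP proxy tier. Thresholds, cooldown, and the load metric are assumptions.
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    scale_out_cps: float = 800.0     # add a proxy above this per-proxy load
    scale_in_cps: float = 250.0      # remove a proxy below this per-proxy load
    min_proxies: int = 2
    max_proxies: int = 20
    cooldown_steps: int = 3          # monitoring intervals to wait between actions

def plan_capacity(policy, current_proxies, calls_per_second, steps_since_change):
    """Return the proxy count for the next monitoring interval."""
    per_proxy = calls_per_second / current_proxies
    if steps_since_change < policy.cooldown_steps:
        return current_proxies                       # still in cooldown
    if per_proxy > policy.scale_out_cps and current_proxies < policy.max_proxies:
        return current_proxies + 1
    if per_proxy < policy.scale_in_cps and current_proxies > policy.min_proxies:
        return current_proxies - 1
    return current_proxies

policy = ScalingPolicy()
proxies, since_change = 2, policy.cooldown_steps
for load in (900, 2100, 2600, 2600, 1200, 400, 400):      # offered calls per second
    new = plan_capacity(policy, proxies, load, since_change)
    since_change = 0 if new != proxies else since_change + 1
    proxies = new
    print(f"load={load:>5} cps -> {proxies} proxies")
```

The cooldown illustrates why purely reactive scaling lags behind sudden traffic spikes, which is one reason the proxy and database tiers are scaled separately.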
Server clusters also need to reduce processing latency. Otherwise, subscribers experience low quality of service such as delayed call establishment, dropped calls, and inadequate media quality. Cloud computing platforms do not guarantee latency on virtual machines due to resource contention on the same physical host. These extra latencies from resource contention are temporary in nature. Therefore, we propose and evaluate a mechanism that temporarily distributes more incoming calls to responsive SIP proxies, based on measurements of the processing delay in proxies.
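A hedged sketch of the latency-aware distribution idea in the paragraph above (the names and the weighting rule are assumptions, not the thesis's mechanism): each proxy's share of new calls is weighted by the inverse of its recently measured processing delay, so a temporarily contended proxy receives fewer calls until its measurements recover.

```python
# Hedged sketch, not the thesis's dispatcher: weight call distribution by the
# inverse of each proxy's recently measured processing delay, so temporarily
# slow (resource-contended) proxies receive fewer new calls.
import random

def pick_proxy(measured_delay_ms):
    """measured_delay_ms: dict of proxy name -> recent processing delay (ms)."""
    weights = {p: 1.0 / max(d, 1e-3) for p, d in measured_delay_ms.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    acc = 0.0
    for proxy, w in weights.items():
        acc += w
        if r <= acc:
            return proxy
    return proxy  # fallback for floating-point edge cases

delays = {"proxy-a": 5.0, "proxy-b": 5.0, "proxy-c": 40.0}   # proxy-c is contended
sample = [pick_proxy(delays) for _ in range(10_000)]
for p in delays:
    print(p, round(sample.count(p) / len(sample), 2))
```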
Availability of SIP server clusters is also a challenge on platforms where a node may fail at any time. We investigated how single component failures in a cluster can lead to a complete system outage. We found that for single component failures, simply having redundant components of the same type is enough to mask those failures. However, for client-facing components, smarter clients and DNS resolvers are necessary.
Throughout the thesis, a prototype SIP proxy cluster is re-used, with variations in the architecture or configuration, to demonstrate and address the issues mentioned above. This allows us to tie all of our approaches for different issues into one coherent system that is dynamically scalable, is responsive despite the latency variations of virtual machines, and is tolerant of single component failures in cloud platforms.
|
109 |
GeoSparkSim: A Scalable Microscopic Road Network Traffic Simulator Based on Apache Spark
January 2019 (has links)
Researchers and practitioners have widely studied road network traffic data in areas such as urban planning, traffic prediction, and spatial-temporal databases. For instance, researchers use such data to evaluate the impact of road network changes. Unfortunately, collecting large-scale, high-quality urban traffic data requires tremendous effort because participating vehicles must install Global Positioning System (GPS) receivers and administrators must continuously monitor these devices. Several urban traffic simulators try to generate such data with different features, but they suffer from two critical issues: (1) Scalability: most of them offer only single-machine solutions, which are not adequate to produce large-scale data, and the simulators that can generate traffic in parallel do not balance the load well among machines in a cluster. (2) Granularity: many simulators do not consider microscopic traffic situations, including traffic lights, lane changing, and car following. This paper proposes GeoSparkSim, a scalable traffic simulator that extends Apache Spark to generate large-scale road network traffic datasets with microscopic traffic simulation. The proposed system seamlessly integrates with a Spark-based spatial data management system, GeoSpark, to deliver a holistic approach that allows data scientists to simulate, analyze, and visualize large-scale urban traffic data. To implement microscopic traffic models, GeoSparkSim employs a simulation-aware vehicle partitioning method to partition vehicles among different machines such that each machine has a balanced workload. The experimental analysis shows that GeoSparkSim can simulate the movements of 200 thousand cars over an extensive road network (250 thousand road junctions and 300 thousand road segments). / Dissertation/Thesis / Masters Thesis Computer Engineering 2019
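The load-balancing step mentioned above (partitioning vehicles so that each machine gets a balanced workload) can be illustrated with a small, hedged sketch: count vehicles per spatial grid cell and greedily assign the heaviest cells to the least-loaded worker. This is a single-machine approximation with invented parameters, not GeoSparkSim's Spark-based partitioner.

```python
# Hedged sketch, not GeoSparkSim's implementation: "simulation-aware" vehicle
# partitioning approximated as a greedy assignment of spatial grid cells to
# workers so each worker simulates roughly the same number of vehicles.
# Grid size, worker count, and the synthetic positions are assumptions.
import heapq
import random
from collections import Counter

def partition_cells(vehicle_positions, n_workers, cell_size=0.01):
    """vehicle_positions: list of (x, y). Returns (cell -> worker map, load per worker)."""
    load_per_cell = Counter(
        (int(x / cell_size), int(y / cell_size)) for x, y in vehicle_positions
    )
    # Greedy longest-processing-time: heaviest cells first, onto the least-loaded worker.
    heap = [(0, w) for w in range(n_workers)]          # (current load, worker id)
    heapq.heapify(heap)
    assignment = {}
    for cell, load in load_per_cell.most_common():
        worker_load, worker = heapq.heappop(heap)
        assignment[cell] = worker
        heapq.heappush(heap, (worker_load + load, worker))
    return assignment, sorted(load for load, _ in heap)

random.seed(7)
positions = [(random.random(), random.random()) for _ in range(100_000)]
assignment, loads = partition_cells(positions, n_workers=4)
print("vehicles per worker:", loads)
```

Grouping by cells rather than by individual vehicles keeps neighbouring vehicles on the same worker, which matters for microscopic models such as car following where nearby vehicles interact.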
|
110 |
InterSCSimulator: a scalable, open source, smart city simulator / InterSCSimulator: um simulador de cidades inteligentes escalável e de código aberto
Santana, Eduardo Felipe Zambom, 18 March 2019 (has links)
Large cities around the world face numerous challenges to guarantee the quality of life of their citizens. A promising approach to cope with these problems is the concept of Smart Cities, of which the main idea is the use of Information and Communication Technologies to improve city services and infrastructure. Being able to simulate the execution of Smart City scenarios would be extremely beneficial for the advancement of the field and for governments. Such a simulator would need to represent a large number of agents such as cars, hospitals, and gas pipelines. One possible approach for doing this in a computer system is to use the actor model as a programming paradigm so that each agent corresponds to an actor. The Erlang programming language is based on the actor model and is the most commonly used implementation of it. In this thesis, we present the first version of InterSCSimulator, an open-source, extensible, large-scale traffic simulator for Smart Cities developed in Erlang. Experiments showed that the simulator is capable of simulating millions of agents using a real map of a large city. We also present case studies that demonstrate possible uses of the simulator, such as testing new urban infrastructure and assessing the viability of future transportation modes. / Grandes cidades ao redor do mundo enfrentam grandes desafios para garantir boas condições de vida para seus cidadãos. Uma abordagem para responder aos problemas das cidades é a ideia de Cidades Inteligentes, a qual tem como principal característica o uso de Tecnologias de Telecomunicações e Informação (TIC) para melhorar os serviços da cidade. Simular cenários de Cidades Inteligentes pode beneficiar bastante essa área de pesquisa e também gestores de cidades. Um simulador desse tipo precisa representar diversos tipos de agentes como carros, hospitais e a infraestrutura da cidade. Uma possível implementação desse simulador pode usar o modelo de atores como paradigma de programação, implementando cada agente como um ator. O Erlang é uma das linguagens de programação baseadas no modelo de atores mais utilizadas para o desenvolvimento de aplicações de larga escala. Esta tese apresenta a primeira versão do InterSCSimulator, um simulador de Cidades Inteligentes de código aberto, extensível e de larga escala desenvolvido em Erlang. Experimentos mostraram que o simulador é capaz de simular todo o trânsito de uma metrópole como São Paulo. Adicionalmente, são apresentados diversos casos de uso demonstrando como o simulador pode ser utilizado em trabalhos sobre Cidades Inteligentes como pesquisas sobre novos modos de transportes, redes veiculares e aplicações de Cidades Inteligentes.
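As a minimal illustration of the agent-per-vehicle idea only (InterSCSimulator itself is written in Erlang, with each agent implemented as an actor), the sketch below runs a tiny discrete-event loop in which each car traverses a path of road links at free-flow speed. The map data and trips are invented for the example.

```python
# Hedged sketch, not InterSCSimulator: a minimal discrete-event traffic loop in
# which each car agent travels a path of road links whose free-flow travel time
# is length / speed. Links and trips are illustrative assumptions.
import heapq

links = {                         # link id -> (length in metres, speed in m/s)
    "A-B": (500, 14), "B-C": (800, 14), "C-D": (300, 8),
}
trips = [("car-1", ["A-B", "B-C", "C-D"]), ("car-2", ["B-C", "C-D"])]

events = []                       # (time, car id, remaining links)
for car, path in trips:
    heapq.heappush(events, (0.0, car, path))

while events:
    now, car, path = heapq.heappop(events)
    if not path:
        print(f"{now:7.1f}s  {car} arrived")
        continue
    link, rest = path[0], path[1:]
    length, speed = links[link]
    heapq.heappush(events, (now + length / speed, car, rest))
```

An actor-based version distributes exactly this event loop: each car (and each link or traffic light) becomes an independent process exchanging messages, which is what lets the simulation scale to the millions of agents reported in the abstract.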
|