61 |
Papel da dimensionalidade em redes complexas: conexões com a mecânica estatística não-extensiva / Role of dimensionality in complex networks: connections with nonextensive statistical mechanics. Brito, Samuraí Gomes de Aguiar, 13 December 2016
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Studies of complex networks are highly topical and promote the integration of several areas of knowledge. Previous research has shown that, when interactions are long-ranged, the statistics governing complex networks is not the standard Boltzmann-Gibbs statistics, but rather one that accounts for long-range correlations. In this direction, a proposal that has gained wide acceptance is the nonextensive statistics of Tsallis. In the thermodynamic limit, the degree distributions are of the form P(k) ∝ e_q^(−k/κ), where e_q is the q-exponential defined by e_q^z ≡ [1 + (1 − q)z]^(1/(1−q)), which optimizes the nonadditive entropy S_q (when q → 1, the Boltzmann-Gibbs entropy is recovered). In this thesis we introduce a study of d-dimensional geographic networks (the Natal model) that grow with preferential attachment involving the Euclidean distance, through the introduction of the term r^(−α_A) (α_A ≥ 0) in the preferential-attachment rule. Given the connection with q-statistics, we verify numerically (for d = 1, 2, 3 and 4) that the degree distributions, which in principle depend on α_A and d, in fact depend only on the ratio of these variables, α_A/d, thus exhibiting universal behavior with respect to this variable. Moreover, the limit q = 1 is rapidly approached as α_A/d → ∞. We also verify that other network properties show universal dependence on α_A/d, such as the average shortest path ⟨l⟩, the dynamic exponent β arising from the time evolution of site connectivity, and the entropy S_q of the degree distribution.
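Since the q-exponential is central to these degree distributions, a minimal numerical sketch may help; the parameter values below (q, κ) are hypothetical, chosen only for illustration:

```python
import numpy as np

def q_exponential(z, q):
    """e_q^z = [1 + (1 - q) z]^(1/(1 - q)) where the bracket is positive, else 0;
    reduces to the ordinary exponential as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(z)
    base = 1.0 + (1.0 - q) * z
    return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

# Degree distribution of the form P(k) ~ e_q^(-k/kappa), illustrative parameters
k = np.arange(1, 200, dtype=float)
q, kappa = 1.33, 10.0
Pk = q_exponential(-k / kappa, q)
Pk /= Pk.sum()  # normalize over the sampled range

# q -> 1 recovers the Boltzmann-Gibbs exponential form
assert np.allclose(q_exponential(-k / kappa, 1.0), np.exp(-k / kappa))
```

For large α_A/d the attachment becomes effectively short-ranged and the fitted q approaches 1, which is the universal behavior the abstract describes.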
|
62 |
Teoria cinética relativística: efeitos não-extensivos no Teorema-H / Relativistic kinetic theory: nonextensive effects on the H-theorem. Oliveira, Zaira Bruna Borges de, 03 June 2016
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / The relativistic H-theorem, including nonextensive effects, was calculated using the so-called q-calculus. The molecular chaos hypothesis was generalized in order to capture strong statistical correlations between the relativistic distribution functions. The positivity of the entropy source leads to a thermodynamic constraint on the entropic parameter, q ∈ [0, 2]. It was also proven that the collisional equilibrium states (null entropy source term) are described by a relativistic q-power law that extends the exponential Jüttner distribution and reduces, in the nonrelativistic domain, to the Tsallis power-law function. All results recover the standard ones in the extensive limit (q = 1), thereby showing that the Tsallis framework is compatible with the issues addressed in special relativity.
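As a hedged illustration of the collisional equilibrium described above, one can tabulate the q-generalized Jüttner weight and its q → 1 limit; the mass, temperature, and entropic index below are illustrative assumptions, and normalization and the chemical potential are omitted:

```python
import numpy as np

def q_exp(z, q):
    # q-exponential; reduces to exp(z) as q -> 1
    if np.isclose(q, 1.0):
        return np.exp(z)
    base = 1.0 + (1.0 - q) * z
    return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

m, kT, q = 1.0, 0.2, 1.1        # illustrative mass, temperature (units c = 1), entropic index
p = np.linspace(0.0, 5.0, 500)  # momentum magnitude
E = np.sqrt(p**2 + m**2)        # relativistic energy

f_q = q_exp(-E / kT, q)         # unnormalized q-power-law (generalized Juttner) weight
f_juttner = np.exp(-E / kT)     # q = 1: standard exponential Juttner weight
print(f"tail ratio at p = 5: {f_q[-1] / f_juttner[-1]:.3g}  (q > 1 gives a heavier tail)")
```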
|
63 |
Compósitos Bulk Fill fluidos versus resinas compostas tradicionais: comportamento mecânico / Flowable Bulk Fill composites versus traditional composite resins: mechanical behavior. Chaves, Letícia Virginia de Freitas, 07 July 2017
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Objectives: To evaluate the depth of polymerization (DP), contraction stress (CS), flexural strength (FS), elastic modulus (EM), and bond strength (BS) of three low-viscosity (flowable) Bulk Fill composites in comparison with high-viscosity traditional composites. Methods: Three flowable Bulk Fill composites (Filtek BKF, Surefil SDR, X-tra Base) and three traditional composites (Z250 XT, Grandioso, TPH3) were used. For FS/EM, 60 bar-shaped specimens (7 mm x 2 mm x 1 mm; n = 10) were prepared and evaluated in a Universal Testing Machine (UTM). For DP and BS, conical cavities (n = 10) were prepared in bovine dentin and restored with the materials. DP was analyzed through the base/top surface microhardness ratio, and BS through the push-out test in the UTM. CS was measured in a single increment for the Bulk Fill resins and in two increments for the traditional resins, in a UTM coupled to an extensometer (n = 5). Data were statistically evaluated by two-way ANOVA and Tukey tests (p < 0.05). Results: For EM, the conventional resins showed higher values than all the Bulk Fill resins; for FS, however, they were equivalent, with the sole exception of the VOCO Bulk Fill (X-tra Base), which was inferior. For DP, conventional and Bulk Fill resins were statistically similar. For CS and BS, the Bulk Fill resins were superior to the conventional ones, again with the sole exception of X-tra Base Bulk Fill (VOCO), which was statistically equal to the conventional resins in BS. Conclusion: The flowable Bulk Fill resins tested (BKF and SDR) showed mechanical properties superior or equal to those of the traditional composites, except for the elastic modulus.
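The statistical analysis named in the Methods is easy to reproduce in outline; a hedged sketch with synthetic data (the aging factor and all values are assumptions for illustration, not the thesis data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
materials = ["BKF", "SDR", "XtraBase", "Z250XT", "Grandioso", "TPH3"]
aging = ["24h", "6mo"]  # hypothetical second factor for a crossed two-way design
df = pd.DataFrame(
    [(m, a, rng.normal(120, 15)) for m in materials for a in aging for _ in range(5)],
    columns=["material", "aging", "fs"],  # fs: flexural strength in MPa (synthetic)
)

# Two-way ANOVA with interaction, then Tukey HSD post-hoc across materials
model = ols("fs ~ C(material) * C(aging)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
print(pairwise_tukeyhsd(df["fs"], df["material"], alpha=0.05))
```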
|
64 |
Entropias generalizadas: vínculos termodinâmicos da Terceira Lei / Generalized entropies: thermodynamic constraints of the Third Law. Bento, Eliângela Paulino, 22 April 2016
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Based on the third law of thermodynamics, we ask whether or not generalized entropies satisfy this fundamental property. Broadly, the third law states that, for systems with nondegenerate ground states in equilibrium, the entropy approaches zero as the temperature (on the absolute scale) approaches zero; the entropy may vanish only at absolute zero temperature. In this context, we propose a direct analytical procedure to test whether a generalized entropy satisfies the third law, assuming only a general form for the entropy S and the energy U of an arbitrary classical N-level system. Mathematically, the method relies on the exact calculation of the parameter β = dS/dU in terms of the microstate probabilities p_i. Finally, we determine the relation between the low-entropy limit S → 0 (or, more generally, S → S_min) and the low-temperature limit β → +∞. For comparison, we apply the method to the Boltzmann-Gibbs entropy (the standard model) and to the Kaniadakis and Tsallis entropies (generalized models). For the latter two, we illustrate the power of the method by determining the ranges of the entropic parameters for which the entropy satisfies the third law. The results show that, for the κ-entropy, the values usually assigned to the parameter κ satisfy the third law (−1 < κ < 1). For the q-entropy, however, the same is not true: we show that the q-entropy can vanish at nonzero temperature for certain values of q. As a concrete example, we consider the paradigmatic one-dimensional Ising model with first-neighbor interactions, one of the most important models in all of physics. Classically, the Ising model is solved in the canonical ensemble, but it can also be solved exactly in nonstandard ensembles using generalized entropies.
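The core quantity of the procedure, β = dS/dU expressed through the microstate probabilities, can be sketched numerically for the simplest case, a two-level system (the level spacing and grid below are illustrative; only the qualitative behavior matters):

```python
import numpy as np

def tsallis_entropy(p, q):
    """S_q = (1 - sum_i p_i^q) / (q - 1) for a two-level system {p, 1 - p};
    reduces to the Boltzmann-Gibbs entropy as q -> 1."""
    probs = np.stack([p, 1.0 - p])
    if np.isclose(q, 1.0):
        return -(probs * np.log(probs)).sum(axis=0)
    return (1.0 - (probs ** q).sum(axis=0)) / (q - 1.0)

eps = 1.0                          # level spacing: energies {0, eps}, so U = p * eps
p = np.linspace(1e-6, 0.5, 20000)  # from ground-state-dominated to equiprobable
U = p * eps

for q in (0.5, 1.0, 1.5):
    S = tsallis_entropy(p, q)
    beta = np.gradient(S, U)       # beta = dS/dU along the one-parameter family
    print(f"q = {q}: S(p -> 0) = {S[0]:.2e}, beta(p -> 0) = {beta[0]:.3g}")
```

Running this shows β growing without bound as S → 0 for q ≤ 1 (only logarithmically at q = 1), but tending to a finite value (analytically q/(q − 1)) for q > 1, the signature of the third-law violation the abstract reports: the q-entropy can reach zero at nonzero temperature.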
|
65 |
Estimating Risks of Pharmaceutical NSAID Mixtures in Surface Waters through Risk Cups: Implications for Sustainability. Mandahl, Per, January 2020
Background: Use of pharmaceuticals can lead to residues, unchanged or as metabolites, in surface water that may cause negative environmental effects. Sweden has adopted the Generational goal, defining the direction and changes needed to become a sustainable nation; these align with the UN Sustainable Development Goals (SDGs). Sweden collects and analyzes samples for pharmaceuticals and other contaminants in surface water. Aim: To estimate risks connected to pharmaceuticals in complex mixtures, exemplified by nonsteroidal anti-inflammatory drugs (NSAIDs), and discuss how this can be used to influence the actions needed to reach the Generational goal and the SDGs of Agenda 2030. Methods: Measured environmental concentrations (MECs) of the NSAIDs diclofenac, ibuprofen, ketoprofen, and naproxen in Swedish surface waters and in Uppsala's Fyris River were accessed from a database and used in conjunction with predicted no-effect concentrations (PNECs) from the literature to derive risk quotients (RQ = MEC/PNEC). For all drugs, a standardized PNEC derived from OECD guideline base-set tests was found; for diclofenac and ibuprofen, non-traditional guideline PNECs were also identified. Risk cups, applied by summation of MEC/PNEC risk quotients, are considered safe if the sum of RQs is < 1; and, as proposed in SOU 2019:45, if one chemical adds more than 0.1 of risk to the risk cup, it would be better to substitute it for another, if possible. Results and Discussion: Standardized PNECs derived from OECD guideline base-set tests were more than 60-fold greater than non-traditional PNECs for diclofenac and ibuprofen, affecting their individual RQ contributions and the total sum of RQs. Based on the non-traditional PNECs, the sum of RQs was greater than or near 1 in some cases in the Fyris River and elsewhere, indicating risk to biota, especially in 2010. Diclofenac and ibuprofen typically contributed more to risk cups than did ketoprofen and naproxen. Diclofenac especially should be considered for substitution, if possible. Swedish sales data indicate at least one more NSAID compound suitable for analysis. In addition, more than 70 pharmaceuticals were identified in the Fyris River, adding to the pressure on the environment from NSAIDs. Risk cups are conservative and require sparse data relative to other methods, and thus can be used to prioritize further efforts. A difficulty is finding relevant ecotoxicological data for pharmaceuticals; an open-access database would therefore be of value, preferably complemented with sales data for APIs. However, since a default RQ value of 0.1 was suggested in SOU 2019:45, a lack of data would not hinder action. Use of risk cups makes it possible to work toward, e.g., sustainable production practices benefiting SDG 12. Inaction after identifying a problem conflicts with SDGs 6 and 12, since it would lead to less clean water, more sanitation issues, and non-sustainable consumption and production. Conclusion: Risk cups as applied here are suitable as a first tier of pharmaceutical mixture risk estimation, since they are quick to perform and demand less data than other methods. Because of their dependence on PNECs, it is important to use a relevant effect test, with results preferably published in an open-access database. Diclofenac's non-traditional risk quotient indicates that the ecological status of the Fyris River is at risk, supporting the official moderate ecological status classification.
This thesis suggests an additional NSAID, etoricoxib, as a possible candidate for future studies, based on the number of other NSAIDs on the market and their sales numbers, pointing to the usefulness of sales data for a better understanding of risk. In addition to the NSAID group, other pharmaceuticals, active metabolites, and non-pharmaceutical chemicals add to the pressure on the environment. Data on the risk cups and risk quotients can be used as a basis for improvements at sewage treatment plants and factories, as well as for launching informative campaigns aimed at physicians and the general public, actions which all may lead to a more sustainable future.
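The risk-cup arithmetic itself is a one-liner per substance; a hedged sketch (all concentrations below are made-up illustrative values, not the thesis measurements):

```python
# Risk quotient RQ = MEC / PNEC; the risk cup is the sum of RQs, considered safe
# if < 1, with RQ > 0.1 flagging a substitution candidate (per SOU 2019:45).
mec_ng_per_L = {"diclofenac": 45.0, "ibuprofen": 30.0, "ketoprofen": 6.0, "naproxen": 12.0}
pnec_ng_per_L = {"diclofenac": 50.0, "ibuprofen": 110.0, "ketoprofen": 2100.0, "naproxen": 1700.0}

risk_quotients = {d: mec_ng_per_L[d] / pnec_ng_per_L[d] for d in mec_ng_per_L}
risk_cup = sum(risk_quotients.values())

for drug, rq in sorted(risk_quotients.items(), key=lambda kv: -kv[1]):
    note = "  <- substitution candidate (RQ > 0.1)" if rq > 0.1 else ""
    print(f"{drug:<11} RQ = {rq:.3f}{note}")
print(f"risk cup = {risk_cup:.3f} ({'not safe, >= 1' if risk_cup >= 1 else 'considered safe, < 1'})")
```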
|
66 |
Spectral, Energy and Computation Efficiency in Future 5G Wireless Networks. Sun, Haijian, 01 August 2019
Wireless technology has revolutionized the way people communicate. From the first generation (1G) in the 1980s to the current, widely deployed 4G of the 2010s, we have witnessed not only a technological leap but also a transformation of the associated applications. 5G is expected to become commercially available in 2020, driven by ever-increasing demands for high mobile traffic, low transmission delay, and massive numbers of connected devices. Today, with the popularity of smartphones, intelligent appliances, autonomous cars, and tablets, communication demands are higher than ever, especially when it comes to low-cost and easy-access solutions.
Existing communication architectures cannot fulfill 5G's needs. For example, 5G requires connection speeds up to 1,000 times faster than current technology can provide, and the transmitter-to-receiver delay should be less than 1 ms, whereas 4G targets a 5 ms delay. To meet these requirements, 5G will apply several disruptive techniques. We focus on two of them: a new radio access technique and a new computing scheme. For the former we study non-orthogonal multiple access (NOMA); for the latter we use mobile edge computing (MEC).
Traditional communication systems let users transmit alternately, which avoids inter-user interference but caps the connection speed. NOMA, on the other hand, allows multiple users to transmit simultaneously. While NOMA inevitably causes excessive interference, we prove that such interference can be mitigated by an advanced receiver-side technique. NOMA has been on the research frontier since 2013, and both academics and industry professionals have extensively studied its performance since then. In this dissertation, our contribution is to incorporate NOMA into several potential schemes, such as relay, IoT, and cognitive radio networks. Furthermore, we review various limitations of NOMA and propose a more practical model.
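The receiver-side mitigation alluded to here can be made concrete with the textbook two-user downlink case using successive interference cancellation (SIC); a hedged sketch with illustrative powers, channel gains, and noise (not values from the dissertation):

```python
import numpy as np

P = 1.0                      # total base-station transmit power
a_weak, a_strong = 0.8, 0.2  # power split: more power to the weak (far) user
g_weak, g_strong = 0.2, 1.0  # channel gains |h|^2 (weak user has the poorer channel)
N0 = 0.05                    # noise power

# Weak user decodes its own signal, treating the strong user's signal as interference
sinr_weak = (P * a_weak * g_weak) / (P * a_strong * g_weak + N0)
# Strong user first removes the weak user's signal via SIC, then decodes interference-free
sinr_strong = (P * a_strong * g_strong) / N0

R_weak = np.log2(1 + sinr_weak)      # achievable rates in bit/s/Hz
R_strong = np.log2(1 + sinr_strong)
print(f"R_weak = {R_weak:.2f}, R_strong = {R_strong:.2f} bit/s/Hz")
```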
In the second part, MEC is considered. MEC is an evolution of the earlier cloud computing paradigm: it leverages powerful devices nearby, so that instead of sending information to distant cloud servers, transmission occurs at closer range, which can effectively reduce communication delay. In this work, we propose a new evaluation metric for MEC that more effectively captures the trade-off between the amount of computation performed and the energy consumed in doing so.
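The dissertation's exact metric is not reproduced here; a common way to encode such a computation-energy trade-off in the MEC literature is bits processed per joule, sketched under that assumption with illustrative numbers:

```python
# Hedged sketch: computation efficiency as bits per joule, for local execution
# versus offloading. The model and all numbers are illustrative assumptions.
f_local = 1e9         # local CPU frequency (cycles/s)
kappa = 1e-28         # effective switched-capacitance constant (typical in the literature)
cycles_per_bit = 1e3  # workload intensity
bits = 1e6            # task size

# Local execution: energy = kappa * f^2 * total cycles
e_local = kappa * f_local**2 * cycles_per_bit * bits
# Offloading: transmit energy = P_tx * bits / rate (edge-server energy ignored here)
P_tx, rate = 0.2, 5e6
e_offload = P_tx * bits / rate

eff_local = bits / e_local      # bits per joule, computed locally
eff_offload = bits / e_offload  # bits per joule, offloaded
print(f"local: {eff_local:.3g} bit/J, offload: {eff_offload:.3g} bit/J")
```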
The last part proposes a practical communication system for wearable devices that combines all the techniques discussed above. The challenges of wearable communication stem from its diverse needs: some devices may require low speed but high reliability (factory sensors), while others may need low delay (medical devices). We address these challenges and validate our findings through simulations.
|
67 |
Ontology-based framework for Tactile Internet and Digital Twin Applications. Adhami, Hikmat, 09 August 2022
In the era of Industry 4.0, the Digital Twin (DT, integrating audio-video, virtual reality, augmented reality, and haptics, from the Greek word haptikos, "able to touch"), and the Tactile Internet (TI), it is clear that telecom stakeholders need new network requirements to provision high-quality services under the new standards. This era is framed as the TI, and thanks to recent technical breakthroughs it will achieve a true paradigm shift from content-delivery to skill-set-delivery networks. It will build a new internet structure with improved capabilities, but it will be difficult to meet the technical needs of the TI with current fourth-generation (4G) mobile communication systems. As a result, fifth-generation (5G) mobile communication systems will be used at the wireless edge as a key enabler of the TI, owing to their automated core network functionalities.
Because of the COVID-19 outbreak, most daily activities such as employment, research, and education are now conducted online rather than in person; as a result, internet traffic has risen dramatically. The Tactile Internet is still in its infancy worldwide. For this reason, and because of the growing need for its applications, running these applications on existing, deployed network infrastructures, especially in developing countries, is thought to be very hard, even quasi-impossible. Since 5G has not yet reached its convergence stage (i.e., it is not deployed everywhere), and since mobile communications are under heavy load while the world still faces the COVID-19 pandemic and activities take place online, we propose to design and implement a QoS framework that facilitates the feasibility and applicability of TI systems where no 5G infrastructure is deployed. This framework predicts the most suitable network type to deploy for given TI applications with given Key Performance Indicators (KPIs). The framework is also scalable, in that it can point to future Next Generation Mobile Network (NGMN) types if necessary.
"To deal" with TI applications means "to deal" with haptics added to audio and video streams. Therefore, performance evaluation of haptic networks is required; and since there are different types of haptic networks, interoperability is needed. Consequently, a standardized way of annotating and describing haptic networks is necessary. The first idea that comes to mind is the use of ontologies, in which we can add intelligent rules to infer additional data and predict resource requirements in order to achieve better performance. Many works in the literature rely on artificial intelligence approaches to tackle the above-mentioned standardization, but very few depend on ontologies, and without forward-looking outcomes, especially for the optimization problem. By optimization we mean the optimal types, methods, and rules able to accommodate the applicability of TI systems (the application KPIs) on an acceptable environment or infrastructure (the networking KPIs), and, even more, to infer the most suitable network type.
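To make the ontology idea concrete, a minimal sketch with rdflib; the namespace, classes, KPI properties, thresholds, and the rule are all illustrative assumptions (a real system would express the rules in SWRL or SPARQL):

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Annotate a TI application with KPI requirements, then infer a network type.
TI = Namespace("http://example.org/tactile-internet#")  # hypothetical namespace
g = Graph()
g.bind("ti", TI)

g.add((TI.Teleoperation, RDF.type, TI.TIApplication))
g.add((TI.Teleoperation, TI.maxLatencyMs, Literal(1.0, datatype=XSD.double)))
g.add((TI.Teleoperation, TI.minRateMbps, Literal(100.0, datatype=XSD.double)))

# Rule-like inference, written in plain Python for brevity:
for app in g.subjects(RDF.type, TI.TIApplication):
    latency = float(g.value(app, TI.maxLatencyMs))
    rate = float(g.value(app, TI.minRateMbps))
    network = TI.FiveG if (latency <= 1.0 or rate >= 100.0) else TI.FourG
    g.add((app, TI.requiresNetwork, network))
    print(app, "->", network)
```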
To help manufacturing companies take full advantage of the TI, we propose to develop new methods and tools (ontologies) to intelligently handle TI, DT (Digital Twin), and IoT (Internet of Things) sensor data, process data at the edge of the network, and deliver faster insights. The outcomes of these ontologies have been validated through two case studies: in the first, we simulated TI traffic over Wi-Fi, WiMAX, and UMTS (3G) infrastructures; in the second, we used 4G (LTE-A) together with Software-Defined Networking (SDN) integrated with Mobile Edge Computing (MEC) as the networking backbone. The results, in terms of QoS KPI performance, agree closely with the outcomes of our proposed ontologies.
|
68 |
Autonomic Management and Orchestration Strategies in MEC-Enabled 5G Networks. Subramanya, Tejas, 26 October 2021
5G and beyond mobile network technology promises to deliver unprecedented ultra-low latency and high data rates, paving the way for many novel applications and services. Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC) are two technologies expected to play a vital role in achieving the ambitious Quality of Service requirements of such applications. While NFV provides flexibility by enabling network functions to be dynamically deployed and inter-connected to realize Service Function Chains (SFC), MEC brings the computing capability to the mobile network's edges, thus reducing latency and alleviating the transport network load. However, adequate mechanisms are needed to meet the dynamically changing network service demands (in single and multiple domains) and optimally utilize the network resources while ensuring that the end-to-end latency requirement of services is always satisfied. In this dissertation we break the problem into three separate stages and present solutions for each of them.

Firstly, we apply Artificial Intelligence (AI) techniques to drive NFV resource orchestration in MEC-enabled 5G architectures for single- and multi-domain scenarios. We propose three deep learning approaches to perform horizontal and vertical Virtual Network Function (VNF) auto-scaling: (i) Multilayer Perceptron (MLP) classification and regression (single-domain), (ii) centralized Artificial Neural Network (ANN), centralized Long Short-Term Memory (LSTM), and centralized Convolutional Neural Network-LSTM (CNN-LSTM) (single-domain), and (iii) federated ANN, federated LSTM, and federated CNN-LSTM (multi-domain). We evaluate the performance of each of these deep learning models trained over a commercial network operator dataset and investigate the pros and cons of the different approaches to VNF auto-scaling. For the first approach, our results show that both the MLP classifier and the MLP regressor have strong predictive capability for auto-scaling, with the regressor outperforming the classifier in accuracy. For the second approach, in one-step prediction, CNN-LSTM performs best for the QoS-prioritized objective and LSTM performs best for the cost-prioritized objective, while in multi-step prediction the encoder-decoder CNN-LSTM model outperforms the encoder-decoder LSTM model for both objectives. For the third approach, the federated LSTM and federated CNN-LSTM models both perform better than the federated ANN model. We also note that, in general, federated learning approaches perform worse than centralized learning approaches.

Secondly, we employ Integer Linear Programming (ILP) techniques to formulate and solve a joint user association and SFC placement problem, where each SFC represents a service requested by a user with end-to-end latency and data rate requirements. We also develop a comprehensive end-to-end latency model considering radio delay, backhaul network delay, and SFC processing delay for 5G mobile networks. We evaluated the proposed model using simulations based on a real operator network topology and real-world latency values. Our results show that the average end-to-end latency is reduced significantly when SFCs are placed at the ME hosts according to their latency and data rate demands.
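A minimal sketch of the first stage's MLP-regression flavor (synthetic features and a made-up scaling rule stand in for the commercial operator dataset; hyperparameters are illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Predict the number of VNF instances needed from observed load indicators.
rng = np.random.default_rng(42)
n = 2000
traffic_mbps = rng.uniform(10, 1000, n)
active_users = rng.integers(50, 5000, n).astype(float)
cpu_util = rng.uniform(0.1, 0.95, n)
X = np.column_stack([traffic_mbps, active_users, cpu_util])
# Assumed ground truth: instances scale with load (an illustrative rule only)
y = np.ceil(traffic_mbps / 200 + active_users / 1500 + 2 * cpu_util)

scaler = StandardScaler().fit(X[:1500])
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(scaler.transform(X[:1500]), y[:1500])

pred = np.ceil(model.predict(scaler.transform(X[1500:])))  # round up: never under-provision
accuracy = (pred == y[1500:]).mean()
print(f"exact-match scaling accuracy on held-out samples: {accuracy:.2%}")
```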
Furthermore, we propose a heuristic algorithm to address the scalability issue of ILP, which can solve the above association/mapping problem in seconds rather than hours.

Finally, we introduce lightMEC, a lightweight MEC platform for deploying mobile edge computing functionalities, which allows hosting of low-latency and bandwidth-intensive applications at the network edge. Measurements conducted over a real-life testbed demonstrated that lightMEC can support practical MEC applications without requiring any change to the existing functionality of mobile network nodes in the access and core network segments. The significant benefits of adopting the proposed architecture are analyzed based on a proof-of-concept demonstration of the content-caching use case. Furthermore, we introduce the AI-driven Kubernetes orchestration prototype that we implemented by leveraging the lightMEC platform, and assess the performance of the proposed deep learning models (from stage 1) in an experimental setup. The prototype evaluations confirm the simulation results achieved in stage 1 of the thesis.
|
69 |
Analysis of 5G Edge Computing solutions and APIs from an E2E perspective addressing the developer experience. Manocha, Jitendra, January 2021
Edge Computing is considered one of the key capabilities in next-generation (5G) networks, enabling a flood of latency-, throughput-, and data-sensitive edge-native applications. Edge application developers require infrastructure at the edge to host the application workload, and network connectivity procedures to connect application users to the nearest edge where the workload is hosted. The distributed nature of edge infrastructure and the requirement for network connectivity make it attractive for communication service providers (CSPs) to become edge service providers (ESPs); similarly, hyper-scale cloud providers (HCPs) are planning to expand into the ESP role, building on their cloud presence and targeting edge application developers. CSPs across the globe follow a standards-based approach to building interoperable networks and infrastructure, while HCPs do not participate in telecom standardization bodies. Standards development organizations (SDOs) such as the European Telecommunications Standards Institute (ETSI) and the 3rd Generation Partnership Project (3GPP) are working to provide a standard architecture for edge computing solutions for service providers. However, the current focus of the SDOs is on architecture, with little attention to the application developer experience and the Application Programming Interfaces (APIs). On the architecture itself, there are different standards and approaches that overlap with one another. APIs proposed by different SDOs are not easily consumable by edge application developers and require simplification. On the other hand, there are few widely known standards in the hyper-scale and public cloud industry for integrating with one another, beyond the public APIs offered by the cloud providers. To scale and succeed, edge service providers need to focus on interoperability, not only from a cloud infrastructure perspective but from a network connectivity perspective as well. This work analyzes the standards defined by different standardization bodies in the 5G edge computing area and the overlaps between them. It then highlights the requirements from an edge application developer's perspective, investigates the deficiencies of the standards, and proposes an end-to-end edge solution architecture and a method to simplify the APIs so that they fulfil the needs of edge-native applications. The proposed solution considers CSPs providing multi-cloud infrastructure for edge computing by integrating with HCP infrastructure. In addition, the work investigates existing standards for integrating cloud capabilities into network platforms and elaborates how network and cloud computing capabilities can be integrated to provide a complete edge service to edge application developers and enterprises. It proposes an alternative way to integrate edge application developers with cloud service providers dynamically by offering a catalog of services.
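The catalog-of-services idea can be sketched in a few lines; everything here (fields, providers, the matching rule) is an illustrative assumption, not a standardized API:

```python
# Hypothetical edge-service catalog: a developer states KPI needs, and the
# catalog returns providers whose offerings satisfy them.
catalog = [
    {"provider": "CSP-A", "region": "eu-north", "max_latency_ms": 10, "gpu": False},
    {"provider": "CSP-B", "region": "eu-north", "max_latency_ms": 5,  "gpu": True},
    {"provider": "HCP-X", "region": "eu-west",  "max_latency_ms": 25, "gpu": True},
]

def match(requirements: dict) -> list[str]:
    """Return providers meeting the developer's region/latency/GPU needs."""
    return [
        entry["provider"] for entry in catalog
        if entry["region"] == requirements["region"]
        and entry["max_latency_ms"] <= requirements["max_latency_ms"]
        and (not requirements.get("gpu") or entry["gpu"])
    ]

print(match({"region": "eu-north", "max_latency_ms": 10, "gpu": True}))  # ['CSP-B']
```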
|
70 |
AI-enabled System Optimization with Carrier Aggregation and Task Offloading in 5G and 6G. Khoramnejad, Fahimeh, 24 March 2023
Fifth-generation (5G) and sixth-generation (6G) are new global wireless standards providing everyone and everything (machines, objects, and devices) with massive network capacity. The technological advances in wireless communication enable 5G and 6G networks to support resource- and computation-hungry services such as smart agriculture and smart city applications. Among these advances are two state-of-the-art technologies: Carrier Aggregation (CA) and Multi-access Edge Computing (MEC). CA unlocks new sources of spectrum in both the mid-band and high-band radio frequencies. It provides the unique capability of aggregating several frequency bands for higher peak rates and increases cell coverage. The latter is obtained by activating Component Carriers (CCs) in the low-band and mid-band frequencies (below 7 GHz), while the 5G high band (above 24 GHz) delivers unprecedented peak rates with poorer uplink (UL) coverage. MEC provides computing and storage resources, with sufficient connectivity, close to the end users. These execution resources typically sit within or at the boundary of access networks, supporting application use cases such as Augmented Reality (AR) and Virtual Reality (VR). The key technology in MEC is task offloading, which enables a user to offload a resource-hungry application to the MEC hosts, reducing the cost (in terms of energy and latency) of processing the application. This thesis focuses on using CA and task offloading in 5G and 6G wireless networks. These advanced infrastructures enable many broader use cases, e.g., autonomous driving and Internet of Things (IoT) applications. The pertinent optimization problems, however, are high-dimensional with combinatorial characteristics, and the time-varying features of 5G/6G wireless networks, such as the stochastic nature of the wireless channel, must be handled concurrently. These challenges can be tackled by using data-driven techniques and Machine Learning (ML) algorithms to derive intelligent and autonomous resource management techniques for 5G/6G wireless networks. The resource management problems in these networks are sequential decision-making problems, additionally with conflicting objectives; among ML algorithms, those based on Reinforcement Learning (RL) constitute a promising tool for trading off such conflicting objectives, and they are therefore used here. This research considers the objectives of maximizing the achievable rate and minimizing the users' transmit power levels in the MEC-enabled network, while simultaneously maximizing network capacity and improving network coverage by activating/deactivating the CCs. Compared with the schemes derived in the literature, our contributions are twofold: deriving distributed resource management schemes for 5G/6G wireless networks that efficiently manage the limited spectrum resources and meet the diverse requirements of resource-hungry applications, and developing intelligent, energy-aware algorithms that improve performance in terms of energy consumption, delay, and achievable rate.
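As a hedged illustration of the RL machinery invoked here, a tabular Q-learning toy for a joint offload/CC-activation decision (the environment, rewards, and numbers are invented for illustration; the thesis's actual algorithms are more elaborate):

```python
import numpy as np

# States: quantized channel quality (0..3); actions: 0 = compute locally,
# 1 = offload, 2 = offload with an extra component carrier (CC) activated.
rng = np.random.default_rng(7)
n_states, n_actions = 4, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    rate = [0.5, 1.0 + 0.5 * state, 0.8 + 0.9 * state][action]  # extra CC pays off on good channels
    energy = [0.8, 0.4, 0.7][action]                            # extra CC costs more energy
    reward = rate - energy                                      # rate/energy trade-off
    next_state = int(rng.integers(n_states))                    # channel evolves randomly
    return reward, next_state

state = 0
for _ in range(20000):
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())
    reward, next_state = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("greedy action per channel state:", Q.argmax(axis=1))  # expect offload, extra CC on good channels
```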
|