51

AI-enabled System Optimization with Carrier Aggregation and Task Offloading in 5G and 6G

Khoramnejad, Fahimeh 24 March 2023 (has links)
Fifth-Generation (5G) and Sixth-Generation (6G) are new global wireless standards that provide everyone and everything, including machines, objects, and devices, with massive network capacity. Technological advances in wireless communication enable 5G and 6G networks to support resource- and computation-hungry services such as smart agriculture and smart city applications. Among these advances are two state-of-the-art technologies: Carrier Aggregation (CA) and Multi-Access Edge Computing (MEC). CA unlocks new sources of spectrum in both the mid-band and high-band radio frequencies. It provides the unique capability of aggregating several frequency bands for higher peak rates and increases cell coverage. The latter is obtained by activating Component Carriers (CCs) in the low and mid bands (below 7 GHz), while the 5G high band (above 24 GHz) delivers unprecedented peak rates but poorer Uplink (UL) coverage. MEC provides computing and storage resources, with sufficient connectivity, close to end users. These execution resources are typically within or at the boundary of access networks, supporting application use cases such as Augmented Reality (AR) and Virtual Reality (VR). The key technology in MEC is task offloading, which enables a user to offload a resource-hungry application to MEC hosts to reduce the cost (in terms of energy and latency) of processing the application. This thesis focuses on using CA and task offloading in 5G and 6G wireless networks. These advanced infrastructures enable many broader use cases, e.g., autonomous driving and Internet of Things (IoT) applications. However, the pertinent problems are high-dimensional with combinatorial characteristics. Furthermore, the time-varying features of 5G/6G wireless networks, such as the stochastic nature of the wireless channel, must be handled concurrently. These challenges can be tackled by using data-driven techniques and Machine Learning (ML) algorithms to derive intelligent and autonomous resource management techniques for 5G/6G wireless networks. The resource management problems in these networks are sequential decision-making problems with conflicting objectives. Therefore, among ML algorithms, we use those based on Reinforcement Learning (RL), which constitute a promising tool for trading off the conflicting objectives of resource management in 5G/6G wireless networks. This research considers the objective of maximizing the achievable rate and minimizing the users' transmit power levels in a MEC-enabled network. Additionally, we aim to simultaneously maximize network capacity and improve network coverage by activating/deactivating the CCs. Compared with schemes derived in the literature, our contributions are twofold: deriving distributed resource management schemes in 5G/6G wireless networks that efficiently manage the limited spectrum resources and meet the diverse requirements of resource-hungry applications, and developing intelligent and energy-aware algorithms that improve performance in terms of energy consumption, delay, and achievable rate.
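The abstract describes RL agents that trade off achievable rate against transmit power but gives no code; as a minimal, self-contained sketch of how such a trade-off is typically scalarized inside an RL loop, consider the toy Q-learning example below. The power levels, channel discretization, weights, and i.i.d. fading model are illustrative assumptions, not the thesis's actual environment.

```python
# Minimal sketch (not from the thesis): Q-learning with a scalarized reward
# that trades off achievable rate against transmit power. The environment,
# state/action spaces, and weights below are illustrative assumptions.
import math
import random

POWER_LEVELS = [0.1, 0.5, 1.0, 2.0]   # candidate transmit powers (W), assumed
CHANNEL_GAINS = [0.2, 0.6, 1.2]       # discretized channel states, assumed
W_RATE, W_POWER = 1.0, 0.5            # trade-off weights, assumed
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(len(CHANNEL_GAINS))
     for a in range(len(POWER_LEVELS))}

def reward(state, action):
    # Scalarized multi-objective reward: rate term minus power penalty.
    rate = math.log2(1.0 + POWER_LEVELS[action] * CHANNEL_GAINS[state])
    return W_RATE * rate - W_POWER * POWER_LEVELS[action]

state = random.randrange(len(CHANNEL_GAINS))
for step in range(20000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.randrange(len(POWER_LEVELS))
    else:
        action = max(range(len(POWER_LEVELS)), key=lambda a: Q[(state, a)])
    r = reward(state, action)
    next_state = random.randrange(len(CHANNEL_GAINS))  # i.i.d. fading, assumed
    best_next = max(Q[(next_state, a)] for a in range(len(POWER_LEVELS)))
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = next_state

for s in range(len(CHANNEL_GAINS)):
    best = max(range(len(POWER_LEVELS)), key=lambda a: Q[(s, a)])
    print(f"channel gain {CHANNEL_GAINS[s]}: pick power {POWER_LEVELS[best]} W")
```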
52

Multipath transport protocol offloading

Alfredsson, Rebecka January 2022 (has links)
Recently, we have seen an evolution of programmable network devices, where it is possible to customize packet processing inside the data plane at an unprecedented level. This is in contrast to traditional approaches, where networking device functionality is fixed and defined by the ASIC, and customers may need to wait years before vendors release new versions that add the features they require. Vendors in the industry have adapted, and the focus has shifted to offering new types of network devices, such as the SmartNIC, IPU, and DPU. Another major paradigm shift in networking is the move towards protocols that encrypt parts of packet headers and contents, such as QUIC. Also, many devices such as smartphones support multiple access networks, which requires efficient multipath protocols to leverage the capabilities of several networks at the same time. However, when using protocols inside the network that require encryption, such as QUIC or multipath QUIC, the packet processing operations for the en/decryption process are very resource intensive. Consequently, network vendors and operators need to accelerate and offload crypto operations to dedicated hardware in order to free CPU cycles for business-critical operations. Therefore, the aim of this study is to investigate how multipath QUIC can be offloaded or hardware accelerated in order to reduce CPU utilization on the server. Our contributions are an evaluation of frameworks, programming languages, and hardware devices in terms of crypto offloading functionality. Two packet processing offloading prototypes were designed using the DPDK framework and the programming language P4. The DPDK design was implemented and evaluated on a BlueField-2 DPU. The offloading prototype handles a major part of the packet processing and the crypto operations in order to reduce the load of the user application running on the host. An evaluation shows that throughput decreases only slightly when larger keys are used. The evaluation gives important insights into the need for high-performance crypto engines and/or CPUs when offloading.
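The thesis's benchmark code is not reproduced here, but the observation that larger keys cost little extra throughput is easy to illustrate on a host CPU. The sketch below uses the Python cryptography package's AESGCM primitive (QUIC packet protection is AEAD-based) to time packet-sized encryptions at 128- and 256-bit key lengths; the packet size and iteration count are assumptions.

```python
# Rough illustration (not the thesis's benchmark): AES-GCM packet-protection
# cost at different key sizes, mirroring the observation that larger keys
# reduce throughput only slightly. Requires the 'cryptography' package.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

PACKET_SIZE = 1350   # bytes, a typical QUIC payload size (assumed)
N_PACKETS = 50000

def throughput(key_bits: int) -> float:
    key = AESGCM.generate_key(bit_length=key_bits)
    aead = AESGCM(key)
    payload = os.urandom(PACKET_SIZE)
    nonce = os.urandom(12)
    start = time.perf_counter()
    for _ in range(N_PACKETS):
        aead.encrypt(nonce, payload, None)  # nonce reuse is OK for timing only
    elapsed = time.perf_counter() - start
    return N_PACKETS * PACKET_SIZE / elapsed / 1e6  # MB/s

for bits in (128, 256):
    print(f"AES-{bits}-GCM: {throughput(bits):.1f} MB/s")
```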
53

A Delay- and Power-optimized Task Offloading using Genetic Algorithm.

Nygren, Christoffer, Hellkvist, Oskar January 2022 (has links)
The Internet of Things (IoT) has introduced the Big Data era, as IoT devices produce massive amounts of data daily. Since IoT devices have limited computational and processing capabilities, processing the data at the edge is challenging. For example, power consumption becomes problematic if data is processed on the IoT device itself. Thus, there is a need to feed this massive data into a cloud platform for analysis. However, uploading the data from IoT devices to the cloud platform causes a delay, which is a significant issue for delay-sensitive applications. This trade-off between delay and power calls for a sound policy for deciding whether a task should be allocated to the edge or to the cloud processing platform. Research on this subject addresses the issue frequently, and various methods have been proposed to mitigate the problem. Previous studies usually focus on the edge-to-cloud computing platform, i.e., on efficiently distributing the computational tasks between the IoT devices and the cloud. This thesis proposes a task allocation between edge and cloud computing that balances power consumption and delay. We accomplish this by comparing different task allocation methods, benchmarking them in different scenarios, and evaluating them through a proposed mathematical model.
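The abstract names a genetic algorithm but does not specify its encoding; a common formulation, shown as a hedged sketch below, uses a binary chromosome (one gene per task, 0 = edge, 1 = cloud) and a fitness that is a weighted sum of delay and power. All speeds, powers, and weights here are invented for illustration and are not the thesis's parameters.

```python
# Minimal genetic-algorithm sketch (illustrative, not the thesis's model):
# each gene assigns one task to the edge (0) or the cloud (1); fitness is a
# weighted sum of total delay and power. All constants are assumptions.
import random

N_TASKS = 12
TASK_CYCLES = [random.uniform(1, 5) for _ in range(N_TASKS)]  # Gcycles, assumed
EDGE_SPEED, CLOUD_SPEED = 2.0, 10.0   # Gcycles/s, assumed
UPLINK_DELAY = 0.8                    # s per task sent to the cloud, assumed
EDGE_POWER, TX_POWER = 4.0, 1.5       # W, assumed
W_DELAY, W_POWER = 1.0, 0.5

def cost(chrom):
    delay = power = 0.0
    for task, gene in zip(TASK_CYCLES, chrom):
        if gene == 0:                 # run on the edge device
            t = task / EDGE_SPEED
            delay += t
            power += EDGE_POWER * t
        else:                         # offload to the cloud
            delay += UPLINK_DELAY + task / CLOUD_SPEED
            power += TX_POWER * UPLINK_DELAY
    return W_DELAY * delay + W_POWER * power

def evolve(pop_size=40, generations=200, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_TASKS)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
print("allocation (0=edge, 1=cloud):", best, "cost:", round(cost(best), 2))
```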
54

Network Resource Management Using Multi-Agent Deep Reinforcement Learning / マルチエージェント深層強化学習によるネットワーク資源管理

Suzuki, Akito 25 September 2023 (has links)
Kyoto University / New-system course doctorate / Doctor of Informatics / 甲第24940号 / 情博第851号 / 新制||情||142(附属図書館) / Department of Communications and Computer Engineering, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Eiji Oki; Professor Hiroshi Harada; Professor Takayuki Ito / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
55

Computation Offloading for Real-Time Applications: Server Time Reservation for Periodic Tasks / Beräkningsavlastning för realtidsapplikationer

Tengana Hurtado, Lizzy January 2023 (has links)
Edge computing is a distributed computing paradigm where computing resources are located physically closer to the data source than in the traditional cloud computing paradigm. Edge computing enables computation offloading from resource-constrained devices to more powerful servers in the edge and cloud. To offer edge and cloud support to real-time industrial applications, the communication to the servers and the server-side computation need to be predictable. However, the predictability of offloading cannot be guaranteed in an environment where multiple devices are competing for the same edge and cloud resources, due to potential server-side scheduling conflicts. To the best of our knowledge, no offloading scheme has been proposed that provides highly predictable real-time task scheduling when multiple devices offload to a set of heterogeneous edge/cloud servers. Hence, this thesis approaches the problem of predictable offloading in real-time environments by proposing a centralized server time reservation system to schedule the offloading of real-time tasks to edge and cloud servers. Our reservation system allows end devices to request external execution time in advance for real-time tasks that will be generated in the future; when such a task is created, it already has a designated offloading server that guarantees its timely execution. Furthermore, this centralized reservation system is capable of optimizing the reservation scheduling strategy with the goal of minimizing the energy consumption of edge servers while meeting the stringent deadline constraints of real-time applications.
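The reservation system itself is not published in the abstract; the sketch below illustrates the core idea under stated assumptions: for each future release of a periodic task, a coordinator reserves, on whichever server can still meet the deadline, the slot with the lowest energy cost. The Server fields and all constants are hypothetical.

```python
# Illustrative sketch (not the thesis's system): reserve server time in
# advance for a periodic real-time task, choosing, per release, the feasible
# server slot with the lowest energy cost. All parameters are assumptions.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    speed: float                  # cycles per ms
    power: float                  # energy cost per ms busy
    busy: list = field(default_factory=list)  # reserved (start, end) windows

    def earliest_slot(self, release, duration):
        t = release
        for start, end in sorted(self.busy):
            if t + duration <= start:
                break
            t = max(t, end)
        return t

def reserve_periodic(servers, cycles, period, deadline, n_jobs):
    plan = []
    for k in range(n_jobs):
        release = k * period
        best = None
        for s in servers:
            dur = cycles / s.speed
            start = s.earliest_slot(release, dur)
            if start + dur <= release + deadline:   # deadline check
                energy = s.power * dur
                if best is None or energy < best[3]:
                    best = (s, start, dur, energy)
        if best is None:
            raise RuntimeError(f"job {k}: no feasible reservation")
        s, start, dur, energy = best
        s.busy.append((start, start + dur))
        plan.append((k, s.name, start, start + dur))
    return plan

servers = [Server("edge-1", speed=2.0, power=1.0),
           Server("cloud-1", speed=8.0, power=5.0)]
for job, name, start, end in reserve_periodic(servers, cycles=40, period=30,
                                              deadline=25, n_jobs=5):
    print(f"job {job}: {name} [{start:.1f}, {end:.1f}] ms")
```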
56

Offloading devices for the prevention of heel pressure ulcers: A realist evaluation

Greenwood, C., Nixon, J., Nelson, E.A., McGinnis, E., Randell, Rebecca 21 June 2023 (has links)
Heel pressure ulcers can cause pain, reduce mobility, lead to longer hospital stays, and in severe cases can lead to sepsis, amputation, and death. Offloading boots are marketed as heel pressure ulcer prevention devices, working by removing pressure from the heel, yet there is little good-quality evidence about their clinical effectiveness. Given that evidence is not guiding the use of these devices, this study aims to explore how, when, and why they are used in hospital settings: how offloading devices are used to prevent heel pressure ulcers, for whom, and in what circumstances. A realist evaluation was undertaken to explore the contexts, mechanisms, and outcomes that might influence how offloading devices are implemented and used in clinical practice for the prevention of heel pressure ulcers in hospitals. Eight Tissue Viability Nurse Specialists from across the UK (England, Wales, and Northern Ireland) were interviewed. Questions sought to elicit whether, and in what ways, initial theories about the use of offloading devices fitted with interviewees' experiences. Thirteen initial theories were refined into three programme theories about how offloading devices are used by nurses 'proactively' to prevent heel pressure ulcers, 'reactively' to treat and minimise deterioration of early-stage pressure ulcers, and about the patient factors that influence how these devices are used. Offloading devices were used in clinical practice by all the interviewees, who viewed them as neither suitable for every patient at every point in the inpatient journey nor financially viable for universal use. However, the interviewees thought that identifying suitable 'at risk' patient groups that can maintain use of the devices could lead to proactive and cost-effective use of the devices. This understanding of the contexts and mechanisms that influence the effective use of offloading devices has implications for clinical practice and for the design of clinical trials of offloading devices. / CG conducted this review as part of her PhD at the University of Leeds, which was funded by a Charitable Grant from Leeds Cares (https://leedscares.org/) / Leeds Hospitals Charity (https://www.leedshospitalscharity.org.uk/) and the Smith and Nephew Foundation.
57

An Integrated End-User Data Service for HPC Centers

Monti, Henry Matthew 16 January 2013 (has links)
The advent of extreme-scale computing systems, e.g., petaflop supercomputers, High Performance Computing (HPC) cyber-infrastructure, enterprise databases, and experimental facilities such as large-scale particle colliders, is pushing the envelope on dataset sizes.  Supercomputing centers routinely generate and consume ever-increasing amounts of data while executing high-throughput computing jobs. These are often result datasets or checkpoint snapshots from long-running simulations, but can also be input data from experimental facilities such as the Large Hadron Collider (LHC) or the Spallation Neutron Source (SNS). These growing datasets are often processed by a geographically dispersed user base across multiple different HPC installations.  Moreover, end-user workflows are increasingly distributed in nature, with massive input, output, and even intermediate data often being transported to and from several HPC resources or end users for further processing or visualization. The growing data demands of applications, coupled with the distributed nature of HPC workflows, have the potential to place significant strain on both the storage and network resources at HPC centers. Despite this potential impact, rather than stringently managing HPC center resources, a common practice is to leave application-associated data management to the end user, as the user is intimately aware of the application's workflow and data needs. This means end users must frequently interact with the local storage in HPC centers, the scratch space, which is used for job input, output, and intermediate data. Scratch is built using a parallel file system that supports very high aggregate I/O throughput, e.g., Lustre, PVFS, or GPFS. To ensure efficient I/O and faster job turnaround, use of scratch by applications is encouraged.  Consequently, job input and output data must be moved in and out of the scratch space by end users before and after the job runs, respectively. In practice, end users arbitrarily stage and offload data as and when they deem fit, without any consideration of the center's performance, often leaving data on the scratch long after it is needed. HPC centers resort to "purge" mechanisms that sweep the scratch space to remove files found to be no longer in use, based on their not having been accessed within a preselected time threshold, called the purge window, which commonly ranges from a few days to a week. This ad hoc data management ignores the interactions between different users' data storage and transmission demands, and their impact on center serviceability, leading to suboptimal use of precious center resources. To address the issues of exponentially increasing data sizes and ad hoc data management, we present a fresh perspective on scratch storage management by fundamentally rethinking the manner in which scratch space is employed. Our approach is twofold. First, we redesign the scratch system as a "cache" and build "retention", "population", and "eviction" policies that are tightly integrated from the start, rather than being add-on tools. Second, we aim to provide and integrate the necessary end-user data delivery services, i.e., timely offloading (eviction) and just-in-time staging (population), so that the center's scratch space usage can be optimized through coordinated data movement. Together, these two approaches create our Integrated End-User Data Service, wherein data transfer and placement on the scratch space are scheduled with job execution.
This strategy allows us to couple job scheduling with cache management, thereby bridging the gap between system software tools and scratch storage management. It enables the retention of only the relevant data, for only the duration it is needed. Redesigning the scratch as a cache captures the current HPC usage pattern more accurately and better equips the scratch storage system to serve the growing datasets of workloads. This is a fundamental paradigm shift in the way scratch space has been managed in HPC centers, and it goes well beyond providing simple purge tools to serve a caching workload. / Ph. D.
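As a rough sketch of the scratch-as-cache idea (not the dissertation's implementation), the toy class below retains files while a scheduled job still needs them and evicts, in LRU order, only files whose consuming jobs have finished; the metadata fields and capacity accounting are assumptions.

```python
# Minimal sketch (illustrative, not the dissertation's code) of treating
# scratch as a cache: files are retained while a scheduled job still needs
# them and evicted in LRU order once they have no future consumer.
import heapq
import time

class ScratchCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.files = {}  # path -> (size, last_access, needed_by_job_until)

    def stage_in(self, path, size, needed_until):
        """Population: stage a file in just in time for its job."""
        self._make_room(size)
        self.files[path] = (size, time.time(), needed_until)
        self.used += size

    def touch(self, path):
        size, _, needed = self.files[path]
        self.files[path] = (size, time.time(), needed)

    def _make_room(self, size):
        """Eviction: prefer files whose consuming jobs finished (LRU order)."""
        now = time.time()
        candidates = [(last, p) for p, (_, last, needed) in self.files.items()
                      if needed <= now]
        heapq.heapify(candidates)
        while self.used + size > self.capacity and candidates:
            _, victim = heapq.heappop(candidates)
            vsize, _, _ = self.files.pop(victim)
            self.used -= vsize   # offload/delete the victim
        if self.used + size > self.capacity:
            raise RuntimeError("scratch full: all resident files still needed")

cache = ScratchCache(capacity_bytes=100)
cache.stage_in("/scratch/job1/input.dat", 60, needed_until=time.time() - 1)
cache.stage_in("/scratch/job2/input.dat", 60, needed_until=time.time() + 3600)
print(sorted(cache.files))   # job1's input was evicted to make room
```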
58

Estudo de aplicação de ferramentas numéricas ao problema de ressonância de ondas na operação de alívio lado a lado. / Study of the application of numerical tools to the wave resonance problem in side-by-side offloading operations.

Dotta, Raul 30 March 2017 (has links)
This work presents a numerical study, based on previously conducted experimental tests, of the wave-field resonance problem in side-by-side offloading operations. Hydrodynamic interference effects can drastically change the wave field in confined regions, amplifying first-order motions and putting the operation at risk. This phenomenon is present in several areas of offshore exploration and production and has been a main object of study in recent years, especially in side-by-side offloading operations, where the proximity of the hulls raises serious concerns about mooring-line rupture, damage to the fenders, and collision. In this context, due to the complexity of the problem, the numerical modeling used to evaluate the resonance phenomenon in commercial software must be applied with caution: used directly, it generates erroneous amplifications of the resonant surface, since its resolution is based on potential-flow theory. The differences observed when comparing numerical and experimental tests are caused by neglecting the dissipation of part of the resonant wave energy through viscosity, vorticity, and flow-turbulence effects. To analyze this phenomenon correctly through numerical tests, one way is to include adaptations in the model to achieve the desired results. These adaptations consist of implementing artificial methods, such as "Generalized Modes" and "Numerical Damping Zones" (numerical beaches), applied to the region between the vessels in order to damp the unrealistic surface elevations. Thus, this study addresses the problem of gap wave resonance, investigating the performance of two numerical tools for its prediction: WAMIT (Wave Analysis Massachusetts Institute of Technology) and TDRPM (Time Domain Rankine Panel Method). The results are compared with data obtained from a set of small-scale tests previously performed at the Numerical Test Tank (TPN) laboratory at USP. The study of the resonance phenomena is therefore discussed mainly in its numerical aspect, in order to verify the performance of WAMIT and TDRPM.
59

Extending Polyhedral Techniques towards Parallel Specifications and Approximations / Extension des Techniques Polyedriques vers les Specifications Parallelles et les Approximations

Isoard, Alexandre 05 July 2016 (has links)
Polyhedral techniques enable the application of code analyses and transformations to multi-dimensional structures such as nested loops and arrays. They are usually restricted to sequential programs whose control is both affine and static. This thesis extends them to programs involving, for example, non-analyzable conditions or expressing parallelism. The first result is the extension of the analysis of live-ranges and memory conflicts, for scalars and arrays, to programs with parallel or approximated specifications. In previous work on memory allocation, for which this analysis is required, the concept of time provides a total order over the instructions, and the existence of this order is an implicit requirement. We show that it is possible to carry out such analyses on an arbitrary partial order that matches the parallelism of the studied program. The second result extends memory folding techniques, based on Euclidean lattices, to automatically find an appropriate basis from the set of memory conflicts. This set is often non-convex, a case that was inadequately handled by previous methods. The last result applies both previous analyses to "pipelined" blocking methods, especially in the case of parametric block sizes. This situation gives rise to non-affine control but can be processed accurately by the choice of suitable approximations. This paves the way for efficient kernel offloading to accelerators such as GPUs, FPGAs, or other dedicated circuits.
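The Euclidean-lattice folding itself is involved, but the underlying modulo-allocation idea can be shown in one dimension. In the hedged sketch below (a simplification, not the thesis's algorithm), the conflict set is an interval of iteration distances, so an array whose values stay live for at most k iterations folds safely to k cells addressed modulo k.

```python
# Simplified illustration of modulo memory folding (array contraction).
# The thesis's method finds a lattice basis for arbitrary, possibly
# non-convex conflict sets; here the conflict set is just an interval:
# cell i is written at iteration i and last read at iteration i + READ_DIST,
# so values at distance >= READ_DIST + 1 never conflict and the array
# folds to READ_DIST + 1 cells.
N = 20
READ_DIST = 2
FOLD = READ_DIST + 1            # folded array size

full = [0] * (N + READ_DIST)    # original allocation
folded = [0] * FOLD             # contracted allocation

def produce(i):
    return i * i                # arbitrary computed value

results_full, results_folded = [], []
for i in range(N):
    full[i] = produce(i)
    folded[i % FOLD] = produce(i)
    if i >= READ_DIST:
        # consume the value written READ_DIST iterations ago
        results_full.append(full[i - READ_DIST])
        results_folded.append(folded[(i - READ_DIST) % FOLD])

assert results_full == results_folded   # folding preserved every live value
print(f"array contracted from {N} to {FOLD} cells")
```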
60

Análise estrutural de mangotes de transferência utilizando materiais compósitos e poliméricos avançados / Structural analysis of offloading hoses using advanced composite and polymeric materials

Tonatto, Maikson Luiz Passaia January 2017 (has links)
Offloading hoses have been extensively used in offloading oil operations, especially in deep water, where variable static and cyclic loads arise from the working environment. Despite the great demand for these structures, their behavior is little known and little discussed in the literature due to its complexity. In addition, the materials used in this equipment may lead to a high number of failures and are often overestimated, leading to excessive weight. This work develops a methodology for the analysis of advanced polymeric materials, specifically polyaramid fibers and carbon-fiber composite materials, as substitutes for traditional materials, using numerical models able to predict the burst pressure of the carcasses and the radial compression strength of the hose. In addition, fatigue tests were performed to evaluate the polyaramid cords of these new structures. Meso-scale models were developed using hyperelastic and composite failure-criteria concepts to predict local stresses and strains in critical regions of the hose. Numerical analyses were performed using finite elements with commercial software to aid the development of the models and to carry out the numerical calculations. Several experimental tests were performed to validate the numerical models, as well as to predict the static and fatigue behavior of the materials used. Two models were developed. One model predicts the burst pressure of the hose under internal pressure, in order to evaluate the performance of the new polyaramid reinforcement cords. In the other model, a radial load is applied to the central section of the hose to predict the crushing strength, with the aim of evaluating the performance of the load-bearing component made of carbon-fiber composite material. The results of the numerical models showed good agreement with the experimental results in most analyses. The studied materials also offer considerable potential as substitutes for traditional materials, as well as excellent behavior under the static and dynamic loads involved in this application, with a significant weight reduction and increased performance of the new configurations over traditional hoses.
