  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Nové hydrogely na bázi polysacharidů pro regeneraci měkkých tkání: příprava a charakterizace / Novel hydrogels based on polysaccharides for soft tissue regeneration: preparation and characterization

Nedomová, Eva January 2015 (has links)
This master's thesis deals with the preparation, crosslinking, and physico-chemical characterization of polysaccharide-based hydrogels. The aim of the work was to develop elastic films that could be used for moist wound healing. The theoretical part summarizes current approaches to soft tissue regeneration and its substitutes (whether natural or synthetic materials). It also highlights basic information about natural polysaccharides (chemical structure, solubility, thermal and pH stability, etc.), their modification, and chemical crosslinking. The experimental part focuses on modifying natural gum Karaya so that the transparent hydrogels have tunable hydrolytic stability. The samples were analyzed by FTIR and TGA, followed by evaluation of swelling and hydrolytic degradation. The results showed that the chemical modification stabilized the elastic film made of the natural polysaccharide in water for up to 25 days. Thanks to controlled degradation and high water absorption (85–96%), these novel hydrogels are suitable above all for moist wound healing (e.g., of burns).
62

Optimalizace vlastností kolagenních pěn z rybího kolagenu pro medicínské a veterinární použití. / Fish collagen foam properties optimalization for medical and veterinary use.

Lukáč, Peter January 2021 (has links)
During the project, unique collagen foams were developed from collagen obtained from the skin of a freshwater fish (common carp, Cyprinus carpio). Carbodiimide crosslinking overcame the instability, at mammalian body temperature, of collagen matrices made from collagen sourced from cold-blooded animals. The foams were then impregnated with antibiotics (gentamicin and vancomycin) and lyophilized again, a procedure that ensures the required antibiotic concentration without the risk of subsequent washout in later processing steps. Unlike products made from non-crosslinked collagen, this product remains stable even under gamma-ray sterilization. The final sterilized product was tested in vivo in a rat model of an infected wound. Efficacy was demonstrated in the treatment of potentially lethal infection with Pseudomonas aeruginosa and with a methicillin-resistant strain of Staphylococcus aureus (MRSA). Given the high need for prophylaxis and therapy of infections of post-operative and other wounds caused by precisely these multi-resistant pathogens, this is a promising product for future clinical use. The experience gained during the study of antibiotic release from the collagen foams will be used in further development to impregnate the outer collagen layer of vascular prostheses, which could eliminate one of the greatest drawbacks and risks associated with the use of artificial materials, namely...
63

Integer-forcing architectures: cloud-radio access networks, time-variation and interference alignment

El Bakoury, Islam 04 June 2019 (has links)
Next-generation wireless communication systems will need to contend with many active mobile devices, each of which will require a very high data rate. To cope with this growing demand, network deployments are becoming denser, leading to higher interference between active users. Conventional architectures aim to mitigate this interference through careful design of signaling and scheduling protocols. Unfortunately, these methods become less effective as the device density increases. One promising option is to enable cellular basestations (i.e., cell towers) to jointly process their received signals for decoding users' data packets as well as to jointly encode the data packets they transmit to the users. This joint processing architecture is often enabled by a cloud radio access network that links the basestations to a central processing unit via dedicated connections. One of the main contributions of this thesis is a novel end-to-end communications architecture for cloud radio access networks as well as a detailed comparison to prior approaches, both via theoretical bounds and numerical simulations. Recent work has shown that the following high-level approach has numerous advantages: each basestation quantizes its observed signal and sends it to the central processing unit for decoding; the central processing unit in turn generates the signals for the basestations to transmit and sends them back in quantized form. This thesis follows an integer-forcing approach that uses the fact that, if codewords are drawn from a linear codebook, then their integer-linear combinations are themselves codewords. Overall, this architecture requires integer-forcing channel coding from the users to the central processing unit and back, which handles interference between the users' codewords, as well as integer-forcing source coding from the basestations to the central processing unit and back, which handles correlations between the basestations' analog signals.
Prior work on integer-forcing has proposed and analyzed channel coding strategies as well as a source coding strategy for the direction from the basestations to the central processing unit; this thesis proposes a source coding strategy for the other direction. Iterative algorithms are developed to optimize the parameters of the proposed architecture, which involve real-valued beamforming and equalization matrices and integer-valued coefficient matrices in a quadratic objective. Beyond the cloud radio setting, it is argued that the integer-forcing approach is a promising framework for interference alignment between multiple transmitter-receiver pairs. In this scenario, the goal is to align the interfering data streams so that, from the perspective of each receiver, there seems to be only a single effective interferer. Integer-forcing interference alignment accomplishes this objective by having each receiver recover two linear combinations that can then be solved for the desired signal and the sum of the interference. Finally, this thesis investigates the impact of channel coherence on the integer-forcing strategy via numerical simulations.
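The linear-codebook fact the abstract leans on can be made concrete with a toy example. The sketch below uses assumed parameters (a prime field Z_q and made-up codeword vectors, not the thesis' actual scheme): integer-linear combinations of codewords are again codewords, so a receiver that decodes two independent combinations can solve a small integer system for the codeword it wants.

```python
# Toy integer-forcing illustration over Z_q (q prime). All vectors and the
# coefficient matrix A are illustrative, not taken from the thesis.
q = 7

def combine(a, c1, b, c2):
    """Integer-linear combination a*c1 + b*c2 of two codewords, mod q."""
    return [(a * x + b * y) % q for x, y in zip(c1, c2)]

def recover(u, v, A):
    """Solve [u; v] = A @ [c1; c2] (mod q) for c1, given an invertible 2x2 A."""
    (a, b), (c, d) = A
    det = (a * d - b * c) % q
    det_inv = pow(det, -1, q)  # modular inverse; exists since q is prime
    return [((d * ui - b * vi) * det_inv) % q for ui, vi in zip(u, v)]

# Two users' codewords from the same (toy) linear codebook.
c1, c2 = [1, 3, 5, 0], [2, 2, 6, 4]
A = [[1, 1], [1, 2]]        # integer coefficients the receiver decodes
u = combine(1, c1, 1, c2)   # first decoded combination
v = combine(1, c1, 2, c2)   # second decoded combination
assert recover(u, v, A) == c1  # desired codeword recovered exactly
```

Because each decoded combination is itself a valid codeword, decoding it is no harder than decoding a single user, which is what makes the integer-forcing architecture attractive.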
64

Increasing energy efficiency of O-RAN through utilization of xApps

Borg, Fredrik January 2023 (has links)
As 5G becomes more established and faces widespread roll-out, the energy consumption of radio access networks around the globe will increase. Since the high-frequency radio waves used in 5G communication have a shorter effective range than the waves used in previous generations, 5G networks will require a higher number of radio units to compensate for their reduced range. Since the transmission of radio waves is a costly procedure in terms of energy consumption, this further increases the relevance of radio equipment when considering solutions for increasing radio access networks' energy efficiency. This thesis has therefore provided a possible solution for increasing the energy efficiency of an O-RAN compliant radio access network by decreasing the energy consumption of its radio units. If a network's radio units are capable of entering a low-power sleep mode whenever they are left idle, i.e. not handling any traffic, the energy efficiency of the network can be increased by forcing its radio units to enter sleep mode as often as possible. This can be done by offloading traffic from radio units with little traffic onto other nearby radio units. The handovers required to perform such offloading, however, need to be predicted on the fly somewhere within the network. The solution proposed within this thesis therefore utilizes a component indigenous to the O-RAN architecture, the RAN Intelligent Controller (RIC), and its functionality-defining xApps, which are capable of automatically detecting situations where radio units can be put to sleep as well as handling the offloading procedures. Through testing inside a simulated network, the set of xApps designed as a solution resulted in a potential 20-35% decrease in energy consumption among a radio access network's radio units.
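The offloading idea above can be sketched as a simple planning heuristic. Everything here is hypothetical (unit names, the load threshold, the greedy order); the thesis' actual xApp logic is not specified in the abstract.

```python
# Hypothetical sketch of the sleep-planning heuristic: move traffic off
# lightly loaded radio units onto neighbours with spare capacity, then put
# every emptied unit to sleep. Thresholds and names are illustrative.
def plan_sleep(loads, capacity=1.0, offload_threshold=0.2):
    """loads: dict unit_id -> load fraction. Returns (new_loads, sleeping_units)."""
    new = dict(loads)
    # Try to empty the most lightly loaded units first.
    for uid in sorted(new, key=new.get):
        if 0 < new[uid] <= offload_threshold:
            # Prefer the busiest neighbour that can still absorb the whole load.
            for tgt in sorted(new, key=new.get, reverse=True):
                if tgt != uid and new[tgt] + new[uid] <= capacity:
                    new[tgt] += new[uid]
                    new[uid] = 0.0
                    break
    asleep = {uid for uid, load in new.items() if load == 0.0}
    return new, asleep

loads = {"ru1": 0.1, "ru2": 0.6, "ru3": 0.0, "ru4": 0.15}
new_loads, asleep = plan_sleep(loads)
# ru1 and ru4 are offloaded onto ru2; ru1, ru3 and ru4 can sleep.
```

In the real system this decision would be made by an xApp on the RIC, with handover prediction replacing the instantaneous-load snapshot used here.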
65

ML-Based Optimization of Large-Scale Systems: Case Study in Smart Microgrids and 5G RAN

Zhou, Hao 10 August 2023 (has links)
The recent advances in machine learning (ML) have brought revolutionary changes to every field. Many novel applications, such as face recognition and natural language processing, have demonstrated the great potential of ML techniques. Indeed, ML can significantly enhance the intelligence of many existing systems, including the smart grid, wireless communications, mechanical engineering, and so on. For instance, a microgrid (MG), a distribution-level power system, can exchange energy with the main grid or work in islanded mode, which enables higher flexibility for the smart grid. However, it suffers from considerable management complexity, as it includes multiple entities such as renewable energy resources, energy storage systems (ESS), loads, etc. In addition, each entity may have unique observations and policies for making autonomous decisions. Similarly, 5G networks are designed to provide lower latency, higher throughput and reliability for a large number of user devices, but the evolving network architecture also leads to great complexity in network management. 5G network management must jointly consider various user types and network resources in a dynamic wireless environment. In addition, the integration of new techniques, such as reconfigurable intelligent surfaces (RISs), requires more efficient algorithms for network optimization. Consequently, intelligent management schemes are crucial for scheduling network resources. In this work, we aim to develop state-of-the-art ML techniques to improve the performance of large-scale systems. As case studies, we focus on MG energy management and 5G radio access network (RAN) management. Multi-agent reinforcement learning (MARL) is presumed to be an ideal solution for MG energy management, with each entity treated as an independent agent. We further investigate how communication failures affect MG energy trading by using Bayesian deep reinforcement learning (BA-DRL).
On the 5G side, we use MARL, transfer reinforcement learning (TRL), and hierarchical reinforcement learning (HRL) to improve network performance. In particular, we study the performance of those algorithms under various scenarios, including radio resource allocation for network slicing, joint radio and computation resource allocation for mobile edge computing (MEC), and joint radio and cache resource allocation for edge caching. Additionally, we further investigate how HRL can improve the energy efficiency (EE) of RIS-aided heterogeneous networks. The findings of this research highlight the capabilities of various ML techniques under different application domains. Firstly, different MG entities can be well coordinated by applying MARL, enabling intelligent decision-making for each agent. Secondly, Bayesian theory can be used to solve partially observable Markov decision process (POMDP) problems caused by communication failures in MARL. Thirdly, MARL is capable of balancing the heterogeneous requirements of different slices in 5G networks, guaranteeing satisfactory overall network performance. Then, we find that TRL can significantly improve the convergence performance of conventional reinforcement learning or deep reinforcement learning by transferring knowledge from experts to learners, which is demonstrated in a 5G network slicing case study. Finally, we find that long-term and short-term decisions are well coordinated by HRL, and the proposed cooperative hierarchical architecture achieves higher throughput and EE than conventional algorithms.
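All the RL variants named above (MARL, TRL, HRL) build on the same temporal-difference update. The sketch below is a deliberately minimal, single-state tabular version with a made-up reward table (the "environment" and all numbers are illustrative, not from the thesis): the agent learns which of three bandwidth levels to grant a slice.

```python
import random

# Minimal tabular Q-learning sketch. One state, three actions (bandwidth
# levels); the toy reward favours the middle level. Illustrative only.
ACTIONS = [0, 1, 2]
REWARD = {0: 0.2, 1: 1.0, 2: 0.5}  # hypothetical per-action reward

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # single-state Q-table
    for _ in range(episodes):
        # Epsilon-greedy action selection.
        a = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
        # Stateless TD update: move Q(a) toward the observed reward.
        q[a] += alpha * (REWARD[a] - q[a])
    return q

q = train()
best_action = max(q, key=q.get)  # converges to the highest-reward level
```

MARL runs one such learner per agent, TRL warm-starts the Q-table from an expert's, and HRL stacks a slow policy over a fast one; the inner update stays the same.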
66

On energy minimization of heterogeneous cloud radio access networks

Sigwele, Tshiamo, Pillai, Prashant, Hu, Yim Fun January 2016 (has links)
Next-generation 5G networks are the future of information networks and will experience tremendous growth in traffic. To meet such traffic demands, there is a necessity to increase the network capacity, which requires the deployment of ultra-dense heterogeneous base stations (BSs). Nevertheless, BSs are very expensive and consume a significant amount of energy. Meanwhile, the cloud radio access network (C-RAN) has been proposed as an energy-efficient architecture that leverages cloud computing technology, with baseband processing performed in the cloud. In addition, BS sleeping is considered a promising solution for conserving network energy. This paper integrates the cloud technology and the BS sleeping approach. It also proposes an energy-efficient scheme for reducing energy consumption by switching off remote radio heads (RRHs) and idle BBUs using greedy and first-fit decreasing (FFD) bin-packing algorithms, respectively. The numbers of RRHs and BBUs are minimized by matching the right amount of baseband computing load with the traffic load. Simulation results demonstrate that the proposed scheme achieves enhanced energy performance compared to the existing distributed long term evolution advanced (LTE-A) system.
67

Tracing Control with Linux Tracing Toolkit, next generation in a Containerized Environment

Ravi, Vikhram January 2021 (has links)
5G is becoming reality, with companies rolling out the technology around the world. In 5G, the Radio Access Network (RAN) is moving from a monolithic architecture to a cloud-based microservice architecture in order to simplify deployment and manageability, and to explore scalability and flexibility. Thus, the transition of functionality from a proprietary hardware-based system to a more distributed and flexible virtualized system is ongoing. In such systems, performance monitoring remains relevant, and system tracing plays an important role in it. System tracing is important for the performance analysis of any given system. However, current tools were designed with monolithic architectures in mind; therefore, new tracing tools need to be developed for the new distributed architectures. System tracing often requires special permissions when executed in applications running in a virtualized third-party environment. Unfortunately, not all applications running in a distributed virtualized environment can be given such special access, at the risk of compromising the security and stability of the system. However, tracing data also needs to be collected from applications running in such environments. This thesis addresses the challenge of remotely configuring and controlling a system tracing tool, using the example of LTTng, in applications that run as part of a distributed virtualized environment with Kubernetes. We explore the problem of remotely controlling and configuring system tracing as well as optimizing data collection. The main outcome is a tool able to remotely control and configure system tracing tools. In addition, a proof-of-concept is presented with working demos for basic system tracing commands. It was discovered that a relay-based solution can be exposed outside the cluster via a node-port, which can relay incoming requests onwards to any number of microservices. 
However, discovery of the microservices that are running system tracing tools is critical. Service discovery mechanisms were therefore introduced into the system for the purpose of discovering microservices with system tracing tools. Tracing data that is saved locally can be extracted by the user through the relay-based solution or sent directly to any remote system using the LTTng relay daemon functionality. The response times of executing commands directly in a bash shell and of using the remote CLI were compared. It was concluded that, overall, the response time of both Linux and LTTng commands sent through the remote CLI is 1.96 times longer than that of executing the same commands directly in a bash shell. This was attributed to the fact that the commands are sent over the network within the Kubernetes cluster, which is the cost of being able to remotely control and configure system tracing tools. This being said, there are still many steps that can be taken to improve the solution and to develop a more production-ready solution.
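One piece of such a relay that can be sketched without inventing infrastructure is the command-validation step: the relay accepts a requested LTTng subcommand, checks it against a whitelist, and builds the argv to forward to the target pod. The whitelist, function names, and the idea of building (rather than executing) the command here are assumptions for illustration; only the `lttng` subcommand names themselves (`create`, `enable-event`, `start`, `stop`, `destroy`) are real LTTng CLI verbs.

```python
# Hypothetical relay-side validation: only whitelisted LTTng subcommands are
# forwarded to the microservice running the tracer. Forwarding itself
# (node-port, service discovery) is elided.
ALLOWED = {"create", "enable-event", "start", "stop", "destroy"}

def build_command(subcommand, args=()):
    """Return the argv to forward to the target pod, or raise on bad input."""
    if subcommand not in ALLOWED:
        raise ValueError(f"subcommand not allowed: {subcommand}")
    return ["lttng", subcommand, *args]

# e.g. enable a kernel event on the remote tracer:
cmd = build_command("enable-event", ["-k", "sched_switch"])
```

Keeping the whitelist on the relay means the traced pods never expose a general shell, which is exactly the special-access problem the thesis describes.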
68

Reinforcement Learning Based Resource Allocation for Network Slicing in O-RAN

Cheng, Nien Fang 06 July 2023 (has links)
Fifth Generation (5G) introduces technologies that expedite the adoption of mobile networks, such as densely connected devices, ultra-fast data rates, low latency and more. With these visions for 5G, and for 6G as the next step, the demand for higher transmission rates and lower latency keeps growing, possibly outpacing Moore's law. With Artificial Intelligence (AI) techniques maturing over the past decade, optimizing resource allocation in the network has become a highly demanding problem for Mobile Network Operators (MNOs) seeking to provide better Quality of Service (QoS) at lower cost. This thesis proposes a Reinforcement Learning (RL) solution for bandwidth allocation with network slicing integration in the disaggregated Open Radio Access Network (O-RAN) architecture. O-RAN redefines traditional Radio Access Network (RAN) elements into smaller components with detailed functional specifications. The concept of open modularization leads to greater potential for managing the resources of different network slices. In 5G mobile networks, there are three major types of network slices: Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC). Each network slice has different features in the 5G network; therefore, the resources can be reallocated according to different needs. The virtualization of O-RAN divides the RAN into smaller function groups. This helps the network slices to further subdivide the shared resources. Compared to traditional sequential signal processing, allocating dedicated resources for each network slice can improve the performance of each individually. In addition, shared resources can be customized statically based on the feature requirements of each slice. To further enhance bandwidth utilization in the disaggregated O-RAN, an RL algorithm is proposed in this thesis for allocating the midhaul bandwidth shared between the Centralized Unit (CU) and the Distributed Units (DUs). 
A Python-based simulator considering several types of mobile User Equipment (UEs) has been implemented for this thesis. The simulator is later integrated with the proposed Q-learning model. The RL model finds the optimal bandwidth allocation in the midhaul between the Edge Open Clouds (O-Clouds, hosting the DUs) and the Regional O-Cloud (hosting the CU). The results show up to 50% improvement in the throughput of the targeted slice, fairness to other slices, and overall bandwidth utilization on the O-Clouds. In addition, UE QoS improves significantly in terms of transmission time.
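Since the results above weigh both the targeted slice's throughput and fairness to the other slices, a natural reward shape for such a Q-learning agent combines the two. The sketch below is an assumption for illustration (the weighting, and the use of Jain's fairness index, are not stated in the abstract):

```python
# Illustrative reward mixing target-slice throughput with Jain's fairness
# index, so boosting one slice is not rewarded at the others' expense.
def jain_fairness(throughputs):
    """Jain's index: 1.0 when all slices get equal throughput, 1/n at worst."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(t * t for t in throughputs))

def reward(throughputs, target_idx, w=0.5):
    """Weighted mix of normalised target-slice throughput and fairness."""
    norm_target = throughputs[target_idx] / max(throughputs)
    return w * norm_target + (1 - w) * jain_fairness(throughputs)

# Three slices (e.g. eMBB, URLLC, mMTC) with toy throughputs in Mbps:
r = reward([40.0, 30.0, 30.0], target_idx=0)
```

An agent maximising this reward is pushed toward allocations that raise the target slice without starving the rest, matching the fairness result reported above.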
69

Two sides of the plant nuclear pore complex and a potential link between Ran GTPase and plant cell division

Xu, Xianfeng 21 September 2007 (has links)
No description available.
70

Interplay between capacity and energy consumption in C-RAN transport network design

Wang, Huajun January 2016 (has links)
Current mobile network architecture is facing a big challenge, as traffic demands have been increasing dramatically in recent years. Explosive mobile data demands are driving significant growth in the energy consumption of mobile networks, as well as in their cost and carbon footprint [1]. In 2010, the China Mobile Research Institute proposed the Cloud Radio Access Network (C-RAN) [2], which has been regarded as one of the most promising architectures for solving the operators' challenge. In C-RAN, the baseband units (BBUs) are decoupled from the remote radio heads (RRHs) and centralized in one or more locations. The feasibility of implementing very tight radio coordination schemes and of sharing baseband processing and cooling system resources constitutes the two main advantages of C-RAN compared to traditional RAN. More importantly, mobile operators can quickly deploy RRHs to expand and upgrade their networks. Therefore, C-RAN has been advocated by both operators and equipment vendors as a means to achieve the significant performance gains required for 5G [3]. However, one of the biggest barriers to the deployment of C-RAN is that the novel architecture imposes a very high capacity requirement on the transport network between the RRHs and BBUs, which is called the fronthaul network. With the implementation of a 5G wireless system using advanced multi-antenna transmission (MIMO), the capacity requirement would rise further, as would the power consumption. One solution that has been proposed to address this problem is to divide the baseband functions, with some remaining at the RRHs while the others are centralized in the BBU pool. Different splitting solutions have been proposed in [4], [5] and [6]. In this thesis work, we choose four different splitting solutions to build four C-RAN architecture models. 
Under one specific scenario with a fixed number of LTE base stations, we calculate the transport capacity requirement for the fronthaul and adopt three different fronthaul technologies. The power consumption is calculated by adding up the power used by the RRHs, the fronthaul network, and baseband processing. Comparing the numerical results, splits 1 and 2 show the best results, while split 2 is more practical for dense cell areas, since split 1 requires large fronthaul capacity. The fronthaul transport technology can be chosen according to the density of base stations. TWDM-PON shows better energy performance as a fronthaul network than EPON when the capacity requirement is high. However, for larger numbers of BSs, mm-Wave fronthaul is a better solution in terms of energy efficiency, fiber saving and flexibility.
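The power-accounting step described above (summing RRH, fronthaul, and baseband-processing power) can be sketched as a one-line model. All coefficients below are illustrative placeholders, not the thesis' measured values; the point is only the structure of the comparison between functional splits.

```python
# Minimal C-RAN power model: total power is the sum of RRH power, fronthaul
# link power, and active-BBU power. All numbers are illustrative.
def total_power(n_rrh, p_rrh, n_links, p_link, n_bbu, p_bbu):
    """Total power (W) = RRHs + fronthaul links + active BBUs in the pool."""
    return n_rrh * p_rrh + n_links * p_link + n_bbu * p_bbu

# Hypothetical trade-off between two functional splits: fuller centralisation
# needs costlier fronthaul links but fewer active (pooled) BBUs.
split_a = total_power(n_rrh=10, p_rrh=20.0, n_links=10, p_link=8.0,
                      n_bbu=3, p_bbu=50.0)   # more centralised
split_b = total_power(n_rrh=10, p_rrh=20.0, n_links=10, p_link=4.0,
                      n_bbu=5, p_bbu=50.0)   # more distributed
```

With these toy numbers the centralised split wins (430 W vs 490 W); in the thesis the same comparison is driven by measured per-split fronthaul capacity requirements rather than fixed coefficients.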
