1

Efficient, Reliable and Secure Content Delivery

Lin, Yin January 2014 (has links)
Delivering content of interest to clients is one of the most important tasks of the Internet and a long-standing research question in networking. Content distribution networks (CDNs) emerged in response to content providers' rising demand to deliver content to clients efficiently, reliably, and securely at relatively low cost. This dissertation explores how CDNs can achieve major performance benefits by adopting better caching strategies without changing the network, or by collaborating with ISPs and taking advantage of their better knowledge of network status and topology. It discusses the emerging trend of hybrid CDN architectures and solutions to the reliability problems they introduce. Finally, it demonstrates how CDNs could better protect both content providers and consumers from attacks and other malicious behaviors. / Dissertation
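As an illustration of the kind of caching-strategy comparison this abstract refers to, the sketch below simulates two classic eviction policies, LRU and LFU, on a synthetic skewed request trace. It is a generic example with made-up parameters, not the caching strategies studied in the dissertation.

```python
# Minimal sketch (illustrative only): comparing LRU and LFU cache eviction
# on a synthetic skewed request trace.
import random
from collections import OrderedDict, defaultdict

def simulate(policy, trace, capacity):
    cache, freq, hits = OrderedDict(), defaultdict(int), 0
    for item in trace:
        freq[item] += 1
        if item in cache:
            hits += 1
            cache.move_to_end(item)                 # refresh recency
        else:
            if len(cache) >= capacity:
                if policy == "lru":
                    cache.popitem(last=False)       # evict least recently used
                else:
                    victim = min(cache, key=lambda k: freq[k])  # evict least frequently used
                    del cache[victim]
            cache[item] = True
    return hits / len(trace)

random.seed(0)
# Skewed popularity: a few objects account for most requests.
trace = [min(int(random.paretovariate(1.2)), 500) for _ in range(20000)]
for policy in ("lru", "lfu"):
    print(policy, "hit rate:", round(simulate(policy, trace, capacity=100), 3))
```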
2

Livraison de contenus sur un réseau hybride satellite / terrestre / Content delivery on a hybrid satellite / terrestrial network

Bouttier, Elie Bernard 05 July 2018 (has links)
The growth and intensification of Internet usage make it necessary to improve existing networks. However, there are strong inequalities between urban areas, which are well served and concentrate most of the investment, and rural areas, which are underserved and neglected. Faced with this situation, users in underserved areas are turning to other means of access, in particular satellite Internet access. The latter, however, suffers from one limitation: the long delay induced by the signal propagation time between the Earth and the geostationary orbit. In this thesis, we are interested in the simultaneous use of a terrestrial access network, characterized by low throughput and low delay, and a satellite access network, characterized by high latency and higher throughput. In addition, content delivery networks (CDNs), consisting of a large number of cache servers, provide an answer to growing traffic and to requirements in terms of latency and throughput. However, being located in core networks, cache servers remain far from end users and do not reach the access networks. Internet service providers (ISPs) have therefore taken an interest in deploying such servers within their own networks, which are then referred to as TelCo CDNs. Content delivery ideally requires the interconnection of CDN operators with the TelCo CDNs, allowing delivery to be delegated to the latter, which are then able to optimize content delivery over networks of which they have better knowledge. We therefore study the optimization of content delivery over a hybrid satellite / terrestrial network integrated into a CDN delivery chain. We first describe an architecture that, through CDN interconnection, handles content delivery over the hybrid network. We then study the value of the information provided by the CDN context for routing over such an architecture, and propose a routing mechanism based on content size. Finally, we show the superiority of our approach over the multipath transport protocol MP-TCP.
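To make the size-based path-selection idea concrete, here is a minimal sketch that chooses between a low-delay terrestrial link and a high-throughput geostationary satellite link by estimated completion time. The link parameters (RTT and throughput values) are illustrative assumptions, not figures from the thesis.

```python
# Illustrative size-based path selection between a low-delay/low-throughput
# terrestrial link and a high-delay/high-throughput GEO satellite link.
# Link parameters are assumptions for illustration only.

TERRESTRIAL = {"rtt_s": 0.04, "throughput_bps": 2e6}    # e.g. slow DSL
SATELLITE   = {"rtt_s": 0.60, "throughput_bps": 20e6}   # geostationary satellite

def transfer_time(size_bytes, link):
    """Rough completion time: one round trip plus serialization at link throughput."""
    return link["rtt_s"] + 8 * size_bytes / link["throughput_bps"]

def choose_path(size_bytes):
    """Small objects favour the low-delay link, large ones the high-throughput link."""
    t_ter = transfer_time(size_bytes, TERRESTRIAL)
    t_sat = transfer_time(size_bytes, SATELLITE)
    return ("terrestrial", t_ter) if t_ter <= t_sat else ("satellite", t_sat)

for size in (10_000, 100_000, 1_000_000, 50_000_000):    # 10 kB to 50 MB
    path, t = choose_path(size)
    print(f"{size:>10} bytes -> {path:11s} ({t:.2f} s)")
```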
3

Evolution of public transport network design due to the arrival of autonomous buses

Yudo Purnomo, Robby January 2020 (has links)
There is rapid development in the transportation field. Soon, along with rapid population growth, there will be a change in mobility patterns. To prepare for this changing travel demand, several new technologies are under development, such as vehicle electrification, micro-mobility services, and vehicle automation. The latter is the main focus of this research. The main objective is to develop a new model that provides a tool for analyzing the public transport system under different scenarios related to the degree of development of automated vehicles and to the characteristics of the service area and demand. The network design in this research is the hybrid concept developed by Carlos Daganzo in 2010, which combines a grid network in the central area with a radial network in the peripheral area. In the central area there are two intersecting public transit modes (bus and metro), while in the peripheral area a feeder bus serves passengers, with two feeder alternatives: fixed route and door-to-door. The objective of the optimization is to minimize the total cost with respect to the available decision variables. The total cost consists of agency cost (infrastructure length, total vehicle distance travelled, total vehicle hours travelled) and user cost (waiting time, access time, in-vehicle time), and the minimization is subject to constraints on headway, spacing, and vehicle capacity. Based on the base optimization, the optimum values of alpha, bus spacing, metro spacing, and inner area length with respect to the total cost are 0.23, 0.2 km, 4 km, and 0.3 km respectively, while the Fixed Route Feeder Service with Full Automation is the most beneficial type of service: it yields the lowest total cost per passenger for every decision variable except feeder spacing, owing to the different formulations of the fixed-route and door-to-door services. In contrast, the Door to Door Feeder Service with No Automation has the highest total cost per passenger. The reported total costs are based on the optimum value of each decision variable. There is no optimum value for headway, since the trend of the total cost in headway is linear.
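The optimization pattern described here (minimizing agency plus user cost over decision variables under headway, spacing, and capacity constraints) can be sketched as a simple grid search. The cost terms, coefficients, and the capacity constraint below are placeholders, not Daganzo's hybrid-network formulation or the thesis model.

```python
# Sketch of the optimization pattern: minimize total cost (agency + user cost)
# over a grid of decision variables under a simple feasibility constraint.
# All coefficients are placeholders for illustration only.
import itertools

DEMAND = 1000.0   # passengers/hour (illustrative)

def total_cost(bus_spacing_km, headway_min):
    # Agency cost grows as spacing shrinks (more infrastructure and vehicle-km)
    # and as headway shrinks (more vehicle-hours).
    agency = 50.0 / bus_spacing_km + 200.0 / headway_min
    # User cost grows with spacing (access time) and headway (waiting time).
    user = DEMAND * (0.5 * bus_spacing_km + 0.01 * headway_min)
    return agency + user

best = None
for spacing, headway in itertools.product(
        [0.2, 0.4, 0.6, 0.8, 1.0],     # km between stops/lines
        [2, 4, 6, 8, 10]):             # minutes between vehicles
    if DEMAND * headway / 60 > 150:    # illustrative vehicle-capacity constraint
        continue
    cost = total_cost(spacing, headway)
    if best is None or cost < best[0]:
        best = (cost, spacing, headway)

print("minimum total cost %.1f at spacing=%.1f km, headway=%d min" % best)
```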
4

Load balancing in hybrid LiFi and RF networks

Wang, Yunlu January 2018 (has links)
The increasing number of mobile devices challenges current radio frequency (RF) networks. The conventional RF spectrum for wireless communications is saturating, motivating the development of other, so far unexplored frequency bands. Light Fidelity (LiFi), which uses more than 300 THz of the visible light spectrum for high-speed wireless communications, is considered a promising complementary technology to its RF counterpart. LiFi enables everyday lighting infrastructure, i.e. light emitting diode (LED) lamps, to realise data transmission while maintaining the lighting functionality at the same time. Since LiFi mainly relies on line-of-sight (LoS) transmission, users in indoor environments may experience blockages which significantly affect their quality of service (QoS). Therefore, hybrid LiFi and RF networks (HLRNs), where LiFi supports high data rate transmission and RF offers reliable connectivity, can provide a potential solution for future indoor wireless communications. In HLRNs, efficient load balancing (LB) schemes are critical to improving traffic performance and network utilisation. In this thesis, an optimisation-based scheme (OBS) and an evolutionary game theory (EGT) based scheme (EGTBS) are proposed for load balancing in HLRNs. Specifically, within OBS, two algorithms are proposed: the joint optimisation algorithm (JOA) and the separate optimisation algorithm (SOA). Analysis and simulation results show that JOA achieves the optimal performance in terms of user data rate but requires high computational complexity, while SOA reduces the computational complexity at the cost of lower user data rates. EGTBS is able to achieve a better performance/complexity trade-off than OBS and other conventional load balancing schemes. In addition, the effects of handover, blockages, orientation of LiFi receivers, and user data rate requirements on the throughput of HLRNs are investigated. Moreover, packet latency in HLRNs is also studied in this thesis. The notion of LiFi service ratio is introduced, defined as the proportion of users served by LiFi in HLRNs. The optimal LiFi service ratio that minimises system delay is mathematically derived, and a low-complexity packet flow assignment scheme based on this optimum ratio is proposed. Simulation results show that the theoretical optimum of the LiFi service ratio is very close to the practical solution. Also, the proposed packet flow assignment scheme can reduce packet delay by up to 90% compared to conventional load balancing schemes, at reduced computational complexity.
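A minimal sketch of the LiFi service ratio idea follows: the fraction of traffic assigned to LiFi is swept to find the ratio with the lowest mean delay. Modelling each access point as an M/M/1 queue and the service rates used are assumptions of this sketch, not the thesis model.

```python
# Sketch of the "LiFi service ratio" idea: split an aggregate packet arrival rate
# between a fast LiFi AP and a slower RF AP (both modelled here as M/M/1 queues,
# an assumption of this sketch) and search for the ratio minimizing mean delay.

LIFI_RATE = 400.0   # packets/s the LiFi AP can serve (illustrative)
RF_RATE   = 100.0   # packets/s the RF AP can serve (illustrative)
ARRIVALS  = 300.0   # total packet arrival rate (illustrative)

def mean_delay(ratio):
    """Traffic-weighted mean M/M/1 sojourn time when `ratio` of traffic goes to LiFi."""
    lam_lifi, lam_rf = ratio * ARRIVALS, (1 - ratio) * ARRIVALS
    if lam_lifi >= LIFI_RATE or lam_rf >= RF_RATE:
        return float("inf")                       # unstable queue
    d_lifi = 1.0 / (LIFI_RATE - lam_lifi)
    d_rf   = 1.0 / (RF_RATE - lam_rf)
    return ratio * d_lifi + (1 - ratio) * d_rf

best = min((mean_delay(r / 1000.0), r / 1000.0) for r in range(1001))
print("best LiFi service ratio %.3f, mean delay %.4f s" % (best[1], best[0]))
```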
5

Dynamische Neuronale Netzarchitektur für Kontinuierliches Lernen / Dynamic neural network architecture for continuous learning

Tagscherer, Michael 23 August 2001 (has links) (PDF)
One of the main requirements for an optimal industrial control system is the availability of a precise model of the process, e.g. of a steel rolling mill. If no model or no analytical description of such a process is available, a sufficient model has to be derived from observations, i.e. by system identification. While nonlinear function approximation is a well-known application for neural networks, the approximation of nonlinear functions that change over time poses many additional problems, which have been the focus of this research. The time variance, caused for example by aging or wear, requires continuous adaptation to process changes throughout the lifetime of the system, referred to here as continuous learning; building a model once, over a limited period of time, is not sufficient. Based on the analysis of different neural network approaches, the novel incremental construction algorithm ICE for continuous learning tasks has been developed. One of the main advantages of the ICE algorithm is that the number of RBF neurons and the number of local models of the hybrid network do not have to be determined in advance, which is an important feature for fast initial learning; the evolved network is automatically adapted to the time-variant target function. Another advantage of the ICE algorithm is the ability to simultaneously learn the target function and a confidence value for the network output. Finally, a special version of the ICE algorithm with asymmetric receptive fields is introduced, drawing parallels to fuzzy logic: the goal is to automatically derive rules which describe the learned model of the unknown process. In general, a neural network is a "black box"; an ICE network, in contrast, is more transparent.
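The self-constructing behaviour described above ("grow the topology when needed, adapt it as the target drifts") can be illustrated with a generic incremental RBF sketch. This is not the ICE algorithm itself; the growth rule, kernel width, and learning rate below are illustrative assumptions.

```python
# Generic incremental RBF sketch: add a neuron when the prediction error is large,
# otherwise adapt the existing weights online. Illustrative only, not the ICE algorithm.
import math

class IncrementalRBF:
    def __init__(self, width=0.3, grow_threshold=0.2, lr=0.2):
        self.centers, self.weights = [], []
        self.width, self.grow_threshold, self.lr = width, grow_threshold, lr

    def _phi(self, x):
        return [math.exp(-((x - c) ** 2) / (2 * self.width ** 2)) for c in self.centers]

    def predict(self, x):
        return sum(w * p for w, p in zip(self.weights, self._phi(x)))

    def update(self, x, y):
        err = y - self.predict(x)
        if abs(err) > self.grow_threshold:     # error too large: grow a neuron at x
            self.centers.append(x)
            self.weights.append(err)
        else:                                  # otherwise adapt weights online
            for i, p in enumerate(self._phi(x)):
                self.weights[i] += self.lr * err * p

net = IncrementalRBF()
# Online learning of a slowly drifting target, the kind of time-variant task discussed above.
for t in range(2000):
    x = (t % 100) / 100.0
    y = math.sin(2 * math.pi * x) + 0.001 * t      # drifting target function
    net.update(x, y)
print("neurons grown:", len(net.centers))
print("prediction at x=0.25:", round(net.predict(0.25), 3))
```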
6

Numerical methods for computationally efficient and accurate blood flow simulations in complex vascular networks: Application to cerebral blood flow

Ghitti, Beatrice 04 May 2023 (has links)
It is currently a well-established fact that the dynamics of interacting fluid compartments of the central nervous system (CNS) may play a role in the CNS fluid physiology and pathology of a number of neurological disorders, including neurodegenerative diseases associated with accumulation of waste products in the brain. However, the mechanisms and routes of waste clearance from the brain are still unclear. One of the main components of this interacting cerebral fluid dynamics is blood flow. In the last decades, mathematical modeling and fluid dynamics simulations have become a valuable complementary tool to experimental approaches, contributing to a deeper understanding of the circulatory physiology and pathology. However, modeling blood flow in the brain remains a challenging and demanding task, due to the high complexity of cerebral vascular networks and the difficulties that consequently arise in describing and reproducing the blood flow dynamics in these vascular districts. The first part of this work is devoted to the development of efficient numerical strategies for blood flow simulations in complex vascular networks. In cardiovascular modeling, one-dimensional (1D) and lumped-parameter (0D) models of blood flow are nowadays well-established tools to predict flow patterns, pressure wave propagation and average velocities in vascular networks, with a good balance between accuracy and computational cost. Still, purely 1D modeling of blood flow in complex and large networks can result in computationally expensive simulations, posing the need for extremely efficient numerical methods and solvers. To address these issues, we develop a novel modeling and computational framework to construct hybrid networks of coupled 1D and 0D vessels and to perform computationally efficient and accurate blood flow simulations in such networks. Starting from a 1D model and a family of nonlinear 0D models for blood flow, with either elastic or viscoelastic tube laws, this methodology is based on (i) suitable coupling equations ensuring conservation principles; (ii) efficient numerical methods and numerical coupling strategies to solve 1D, 0D and hybrid junctions of vessels; (iii) model selection criteria to construct hybrid networks, which provide a good trade-off between accuracy in the predicted results and computational cost of the simulations. By applying the proposed hybrid network solver to very complex and large vascular networks, we show how this methodology becomes crucial to gain computational efficiency when solving networks and models where the heterogeneity of spatial and/or temporal scales is relevant, while still ensuring a good level of accuracy in the predicted results. Hence, the proposed hybrid network methodology represents a first step towards a high-performance modeling and computational framework to solve highly complex networks of 1D-0D vessels, where the complexity depends not only on the anatomical detail by which a network is described, but also on the level at which physiological mechanisms and mechanical characteristics of the cardiovascular system are modeled. Then, in the second part of the thesis, we focus on the modeling and simulation of cerebral blood flow, with emphasis on the venous side.
We develop a methodology that, starting from the high-resolution MRI data obtained with a novel in-vivo microvascular imaging technique of the human brain, allows the reconstruction of detailed subject-specific cerebral networks of specific vascular districts that are suitable for performing blood flow simulations. First, we extract segmentations of cerebral districts of interest in such a way that the arterio-venous separation is addressed and the continuity and connectivity of the vascular structures are ensured. Equipped with these segmentations, we propose an algorithm to extract a network of vessels with the properties necessary to perform blood flow simulations. Here, we focus on the reconstruction of detailed venous vascular networks, given that the anatomy and patho-physiology of the venous circulation are of great interest from both clinical and modeling points of view. Then, after calibration and parametrization of the MRI-reconstructed venous networks, blood flow simulations are performed to validate the proposed methodology and assess the ability of such networks to predict physiologically reasonable results in the corresponding vascular territories. From the results obtained we conclude that this work is a proof-of-concept study demonstrating that it is possible to extract subject-specific cerebral networks from the novel high-resolution MRI data employed, laying the basis for an effective processing pipeline for detailed blood flow simulations from subject-specific data, to explore and quantify cerebral blood flow dynamics with a focus on venous blood drainage.
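As a minimal illustration of the lumped-parameter (0D) building block mentioned here, the sketch below integrates a two-element Windkessel compartment driven by a pulsatile inflow with explicit Euler. Parameter values and the inflow waveform are illustrative assumptions, not the 0D models or coupling strategies of the thesis.

```python
# Minimal 0D (lumped-parameter) vessel compartment: a two-element Windkessel
# (resistance R, compliance C) driven by a pulsatile inflow, integrated with
# explicit Euler. Parameters are illustrative only.
import math

R = 1.0        # mmHg*s/mL, downstream resistance
C = 1.2        # mL/mmHg, compartment compliance
DT = 1e-3      # s, time step
T_CYCLE = 0.8  # s, cardiac cycle length

def inflow(t):
    """Half-sine systolic inflow, zero in diastole (illustrative waveform)."""
    phase = t % T_CYCLE
    return 400.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0

p = 80.0                                      # mmHg, initial pressure
for step in range(int(5 * T_CYCLE / DT)):     # simulate five cardiac cycles
    t = step * DT
    # Mass balance of the compartment: C dp/dt = Q_in(t) - p/R
    dpdt = (inflow(t) - p / R) / C
    p += DT * dpdt
print("pressure after 5 cycles: %.1f mmHg" % p)
```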
7

Dynamische Neuronale Netzarchitektur für Kontinuierliches Lernen / Dynamic neural network architecture for continuous learning

Tagscherer, Michael 01 May 2001 (has links)
One of the main requirements for an optimal industrial control system is the availability of a precise model of the process, e.g. of a steel rolling mill. If no model or no analytical description of such a process is available, a sufficient model has to be derived from observations, i.e. by system identification. While nonlinear function approximation is a well-known application for neural networks, the approximation of nonlinear functions that change over time poses many additional problems, which have been the focus of this research. The time variance, caused for example by aging or wear, requires continuous adaptation to process changes throughout the lifetime of the system, referred to here as continuous learning; building a model once, over a limited period of time, is not sufficient. Based on the analysis of different neural network approaches, the novel incremental construction algorithm ICE for continuous learning tasks has been developed. One of the main advantages of the ICE algorithm is that the number of RBF neurons and the number of local models of the hybrid network do not have to be determined in advance, which is an important feature for fast initial learning; the evolved network is automatically adapted to the time-variant target function. Another advantage of the ICE algorithm is the ability to simultaneously learn the target function and a confidence value for the network output. Finally, a special version of the ICE algorithm with asymmetric receptive fields is introduced, drawing parallels to fuzzy logic: the goal is to automatically derive rules which describe the learned model of the unknown process. In general, a neural network is a "black box"; an ICE network, in contrast, is more transparent.
