41

A dual approximation framework for dynamic network analysis: congestion pricing, traffic assignment calibration and network design problem

Lin, Dung-Ying 10 November 2009 (has links)
Dynamic Traffic Assignment (DTA) is gaining wider acceptance among agencies and practitioners because it serves as a more realistic representation of real-world traffic phenomena than static traffic assignment. Many metropolitan planning organizations and transportation departments are beginning to utilize DTA to predict traffic flows within their networks when conducting traffic analysis or evaluating management measures. To analyze DTA-based optimization applications, it is critical to obtain the dual (or gradient) information, as dual information can typically be employed as a search direction in algorithmic design. However, only a limited number of approaches can be used to estimate network-wide dual information while maintaining the potential to scale. This dissertation investigates the theoretical and practical aspects of DTA-based dual approximation techniques and explores DTA applications in the context of various transportation models, such as transportation network design, off-line DTA capacity calibration and dynamic congestion pricing. Each of the latter is formulated as a bi-level program. The Transportation Network Design Problem (NDP) aims to determine the optimal network expansion policy under a given budget constraint. NDP is bi-level by nature and can be considered a static case of a Stackelberg game, in which transportation planners (leaders) attempt to optimize the overall transportation system while road users (followers) attempt to achieve their own maximal benefit. The first part of this dissertation studies NDP by combining a decomposition-based algorithmic structure with dual variable approximation techniques derived from linear programming theory. Considering any real-time traffic management strategy also requires assessing network traffic dynamics. Traffic is inherently dynamic, since it features congestion patterns that evolve over time and queues that form and dissipate over a planning horizon. 
It is therefore imperative to calibrate the DTA model such that it can accurately reproduce field observations and avoid erroneous flow predictions when evaluating traffic management strategies. Satisfactory calibration of the DTA model is an onerous task due to the large number of variables that can be modified and the intensive computational resources required. In this dissertation, the off-line DTA capacity calibration problem is studied in an attempt to devise a systematic approach for effective model calibration. Congestion pricing has increasingly been seen as a powerful tool for both managing congestion and generating revenue for infrastructure maintenance and sustainable development. By carefully levying tolls on roadways, a more efficient and optimal network flow pattern can be generated. Furthermore, congestion pricing acts as an effective travel demand management strategy that reduces peak period vehicle trips by encouraging people to shift to more efficient modes such as transit. Recently, with the increase in the number of highway Build-Operate-Transfer (B-O-T) projects, tolling has been interpreted as an effective way to generate revenue to offset the construction and maintenance costs of infrastructure. To maximize the benefits of congestion pricing, a careful analysis based on dynamic traffic conditions has to be conducted before determining tolls, since sub-optimal tolls can significantly worsen the system performance. Combining a network-wide time-varying toll analysis with an efficient solution-building approach is one of the main contributions of this dissertation. The problems mentioned above are typically framed as bi-level programs, which pose considerable challenges in theory as well as in application. Due to the non-convex solution space and inherent NP-complete complexity, a majority of recent research efforts have focused on tackling bi-level programs using meta-heuristics. 
These approaches allow for the efficient exploration of complex solution spaces and the identification of potential global optima. Accordingly, this dissertation also presents and compares several meta-heuristics through extensive numerical experiments to determine the most effective and efficient meta-heuristic, as a means of better investigating realistic network scenarios.
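The bi-level programs described in this abstract share a common shape. As a sketch, using generic notation assumed here (not the dissertation's own symbols), the network design problem can be written as:

```latex
\begin{align*}
\min_{y}\quad & F\bigl(x^{*}(y),\, y\bigr)
  && \text{upper level: planner minimizes total system cost}\\
\text{s.t.}\quad & \textstyle\sum_{a} c_a\, y_a \le B
  && \text{budget constraint on link expansions } y_a\\
& x^{*}(y) \in \arg\min_{x \in X(y)} f(x, y)
  && \text{lower level: users' equilibrium assignment}
\end{align*}
```

The dual (gradient) information discussed above corresponds to sensitivities of the lower-level solution $x^{*}(y)$ with respect to the upper-level decisions $y$, which is what makes gradient-style search directions possible despite the non-convex feasible region.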
42

Analysis of network management protocols in optical networks

Lim, Kok Seng 03 1900 (has links)
Approved for public release, distribution is unlimited / In this thesis, the scalability issues of Simple Network Management Protocol (SNMP) in optical network management are explored. It is important to understand the effect of varying the number of nodes, the request inter-arrival times and the polling interval on the performance of SNMP and the number of nodes that can be effectively managed. The current study explored the effect of varying these parameters in a controlled test environment using the OPNET simulation package. In addition, traffic analysis was performed on measured SNMP traffic and statistics were developed from the traffic analysis. With this understanding of SNMP traffic, an SNMPv1 model was defined and integrated into an OPNET network model to study the performance of SNMP. The simulation results obtained were useful in providing needed insight into the allowable number of nodes an optical network management system can effectively manage. / Civilian, Singapore Ministry of Defense
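The polling-load tradeoff the thesis simulates can be illustrated with back-of-envelope arithmetic. All figures below (packet sizes, per-request service time) are illustrative assumptions, not values from the thesis:

```python
# Rough sizing of an SNMP polling workload: how much management traffic a
# polling cycle generates, and how many nodes a serial poller can cover.

def snmp_poll_load(num_nodes, oids_per_node, poll_interval_s,
                   bytes_per_request=90, bytes_per_response=120):
    """Return (requests per second, management-plane bits per second)."""
    requests_per_cycle = num_nodes * oids_per_node
    rps = requests_per_cycle / poll_interval_s
    bps = rps * (bytes_per_request + bytes_per_response) * 8
    return rps, bps

def max_manageable_nodes(poll_interval_s, oids_per_node, service_time_s):
    """Upper bound on nodes if the manager spends service_time_s per request."""
    return int(poll_interval_s / (oids_per_node * service_time_s))

rps, bps = snmp_poll_load(num_nodes=200, oids_per_node=10, poll_interval_s=60)
print(f"{rps:.1f} req/s, {bps / 1000:.1f} kbit/s of management traffic")
print(f"{max_manageable_nodes(60, 10, 0.025)} nodes manageable per 60 s cycle")
```

Sweeping the polling interval and node count in a loop gives the same qualitative picture the OPNET study explores: management traffic grows linearly with nodes and inversely with the polling interval.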
43

Analyse des Straßenverkehrs mit verteilten opto-elektronischen Sensoren

Schischmanow, Adrian 14 November 2005 (has links)
Aufgrund der steigenden Verkehrsnachfrage und der begrenzten Ressourcen zum Ausbau der Straßenverkehrsnetze werden zukünftig größere Anforderungen an die Effektivität von Telematikanwendungen gestellt. Die Erhebung und Bereitstellung aktueller Verkehrsdaten durch geeignete Sensoren ist dazu eine entscheidende Voraussetzung. Gegenstand dieser Arbeit ist die großflächige Analyse des Straßenverkehrs auf der Basis bodengebundener und verteilter opto-elektronischer Sensoren. Es wird ein Konzept vorgestellt, das eine von der Bilddatenerhebung bis zur Bereitstellung der Daten für Verkehrsanwendungen durchgehende Verarbeitungskette enthält. Der interdisziplinäre Ansatz bildet die Basis zur Verknüpfung eines solchen Sensorsystems mit Verkehrstelematik. Die Abbildung des Verkehrsgeschehens erfolgt im Gegensatz zu herkömmlichen bodengebundenen Messsystemen innerhalb größerer zusammenhängender Ausschnitte des Verkehrsraums. Dadurch können streckenbezogene Verkehrskenngrößen direkt bestimmt werden. Die Georeferenzierung der Verkehrsobjekte ist die Grundlage für eine optimale Verkehrsanalyse und Verkehrssteuerung. Die generierten Daten sind Basis zur Findung und Verifizierung von Theorien und Modellen sowie zur Entwicklung verkehrsadaptiver Steuerungsverfahren auf mikroskopischer Ebene. Es wird gezeigt, wie aus der Fusion gleichzeitig erhaltener Daten mehrerer Sensoren, die im Bereich des Sichtbaren und im thermalen Infrarot sensitiv sind, ein zusammengesetztes Abbildungsmosaik eines vergrößerten Verkehrsraums erzeugt werden kann. In diesem Abbildungsmosaik werden Verkehrsdatenmodelle unterschiedlicher räumlicher Kategorien abgeleitet. Die Darstellung des Abbildungsmosaiks mit seinen Daten erfolgt auf unterschiedlichen Informationsebenen in geokodierten Karten. Die Bewertung mikroskopischer Verkehrsprozesse wird durch die besondere Berücksichtigung der Zeitkomponente bei der Visualisierung möglich. 
Die vorgestellte Verarbeitungskette beinhaltet neue Anwendungsbereiche für geografische Informationssysteme (GIS). Der beschriebene Ansatz wurde konzeptionell bearbeitet, in der Programmiersprache IDL realisiert und erfolgreich getestet. / The growing demand of urban and interregional road traffic requires an improvement regarding the effectiveness of telematics systems. The use of appropriate sensor systems for traffic data acquisition is a decisive prerequisite for the efficiency of traffic control. This thesis focuses on analyzing road traffic based on stationary and distributed ground opto-electronic matrix sensors. A concept which covers all parts from image data acquisition up to traffic data provision is presented. This interdisciplinary approach establishes a basis for the integration of such a sensor system into telematics systems. Unlike conventional ground stationary sensors, the acquisition of traffic data is spread over larger areas in this case. As a result, road specific traffic data can be measured directly. Georeferencing of traffic objects is the basis for optimal road traffic analysis and road traffic control. This approach will demonstrate how to generate a spatial mosaic consisting of traffic data generated by several sensors with different spectral resolution. For traffic flow analysis the realisation of special 4D data visualisation methods on different information levels was an essential need. The data processing chain introduces new areas of application for geographical information systems (GIS). The approach utilised in this study has been worked out conceptually and also successfully tested and applied in the programming language IDL.
44

Towards Ideal Network Traffic Measurement: A Statistical Algorithmic Approach

Zhao, Qi 03 October 2007 (has links)
With the emergence of computer networks as one of the primary platforms of communication, and with their adoption for an increasingly broad range of applications, there is a growing need for high-quality network traffic measurements to better understand, characterize and engineer network behaviors. Due to the inherent lack of fine-grained measurement capabilities in the original design of the Internet, the network does not have enough data or information to compute or even approximate some traffic statistics such as traffic matrices and per-link delay. While it is possible to infer these statistics from indirect aggregate measurements that are widely supported by network measurement devices (e.g., routers), how to obtain the best possible inferences is often a challenging research problem. We name this the "too little data" problem after its root cause. Interestingly, while "too little data" is clearly a problem, "too much data" is not a blessing either. With the rapid increase of network link speeds, even keeping sampled, summarized network traffic (for inferring various network statistics) at low sample rates results in too much data to be stored, processed, and transmitted over measurement devices. In summary, high-quality measurement in today's Internet is very challenging due to resource limitations and lack of built-in support, manifested as either "too little data" or "too much data". We present some new practices and proposals to alleviate these two problems. The contribution is fourfold: i) designing universal methodologies towards ideal network traffic measurements; ii) providing accurate estimations for several critical traffic statistics guided by the proposed methodologies; iii) offering multiple useful and extensible building blocks which can be used to construct a universal network measurement system in the future; iv) leading to some notable mathematical results, such as a new large deviation theorem that finds applications in various areas.
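The "too much data" side is commonly handled by packet sampling, with statistics recovered by inverting the sampling rate. A minimal sketch of that standard inversion estimator (not code from the dissertation) shows why low sample rates still yield usable totals:

```python
# Keep each packet independently with probability p, then estimate the true
# total byte count by scaling the sampled sum up by 1/p.
import random

def sample_packets(packet_sizes, p, rng):
    """Independently keep each packet with probability p."""
    return [size for size in packet_sizes if rng.random() < p]

def estimate_total_bytes(sampled_sizes, p):
    """Unbiased estimate of the total byte count: scale sampled sum by 1/p."""
    return sum(sampled_sizes) / p

rng = random.Random(42)
packets = [1500] * 100_000           # true total: 150 MB of 1500-byte packets
sampled = sample_packets(packets, 0.01, rng)
est = estimate_total_bytes(sampled, 0.01)
true_total = sum(packets)
print(f"kept {len(sampled)} of {len(packets)} packets; "
      f"estimate off by {abs(est - true_total) / true_total:.2%}")
```

The estimator is unbiased, and its relative error shrinks as the number of retained samples grows, which is the statistical lever the "too much data" techniques rely on.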
45

Network Service Misuse Detection: A Data Mining Approach

Hsiao, Han-wei 01 September 2004 (has links)
As network services progressively become essential communication and information delivery mechanisms of business operations and individuals' activities, a challenging network management issue emerges: network service misuse. Network service misuse is formally defined as "abuses or unethical, surreptitious, unauthorized, or illegal uses of network services by those who attempt to mask their uses or presence to evade the management and monitoring of network or system administrators." Misuses of network services inappropriately consume the resources of network service providers (i.e., server machines), compromise the confidentiality of information maintained by network service providers, and/or prevent other users from using the network normally and securely. Motivated by the importance of network service misuse detection, we attempt to exploit the use of router-based network traffic data for facilitating the detection of network service misuses. Specifically, in this thesis study, we propose a cross-training method for learning and predicting network service types from router-based network traffic data. In addition, we also propose two network service misuse detection systems for detecting underground FTP servers and interactive backdoors, respectively. Our evaluations suggest that the proposed cross-training method (specifically, NN->C4.5) outperforms traditional classification analysis techniques (namely C4.5, backpropagation neural network, and Naïve Bayes classifier). In addition, our empirical evaluation conducted in a real-world setting suggests that the proposed underground FTP server detection system could effectively identify underground FTP servers, achieving a recall rate of 95% and a precision rate of 34% (by the NN->C4.5 cross-training technique). Moreover, our empirical evaluation also suggests that the proposed interactive backdoor detection system can capture "true" (or, more precisely, highly suspicious) interactive backdoors existing in a real-world network environment.
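The quoted figures can be read off a confusion matrix: 95% recall with 34% precision means nearly all true underground FTP servers are caught, at the cost of roughly two false alarms per true detection. A small sketch with invented counts (the thesis reports only the rates, not the raw matrix):

```python
# Precision and recall from confusion-matrix counts.
# tp: misuse correctly flagged; fp: benign host flagged; fn: misuse missed.

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# e.g. 19 of 20 real underground servers found, with 37 benign hosts flagged:
p, r = precision_recall(tp=19, fp=37, fn=1)
print(f"precision {p:.0%}, recall {r:.0%}")   # precision 34%, recall 95%
```

For a rare misuse class this precision/recall shape is typical: the detector is tuned to miss almost nothing, and the flagged set is then triaged manually.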
46

Μικροσκοπική ανάλυση της συμπεριφοράς των οχημάτων σε cluster υπό την επίδραση κυκλοφοριακού πλήγματος σε αυτοκινητόδρομο / Microscopic analysis of vehicle behaviour in a cluster under the influence of shockwaves in motorways

Πεππέ, Μαρίνα 07 May 2015 (has links)
Ο ρόλος των Ευφυών Συστημάτων Μεταφορών είναι η βελτίωση της οδικής ασφάλειας, μέσω της έγκαιρης ανίχνευσης συμβάντων και της αποτελεσματικής διαχείρισης της κυκλοφορίας. Στο πλαίσιο αυτό, η έρευνα αυτή εστιάζει στην ανάλυση της σχέσης μεταξύ ενός καθημερινού σοβαρού φαινομένου της κυκλοφορίας, όπως το κυκλοφοριακό πλήγμα (shockwave) και τα οχήματα που κινούνται σε σχηματισμό cluster, μέσω του κυκλοφοριακού πλήγματος. Για να σχηματιστεί ένα cluster απαιτείται δύο ή περισσότερα οχήματα να περιλαμβάνονται στην ακολουθία οχημάτων είτε λόγω της εγγύτητάς τους είτε λόγω της σχετικής τους απόστασής από άλλα οχήματα. Ξεκινώντας με βίντεο κυκλοφοριακής ροής στον αυτοκινητόδρομο I-94, Minnesota, USA, οι τροχιές των οχημάτων εξήχθησαν. Τα αποτελέσματα χρησιμοποιήθηκαν στη συνέχεια προκειμένου να καθοριστούν μεταβλητές όπως χρονοαπόσταση, ταχύτητα, επιτάχυνση για τρεις ομάδες οχημάτων, το Cluster, η ομάδα Πριν το Cluster και η ομάδα Μετά το Cluster. Η σχέση μεταξύ αυτών των ομάδων μελετήθηκε και αποτυπώθηκε σε γραμμικές συναρτήσεις με πολλαπλές ανεξάρτητες μεταβλητές. / The role of Intelligent Transportation Systems is to enhance traffic safety, through timely detection of incidents and effective traffic management. In this framework, this research focuses on analyzing the relationship between a daily severe traffic phenomenon such as a shockwave and vehicles which move in cluster formation through the shockwave. For a cluster to be formed, two or more vehicles must be included in the vehicle sequence, either because of their closeness or because of their relative distance from other vehicles on the link. Starting with video recordings of traffic flow on I-94, Minnesota, USA, vehicle trajectories were extracted. The results were then used in order to define variables such as space and time headway, velocity, and acceleration for three groups of vehicles: the Cluster, the Before Cluster group and the After Cluster group. 
The relationship between these groups was studied and modeled as linear functions with multiple independent variables.
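The linear models described above regress one group's traffic variables on the others'. A minimal ordinary-least-squares sketch via the normal equations; the data points are synthetic (constructed to lie exactly on a plane), not the I-94 trajectories:

```python
# Fit y = b0 + b1*x1 + b2*x2 by solving (A^T A) b = A^T y with Gaussian
# elimination. Here x1, x2 might be upstream speed and mean headway, and y
# the cluster speed -- purely illustrative variable names.

def ols(X, y):
    A = [[1.0] + [float(v) for v in row] for row in X]   # intercept column
    n, k = len(A), len(A[0])
    AtA = [[sum(A[i][r] * A[i][c] for i in range(n)) for c in range(k)]
           for r in range(k)]
    Aty = [sum(A[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):                                  # forward elimination
        piv = max(range(col, k), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Aty[col], Aty[piv] = Aty[piv], Aty[col]
        for r in range(col + 1, k):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, k):
                AtA[r][c] -= f * AtA[col][c]
            Aty[r] -= f * Aty[col]
    b = [0.0] * k                                         # back substitution
    for r in reversed(range(k)):
        b[r] = (Aty[r] - sum(AtA[r][c] * b[c]
                             for c in range(r + 1, k))) / AtA[r][r]
    return b

# synthetic data generated from y = 2 + 0.9*x1 - 3*x2, so the fit is exact:
X = [[25, 1.2], [22, 1.5], [28, 1.1], [20, 1.8], [26, 1.3], [23, 1.6]]
y = [20.9, 17.3, 23.9, 14.6, 21.5, 17.9]
b0, b1, b2 = ols(X, y)
print(f"intercept {b0:.2f}, x1 coef {b1:.2f}, x2 coef {b2:.2f}")
```

With real trajectory data the fit would of course not be exact; the residuals are what the study's goodness-of-fit discussion is about.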
47

Um modelo para tarifação confiável em computação em nuvem. / A model for reliable billing in cloud computing.

DANTAS, Ana Cristina Alves de Oliveira. 09 May 2018 (has links)
Previous issue date: 2015-11-30 / A computação em nuvem define uma infraestrutura virtual para prestação de serviços em rede sob demanda. Os clientes contratam serviços em que a infraestrutura primária de hardware e software encontra-se em centros de dados remotos, e não localmente e sobre seu próprio domínio. Há uma necessidade de ferramentas de monitoramento regulatório, que possam operar dentro da infraestrutura do provedor, ou fora dele, deixando os clientes a par do estado atual ou do histórico do desempenho dos serviços contratados. A computação em nuvem é fortemente dependente das redes de computadores e o desempenho dos serviços em nuvem pode ser monitorado via métricas de rede. O conhecimento de métricas de desempenho sobre a execução dos serviços contribui para promover a relação de confiança entre cliente e provedor, bem como fornece subsídios para contestações em faturas, caso necessário. Um modelo de tarifação confiável envolve a disponibilização de métricas de desempenho dos serviços contratados, de modo que o cliente possa aferir as tarifas cobradas. Clientes e provedores podem alternar papéis em diferentes níveis de prestação de serviços de computação em nuvem. Um cliente no nível de infraestrutura pode ser um provedor de dados, por exemplo. Um modelo de tarifação confiável fornece subsídios também ao provedor de serviços para melhorar a alocação de recursos, bem como indicadores para investimentos em infraestrutura que evitem perdas financeiras causadas pelo pagamento de multas por descumprimento de acordo de nível de serviço. 
O objeto desta tese de doutorado é desenvolver um modelo para tarifação confiável de serviços de computação em nuvem que envolva a detecção e notificação de anomalias de tráfego de rede em tempo real, que auxilie na estimativa do custo causado por tais anomalias para o modelo de negócio e que contribua para um processo de alocação de recursos capaz de reduzir custos com penalidades financeiras. A validação do modelo foi realizada por meio de escalonamento de recursos baseado em custo. O modelo de tarifação confiável integrado ao mecanismo de escalonamento reduziu custos e perdas financeiras provenientes de violações de acordos de nível de serviço. / Cloud computing defines a virtual infrastructure to provide network services on demand. Customers contract services in which the primary hardware and software infrastructure is in remote data centers rather than locally, under the customer's own domain. Sharing the same network, or the same physical machine, among various tenants entails concerns related to information confidentiality, security, troubleshooting, separation of responsibilities for guaranteeing the quality of the technical goals across the different abstraction levels, and how the customer may monitor the use of services and eventual failures. Prior to cloud computing, service providers dominated the entire chain of information, which gave them the information needed to manage the business globally, avoid financial losses and increase profits. With the use of cloud computing services, the customer has no control over the virtualization levels that support the level at which they operate. A client at the infrastructure level can be a data provider, for instance. Thus, it is important to have appropriate tools to keep track of the performance of the contracted services. Cloud computing is heavily dependent on computer networks. 
In this sense, it is a business differential to provide network performance metrics to customers, an important non-functional requirement that is sometimes ignored by many cloud service providers. The availability of real-time performance metrics contributes to promoting a trust relationship between customer and provider, and helps the provider better dimension resources to avoid financial losses. The object of this doctoral thesis is to develop a model for reliable billing of cloud computing services that accomplishes network traffic anomaly detection and appropriate notification in real time, and that enables the estimation of the cost caused by anomalies to the business model.
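The financial-penalty side of the model can be illustrated with a simple availability/credit calculation. The credit schedule below mirrors common public-cloud SLAs and is an assumption for illustration, not the thesis's schedule:

```python
# Given measured downtime in a billing period, compute availability and the
# service credit (refund) owed under a tiered SLA penalty schedule.

def availability(total_minutes, downtime_minutes):
    return 1.0 - downtime_minutes / total_minutes

def sla_credit(monthly_fee, avail):
    # assumed schedule: <95% -> 100% credit, <99% -> 25%, <99.9% -> 10%
    if avail < 0.95:
        rate = 1.00
    elif avail < 0.99:
        rate = 0.25
    elif avail < 0.999:
        rate = 0.10
    else:
        rate = 0.0
    return monthly_fee * rate

month_min = 30 * 24 * 60                 # 43,200 minutes in a 30-day month
avail = availability(month_min, downtime_minutes=130)
credit = sla_credit(1000.0, avail)
print(f"availability {avail:.4%}, credit ${credit:.2f}")
```

This is the quantity a reliable-billing model lets the customer verify independently: with trustworthy downtime measurements, the credit is a deterministic function of the SLA.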
48

An approach for profiling distributed applications through network traffic analysis

Vieira, Thiago Pereira de Brito 05 March 2013 (has links)
Distributed systems have been adopted for building modern Internet services and cloud computing infrastructures, in order to obtain services with high performance, scalability, and reliability. Cloud computing SLAs require a short time to identify, diagnose and solve problems in a cloud computing production infrastructure, in order to avoid negative impacts on the quality of service provided to clients. Thus, the detection of error causes, and the diagnosis and reproduction of errors, are challenges that motivate efforts towards the development of less intrusive mechanisms for monitoring and debugging distributed applications at runtime. Network traffic analysis is one option for distributed systems measurement, although there are limitations on the capacity to process large amounts of network traffic in a short time, and on the scalability to process network traffic where there is variation in resource demand. 
The goal of this dissertation is to analyse the processing capacity problem for measuring distributed systems through network traffic analysis, in order to evaluate the performance of distributed systems at a data center, using commodity hardware and cloud computing services, in a minimally intrusive way. We propose a new approach based on MapReduce, for deep inspection of distributed application traffic, in order to evaluate the performance of distributed systems at runtime, using commodity hardware. In this dissertation we evaluated the effectiveness of MapReduce for a deep packet inspection algorithm, its processing capacity, completion time speedup, processing capacity scalability, and the behavior followed by MapReduce phases, when applied to deep packet inspection for extracting indicators of distributed applications. / Sistemas distribuídos têm sido utilizados na construção de modernos serviços da Internet e infraestrutura de computação em nuvem, com o intuito de obter serviços com alto desempenho, escalabilidade e confiabilidade. Os acordos de níveis de serviço adotados pela computação na nuvem requerem um reduzido tempo para identificar, diagnosticar e solucionar problemas em sua infraestrutura, de modo a evitar que problemas gerem impactos negativos na qualidade dos serviços prestados aos seus clientes. Então, a detecção de causas de erros, diagnóstico e reprodução de erros provenientes de sistemas distribuídos são desafios que motivam esforços para o desenvolvimento de mecanismos menos intrusivos e mais eficientes, para o monitoramento e depuração de aplicações distribuídas em tempo de execução. A análise de tráfego de rede é uma opção para a medição de sistemas distribuídos, embora haja limitações na capacidade de processar grande quantidade de tráfego de rede em curto tempo, e na escalabilidade para processar tráfego de rede sob variação de demanda de recursos. 
O objetivo desta dissertação é analisar o problema da capacidade de processamento para mensurar sistemas distribuídos através da análise de tráfego de rede, com o intuito de avaliar o desempenho de sistemas distribuídos de um data center, usando hardware não especializado e serviços de computação em nuvem, de uma forma minimamente intrusiva. Nós propusemos uma nova abordagem baseada em MapReduce para inspecionar profundamente o tráfego de rede de aplicações distribuídas, com o objetivo de avaliar o desempenho de sistemas distribuídos em tempo de execução, usando hardware não especializado. Nesta dissertação nós avaliamos a eficácia do MapReduce para um algoritmo de avaliação profunda de pacotes, sua capacidade de processamento, o ganho no tempo de conclusão de tarefas, a escalabilidade na capacidade de processamento, e o comportamento seguido pelas fases do MapReduce, quando aplicado à inspeção profunda de pacotes, para extrair indicadores de aplicações distribuídas.
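The MapReduce formulation can be sketched as a map over packet records emitting (application, bytes) pairs, followed by a reduce by key. The port-to-application table and the packet records below are invented for illustration; a real deployment would run the same two functions over pcap splits on a Hadoop-style cluster:

```python
# Toy MapReduce-shaped deep packet inspection: classify each packet record
# by destination port and aggregate per-application byte counts.
from collections import defaultdict

PORT_APP = {80: "http", 5432: "postgres", 11211: "memcached"}  # assumed table

def map_packet(pkt):
    """Inspect one packet record; emit (application, byte-count) pairs."""
    app = PORT_APP.get(pkt["dst_port"], "other")
    yield (app, pkt["length"])

def reduce_by_key(pairs):
    """Sum values per key, as a MapReduce reducer would."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

packets = [
    {"dst_port": 80, "length": 1500},
    {"dst_port": 80, "length": 400},
    {"dst_port": 5432, "length": 220},
    {"dst_port": 9999, "length": 60},
]
pairs = (kv for pkt in packets for kv in map_packet(pkt))
print(reduce_by_key(pairs))
```

Because both functions are stateless per record (map) and associative (reduce), the computation parallelizes across trace splits, which is the property the dissertation's scalability evaluation exercises.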
49

Segmentace obrazových dat pomocí hlubokých neuronových sítí / Image Segmentation with Deep Neural Network

Pazderka, Radek January 2019 (has links)
This master's thesis is focused on segmentation of scenes from the traffic environment. The solution to this problem is segmentation neural networks, which enable classification of every pixel in an image. In this thesis, a segmentation neural network was created that reaches better results than present state-of-the-art architectures. This work also focuses on segmentation of the top view of the road, as there are no freely available annotated datasets for it. For this purpose, an automatic tool for generating synthetic datasets using the PC game Grand Theft Auto V was created. The work compares networks that have been trained solely on synthetic data with networks trained on both real and synthetic data. Experiments prove that synthetic data can be used for segmentation of data from the real environment. A system that enables working with segmentation neural networks has also been implemented.
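Per-pixel segmentation quality is typically reported as intersection-over-union (IoU) per class. A small sketch on toy label grids (not the thesis's implementation) shows the metric:

```python
# IoU for one class over flattened label maps: pixels where both prediction
# and ground truth carry the class, divided by pixels where either does.

def class_iou(pred, truth, cls):
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else float("nan")

# flattened 4x2 label maps: 0 = road, 1 = vehicle (toy data)
truth = [0, 0, 1, 1, 0, 1, 0, 0]
pred  = [0, 1, 1, 1, 0, 0, 0, 0]
print(f"road IoU {class_iou(pred, truth, 0):.2f}, "
      f"vehicle IoU {class_iou(pred, truth, 1):.2f}")
```

Averaging the per-class IoUs gives the mean IoU figure by which segmentation architectures are usually compared, including the synthetic-versus-real training experiments described above.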
50

Vylepšení Adversariální Klasifikace v Behaviorální Analýze Síťové Komunikace Určené pro Detekci Cílených Útoků / Improvement of Adversarial Classification in Behavioral Analysis of Network Traffic Intended for Targeted Attack Detection

Sedlo, Ondřej January 2020 (has links)
In this thesis we deal with improving network intrusion detection systems. Specifically, we focus on behavioral analysis, which uses data extracted from individual network connections. The described framework uses this information to obfuscate targeted network attacks that exploit vulnerabilities in a set of contemporary vulnerable services. We select vulnerable services from NIST's National Vulnerability Database, restricting ourselves to the years 2018 and 2019. As a result, we create a new dataset consisting of direct and obfuscated attacks carried out against the selected vulnerable services, as well as their counterparts in the form of legitimate traffic. We evaluate the new dataset using several classification techniques and demonstrate how important it is to train these classifiers on obfuscated attacks, so that such attacks cannot slip through unnoticed. Finally, we perform a cross-dataset evaluation using the state-of-the-art ASNM-NPBO dataset and our own. The results show the importance of retraining classifiers on new vulnerabilities while retaining good detection of attacks on old vulnerabilities.
