41 |
Reversibla 2+1-fält på motortrafikled : Utvärdering av restidseffekter för Värmdöpendlare / Reversible 2+1 lanes on motorways : Evaluation of travel time effects for Värmdö commuters. JOHANSSON, JOSEFIN January 2023 (has links)
Värmdö is a commuter municipality outside Stockholm. Road 222 between Värmdö and Stockholm is the main commuter route for both bus and car traffic. Road 222 has a bottleneck at Farstabron in the direction towards Värmdö, where the motorway narrows from two lanes to one and becomes a meeting-free carriageway. Towards Stockholm the bridge has two lanes, which is why capacity is not as strongly affected in that direction. The accessibility problems arise mainly in the direction of Värmdö during the afternoon peak hours and during weekends and summer, as the municipality also has many holiday homes. Measures to improve accessibility have been raised by both the municipality and the Swedish Transport Administration. Building a new bridge is not on the agenda, as the remaining expected technical life of the existing bridge is long. The Swedish Transport Administration has instead an idea for a reversible lane solution on the bridge, which is the proposal studied in this thesis. Data collection and traffic analysis have been performed to study the travel time effects if Farstabron were rebuilt into a reversible 2+1 road, with or without a reversible bus lane. The tool used is the microsimulation program PTV VISSIM. The results show that a reversible solution without a bus lane is the alternative that provides by far the largest travel time gains for both car and bus in 2040. The thesis contains a chapter on traffic engineering and traffic simulation theory, as well as a literature study chapter that summarizes the state of knowledge about reversible lanes. The available information about reversible lanes, including international studies, is sparse. Experiences of reversible lanes are good, and they are mainly recommended where the flow in one direction is significantly greater than in the other. The traffic safety risk is primarily linked to unprotected road users.
The most common practice identified internationally is to implement reversible lanes on motorways with protective barriers. However, no reversible lane without a barrier has been identified at speeds of 80 km/h. Studies have shown that reversible lanes can have a cost-benefit ratio of around 7, meaning that the benefits outweighed the costs sevenfold in monetary terms. The weaving dynamics in VISSIM from two lanes to one were challenging to calibrate against reality. Preparatory behavior during lane changes is mainly governed by the car-following and lane-changing models in VISSIM. In the simulation, the correlation with the collected data was slightly better with the Wiedemann 99 car-following model (freeways) than with Wiedemann 74 (urban/weaving roads).
|
42 |
Enhanced Feature Representation in Multi-Modal Learning for Driving Safety Assessment. Shi, Liang 03 December 2024 (has links)
This dissertation explores innovative approaches to driving safety through the development of multi-modal learning frameworks that leverage high-frequency, high-resolution driving data and videos to detect safety-critical events (SCEs). The research unfolds across four methodologies, each contributing to advancing the field. The introductory chapter sets the stage by outlining the motivations and challenges in driving safety research, highlighting the need for advanced data-driven approaches to improve SCE prediction and detection. The second chapter presents a framework that combines Convolutional Neural Networks (CNN) and Gated Recurrent Units (GRU) with XGBoost. This approach reduces dependency on domain expertise and effectively manages imbalanced crash data, enhancing the accuracy and reliability of SCE detection. In the third chapter, a two-stream network architecture is introduced, integrating optical flow with a TimeSformer backbone and a multi-head attention mechanism. This combination achieves exceptional detection accuracy, demonstrating its potential for applications in driving safety. The fourth chapter focuses on the Dual Swin Transformer framework, which enables concurrent analysis of video and time-series data; this methodology proves effective in processing front-facing driving videos for improved SCE detection. The fifth chapter explores incorporating the semantic meaning of labels into a classification model and introduces ScVLM, a hybrid approach that merges supervised learning with contrastive learning techniques to enhance understanding of driving videos and improve event description rationality for Vision-Language Models (VLMs). This chapter addresses existing model limitations by providing a more comprehensive analysis of driving scenarios. This dissertation addresses the challenges of analyzing multimodal data and paves the way for future advancements in autonomous driving and traffic safety management.
It underscores the potential of integrating diverse data sources to enhance driving safety. / Doctor of Philosophy / This dissertation explores new approaches to enhance driving safety by using advanced learning frameworks that combine video data with high-frequency, high-resolution driving information, introducing innovative techniques to predict and detect critical driving events. The introduction chapter outlines the current challenges in driving safety and emphasizes the potential of data-driven methods to improve predictions and prevent accidents. The second chapter describes a method that uses machine learning models to analyze crash data, reducing the need for expert input and effectively handling data imbalances. This approach improves the accuracy of predicting safety-critical events. The third chapter introduces a two-stream network that processes both sensor data and video frames, achieving high accuracy in detecting safety-related driving incidents. The fourth chapter presents a framework that simultaneously analyzes video and time-series data, validated using a comprehensive driving study dataset. This technique enhances the detection of complex driving scenarios. The fifth chapter introduces a hybrid learning approach that improves understanding of driving videos and event descriptions. By combining different learning techniques, this method addresses limitations in existing models. This work tackles challenges in analyzing multimodal data and sets the stage for future advancements in autonomous driving and traffic safety management. It highlights the potential of integrating diverse data types to create safer driving environments.
|
43 |
A dual approximation framework for dynamic network analysis: congestion pricing, traffic assignment calibration and network design problem. Lin, Dung-Ying 10 November 2009 (has links)
Dynamic Traffic Assignment (DTA) is gaining wider acceptance among agencies and practitioners because it provides a more realistic representation of real-world traffic phenomena than static traffic assignment. Many metropolitan planning organizations and transportation departments are beginning to utilize DTA to predict traffic flows within their networks when conducting traffic analysis or evaluating management measures. To analyze DTA-based optimization applications, it is critical to obtain dual (or gradient) information, as dual information can typically be employed as a search direction in algorithmic design. However, only a very limited number of approaches can estimate network-wide dual information while maintaining the potential to scale. This dissertation investigates the theoretical and practical aspects of DTA-based dual approximation techniques and explores DTA applications in the context of various transportation models, such as transportation network design, off-line DTA capacity calibration, and dynamic congestion pricing. Each of these problems is formulated as a bi-level program. The Transportation Network Design Problem (NDP) aims to determine the optimal network expansion policy under a given budget constraint. The NDP is bi-level by nature and can be considered a static case of a Stackelberg game, in which transportation planners (leaders) attempt to optimize the overall transportation system while road users (followers) attempt to maximize their own benefit. The first part of this dissertation studies the NDP by combining a decomposition-based algorithmic structure with dual variable approximation techniques derived from linear programming theory. A critical element of any real-time traffic management strategy is assessing network traffic dynamics. Traffic is inherently dynamic, since it features congestion patterns that evolve over time and queues that form and dissipate over a planning horizon.
It is therefore imperative to calibrate the DTA model such that it can accurately reproduce field observations and avoid erroneous flow predictions when evaluating traffic management strategies. Satisfactory calibration of the DTA model is an onerous task due to the large number of variables that can be modified and the intensive computational resources required. In this dissertation, the off-line DTA capacity calibration problem is studied in an attempt to devise a systematic approach for effective model calibration. Congestion pricing has increasingly been seen as a powerful tool for both managing congestion and generating revenue for infrastructure maintenance and sustainable development. By carefully levying tolls on roadways, a more efficient and optimal network flow pattern can be generated. Furthermore, congestion pricing acts as an effective travel demand management strategy that reduces peak period vehicle trips by encouraging people to shift to more efficient modes such as transit. Recently, with the increase in the number of highway Build-Operate-Transfer (B-O-T) projects, tolling has been interpreted as an effective way to generate revenue to offset the construction and maintenance costs of infrastructure. To maximize the benefits of congestion pricing, a careful analysis based on dynamic traffic conditions has to be conducted before determining tolls, since sub-optimal tolls can significantly worsen system performance. Combining network-wide time-varying toll analysis with an efficient solution-building approach is one of the main contributions of this dissertation. The problems mentioned above are typically framed as bi-level programs, which pose considerable challenges in theory as well as in application. Due to the non-convex solution space and inherent NP-complete complexity, a majority of recent research efforts have focused on tackling bi-level programs using meta-heuristics.
These approaches allow for the efficient exploration of complex solution spaces and the identification of potential global optima. Accordingly, this dissertation also presents and compares several meta-heuristics through extensive numerical experiments to determine the most effective and efficient meta-heuristic, as a means of better investigating realistic network scenarios.
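The bi-level (Stackelberg) structure described in this abstract can be illustrated with a toy two-route pricing example. The model below is entirely made up for illustration (linear route latencies, a single origin-destination pair) and is not the dissertation's DTA formulation: the leader sets a toll on route 1, the followers settle into a user equilibrium where generalized costs are equal, and the leader grid-searches for the toll that minimizes total travel time.

```python
def follower_split(tau, demand=10.0):
    """Lower level: user-equilibrium flow on the tolled route 1.

    Equilibrium condition (interior): 10 + x + tau = 15 + 0.5*(demand - x),
    which gives x = (demand - tau)/1.5, clipped to the feasible range.
    """
    x = (demand - tau) / 1.5
    return max(0.0, min(demand, x))

def system_travel_time(tau, demand=10.0):
    """Upper-level objective: total travel time (tolls are a transfer, not a cost)."""
    x = follower_split(tau, demand)
    t1 = 10.0 + x                    # latency of the tolled route
    t2 = 15.0 + 0.5 * (demand - x)   # latency of the alternative route
    return x * t1 + (demand - x) * t2

# Leader's problem: search the toll grid for the system-optimal toll.
best_tau = min((i / 10 for i in range(81)), key=system_travel_time)
print(best_tau, system_travel_time(best_tau))  # 2.5 162.5
```

For these made-up latencies the system optimum splits demand evenly (x = 5), and the toll 2.5 is exactly the one that steers the selfish equilibrium onto that split — the classic marginal-cost-pricing intuition behind the congestion pricing chapter.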
|
44 |
Analysis of network management protocols in optical networks. Lim, Kok Seng 03 1900 (has links)
Approved for public release, distribution is unlimited / In this thesis, the scalability issues of the Simple Network Management Protocol (SNMP) in optical network management are explored. It is important to understand the effect of varying the number of nodes, the request inter-arrival times, and the polling interval on the performance of SNMP, as well as the number of nodes that can be effectively managed. The study explored the effect of varying these parameters in a controlled test environment using the OPNET simulation package. In addition, traffic analysis was performed on measured SNMP traffic, and statistics were developed from that analysis. With this understanding of SNMP traffic, an SNMPv1 model was defined and integrated into an OPNET network model to study the performance of SNMP. The simulation results provide needed insight into the number of nodes an optical network management system can effectively manage. / Civilian, Singapore Ministry of Defense
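The interaction between node count and polling interval that this thesis studies can be sketched with a back-of-envelope load model. All parameters below (node count, OIDs per poll, packet sizes) are illustrative assumptions, not figures from the thesis:

```python
def snmp_polling_load(nodes, poll_interval_s, oids_per_poll,
                      bytes_per_request=90, bytes_per_response=120):
    """Estimate manager-side SNMP polling load under assumed packet sizes.

    Models one GetRequest/GetResponse exchange per OID per polling cycle.
    """
    requests_per_s = nodes * oids_per_poll / poll_interval_s
    bandwidth_bps = requests_per_s * (bytes_per_request + bytes_per_response) * 8
    return requests_per_s, bandwidth_bps

rps, bps = snmp_polling_load(nodes=200, poll_interval_s=30, oids_per_poll=15)
print(f"{rps:.0f} req/s, {bps / 1000:.0f} kbit/s")  # 100 req/s, 168 kbit/s
```

Doubling the node count or halving the polling interval doubles the load, which is why the thesis treats both as primary parameters in its OPNET experiments.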
|
45 |
Towards Ideal Network Traffic Measurement: A Statistical Algorithmic Approach. Zhao, Qi 03 October 2007 (has links)
With the emergence of computer networks as one of the primary platforms of communication, and with their adoption for an increasingly broad range of applications, there is a growing need for high-quality network traffic measurements to better understand, characterize, and engineer network behaviors. Due to the inherent lack of fine-grained measurement capabilities in the original design of the Internet, there is not enough data or information to compute or even approximate some traffic statistics, such as traffic matrices and per-link delay. While it is possible to infer these statistics from indirect aggregate measurements that are widely supported by network measurement devices (e.g., routers), obtaining the best possible inferences is often a challenging research problem. We name this the "too little data" problem after its root cause. Interestingly, while "too little data" is clearly a problem, "too much data" is not a blessing either. With the rapid increase of network link speeds, even keeping sampled, summarized network traffic (for inferring various network statistics) at low sample rates results in too much data to be stored, processed, and transmitted by measurement devices. In summary, high-quality measurement in today's Internet is very challenging due to resource limitations and lack of built-in support, manifested as either "too little data" or "too much data".
We present new practices and proposals to alleviate these two problems. The contribution is fourfold: i) designing universal methodologies towards ideal network traffic measurements; ii) providing accurate estimations for several critical traffic statistics, guided by the proposed methodologies; iii) offering multiple useful and extensible building blocks which can be used to construct a universal network measurement system in the future; iv) deriving some notable mathematical results, such as a new large deviation theorem that finds applications in various areas.
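The "too much data" side of the trade-off — keeping only a sampled summary of the traffic and scaling it back up — can be sketched as follows. The packet sizes and sampling rate below are invented for illustration and have nothing to do with the dissertation's actual algorithms:

```python
import random

def sampled_byte_count(packets, p, seed=1):
    """Estimate total bytes from a stream sampled with probability p.

    Each packet is kept independently with probability p; the sampled sum
    is scaled by 1/p (inverse-probability, i.e. Horvitz-Thompson, scaling).
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    sampled = sum(size for size in packets if rng.random() < p)
    return sampled / p

# Synthetic stream: a mix of large (1500 B) and small (64 B) packets.
packets = [1500 if i % 3 else 64 for i in range(30000)]
true_total = sum(packets)
est = sampled_byte_count(packets, p=0.01)
print(true_total, est)  # the estimate lands close to the true total
```

Storing one packet in a hundred shrinks the measurement data a hundredfold while the scaled estimate stays within a few percent of the truth — the kind of accuracy/resource trade-off the dissertation formalizes.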
|
46 |
Network Service Misuse Detection: A Data Mining Approach. Hsiao, Han-wei 01 September 2004 (has links)
As network services progressively become essential communication and information delivery mechanisms for business operations and individuals' activities, a challenging network management issue emerges: network service misuse. Network service misuse is formally defined as "abuses or unethical, surreptitious, unauthorized, or illegal uses of network services by those who attempt to mask their uses or presence to evade the management and monitoring of network or system administrators." Misuses of network services inappropriately consume the resources of network service providers (i.e., server machines), compromise the confidentiality of information maintained by network service providers, and/or prevent other users from using the network normally and securely. Motivated by the importance of network service misuse detection, we attempt to exploit the use of router-based network traffic data for facilitating the detection of network service misuses. Specifically, in this thesis study, we propose a cross-training method for learning and predicting network service types from router-based network traffic data. In addition, we propose two network service misuse detection systems for detecting underground FTP servers and interactive backdoors, respectively.
Our evaluations suggest that the proposed cross-training method (specifically, NN->C4.5) outperforms traditional classification analysis techniques (namely C4.5, backpropagation neural network, and Naïve Bayes classifier). In addition, our empirical evaluation conducted in a real-world setting suggests that the proposed underground FTP server detection system can effectively identify underground FTP servers, achieving a recall rate of 95% and a precision rate of 34% (with the NN->C4.5 cross-training technique). Moreover, our empirical evaluation also suggests that the proposed interactive backdoor detection system is capable of capturing "true" (or, more precisely, highly suspicious) interactive backdoors existing in a real-world network environment.
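The reported recall and precision figures follow directly from a confusion matrix. The counts below are hypothetical, chosen only so that the sketch roughly reproduces the abstract's 95% recall / 34% precision; they are not the study's actual numbers:

```python
def precision_recall(tp, fp, fn):
    """Precision: fraction of flagged servers that are real misuses.
    Recall: fraction of real misuse servers that were flagged."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts: 95 true detections, 185 false alarms, 5 misses.
p, r = precision_recall(tp=95, fp=185, fn=5)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.34, recall=0.95
```

A high-recall / low-precision operating point like this is typical for misuse detectors, where missing a real underground server is costlier than manually dismissing a false alarm.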
|
47 |
Μικροσκοπική ανάλυση της συμπεριφοράς των οχημάτων σε cluster υπό την επίδραση κυκλοφοριακού πλήγματος σε αυτοκινητόδρομο / Microscopic analysis of vehicle behaviour in a cluster under the influence of shockwaves in motorways. Πεππέ, Μαρίνα 07 May 2015 (has links)
Ο ρόλος των Ευφυών Συστημάτων Μεταφορών είναι η βελτίωση της οδικής ασφάλειας, μέσω της έγκαιρης ανίχνευσης συμβάντων και της αποτελεσματικής διαχείρισης της κυκλοφορίας. Στο πλαίσιο αυτό, η έρευνα αυτή εστιάζει στην ανάλυση της σχέσης μεταξύ ενός καθημερινού σοβαρού φαινομένου της κυκλοφορίας, όπως το κυκλοφοριακό πλήγμα (shockwave), και των οχημάτων που κινούνται σε σχηματισμό cluster μέσω του κυκλοφοριακού πλήγματος. Για να σχηματιστεί ένα cluster απαιτείται δύο ή περισσότερα οχήματα να περιλαμβάνονται στην ακολουθία οχημάτων, είτε λόγω της εγγύτητάς τους είτε λόγω της σχετικής τους απόστασης από άλλα οχήματα. Ξεκινώντας με βίντεο κυκλοφοριακής ροής στον αυτοκινητόδρομο I-94, Minnesota, USA, οι τροχιές των οχημάτων εξήχθησαν. Τα αποτελέσματα χρησιμοποιήθηκαν στη συνέχεια προκειμένου να καθοριστούν μεταβλητές όπως χρονοαπόσταση, ταχύτητα και επιτάχυνση για τρεις ομάδες οχημάτων: το Cluster, την ομάδα Πριν το Cluster και την ομάδα Μετά το Cluster. Η σχέση μεταξύ αυτών των ομάδων μελετήθηκε και αποτυπώθηκε σε γραμμικές συναρτήσεις με πολλαπλές ανεξάρτητες μεταβλητές. / The role of Intelligent Transportation Systems is to enhance traffic safety through timely detection of incidents and effective traffic management. In this framework, this research focuses on analyzing the relationship between a severe everyday traffic phenomenon, the shockwave, and vehicles that move in cluster formation through the shockwave. A cluster is formed when two or more vehicles are included in a vehicle sequence, either because of their proximity or because of their relative distance from other vehicles on the link. Starting with video recordings of traffic flow on I-94, Minnesota, USA, vehicle trajectories were extracted. The results were then used to define variables such as space and time headway, velocity, and acceleration for three groups of vehicles: the Cluster, the Before Cluster group, and the After Cluster group. The relationship between these groups was studied and modeled as linear functions with multiple independent variables.
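Fitting a linear function with multiple independent variables, as described in this abstract, can be sketched with ordinary least squares. The variable names and data below are invented stand-ins (a headway-like and a speed-like predictor), not the study's trajectory data:

```python
import numpy as np

# Synthetic observations: cluster speed as an exact linear function of two
# hypothetical upstream variables, so the fit should recover the coefficients.
headway = np.array([1.2, 1.5, 1.8, 2.1, 2.4, 2.7])          # seconds (invented)
upstream_speed = np.array([20.0, 22.0, 25.0, 27.0, 30.0, 33.0])  # m/s (invented)
cluster_speed = 4.0 + 2.0 * headway + 0.8 * upstream_speed

# Design matrix with an intercept column, then least-squares fit.
X = np.column_stack([np.ones_like(headway), headway, upstream_speed])
coef, *_ = np.linalg.lstsq(X, cluster_speed, rcond=None)
print(coef)  # recovers approximately [4.0, 2.0, 0.8]
```

With real trajectory data the residuals would not vanish, and the coefficient signs and magnitudes would be what links the Before/After Cluster groups to the Cluster's behavior.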
|
48 |
Um modelo para tarifação confiável em computação em nuvem. / A model for reliable billing in cloud computing. DANTAS, Ana Cristina Alves de Oliveira. 09 May 2018 (has links)
Previous issue date: 2015-11-30 / A computação em nuvem define uma infraestrutura virtual para prestação de serviços em
rede sob demanda. Os clientes contratam serviços em que a infraestrutura primária de hardware e software encontra-se em centros de dados remotos e não localmente e sobre seu próprio domínio. Há uma necessidade de ferramentas de monitoramento regulatório, que possam operar dentro da infraestrutura do provedor, ou fora dele, deixando os clientes a par do estado atual ou do histórico do desempenho dos serviços contratados. A computação em nuvem é fortemente dependente das redes de computadores e o desempenho dos serviços em nuvem pode ser monitorado via métricas de rede. O conhecimento de métricas de desempenho sobre a execução dos serviços contribui para promover a relação de confiança entre cliente e provedor, bem como fornece subsídios para contestações em faturas, caso necessário. Um modelo de tarifação confiável envolve a disponibilização de métricas de desempenho dos serviços contratados, de modo que o cliente possa aferir as tarifas cobradas. Clientes e provedores podem alternar papéis em diferentes níveis de prestação de serviços de computação em nuvem. Um cliente no nível de infraestrutura pode ser um provedor de dados, por exemplo. Um modelo de tarifação confiável fornece subsídios também ao provedor de serviços para melhorar a alocação de recursos, bem como indicadores para investimentos em infraestrutura que evitem perdas financeiras causadas pelo pagamento de multas por descumprimento de acordo de nível de serviço. O objeto desta tese de doutorado é desenvolver um modelo para tarifação confiável de serviços de computação em nuvem que envolva a detecção e notificação de anomalias de tráfego de rede em tempo real, que auxilie na estimativa do custo causado por tais anomalias para o modelo de negócio e que contribua para um processo de alocação de recursos capaz de reduzir custos com penalidades financeiras. A validação do modelo foi realizada por meio de escalonamento de recursos baseado em custo.
O modelo de tarifação confiável integrado ao mecanismo de escalonamento reduziu custos e perdas financeiras provenientes de violações de acordos de nível de serviço. / Cloud computing defines a virtual infrastructure for providing network services on demand. Customers contract services in which the primary hardware and software infrastructure resides in remote data centers rather than locally, under the customer's own domain. Sharing the same network, or the same physical machine, among various tenants raises concerns related to information confidentiality, security, troubleshooting, the separation of responsibilities for guaranteeing quality targets across the different abstraction levels, and how the customer may monitor the use of services and eventual failures. Before cloud computing, service providers dominated the entire chain of information, which enabled them to manage the business globally, avoid financial losses, and increase profits. With the use of cloud computing services, the customer has no control over the virtualization levels that support the level at which it operates. A client at the infrastructure level can be a data provider, for instance. Thus, it is important to have appropriate tools to keep track of the performance of the contracted services. Cloud computing is heavily dependent on computer networks. In this sense, it is a business differentiator to provide network performance metrics to customers, an important non-functional requirement that is sometimes ignored by many cloud service providers. The availability of real-time performance metrics helps promote a trust relationship between customer and provider, and helps the provider better dimension resources to avoid financial losses. The object of this doctoral thesis is to develop a model for reliable billing of cloud computing services that performs network traffic anomaly detection and notification in real time, and that enables estimating the cost such anomalies cause to the business model.
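Real-time detection of traffic anomalies, as called for in this abstract, can be sketched with a trailing-window z-score detector. This is an assumption-laden stand-in (synthetic counters, arbitrary window and threshold), not the thesis's detection model:

```python
from statistics import mean, stdev

def flag_anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose z-score against the trailing window exceeds the threshold."""
    flagged = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Synthetic per-second traffic counters (units are hypothetical) with one spike.
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98, 500, 101, 99]
print(flag_anomalies(traffic))  # [10] — only the spike is flagged
```

Flagging an anomaly and timestamping it is exactly the hook a reliable-billing model needs: the detected interval can be priced and, if it reflects an SLA violation, discounted from the invoice.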
|
49 |
An approach for profiling distributed applications through network traffic analysis. Vieira, Thiago Pereira de Brito 05 March 2013 (has links)
Previous issue date: 2013-03-05 / Distributed systems have been adopted for building modern Internet services and cloud computing infrastructures, in order to obtain services with high performance, scalability, and reliability. Cloud computing SLAs require a short time to identify, diagnose, and solve problems in a production cloud computing infrastructure, in order to avoid negative impacts on the quality of service provided to its clients. Thus, detecting the causes of errors, and diagnosing and reproducing errors, are challenges that motivate efforts towards the development of less intrusive mechanisms for monitoring and debugging distributed applications at runtime. Network traffic analysis is one option for measuring distributed systems, although there are limitations in the capacity to process large amounts of network traffic in a short time, and in the scalability to process network traffic under varying resource demand. The goal of this dissertation is to analyze the processing capacity problem of measuring distributed systems through network traffic analysis, in order to evaluate the performance of distributed systems in a data center, using commodity hardware and cloud computing services, in a minimally intrusive way. We propose a new approach based on MapReduce for deep inspection of distributed application traffic, in order to evaluate the performance of distributed systems at runtime using commodity hardware. In this dissertation we evaluate the effectiveness of MapReduce for a deep packet inspection algorithm, its processing capacity, completion time speedup, processing capacity scalability, and the behavior of the MapReduce phases when applied to deep packet inspection for extracting indicators of distributed applications. / Sistemas distribuídos têm sido utilizados na construção de modernos serviços da Internet e infraestruturas de computação em nuvem, com o intuito de obter serviços com alto desempenho, escalabilidade e confiabilidade. Os acordos de níveis de serviço adotados pela computação na nuvem requerem um reduzido tempo para identificar, diagnosticar e solucionar problemas em sua infraestrutura, de modo a evitar que problemas gerem impactos negativos na qualidade dos serviços prestados aos seus clientes. Então, a detecção de causas de erros, o diagnóstico e a reprodução de erros provenientes de sistemas distribuídos são desafios que motivam esforços para o desenvolvimento de mecanismos menos intrusivos e mais eficientes, para o monitoramento e depuração de aplicações distribuídas em tempo de execução. A análise de tráfego de rede é uma opção para a medição de sistemas distribuídos, embora haja limitações na capacidade de processar grande quantidade de tráfego de rede em curto tempo, e na escalabilidade para processar tráfego de rede sob variação de demanda de recursos. O objetivo desta dissertação é analisar o problema da capacidade de processamento para mensurar sistemas distribuídos através da análise de tráfego de rede, com o intuito de avaliar o desempenho de sistemas distribuídos de um data center, usando hardware não especializado e serviços de computação em nuvem, de uma forma minimamente intrusiva. Nós propusemos uma nova abordagem baseada em MapReduce para inspecionar profundamente o tráfego de rede de aplicações distribuídas, com o objetivo de avaliar o desempenho de sistemas distribuídos em tempo de execução, usando hardware não especializado. Nesta dissertação nós avaliamos a eficácia do MapReduce para um algoritmo de inspeção profunda de pacotes, sua capacidade de processamento, o ganho no tempo de conclusão de tarefas, a escalabilidade na capacidade de processamento, e o comportamento seguido pelas fases do MapReduce, quando aplicado à inspeção profunda de pacotes, para extrair indicadores de aplicações distribuídas.
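The MapReduce formulation of packet inspection described in this abstract can be sketched in miniature: a map phase inspects each packet and emits key/value pairs, and a reduce phase aggregates them by key. The packet records and the choice of key (bytes per destination port) are synthetic illustrations, not the dissertation's implementation:

```python
from collections import defaultdict

# Synthetic packet records: (dst_port, payload_bytes).
packets = [(80, 1400), (443, 900), (80, 200), (5432, 300), (443, 100), (80, 400)]

def map_phase(packet):
    """Inspect one packet and emit a (key, value) pair — here, bytes keyed by port."""
    port, size = packet
    yield port, size

def reduce_phase(pairs):
    """Aggregate emitted pairs by key, as a MapReduce reducer would."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

pairs = [kv for pkt in packets for kv in map_phase(pkt)]
print(reduce_phase(pairs))  # {80: 2000, 443: 1000, 5432: 300}
```

Because each packet is mapped independently, the map phase parallelizes across commodity machines, which is the property the dissertation exploits to scale deep packet inspection with traffic volume.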
|
50 |
Segmentace obrazových dat pomocí hlubokých neuronových sítí / Image Segmentation with Deep Neural Network. Pazderka, Radek January 2019 (has links)
This master's thesis is focused on segmentation of traffic-environment scenes. The solution to this problem is segmentation neural networks, which enable classification of every pixel in the image. In this thesis, a segmentation neural network is created that achieves better results than current state-of-the-art architectures. This work also addresses segmentation of the top view of the road, for which no freely available annotated datasets exist. For this purpose, an automatic tool was created for generating synthetic datasets using the PC game Grand Theft Auto V. The work compares networks trained solely on synthetic data with networks trained on both real and synthetic data. Experiments prove that synthetic data can be used for segmenting data from the real environment. A system has also been implemented that enables working with segmentation neural networks.
|