About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
21

Assessment and reduction of the impacts of large freight vehicles on urban traffic corridor performance

Ramsay, Euan Douglas January 2007 (has links)
Increasing demand for road freight has led to widespread adoption of more productive large freight vehicles (LFVs), such as B-Doubles, by Australia's road freight industry. Individual LFVs have a greater potential to impact traffic efficiency through their greater length and poorer longitudinal performance. However, this is offset to an extent because fewer vehicles are required to perform a given freight task on a tonne-km basis. This research developed a means of characterising the effects that large freight vehicles have on the performance of an urban arterial corridor managed by signalised intersections. A corridor-level microsimulation model was developed from first principles, modelling the longitudinal performance of individual vehicles to a greater accuracy than most existing traffic simulation software. The model was calibrated from traffic counts and GPS-equipped chase-car surveys conducted on an urban arterial corridor in Brisbane's southern suburbs. It was applied to various freight policy and traffic management scenarios, including freight vehicle mode choice, lane utilisation and traffic signal settings, as well as the effectiveness of green-time extension for approaching heavy vehicles. Benefits were quantified in terms of reduced travel times and stop rates for both heavy and light vehicles in urban arterial corridors.
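The car-following core of such a microsimulation can be sketched as follows. This is a minimal illustration using the Intelligent Driver Model as a stand-in for the thesis's own longitudinal model; all parameter values are illustrative, not taken from the thesis:

```python
import math

def idm_accel(v, dv, gap, v0=16.7, T=1.5, a_max=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration (m/s^2).
    v: own speed (m/s); dv: closing speed on the leader (m/s);
    gap: bumper-to-bumper gap to the leader (m)."""
    s_star = s0 + v * T + (v * dv) / (2 * math.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** 4 - (max(s_star, 0.0) / gap) ** 2)

def step(positions, speeds, dt=0.5):
    """Advance a single-lane platoon one time step; index 0 is the leader."""
    accels = [0.0]  # the leader holds its speed in this toy scenario
    for i in range(1, len(positions)):
        gap = positions[i - 1] - positions[i] - 5.0  # assume 5 m vehicle length
        dv = speeds[i] - speeds[i - 1]
        accels.append(idm_accel(speeds[i], dv, gap))
    for i in range(len(positions)):
        speeds[i] = max(0.0, speeds[i] + accels[i] * dt)
        positions[i] += speeds[i] * dt
    return positions, speeds

# Three-vehicle platoon with 45 m gaps, all travelling at 15 m/s
pos, spd = [100.0, 50.0, 0.0], [15.0, 15.0, 15.0]
for _ in range(20):
    pos, spd = step(pos, spd)
```

In such a framework, an LFV would be represented by a greater length and a lower maximum acceleration `a_max`, which is how poorer longitudinal performance propagates into corridor-level delay.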
22

Síťová sonda / Network Probe

Tkáč, Peter Unknown Date (has links)
The concern of this thesis is the analysis and comparison of network probes. The thesis is dedicated to open-source network probe solutions that are available under public licenses. The first part describes the architecture and operating principles of network probes. The next part describes each probe and its functions. The last part contains a description of the network probe solution and the principles of its operation.
23

Automatic Forensic Analysis of PCCC Network Traffic Log

Senthivel, Saranyan 09 August 2017 (has links)
Most SCADA devices have few built-in self-defense mechanisms and tend to implicitly trust communications received over the network. Therefore, monitoring and forensic analysis of network traffic is a critical prerequisite for building an effective defense around SCADA units. In this thesis work, we provide a comprehensive forensic analysis of network traffic generated by the PCCC (Programmable Controller Communication Commands) protocol and present a prototype tool capable of extracting both updates to programmable logic and crucial configuration information. The results of our analysis show that more than 30 files, including configuration and data files, are transferred to/from the PLC when downloading/uploading a ladder logic program using RSLogix programming software. Interestingly, when RSLogix compiles a ladder-logic program, it does not create any low-level representation of a ladder-logic file. However, the low-level ladder logic is present and can be extracted from the network traffic log using our prototype tool. The tool also extracts the SMTP configuration from the network log and parses it to obtain email addresses, the username, and the password. The network log contains the password in plain text.
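The kind of pattern matching such a log-parsing tool performs can be sketched as below. The payload layout here is hypothetical (the real PCCC framing and RSLogix field names are not reproduced); it only illustrates recovering a plaintext SMTP configuration from captured bytes:

```python
import re

# Hypothetical field layout; the actual PCCC/RSLogix encoding is not shown here.
SMTP_FIELDS = re.compile(
    rb"server=(?P<server>[^;]+);user=(?P<user>[^;]+);"
    rb"pass=(?P<password>[^;]+);to=(?P<to>[^;]+)"
)

def extract_smtp_config(payload: bytes) -> dict:
    """Scan a captured payload for a plaintext SMTP configuration block."""
    m = SMTP_FIELDS.search(payload)
    if m is None:
        return {}
    return {k: v.decode("ascii", "replace") for k, v in m.groupdict().items()}

captured = (b"\x00\x0f...server=mail.example.com;user=plc01;"
            b"pass=hunter2;to=ops@example.com...")
config = extract_smtp_config(captured)
```

The point of the sketch is the forensic observation itself: because the configuration travels unencrypted, a passive observer with a traffic log can recover the credentials with nothing more sophisticated than byte-level pattern matching.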
24

Comparing Expected and Real-Time Spotify Service Topology

Visockas, Vilius January 2012 (has links)
Spotify is a music streaming service that allows users to listen to their favourite music. Due to the rapid growth in the number of users, the amount of processing that must be provided by the company's data centers is also growing. This growth in the data centers is necessary despite the fact that much of the music content is actually sourced from other users based on a peer-to-peer model. Spotify's backend (the infrastructure that Spotify operates to provide its music streaming service) consists of a number of different services, such as track search, storage, and others. As this infrastructure grows, some services may not behave as expected. Therefore it is important not only for Spotify's operations team (also known as the Service Reliability Engineers (SRE) team), but also for developers, to understand exactly how the various services are actually communicating. The problem is challenging because of the scale of the backend network and its rate of growth. In addition, the company aims to grow and expects to expand both the number of users and the amount of content that is available. A steadily increasing feature set and support for additional platforms add to the complexity. Another major challenge is to create tools which are useful to the operations team by providing information in a readily comprehensible way, and hopefully to integrate these tools into their daily routine. The ultimate goal is to design, develop, implement, and evaluate a tool which helps the operations team (and developers) understand the behavior of the services deployed on Spotify's backend network. Most critically, the tool should alert operations staff when services are not operating as expected. Because different services are deployed on different servers, the communication between these services is reflected in the network communication between these servers.
In order to understand how the services are behaving when there are potentially many thousands of servers, we look for patterns in the topology of this communication rather than looking at individual servers. This thesis describes the tools that successfully extract these patterns in the topology and compare them to the expected behavior.
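One way to extract such topology-level patterns is to collapse server-to-server flows into service-to-service edges. The sketch below assumes a hostname-to-service mapping is available from a machine database; all hostnames and service names are illustrative, not Spotify's actual infrastructure:

```python
from collections import Counter

def service_topology(flows, server_to_service):
    """Collapse server-to-server flows into a service-level topology.
    flows: iterable of (src_server, dst_server, n_bytes) records.
    server_to_service: mapping of hostname -> service name.
    Returns a Counter of (src_service, dst_service) -> total bytes."""
    edges = Counter()
    for src, dst, n_bytes in flows:
        s = server_to_service.get(src, "?")
        d = server_to_service.get(dst, "?")
        if s != d:  # intra-service chatter is not part of the topology
            edges[(s, d)] += n_bytes
    return edges

# Illustrative hostnames and services
hosts = {"a1": "access-point", "a2": "access-point", "s1": "search", "st1": "storage"}
flows = [("a1", "s1", 1200), ("a2", "s1", 800), ("a1", "st1", 5000), ("s1", "a1", 300)]
topo = service_topology(flows, hosts)
```

Comparing the resulting edge set against the expected service dependency graph then reduces "is the backend behaving?" to a set-difference over a few dozen service pairs rather than an inspection of thousands of servers.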
25

Collecting and analyzing Tor exit node traffic

Jonsson, Torbjörn, Edeby, Gustaf January 2021 (has links)
Background. With increased Internet usage occurring across the world, journalists, dissidents and criminals have moved their operations online, and in turn, governments and law enforcement have increased their surveillance of their countries' networks. This has increased the popularity of programs that mask users' identities online, such as the Tor Project. By encrypting the traffic and routing it through several nodes, the user's identity is hidden. But how are Tor users utilizing the network, and is any of it sent in plain text despite the dangers of doing so? How has the usage of Tor changed compared to 11 years ago? Objectives. The thesis objective is to analyze captured Tor network traffic to reveal what data is sent through the network. The collected data helps draw conclusions about Tor usage and is compared with previous studies. Methods. Three Tor exit nodes were set up and operated for one week in the US, Germany, and Japan. We deployed packet sniffers performing deep packet inspection on each traffic flow to identify attributes such as the application protocol, the number of bytes sent in a flow, and the content type if the traffic was sent in plain text. All stored data was anonymized. Results. The results show that 100.35 million flows were recorded, with 32.47% of them sending 4 or fewer packets in total. The most used application protocol was TLS, with 55.03% of total traffic. HTTP usage was 15.91%, and 16% was unknown protocol(s). The country receiving the most traffic was the US, with over 45% of all traffic, followed by the Netherlands, the UK, and Germany, each with less than 10% of recorded traffic as its destination. The most frequently used destination ports were 443 at 49.5%, 5222 at 12.7%, 80 at 11.9%, and 25 at 9.3%. Conclusions. The experiment shows that it is possible to perform traffic analysis on the Tor network and acquire significant data. It shows that the Tor network is widely used across the world, with the US and Europe accounting for most of the traffic. As expected, there has been a shift from HTTP to HTTPS traffic compared to previous research. However, there is still unencrypted traffic on the network, some of which could be explained by automated tools like web crawlers. Tor users need to increase their awareness of what traffic they send through the network, as a user with malicious intent can perform the same experiment and potentially acquire unencrypted sensitive data.
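The flow-level aggregation behind such results can be sketched as follows; the flow records and the 4-packet threshold mirror the statistics reported above, but the record format itself is illustrative, not the thesis's actual capture schema:

```python
from collections import Counter

def protocol_shares(flows):
    """flows: iterable of (protocol, n_packets) records from a capture.
    Returns each protocol's share of the recorded flows, i.e. the kind of
    aggregate (TLS vs HTTP vs unknown) reported in the results above."""
    counts = Counter(proto for proto, _ in flows)
    total = sum(counts.values())
    return {proto: n / total for proto, n in counts.items()}

def short_flow_fraction(flows, threshold=4):
    """Fraction of flows that sent `threshold` or fewer packets in total."""
    flows = list(flows)
    return sum(1 for _, n in flows if n <= threshold) / len(flows)

# Illustrative sample of (protocol, packet count) flow records
sample = [("TLS", 120), ("TLS", 3), ("HTTP", 15), ("unknown", 2), ("TLS", 40)]
shares = protocol_shares(sample)
short = short_flow_fraction(sample)
```

On a real capture, the same two passes over per-flow records suffice to reproduce the protocol breakdown and the short-flow statistic; only the ingestion of the sniffer's output format differs.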
26

Encrypted Traffic Analysis on Smart Speakers with Deep Learning

Kennedy, Sean M. 21 October 2019 (has links)
No description available.
27

Validation and Evaluation of Emergency Response Plans through Agent-Based Modeling and Simulation

Helsing, Joseph 05 1900 (has links)
Biological emergency response planning plays a critical role in protecting the public from the possibly devastating results of sudden disease outbreaks. These plans describe the distribution of medical countermeasures across a region using limited resources within a restricted time window. Thus, the ability to determine that such a plan will be feasible, i.e. successfully provide service to affected populations within the time limit, is crucial. Many current efforts to validate plans take the form of live drills and training, but those may not test plan activation at the appropriate scale or with sufficient numbers of participants. This necessitates the use of computational resources to aid emergency managers and planners in developing and evaluating plans before they must be used. Current emergency response plan generation software packages, such as RE-PLAN or RealOpt, provide rate-based validation analyses. However, these types of analysis may neglect details of real-world traffic dynamics. Therefore, this dissertation presents Validating Emergency Response Plan Execution Through Simulation (VERPETS), a novel computational system for the agent-based simulation of biological emergency response plan activation. This system converts raw road network, population distribution, and emergency response plan data into a format suitable for simulation, and then performs these simulations using SUMO (Simulation of Urban MObility) to simulate realistic traffic dynamics. Additionally, high-performance computing methodologies were utilized to decrease agent load in the simulations and improve performance. Further strategies, such as the use of agent scaling and a time limit on simulation execution, were also examined. Experimental results indicate that the time to plan completion, i.e. the time when all individuals of the population have received medication, as determined by VERPETS aligned well with current alternate methodologies. It was also determined that traffic congestion at the point of dispensing (POD) itself was one of the major factors affecting the completion time of the plan, which allowed for more rapid calculation of plan completion times. Thus, this system provides not only a novel methodology for validating emergency response plans, but also a validation of other current strategies for emergency response plan validation.
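The dominance of dispensing-site congestion over completion time can be illustrated with a toy queueing model. This is not VERPETS or SUMO, just a minimal sketch under assumed arrival and service rates showing why POD capacity drives the plan's finish time:

```python
import heapq

def pod_completion_time(n_agents, arrival_gap, n_lanes, service_time):
    """Toy dispensing-site (POD) queue: agents arrive every `arrival_gap`
    minutes and are served by `n_lanes` parallel lanes, each taking
    `service_time` minutes per agent. Returns the time at which the last
    agent has received medication."""
    lanes = [0.0] * n_lanes  # time at which each lane next becomes free
    heapq.heapify(lanes)
    finish = 0.0
    for i in range(n_agents):
        arrival = i * arrival_gap
        free = heapq.heappop(lanes)
        start = max(arrival, free)  # queueing delay when all lanes are busy
        done = start + service_time
        heapq.heappush(lanes, done)
        finish = max(finish, done)
    return finish

# In this saturated example, halving the number of lanes roughly
# doubles the plan completion time, regardless of how fast agents arrive.
fast = pod_completion_time(1000, 0.2, 8, 2.0)
slow = pod_completion_time(1000, 0.2, 4, 2.0)
```

Once arrivals exceed service capacity, completion time is governed almost entirely by total service demand divided by lane capacity, which is why a congestion-aware estimate of POD throughput permits rapid completion-time calculations.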
28

Macroscopic Traffic Safety Analysis Based On Trip Generation Characteristics

Siddiqui, Chowdhury 01 January 2009 (has links)
Recent research has shown that incorporating roadway safety into transportation planning is one of the active approaches to improving safety. Aggregate-level analysis for predicting crash frequencies is considered an important step in this process. As seen in previous studies, various categories of macro-level predictors (census blocks, traffic analysis zones, census tracts, wards, counties, and states) have been explored to find appropriate correlations with crashes. This study contributes to this ongoing macro-level road safety research by investigating various trip productions and attractions, along with roadway characteristics, within traffic analysis zones (TAZs) of four counties in the state of Florida. Crashes occurring in 1,349 TAZs in Hillsborough, Citrus, Pasco, and Hernando counties during the years 2005 and 2006 were examined. The selected counties represent both urban and rural environments. To understand the prevalence of various trip attraction and production rates per TAZ, the Euclidean distance was calculated between the centroid of the TAZ containing a particular crash and the centroid of the ZIP area containing the at-fault driver's home address. It was found that almost all crashes in Hernando and Citrus County for the years 2005-2006 took place within about a 27-mile radius centered on the at-fault drivers' homes. About sixty-two percent of crashes occurred at a distance of between 2 and 10 miles from the homes of the drivers at fault in those crashes. These results indicate that home-based trips may be more associated with crashes, and trip-related model estimates, found significant at the 95% confidence level, supported this hypothesis. Previous aggregate-level road safety studies have widely used the negative binomial distribution of crashes.
Properties such as non-negative integer counts, a non-normal distribution, and over-dispersion in the data make the negative binomial technique suitable, and it was selected to build the crash prediction models in this research. The four response variables, aggregated at the TAZ level, were the total number of crashes, severe (fatal and severe-injury) crashes, total crashes during peak hours, and pedestrian- and bicycle-related crashes. For each response, separate models were estimated using four different sets of predictors: i) various trip variables, ii) total trip production and total trip attraction, iii) road characteristics, and iv) all predictors together. It was found that the total crash model and the peak-hour crash model were best estimated by the total trip productions and total trip attractions. On the basis of log-likelihoods, deviance value/degrees of freedom, and Pearson chi-square value/degrees of freedom, the severe crash model was best fit by the trip-related variables only, and the pedestrian- and bicycle-related crash model was best fit by the road-related variables only. The significant trip-related variables in the severe crash models were home-based work attractions, home-based shop attractions, light truck productions, heavy truck productions, and external-internal attractions. Only two variables, the sum of roadway segment lengths with a 35 mph speed limit and the number of intersections per TAZ, were found significant for the pedestrian- and bicycle-related crash model developed using road characteristics only. The 1,349 TAZs were grouped into three clusters based on the quartile distribution of trip generations, termed less-tripped, moderately-tripped, and highly-tripped TAZs. It was hypothesized that separate models developed for these clusters would provide a better fit, as the clustering process increases homogeneity within a cluster.
The cluster models were re-run using the significant predictors obtained from the joint models and were compared with the previous sets of models. However, the differences in model fit (in terms of Akaike's Information Criterion values) were not significant. This study points to different approaches for predicting crashes at the zonal level. This research adds to the literature on macro-level crash modeling by taking various trip-related data into account, as previous zone-level safety studies have not explicitly considered trip data as explanatory covariates.
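The suitability of the negative binomial model for over-dispersed crash counts can be illustrated by simulation: drawing a zone's expected crash rate from a gamma distribution and its count from a Poisson yields a marginal negative binomial, whose variance (mean + alpha * mean^2) exceeds its mean whenever alpha > 0. All numbers below are illustrative, not from the study:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler, adequate for small means."""
    L = math.exp(-max(lam, 1e-12))
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_crash_counts(n_zones, mean, alpha, seed=7):
    """Gamma-Poisson mixture: each zone's rate is gamma-distributed
    (shape 1/alpha, scale mean*alpha), so counts are marginally
    negative binomial with Var = mean + alpha * mean**2."""
    rng = random.Random(seed)
    return [poisson(rng, rng.gammavariate(1.0 / alpha, mean * alpha))
            for _ in range(n_zones)]

counts = simulate_crash_counts(5000, mean=5.0, alpha=0.5)
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / len(counts)
# With alpha = 0.5 the sample variance lands well above the sample mean,
# the over-dispersion a plain Poisson model cannot accommodate.
```

A Poisson model forces Var = mean, so when zone-level heterogeneity (here, the gamma-distributed rates) inflates the variance, the negative binomial's extra dispersion parameter is what restores a proper fit.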
29

A Holistic Study on Electronic and Visual Signal Integration for Efficient Surveillance

Li, Gang 11 August 2017 (has links)
No description available.
30

Statistical Analysis of Malformed Packets and Their Origins in the Modern Internet

Bykova, Marina 12 April 2002 (has links)
No description available.
