  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

HIDRA: Hierarchical Inter-domain Routing Architecture

Clevenger, Bryan 01 May 2010 (has links) (PDF)
As the Internet continues to expand, the global default-free zone (DFZ) forwarding table has begun to grow faster than hardware can economically accommodate. Various policies are in place to mitigate this growth rate, but current projections indicate that policy alone is inadequate. As such, a number of technical solutions have been proposed. This work builds on many of these proposed solutions and furthers the debate surrounding the resolution of this problem. It discusses several design decisions necessary to any proposed solution and, based on these tradeoffs, proposes the Hierarchical Inter-Domain Routing Architecture (HIDRA), a comprehensive architecture with a plausible deployment scenario. The architecture uses a locator/identifier split encapsulation scheme to attenuate both the immediate size of the DFZ forwarding table and its projected growth rate. The solution builds on an existing number-allocation policy: Autonomous System Numbers (ASNs). HIDRA has been deployed to a sandbox network in a proof-of-concept test, yielding promising results.
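The core idea of a locator/identifier split with ASN locators can be sketched as follows. This is a minimal illustrative model, not HIDRA's actual implementation; the class and field names are assumptions. The point it shows is that the DFZ table needs only one entry per AS, however many edge prefixes that AS originates.

```python
# Hypothetical sketch of a locator/identifier split in the spirit of HIDRA:
# edge prefixes (identifiers) map to ASN locators, so the DFZ forwarding
# table holds one entry per ASN rather than one per prefix.

class MapAndEncapRouter:
    def __init__(self):
        self.id_to_locator = {}   # mapping system: identifier prefix -> ASN locator
        self.dfz_table = {}       # DFZ table: ASN locator -> next hop

    def register(self, prefix, asn, next_hop):
        self.id_to_locator[prefix] = asn
        self.dfz_table[asn] = next_hop   # one entry per ASN, not per prefix

    def forward(self, dest_prefix):
        asn = self.id_to_locator[dest_prefix]   # encapsulate toward the locator
        return self.dfz_table[asn]              # DFZ lookup is on the ASN only

router = MapAndEncapRouter()
router.register("198.51.100.0/24", 64512, "if0")
router.register("203.0.113.0/24", 64512, "if0")   # same AS: no new DFZ entry
print(len(router.dfz_table))  # 1 DFZ entry covers both prefixes
```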
92

Optimizing Peer Selection among Internet Service Providers (ISPs)

Mustafa, Shahzeb 01 January 2021 (has links)
Connections among Internet Service Providers (ISPs) form the backbone of the Internet, enabling communication across the globe. ISPs are represented as Autonomous Systems (ASes) in the global Internet, and inter-ISP traffic exchange takes place via inter-AS links, which are formed based on inter-ISP connections and agreements. In addition to customer-provider agreements, a crucial type of inter-ISP agreement is peering. ISP administrators use platforms like APNIC and NANOG networking events to establish new peering connections in accordance with their business and technical needs. Such methods are often inefficient and slow, potentially resulting in missed opportunities or sub-optimal routes; the process can take several months with excessive amounts of paperwork. We investigate developing tools and algorithms that can make inter-AS connection formation more dynamic and reliable by helping ISPs make informed decisions in line with their needs. We analyze the largest public datasets from CAIDA and PeeringDB to identify common trends and requirements that ISPs have in the context of peering. Using this analysis, we develop a simple yet effective peering predictor model that identifies ISP pairs showing promising signs of forming a good peering relationship. To motivate research and development in this area, we develop an Internet eXchange Point (IXP) emulator that ISP admins can use as a testbed for analyzing different peering policies within an IXP. We further extend our ideas about peering to wireless cellular networks, design a working wireless peering model, and present how optimal agreements can be reached and how the best wireless peering partners and areas can be chosen. With the exponential increase in traffic volume and dependency on the Internet, it is crucial that the underlying network is dynamic and robust. To this end, we address issues with peering from multiple angles and develop novel models for automation and optimization.
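A peering predictor of the kind described can be sketched with a simple pairwise scoring function. The features below (shared IXP presence and traffic similarity) are commonly cited peering criteria in PeeringDB data, but the specific formula and field names are illustrative assumptions, not the thesis's actual model.

```python
# Illustrative sketch of a simple peering predictor: score a pair of ASes
# by co-location at the same IXPs and by how similar their traffic
# volumes are (peers tend to exchange comparable amounts of traffic).

def peering_score(as_a, as_b):
    shared_ixps = len(set(as_a["ixps"]) & set(as_b["ixps"]))
    # traffic similarity in [0, 1]: ratio of smaller to larger volume
    lo, hi = sorted([as_a["traffic_gbps"], as_b["traffic_gbps"]])
    similarity = lo / hi if hi else 0.0
    return shared_ixps * similarity

isp1 = {"ixps": ["DE-CIX", "AMS-IX"], "traffic_gbps": 100}
isp2 = {"ixps": ["AMS-IX", "LINX"], "traffic_gbps": 80}
print(peering_score(isp1, isp2))  # 1 shared IXP * 0.8 similarity = 0.8
```

In a real pipeline the candidate pairs would come from the CAIDA/PeeringDB datasets mentioned above, and the score would feed a ranked list of suggested peers.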
93

Security mechanisms for multimedia networking

Tosun, Ali Saman 07 August 2003 (has links)
No description available.
94

ink - An HTTP Benchmarking Tool

Phelps, Andrew Jacob 15 June 2020 (has links)
The Hypertext Transfer Protocol (HTTP) is one of the foundations of the modern Internet. Because HTTP servers may be subject to unexpected periods of high load, developers use HTTP benchmarking utilities to simulate the load generated by users. However, many of these tools do not report performance details at a per-client level, which deprives developers of crucial insights into a server's performance capabilities. In this work, we present ink, an HTTP benchmarking tool that enables developers to better understand server performance. ink provides developers with a way of visualizing the level of service that each individual client receives. It does this by recording a trace of events for each individual simulated client. We also present a GUI that enables users to explore and visualize the data generated by an HTTP benchmark. Lastly, we present a method for running HTTP benchmarks that uses a set of distributed machines to scale up the achievable load on the benchmarked server. We evaluate ink by performing a series of case studies to show that ink is both performant and useful. We validate ink's load generation abilities within the context of a single machine and when using a set of distributed machines. ink is shown to be capable of simulating hundreds of thousands of HTTP clients and presenting per-client results through the ink GUI. We also perform a set of HTTP benchmarks where ink is able to highlight performance issues and differences between server implementations. We compare servers like NGINX and Apache and highlight their differences using ink. / Master of Science / The World Wide Web (WWW) uses the Hypertext Transfer Protocol to send web content such as HTML pages or video to users. The servers providing this content are called HTTP servers. Sometimes, the performance of these HTTP servers is compromised because a large number of users request documents at the same time.
To prepare for this, server maintainers test how many simultaneous users a server can handle by using benchmarking utilities. These benchmarking utilities work by simulating a set of clients. Currently, these tools focus only on the number of requests that a server can process per second. Unfortunately, this coarse-grained metric can hide important information, such as the level of service that individual clients received. In this work, we present ink, an HTTP benchmarking utility we developed that focuses on reporting information for each simulated client. Reporting data in this way allows the developer to see how well each client was served during the benchmark. We achieve this by constructing data visualizations that include a set of client timelines; each timeline represents the service that one client received. We evaluated ink through a series of case studies. These focus on the performance of the utility and the usefulness of the visualizations produced by ink. Additionally, we deployed ink in Virginia Tech's Computer Systems course. The students were able to use the tool and took a survey pertaining to their experience with it.
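The per-client tracing idea described above can be sketched in a few lines. This is an illustrative model of the approach, not ink's actual implementation; the event names and class structure are assumptions. The key property is that each simulated client keeps its own timeline rather than folding into a single aggregate counter.

```python
# Minimal sketch of per-client event tracing in the spirit of ink:
# each simulated client records a timeline of (timestamp, event) pairs
# instead of contributing only to an aggregate requests-per-second count.

import time

class ClientTrace:
    def __init__(self, client_id):
        self.client_id = client_id
        self.events = []  # (timestamp, event) timeline for this client

    def record(self, event):
        self.events.append((time.monotonic(), event))

def run_client(client_id, n_requests):
    trace = ClientTrace(client_id)
    for i in range(n_requests):
        trace.record(f"request-{i}-sent")
        trace.record(f"response-{i}-received")
    return trace

traces = [run_client(cid, 3) for cid in range(2)]
# per-client results survive: each timeline can be plotted separately
print([len(t.events) for t in traces])  # [6, 6]
```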
95

Social Networking for Learning in Higher Education: Capitalising on Social Capital

Hartley, Alison S., Kassam, A.A. January 2015 (has links)
Yes / This study explores the evolution of student-led social networking groups initiated and sustained by a cohort of undergraduate students over a 3-year time frame. The study contributes to this growing area of research by exploring the impact of peer-led, peer-supported informal learning through social media networks. Social capital is a useful lens through which to understand the findings, particularly in interpreting descriptions of the evolution of the group over time. The findings suggest that students build bridging social capital to support the transition into higher education, form relationships, and learn collaboratively through a large cohort-based Facebook group. Over time, this form of social capital and the use of the Facebook group decline due to a lack of perceived reciprocity and an increased perception of competitiveness amongst peers. However, this is accompanied by a subsequent rise in the building of bonding social capital between closer peer relationships, facilitated through the use of various WhatsApp groups. The findings have implications for considering how social networking might support the student journey towards more nuanced, more personalised collaborative learning and a move towards more self-directed learning.
96

Gurthang - A Fuzzing Framework for Concurrent Network Servers

Shugg, Connor William 13 June 2022 (has links)
The emergence of Internet-connected technologies has given the world a vast number of services easily reachable from our computers and mobile devices. Web servers are one of the dominant types of computer programs that provide these services to the world by serving files and computations to connected users. Because of their accessibility and importance, web servers must be robust to avoid exploitation by hackers and other malicious users. Fuzzing is a software testing technique that seeks to discover bugs in computer programs in an automated fashion. However, most state-of-the-art fuzzing tools (fuzzers) are incapable of fuzzing web servers effectively, due to their reliance on network connections to receive input and other unique constraints they follow. Past research exists to remedy this situation, and while they have had success, certain drawbacks are introduced in the process. To address this, we created Gurthang, a fuzzing framework that gives state-of-the-art fuzzers the ability to fuzz web servers easily, without having to modify source code, the web server's threading model, or fundamentally change the way a server behaves. We introduce novelty by providing the ability to establish and send data across multiple concurrent connections to the target web server in a single execution of a fuzzing campaign, thus opening the door to the discovery of concurrency-related bugs. We accomplish this through a novel file format and two shared libraries that harness existing state-of-the-art fuzzers. We evaluated Gurthang by performing a research study at Virginia Tech that yielded 48 discovered bugs among 55 web servers written by students. Participants utilized Gurthang to integrate fuzzing into their software development process and discover bugs. In addition, we evaluated Gurthang against Apache and Nginx, two real-world web servers. 
We did not discover any bugs on Apache or Nginx, but Gurthang successfully enabled us to fuzz them without needing to modify their source code. Our evaluations show Gurthang is capable of performing fuzz-testing on web servers and discovering real bugs. / Master of Science / The Internet is ubiquitous in our everyday lives. Since its creation, a wide variety of technologies and critical infrastructures have become accessible via the Internet. While this accessibility is a great boon for many, it does not come without risk. Web servers are one of the dominant types of computer programs that make the Internet what it is today; they are responsible for transmitting web pages and other files to connected users, as well as performing important computations per the user's request. Like any computer program, web servers contain bugs that may lead to vulnerabilities if exploited by a malicious user (a hacker). Considering they are open to all via the Internet, it is critical to catch and fix as many bugs as possible during a web server's development. Certain tools, called fuzzers, have been created to test computer programs in an automated fashion to discover bugs (called fuzzing, or fuzz-testing), although many of these fuzzers lack the ability to effectively test web servers due to the specific constraints a web server must follow. Previous research exists to fix this problem, but certain drawbacks are introduced in the process. To address this, we developed Gurthang, a fuzzing framework that gives state-of-the-art fuzzers the ability to test a variety of web servers, while also fixing some of these drawbacks and introducing a novel technique to test the concurrency aspects of a web server. We evaluated Gurthang against several web servers through a research study at Virginia Tech in which participating students performed fuzz-testing on web servers they implemented for their coursework. We discovered 48 bugs across 55 web servers through this study.
We also evaluated Gurthang against Apache and Nginx (two web servers frequently used in the real world) and showed Gurthang is capable of fuzzing them without the need to modify their source code.
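The multi-connection idea behind the framework can be sketched with a toy input format. The `conn_id|payload` line format below is a hypothetical illustration, not Gurthang's actual file format; it shows only the core concept: a single fuzz input encodes data for multiple concurrent connections, so one fuzzer execution can exercise concurrency in the target server.

```python
# Hypothetical sketch of a multi-connection fuzz input: each line tags a
# payload chunk with the connection it belongs to, so a harness can open
# several concurrent connections and replay the chunks interleaved.

def parse_multi_conn_input(blob):
    """Split a fuzz input into per-connection payload lists."""
    connections = {}
    for line in blob.splitlines():
        conn_id, _, payload = line.partition(b"|")
        connections.setdefault(conn_id, []).append(payload)
    return connections

blob = b"0|GET / HTTP/1.1\n1|GET /a HTTP/1.1\n0|Host: x\n"
conns = parse_multi_conn_input(blob)
print(sorted(conns))     # two interleaved connections: [b'0', b'1']
print(len(conns[b"0"]))  # connection 0 carries two chunks
```

A harness replaying `conns` over real sockets would send each list on its own connection, preserving the interleaving chosen by the fuzzer.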
97

Networking a podmínky jeho úspěšné realizace / Networking

Francová, Tereza January 2007 (has links)
The thesis focuses on the networking topic. The networking is defined and compared to other terms that are often used as a synonym, e.g. social capital. Three contexts are researched. The first study aims networking online, its specifics, advantages and possible risks. The second part studies selected correlations of networking activities of University of economics alumni. The third study searches out the networking activities and standards of Czech managers in the field of finance. The results are connected with the possible usage in both fields, the one of university education and management too.
98

ASSESSMENT OF DISAGGREGATING THE SDN CONTROL PLANE

Adib Rastegarnia (7879706) 20 November 2019 (has links)
Current SDN controllers have been designed based on a monolithic approach that integrates all services and applications into a single large program. The monolithic design of SDN controllers restricts programmers who build management applications to the specific programming interfaces and services that a given SDN controller provides, making application development dependent on the controller and thereby restricting portability of management applications across controllers. Furthermore, the monolithic approach means an SDN controller must be recompiled whenever a change is made, and does not provide an easy way to add new functionality or scale to handle large networks. To overcome the weaknesses inherent in the monolithic approach, the next generation of SDN controllers must use a distributed, microservice architecture that disaggregates the control plane by dividing the monolithic controller into a set of cooperative microservices. Disaggregation allows a programmer to choose a programming language that is appropriate for each microservice. In this dissertation, we describe steps taken towards disaggregating the SDN control plane, consider potential ways to achieve the goal, and discuss the advantages and disadvantages of each. We propose a distributed architecture that disaggregates controller software into a small controller core and a set of cooperative microservices. In addition, we present a software-defined network programming framework called Umbrella that provides a set of abstractions programmers can use to write SDN management applications independently of the northbound (NB) APIs that SDN controllers provide. Finally, we present an intent-based network programming framework called OSDF that provides a high-level, policy-based API for programming network devices using SDN.
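The intent-based layer described for OSDF can be sketched as a small compilation step. The function and field names below are illustrative assumptions, not OSDF's real API; the sketch shows only the general pattern: the application states *what* connectivity it wants, and a compiler service translates that into device-level rules.

```python
# Illustrative sketch of intent compilation: a high-level connectivity
# intent is translated into one match/action flow rule per switch on
# the chosen path, hiding device-level details from the application.

def compile_intent(intent):
    """Translate a high-level intent into per-hop flow rules."""
    rules = []
    for switch in intent["path"]:
        rules.append({
            "switch": switch,
            "match": {"src": intent["src"], "dst": intent["dst"]},
            "action": "forward",
        })
    return rules

intent = {"src": "10.0.0.1", "dst": "10.0.0.2", "path": ["s1", "s2", "s3"]}
rules = compile_intent(intent)
print(len(rules))  # one flow rule per switch on the path: 3
```

In a disaggregated design, this compiler would run as its own microservice, with device installation handled by a separate southbound service.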
99

Knowledge Sharing in Inter-Organizational Networks : An Evaluation of the Knowledge Sharing Processes in the SAPSA Network

Fröjdh, Karin, Brengesjö, Josef, Wenderholm, Kirsten January 2012 (has links)
This paper aims to discover the conditions and processes that facilitate and influence efficient knowledge transfer in knowledge networks such as the inter-organizational SAP network SAPSA. Knowledge is a strategically important resource for companies, not only because it fosters internal growth, but also because it leads to competitive advantage. In recent years the importance of knowledge networking has increased considerably, and inter-organizational learning in particular is considered a factor with critical influence on the success of a company. Through participation in networks, individuals are able to trade their knowledge and information for others' experiences, ideas, and expertise. Knowledge sharing and networking should hence be considered a highly social process, influenced by various factors and conditions. Through interviews with different members and participative observation in the focus groups of the SAPSA network, the importance and effect of these facilitating conditions were evaluated, leading to valuable conclusions on how to enhance the knowledge sharing process. The main problem of SAPSA was found to be low activity in the focus groups, which had a negative influence on the knowledge sharing processes. The problem, however, was not that members did not consider knowledge networking per se as useful; on the contrary, almost all respondents regarded knowledge networking as highly beneficial and stressed the advantages of knowledge sharing. This led to the assumption that the problem had to lie in the implementation of the knowledge sharing process. It was furthermore found that for sharing different kinds of knowledge, such as tacit and explicit knowledge, some forms of meeting proved more efficient than others, and that the form of knowledge and the conversion mode should be taken into consideration when deciding on the type of meeting.
Various conditions were found to affect the efficiency of the knowledge sharing process, such as an optimal group size, the level of trust and commitment, and the composition of the group and its knowledge base. Furthermore, communication was regarded as an important factor with a large impact on the quality of the knowledge exchange. Management support from SAPSA and the respective user companies proved essential for increasing motivation and commitment in the focus groups. Some strategic changes were considered to have a positive influence on the knowledge networking processes within SAPSA. The establishment of a clear, consistent vision capturing all the different groups within the network would help motivate members to participate. Here the focus should lie on the decision makers, since they are the ones able to set incentives and provide resources for the users. In this process, the difficulty of measuring the positive outcomes of knowledge networking, and the consequent danger of underinvestment in knowledge networking, should be taken into consideration. SAPSA should increase its influence on the focus groups and provide more guidance in order to assure the quality of the knowledge exchange in the meetings. A new communication strategy should be developed with a focus on an Internet-based forum where users and management could interact with each other. Further research in other knowledge networks is necessary to increase the transferability of the results.
100

Named Data Networking in Local Area Networks

Shi, Junxiao January 2017 (has links)
The Named Data Networking (NDN) is a new Internet architecture that changes the network semantic from packet delivery to content retrieval and promises benefits in areas such as content distribution, security, mobility support, and application development. While the basic NDN architecture applies to any network environment, local area networks (LANs) are of particular interest because of their prevalence on the Internet and the relatively low barrier to deployment. In this dissertation, I design NDN protocols and implement NDN software, to make NDN communication in LAN robust and efficient. My contributions include: (a) a forwarding behavior specification required on every NDN node; (b) a secure and efficient self-learning strategy for switched Ethernet, which discovers available contents via occasional flooding, so that the network can operate without manual configuration, and does not require a routing protocol or a centralized controller; (c) NDN-NIC, a network interface card that performs name-based packet filtering, to reduce CPU overhead and power consumption of the main system during broadcast communication on shared media; (d) the NDN Link Protocol (NDNLP), which allows the forwarding plane to add hop-by-hop headers, and provides a fragmentation-reassembly feature so that large NDN packets can be sent directly over Ethernet with limited MTU.
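Name-based forwarding, the mechanism underlying contributions (a) and (b), can be sketched as a longest-prefix match over hierarchical name components. This is a minimal illustrative model, not the dissertation's implementation; the names and face labels are assumptions.

```python
# Minimal sketch of NDN-style name-based forwarding: the FIB maps
# hierarchical name prefixes to faces, and lookup is longest-prefix
# match on name components (not on IP addresses).

class NameFib:
    def __init__(self):
        self.fib = {}  # name-prefix tuple -> outgoing face

    def insert(self, prefix, face):
        self.fib[tuple(prefix.strip("/").split("/"))] = face

    def lookup(self, name):
        parts = tuple(name.strip("/").split("/"))
        for i in range(len(parts), 0, -1):       # try longest prefix first
            if parts[:i] in self.fib:
                return self.fib[parts[:i]]
        return None  # no route: a self-learning strategy could flood here

fib = NameFib()
fib.insert("/edu/arizona", "face1")
fib.insert("/edu/arizona/cs", "face2")
print(fib.lookup("/edu/arizona/cs/ndn/paper"))  # longest match -> face2
```

The `None` branch is where a self-learning strategy of the kind described above would take over, discovering content by occasional flooding and then installing the learned prefix.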
