41

Residual Capsule Network

Bhamidi, Sree Bala Shruthi 08 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Convolutional Neural Networks (CNNs) have driven substantial progress in machine learning, but they come with their own set of drawbacks. Capsule Networks address several limitations of CNNs and have shown great improvement by explicitly modeling the pose and transformation of image features. Deeper networks are more powerful than shallow ones but are also more difficult to train; Residual Networks ease training and have shown that considerable depth can be reached with good accuracy. Putting the best of Capsule Networks and Residual Networks together, we present the Residual Capsule Network and the 3-Level Residual Capsule Network. The conventional convolutional layers in the Capsule Network are replaced by skip connections, as in Residual Networks, to decrease the complexity of the baseline Capsule Network and the seven-ensemble Capsule Network. We trained our models on the MNIST and CIFAR-10 datasets and observed a significant decrease in the number of parameters compared to the baseline models.
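The skip connection the abstract builds on can be illustrated with a minimal sketch (hypothetical weights and sizes, not the thesis's actual architecture): a residual block computes F(x) + x, so the layers only have to learn the residual mapping rather than the full transform.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Two-layer transform with an identity skip connection: relu(F(x) + x)."""
    h = relu(x @ w1)
    return relu(h @ w2 + x)  # the skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # batch of 4 feature vectors of width 8
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
out = residual_block(x, w1, w2)
print(out.shape)  # (4, 8) -- the skip connection requires matching shapes
```

In the thesis's setting, blocks like this replace the plain convolutional layer that precedes the capsule layers.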
42

Biologically-inspired Network Memory System for Smarter Networking

Mokhtar, Bassem Mahmoud Mohamed Ali 24 February 2014
Current and emerging large-scale networks, for example the current Internet and the future Internet of Things, target supporting billions of networked entities to provide a wide variety of services and resources. Such complexity results in network-data from different sources with special characteristics, such as widely diverse users and services, multiple media (e.g., text, audio, video, etc.), high-dimensionality (i.e., large sets of attributes) and various dynamic concerns (e.g., time-sensitive data). With huge amounts of network data with such characteristics, there are significant challenges to a) recognize emergent and anomalous behavior in network traffic and b) make intelligent decisions for efficient and effective network operations. Fortunately, numerous analyses of Internet traffic have demonstrated that network traffic data exhibit multi-dimensional patterns that can be learned in order to enable discovery of data semantics. We claim that extracting and managing network semantics from traffic patterns and building conceptual models to be accessed on-demand would help in mitigating the aforementioned challenges. The current Internet, contemporary networking architectures and current tools for managing large network-data largely lack capabilities to 1) represent, manage and utilize the wealth of multi-dimensional traffic data patterns; 2) extract network semantics to support Internet intelligence through efficiently building conceptual models of Internet entities at different levels of granularity; and 3) predict future events (e.g., attacks) and behaviors (e.g., QoS of unfamiliar services) based on learned semantics. We depict the limited utilization of traffic semantics in networking operations as the “Internet Semantics Gap (ISG)”. We hypothesize that endowing the Internet and next generation networks with a “memory” system that provides data and semantics management would help resolve the ISG and enable “Internet Intelligence”. 
We seek to enable networked entities, at runtime and on-demand, to systematically: 1) learn and retrieve network semantics at different levels of granularity related to various Internet elements (e.g., services, protocols, resources, etc.); and 2) utilize extracted semantics to improve network operations and services in various aspects ranging from performance, to quality of service, to security and resilience. In this dissertation, we propose a distributed network memory management system, termed NetMem, for Internet intelligence. NetMem's design is inspired by the functionalities of human memory: it efficiently stores Internet data, extracts and utilizes traffic-data semantics in matching and prediction processes, and builds a dynamic network-concept ontology (DNCO) at different levels of granularity. The DNCO provides dynamic behavior models for various Internet elements. Analogous to human memory, NetMem has a memory structure comprising short-term memory (StM) and long-term memory (LtM). StM maintains highly dynamic network data or data semantics at lower levels of abstraction for a short time, while LtM retains slower-varying semantics at higher levels of abstraction for a long time. Data maintained in NetMem can be accessed and learned at runtime and on-demand. From a system perspective, NetMem can be viewed as an overlay network of distributed "memory" agents, called NMemAgents, located at multiple levels and targeting different levels of data abstraction and scalable operation. Our main contributions are as follows:
• A biologically-inspired, customizable, application-agnostic distributed network memory management system with efficient processes for extracting and classifying high-level features and reasoning about rich semantics, in order to resolve the ISG and target Internet intelligence.
• A systematic methodology using monolithic and hybrid intelligence techniques for efficiently managing data semantics and building a runtime-accessible dynamic ontology of correlated concept classes related to various Internet elements, at different levels of abstraction and granularity, that facilitates: predicting future events and learning about new services; recognizing and detecting normal/abnormal and dynamic/emergent behavior of various Internet elements; and satisfying QoS requirements with better utilization of resources.
We have evaluated NetMem's efficiency and effectiveness employing different semantic reasoning algorithms. We have evaluated NetMem operations over real Internet traffic data with and without data dimensionality reduction techniques. We have demonstrated the scalability and efficiency of NetMem as a distributed multi-agent system using an analytical model. The effectiveness of NetMem has been evaluated through simulation using real offline data sets and via the implementation of a small practical test-bed. Our results show the success of NetMem in learning and using data semantics for anomaly detection and for enhancing the QoS satisfaction of running services.
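The StM/LtM split described above can be illustrated with a toy sketch (the class, capacity, and promotion threshold are hypothetical, not the actual NetMem implementation): volatile short-term entries are promoted to stable long-term storage once they recur often enough, mimicking how semantics with slower dynamics rise to higher abstraction levels.

```python
class TwoTierMemory:
    """Toy StM/LtM store: short-term entries are promoted to long-term
    memory once they have been observed `promote_after` times."""

    def __init__(self, stm_capacity=4, promote_after=3):
        self.stm = {}  # key -> (value, hit_count); small and volatile
        self.ltm = {}  # key -> value; stable, slower-varying semantics
        self.stm_capacity = stm_capacity
        self.promote_after = promote_after

    def observe(self, key, value):
        if key in self.ltm:
            return  # already consolidated
        _, hits = self.stm.get(key, (value, 0))
        hits += 1
        if hits >= self.promote_after:
            self.ltm[key] = value              # promotion: StM -> LtM
            self.stm.pop(key, None)
        else:
            if key not in self.stm and len(self.stm) >= self.stm_capacity:
                self.stm.pop(next(iter(self.stm)))  # FIFO eviction
            self.stm[key] = (value, hits)

mem = TwoTierMemory()
for _ in range(3):
    mem.observe("http_latency_pattern", "stable semantic")
print("http_latency_pattern" in mem.ltm)  # True -- promoted after 3 hits
```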
43

Role of a Small Switch in a Network-Based Data Acquisition System

Hildin, John 10 1900
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / Network switches are an integral part of most network-based data acquisition systems. Switches fall into the category of network infrastructure. They support the interconnection of nodes and the movement of data in the overall network. Unlike endpoints such as data acquisition units, recorders, and display modules, switches do not collect, store or process data. They are a necessary expense required to build the network. The goal of this paper is to show how a small integrated network switch can be used to maximize the value proposition of a given switch port in the network. This can be accomplished by maximizing the bandwidth utilization of individual network segments and minimizing the necessary wiring needed to connect all the network components.
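The paper's point about maximizing bandwidth utilization per switch port can be made concrete with a back-of-the-envelope sketch (the stream rates below are hypothetical): several acquisition streams aggregated by a small switch must fit within the capacity of the uplink segment they share.

```python
def uplink_utilization(stream_rates_mbps, uplink_mbps=1000.0):
    """Fraction of a switch uplink consumed by the aggregated endpoint streams."""
    return sum(stream_rates_mbps) / uplink_mbps

# Hypothetical DAU streams feeding one uplink port of a small switch.
streams = [120.0, 80.0, 250.0, 40.0]
u = uplink_utilization(streams)
print(f"{u:.2%}")  # 49.00% -- the segment carries four streams on one port
```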
44

NETWORK-BASED DISTRIBUTED DATA ACQUISITION AND RECORDING FOR SMALL SYSTEMS

Hildin, John 10 1900
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Some of the first applications of network-based data acquisition systems have been for large aircraft. These systems contained numerous network nodes including data acquisition units, switches, recorders, network management units, and others. One of the desirable aspects of a network-based system is the ability to scale it to meet increasing test requirements. These systems also lend themselves to scaling down to meet the testing needs of smaller test articles, which may call for fewer nodes and/or physically smaller components. The testing of smaller vehicles places slightly different requirements on the testing process; in general, there is a greater need for real-time analysis, flexibility and ad-hoc testing. This paper attempts to show how a small to medium sized test article can benefit from the same powerful, feature-rich network-based data acquisition and recording system used on larger programs, and how a smaller system can deliver on this promise without sacrificing performance or functionality.
45

Wireless medium access control protocols for real-time industrial applications

Kutlu, Akif January 1997
Wireless communication is the only solution for data transfer between mobile terminals accessing sensors and actuators in an industrial environment. The Controller Area Network (CAN) is a desirable solution for many industrial applications since it meets the requirements for real-time transfer of messages between systems. In situations where the use of a cable is not feasible, it is important and necessary to design wireless medium access control protocols for CAN that provide real-time communications. This thesis deals with the modelling, simulation and performance analysis of wireless medium access control protocols for CAN. The main issue is determining the prioritisation of messages in the wireless environment. To accomplish this, a Wireless Medium Access Control protocol called WMAC is first proposed for the distributed environment. Prioritisation in the WMAC protocol is achieved by timing the interframe gap: every message within the network is assigned a unique time period before its transmission. These individual time periods distinguish messages from each other and provide message priority. A second access method, the Remote Frame Medium Access Control (RFMAC) protocol, is proposed for the centralised wireless environment; since the central node organises the message traffic, prioritisation is accomplished automatically by the central node. Both protocols are evaluated using simulation techniques. A third access method, called Comb, is designed using an additional overhead consisting of a binary sequence, and prioritisation in this access method is managed by that overhead. Additionally, the interconnection of wireless nodes is investigated.
The results of the simulations and performance analysis show that the proposed protocols operating in the centralised and distributed environments are capable of supporting the prioritisation of the messages required for real-time industrial applications in a wireless environment.
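The interframe-gap mechanism can be sketched in a few lines (a simplification with hypothetical message names; the thesis evaluates the real protocol by simulation): each pending message waits a unique number of idle slots before transmitting, so the message with the shortest gap, i.e. the highest priority, always seizes the medium first.

```python
def arbitrate(pending):
    """WMAC-style arbitration sketch: `pending` maps message id -> interframe
    wait (idle slots before transmission; lower = higher priority). The
    message whose timer expires first wins the medium; the others back off."""
    return min(pending, key=pending.get)

# Hypothetical CAN-style messages with unique interframe gaps.
pending = {"engine_temp": 3, "brake_cmd": 1, "diag_log": 7}
print(arbitrate(pending))  # brake_cmd -- shortest interframe gap wins
```

Because every gap is unique, two ready stations can never start transmitting in the same slot, which is how the scheme preserves CAN's deterministic priority in a wireless setting.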
46

The Implications for Network Switch Design in a Networked FTI Data Acquisition System

Cranley, Nikki 10 1900
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / Switches are a critical component in any networked FTI data acquisition system in order to allow the forwarding of data from the DAU to the target destination devices such as the network recorder, PCM gateways, or ground station. Commercial off the shelf switches cannot meet the harsh operating conditions of FTI. This paper describes a hardware implementation of a crossbar switching architecture that meets the reliability and performance requirements of FTI equipment. Moreover, by combining the crossbar architecture with filtering techniques, the switch can be configured to achieve sophisticated forwarding operations. By way of illustration, a Gigabit network tap application is used to demonstrate the fundamental concepts of switching, forwarding, crossbar architecture, and filtering.
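The combination of crossbar forwarding and filtering can be sketched as a lookup plus a per-port predicate (the port numbers, VLAN field, and tap configuration below are hypothetical illustrations, not the paper's hardware design): a tap simply lists two egress ports for one ingress, and a filter narrows what each egress port sees.

```python
def crossbar_forward(frame, connections, filters):
    """Crossbar sketch: `connections` maps ingress port -> list of egress
    ports (a tap duplicates traffic by listing several); `filters` maps an
    egress port -> predicate deciding whether a frame is forwarded there."""
    out = []
    for egress in connections.get(frame["in_port"], []):
        keep = filters.get(egress, lambda f: True)  # default: forward all
        if keep(frame):
            out.append(egress)
    return out

connections = {1: [2, 3]}                    # port 1 tapped to ports 2 and 3
filters = {3: lambda f: f["vlan"] == 10}     # the tap port only sees VLAN 10
print(crossbar_forward({"in_port": 1, "vlan": 10}, connections, filters))  # [2, 3]
print(crossbar_forward({"in_port": 1, "vlan": 20}, connections, filters))  # [2]
```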
47

QoS in MPLS and IP Networks

Sabri, Gull Hussain January 2009
This thesis provides a broad overview of IP and MPLS technologies and routing protocols. The Internet architecture and the problems that arise in IP networks under different internet protocols are illustrated. Particular attention is given to demand-oriented real-time applications and data traffic with respect to QoS parameters in IP and MPLS networks. QoS guarantee parameters such as delay, jitter and throughput are evaluated against state-of-the-art study results, mainly for real-time applications in IP and MPLS networks. Finally, MPLS-TE implementation and operation are described and proposed as a means to achieve better network performance.
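The QoS parameters named above are easy to compute from packet timestamps; a minimal sketch (hypothetical timestamps, and a mean-absolute-difference jitter measure that simplifies the smoothed RFC 3550 estimator):

```python
def one_way_delays(send_times, recv_times):
    """Per-packet one-way delay from matched send/receive timestamps."""
    return [r - s for s, r in zip(send_times, recv_times)]

def mean_jitter(delays):
    """Jitter as the mean absolute difference between consecutive delays
    (a simplification of the RFC 3550 smoothed interarrival jitter)."""
    diffs = [abs(b - a) for a, b in zip(delays, delays[1:])]
    return sum(diffs) / len(diffs)

send = [0.0, 20.0, 40.0, 60.0]   # ms, hypothetical
recv = [5.0, 26.0, 44.0, 67.0]   # ms, hypothetical
d = one_way_delays(send, recv)   # [5.0, 6.0, 4.0, 7.0]
print(mean_jitter(d))            # (1 + 2 + 3) / 3 = 2.0 ms
```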
48

Modeling carrier collaboration in freight networks

Voruganti, Avinash 2009 August 1900
This work presents two mechanisms for modeling alliance formation between leader carriers in a freight network for more efficient utilization of their resources: partial collaboration and complete collaboration. The performance of these alliance-formation mechanisms is compared against the no-collaboration case for various network topologies and demand levels. In the partial collaboration case, each leader carrier first maximizes his individual profit and leases out the residual capacity to other carriers. In the complete collaboration case, all leader carriers join together to maximize the profit of the alliance. The profits are then distributed among the alliance members using the Shapley value principle. Numerical tests reveal that the topology of the network and the demand levels play an important role in determining the profits from each collaboration mechanism, and that each of these factors also plays a major role in determining the best collaboration strategy.
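The Shapley value used for profit distribution averages each carrier's marginal contribution over all orders in which the alliance could form. A minimal exact implementation (the coalition profits below are hypothetical, not the thesis's numbers):

```python
from itertools import permutations

def shapley(players, value):
    """Exact Shapley value: average each player's marginal contribution
    over all join orders. `value` maps frozenset coalitions to profits."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += value[with_p] - value[coalition]  # marginal contribution
            coalition = with_p
    return {p: v / len(orders) for p, v in phi.items()}

# Hypothetical stand-alone and alliance profits for two leader carriers.
v = {frozenset(): 0.0,
     frozenset({"A"}): 60.0,
     frozenset({"B"}): 40.0,
     frozenset({"A", "B"}): 120.0}
print(shapley(["A", "B"], v))  # {'A': 70.0, 'B': 50.0}
```

Each carrier receives its stand-alone profit plus an equal split of the 20-unit synergy, which is why the Shapley division is seen as a fair way to share alliance gains. Note the exact computation enumerates all n! orders, so it only suits small alliances.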
49

Analysis of the Mobile Number Portability Policy in the Telecom Market with or without Price Discrimination

邱惠蘭, Chiou, Hui Lan Unknown Date
We attempt to analyze why the adoption of the mobile number portability policy has little or no effect on encouraging competition in the telecommunications market. The cause is related to network externality. The level of network externality can be characterized by the proportion of an individual's friends who subscribe to the same carrier as the individual. We find that such network externality may inhibit competition in the telecommunications market when termination-based pricing prevails. When termination-based pricing is prohibited, carriers cannot take advantage of network externality. We characterize the conditions under which, without termination-based pricing, carriers become more competitive and consumers benefit more than with termination-based prices. Our study provides insightful implications for how to effectively impose the mobile number portability policy to improve competition in the telecommunications market.
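The externality effect can be illustrated with a toy fixed-point sketch (the logit demand form, base utilities, and externality coefficient are all illustrative assumptions, not the paper's model): a consumer's utility from carrier i includes a term proportional to that carrier's market share, so an externality amplifies an initial quality advantage.

```python
import math

def market_shares(base_utility, alpha, steps=50):
    """Iterate logit-style shares where utility from carrier i is
    base_utility[i] + alpha * share[i]; alpha is the externality strength."""
    n = len(base_utility)
    share = [1.0 / n] * n
    for _ in range(steps):
        u = [b + alpha * s for b, s in zip(base_utility, share)]
        z = [math.exp(x) for x in u]
        total = sum(z)
        share = [x / total for x in z]
    return share

s_none = market_shares([1.0, 1.2], alpha=0.0)  # no externality
s_high = market_shares([1.0, 1.2], alpha=2.0)  # strong externality
print(s_none, s_high)  # the leading carrier's share grows with alpha
```

The stronger the externality, the more the slightly better carrier dominates, which hints at why number portability alone may not restore competition.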
50

Network on Chip : Performance Bound and Tightness

Zhao, Xueqian January 2015
Featuring good scalability, modularity and large bandwidth, Network-on-Chip (NoC) has been widely applied in manycore Chip Multiprocessor (CMP) and Multiprocessor System-on-Chip (MPSoC) architectures. The provision of guaranteed service emerges as an important NoC design problem due to application requirements on Quality-of-Service (QoS). Formal analysis of performance bounds plays a critical role in ensuring guaranteed service in NoCs by giving insight into how design parameters impact network performance. This thesis proposes analysis methods for delay and backlog bounds based on Network Calculus (NC). Building on xMAS (eXecutable Micro-Architectural Specification), a formal framework for modeling communication fabrics, a delay bound analysis procedure is presented using NC. A micro-architectural xMAS representation of a canonical on-chip router is proposed, capturing both the data flow and the control flow. Furthermore, a well-defined xMAS model for a specific application on an NoC can be created from network and flow knowledge and then mapped to a corresponding NC analysis model for end-to-end delay bound calculation. The xMAS model effectively bridges the gap between the informal NoC micro-architecture and the formal analysis model. Besides delay bounds, the analysis of backlog bounds is also crucial for predicting buffer dimensioning boundaries in on-chip Virtual Channel (VC) routers. In this thesis, basic buffer use cases are identified, with corresponding analysis models proposed to decompose the complex flow contention in a network. We then develop a topology-independent analysis technique to carry out the backlog bound analysis step by step, and algorithms are developed to automate this procedure. Accompanying the analysis of performance bounds, tightness evaluation is an essential step to ensure the validity of the analysis models.
However, this evaluation process is often a tedious, time-consuming, manual simulation process in which many simulation parameters may have to be configured before the simulations run. In this thesis, we develop a heuristics-aided tightness evaluation method for the analytical delay and backlog bounds. The tightness evaluation is abstracted as constrained optimization problems with the objectives formulated as implicit functions of the system parameters. Based on these well-defined problems, heuristics can guide a fully automated configuration-search process that incorporates cycle-accurate, bit-accurate simulations. As an example heuristic, Adaptive Simulated Annealing (ASA) is adopted to guide the search in the configuration space. Experimental results indicate that the NC-based performance analysis models give tight results, which are effectively found by the heuristics-aided evaluation process even when the model has a multidimensional discrete search space and complex constraints. To facilitate xMAS modeling and the corresponding validation of the performance analysis models, the thesis presents an xMAS tool developed in Simulink. It provides a friendly graphical interface for xMAS modeling and parameter configuration based on the powerful Simulink modeling environment. Hierarchical model build-up and Verilog-HDL code generation are supported to manage complex models and conduct simulations. Owing to the synthesizable xMAS library and its good extensibility, this xMAS tool shows promise for application-specific NoC design based on xMAS components.
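For the simplest Network Calculus setting, the bounds the thesis analyzes have closed forms: a flow constrained by a token-bucket arrival curve α(t) = σ + ρt crossing a rate-latency server β(t) = R·max(t − T, 0) has delay bound T + σ/R and backlog bound σ + ρT. A small sketch (the flow and router parameters are hypothetical; the thesis's xMAS-based models are far more detailed):

```python
def nc_bounds(sigma, rho, R, T):
    """NC bounds for a (sigma, rho) arrival curve through a rate-latency
    server beta(t) = R * max(t - T, 0), assuming stability (rho <= R)."""
    assert rho <= R, "flow rate must not exceed service rate"
    delay_bound = T + sigma / R       # max horizontal deviation of the curves
    backlog_bound = sigma + rho * T   # max vertical deviation of the curves
    return delay_bound, backlog_bound

# Hypothetical NoC flow: burst of 16 flits at 0.2 flits/cycle, router
# offering 1 flit/cycle after a 4-cycle latency.
d, b = nc_bounds(sigma=16, rho=0.2, R=1.0, T=4)
print(d, b)  # 20.0 16.8
```

Tightness evaluation then asks how close simulated worst-case delay and occupancy come to these analytical values.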
