1 |
Models of synthetic programs proposed as performance evaluation tools in a computer network. Culp, David. January 2010.
Digitized by Kansas Correctional Industries
|
2 |
Performance evaluation of multicomputer networks for real-time computing. McHenry, John. 14 April 2009.
Real-time constraints place additional limitations on distributed memory computing systems. Message passing delay variance and maximum message delay are important aspects of such systems that are often neglected by performance studies. This thesis examines the performance of the spanning bus hypercube, dual bus hypercube, and torus topologies to understand their desirable characteristics for real-time systems. FIFO, TDM, and token passing link access protocols and several queueing priorities are studied to measure their effect on system performance. Finally, the contribution of the message parameters to the overall system delay is discussed. Existing analytic models are extended to study delay variance and maximum delay in addition to mean delay. These models separate the effects of node and link congestion, and thus provide a more accurate method for studying multicomputer networks. The SLAM simulation language is used to substantiate the analytic results for the mean and variance of message delay under the FIFO link access protocol, and to measure the message delay for the other link access protocols and queueing priorities. Both analytic and simulation results for the various topologies, protocols, priorities, and message parameters are presented. / Master of Science
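The quantities this kind of study measures are easy to illustrate. The sketch below is an assumption, not the thesis's SLAM model: it simulates a single FIFO link with Poisson message arrivals and a fixed transmission time, then reports the mean and variance of message delay, the two statistics the abstract emphasizes.

```python
import random
import statistics

def simulate_fifo_link(n_messages=10_000, arrival_rate=0.5, service_time=1.0, seed=1):
    """Single FIFO link: Poisson message arrivals, fixed transmission time.
    Returns the total delay (queueing + transmission) seen by each message."""
    rng = random.Random(seed)
    t = 0.0        # arrival clock
    free_at = 0.0  # time at which the link next falls idle
    delays = []
    for _ in range(n_messages):
        t += rng.expovariate(arrival_rate)  # next arrival
        start = max(t, free_at)             # queue if the link is busy
        free_at = start + service_time
        delays.append(free_at - t)
    return delays

delays = simulate_fifo_link()
mean_delay = statistics.mean(delays)
delay_variance = statistics.variance(delays)
```

At 50% link utilization the mean delay sits noticeably above the bare transmission time, and the variance captures exactly the real-time concern the abstract raises: two messages with the same parameters can see very different delays.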
|
3 |
A model for assessing and reporting network performance measurement in SANReN. Draai, Kevin. January 2017.
The performance measurement of a service provider network is an important activity. It is required for the smooth operation of the network as well as for reporting and planning. SANReN is a service provider tasked with serving the research and education network of South Africa. It currently has no structure or process for determining network performance metrics to measure the performance of its network. The objective of this study is to determine, through a process or structure, which metrics are best suited to the SANReN environment. The study is conducted in three phases: "Contextualisation", "Design", and "Verification". The "Contextualisation" phase includes the literature review, which provides the context for the problem area and also serves as a search function for the solution. The study adopts the design science research paradigm, which requires the creation of an artefact. The "Design" phase involves the creation of the conceptual network performance measurement model: this is the artefact, a generalised model for determining the network performance metrics for an NREN. To prove the utility of the model, it is implemented in the SANReN environment in the "Verification" phase. The network performance measurement model proposes a process for determining network performance metrics: gathering the NREN's requirements and goals, deriving the NREN's network design goals from these requirements, defining network performance metrics from these goals, evaluating the NREN's monitoring capability, and measuring what that capability allows. The model thus provides a starting point for an NREN to determine network performance metrics tailored to its own environment; the SANReN implementation serves as a proof of concept.
The utility of the model is shown through its implementation in the SANReN environment, supporting the claim that the model is generic. The tools that monitor the performance of the SANReN network are used as the sources of network performance data. Results were retrieved after understanding the requirements, determining the network design goals and performance metrics, and identifying the gap between the metrics required and what the current monitoring can measure. These results are analysed and finally aggregated to provide information that feeds into SANReN's reporting and planning processes. A template is provided for aggregating the metric results: it supplies the structure for aggregation but leaves the categories or labels for the reporting and planning sections blank, since these categories are specific to each NREN. At this point SANReN has the aggregated information to use for planning and reporting. The model is verified and thus the study's main research objective is satisfied.
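As a rough illustration of the aggregation-template step (the metric names, sample values, and summary columns below are hypothetical, not SANReN's actual template), raw measurement samples can be collapsed into summary rows that feed a reporting table:

```python
from statistics import mean

# Hypothetical raw measurements: metric name -> list of samples from the
# monitoring tools. Real metrics would come from the NREN's own toolchain.
samples = {
    "latency_ms": [12.1, 15.4, 11.8, 30.2],
    "throughput_mbps": [940, 910, 955, 890],
    "packet_loss_pct": [0.0, 0.1, 0.0, 0.3],
}

def aggregate(samples):
    """Collapse raw samples into one summary row per metric, suitable for
    dropping into a reporting/planning template whose category labels
    are filled in per NREN."""
    return {name: {"min": min(vals), "mean": round(mean(vals), 2), "max": max(vals)}
            for name, vals in samples.items()}

report = aggregate(samples)
```

The point of the template, as the abstract notes, is that the aggregation structure is fixed while the reporting categories attached to each row remain NREN-specific.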
|
4 |
Performance monitoring in transputer-based multicomputer networks. Jiang, Jie Cheng. January 1990.
Parallel architectures, like the transputer-based multicomputer network, offer potentially enormous computational power at modest cost. However, writing programs on a multicomputer to exploit parallelism is very difficult due to the lack of tools to help users understand the run-time behavior of the parallel system and detect performance bottlenecks in their programs. This thesis examines the performance characteristics of parallel programs in a multicomputer network, and describes the design and implementation of a real-time performance monitoring tool on transputers.
We started with a simple graph-theoretical model in which a parallel computation is represented as a weighted directed acyclic graph, called the execution graph. This model allows us to easily derive a variety of performance metrics for parallel programs, such as program execution time, speedup, and efficiency. From this model, we also developed a new analysis method called weighted critical path analysis (WCPA), which incorporates the notion of parallelism into critical path analysis and helps users identify the program activities that have the most impact on performance. Based on these ideas, a real-time performance monitoring tool was designed and implemented on a 74-node transputer-based multicomputer. The major problems in parallel and distributed monitoring addressed in this thesis are: global state and global clock, minimization of monitoring overhead, and the presentation of meaningful data. New techniques and novel approaches to these problems have been investigated and implemented in our tool. Lastly, benchmarks are used to measure the accuracy and the overhead of our monitoring tool. We also demonstrate how this tool was used to improve the performance of an actual parallel application by more than 50%. / Science, Faculty of / Computer Science, Department of / Graduate
|
5 |
Organizational Considerations for and Individual Perceptions of Web-Based Intranet Systems. Myerscough, Mark Alan. 05 1900.
Utilization of World Wide Web-style Web-Based Intranet Systems (W-BIS) is a rapidly expanding information delivery technique in many organizations. Published reports concerning these systems have cited return-on-investment values exceeding 1300% and direct payback periods as low as six to twelve weeks. While these systems have been widely implemented, little theoretically grounded research has been conducted on users' acceptance and utilization of these systems, or on their perceived quality. The study employed a two-site investigation of corporate Web-Based Intranet Systems, with surveys distributed via the traditional mail system. The complete survey instrument distributed to employees included the ServQual/ServPerf, User Information Satisfaction, Ease of Use/Usefulness, and Computer Playfulness instruments. In addition to these previously developed instruments, the survey instrument for this study included measures of Web-Based Intranet Systems utilization and usefulness, along with respondent demographics and subordinate-reported managerial commitment. This study investigated the reliability and validity of the ServQual/ServPerf instrument in an information systems service environment; the same analysis was conducted of the more generally accepted User Information Satisfaction instrument.
|
6 |
The impact of network characteristics on the selection of a deadlock detection algorithm for distributed databases. Daniel, Pamela Dorr Fuller. 10 June 2012.
Much attention has been focused on the problem of deadlock detection in distributed databases, resulting in the publication of numerous algorithms to accomplish this function. The algorithms published to date differ greatly in many respects: timing, location, information collection, and basic approach. The emphasis of this published work has been on theory and proof of correctness, rather than on practical application; relatively few attempts have been made to implement the algorithms.
The impact of the characteristics of the underlying database management system, transaction model, and communications network upon the effectiveness and performance of the proposed deadlock detection algorithms has largely been ignored. It is the intent of this study to examine more closely the interaction between a deadlock detection algorithm and one aspect of the environment in which it is implemented: namely, the communications network. / Master of Science
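Deadlock detection algorithms of the kind compared here generally operate on some form of wait-for graph, in which a deadlock appears as a cycle. A minimal centralized sketch (illustrative only; the algorithms the study examines are distributed and differ precisely in how this graph information is collected) detects such a cycle by depth-first search:

```python
def find_cycle(wait_for):
    """wait_for: transaction -> set of transactions it is blocked on.
    Returns one cycle (a deadlock) as a list of transactions, or None."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {t: WHITE for t in wait_for}

    def dfs(t, path):
        color[t] = GREY
        path.append(t)
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GREY:        # back edge: cycle found
                return path[path.index(u):]
            if color.get(u, WHITE) == WHITE:
                found = dfs(u, path)
                if found:
                    return found
        color[t] = BLACK
        path.pop()
        return None

    for t in list(wait_for):
        if color[t] == WHITE:
            cycle = dfs(t, [])
            if cycle:
                return cycle
    return None

# T1 waits on T2, T2 on T3, T3 on T1: a three-way deadlock. T4 is merely blocked.
cycle = find_cycle({"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}, "T4": {"T1"}})
```

The distributed algorithms differ in where this graph lives and when its edges are exchanged, which is exactly where the communications network characteristics studied in this thesis come into play.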
|
7 |
Design and performance evaluation of a high-speed fiber optic integrated computer network for imaging communication systems. Nematbakhsh, Mohammed Ali. January 1988.
In recent years, a growing number of diagnostic examinations in a hospital are being generated by digitally formatted imaging modalities. The evolution of these systems has led to the development of a totally digitized imaging system, called the Picture Archiving and Communication System (PACS). A high speed computer network plays a very important role in the design of a Picture Archiving and Communication System: the network must not only offer a high data rate, but must also be structured to satisfy the PACS requirements efficiently. In this dissertation, a computer network called PACnet is proposed for PACS. PACnet is designed to carry image, voice, image pointing overlay, and intermittent data over a 200 Mbps dual fiber optic ring network. It provides a data packet channel and image and voice channels based on a Time Division Multiple Access (TDMA) technique. The intermittent data is transmitted over the data packet channel using a modified token passing scheme. The voice and image pointing overlay are transferred between two stations in real time, using circuit switching techniques, to support the consultative nature of a radiology department. Typical 50-megabit images are transmitted over the image channel in less than a second, also using circuit switching. A technique called adaptive variable frame size is developed for PACnet to achieve high network utilization and short response time. This technique allows data packet traffic to use any residual voice or image capacity momentarily available due to variation in voice traffic or the absence of images. To obtain optimal design parameters for the network and interfaces, PACnet is also simulated under different conditions.
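The adaptive variable frame size idea can be sketched as a per-frame slot allocator (the slot counts and function signature below are hypothetical, not PACnet's actual frame format): voice and image traffic are served first, and data packets absorb whatever residue remains in the frame.

```python
def build_frame(frame_slots, voice_demand, image_demand, data_backlog):
    """Allocate one TDMA frame. Voice and image (circuit-switched) traffic
    are granted slots first; data packets adaptively fill the residue,
    so the data channel's share varies frame by frame."""
    voice = min(voice_demand, frame_slots)
    image = min(image_demand, frame_slots - voice)
    data = min(data_backlog, frame_slots - voice - image)
    return {"voice": voice, "image": image, "data": data,
            "idle": frame_slots - voice - image - data}

# A quiet frame: little voice and no image in flight, so data expands.
quiet = build_frame(frame_slots=10, voice_demand=2, image_demand=0, data_backlog=12)
# A busy frame: voice and an image transfer squeeze data out entirely.
busy = build_frame(frame_slots=10, voice_demand=4, image_demand=6, data_backlog=12)
```

This is the utilization argument in miniature: slots that a fixed partition would leave idle during quiet frames are handed to the data channel instead.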
|
8 |
Performance Evaluation Tools for Interconnection Network Design. Kolinska, Anna. 08 April 1994.
A methodology is proposed for designing performance-optimized computer systems. The methodology uses software tools created for performance monitoring and evaluation of parallel programs, replacing the actual hardware with a simulator that models the hardware under development. We claim that a software environment can help hardware designers make decisions at the architectural design level. A simulator executes real programs and provides access to performance monitors from the user's code. The performance monitoring system collects data traces when running the simulator, and the performance analysis module extracts performance data of interest, which are later displayed with visualization tools. Key features of our methodology are "plug and play" simulation and the modeling of hardware/software interaction during the process of hardware design. The ability to use different simulators gives the user the flexibility to configure the system for the required functionality, accuracy, and simulation performance. Evaluation of hardware performance based on results obtained by modeling hardware/software interaction is crucial for designing performance-optimized computer systems. We have developed a software system, based on our design methodology, for performance evaluation of multicomputer interconnection networks. The system, called the Parsim Common Environment (PCE), consists of an instrumented network simulator that executes assembly language instructions, plus performance analysis and visualization modules. Using PCE we have investigated a specific network design example. The system helped us spot performance problems, explain why they happened, and find ways to solve them. The obtained results agreed with observations presented in the literature, validating our design methodology and the correctness of the software performance evaluation system for hardware designs. 
Using software tools, a designer can easily check different design options and evaluate the resulting performance without the overhead of building expensive prototypes. With our system, data analysis that required 10 man-hours to complete manually took just a couple of seconds on a Sparc-4 workstation. Without experimentation with the simulator and the performance evaluation environment, one might build an expensive hardware prototype expecting improved performance, and then be disappointed with poorer results than expected. Our tools help designers spot and solve performance problems at early stages of the hardware design process.
|
9 |
LNTP: the implementation and performance of a new local area network transport protocol. Robinson, James Beresford. 1987.
In the past it has been convenient to adopt existing long haul network (LHN) protocols for use in local area networks (LANs). However, due to the different operating parameters that exist between these two types of networks, it is not possible for a LHN protocol to fully exploit the characteristics of a LAN. Thus, the need arises for a protocol designed specifically for use in a LAN environment.
LNTP is one such transport-level protocol. It was designed for exclusive use in LANs, and thus does not incorporate features that are not relevant to a LAN environment; the result is a simpler and more efficient protocol. In addition, LNTP employs a novel deferred flow control strategy which minimizes the time that a transmitting process will be blocked.
This thesis examines the implementation of LNTP in the 4.2 BSD UNIX operating system. Various measurements are taken, and LNTP's performance is compared to that of TCP/IP, a LHN protocol which is often used in LAN environments. Several formulas are developed to determine the optimum values for various LNTP parameters, and these theoretical results are compared to the experimentally observed values.
We conclude that LNTP does indeed outperform TCP/IP. However, due to the overhead of the non-LNTP specific protocol layers, this improvement is not as great as it might be. Nonetheless, LNTP proves itself to be a viable replacement for TCP/IP. / Science, Faculty of / Computer Science, Department of / Graduate
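The abstract does not spell out the deferred flow control algorithm, so the sketch below is one plausible reading rather than LNTP's actual mechanism: the transmitting process is blocked only when its unacknowledged data crosses a high-water mark, instead of synchronizing on every message.

```python
class DeferredFlowSender:
    """Hypothetical sketch of deferred flow control: keep transmitting
    until a high-water mark of unacknowledged bytes is reached, so the
    sending process blocks as rarely and as late as possible."""

    def __init__(self, high_water=100):
        self.high_water = high_water
        self.unacked = 0  # bytes sent but not yet acknowledged

    def try_send(self, nbytes):
        """Return True if the send proceeds; False means the caller
        would block here (the only point flow control bites)."""
        if self.unacked + nbytes > self.high_water:
            return False
        self.unacked += nbytes
        return True

    def on_ack(self, nbytes):
        """Receiver acknowledgements drain the window, unblocking sends."""
        self.unacked = max(0, self.unacked - nbytes)

sender = DeferredFlowSender(high_water=100)
```

Under this reading, a burst of small messages proceeds without any blocking at all on a lightly loaded LAN, which matches the abstract's claim that blocking time is minimized.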
|
10 |
A low level analysis of Cellular Automata and Random Boolean Networks as a computational architecture. Damera, Prateen Reddy. 01 January 2011.
With the transition from single-core to multi-core computing and CMOS technology reaching its physical limits, new computing architectures which are scalable, robust, and low-power are required. A promising alternative to conventional computing architectures are Cellular Automata (CA) networks and Random Boolean Networks (RBN), in which simple computational nodes combine to form a network that is capable of performing a larger computational task. It has previously been shown that RBNs can offer superior characteristics over mesh networks in terms of robustness, information processing capabilities, and manufacturing costs, while the locally connected computing elements of a CA network provide better scalability and low average interconnect length. This study presents a low-level hardware analysis of these architectures using a framework which generates the HDL code and netlist of these networks for various network parameters. The HDL code and netlists are then used to simulate these new computing architectures to estimate the latency, area, and power consumed when implemented on silicon and performing a pre-determined computation. We show that information processing is faster in RBNs than in a CA network, but CA networks are found to have a lower and better-distributed power dissipation than RBNs because of their regular structure. A well-established task for determining the latency of operation of these architectures is presented to give a good understanding of the effect of non-local connections in a network. Programming the nodes for this purpose is done externally using a novel self-configuration algorithm requiring minimal hardware. Configuration for RBNs is done by sending in configuration packets through a randomly chosen node. Logic for identifying the topology of the network is implemented in the nodes of the RBN network to enable compilers to analyze and generate the configuration bit stream for that network. 
The configuration of the CA network, on the other hand, is done by passing configuration data through the inputs on one side of the cell array and shifting it into the network. A study of the overhead of the network configuration and topology identification mechanisms is presented. An analysis of small-world networks in terms of interconnect power and information propagation capability is also presented. It is shown that small-world networks, whose randomness lies between that of completely regular and completely irregular networks, are realistic while providing good information propagation capability. This study provides valuable information to help designers make decisions on various performance parameters for both RBN and CA networks, and thus to find the best design for the application under consideration.
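The RBN computational model itself is easy to sketch in software (the network size, connectivity, and seed below are illustrative, not the parameters used in the study): each node reads k randomly chosen nodes through a random Boolean function, and all nodes update synchronously.

```python
import random

def make_rbn(n, k, seed=0):
    """Build a random N-K Boolean network: each of the n nodes reads k
    randomly chosen nodes through a random Boolean function, stored as
    a 2**k-entry truth table."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """One synchronous update: every node applies its Boolean function
    to the current values of its input nodes."""
    nxt = []
    for node, ins in enumerate(inputs):
        idx = 0
        for src in ins:                     # pack input bits into a table index
            idx = (idx << 1) | state[src]
        nxt.append(tables[node][idx])
    return nxt

inputs, tables = make_rbn(n=8, k=2)
state = [0, 1, 0, 1, 1, 0, 0, 1]
trajectory = [state]
for _ in range(4):
    trajectory.append(step(trajectory[-1], inputs, tables))
```

The non-local `inputs` wiring is what distinguishes an RBN from a CA, whose nodes would read only fixed nearest neighbours; in hardware, those random long wires are exactly the interconnect-power cost the study weighs against the RBN's faster information propagation.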
|