21

Modeling and simulation of VMD desalination process by ANN

Cao, W., Liu, Q., Wang, Y., Mujtaba, Iqbal M. 21 August 2015 (has links)
In this work, an artificial neural network (ANN) model based on experimental data was developed to study the performance of the vacuum membrane distillation (VMD) desalination process under different operating parameters such as the feed inlet temperature, the vacuum pressure, the feed flow rate and the feed salt concentration. The proposed model was found to be capable of accurately predicting unseen data of the VMD desalination process. The correlation coefficient of the overall agreement between the ANN predictions and experimental data was found to be more than 0.994. The calculated coefficient of variation (CV) was 0.02622, and the 3D generalization diagrams showed close overlap between the target and output data. The optimal operating conditions of the VMD process can be obtained from the performance analysis of the ANN model, with a maximum permeate flux and an acceptable CV value based on the experiment.
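As a hedged illustration of the modeling approach in this abstract, the sketch below fits a small feed-forward network mapping the same four operating parameters to permeate flux. The network size, data ranges, synthetic flux relation and scikit-learn usage are assumptions for illustration, not the authors' actual configuration.

```python
# Illustrative ANN regression sketch for VMD permeate flux prediction.
# All data here is synthetic; the real model was fitted to experiments.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder inputs: [feed temperature (C), vacuum pressure (kPa),
# feed flow rate (L/h), salt concentration (g/L)] -> permeate flux.
X = rng.uniform([50, 3, 20, 10], [80, 10, 60, 100], size=(200, 4))
y = (0.5 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * X[:, 2] - 0.05 * X[:, 3]
     + rng.normal(0, 0.5, 200))  # assumed synthetic flux relation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X_train), y_train)

# Correlation between predictions and unseen data, analogous to the
# R > 0.994 agreement reported in the abstract.
r = np.corrcoef(ann.predict(scaler.transform(X_test)), y_test)[0, 1]
print(f"correlation on unseen data: {r:.3f}")
```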
22

The COMPASS Paradigm For The Systematic Evaluation Of U.S. Army Command And Control Systems Using Neural Network And Discrete Event Computer Simulation

Middlebrooks, Sam E. 15 April 2003 (has links)
In today's technology-based society the rapid proliferation of new machines and systems that would have been undreamed of only a few short years ago has become a way of life. Developments and advances, especially in the areas of digital electronics and micro-circuitry, have spawned subsequent technology-based improvements in transportation, communications, entertainment, automation, the armed forces, and many other areas that would not have been possible otherwise. This rapid "explosion" of new capabilities and ways of performing tasks has been motivated as often as not by the philosophy that if it is possible to make something better or work faster or be more cost effective or operate over greater distances then it must inherently be good for the human operator. Taken further, these improvements typically are envisioned to consequently produce a more efficient operating system where the human operator is an integral component. The formal concept of human-system interface design has only emerged this century as a recognized academic discipline; however, the practice of developing ideas and concepts for systems containing human operators has been in existence since humans started experiencing cognitive thought. An example of a human-system interface technology for communication and dissemination of written information that has evolved over centuries of trial-and-error development is the book. It is no accident that the form and shape of the book of today is as it is. This is because it is a shape and form readily usable by human physiology, whose optimal configuration was determined by centuries of effort and revision. This slow evolution was mirrored by a rate of technical evolution in printing and elsewhere that allowed new advances to be experimented with as part of the overall use requirement and need for the existence of the printed word and some way to contain it. Today, however, technology is advancing at such a rapid rate that evolutionary use requirements have no chance to develop alongside the fast pace of technical progress. One result of this recognition is the establishment of disciplines like human factors engineering that have stated purposes and goals of systematic determination of good and bad human-system interface designs. However, other results of this phenomenon are systems that get developed and placed into public use simply because new technology allowed them to be made. This development can proceed without a full appreciation of how the system might be used and, perhaps even more significantly, what impact the use of this new system might have on the operator within it. The U.S. Army has a term for this type of activity. It is called "stove-piped development". The implication of this term is that a system gets developed in isolation where the developers are only looking "up" and not "around". They are thus concerned only with how this system may work or be used for its own singular purposes as opposed to how it might be used in the larger community of existing systems and interfaces or, even more importantly, in the larger community of other new systems in concurrent development. Some of the impacts for the Army from this mode of system development are communication systems that work exactly as designed but are unable to interface to other communications systems in other domains for battlefield-wide communications capabilities. Having communications systems that cannot communicate with each other is a distinct problem in its own right.
However, when developments in one industry produce products that humans use or attempt to use with products from totally separate developments or industries, the Army concept of product development resulting from stove-piped design visions can have significant implications for the operation of each system and the human operator attempting to use it. There are many examples that would illustrate the above concept; however, one that will be explored here is the Army effort to study, understand, and optimize its command and control (C2) operations. This effort is at the heart of a change in the operational paradigm in C2 Tactical Operations Centers (TOCs) that the Army is now undergoing. For the 50 years since World War II the nature, organization, and mode of operation of command organizations within the Army have remained virtually unchanged. Staffs have been organized on a basic four-section structure, and TOCs generally operate only in a totally static mode, with the amount of time required to move them to keep up with a mobile battlefield increasing almost exponentially from lower to higher command levels. However, current initiatives are changing all that, and while new vehicles and hardware systems address individual components of the command structures to improve their operations, these initiatives do not necessarily provide the environment in which the human operator component of the overall system can function in a more effective manner. This dissertation examines C2 from a system-level viewpoint using a new paradigm for systematically examining the way TOCs operate and then translating those observations into validated computer simulations using a methodological framework. This paradigm is called COmputer Modeling Paradigm And Simulation of Systems (COMPASS). COMPASS provides the ability to model TOC operations in a way that includes not only the individuals, work groups and teams in it, but also all of the other hardware and software systems, subsystems and human-system interfaces that comprise it, as well as the facilities and environmental conditions that surround it. Most of the current literature and research in this area focuses on the concept of C2 itself and its follow-on activities of command, control, communications (C3), command, control, communications, and computers (C4), and command, control, communications, computers and intelligence (C4I). This focus tends to address the activities involved with the human processes within the overall system, such as individual and team performance and the commander's decision-making process. While the literature acknowledges the existence of the command and control system (C2S), little effort has been expended to quantify and analyze C2Ss from a systemic viewpoint. A C2S is defined as the facilities, equipment, communications, procedures, and personnel necessary to support the commander (i.e., the primary decision maker within the system) in conducting the activities of planning, directing, and controlling the battlefield within the sector of operations applicable to the system. The research in this dissertation is in two phases. The overall project incorporates sequential experimentation procedures that build on successive TOC observation events to generate an evolving data store that supports the two phases of the project. Phase I consists of the observation of heavy maneuver battalion and brigade TOCs during peacetime exercises.
The term "heavy maneuver" is used to connotate main battle forces such as armored and mechanized infantry units supported by artillery, air defense, close air, engineer, and other so called combat support elements. This type of unit comprises the main battle forces on the battlefield. It is used to refer to what is called the conventional force structure. These observations are conducted using naturalistic observation techniques of the visible functioning of activities within the TOC and are augmented by automatic data collection of such things as analog and digital message traffic, combat reports generated by the computer simulations supporting the wargame exercise, and video and audio recordings where appropriate and available. Visible activities within the TOC include primarily the human operator functions such as message handling activities, decision-making processes and timing, coordination activities, and span of control over the battlefield. They also include environmental conditions, functional status of computer and communications systems, and levels of message traffic flows. These observations are further augmented by observer estimations of such indicators as perceived level of stress, excitement, and level of attention to the mission of the TOC personnel. In other words, every visible and available component of the C2S within the TOC is recorded for analysis. No a priori attempt is made to evaluate the potential significance of each of the activities as their contribution may be so subtle as to only be ascertainable through statistical analysis. Each of these performance activities becomes an independent variable (IV) within the data that is compared against dependent variables (DV) identified according to the mission functions of the TOC. The DVs for the C2S are performance measures that are critical combat tasks performed by the system. Examples of critical combat tasks are "attacking to seize an objective", "seizure of key terrain", and "river crossings'. A list of expected critical combat tasks has been prepared from the literature and subject matter expert (SME) input. After the exercise is over, the success of these critical tasks attempted by the C2S during the wargame are established through evaluator assessments, if available, and/or TOC staff self analysis and reporting as presented during after action reviews. The second part of Phase I includes datamining procedures, including neural networks, used in a constrained format to analyze the data. The term constrained means that the identification of the outputs/DV is known. The process was to identify those IV that significantly contribute to the constrained DV. A neural network is then constructed where each IV forms an input node and each DV forms an output node. One layer of hidden nodes is used to complete the network. The number of hidden nodes and layers is determined through iterative analysis of the network. The completed network is then trained to replicate the output conditions through iterative epoch executions. The network is then pruned to remove input nodes that do not contribute significantly to the output condition. Once the neural network tree is pruned through iterative executions of the neural network, the resulting branches are used to develop algorithmic descriptors of the system in the form of regression like expressions. For Phase II these algorithmic expressions are incorporated into the CoHOST discrete event computer simulation model of the C2S. 
The programming environment is the commercial programming language Micro Saint™ running on a PC microcomputer. An interrogation approach was developed to query these algorithms within the computer simulation to determine if they allow the simulation to reflect the activities observed in the real TOC to within an acceptable degree of accuracy. The purpose of this dissertation is to introduce the COMPASS concept, a paradigm for developing techniques and procedures to translate as much of the performance of the entire TOC system as possible to an existing computer simulation suitable for analyses of future system configurations. The approach consists of the following steps:
• Naturalistic observation of the real system using ethnographic techniques.
• Data analysis using data mining techniques such as neural networks.
• Development of mathematical models of TOC performance activities.
• Integration of the mathematical models into the CoHOST computer simulation.
• Interrogation of the computer simulation.
• Assessment of the level of accuracy of the computer simulation.
• Validation of the process as a viable system simulation approach. / Ph. D.
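To make the data-mining step above concrete, here is a hedged sketch of the train-prune-describe pipeline: fit a network on observed IVs, prune inputs that contribute little to a DV, and fit a regression-like expression over the survivors. The IV names, synthetic data, and the use of permutation importance as the pruning criterion are illustrative assumptions, not the dissertation's actual procedure.

```python
# Sketch of the COMPASS data-mining idea: neural network -> pruning ->
# regression-like algorithmic descriptor. Names and data are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
iv_names = ["msg_rate", "stress_est", "net_status", "staff_on_duty"]
X = rng.normal(size=(300, 4))
y = 1.5 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.1, 300)  # synthetic DV

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
net.fit(X, y)

# Prune: keep only IVs whose removal noticeably degrades the network.
imp = permutation_importance(net, X, y, n_repeats=10, random_state=1)
keep = [i for i, m in enumerate(imp.importances_mean) if m > 0.05]
print("surviving IVs:", [iv_names[i] for i in keep])

# Regression-like descriptor over the pruned inputs, of the kind that
# could be embedded in a discrete event simulation such as CoHOST.
expr = LinearRegression().fit(X[:, keep], y)
terms = " + ".join(f"{c:.2f}*{iv_names[i]}" for c, i in zip(expr.coef_, keep))
print(f"DV ~ {expr.intercept_:.2f} + {terms}")
```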
23

EbitSim: BitTorrent simulator using the OMNeT++ framework

Evangelista, Pedro Manoel Fabiano Alves 28 September 2012 (has links)
The BitTorrent protocol is one of the most successful P2P applications on the Internet and is widely studied by the research community. Nevertheless, studying the dynamics of a large BitTorrent network presents several challenges, such as the difficulty of acquiring network traces or building measurement experiments. Simulation can overcome these challenges, but no adequate simulation tool is available to the research community; as a result, most works that use simulation develop their own simulators, producing results that cannot be repeated or verified. This thesis presents the EbitSim BitTorrent simulator, which allows modifying the mechanisms used, configuring the system parameters and defining the topology used in the simulations. The simulator was developed using the OMNeT++ framework, which provides a set of tools that facilitates the configuration of varied scenarios and of the model parameters. In addition, the INET framework was used to model the lower network layers. We developed the BitTorrent model based on the official specification, with the aid of related works and discussions with developers of BitTorrent client programs. EbitSim was validated by comparing its results with those obtained from a real BitTorrent network deployed in a controlled testbed. We show that EbitSim generates results compatible with a real BitTorrent network.
24

Overcoming Limitations in Computer Worm Models

Posluszny III, Frank S 31 January 2005 (has links)
In less than two decades, destruction and abuse caused by computer viruses and worms have grown from an anomaly to an everyday occurrence. In recent years, the Computer Emergency Response Team (CERT) has recorded a steady increase in software defects and vulnerabilities, similar to those exploited by the Slammer and Code Red worms. In response to this threat, the academic community has started a set of research projects seeking to understand worm behavior through the creation of highly theoretical and generalized models. Staniford et al. created a model to explain the propagation behavior of such worms in computer network environments; their model applies the Kermack-McKendrick biological model of propagation to digital systems. Liljenstam et al. add a spatial perspective to this model, varying the infection rate by the scanning worms' source and destination groups. These models have been shown to describe generic Internet-scale behavior; however, they fall short from a localized (campus-scale) network perspective. We make the claim that certain real-world constraints, such as bandwidth and heterogeneity of hosts, affect the propagation of worms and thus should not be ignored when creating models for analysis. In setting up a testing environment for this hypothesis, we have identified areas that need further work in the computer worm research community, including the availability of real-world data, a generalized and behaviorally complete worm model, and packet-based simulations. The major contributions of this thesis are a parameterized, algorithmic worm model, an openly available worm simulation package (based on SSFNet and SSF.App.Worm), an analysis of test results supporting our claim, and suggested future directions.
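The Kermack-McKendrick model cited above can be sketched in a few lines; the thesis's point is precisely that such fluid models ignore constraints like bandwidth and host heterogeneity, which motivates its packet-based simulations. The parameter values below are illustrative, not fitted to any real worm.

```python
# Minimal Kermack-McKendrick (SIR) epidemic sketch of the kind Staniford
# et al. adapt to scanning worms, integrated with simple Euler steps.
def simulate_sir(n_hosts=350_000, i0=1, beta=1.8, gamma=0.05,
                 dt=0.01, t_end=30.0):
    s, i, r = (n_hosts - i0) / n_hosts, i0 / n_hosts, 0.0
    t, trace = 0.0, []
    while t < t_end:
        new_inf = beta * s * i * dt   # contacts between susceptible and infected
        new_rec = gamma * i * dt      # patched/removed hosts
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        trace.append((t, i * n_hosts))
        t += dt
    return trace

# Peak number of simultaneously infected hosts under these assumptions:
print(max(infected for _, infected in simulate_sir()))
```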
26

The Design and Evaluation of Advanced TCP-based Services over an Evolving Internet

He, Qi 19 July 2005 (has links)
Performance evaluation continues to play an important role in network research. Two types of research efforts related to network performance evaluation are particularly noteworthy: (1) using performance evaluation to understand specific problems and to design better solutions, and (2) designing efficient performance evaluation methodologies. This thesis addresses several performance evaluation challenges, encompassing both categories of effort, in building high-performance TCP-based network services in the context of overlay routing and peer-to-peer systems. With respect to the first type of effort, this thesis addresses two issues related to the design of TCP-based network services. First, prediction of large-transfer TCP throughput: predicting the TCP throughput attainable on given paths is used for applications such as route selection in overlay routing. Based on a systematic measurement study, we evaluate the accuracy of two categories of TCP throughput prediction techniques and then analyze the factors that affect the accuracy of each. Second, congestion control and message loss in Gnutella peer-to-peer networks: we evaluate the congestion control mechanisms and message loss behavior in a real-world overlay network, the Gnutella system. The challenges for congestion control in such a network are analyzed, as are the design tradeoffs of alternative mechanisms. To study such systems with full network detail, we build a scalable, extensible and portable packet-level simulator of peer-to-peer systems. The second part of the thesis, representing the second type of effort, proposes two techniques to improve network simulation by exploiting detailed knowledge of TCP. First, speeding up network simulation by exploiting TCP steady-state predictability: we develop a technique that uses prediction to accurately summarize a series of packet events and therefore to save processing cost while maintaining fidelity. Our technique integrates well with packet-level simulations and is more faithful in several respects than previous optimization techniques. Second, TCP workload generation under link load constraints: we develop an algorithm that generates traffic for a specific network configuration such that realistic and specific load conditions are obtained on user-specified links, while minimizing the simulation memory requirement.
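For a flavor of formula-based throughput prediction of the kind evaluated above, the sketch below uses the well-known Mathis steady-state model, throughput ≈ (MSS/RTT) · C / √p; whether this exact model is among the techniques the thesis evaluates is an assumption here.

```python
# Formula-based TCP throughput predictor (Mathis steady-state model).
from math import sqrt

def mathis_throughput(mss_bytes: int, rtt_s: float, loss_rate: float,
                      c: float = 1.22) -> float:
    """Predicted steady-state TCP throughput in bytes per second."""
    return (mss_bytes / rtt_s) * c / sqrt(loss_rate)

# Example: 1460-byte MSS, 80 ms RTT, 0.1% loss -> a prediction one
# might feed into overlay route selection.
print(f"{mathis_throughput(1460, 0.08, 0.001) / 1e6:.2f} MB/s")
```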
27

Algorithms for Self-Organizing Wireless Sensor Networks

Ould-Ahmed-Vall, ElMoustapha 09 April 2007 (has links)
The unique characteristics of sensor networks pose numerous challenges that have to be overcome to enable their efficient use. In particular, sensor networks are energy constrained because of their reliance on battery power, and they can be composed of a large number of unreliable nodes. These characteristics make node collaboration essential to the accomplishment of the network task and justify the development of new algorithms to provide services such as routing, fault tolerance and naming. This work adds to the growing field of sensor network algorithms by contributing a new evaluation tool and two new algorithms. First, a new sensor network simulator that can be used to evaluate sensor network algorithms is discussed. It incorporates models for the different functional units composing a sensor node and characterizes the energy consumption of each. It is designed in a modular and efficient way, favoring ease of use and extension, and it allows the user to choose from different implementations of energy models, accuracy models and types of sensors. The second contribution of this thesis is a distributed algorithm to solve the unique ID assignment problem in sensor networks. Our solution starts by assigning long unique IDs and organizing nodes in a tree structure. This tree structure is used to compute the size of the network, and unique IDs of minimum length are then assigned. Globally unique IDs are useful for many network functions, e.g., node maintenance and security. Theoretical and simulation analyses of the ID assignment algorithm demonstrate that a high percentage of nodes are assigned unique IDs at the termination of the algorithm when its parameters are set properly; furthermore, the algorithm terminates in a short time that scales well with the network size. The third contribution of this thesis is a general fault-tolerant event detection scheme that allows nodes to detect erroneous local decisions based on the local decisions reported by their neighbors. It can handle cases where nodes have different and dynamic accuracy levels. We prove analytically that the derived fault-tolerant estimator is optimal under the maximum a posteriori criterion, and an equivalent weighted voting scheme is derived.
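The ID assignment idea described above can be sketched as follows. This centralized walk over a tree is an illustration only: the actual algorithm is distributed and message-driven, and the class and function names here are hypothetical.

```python
# Sketch: long random IDs -> tree-based size computation -> short IDs
# of minimum bit length. A real deployment exchanges messages instead.
import math, random

class Node:
    def __init__(self):
        self.long_id = random.getrandbits(64)  # long, probabilistically unique
        self.children = []
        self.short_id = None

def subtree_size(node: Node) -> int:
    # In the distributed algorithm each node reports its subtree size
    # to its parent; here we simply recurse.
    return 1 + sum(subtree_size(c) for c in node.children)

def assign_short_ids(root: Node) -> None:
    n = subtree_size(root)
    bits = max(1, math.ceil(math.log2(n)))  # minimum ID length in bits
    counter = iter(range(n))
    stack = [root]
    while stack:
        node = stack.pop()
        node.short_id = format(next(counter), f"0{bits}b")
        stack.extend(node.children)

# Tiny example tree: root with two children, one grandchild.
root, a, b, c = Node(), Node(), Node(), Node()
root.children = [a, b]; a.children = [c]
assign_short_ids(root)
print([n.short_id for n in (root, a, b, c)])  # 2-bit IDs for 4 nodes
```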
28

Performance evaluation of real-time bilateral teleoperation systems with wired and wireless network simulation

Liao, Stephen 20 December 2012 (has links)
This thesis presents a general simulation framework for evaluating the performance of bilateral teleoperation systems under consistent and controllable network conditions. In a teleoperation system, an operator uses a master device to control a slave robot through a communication link, and that link has an important impact on system performance. Network emulation using ns-2 is proposed as a way of simulating the communication link: it allows network conditions to be controlled and results to be repeatable. The proposed setup was used to test the performance of a hydraulic actuator under various wired and wireless network conditions. Three control schemes were evaluated using various combinations of time delay and packet loss. The system was also tested with simulated wireless communication between the master and slave to determine the effects of transmission power and distance on the performance of the system.
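As a rough illustration of the link conditions being emulated, the sketch below passes master commands through a channel with a fixed one-way delay and Bernoulli packet loss before they reach the slave. The thesis used ns-2 for this; this standalone model is only an assumed stand-in for the general idea.

```python
# Toy master-to-slave link with fixed delay and random packet loss.
import random
from collections import deque

def emulate_link(commands, delay_steps=5, loss_rate=0.1, seed=0):
    """Yield what the slave sees at each time step (None = nothing arrived)."""
    rng = random.Random(seed)
    in_flight = deque([None] * delay_steps)  # pipeline modeling the delay
    for cmd in commands:
        in_flight.append(None if rng.random() < loss_rate else cmd)
        yield in_flight.popleft()

# A ramp of position commands over a 5-step-delay, 10%-loss link:
print(list(emulate_link(range(20))))  # delayed by 5 steps, occasional drops
```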
30

Performance Analysis Of Reliable Multicast Protocols

Celik, Coskun 01 December 2004 (has links) (PDF)
IP multicasting is a method for transmitting the same information to multiple receivers over IP networks. The reliability problem in multicasting comprises the challenges of detecting and recovering from packet losses and of delivering the entire data stream in order. In this work, existing reliable multicast protocols are classified into three main groups, namely tree based, NACK-only and router assisted, and a representative protocol is selected from each group to demonstrate the advantages and disadvantages of the corresponding approach. The selected protocols are SRM, PGM and RMTP. The performance characteristics of these protocols are evaluated empirically through simulation. Network Simulator 2 (ns-2), a discrete event simulator, is used for the implementation and simulation of the selected protocols. The contributions of the thesis are twofold: an open-source implementation of RMTP extending the ns library, which did not exist earlier, and an evaluation of the selected protocols that investigates performance metrics such as distribution delay and recovery latency with respect to varying multicast group size, network diameter, link loss rate, etc.
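To illustrate the loss-detection step that NACK-only protocols such as PGM rely on, here is a minimal sketch of gap detection over received sequence numbers. Timers and NACK suppression, which the real protocols depend on, are omitted; this is not the thesis's actual ns-2 implementation.

```python
# A receiver detects sequence-number gaps and NACKs only the missing packets.
def detect_gaps(received_seqs):
    """Return the sequence numbers to NACK, given the arrivals seen so far."""
    missing, expected = [], 0
    for seq in sorted(set(received_seqs)):
        missing.extend(range(expected, seq))  # everything skipped is lost
        expected = seq + 1
    return missing

# Receiver saw packets 0, 1, 2, 5, 6, 9 -> NACK 3, 4, 7, 8.
print(detect_gaps([0, 1, 2, 5, 6, 9]))
```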
