21

Minimal infrastructure short range radar networks

Donovan, Brian C 01 January 2008 (has links)
Distributed networks of short-range radars offer the potential to observe winds and rainfall at high spatial resolution in volumes of the troposphere that are unobserved by today's long-range weather radars. One class of potential radar network designs includes Off-the-Grid (OTG) weather radar networks. These are envisioned as self-contained networks of small remote-sensing, communication, and computation nodes, each occupying a volume of 1 m³ and capable of operating independently of the wired power and communication infrastructure. Independence from the wired infrastructure would allow OTG networks to be deployed in specific regions where sensing needs are greatest, such as mountain valleys prone to flash flooding, geographic regions where the infrastructure is unavailable or susceptible to failure, certain high-population areas, and underdeveloped regions lacking built-up infrastructure. OTG radar nodes would communicate wirelessly with one another by operating as ad-hoc networks, and they would distribute computational functions among various points capable of computation throughout the network. The individual nodes derive energy from solar panels or other self-contained means, and therefore OTG networks operate under a constraint of limited energy consumption. These systems would be required to exhibit the property of energy balance, meaning that they dynamically balance the allocation of energy consumed by different functions against the generation of power from the environment. This would be done through dynamic control decisions in order to operate over extended periods of time during severe and changing weather. Sensing, communicating, and computing tasks may be redistributed throughout the network as storm cells migrate over the coverage area, in such a way as to maximize the ratio of useful information collected to power consumed. This dissertation focuses on power management and energy harvesting in OTG radar networks. A prototype OTG radar node was developed to obtain practical measurements of energy consumption. Experience and data gained from the operation of the prototype nodes are used to develop a model of the OTG node for simulation. An OTG radar network simulator was developed to experiment with potential OTG networks. These simulations are used to investigate the impact of geographic location, battery capacity, optimization of power consumption, and node density on the performance and operational lifetime of such a sensor network. Additionally, we present results from an initial exploration of dynamic control in OTG networks.
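As a rough illustration of the energy-balance property described in this abstract, the following minimal Python sketch simulates one node's battery over a day, sensing only when a storm is near and charge permits. The solar profile, power draws, battery capacity, and storm timing are all invented for illustration; they are not figures from the dissertation.

    import math

    BATTERY_WH = 100.0               # assumed battery capacity (watt-hours)
    SOLAR_PEAK_W = 20.0              # assumed peak solar-panel output
    P_IDLE_W, P_SENSE_W = 1.0, 15.0  # assumed idle and radar-on power draws

    def solar_w(hour):
        """Crude bell-shaped solar profile peaking at noon."""
        if 6 <= hour <= 18:
            return SOLAR_PEAK_W * math.sin(math.pi * (hour - 6) / 12)
        return 0.0

    battery = BATTERY_WH / 2         # start the day half-charged
    for step in range(24 * 60):      # one day in 1-minute steps
        hour = step / 60.0
        storm_nearby = 14 <= hour < 17   # pretend a storm cell passes mid-afternoon
        # Energy-balance policy: sense only when a storm is near AND charge permits.
        draw = P_SENSE_W if (storm_nearby and battery > 0.2 * BATTERY_WH) else P_IDLE_W
        battery += (solar_w(hour) - draw) / 60.0   # Wh gained/spent this minute
        battery = min(max(battery, 0.0), BATTERY_WH)
    print(f"battery at end of day: {battery:.1f} Wh")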
22

Dynamic monitoring and static analysis: New approaches for intrusion detection

Feng, Hanping 01 January 2005 (has links)
In this dissertation, we describe novel approaches for host-based anomaly detection. We investigate new ways to improve detection capability without sacrificing false positive performance and efficiency, and present new methods using both dynamic monitoring and static analysis techniques. Most prior work used fixed-length subsequences within the system call traces. We propose a novel variable-length pattern extraction algorithm, called LookN, based on lossless compression techniques. This algorithm is applied to system call traces for anomaly detection purposes. It is computationally simple and efficient. The call stack of program execution can be a very good information source for intrusion detection. There was no prior work on dynamically extracting information from the call stack and effectively using it to detect exploits. We propose another new method, called VtPath, that performs anomaly detection using call stack information. The basic idea is to extract return addresses from the call stack and generate an abstract execution path between two program execution points. Experiments show that our method can detect some attacks that cannot be detected by other approaches, while its convergence and false positive performance is comparable to or better than the other approaches. Models constructed using static analysis have the highly desirable feature that they do not produce false alarms; however, they may still miss attacks. Prior work has shown a trade-off between efficiency and precision. In particular, the more accurate models based upon pushdown automata (PDA) are very inefficient to operate due to non-determinism in stack activity. We present techniques for determinizing PDA models. We provide a formal analysis framework for PDA models and introduce the concepts of determinism and stack-determinism. We then present the VPStatic model, which achieves determinism by extracting information about the stack activity of the program. Our results show that reasonable efficiency need not be sacrificed for model precision, and that deterministic PDA are more efficient to operate than stack-deterministic PDA. In summary, we study different ways to improve intrusion detection system performance. We explore different information sources, different model generating approaches, and different ways of using the information. Several new approaches are proposed.
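To make the call-stack idea concrete, here is a toy Python sketch in the spirit of VtPath: record the abstract execution path (frames popped and pushed) between consecutive system-call points during training, then flag unseen paths at detection time. The stack snapshots, addresses, and the exact path encoding are simplified stand-ins, not the dissertation's implementation.

    def virtual_path(prev_stack, cur_stack):
        """Abstract execution path between two system-call points: frames
        left since the last call plus frames entered since then."""
        i = 0
        while i < min(len(prev_stack), len(cur_stack)) and prev_stack[i] == cur_stack[i]:
            i += 1                                  # shared prefix of the two stacks
        exits = tuple(reversed(prev_stack[i:]))     # return addresses popped
        entries = tuple(cur_stack[i:])              # return addresses pushed
        return (exits, entries)

    def train(stack_traces):
        normal = set()
        for trace in stack_traces:
            for prev, cur in zip(trace, trace[1:]):
                normal.add(virtual_path(prev, cur))
        return normal

    def anomalies(normal, trace):
        return [(prev, cur) for prev, cur in zip(trace, trace[1:])
                if virtual_path(prev, cur) not in normal]

    # Stacks are lists of return addresses, oldest frame first (values invented).
    training = [[[0x10], [0x10, 0x24], [0x10, 0x24, 0x3c], [0x10, 0x24]]]
    model = train(training)
    test = [[0x10], [0x10, 0x24], [0x10, 0x99]]     # 0x99: injected, unseen frame
    print(anomalies(model, test))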
23

Design mapping algorithms for hybrid FPGAs

Krishnamoorthy, Srinivasaraghavan 01 January 2004 (has links)
The ongoing advancements in VLSI technology and Field Programmable Gate Array (FPGA) architectures have enabled the development of multi-million-gate hybrid FPGAs containing diverse types of on-chip resources. Each type of resource in a hybrid FPGA is best suited to implementing a particular type of logic function. Given a target hybrid FPGA containing look-up tables (LUTs) and product term blocks (PLAs), this dissertation presents mapping approaches that minimize the design LUT count by packing PLAs, subject to user-defined performance constraints. The mapping approaches developed during the course of this dissertation support both gate-level and register transfer level (RTL) design descriptions. Given a gate-level design description, potential design PLA partitions are identified using a subgraph identification heuristic. PLA candidate partitions are subsequently selected using fast area, delay, and Pterm estimators that evaluate and rank the fitness of each potential PLA partition. Given an RTL design description, the design RTL constructs are initially characterized using rule-based RTL area, delay, and Pterm estimators. Based on these estimates, RTL structures are appropriately sized for PLAs using a set of RTL partitioning and clustering algorithms. In both the gate-level and RTL approaches, the rest of the design that is not mapped to PLAs is considered the LUT partition and is fed to vendor tools along with the PLA partitions for final synthesis, placement, and routing. It is shown that, by migrating logic from LUTs to Pterm structures, our gate-level design mapping approach reduces LUT utilization for Apex20KE devices [1] by 8% when timing-constrained and by 14% when unconstrained. Due to the smaller number of PLA candidate subgraphs at the RT level, the PLA logic identification procedure at RTL is twice as fast as the gate-level PLA subgraph identification procedure and saves about twice the number of LUTs.
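The selection step this abstract describes — ranking candidate partitions by estimator output and packing them into PLA blocks under constraints — might look like the following Python sketch. The candidate names, estimator numbers, product-term capacity, and delay budget are illustrative placeholders, not the dissertation's estimators or device figures.

    # Each candidate partition: (name, estimated LUTs saved, estimated product
    # terms, estimated added delay in ns). All numbers are invented.
    candidates = [
        ("decoder", 42, 28, 0.4),
        ("state_machine", 30, 60, 0.9),
        ("comparator", 18, 12, 0.2),
        ("parity", 9, 40, 0.3),
    ]
    PLA_PTERM_CAP = 48       # product terms per PLA block (assumed)
    NUM_PLAS = 2             # PLA blocks on the target device (assumed)
    DELAY_BUDGET_NS = 1.0    # user-defined performance constraint (assumed)

    def select_partitions(cands):
        """Greedily pack the highest LUT-saving candidates into PLA blocks."""
        chosen, delay = [], 0.0
        # Rank by LUT savings per product term, the scarce PLA resource.
        for name, luts, pterms, d in sorted(cands, key=lambda c: c[1] / c[2], reverse=True):
            if len(chosen) < NUM_PLAS and pterms <= PLA_PTERM_CAP and delay + d <= DELAY_BUDGET_NS:
                chosen.append(name)
                delay += d
        return chosen

    print(select_partitions(candidates))   # the rest of the design stays in LUTs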
24

On fast simulation techniques for queueing systems

Wu, Yujing 01 January 2004 (has links)
Communication networks have experienced dramatic growth in all dimensions: size, speed, heterogeneity, etc. This poses great challenges to network modeling and performance evaluation. Various schemes have been proposed to speed up network simulation. For example, abstract simulation trades off simulation fidelity for speedup. In this dissertation, we investigate accuracy issues in abstract simulation, and also describe efforts to accelerate simulation by exploiting special properties of queueing systems. First, we describe low- and high-resolution models of a simple queue via Poisson Driven Stochastic Differential Equations. Explicit formulas for the evaluation errors are obtained, from which we identify the impacts of different traffic components and of the utilization of the queue. It is not new to simulate networks at the burst scale; for example, a cluster of closely spaced packets is modeled as a fluid chunk with a constant rate. In previous studies, however, the loss of accuracy was mainly assessed by experiments, and there has been little research on the quantitative characterization of simulation errors. We obtain error formulas for a specific queueing system, and from them identify the occurrence of queue-empty periods as a major contributor to the degradation of accuracy. This provides further understanding of the impacts of source traffic and queueing systems on the error. Time-stepped simulation (TSS) has been proposed to deal with the scalability issues encountered by event-driven fluid simulation and packet-level discrete event simulation, but systematic investigation of its simulation errors has been rather modest. We study the impacts of short-term and long-term traffic burstiness on the errors and show that the accuracy of TSS is related to traffic properties and system loads. In order to obtain tolerable degradation for a wider range of utilizations, we propose compensated TSS (CTSS). We also discuss the effect of long-term traffic burstiness and system load on the accuracy of TSS in capturing queue outputs, and briefly study TCP networks. Motivated by simulation speedup, we explore decomposition phenomena in queueing systems. The original queue is converted into a new system to which a fast simulation can be applied. We prove the equivalence of the mean queue length of the two systems under certain circumstances.
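A toy Python sketch of the TSS error source identified above: a fluid queue updated in fixed time steps cannot see a queue that empties and refills within one step, so coarse steps distort the time-average queue length. The on/off source, rates, and step sizes are invented for illustration; the dissertation's error formulas are not reproduced here.

    import random
    random.seed(1)

    C = 1.0                       # service rate (work per unit time)
    T = 200.0                     # simulated horizon (assumed)
    # Bursty on/off fluid source: rate 2C when "on", silent when "off".
    intervals, t = [], 0.0
    while t < T:
        dur = random.expovariate(1.0)
        rate = 2.0 * C if random.random() < 0.4 else 0.0
        intervals.append((t, t + dur, rate))
        t += dur

    def inflow(t0, t1):
        """Fluid arriving in [t0, t1) under the piecewise-constant source."""
        return sum(max(0.0, min(t1, e) - max(t0, s)) * r for s, e, r in intervals)

    def mean_queue(step):
        q, area, t = 0.0, 0.0, 0.0
        while t < T:
            # One time step: a queue that empties and refills *within* the
            # step is invisible at this resolution -- the source of TSS error.
            q = max(0.0, q + inflow(t, t + step) - C * step)
            area += q * step
            t += step
        return area / T           # time-average queue length

    print("fine steps:", round(mean_queue(0.05), 3), " coarse TSS:", round(mean_queue(5.0), 3))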
25

Quality of service support for wireless networks

Kim, Ilhwan 01 January 2003 (has links)
This dissertation addresses the issue of Quality of Service (QoS) support for wireless networks, especially IEEE 802.11 wireless LANs, whose time-varying capacity makes the QoS support problem difficult. To provide guarantees on QoS parameters such as delay violation probability and packet loss probability due to buffer overflow, we need to take into account not only input traffic characteristics, but also time-varying wireless channel characteristics. Taking the wireless channel characteristics into account, we use the theoretical tool of large deviation theory to analyze the wireless Generalized Processor Sharing (GPS) scheme, which provides stochastic performance guarantees for wireless LANs through the PCF mode based on polling, and we propose connection admission schemes that admit as many connections as possible while satisfying the QoS requirements of each connection. To increase the throughput and delay performance of best-effort traffic, such as TCP traffic flowing through the wireless LANs, we propose an H∞ AQM scheme for the congested queue that resides in the AP station, facing the wireless network with time-varying channel capacity. We show superior performance of the proposed scheme in comparison with the ABED scheme through ns simulation results. For the cases where traffic or channel models are not available, we propose an adaptive bandwidth allocation scheme and a relative differentiated service architecture for wireless LANs, and show experimental results obtained from a testbed in which the proposed schemes were implemented.
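A schematic Python sketch of large-deviations-style admission control: admit a new connection only if the summed effective bandwidths fit the channel capacity for the QoS exponent implied by the loss target. The Gaussian-approximation effective-bandwidth formula, buffer size, and traffic numbers are generic textbook stand-ins, not the dissertation's wireless-GPS analysis.

    import math

    def effective_bandwidth(mean_rate, variance, theta):
        """Gaussian-approximation effective bandwidth: m + theta * var / 2."""
        return mean_rate + theta * variance / 2.0

    def admit(connections, new_conn, capacity, loss_target):
        """Admit if total effective bandwidth fits the capacity. theta is the
        QoS exponent implied by the loss target: P(overflow) ~ exp(-theta*B)."""
        BUFFER = 100.0                           # assumed buffer size (packets)
        theta = -math.log(loss_target) / BUFFER
        total = sum(effective_bandwidth(m, v, theta)
                    for m, v in connections + [new_conn])
        return total <= capacity

    # (mean rate, rate variance) per connection, in Mb/s -- illustrative numbers.
    active = [(1.0, 0.5), (2.0, 1.0)]
    print(admit(active, (1.5, 0.8), capacity=5.0, loss_target=1e-6))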
26

A non-asymptotic approach to the analysis of communication networks: From error correcting codes to network properties

Eslami, Ali 01 January 2013 (has links)
This dissertation focuses on two different topics: (1) non-asymptotic analysis of polar codes, a new paradigm in error-correcting codes with very promising features, and (2) network properties for wireless networks of practical size. In the first part, we investigate properties of polar codes that can be potentially useful in real-world applications. We start by analyzing the performance of finite-length polar codes over the binary erasure channel (BEC), assuming belief propagation (BP) as the decoding method. We provide a stopping set analysis for the factor graph of polar codes, in which we find the size of the minimum stopping set. Our analysis, along with bit error rate (BER) simulations, demonstrates that finite-length polar codes show superior error floor performance compared to conventional capacity-approaching coding techniques. Motivated by this good error floor performance, we introduce a modified version of BP decoding that employs a guessing algorithm to improve BER performance. Each application may impose its own requirements on the code design. To take full advantage of polar codes in practice, a fundamental question is which practical requirements are best served by polar codes. For example, we will see that polar codes are inherently well suited to rate-compatible applications and can provably achieve the capacity of time-varying channels with a simple rate-compatible design. This is in contrast to LDPC codes, for which no provably universally capacity-achieving design is known except for the case of the erasure channel. This dissertation investigates different approaches to applications such as UEP, rate-compatible coding, and code design over parallel sub-channels (non-uniform error correction). Furthermore, we consider the idea of combining polar codes with other coding schemes, in order to take advantage of polar codes' best properties while avoiding their shortcomings. In particular, we propose, and then analyze, a polar code-based concatenated scheme for use in Optical Transport Networks (OTNs) as a potential real-world application. The second part of the dissertation is devoted to the analysis of finite wireless networks, a fundamental problem in the area of wireless networking. We refer to networks as finite when the number of nodes is less than a few hundred. Today, thanks to the vast literature on large-scale wireless networks, we have a fair understanding of the asymptotic behavior of such networks. In the real world, however, we face finite networks, for which the asymptotic results cease to be valid. Here we study a model of wireless networks represented by random geometric graphs. In order to address a wide class of network properties, we study threshold phenomena: extensively studied in the asymptotic case, a threshold phenomenon occurs when a graph-theoretic property of the network (such as connectivity) experiences rapid change over a specific interval of the underlying parameter. We find an upper bound for the threshold width of finite line networks represented by random geometric graphs; this bound holds for all monotone properties of such networks. We then turn our attention to an important non-monotone characteristic of line networks, the Medium Access (MAC) layer capacity, defined as the maximum number of possible concurrent transmissions.
Towards this goal, we provide a linear-time algorithm that finds a maximal set of concurrent non-interfering transmissions, and we derive lower and upper bounds on the cardinality of this set. Using simulations, we show that these bounds serve as reasonable estimates of the actual MAC-layer capacity.
Keywords: Polar Codes, Channel Capacity, Rate-Compatible Codes, Non-Uniform Coding, Unequal Error Protection, Concatenated Codes, Belief Propagation, Random Geometric Graphs, Monotone Properties, Threshold Phenomena, Percolation Theory, Finite Wireless Networks, Connectivity, Coverage, MAC-Layer Capacity.
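A minimal Python sketch in the spirit of the linear-time maximal-set algorithm mentioned above: scan a line network left to right and accept each transmission that keeps a guard distance from the endpoints of the last accepted one. The guard-distance interference model and the link positions are assumptions for illustration; the dissertation's interference model may differ.

    def maximal_concurrent(links, guard):
        """Greedy left-to-right scan over a line network. Accept a link if both
        its endpoints lie at least `guard` beyond the last accepted link.
        links: list of (tx_pos, rx_pos) positions on a line."""
        accepted = []
        frontier = float("-inf")   # rightmost endpoint of the last accepted link
        for tx, rx in sorted(links):
            if min(tx, rx) - frontier >= guard:
                accepted.append((tx, rx))
                frontier = max(tx, rx)
        return accepted            # maximal: every rejected link conflicts

    # Node positions (meters) on a line; each pair is one transmission (invented).
    links = [(0, 3), (2, 5), (10, 12), (11, 14), (30, 28), (33, 36)]
    print(maximal_concurrent(links, guard=5.0))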
27

Networking issues in distributed real -time systems

Lakamraju, Vijaya Ramaraju 01 January 2002 (has links)
Networking involves every aspect of the design of the network infrastructure, from the selection or synthesis of the interconnection topology to the communication protocols it should use and how it should be deployed and maintained. A large body of literature is available on these issues. We attempt to further extend this body of work by looking at two specific issues: the synthesis of networks that satisfy multiple properties, and the design of fault-tolerant communication services for high-speed networks. Synthesizing networks that satisfy multiple requirements, such as high reliability, low diameter, and good embeddability, is a difficult problem to which there has been no completely satisfactory solution. Our approach to the problem involves a simple filtration process that takes as input a large number of randomly generated graphs. By using multiple filters, one for each requirement, and arranging them so that one feeds the next, the final output consists of a short-list of networks from which the designer can choose. Our experimental results show that this approach is both practical and powerful. Perhaps our biggest achievement here is showing how this seemingly simple approach can generate networks that are serious competitors to several traditional, well-known networks. We further highlight the practical applicability of these networks by considering how they can be effectively used in a packaging environment. The interconnection network can have a dominant effect on the reliability of a distributed system. While existing network software has been optimized for performance, it has not been able to deal with network failures effectively. We have developed a lightweight fault detection and recovery technique that provides coverage for almost all network interface failures. The detection is based on software watchdog timers and the recovery on delta-logging. We have implemented these schemes as a fault tolerance layer over Myrinet, a commercially available networking technology. The implementation showed that a fault detection time of 1 ms and a complete recovery time of around 0.5 seconds can be achieved with a performance impact of less than 10%. The effectiveness of our fault tolerance schemes was evaluated using a versatile performance and recovery analysis tool called RAPIDS.
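The filtration idea reads naturally as a pipeline of graph predicates. Here is a minimal Python sketch using the networkx library; the particular filters (connectivity, diameter, bounded degree as a crude packaging proxy), thresholds, and sample sizes are illustrative stand-ins for the dissertation's requirement filters.

    import networkx as nx
    import random

    N_NODES, N_EDGES, N_SAMPLES = 16, 32, 500   # illustrative sizes

    def diameter_filter(graphs, max_diam=4):
        return (g for g in graphs if nx.diameter(g) <= max_diam)

    def degree_filter(graphs, max_deg=5):
        # Bounded degree as a crude proxy for packaging/embeddability limits.
        return (g for g in graphs if max(d for _, d in g.degree()) <= max_deg)

    def synthesize(seed=0):
        rng = random.Random(seed)
        pool = (nx.gnm_random_graph(N_NODES, N_EDGES, seed=rng.randrange(10**9))
                for _ in range(N_SAMPLES))
        connected = (g for g in pool if nx.is_connected(g))   # first filter
        # Each filter feeds the next; survivors form the designer's short-list.
        return list(degree_filter(diameter_filter(connected)))

    short_list = synthesize()
    print(len(short_list), "candidate networks pass all filters")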
28

General hydrodynamic equation solver and its application to submicrometer semiconductor device simulations

Ieong, Meikei 01 January 1996 (has links)
A two-dimensional general hydrodynamic equation (HDE) solver has been developed. The solver is capable of solving the equations of most existing hydrodynamic (HD) models. The code is written so that it does not depend on a specific form of the model or its parameters, which are introduced only at the final stage. Consequently, the merits of different HD models can be studied on the same platform. A new discretization scheme based on optimum artificial diffusivity (OAD) is developed to resolve the numerical instability often present in the nonlinear, coupled HDEs. The OAD scheme also has the advantage that a single formula applies to all cases. The robustness of this discretization scheme and of the numerical solution methods is demonstrated by numerical examples. A practical device simulator must provide not only more accurate physical models but also shorter turn-around. Parallelization is the best strategy to reduce the total elapsed time without sacrificing physical accuracy. Two parallelization approaches, one based on different bias points and the other based on domain decomposition, are implemented on a network of workstations. Finally, the general HDE solver has been applied to several submicrometer SOI MOSFETs and SiGe HBTs. The predicted device characteristics are highly dependent on the competing effects of thermal back-diffusion and the non-local phenomena related to energy relaxation. Our numerical results indicate that experimental verification to guide the proper choice of HD model is urgently needed.
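To illustrate the generic role of artificial diffusivity in stabilizing a discretization (not the dissertation's OAD formula, which is not given in the abstract), here is a 1D Python toy: central differencing of an advection-diffusion equation is stabilized by adding just enough artificial diffusivity to keep the cell Peclet number at or below 2. All parameters are assumed for illustration.

    import numpy as np

    N, L = 200, 1.0
    dx = L / N
    v, D_phys = 1.0, 1e-4        # advection speed, physical diffusivity (assumed)
    dt = 0.4 * dx / v            # CFL-limited time step

    # Artificial diffusivity sized so the cell Peclet number v*dx/D stays <= 2,
    # the classic stabilization for central differencing (stand-in for OAD).
    D = D_phys + max(0.0, v * dx / 2.0 - D_phys)

    u = np.exp(-((np.linspace(0, L, N) - 0.2) ** 2) / 0.002)   # initial pulse
    for _ in range(300):
        adv = -v * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)   # central difference
        dif = D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u = u + dt * (adv + dif)                                 # periodic domain

    print("max:", u.max().round(3), " mass:", (u.sum() * dx).round(4))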
29

Architecture and technology tradeoffs in the design of high performance microprocessor-based systems

Albonesi, David Henry 01 January 1996 (has links)
Increasing levels of VLSI integration present new opportunities, and new challenges, for designers of high performance microprocessor-based systems. With more transistors at their disposal, architects face complex decisions regarding processor features, cache hierarchies, and support for several uniprocessor and multiprocessor target systems. In addition, as the speed gap between microprocessors and board-level technology continues to widen, a robust system-level design becomes a critical element for attaining acceptable performance. This dissertation describes STATS, a comprehensive, semi-automated trade-off analysis toolset. STATS overcomes the limitations of previous approaches by including the processor, cache hierarchy, system interconnect, and main memory designs, technology and architectural considerations, and both uniprocessor and multiprocessor analysis, within a single framework. STATS employs a judicious combination of compilation, execution-driven simulation, analytical modeling, and Spice analysis tools to achieve a reasonable balance of accuracy and analysis time. STATS is used in three architectural investigations. The first, an in-depth analysis of cache hierarchy alternatives for the Alpha 21064A processor design, includes a comparison of employing one, two, or three levels of hierarchy. A detailed analysis demonstrates the importance of precisely characterizing all aspects of cache hierarchy design, including traffic rates, miss ratios, cycle time, latency, and bandwidth, to avoid incorrect design decisions. The second explores tradeoffs in the design of a next-generation 8-way superscalar microprocessor-based workstation. Among the conclusions are that trading a smaller L1 Dcache for more arithmetic units provides the best overall performance, and that only marginal performance gains are obtained by using the package pins for an L3 cache rather than a direct main memory connection. Novel mechanisms for multi-porting L1 Dcaches and pipelining large on-chip L2 caches are shown to achieve up to an 81% performance improvement over conventional methods. The third investigation concerns the cluster design of CC-NUMA multiprocessors using the 8-way superscalar microprocessor. The results demonstrate that integrating the main memory controller onto the microprocessor die considerably reduces bus utilization and improves multiprocessor performance by as much as 35%. Interleaving alternatives for the distributed main memory are explored, as well as options for managing bus utilization in future cluster designs.
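In the spirit of STATS's mix of analytical modeling and simulation, a back-of-the-envelope Python sketch comparing one, two, and three cache levels via average memory access time (AMAT). All latencies and local miss ratios below are invented for illustration; they are not the dissertation's measured values.

    # AMAT for k-level hierarchies, computed recursively from the bottom up:
    # AMAT(level) = hit_time + local_miss_ratio * AMAT(next level).
    MAIN_MEMORY = 120.0     # assumed main-memory latency (cycles)

    def amat(levels):
        t = MAIN_MEMORY
        for hit_time, miss_ratio in reversed(levels):
            t = hit_time + miss_ratio * t
        return t

    one   = [(2, 0.10)]                           # L1 only
    two   = [(2, 0.10), (12, 0.30)]               # L1 + on-chip L2
    three = [(2, 0.10), (12, 0.30), (30, 0.50)]   # + board-level L3
    for name, cfg in [("1-level", one), ("2-level", two), ("3-level", three)]:
        print(name, round(amat(cfg), 2), "cycles")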
30

Image reconstruction and boundary detection using weak continuity constraints

Guler, Sadiye Fatma 01 January 1996 (has links)
In this dissertation we study the problem of boundary detection and discontinuity-preserving reconstruction for a wide class of images. Our objective is to combine these two problems into a single optimization problem and devise an efficient algorithm for its solution. To achieve this goal, we propose a deformable weak-elastic model, namely the deformable-membrane model, and the Constrained Graduated Non-Convexity algorithm. Our model preserves discontinuities and incorporates prior knowledge about the expected shape of the boundaries into the reconstruction process in order to organize the detected discontinuities. The weak-elastic models used for image reconstruction are based on weak continuity constraints, which model image discontinuities implicitly. The implicit representation of the discontinuities, while giving rise to effective deterministic algorithms, does not allow prior knowledge about the geometry of the detected boundaries to be included. Boundary detection algorithms, on the other hand, need prior knowledge about the image discontinuities to produce smooth and connected boundaries. We incorporate boundary-context information into weak-elastic models, as constraints on an auxiliary line process, by using a set of line configuration constraints. These line configuration constraints are assigned favorability coefficients to impose prior knowledge about the boundaries. To preserve the implicit nature of the line process, we translate the constraints on the line process into constraints on the image field. We extend the weak-elastic models to the problem of detecting boundaries in textured images. We use the sufficient statistics of a first-order GMRF model as the image features representing textured images. Using interacting layers of the deformable-membrane model, one per image feature, the discontinuities detected on each layer are fused to obtain the resulting boundaries. This model allows the image features to vary slowly within regions, under the weak continuity constraints, corresponding to the nonstationarity of the texture model. The discontinuities are defined as the places where one or more of the image features vary abruptly. We devise an adaptive feature selection criterion to optimally integrate multiple feature data. Based on the observation that image features may have varying discriminatory power over the regions of an image, we establish a criterion that employs a measure of between-region variance of the image features. We pose the boundary detection and image reconstruction problem as finding the minimum-energy state of the deformable membrane. The energy of the deformable membrane is a nonconvex function, and the prior knowledge about the boundaries introduces constraints into the calculation of the minimum-energy state, resulting in a constrained optimization problem. We adopt the Graduated Non-Convexity algorithm and extend it to constrained optimization. We show that a constrained minimum exists if the line configuration constraints are introduced gradually into the reconstruction process. We develop versions of the algorithm for boundary detection and image reconstruction in intensity images, boundary detection in textured images, and image reconstruction from sparse data, and test them on a wide range of synthetic and real images. We also present a survey of commonly used model-based methods for the low-level image reconstruction problem.
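A minimal 1D Python sketch of weak continuity constraints: a Blake-Zisserman-style "weak string" minimized by alternating between an explicit line process (breaking smoothness where it would cost more than a penalty alpha) and Gauss-Seidel sweeps of the resulting quadratic energy. This is the standard unconstrained formulation, not the constrained GNC variant with line-configuration priors developed in the dissertation; the signal and weights are invented.

    import numpy as np

    np.random.seed(0)
    # Piecewise-constant signal with one jump, plus noise.
    d = np.concatenate([np.zeros(50), np.ones(50) * 2.0]) + 0.1 * np.random.randn(100)

    LAM, ALPHA = 10.0, 1.0    # smoothness weight and line (discontinuity) penalty

    def weak_string(d, lam, alpha, sweeps=200):
        u = d.copy()
        for _ in range(sweeps):
            # Line process: break smoothness where it would cost more than alpha.
            du = np.diff(u)
            lines = lam * du**2 > alpha      # implicit discontinuities made explicit
            w = lam * (~lines)               # per-edge smoothness weights
            # One Gauss-Seidel sweep of the now-quadratic energy
            # sum (u_i - d_i)^2 + sum_e w_e (u_{i+1} - u_i)^2.
            for i in range(len(u)):
                wl = w[i - 1] if i > 0 else 0.0
                wr = w[i] if i < len(u) - 1 else 0.0
                num = d[i]
                num += wl * u[i - 1] if i > 0 else 0.0
                num += wr * u[i + 1] if i < len(u) - 1 else 0.0
                u[i] = num / (1.0 + wl + wr)
        return u, lines

    u, lines = weak_string(d, LAM, ALPHA)
    print("detected boundaries at:", np.nonzero(lines)[0])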
