431

Determining the performance costs in establishing cryptography services as part of a secure endpoint device for the Industrial Internet of Things

Ledwaba, Lehlogonolo P.I. January 2017 (has links)
Endpoint devices are integral in the realisation of any industrial cyber-physical system (ICPS) application. As part of the work of promoting safer and more secure industrial Internet of Things (IIoT) networks and devices, the Industrial Internet Consortium (IIC) and the OpenFog Consortium have developed security framework specifications detailing security techniques and technologies that should be employed during the design of an IIoT network. Previous work in establishing cryptographic services on platforms intended for wireless sensor networks (WSN) and the Internet of Things (IoT) has concluded that security mechanisms cannot be implemented using software libraries owing to the lack of memory and processing resources, the longevity requirements of the processor platforms, and the hard real-time requirements of industrial operations. Over a decade has passed since this body of knowledge was created, however, and IoT processors have seen a vast improvement in the available operating and memory resources while maintaining minimal power consumption. This study aims to update the body of knowledge regarding the provision of security services on an IoT platform by conducting a detailed analysis of the performance of new-generation IoT platforms when running software cryptographic services. The research considers execution time, power consumption and memory occupation and works towards a general, implementable design of a secure IIoT edge device. This is realised by identifying security features recommended for IIoT endpoint devices; identifying currently available security standards and technologies for the IIoT; and highlighting the trade-offs that the application of security will have on device size, performance, memory requirements and monetary cost. / Dissertation (MSc)--University of Pretoria, 2017. / Electrical, Electronic and Computer Engineering / MSc / Unrestricted
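The execution-time and memory-occupation metrics named in this abstract can be illustrated with a small microbenchmark. This is a minimal sketch, not the study's actual harness: it times a stdlib hash primitive (SHA-256) as a stand-in for the cryptographic services measured in the dissertation, and the payload size and iteration count are arbitrary choices.

```python
import hashlib
import time
import tracemalloc

def benchmark(fn, payload: bytes, iterations: int = 1000):
    """Average execution time and peak extra memory for one crypto
    operation, mirroring two of the three metrics considered above
    (power consumption requires external instrumentation)."""
    tracemalloc.start()
    start = time.perf_counter()
    for _ in range(iterations):
        fn(payload)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed / iterations, peak

# SHA-256 over a 256-byte message; a full study would sweep ciphers,
# key sizes and message lengths as well.
per_op, peak_bytes = benchmark(lambda m: hashlib.sha256(m).digest(), b"x" * 256)
print(f"avg {per_op * 1e6:.2f} us/op, peak {peak_bytes} B extra")
```

On a constrained IoT-class core the same loop would run orders of magnitude slower, which is exactly the trade-off the study quantifies.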
432

ScaleMesh: A Scalable Dual-Radio Wireless Mesh Testbed

ElRakabawy, Sherif M., Frohn, Simon, Lindemann, Christoph 17 December 2018 (has links)
In this paper, we introduce and evaluate ScaleMesh, a scalable miniaturized dual-radio wireless mesh testbed based on IEEE 802.11b/g technology. ScaleMesh can emulate large-scale mesh networks within a miniaturized experimentation area by adaptively shrinking the transmission range of mesh nodes by means of variable signal attenuators. To this end, we derive a theoretical formula for approximating the attenuation level required for downscaling desired network topologies. We present a performance study in which we validate the feasibility of ScaleMesh for network emulation and protocol evaluation. We further conduct single-radio vs. dual-radio experiments in ScaleMesh, and show that dual-radio communication significantly improves network goodput. The median TCP goodput we observe in a typical random topology at 54 Mbit/s with dual-radio communication ranges between 1468 Kbit/s and 7448 Kbit/s, depending on the current network load.
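The abstract does not reproduce the attenuation formula itself, but under the common log-distance path-loss model the attenuation needed for a given downscaling factor can be sketched as below. The model and the path-loss exponent are assumptions for illustration, not necessarily what ScaleMesh derives.

```python
import math

def attenuation_db(scale: float, path_loss_exponent: float = 2.0) -> float:
    """Extra attenuation (dB) needed to shrink the transmission range by
    a factor `scale`, assuming received power falls off as d**-alpha."""
    return 10.0 * path_loss_exponent * math.log10(scale)

# Shrinking ranges 10x under free-space loss (alpha = 2) needs 20 dB;
# a lossier indoor-like environment (alpha = 3) would need 30 dB.
print(attenuation_db(10.0))        # -> 20.0
print(attenuation_db(10.0, 3.0))   # -> 30.0
```

Variable attenuators set to such values let a desk-sized deployment mimic the topology of a campus-scale mesh.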
433

Gateway Adaptive Pacing for TCP across Multihop Wireless Networks and the Internet

ElRakabawy, Sherif M., Klemm, Alexander, Lindemann, Christoph 17 December 2018 (has links)
In this paper, we introduce an effective congestion control scheme for TCP over hybrid wireless/wired networks comprising a multihop wireless IEEE 802.11 network and the wired Internet. We propose an adaptive pacing scheme at the Internet gateway for wired-to-wireless TCP flows. Furthermore, we analyze the causes for the unfairness of oncoming TCP flows and propose a scheme to throttle aggressive wired-to-wireless TCP flows at the Internet gateway to achieve nearly optimal fairness. Thus, we denote the introduced congestion control scheme TCP with Gateway Adaptive Pacing (TCP-GAP). For wireless-to-wired flows, we propose an adaptive pacing scheme at the TCP sender. In contrast to previous work, TCP-GAP does not impose any control traffic overhead for achieving fairness among active TCP flows. Moreover, TCP-GAP can be incrementally deployed because it does not require any modifications of TCP in the wired part of the network and is fully TCP-compatible. Extensive simulations using ns-2 show that TCP-GAP is highly responsive to varying traffic conditions, provides nearly optimal fairness in all scenarios and achieves up to 42% more goodput than TCP NewReno.
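The abstract does not give the pacing rule, but the core mechanism of pacing at a gateway can be sketched as spacing packet departures by a minimum interval derived from the rate the wireless path is believed to sustain. The interval computation here is an illustrative assumption, not TCP-GAP's actual rule.

```python
def pace_departures(arrival_times, rate_pps):
    """Schedule departure times so that at most `rate_pps` packets per
    second enter the wireless hop: each packet leaves no earlier than
    one pacing interval after its predecessor."""
    interval = 1.0 / rate_pps
    departures, last = [], float("-inf")
    for t in arrival_times:
        depart = max(t, last + interval)
        departures.append(depart)
        last = depart
    return departures

# A 4-packet burst arriving at t=0 is smoothed to 100 pkt/s spacing,
# avoiding the burst-induced collisions typical of multihop 802.11.
print(pace_departures([0.0, 0.0, 0.0, 0.0], rate_pps=100))
```

Throttling an aggressive wired-to-wireless flow then amounts to assigning it a smaller `rate_pps` than its competitors at the gateway.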
434

TCP with Adaptive Pacing for Multihop Wireless Networks

ElRakabawy, Sherif M., Klemm, Alexander, Lindemann, Christoph 17 December 2018 (has links)
In this paper, we introduce a novel congestion control algorithm for TCP over multihop IEEE 802.11 wireless networks implementing rate-based scheduling of transmissions within the TCP congestion window. We show how a TCP sender can adapt its transmission rate close to the optimum using an estimate of the current 4-hop propagation delay and the coefficient of variation of recently measured round-trip times. The novel TCP variant is denoted as TCP with Adaptive Pacing (TCP-AP). In contrast to previous proposals for improving TCP over multihop IEEE 802.11 networks, TCP-AP retains the end-to-end semantics of TCP and neither relies on modifications to the routing or link layers nor requires cross-layer information from intermediate nodes along the path. A comprehensive simulation study using ns-2 shows that TCP-AP achieves up to 84% more goodput than TCP NewReno, provides excellent fairness in almost all scenarios, and is highly responsive to changing traffic conditions.
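A rate computation built from the two quantities the abstract names, the 4-hop propagation delay and the coefficient of variation of recent RTTs, could look like the sketch below. The weighting of the variation term is an assumption for illustration; the paper's exact formula may differ.

```python
import statistics

def tcp_ap_rate(four_hop_delay: float, rtt_samples: list) -> float:
    """Packets/s the sender may pace out: the inverse of the 4-hop
    propagation delay, backed off as RTT variability (a sign of
    contention along the path) grows."""
    mean_rtt = statistics.mean(rtt_samples)
    cov = statistics.stdev(rtt_samples) / mean_rtt  # coefficient of variation
    return 1.0 / (four_hop_delay * (1.0 + 2.0 * cov))

# Stable RTTs -> pace at the full spatial-reuse rate; jittery RTTs ->
# the sender backs off before losses occur.
print(tcp_ap_rate(0.040, [0.100, 0.100, 0.100]))   # -> 25.0 pkt/s
print(tcp_ap_rate(0.040, [0.060, 0.100, 0.140]))
```

The 4-hop horizon reflects spatial reuse in 802.11 chains: nodes roughly four hops apart can transmit simultaneously without interfering.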
435

TCP with gateway adaptive pacing for multihop wireless networks with Internet connectivity

ElRakabawy, Sherif M., Klemm, Alexander, Lindemann, Christoph 17 December 2018 (has links)
This paper introduces an effective congestion control pacing scheme for TCP over multihop wireless networks with Internet connectivity. The pacing scheme is implemented at the wireless TCP sender as well as at the Internet gateway, and reacts according to the direction of TCP flows running across the wireless network and the Internet. Moreover, we analyze the causes for the unfairness of oncoming TCP flows and propose a scheme to throttle aggressive wired-to-wireless TCP flows at the Internet gateway to achieve nearly optimal fairness. The proposed scheme, which we denote as TCP with Gateway Adaptive Pacing (TCP-GAP), does not impose any control traffic overhead for achieving fairness among active TCP flows and can be incrementally deployed since it does not require any modifications of TCP in the wired part of the network. In an extensive set of experiments using ns-2 we show that TCP-GAP is highly responsive to varying traffic conditions, provides nearly optimal fairness in all scenarios and achieves up to 42% more goodput for FTP-like traffic as well as up to 70% more goodput for HTTP-like traffic than TCP NewReno. We also investigate the sensitivity of the considered TCP variants to different bandwidths of the wired and wireless links with respect to both aggregate goodput and fairness.
436

Saving Energy in Network Hosts With an Application Layer Proxy: Design and Evaluation of New Methods That Utilize Improved Bloom Filters

Jimeno, Miguel 11 December 2009 (has links)
One of the most urgent challenges of the 21st century is to investigate new technologies that can enable a transition towards a society with a reduced CO2 footprint. Information Technology generates about 2% of global CO2 emissions, which is comparable to the aviation industry. Being connected to the Internet requires active participation in responding to protocol messages. Billions of dollars' worth of electricity every year are used to keep network hosts fully powered-on at all times only for the purpose of maintaining network presence. Most network hosts are idle most of the time, thus presenting a huge opportunity for energy savings and reduced CO2 emissions. Proxying has been previously explored as a means for allowing idle hosts to sleep yet still maintain network presence. This dissertation develops general requirements for proxying and is the first exploration of application-level proxying. Proxying for TCP connections, SIP, and Gnutella P2P was investigated. The TCP proxy keeps TCP connections open (when a host is sleeping) and buffers and/or discards packets as appropriate. The SIP proxy handles all communication with the SIP server and wakes up a sleeping SIP phone on an incoming call. The P2P proxy enables a Gnutella leaf node to sleep when not actively uploading or downloading files by handling all query messages and keyword lookups in a list of shared files. All proxies were prototyped and experimentally evaluated. Proxying for P2P led to the exploration of space- and time-efficient data structures to reduce the computational requirements of keyword search in the proxy. The use of pre-computation and hierarchical structures for reducing the false positive rate of a Bloom filter was explored. A Best-of-N Bloom filter was developed, which was shown to have a lower false positive rate than a standard Bloom filter and the Power-of-2 Bloom filter. An analysis of the Best-of-N Bloom filter was completed using order statistics to predict the false positive rate.
Potential energy savings are shown to be in the hundreds of millions of dollars per year assuming a modest adoption rate of the methods investigated in this dissertation. Future directions could lead to greater savings.
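The Best-of-N idea can be sketched compactly: build N candidate filters over the same key set with different hash seeds and keep the one whose predicted false-positive rate is lowest. Using the fill ratio (fraction of set bits) as the selection criterion is a simplification for this sketch; the dissertation develops the exact prediction via order statistics.

```python
import hashlib

class BloomFilter:
    def __init__(self, m: int, k: int, seed: int = 0):
        self.m, self.k, self.seed = m, k, seed
        self.bits = 0

    def _positions(self, key: str):
        # k bit positions derived from a seeded cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{self.seed}:{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key: str):
        for p in self._positions(key):
            self.bits |= 1 << p

    def __contains__(self, key: str) -> bool:
        return all(self.bits >> p & 1 for p in self._positions(key))

    def fill_ratio(self) -> float:
        return bin(self.bits).count("1") / self.m

def best_of_n(keys, m: int, k: int, n: int) -> BloomFilter:
    """Build n candidates with different seeds and keep the one with the
    fewest bits set: fewer set bits means a lower predicted
    false-positive probability for the same key set."""
    candidates = []
    for seed in range(n):
        bf = BloomFilter(m, k, seed)
        for key in keys:
            bf.add(key)
        candidates.append(bf)
    return min(candidates, key=BloomFilter.fill_ratio)

# A toy shared-file list, as a Gnutella proxy might index it.
bf = best_of_n(["song.mp3", "video.avi", "paper.pdf"], m=64, k=3, n=8)
print("song.mp3" in bf, bf.fill_ratio())
```

The selection step is pre-computation: it costs N filter constructions once, but every subsequent keyword lookup in the sleeping node's proxy benefits from the reduced false-positive rate.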
437

The influence of multi-walled carbon nanotubes on single-phase heat transfer and pressure drop characteristics in the transitional flow regime of smooth tubes

Grote, Kersten 10 June 2013 (has links)
There are, in general, two different types of studies concerning nanofluids. The first concerns itself with the effective thermal conductivity and the other with convective heat transfer enhancement; studies of convective heat transfer enhancement generally incorporate the study of thermal conductivity. Not many papers have been written on convective heat transfer enhancement, and even fewer concern multi-walled carbon nanotubes in the transitional flow regime. In this paper the thermal conductivity and viscosity were determined experimentally in order to study the convective heat transfer enhancement of the nanofluids. Multi-walled carbon nanotubes suspended in distilled water, flowing through a straight, horizontal tube, were investigated experimentally for a Reynolds number range of 1 000 - 8 000, which included the transitional flow regime. The tube was made of copper and had an internal diameter of 5.16 mm. Results on the thermal conductivity and viscosity indicated that both increase with nanoparticle concentration. Convective heat transfer experiments were conducted at a constant heat flux of 13 kW/m2 with 0.33%, 0.75% and 1.0% volume concentrations of multi-walled carbon nanotubes. The nanotubes had an outside diameter of 10 - 20 nm, an inside diameter of 3 - 5 nm and a length of 10 - 30 μm. Temperature and pressure drop measurements were taken, from which the heat transfer coefficients and friction factors were determined as a function of Reynolds number. The thermal conductivities and viscosities of the nanofluids were also determined experimentally so that the Reynolds and Nusselt numbers could be determined accurately. It was found that heat transfer was enhanced when comparing the data on a graph of Nusselt number as a function of Reynolds number, but the opposite effect was observed when comparing the results on a graph of heat transfer coefficient as a function of average velocity.
Performance evaluation of the nanofluids showed that the increase in viscosity was four times the increase in thermal conductivity, which resulted in an inefficient nanofluid. However, a study of the performance evaluation criterion showed that the nanofluids merit operation in the transitional and turbulent flow regimes, where their energy budget is better than that of the distilled water. / Dissertation (MEng)--University of Pretoria, 2012. / Mechanical and Aeronautical Engineering / unrestricted
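The dimensionless groups behind these comparisons are standard. A small sketch using the abstract's 5.16 mm tube diameter but illustrative water-like property values, not the measured nanofluid data:

```python
def reynolds(rho: float, v: float, d: float, mu: float) -> float:
    """Re = rho * v * d / mu (density, mean velocity, tube ID, viscosity)."""
    return rho * v * d / mu

def nusselt(h: float, d: float, k: float) -> float:
    """Nu = h * d / k (heat transfer coefficient, tube ID, conductivity)."""
    return h * d / k

# Illustrative values: water near room temperature in the 5.16 mm tube.
re = reynolds(rho=998.0, v=0.5, d=0.00516, mu=1.002e-3)
nu = nusselt(h=5000.0, d=0.00516, k=0.603)
print(round(re), round(nu, 1))   # well inside the 1 000 - 8 000 sweep
```

This also explains why the two comparisons in the abstract can point in opposite directions: adding nanotubes raises viscosity, so the same average velocity maps to a lower Reynolds number, and a gain on the Nu-vs-Re graph need not survive on the h-vs-velocity graph.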
438

Raspberry Pi Based Vision System for Foreign Object Debris (FOD) Detection

Mahammad, Sarfaraz Ahmad, Sushma, Vendrapu January 2020 (has links)
Background: The main purpose of this research is to design and develop a cost-effective system for the detection of Foreign Object Debris (FOD), dedicated to airports. FOD detection has been a significant problem at airports, as FOD can cause damage to aircraft. Developing such a device may require complicated hardware and software structures. The proposed solution is based on a computer vision system, which comprises flexible off-the-shelf components such as a Raspberry Pi and Camera Module, allowing a simple and efficient way to detect FOD. Methods: The solution in this research is achieved through user-centered design, which guides the design of a suitable and efficient system solution. The system's specifications, objectives and limitations are derived from this user-centered design, and the candidate technologies are chosen from the required functionalities and constraints to obtain a real-time, efficient FOD detection system. Results: The results are obtained using background subtraction for FOD detection and an implementation of an SSD (single-shot multi-box detector) model for FOD classification. The performance of the system is evaluated by testing its ability to detect FOD of different sizes at different distances. A web interface is also implemented to notify the user in real time when FOD occurs. Conclusions: We concluded that background subtraction and the SSD model are the most suitable algorithms for a Raspberry Pi based solution that detects FOD in real time. The system performs in real time, giving an efficiency of 84% for detecting medium-sized FOD such as persons at a distance of 75 meters and 72% for detecting large-sized FOD such as cars at a distance of 125 meters; the average frames per second (fps) at which the system records and processes frames of the monitored area is 0.95.
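The detection step rests on plain background subtraction. A minimal pure-Python sketch of that principle on grayscale pixel grids (the deployed system additionally maintains an adaptive background model and hands detections to the SSD classifier):

```python
def foreground_mask(frame, background, threshold=30):
    """Mark pixels whose absolute difference from the background model
    exceeds the threshold -- candidate foreign-object pixels."""
    return [
        [abs(p - b) > threshold for p, b in zip(row, bg_row)]
        for row, bg_row in zip(frame, background)
    ]

background = [[12, 12, 12],
              [12, 12, 12]]          # empty runway patch
frame      = [[12, 200, 12],
              [12, 12, 190]]         # two bright "debris" pixels appear
mask = foreground_mask(frame, background)
print(mask)   # -> [[False, True, False], [False, False, True]]
```

Clusters of `True` pixels are then cropped and classified; the threshold and background update policy govern the sensitivity-versus-false-alarm trade-off.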
439

DirectX 12: Performance Comparison Between Single- and Multithreaded Rendering when Culling Multiple Lights

J'lali, Yousra January 2020 (has links)
Background. As newer computers are constructed, more advanced and powerful hardware comes along with them. This leads corporations to enhance various program attributes and features to take hold of that hardware and thereby improve performance. A relatively new API that serves to facilitate such logic is Microsoft DirectX 12. There are numerous opinions about this specific API, and to get a slightly better understanding of its capabilities for hardware utilization, this research puts it under some tests. Objectives. This article's aim is to steadily perform tests and comparisons in order to find out which method has better performance when using DirectX 12: single-threading or multithreading. For performance measurements, the average CPU and GPU utilizations are gathered, as well as the average FPS and the time it takes to perform the Render function. When all results have been collected, the comparison between the methods is assessed. Methods. The main method used in this research is experimentation. To find out the performance differences between the two methods, they must undergo different trials while data is gathered. There are four experiments each for the single-threaded and the multithreaded application. Each test varies the number of lights and objects rendered in the simulation environment, gradually escalating from 50 to 100, 1000 and, lastly, 5000. Results. A similar pattern was discovered throughout all four experiments: the multithreaded application used considerably more of the CPU than the single-threaded version. Despite the GPU doing less simultaneous work in the single-threaded program, that program showed higher GPU utilization than the multithreaded one. Furthermore, the system with many threads tended to perform the Render function faster than its counterpart, regardless of which test was executed.
Nevertheless, the two applications never differed in FPS. Conclusion. Half of the hypotheses stated in this article were contradicted after an unexpected turn of events. It was believed that the multithreaded system would utilize less of the CPU and more of the GPU; the outcome was the opposite. Another hypothesis held that the system with multiple threads would execute the Render function faster than the other version, and this was strongly supported by the results. In addition, inserting more objects and lights into the scene did increase the applications' utilization of both the CPU and GPU, which supported a further hypothesis. In conclusion, the multithreaded program performs faster but still gains no FPS compared to single-threading. The multithreaded version also utilizes more CPU and less GPU.
440

Accurate workload design for web performance evaluation.

Peña Ortiz, Raúl 13 February 2013 (has links)
The new web applications and services, increasingly popular in our daily lives, have completely changed the way users interact with the Web. In less than half a decade, the role played by users has evolved from mere passive consumers of information to active collaborators in the creation of the dynamic content typical of today's Web. Moreover, this trend is expected to grow and consolidate over time. This dynamic user behaviour is one of the main keys to defining workloads suitable for accurately estimating the performance of web systems. Nevertheless, the intrinsic difficulty of characterising user dynamism and applying it in a workload model means that many research works still employ workloads that are not representative of current web navigation. This doctoral thesis focuses on characterising and reproducing, for performance evaluation studies, a more realistic type of web workload, capable of mimicking the behaviour of today's web users. The state of the art in workload modelling and generation for web performance studies shows several shortcomings regarding models and software applications that represent the different levels of user dynamism. This fact motivates us to propose a more precise model and to develop a new workload generator based on this new model. Both proposals have been validated against a traditional approach to web workload generation. To this end, a new experimentation environment capable of reproducing traditional and dynamic web workloads has been developed by integrating the proposed generator with a commonly used benchmark.
This doctoral thesis also analyses and evaluates, for the first time to the best of our knowledge, the impact that the use of dynamic workloads has on the metrics / Peña Ortiz, R. (2013). Accurate workload design for web performance evaluation [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/21054 / Palancia
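The dynamic-user idea can be sketched as a session generator in which the next request depends on the user's current state and a think time, instead of replaying a fixed static trace. The state names and transition table below are invented for illustration; they are not the thesis's actual model.

```python
import random

def user_session(steps: int, think_range=(1.0, 5.0), seed=None):
    """One simulated user: a small navigation state machine with
    per-step think times, approximating dynamic (Web 2.0) behaviour
    rather than a static request log."""
    rng = random.Random(seed)
    transitions = {                      # hypothetical navigation graph
        "browse": ["browse", "search", "post"],
        "search": ["browse", "post"],
        "post":   ["browse"],
    }
    state, trace = "browse", []
    for _ in range(steps):
        trace.append((state, round(rng.uniform(*think_range), 2)))
        state = rng.choice(transitions[state])
    return trace

for action, think in user_session(5, seed=7):
    print(f"{action:6s} then think {think:.2f} s")
```

Because each run is driven by state and chance rather than a recorded list of URLs, two sessions with the same length exercise the server differently, which is precisely the dynamism a representative workload must capture.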
