1. Performance estimation of wireless networks using traffic generation and monitoring on a mobile device.

Tiemeni, Ghislaine Livie Ngangom (January 2015)
Master of Science / In this study, a traffic-generation software package named MTGawn was developed to run packet generation and evaluation on a mobile device. The system can simulate voice over Internet protocol (VoIP) calls, as well as user datagram protocol (UDP) and transmission control protocol (TCP) traffic, between mobile phones over a wireless network, and it can analyse network data in the manner of computer-based monitoring tools such as Iperf and D-ITG while remaining self-contained on a mobile device. This entailed porting a stripped-down version of a packet generation and monitoring system, with functionality as found in open-source tools, to a mobile platform. The mobile system can generate and monitor traffic over any network interface on the device and calculate the standard quality of service (QoS) metrics. The tool was compared with a computer-based tool, the distributed Internet traffic generator (D-ITG), in the same environment and, in most cases, MTGawn reported results comparable to D-ITG's. The main motivation for this software was to ease feasibility testing and monitoring in the field by using affordable, rechargeable technology such as a mobile device. The system was tested in a testbed and can be used in rural areas where a mobile device is more practical than a PC or laptop. The main challenge was to port and adapt an open-source packet generator to the Android platform and to provide a suitable touchscreen interface for the tool.
ACM Categories and Subject Descriptors: B.8 [PERFORMANCE AND RELIABILITY]; B.8.2 [Performance Analysis and Design Aids]; C.4 [PERFORMANCE OF SYSTEMS]: Measurement techniques, Performance attributes
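
The abstract does not spell out how the standard QoS metrics are computed, so the following is only a minimal sketch of the usual definitions (throughput, loss ratio, mean one-way delay, and RFC 3550 smoothed jitter) over a hypothetical packet log; the Packet record layout and field names are assumptions, not MTGawn's actual format.

    # Minimal sketch: standard QoS metrics from a packet log.
    # The (seq, send_ts, recv_ts, size) record format is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        seq: int        # sender-assigned sequence number
        send_ts: float  # send timestamp, seconds
        recv_ts: float  # receive timestamp, seconds
        size: int       # payload size, bytes

    def qos_metrics(received: list[Packet], sent_count: int) -> dict:
        """Compute throughput, loss, mean delay, and jitter; assumes
        received is non-empty and clocks are synchronised."""
        pkts = sorted(received, key=lambda p: p.seq)
        duration = pkts[-1].recv_ts - pkts[0].recv_ts
        bits = sum(p.size for p in pkts) * 8
        delays = [p.recv_ts - p.send_ts for p in pkts]
        # RFC 3550 smoothed interarrival jitter: J += (|D| - J) / 16
        jitter = 0.0
        for prev, cur in zip(pkts, pkts[1:]):
            d = abs((cur.recv_ts - prev.recv_ts) - (cur.send_ts - prev.send_ts))
            jitter += (d - jitter) / 16
        return {
            "throughput_bps": bits / duration if duration > 0 else 0.0,
            "loss_ratio": 1 - len(pkts) / sent_count,
            "avg_delay_s": sum(delays) / len(delays),
            "jitter_s": jitter,
        }

These are the same headline metrics that Iperf and D-ITG report, which is what makes a like-for-like comparison between MTGawn and D-ITG possible.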
2. Algorithm/architecture codesign of low power and high performance linear algebra compute fabrics

Pedram, Ardavan (27 September 2013)
In the past, we could rely on technology scaling and new microarchitectural techniques to improve processor performance. Nowadays, both of these methods are reaching their limits. The primary concern in future architectures, with billions of transistors on a chip and limited power budgets, is power/energy efficiency. Full-custom design of application-specific cores can yield up to two orders of magnitude better power efficiency than conventional general-purpose cores, but integrating a new accelerator for each new application requires tremendous design effort. In this dissertation, we present the design of specialized compute fabrics that maintain the efficiency of full-custom hardware while providing enough flexibility to execute a whole class of coarse-grain operations. The broad vision is to develop integrated, specialized hardware/software solutions that are co-optimized and co-designed across all layers, from the basic hardware foundations up to application programming support through standard linear algebra libraries. We address these issues specifically in the context of dense linear algebra applications.

In the process, we pursue the main questions that architects face when designing such accelerators. How broad is the class of applications that the accelerator can support? What are the limiting factors that prevent utilization of these accelerators on the chip? What is the maximum achievable performance/efficiency? Answering these questions requires expertise and careful codesign of the algorithms and the architecture to select the best possible components, datapaths, and data movement patterns, resulting in a more efficient hardware-software codesign. In some cases, codesign reduces the complexity imposed on the algorithm side by initial limitations in the architecture.

We design a specialized Linear Algebra Processor (LAP) architecture and discuss the details of mapping matrix-matrix multiplication onto it. We further verify the flexibility of our design for computing a broad class of linear algebra kernels, and we conclude that this architecture can perform a broad range of matrix-matrix operations, as complex as matrix factorizations and even fast Fourier transforms (FFTs), while maintaining its ASIC-level efficiency. We present a power-performance model that compares state-of-the-art CPUs and GPUs with our design; the model reveals sources of inefficiency in CPUs and GPUs, and we demonstrate how to overcome them in the design of our LAP. As the dissertation progresses, we introduce modifications of the original matrix-matrix multiplication engine to facilitate the mapping of more complex operations, and we observe the resulting performance and efficiency on the modified engine using our power estimation methodology. Compared to other conventional architectures for linear algebra applications and FFT, our LAP is over an order of magnitude better in terms of power efficiency. Based on our estimates, up to 55 GFLOPS/W single-precision and 25 GFLOPS/W double-precision efficiencies are achievable on a single chip in standard 45 nm technology.
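
The core kernel mapped onto the LAP is matrix-matrix multiplication, whose efficiency comes from blocking: an output block stays resident in local storage while blocks of the inputs stream past it. The sketch below illustrates that general access pattern only; the block size, loop order, and numpy-based "local store" are illustrative assumptions, not the dissertation's actual datapath.

    # Minimal sketch of the blocked GEMM pattern (C += A @ B) that
    # linear algebra accelerators exploit for data locality.
    import numpy as np

    def blocked_gemm(A: np.ndarray, B: np.ndarray, C: np.ndarray,
                     nb: int = 4) -> np.ndarray:
        """Accumulate C += A @ B one nb-by-nb block of C at a time."""
        m, k = A.shape
        k2, n = B.shape
        assert k == k2 and C.shape == (m, n)
        for i in range(0, m, nb):          # block row of C
            for j in range(0, n, nb):      # block column of C
                acc = C[i:i+nb, j:j+nb].copy()   # accumulator stays "resident"
                for p in range(0, k, nb):  # stream A and B blocks past it
                    acc += A[i:i+nb, p:p+nb] @ B[p:p+nb, j:j+nb]
                C[i:i+nb, j:j+nb] = acc
        return C

Because GEMM performs 2mnk floating-point operations, a measured power draw converts directly into the GFLOPS/W figures quoted above; keeping the accumulator resident is what minimizes the data movement that dominates that power budget.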
3. Establishing the most appropriate aggregation model for software engineering (Establecimiento del Modelo de Agregación más apropiado para Ingeniería del Software)

Amatriain, Hernán Guillermo (January 2014)
Background: quantitative synthesis consists of integrating the results of a set of previously identified experiments into a single summary measure. The aim is a result that is representative of the individual studies' findings and therefore an improvement on the individual estimates. This kind of procedure is known as aggregation or meta-analysis. There are two strategies for aggregating a set of experiments. The first assumes that the differences in results from one experiment to another are due to random experimental error, and that a single result or effect size is shared by the whole population. The second assumes that no single effect size represents the whole population: depending on where or when the experiments are run, the results will vary under the influence of uncontrolled variables, although an average of the individual results can still be computed to draw a general conclusion. The first strategy is called the fixed-effect model and the second the random-effects model. Authors who have begun working on meta-analysis do not follow a unified line of work, so the criteria for conducting this type of study need to be unified. Objective: to establish a set of recommendations or guidelines that allow software engineering researchers to determine under what conditions it is advisable to develop a meta-analysis with the fixed-effect model, and when to use the random-effects model. Methods: the strategy is to generate the results of experiments with similar characteristics using the Monte Carlo method. All of them have small numbers of subjects, since this is the defining characteristic of experiments in software engineering and the reason the results of several experiments must be aggregated. The results of these experiments are then aggregated with the weighted mean difference method, first under the fixed-effect model and then under the random-effects model. Across the simulated combinations, the reliability and statistical power of the two effect models are analysed and compared.
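
For illustration, here is a minimal sketch of the two aggregation models the study compares: inverse-variance weighting under a fixed-effect model and under a DerSimonian-Laird random-effects model. In the study, the effect sizes y and within-study variances v would come from each simulated experiment's weighted mean difference; the example inputs below are placeholders, not the thesis's data.

    # Minimal sketch: fixed-effect vs. random-effects aggregation of
    # k effect sizes y with within-study variances v.
    import math

    def fixed_effect(y, v):
        """Inverse-variance pooled effect and its standard error."""
        w = [1.0 / vi for vi in v]
        est = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        return est, math.sqrt(1.0 / sum(w))

    def random_effects(y, v):
        """DerSimonian-Laird: add between-study variance tau^2 to weights."""
        w = [1.0 / vi for vi in v]
        est_f, _ = fixed_effect(y, v)
        q = sum(wi * (yi - est_f) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance
        w_star = [1.0 / (vi + tau2) for vi in v]
        est = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
        return est, math.sqrt(1.0 / sum(w_star))

For example, with y = [0.4, 0.6, 0.2] and v = [0.04, 0.05, 0.03], both functions return a pooled estimate and standard error; when the studies are homogeneous tau^2 collapses to zero and the two models coincide, which is exactly the boundary the study's reliability and power comparison probes.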
