1

A Hardware Testbed for Measuring IEEE 802.11g DCF Performance

Symington, Andrew 01 April 2009 (has links)
The Distributed Coordination Function (DCF) is the oldest and most widely used IEEE 802.11 contention-based channel access control protocol. DCF adds a significant amount of overhead in the form of preambles, frame headers, randomised binary exponential back-off and inter-frame spaces. Having accurate and verified performance models for DCF is thus integral to understanding the performance of IEEE 802.11 as a whole. In this document DCF performance is measured subject to two different workload models using an IEEE 802.11g test bed. Bianchi proposed the first accurate analytic model for measuring the performance of DCF. The model calculates normalised aggregate throughput as a function of the number of stations contending for channel access. The model also makes a number of assumptions about the system, including saturation conditions (all stations have a fixed-length packet to send at all times), full connectivity between stations, constant collision probability and perfect channel conditions. Many authors have extended Bianchi's machine model to correct certain inconsistencies with the standard, while very few have considered alternative workload models. Owing to the complexities associated with prototyping, most models are verified against simulations rather than experimentally on a test bed.

In addition to a saturation model, we considered a more realistic workload model representing wireless Internet traffic. Producing a stochastic model for such a workload was a challenging task, as usage patterns change significantly between users and over time. We implemented and compared two Markovian Arrival Processes (MAPs) for packet arrivals at each client: a Discrete-time Batch Markovian Arrival Process (D-BMAP) and a modified Hierarchical Markov Modulated Poisson Process (H-MMPP). Both models had parameters drawn from the same wireless trace data. It was found that, while the latter model exhibits better long-range dependency at the network level, the former represented traces more accurately at the client level, which made it more appropriate for the test bed experiments.

A nine-station IEEE 802.11 test bed was constructed to measure the real-world performance of the DCF protocol experimentally. The stations used IEEE 802.11g cards based on the Atheros AR5212 chipset and ran a custom Linux distribution. The test bed was moved to a remote location where there was no measured risk of interference from neighbouring radio transmitters in the same band. The DCF machine model was fixed, and normalised aggregate throughput was measured for one through to eight contending stations, subject to (i) saturation with a fixed packet length of 1000 bytes, and (ii) the D-BMAP workload model for wireless Internet traffic. Control messages were forwarded on a separate wired backbone network so that they did not interfere with the experiments. Analytic solver software was written to calculate numerical solutions for three popular analytic models of DCF, and the solutions were compared with the saturation test bed measurements. Although the normalised aggregate throughput trends were the same, it was found that, as the number of contending stations increases, the measured aggregate DCF performance diverges from the predictions of all three analytic models; for every station added to the network, the measured normalised aggregate throughput was lower than analytically predicted. We conclude that some property of the test bed was not captured by the simulation software used to verify the analytic models.
The D-BMAP experiments yielded a significantly lower normalised aggregate throughput than the saturation experiments, which is a clear result of channel underutilisation. Although this is a simple result, it highlights the importance of the traffic model on network performance. Normalised aggregate throughput appeared to scale more linearly when compared to the RTS/CTS access mechanism, but no firm conclusion could be drawn at 95% confidence. We conclude further that, although normalised aggregate throughput is appropriate for describing overall channel utilisation in the steady state, jitter, response time and error rate are more important performance metrics in the case of bursty traffic.
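The analytic models referred to above build on Bianchi's saturation model; the following is a minimal Python sketch of that calculation, solved by damped fixed-point iteration. The contention window, backoff stage count and timing values are illustrative placeholders, not the parameters measured in the thesis.

```python
# A minimal sketch of Bianchi's saturation model for DCF basic access.
# W (CW_min + 1), m (backoff stages) and all timings are placeholders.

def tau_of_p(p, W=16, m=6):
    """Per-station transmit probability from Bianchi's Markov chain."""
    return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def solve_tau(n, W=16, m=6, iters=500):
    """Solve the fixed point p = 1 - (1 - tau(p))**(n - 1) by damped iteration."""
    p = 0.01
    for _ in range(iters):
        p = 0.5 * p + 0.5 * (1 - (1 - tau_of_p(p, W, m)) ** (n - 1))
    return tau_of_p(p, W, m)

def throughput(n, payload_us, slot_us, ts_us, tc_us):
    """Normalised saturation throughput for n contending stations."""
    tau = solve_tau(n)
    p_tr = 1 - (1 - tau) ** n                     # P(some station transmits)
    p_s = n * tau * (1 - tau) ** (n - 1) / p_tr   # P(transmission succeeds)
    busy = (1 - p_tr) * slot_us + p_tr * p_s * ts_us + p_tr * (1 - p_s) * tc_us
    return p_tr * p_s * payload_us / busy

# Placeholder timings (microseconds): ~148 us carries 1000 bytes at 54 Mbit/s.
for n in range(1, 9):
    print(n, round(throughput(n, 148.0, 9.0, 400.0, 400.0), 3))
```

The divergence reported above would appear as the gap between predictions of this kind and the throughput measured on the test bed.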
2

A Hybrid Scavenger Grid Approach to Intranet Search

Nakashole, Ndapandula 01 February 2009 (has links)
According to a 2007 global survey of 178 organisational intranets, 3 out of 5 organisations are not satisfied with their intranet search services. However, as intranet data collections become large, effective full-text intranet search services are needed more than ever before. To provide an effective full-text search service based on current information retrieval algorithms, organisations have to deal with the need for greater computational power. Hardware architectures are needed that can scale to large data collections and can be obtained and maintained at a reasonable cost. Web search engines address scalability and cost-effectiveness by using large-scale centralised cluster architectures. The scalability of cluster architectures is evident in the ability of Web search engines to respond to millions of queries within a few seconds while searching very large data collections. Though more cost-effective than high-end supercomputers, cluster architectures still have relatively high acquisition and maintenance costs. Where information retrieval is not the core business of an organisation, a cluster-based approach may not be economically viable. A hybrid scavenger grid is proposed as an alternative architecture; it consists of a combination of dedicated resources and dynamic resources in the form of idle desktop workstations. From the dedicated resources the architecture gets predictability and reliability, whereas from the dynamic resources it gets scalability. An experimental search engine was deployed on a hybrid scavenger grid and evaluated. Test results showed that the resources of the grid can be organised to deliver the best performance by using the optimal number of machines and scheduling the optimal combination of tasks that the machines perform. A system efficiency and cost-effectiveness comparison of a grid and a multi-core machine showed that for workloads of modest to large sizes, the grid architecture delivers better throughput per unit cost than the multi-core machine, at a comparable system efficiency. The study has shown that a hybrid scavenger grid is a feasible search engine architecture that is cost-effective and scales to medium- and large-scale data collections.
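The comparison above turns on two simple figures of merit, system efficiency and throughput per unit cost. A minimal sketch of how they might be computed follows; all numbers are hypothetical, not the thesis's measurements.

```python
# Hedged sketch of the comparison metrics named above; figures are invented.
def system_efficiency(actual_throughput: float, peak_throughput: float) -> float:
    """Fraction of the hardware's peak indexing rate actually delivered."""
    return actual_throughput / peak_throughput

def throughput_per_cost(docs_per_hour: float, cost: float) -> float:
    """Indexing throughput normalised by system acquisition cost."""
    return docs_per_hour / cost

grid = throughput_per_cost(docs_per_hour=90_000, cost=4_000)       # scavenger grid
multicore = throughput_per_cost(docs_per_hour=60_000, cost=5_000)  # multi-core box
print(f"grid: {grid:.1f}, multi-core: {multicore:.1f} docs/hour per cost unit")
```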
3

Performance Benchmarking Physical and Virtual Linux Environments

Fisher, Mario 01 January 2012 (has links)
Virtualisation is a method of partitioning one physical computer into multiple "virtual" computers, giving each the appearance and capabilities of running on its own dedicated hardware. Each virtual system functions as a full-fledged computer and can be independently shut down and restarted. Xen is a form of paravirtualisation developed by the University of Cambridge Computer Laboratory and is available under both free and commercial licences. Performance results comparing Xen to native Linux, as well as to other virtualisation tools such as VMware and User Mode Linux (UML), were published by Barham et al. (2003) in the paper "Xen and the Art of Virtualization" at the Symposium on Operating Systems Principles in October 2003. Clark et al. (2004) performed a similar study and produced similar results. In this thesis, a similar performance analysis of Xen is undertaken and extended to include OpenVZ, an alternative open-source virtualisation technology. This study made explicit use of open-source software and commodity hardware.
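As a hedged illustration of the benchmarking approach (the actual suites used in such studies are not reproduced here), the same workload can be timed inside each environment and normalised against native Linux:

```python
# Illustrative relative-performance harness: run this identical script in
# native Linux, a Xen guest, and an OpenVZ container, then compare medians.
# The workload is a stand-in; real studies use suites like lmbench or SPEC.
import time

def cpu_workload(n=200_000):
    """A simple CPU-bound task to exercise the processor."""
    total = 0
    for i in range(1, n):
        total += i * i % 97
    return total

def benchmark(runs=5):
    """Median wall-clock time over several runs."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        cpu_workload()
        times.append(time.perf_counter() - start)
    return sorted(times)[len(times) // 2]

print(benchmark())
# Normalise afterwards: relative = native_time / environment_time,
# where 1.0 means no measurable virtualisation overhead.
```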
4

Forensic Analysis Of C-4 And Commercial Blasting Agents For Possible Discrimination

Steele, Katie 01 January 2007 (has links)
The criminal use of explosives has increased in recent years. Political instability and widespread access to the internet, filled with "homemade recipes," are two conjectured reasons for the increase. C-4 is a plastic bonded explosive (PBX) composed of 91% of the high explosive RDX, 1.6% processing oils, 5.3% plasticizer, and 2.1% polyisobutylene (PIB). C-4 is most commonly used for military purposes, but has also found use in commercial industry. Current methods for the forensic analysis of C-4 are limited to identification of the explosive; however, recent publications have suggested the plausibility of discriminating between C-4 samples based on the processing oils and stable isotope ratios. This research focuses on the discrimination of C-4 samples based on ratios of RDX to HMX, a common impurity resulting from RDX synthesis. The relative amount of HMX is a function of the RDX synthetic route and conditions. RDX was extracted from different C-4 samples and analyzed by ESI-MS-SIM (as the chloride adduct), EI-GC-MS-SIM, and NICI-GC-MS. RDX/HMX ratios were calculated for each method. An analysis of variance (ANOVA) followed by a Tukey HSD test allowed an overall discriminating power to be assessed for each analytical method. The C-4 processing oils were also extracted and analyzed by direct exposure probe mass spectrometry (DEP-MS) with electron ionization, a technique that requires less than two minutes per analysis. The overall discriminating power of the processing oils was calculated by conducting a series of t-tests. Lastly, a set of heterogeneous commercial blasting agents was analyzed by laser-induced breakdown spectroscopy (LIBS). The data were analyzed by principal components analysis (PCA), and the possibility of creating a searchable library was explored.
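A sketch of the statistical treatment described above, one-way ANOVA followed by Tukey's HSD, is given below on fabricated RDX/HMX ratio data; the real measurements are not reproduced here.

```python
# Hedged sketch: ANOVA across C-4 sources on RDX/HMX ratios, then Tukey HSD.
# All ratio values are fabricated placeholders for illustration only.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical RDX/HMX ratios from replicate measurements of three samples.
sample_a = np.array([11.2, 11.5, 11.1, 11.4])
sample_b = np.array([9.8, 10.1, 9.9, 10.0])
sample_c = np.array([11.3, 11.6, 11.2, 11.5])

f_stat, p_value = f_oneway(sample_a, sample_b, sample_c)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")  # small p: means differ

# Post-hoc test identifies which pairs of samples are distinguishable.
ratios = np.concatenate([sample_a, sample_b, sample_c])
groups = ["A"] * 4 + ["B"] * 4 + ["C"] * 4
print(pairwise_tukeyhsd(ratios, groups, alpha=0.05))
```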
5

A framework for developing finite element codes for multi-disciplinary applications.

Dadvand, Pooyan 13 July 2007 (has links)
The world of computing simulation has experienced great progress in recent years and faces ever more exigent multidisciplinary challenges to satisfy new demands. The growing importance of solving multi-disciplinary problems has made developers pay more attention to these problems and to the difficulties involved in developing software in this area. Conventional finite element codes have several difficulties in dealing with multi-disciplinary problems. Many of these codes are designed and implemented for solving a certain type of problem, generally involving a single field. Extending these codes to deal with another field of analysis usually raises several problems and requires large amounts of modification and implementation. Some typical difficulties are: a predefined set of degrees of freedom per node, a data structure with a fixed set of defined variables, a global list of variables for all entities, domain-based interfaces, IO restricted in reading new data and writing new results, and algorithm definitions inside the code. A common approach is to connect different solvers via a master program which implements the interaction algorithms and also transfers data from one solver to another. This approach has been used successfully in practice, but results in duplicated implementation and the redundant overhead of storing and transferring data, which may be significant depending on the solvers' data structures. The objective of this thesis is to design and implement a framework for building multi-disciplinary finite element programs. Generality, reusability, extendibility, good performance and memory efficiency are considered to be the main points in the design and implementation of this framework. Preparing the structure for team development is another objective, because usually a team of experts in different fields is involved in the development of a multi-disciplinary code. Kratos, the framework created in this work, provides several tools for the easy implementation of finite element applications and also provides a common platform for the natural interaction of its applications in different ways. This is achieved not only by a number of innovations but also by collecting and reusing several existing works. In this work an innovative variable-based interface is designed and implemented, which is used at different levels of abstraction and has been shown to be very clear and extendible. Another innovation is a very efficient and flexible data structure which can be used to store any type of data in a type-safe manner. An extendible IO is also created to overcome another bottleneck in dealing with multi-disciplinary problems. Collecting different concepts from existing works and adapting them to coupled problems is considered to be another innovation in this work; examples are the use of an interpreter, different data organisations and a variable number of degrees of freedom per node. The kernel-and-application approach is used to reduce possible conflicts between developers of different fields, and layers are designed to reflect the working space of different developers, also considering their programming knowledge. Finally, several technical details are applied in order to increase the performance and efficiency of Kratos, which makes it practically usable. This work is completed by demonstrating the framework's functionality in practice. First, some classical single-field applications such as thermal, fluid and structural applications are implemented and used as benchmarks to prove its performance.
These applications are then used to solve coupled problems in order to demonstrate the natural interaction facility provided by the framework. Finally, some less classical coupled finite element algorithms are implemented to show its high flexibility and extendibility.
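The variable-based interface and type-safe data structure described above might be sketched as follows. Kratos itself is written in C++; every name below is illustrative, not the framework's actual API.

```python
# Minimal sketch of a "variable-based", type-safe data container: typed
# Variable objects act as keys, and values are type-checked on storage,
# so new fields need no change to the container or the solvers.
from typing import Any, Dict, Generic, TypeVar

T = TypeVar("T")

class Variable(Generic[T]):
    def __init__(self, name: str, value_type: type):
        self.name = name
        self.value_type = value_type

class DataValueContainer:
    """Holds an arbitrary set of variables per entity (node, element...)."""
    def __init__(self) -> None:
        self._data: Dict[str, Any] = {}

    def set(self, var: Variable[T], value: T) -> None:
        if not isinstance(value, var.value_type):
            raise TypeError(f"{var.name} expects {var.value_type.__name__}")
        self._data[var.name] = value

    def get(self, var: Variable[T]) -> T:
        return self._data[var.name]

# A new analysis field only declares new variables; nothing else changes.
TEMPERATURE = Variable("TEMPERATURE", float)
DISPLACEMENT = Variable("DISPLACEMENT", tuple)

node = DataValueContainer()
node.set(TEMPERATURE, 293.15)
node.set(DISPLACEMENT, (0.0, 0.001, 0.0))
print(node.get(TEMPERATURE))
```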
6

Acceleration of the noise suppression component of the DUCHAMP source-finder.

Badenhorst, Scott 01 January 2015 (has links)
The next generation of radio interferometer arrays - the proposed Square Kilometre Array (SKA) and its precursor instruments, the Karoo Array Telescope (MeerKAT) and the Australian Square Kilometre Array Pathfinder (ASKAP) - will produce radio observation survey data orders of magnitude larger than current sizes. The sheer size of the imaged data produced necessitates fully automated solutions to accurately locate and produce useful scientific data for radio sources which are (for the most part) partially hidden within inherently noisy radio observations (source extraction). Automated extraction solutions exist, but they are computationally expensive and do not yet scale to the performance required to process large data in practical time-frames. The DUCHAMP software package is one of the most accurate source extraction packages for general (source shape unknown) source finding. DUCHAMP's accuracy is primarily facilitated by the à trous wavelet reconstruction algorithm, a multi-scale smoothing algorithm which suppresses erratic observation noise. This algorithm is the most computationally expensive and memory intensive within DUCHAMP, and consequently improvements to it greatly improve overall DUCHAMP performance. We present a high-performance, multithreaded implementation of the à trous algorithm with a focus on 'desktop' computing hardware, to enable standard researchers to run their own accelerated searches. Our solution consists of three main areas of improvement: single-core optimisation, multi-core parallelism, and efficient out-of-core computation of large data sets with memory management libraries. Efficient out-of-core computation (data partially stored on disk when primary memory resources are exceeded) of the à trous algorithm accommodates 'desktop' computing's limited fast memory resources by mitigating the performance bottleneck associated with frequent secondary storage access. Although this work focuses on 'desktop' hardware, the majority of the improvements developed are general enough to be used within other high-performance computing models. Single-core optimisations improved algorithm accuracy by reducing rounding error and achieved a 4× serial performance increase, which scales with the filter size used during reconstruction. Multithreading on a quad-core CPU further increased the performance of the filtering operations within reconstruction to 22× (performance scaling approximately linearly with additional CPU cores) and achieved a 13× performance increase overall. All evaluated out-of-core memory management libraries performed poorly with parallelism. Single-threaded memory management partially mitigated the slow disk-access bottleneck and achieved a 3.6× increase (uniform across all tested large data sets) for filtering operations and a 1.5× increase overall. Faster secondary storage solutions such as solid state drives or RAID arrays are required to process large survey data on 'desktop' hardware in practical time-frames.
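A minimal NumPy sketch of the à trous reconstruction idea described above (multi-scale smoothing, thresholding of the wavelet planes, and re-summation) follows. DUCHAMP's production implementation differs in detail; this shows only the algorithm's structure, in 1D.

```python
# A trous ("with holes") sketch: smooth with a B3-spline kernel whose taps
# are spread 2**j samples apart, keep the detail planes, hard-threshold
# them against a robust noise estimate, then sum back.
import numpy as np

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def atrous_smooth(signal, scale):
    """Convolve with the B3 kernel dilated by 2**scale (clamped edges)."""
    step = 2 ** scale
    n = len(signal)
    out = np.zeros(n)
    for k, c in zip((-2, -1, 0, 1, 2), B3):
        idx = np.clip(np.arange(n) + k * step, 0, n - 1)
        out += c * signal[idx]
    return out

def denoise(signal, scales=4, sigma_k=3.0):
    """Suppress noise by thresholding each wavelet plane, then recombine."""
    current, planes = signal.astype(float), []
    for j in range(scales):
        smooth = atrous_smooth(current, j)
        planes.append(current - smooth)  # detail removed at this scale
        current = smooth
    result = current                     # coarsest smoothed plane
    for w in planes:
        mad = np.median(np.abs(w - np.median(w))) / 0.6745  # robust sigma
        result += np.where(np.abs(w) > sigma_k * mad, w, 0.0)
    return result

# Example: recover two spikes buried in Gaussian noise.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1024)
x[200], x[600] = 8.0, 10.0
print(denoise(x)[195:205].round(2))
```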
7

Specification and Verification of Systems Using Model Checking and Markov Reward Models

Lifson, Farrel 01 May 2004 (has links)
The importance of service level management has come to the fore in recent years as computing power becomes more and more of a commodity. In order to present a consistently high quality of service, systems must be rigorously analysed, even before implementation, and monitored to ensure these goals can be achieved. The tools and algorithms found in performability analysis offer a potentially ideal method to formally specify and analyse performance and reliability models. This thesis examines Markov reward models, a formalism based on continuous time Markov chains, and its usage in the generation and analysis of service levels. The particular solution technique we employ in this thesis is model checking, using Continuous Reward Logic as a means to specify requirements and constraints on the model. We survey the tools currently available for model checking Markov reward models. Specifically, we extended the Erlangen-Twente Markov Chain Checker to be able to solve Markov reward models by taking advantage of the duality theorem of Continuous Stochastic Reward Logic, of which Continuous Reward Logic is a sub-logic. We are also concerned with the specification techniques available for Markov reward models, which have in the past merely been extensions of the specification techniques available for continuous time Markov chains. We implement a production rule system using Ruby, a high-level language, and show the advantages gained by using its native interpreter and language features to cut down on implementation time and code size. The limitations inherent in Markov reward models are discussed, and we focus on the issue of zero-reward states. Previous algorithms used to remove zero-reward states, while preserving the numerical properties of the model, could potentially alter its logical properties. We propose algorithms based on analysing the Continuous Reward Logic requirement beforehand to determine whether a zero-reward state can be removed safely, as well as an approach based on substitution of zero-reward states. We also investigate limitations on multiple reward structures and the ability to solve for both time and reward. Finally, we perform a case study on a Beowulf parallel computing cluster using Markov reward models and the ETMCC tool, demonstrating their usefulness in the implementation of performability analysis and the determination of the service levels that the cluster can offer its users.
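The formalism at work here pairs a continuous time Markov chain (generator matrix Q) with a reward vector r; the long-run expected reward rate is the steady-state distribution weighted by r. A minimal sketch with a hypothetical three-state (up/degraded/down) model:

```python
# Hedged sketch of a Markov reward model: CTMC generator Q plus rewards r.
# The 3-state model and all rates are purely illustrative.
import numpy as np

Q = np.array([[-0.2, 0.15, 0.05],   # up       -> degraded, down
              [0.3, -0.4, 0.1],     # degraded -> up, down
              [0.5, 0.0, -0.5]])    # down     -> up
r = np.array([1.0, 0.5, 0.0])       # reward rate earned in each state

# Solve pi Q = 0 with sum(pi) = 1 (append the normalisation equation).
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady state:", pi.round(4))
print("expected reward rate:", (pi @ r).round(4))  # performability measure
```

A zero-reward state, such as the "down" state above, contributes nothing to the accumulated reward but can still matter to a logical requirement, which is why removing such states is delicate.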
8

Model Driven Communication Protocol Engineering and Simulation based Performance Analysis using UML 2.0

de Wet, Nico 01 January 2005 (has links)
The automated functional and performance analysis of communication systems specified with some Formal Description Technique (FDT) has long been the goal of telecommunication engineers. In the past, SDL and Petri nets have been the most popular FDTs for the purpose. With the growth in popularity of UML, the most obvious question to ask is whether one can translate one or more UML diagrams describing a system into a performance model. Until the advent of UML 2.0 that was an impossible task, since the semantics were not clear. Even though the UML semantics are still not entirely clear for the purpose, with UML 2.0 now released, and using ITU recommendation Z.109, we describe in this dissertation a methodology and tool called proSPEX (protocol Software Performance Engineering using XMI) for the design and performance analysis of communication protocols specified with UML. Our first consideration in the development of our methodology was to identify the roles of UML 2.0 diagrams in the performance modelling process. In addition, questions regarding the specification of non-functional duration constraints, or temporal aspects, were considered. We developed a semantic time model which addresses the language's lack of means for specifying communication delays and processing times. Environmental characteristics such as channel bandwidth and buffer space can be specified, and realistic assumptions are made regarding time and signal transfer. With proSPEX we aimed to integrate a commercial UML 2.0 model editing tool and a discrete-event simulation library. Such an approach has been advocated as necessary in order to develop a closer integration of performance engineering with formal design and implementation methodologies. In order to realize the integration we first identified a suitable simulation library and then extended it with features required to represent high-level SDL abstractions, such as extended finite state machines (EFSMs) and signal addressing. In implementing proSPEX we filtered the XML output of our editor and used text templates for code generation. The filtering of the XML output and the need to extend our simulation library with EFSM abstractions proved to be significant implementation challenges. Lastly, in order to illustrate the utility of proSPEX, we conducted a performance analysis case study in which the efficient short remote operations (ESRO) protocol is used in a wireless e-commerce scenario.
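The EFSM abstraction mentioned above might be sketched as follows; the actual proSPEX classes are not reproduced here, and all names are illustrative.

```python
# Hedged sketch of an extended finite state machine (EFSM): states plus
# context variables, with guarded, action-bearing transitions fired by
# signals, in the style of SDL process behaviour.
class EFSM:
    def __init__(self, initial_state):
        self.state = initial_state
        self.ctx = {}          # extended (context) variables
        self.transitions = []  # (state, signal, guard, action, next_state)

    def add(self, state, signal, guard, action, next_state):
        self.transitions.append((state, signal, guard, action, next_state))

    def fire(self, signal):
        """Consume a signal: take the first enabled transition, if any."""
        for state, sig, guard, action, nxt in self.transitions:
            if state == self.state and sig == signal and guard(self.ctx):
                action(self.ctx)
                self.state = nxt
                return True
        return False  # unmatched signal is discarded

# Toy machine: move to 'done' once the third 'ack' signal arrives.
m = EFSM("wait")
m.ctx["acks"] = 0
m.add("wait", "ack", lambda c: c["acks"] < 2,
      lambda c: c.update(acks=c["acks"] + 1), "wait")
m.add("wait", "ack", lambda c: c["acks"] >= 2, lambda c: None, "done")
for _ in range(3):
    m.fire("ack")
print(m.state)  # -> done
```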
9

A lightweight interface to local Grid scheduling systems

Parker, Christopher P 01 May 2009 (has links)
Many complex research problems require an immense amount of computational power to solve. In order to solve such problems, the concept of the computational Grid was conceived. Although Grid technology is hailed as the next great enabling technology in computer science, the last being the inception of the World Wide Web, some concerns have to be addressed if it is going to be successful. The main difference between the Web and the Grid in terms of adoption is usability. The Web was designed with both functionality and end-users in mind, whereas the Grid has been designed solely with functionality in mind. Although large Grid installations are operational around the globe, their use is restricted to those who have an in-depth knowledge of their complex architecture and functionality. Such technology is therefore out of reach for the very scientists who need these resources, because of its sheer complexity. The Grid is likely to succeed as a tool for some large-scale problem solving, as there is no alternative on a similar scale. However, in order to integrate such systems into our daily lives, just as the Web has been, they need to be accessible to "novice" users. Without such accessibility, the use and growth of such systems will remain constrained. This dissertation details one possible way of making the Grid more accessible, by providing high-level access to the scheduling systems on which Grids rely. Since the Grid is a mechanism for transferring control of user-submitted jobs to third-party scheduling systems, high-level access to the schedulers themselves was deemed a natural place to begin usability-enhancing efforts. In order to design a highly usable and intuitive interface to a Grid scheduling system, a series of interviews with scientists was conducted to gain insight into the way in which supercomputing systems are utilised. Once this data was gathered, a paper-based prototype system was developed. This prototype was then evaluated by a group of test subjects, who set out to criticise the interface and make suggestions as to where it could be improved. Based on this new data, the final prototype was developed, first on paper and then in software. The implementation makes use of lightweight Web 2.0 technologies. Designing lightweight software allows one to make use of the dynamic properties of Web technologies and thereby create more usable interfaces that are also visually appealing. Finally, the system was once again evaluated by another group of test subjects. In addition to user evaluations, performance experiments and real-world case studies were carried out on the interface. This research concluded that a dynamic Web 2.0-inspired interface appeals to a large group of users and allows for greater flexibility in the way in which data, in this case technical data, is presented. In terms of usability, the focal point of this research, it was found that it is possible to build an interface to a Grid scheduling system that can be used by users with no technical Grid knowledge. This is a significant outcome, as users were able to submit jobs to a Grid without fully comprehending the complexities involved in such actions, yet understanding the task they were required to perform. Finally, it was found that a lightweight approach is superior to the traditional HTML-only approach in terms of bandwidth usage and response time.
In this particular implementation of the interface, the benefits of using a lightweight approach are realised approximately halfway through a typical Grid job submission cycle.
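A minimal sketch of that lightweight exchange, using only the Python standard library, is given below; the endpoint, fields and response are hypothetical, not the dissertation's implementation.

```python
# Hedged sketch of the "lightweight" idea: after the first page load, the
# browser exchanges small JSON messages with the server instead of
# re-fetching full HTML pages for each job submission.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JobHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/submit":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        job = json.loads(self.rfile.read(length))  # e.g. {"script": "...", "cpus": 4}
        body = json.dumps({"job_id": 42, "status": "queued"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # a few hundred bytes versus a full HTML page

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), JobHandler).serve_forever()
```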
10

Identification of New Metabolic Mutations that Enhance the Cell-Killing Effect of Hydroxyurea, A Clinically Used Drug with Multiple Implications

Samuel, Rittu Elsa 09 August 2019 (has links)
No description available.
