51

Undersökning och framtagning av ett moduluppbyggt datainsamlingssystem / Study and Development of A Modular-based Data Acquisition System

Sadiq, Mohamad January 2008
This report describes a thesis project performed at and for SYSteam Engineering AB in Motala. The work is divided into three parts: a market study, programming, and electronics. The study part consists of examining and comparing different data acquisition systems for testing circuit boards, taking into account modularity, real-time behaviour, mobility, environmental requirements, and hardware and software interfaces, in order to define a general module-based data acquisition system, in both hardware and software, that allows for future development. The programming part consists of getting started with Visual Studio, which integrates Measurement Studio for C#/.NET. Measurement Studio includes classes and user controls for test and measurement and offers tools for acquiring, analyzing, and presenting real-world data. Programming was the largest part of the project; studying the test specification and learning how to program and control the hardware according to the test conditions took most of the time. Electronics was the smallest part, consisting of interpreting the test specification, connecting cables to the I/O modules, and supplementing the system with any components needed to execute the various test cases. The result is a system consisting of a chassis with a number of National Instruments modules and a test program consisting of three class levels that can be reused in different projects and for different test items.
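For intuition, the three-class-level structure mentioned in the result can be imagined roughly as follows. This is only an illustrative sketch in Python (the thesis itself used Measurement Studio for C#/.NET), and all class and method names here are hypothetical.

```python
# Illustrative three-level test-program structure, similar in spirit to the one
# described above. This is a Python sketch only; all names are hypothetical.

class Instrument:
    """Level 1: thin wrapper around one data-acquisition module/channel."""
    def __init__(self, channel: str):
        self.channel = channel

    def read_voltage(self) -> float:
        # Placeholder for a driver call into the acquisition hardware.
        return 3.3  # stubbed reading

class TestCase:
    """Level 2: one test condition taken from the test specification."""
    def __init__(self, name: str, instrument: Instrument, low: float, high: float):
        self.name, self.instrument = name, instrument
        self.low, self.high = low, high

    def run(self) -> bool:
        value = self.instrument.read_voltage()
        return self.low <= value <= self.high

class TestSequence:
    """Level 3: orders the test cases for one test item and collects results."""
    def __init__(self, cases):
        self.cases = cases

    def run_all(self) -> dict:
        return {case.name: case.run() for case in self.cases}

if __name__ == "__main__":
    dmm = Instrument(channel="Dev1/ai0")
    sequence = TestSequence([TestCase("supply_3v3", dmm, 3.2, 3.4)])
    print(sequence.run_all())  # {'supply_3v3': True}
```

Only the lowest level touches hardware, which is what lets the upper two levels be reused across projects and test items, as the abstract describes.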
52

Cost effective tests for high speed I/O subsystems

Chun, Ji Hwan 01 February 2012
The growing demand for high performance systems in modern computing technology drives the development of advanced and high speed designs in I/O structures. Due to their data rate and architecture, however, testing of high speed serial interfaces becomes more expensive when using conventional test methods. In order to alleviate the test cost issue, a loopback test scheme has been widely adopted. To assess the margin of the signal eye in the loopback configuration, the eye margin is purposely reduced by additional devices on the loopback path or by using design for testability (DFT) features such as timing and voltage margining. Although the loopback test scheme successfully reduces the test cost by removing the dependency on external test equipment, it has robustness issues, such as fault masking and the non-ideality of the margining circuits. The focus of this dissertation is to propose new methods to resolve these known issues in the loopback test mode. The fault masking issue in a loopback pair of analog-to-digital and digital-to-analog converters (ADC and DAC), which can be found in pulse amplitude modulation (PAM) signaling schemes, is resolved using a proposed algorithm which separates the characteristics of the ADC and the DAC from a combined loopback response. The non-ideality problem of the margining circuits is resolved using a proposed method which utilizes a random jitter injection technique. Using the injected random jitter, the jitter distribution is sampled by undersampling and margining, which provides the nonlinearity information through the proposed algorithm. Since the proposed method requires a random jitter source on the load board, an alternative solution is proposed which uses an intrinsic jitter profile and a sliding window search algorithm to characterize the nonlinearities. The sliding window search algorithm was implemented on a low cost high volume manufacturing (HVM) tester to assess the feasibility and validity of the proposed technique. The proposed methods are compatible with the existing loopback test scheme and require minimal area and design overhead; hence they provide cost-effective ways to enhance the robustness of the loopback test scheme.
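As a rough illustration of sampling a jitter distribution by undersampling and margining, the toy sketch below sweeps a timing margin over synthetic, purely Gaussian injected jitter and records the failure rate at each setting. It is an assumption-laden caricature, not the dissertation's algorithm.

```python
# Toy model of sampling a jitter distribution by margining: synthetic data only,
# not the dissertation's method. Assumes purely Gaussian injected random jitter.
import random
import statistics

random.seed(0)
SIGMA_PS = 3.0          # std-dev of injected random jitter, in picoseconds
N_SAMPLES = 20000       # undersampled edge observations

# Simulated edge positions relative to their ideal location.
edges = [random.gauss(0.0, SIGMA_PS) for _ in range(N_SAMPLES)]

# Sweep the timing margin: the fraction of edges falling outside +/- margin
# plays the role of the measured failure rate at that margin setting.
for margin_ps in (1.0, 2.0, 3.0, 4.0, 5.0):
    failures = sum(1 for e in edges if abs(e) > margin_ps)
    print(f"margin {margin_ps:.1f} ps -> failure rate {failures / N_SAMPLES:.4f}")

# The failure-rate-vs-margin curve is one way to back out the jitter spread.
print("sample std-dev:", round(statistics.stdev(edges), 2), "ps")
```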
53

Middleware for online scientific data analytics at extreme scale

Zheng, Fang 22 May 2014
Scientific simulations running on High End Computing machines in domains like Fusion, Astrophysics, and Combustion now routinely generate terabytes of data in a single run, and these data volumes are only expected to increase. Since such massive simulation outputs are key to scientific discovery, the ability to rapidly store, move, analyze, and visualize data is critical to scientists' productivity. Yet there are already serious I/O bottlenecks on current supercomputers, and movement toward the Exascale is further accelerating this trend. This dissertation is concerned with the design, implementation, and evaluation of middleware-level solutions to enable high-performance and resource-efficient online data analytics to process massive simulation output data at large scales. Online data analytics can effectively overcome the I/O bottleneck for scientific applications at large scales by processing data as it moves through the I/O path. Online analytics can extract valuable insights from live simulation output in a timely manner, better prepare data for subsequent deep analysis and visualization, and gain improved performance and reduced data movement cost (both in time and in power) compared to the conventional post-processing paradigm. The thesis identifies the key challenges for online data analytics based on the needs of a variety of large-scale scientific applications, and proposes a set of novel and effective approaches to efficiently program, distribute, and schedule online data analytics along the critical I/O path. In particular, its solution approach i) provides a high performance data movement substrate to support parallel and complex data exchanges between simulation and online data analytics, ii) enables placement flexibility of analytics to exploit distributed resources, iii) for co-placement of analytics with simulation codes on the same nodes, uses fine-grained scheduling to harvest idle resources for running online analytics with minimal interference to the simulation, and finally, iv) supports scalable, efficient online spatial indices to accelerate data analytics and visualization on the deep memory hierarchies of high end machines. Our middleware approach is evaluated with leadership scientific applications in domains like Fusion, Combustion, and Molecular Dynamics, and on different High End Computing platforms. Substantial improvements are demonstrated in end-to-end application performance and in resource efficiency at scales of up to 16,384 cores, for a broad range of analytics and visualization codes. The outcome is a useful and effective software platform for online scientific data analytics facilitating large-scale scientific data exploration.
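Reduced to its essence, the online (in-transit) processing idea looks like a bounded producer/consumer along the I/O path. The single-process sketch below uses hypothetical names and is only a caricature of the middleware's architecture, not its actual interface.

```python
# Minimal single-process sketch of online (in-transit) analytics: the analytics
# thread reduces each output chunk as it passes along the "I/O path" instead of
# waiting for all timesteps to land on disk. Hypothetical names throughout.
import queue
import threading

io_path = queue.Queue(maxsize=4)   # bounded buffer standing in for the data plane

def simulation(steps: int):
    for t in range(steps):
        chunk = [float(t + i) for i in range(1000)]   # fake field data for step t
        io_path.put((t, chunk))                       # hand off along the I/O path
    io_path.put(None)                                 # end-of-run marker

def online_analytics():
    while True:
        item = io_path.get()
        if item is None:
            break
        t, chunk = item
        # Lightweight reduction done "in flight"; deep analysis could follow later.
        print(f"step {t}: min={min(chunk):.1f} max={max(chunk):.1f}")

analytics = threading.Thread(target=online_analytics)
analytics.start()
simulation(steps=3)
analytics.join()
```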
54

Improving Device Driver Reliability through Decoupled Dynamic Binary Analyses

Ruwase, Olatunji O. 01 May 2013
Device drivers are Operating Systems (OS) extensions that enable the use of I/O devices in computing systems. However, studies have identified drivers as an Achilles' heel of system reliability, their high fault rate accounting for a significant portion of system failures. Consequently, significant effort has been directed towards improving system robustness by protecting system components (e.g., OS kernel, I/O devices, etc.) from the harmful effects of driver faults. In contrast to prior techniques which focused on preventing unsafe driver interactions (e.g., with the OS kernel), my thesis is that checking a driver's execution for correctness violations results in the detection and mitigation of more faults. To validate this thesis, I present Guardrail, a flexible and powerful framework that enables instruction-grained dynamic analysis (e.g., data race detection) of unmodified kernel-mode driver binaries to safeguard I/O operations and devices from driver faults. Guardrail decouples the analysis tool from driver execution to improve performance, and runs it in user space to simplify the deployment of new tools. Moreover, Guardrail leverages virtualization to be transparent to both the driver and the device, and to enable support for arbitrary driver/device combinations. To demonstrate Guardrail's generality, I implemented three novel dynamic checking tools within the framework for detecting memory faults, data races, and DMA faults in drivers. These tools found 25 serious bugs, including previously unknown bugs, in Linux storage and network drivers. Some of the bugs existed in several Linux (and driver) releases, suggesting their elusiveness to existing approaches. Guardrail easily detected these bugs using common driver workloads. Finally, I present an evaluation of using Guardrail to protect network and storage I/O operations from memory faults, data races, and DMA faults in drivers. The results show that with hardware-assisted logging for decoupling the heavyweight analyses from driver execution, standard I/O workloads generally experienced negligible slowdown in their end-to-end performance. In conclusion, Guardrail's high-fidelity fault detection and efficient monitoring performance make it a promising approach for improving the resilience of computing systems to the wide variety of driver faults.
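A deliberately simplified picture of decoupled checking: driver execution produces an event log, and a separate user-space checker replays it afterwards. The event format and the DMA rule below are assumptions made for illustration only, not Guardrail's interface.

```python
# Simplified illustration of decoupled checking: a recorded event log is replayed
# by an offline checker. The event format and the DMA rule are assumptions made
# for this sketch and are not Guardrail's actual interface.

# Events captured during driver execution (normally via hardware-assisted logging).
event_log = [
    ("dma_map",   0x1000, 0x2000),   # driver maps the region [0x1000, 0x2000) for DMA
    ("dma_write", 0x1800, 64),       # device writes 64 bytes at 0x1800 -> inside a mapping
    ("dma_unmap", 0x1000, 0x2000),   # driver unmaps the region
    ("dma_write", 0x1810, 16),       # write after unmap -> DMA fault
]

def check_dma(log):
    """Replay the log and report device writes outside currently mapped regions."""
    mapped = set()                   # set of (start, end) regions mapped for DMA
    faults = []
    for kind, addr, arg in log:
        if kind == "dma_map":
            mapped.add((addr, arg))          # arg is the region end address
        elif kind == "dma_unmap":
            mapped.discard((addr, arg))
        elif kind == "dma_write":
            end = addr + arg                 # arg is the write length
            if not any(start <= addr and end <= stop for start, stop in mapped):
                faults.append((addr, arg))
    return faults

print("DMA faults:", [(hex(a), n) for a, n in check_dma(event_log)])
```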
55

On-demand Isolated I/O for Security-sensitive Applications on Commodity Platforms

Zhou, Zongwei 01 May 2014
Today large software systems (i.e., giants) thrive in commodity markets, but are untrustworthy due to their numerous and inevitable software bugs that can be exploited by the adversary. Thus, the best hope for security is that some small, simple, and trustworthy software components (i.e., wimps) can be protected from attacks launched by adversary-controlled giants. However, wimps in isolation typically give up a variety of basic services (e.g., file system, networking, device I/O), trading usefulness and viability for security. Among these basic services, isolated I/O channels have remained an unmet challenge over the past three decades. Isolated I/O is a critical security primitive for a myriad of applications (e.g., secure user interface, remote device control). With this primitive, isolated wimps can transfer I/O data to commodity peripheral devices while the data's secrecy and authenticity are protected from the co-existing giants. This thesis addresses this challenge by proposing a new security architecture that provides services to isolated wimps. Instead of restructuring the giants or bloating the Trusted Computing Base that underpins wimp-giant isolation (dubbed the underlying TCB), this thesis presents a unique on-demand I/O isolation model and a trusted add-on component called the wimpy kernel to instantiate this model. In our model, the wimpy kernel dynamically takes control of the devices managed by a commodity OS, connects them to the isolated wimps, and relinquishes control to the OS when done. This model creates ample opportunities for the wimpy kernel to outsource I/O subsystem functions to the untrusted OS and verify their results. The wimpy kernel further exports device drivers and I/O subsystem code to wimps and mediates the operations of the exported code. These two methodologies help to significantly reduce the size and complexity of the wimpy kernel for high security assurance. Using the popular and complex USB subsystem as a case study, this thesis illustrates the dramatic reduction of the wimpy kernel; i.e., over 99% of the Linux USB code base is removed. In addition, the wimpy kernel composes with the underlying TCB by retaining its code size, complexity, and security properties.
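The on-demand model can be caricatured as a short hand-off protocol: take the device from the OS, outsource and verify setup work, let the wimp perform its isolated transfer, and give the device back. The sketch below uses hypothetical names and is not the wimpy kernel's actual interface.

```python
# Caricature of the on-demand I/O isolation hand-off described above.
# All class and method names are hypothetical; this is not the actual interface.

class CommodityOS:
    def release_device(self, dev): print(f"OS: releasing {dev}")
    def reclaim_device(self, dev): print(f"OS: reclaiming {dev}")
    def enumerate_usb(self, dev):
        # Untrusted helper work outsourced to the OS; the result must be verified.
        return {"device": dev, "endpoints": [1, 2]}

class WimpyKernel:
    def __init__(self, os_):
        self.os = os_

    def isolated_session(self, dev, wimp_transfer):
        self.os.release_device(dev)                  # take control on demand
        config = self.os.enumerate_usb(dev)          # outsource setup, then verify it
        assert config["device"] == dev and config["endpoints"], "verification failed"
        wimp_transfer(config)                        # wimp performs isolated I/O
        self.os.reclaim_device(dev)                  # relinquish control when done

def wimp(config):
    print("wimp: isolated transfer over endpoints", config["endpoints"])

WimpyKernel(CommodityOS()).isolated_session("usb1", wimp)
```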
56

Electrostatic Discharge Protection Devices for CMOS I/O Ports

Li, Qing January 2012
In modern integrated circuits, electrostatic discharge (ESD) is a major problem that affects operational reliability, yield, and fabrication cost. ESD events can generate static voltages of several kilovolts. If these voltages are dissipated in the chip, the resulting high electric fields and currents will destroy the gate oxide or melt the metal interconnects. To protect the chip from such unexpected ESD events, special protection devices are designed and connected to each pin of the IC. With the scaling of nanometric process technologies, the ESD design window has become more critical, leaving little room for designers to maneuver. A good ESD protection device must have superior current-sinking ability while not affecting the normal operation of the IC. The two main categories of ESD devices are snapback and non-snapback devices. Non-snapback designs usually consist of forward-biased diode strings, with properties such as low heat and power dissipation and high current-carrying ability. Snapback devices use MOSFETs and silicon controlled rectifiers (SCRs) and exploit avalanche breakdown to conduct current. To investigate the properties of various devices, they need to be modeled in device simulators. That process begins with realizing a technology-specific NMOS and PMOS in the device simulator; the MOSFET process parameters are then exported to build ESD structures. By inserting the ESD devices into different simulation test benches, such as the human-body model or the charged-device model, their performance is evaluated through a series of figures of merit, including peak current, voltage overshoot, capacitance, latch-up immunity, and current dissipation time. A successful design can sink a large amount of current within an extremely short duration while exhibiting low voltage overshoot and capacitance. In this work, an interweaving diode and SCR hybrid device is shown to be effective against tight ESD test standards.
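For a sense of scale, the human-body model mentioned above is commonly represented as a 100 pF capacitor discharging through a 1.5 kΩ resistor, so a back-of-the-envelope estimate for a 2 kV event (ignoring the protection device's own response) is:

```latex
I_{\text{peak}} \approx \frac{V_{\text{HBM}}}{R_{\text{HBM}}} = \frac{2\,\text{kV}}{1.5\,\text{k}\Omega} \approx 1.3\,\text{A},
\qquad
\tau = R_{\text{HBM}} C_{\text{HBM}} = 1.5\,\text{k}\Omega \times 100\,\text{pF} = 150\,\text{ns}.
```

This is why a protection device must sink on the order of an ampere within roughly a hundred nanoseconds while keeping the voltage overshoot low.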
57

A Scalable Architecture for Simplifying Full-Range Scientific Data Analysis

Kendall, Wesley James 01 December 2011
According to a recent exascale roadmap report, analysis will be the limiting factor in gaining insight from exascale data. Analysis problems that must operate on the full range of a dataset are among the most difficult. Some of the primary challenges in this regard come from disk access, data management, and programmability of analysis tasks on exascale architectures. In this dissertation, I have provided an architectural approach that simplifies and scales data analysis on supercomputing architectures while masking parallel intricacies from the user. My architecture has three primary general contributions: 1) a novel design pattern and implementation for reading multi-file and variable datasets, 2) the integration of querying and sorting as a way to simplify data-parallel analysis tasks, and 3) a new parallel programming model and system for efficiently scaling domain-traversal tasks. The design of my architecture has allowed studies in several application areas that were not previously possible. Some of these include large-scale satellite data and ocean flow analysis. The major driving example is internal-model variability assessment of flow behavior in the GEOS-5 atmospheric modeling dataset. This application issued over 40 million particle traces for model comparison (the largest parallel flow tracing experiment to date), and my system was able to scale execution up to 65,536 processes on an IBM BlueGene/P system.
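The "query then sort" simplification can be illustrated with a toy serial example. The dataset, variable, and threshold below are made up, and in the actual system these steps run data-parallel across many processes.

```python
# Toy serial illustration of the "query, then sort" pattern for full-range
# analysis: the dataset, variable name, and threshold are all made up.
import random

random.seed(1)

# Pretend dataset: several file "blocks", each holding (cell_id, temperature) pairs.
blocks = [[(b * 1000 + i, random.uniform(250.0, 400.0)) for i in range(1000)]
          for b in range(4)]

# Query step: filter every block against a predicate (here: hot cells).
THRESHOLD = 390.0
hot = [(cell, temp) for block in blocks for cell, temp in block if temp > THRESHOLD]

# Sort step: a global ordering over the query result makes the downstream
# analysis (top-k, binning, neighbourhood lookups) a simple linear pass.
hot.sort(key=lambda item: item[1], reverse=True)

print(f"{len(hot)} cells above {THRESHOLD} K; hottest five:")
for cell, temp in hot[:5]:
    print(f"  cell {cell}: {temp:.1f} K")
```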
58

New abstractions and mechanisms for virtualizing future many-core systems

Kumar, Sanjay 08 July 2008
To abstract physical into virtual computing infrastructures is a longstanding goal. Efforts in the computing industry started with early work on virtual machines in IBM's VM370 operating system and architecture, continued with extensive developments in distributed systems in the context of grid computing, and now involve investments by key hardware and software vendors to efficiently virtualize common hardware platforms. Recent efforts in virtualization technology are driven by two facts: (i) technology push -- new hardware support for virtualization in multi- and many-core hardware platforms and in the interconnects and networks used to connect them, and (ii) technology pull -- the need to efficiently manage large-scale data-centers used for utility computing and, extending from there, to also manage more loosely coupled virtual execution environments like those used in cloud computing. Concerning (i), platform virtualization is proving to be an effective way to partition and then efficiently use the ever-increasing number of cores in many-core chips. Further, I/O virtualization enables I/O device sharing with increased device throughput, providing required I/O functionality to the many virtual machines (VMs) sharing a single platform. Concerning (ii), through server consolidation and VM migration, for instance, virtualization increases the flexibility of modern enterprise systems and creates opportunities for improvements in operational efficiency, power consumption, and the ability to meet time-varying application needs. This thesis contributes (i) new technologies that further increase system flexibility, by addressing some key problems of existing virtualization infrastructures, and (ii) it then directly addresses the issue of how to exploit the resulting increased levels of flexibility to improve data-center operations, e.g., power management, by providing lightweight, efficient management technologies and techniques that operate across the range from individual many-core platforms to data-center systems. Concerning (i), the thesis contributes, for large many-core systems, insights into how to better structure virtual machine monitors (VMMs) to provide more efficient utilization of cores, by implementing and evaluating the novel Sidecore approach that permits VMMs to exploit the computational power of parallel cores to improve overall VMM and I/O performance. Further, I/O virtualization still lacks the ability to provide complete transparency between virtual and physical devices, thereby limiting VM mobility and flexibility in accessing devices. In response, this thesis defines and implements the novel Netchannel abstraction that provides complete location transparency between virtual and physical I/O devices, thereby decoupling device access from device location and enabling live VM migration and device hot-swapping. Concerning (ii), the vManage set of abstractions, mechanisms, and methods developed in this work is shown to substantially improve system manageability, by providing a lightweight, system-level architecture for implementing and running the management applications required in data-center and cloud computing environments. vManage simplifies management by making it possible and easier to coordinate the management actions taken by the many management applications and subsystems present in data-center and cloud computing systems. Experimental evaluations of the Sidecore approach to VMM structure, of Netchannel, and of vManage are conducted on representative platforms and server systems, with consequent improvements in flexibility, in I/O performance, and in management efficiency, including power management.
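The Sidecore idea (guest cores posting VMM work to a queue drained by a dedicated core, rather than each core trapping synchronously into the VMM) can be sketched as a single-process analogue. Request names below are hypothetical, and this is not the actual implementation.

```python
# Rough single-process analogue of the Sidecore idea: guest vCPUs post VMM work
# into a queue that a dedicated "sidecore" thread drains, instead of each vCPU
# trapping into the VMM itself. Request names are hypothetical.
import queue
import threading

requests = queue.Queue()

def sidecore():
    """Dedicated core draining VMM/I/O requests posted by guest vCPUs."""
    while True:
        req = requests.get()
        if req is None:
            break
        vcpu, kind, payload = req
        print(f"sidecore: handled {kind}({payload}) for vCPU {vcpu}")

def guest_vcpu(vcpu_id: int):
    # Instead of a synchronous trap, post the privileged operation and continue.
    requests.put((vcpu_id, "page_table_update", hex(0x1000 * vcpu_id)))
    requests.put((vcpu_id, "net_tx", f"pkt-from-{vcpu_id}"))

worker = threading.Thread(target=sidecore)
worker.start()
for vid in range(2):
    guest_vcpu(vid)
requests.put(None)       # shut the sidecore down once all vCPUs are done
worker.join()
```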
59

Simulation software for automation industry : Factory I/O and KUKASim software

Poblacion Salvatierra, Itxaso January 2018
This thesis has two aims, both related to simulation programs. The first is to analyze the viability of using Factory I/O as a tool at the University of Gävle for teaching and understanding PLCs and ladder programming. The second is to build a 3D model of the robotics laboratory for use with KUKASim and, after that, to find a method for transferring code from KUKASim to the actual robots. Factory I/O was installed and used with a Siemens PLC, which was programmed in Siemens TIA Portal. The evaluation of the software as a teaching tool is based on an estimate of how long it would take an average bachelor's student in automation to create a functional project. To determine this, a demo was built in which a box enters on a conveyor; there are two routes for the box to exit, straight or to the left, and the direction is chosen by a switch on the PLC. After finishing the demo, it was estimated that completing a functional project would take around 4 hours. For the KUKASim part, KUKASim was already installed; however, during the project it was updated from 2.2 to 3.0.4, which caused a minor issue: the SketchUp model could not be imported into the 2.2 version, and by the time the upgrade was made, the 3D environment of the robotics laboratory had already been built in KUKASim. The Office Lite software was needed to transfer code from KUKASim to the real robots, but due to license issues its installation was only completed at the end of the project, so connecting the two programs was not possible. During the time Office Lite was unavailable, an alternative method of transferring the code was found: the program files were downloaded from KUKASim and transferred to the robot with WorkVisual. The conclusion of the thesis is that Factory I/O could be used as a learning and teaching tool because it is an easy program to work with. KUKASim, for its part, is a multifunctional piece of software, which made it possible to achieve both purposes of the corresponding part of the project.
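The demo's routing logic amounts to a couple of ladder rungs. A rough Python analogue of that decision, evaluated once per PLC scan cycle, is shown below; the tag names are made up, and the real program was written in Siemens TIA Portal.

```python
# Rough Python analogue of the demo's routing decision: one evaluation per PLC
# scan cycle. Tag names are made up; the real program was written in TIA Portal.

def scan_cycle(box_at_entry: bool, route_switch_left: bool) -> dict:
    """Return the actuator states for one scan, given the current inputs."""
    return {
        "entry_conveyor": box_at_entry,                       # run while a box is present
        "diverter_left": box_at_entry and route_switch_left,  # push the box to the left exit
        "straight_conveyor": box_at_entry and not route_switch_left,
    }

# Box present, operator switch set to "left": box is diverted to the left exit.
print(scan_cycle(box_at_entry=True, route_switch_left=True))
# Box present, switch off: box continues straight.
print(scan_cycle(box_at_entry=True, route_switch_left=False))
```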
60

Transversal I/O scheduling for parallel file systems : from applications to devices / Escalonamento de E/S transversal para sistemas de arquivos paralelos : das aplicações aos dispositivos

Boito, Francieli Zanon January 2015
This thesis focuses on I/O scheduling as a tool to improve I/O performance on parallel file systems by alleviating interference effects. It is usual for High Performance Computing (HPC) systems to provide a shared storage infrastructure for applications. In this situation, when multiple applications are concurrently accessing the shared parallel file system, their accesses will affect each other, compromising I/O optimization techniques' efficacy. We have conducted an extensive performance evaluation of five scheduling algorithms at a parallel file system's data servers. Experiments were executed on different platforms and under different access patterns. Results indicate that schedulers' results are affected by applications' access patterns, since it is important for the performance improvement obtained through a scheduling algorithm to surpass its overhead. At the same time, schedulers' results are affected by the underlying I/O system characteristics - especially by storage devices. Different devices present different levels of sensitivity to accesses' sequentiality and size, impacting how much performance is improved through I/O scheduling. For these reasons, this thesis' main objective is to provide I/O scheduling with double adaptivity: to applications and devices. We obtain information about applications' access patterns through trace files, obtained from previous executions. We have applied machine learning to build a classifier capable of identifying access patterns' spatiality and request size aspects from streams of previous requests. Furthermore, we proposed an approach to efficiently obtain the sequential-to-random throughput ratio metric for storage devices by running benchmarks for a subset of the parameters and estimating the remaining ones through linear regressions. We use this information on applications' and storage devices' characteristics to decide the best fit in scheduling algorithm through a decision tree. Our approach improves performance by up to 75% over an approach that uses the same scheduling algorithm in all situations, without adaptability. Moreover, our approach improves performance in up to 64% more situations, and decreases performance in up to 89% fewer situations. Our results evidence that both aspects - applications and storage devices - are essential for making good scheduling choices. Moreover, despite the fact that there is no scheduling algorithm able to provide performance gains in all situations, we show that through double adaptivity it is possible to apply I/O scheduling techniques to improve performance while avoiding situations where they would lead to performance impairment.
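The double-adaptivity decision can be pictured as a small rule over the two kinds of information described above. The sketch below is an invented example of such a rule; the thresholds, category names, and scheduler choices are assumptions, not the thesis' learned decision tree.

```python
# Invented example of a scheduler-selection rule combining application and device
# information, in the spirit of the double-adaptivity approach described above.
# Thresholds and choices are assumptions, not the thesis' learned decision tree.

def choose_scheduler(access_pattern: str, request_size: str, seq_to_rand_ratio: float) -> str:
    """
    access_pattern:     'contiguous' or 'non-contiguous' (from the trace classifier)
    request_size:       'small' or 'large'               (from the trace classifier)
    seq_to_rand_ratio:  sequential/random throughput ratio of the storage device
                        (estimated from a few benchmarks plus linear regression)
    """
    if seq_to_rand_ratio < 1.5:
        # Device barely rewards sequentiality (e.g. some SSDs): reordering
        # requests is unlikely to pay for its overhead.
        return "no-scheduling"
    if access_pattern == "non-contiguous" and request_size == "small":
        # Many small scattered requests: aggregation-oriented scheduling helps most.
        return "aggregation-based"
    return "timeorder-based"

print(choose_scheduler("non-contiguous", "small", seq_to_rand_ratio=4.2))
print(choose_scheduler("contiguous", "large", seq_to_rand_ratio=1.1))
```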
