About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A hardware proposal for simultaneous multichannel acquisition and its application to sound source localization

Ferreira, M. L. C. January 2015 (has links) (PDF)
Dissertation (Master's in Electrical Engineering) - Centro Universitário da FEI, São Bernardo do Campo, 2015
2

An Attack and a Defence in the Context of Hardware Security

Imeson, Frank 16 May 2013 (has links)
The security of digital Integrated Circuits (ICs) is essential to the security of any computer system that comprises them. We present an improved attack on computer hardware that evades known defence mechanisms and thereby raises awareness of the need for new and improved ones. We also present a new defence method for securing computer hardware against modifications from untrusted manufacturing facilities, a growing concern as manufacturing is increasingly outsourced.

We improve upon timer-based backdoors maliciously inserted in hardware. Prior work has addressed deterministic timer-based triggers (those designed to trigger at a specific time with probability 1). We address open questions related to the feasibility of realizing non-deterministic timer-based triggers in hardware (those designed with a random component). We show that such timers can be realized in hardware in a manner that is impractical to detect or disable using existing countermeasures of which we are aware, and we discuss our design, implementation and analysis of such a timer. The attacker can have surprisingly fine-grained control over the time-window within which the timer triggers. From the attacker's standpoint, our non-deterministic timer has key advantages over traditional timer designs: its hardware footprint is smaller, which increases the chances of avoiding detection, and it needs to maintain volatile state over a much smaller time-window, which makes power-reset defence mechanisms less effective.

Our proposed defence mechanism addresses the threat of a malicious agent at the IC foundry who has knowledge of the circuit and inserts covert, malicious circuitry. The use of 3D IC technology has been suggested as a possible countermeasure, but to our knowledge there is no prior work on how such technology can be used effectively. We propose a way to use 3D IC technology for security in this context: we obfuscate the circuit by lifting wires to a trusted tier, which is fabricated separately. We provide a precise notion of security that we call k-security and point out that it has interesting similarities to, and important differences from, k-anonymity. We also give a precise specification of the underlying computational problems and their complexity, and discuss a comprehensive empirical assessment with benchmark circuits that highlights the security versus cost trade-offs introduced by 3D IC based circuit obfuscation.
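To make the non-deterministic trigger idea concrete, here is a minimal Python sketch of one way such a timer could behave: a counter that advances only when a random bit is set, so the firing time is spread over a window rather than fixed. The counter model and all parameters are illustrative assumptions, not the actual design described in the thesis.

```python
import random

def trigger_time(n_stages: int, p_advance: float) -> int:
    """Simulate one run of a probabilistic counter trigger.

    Each clock cycle the counter advances with probability
    p_advance (modelling a random bit source); the backdoor
    fires once n_stages advances have accumulated.
    """
    count, cycle = 0, 0
    while count < n_stages:
        cycle += 1
        if random.random() < p_advance:
            count += 1
    return cycle

# The firing time follows a negative binomial distribution with
# mean n_stages / p_advance; its relative spread shrinks as
# n_stages grows, which is how an attacker could keep the trigger
# window surprisingly tight despite the randomness.
runs = sorted(trigger_time(1000, 0.25) for _ in range(500))
print(runs[0], runs[len(runs) // 2], runs[-1])  # min / median / max
```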
3

A Graphical Approach to Testing Real-Time Embedded Devices

Day, Steven M 01 June 2009 (has links)
Software testing is both a vital and expensive part of the software development lifecycle, so improving the testing process has the potential for large returns. Current methodologies used to test real-time embedded devices are examined and their weaknesses exposed. This leads to the introduction of a new graphical testing methodology based on flowcharts. The new approach pairs a visual test-creation program with an automated execution engine, which together frame a new way of testing. The methodology incorporates flow-based diagrams, visual layouts, and simple execution rules to improve upon traditional testing approaches. It is evaluated against other methodologies and shown to provide significant improvements in the area of software testing.
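As a rough illustration of what a flowchart-driven test engine might look like (a hypothetical sketch, not the thesis's actual tool), each node wraps one test action and names its pass/fail successors, and the engine simply walks the graph:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Node:
    action: Callable[[], bool]     # test step; returns pass/fail
    on_pass: Optional[str] = None  # next node id if the step passes
    on_fail: Optional[str] = None  # next node id if the step fails

def run_flowchart(nodes: Dict[str, Node], start: str) -> bool:
    """Walk the flowchart from `start`; the branch taken at each
    node depends on whether its action passed or failed."""
    current = start
    while True:
        node = nodes[current]
        ok = node.action()
        nxt = node.on_pass if ok else node.on_fail
        if nxt is None:
            return ok          # terminal node: overall verdict
        current = nxt

# Hypothetical usage: reset the device, then probe a status check.
flow = {
    "reset": Node(action=lambda: True, on_pass="check"),
    "check": Node(action=lambda: True),  # stands in for a real probe
}
print(run_flowchart(flow, "reset"))  # True
```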
4

RISCV Whisk: Unleashing the Power of Software Fuzzing on Hardware

Singh, Nandita 30 June 2023 (has links)
In the hardware industry, fabricating a chip with hardware bugs is a critical concern because the process is permanent and irreversible. Detecting bugs in intricate designs, such as central processing units (CPUs), is a highly challenging and labor-intensive task that leaves little margin for error. Modern CPU verification typically blends simulation, formal, and emulation techniques to guarantee the accuracy of the design. Although these methods succeed in identifying many types of design flaws, they still have limitations; the biggest is achieving comprehensive coverage of all conceivable scenarios and exceptional cases that may interrupt a core and put it in a halt state. We present a design-agnostic, three-stage methodology for the verification of a multi-core 32-bit RISC-V processor. This methodology leverages software fuzzing, using state-of-the-art tools to analyze the CPU's design after converting it into an equivalent software model. Our approach to hardware fuzzing uses a sparse memory matrix as external memory to hold the inputs and state of the core encountered during the fuzzing process. This has significantly increased the efficiency of our fuzzing process, yielding a 609x improvement in fuzzing rate compared to prevalent hardware fuzzing techniques. To further optimize the process, we precisely constrained the fuzzer's inputs to provide only valid test scenarios, eliminating the fuzzer's crash overhead; doing so improved the accuracy of our testing results and reduced the time and resources required to analyze potential vulnerabilities. Our verification techniques are implemented using open-source tools, making our fast and cost-effective process accessible to a wide range of hardware engineers and security professionals. By leveraging the benefits of sparse memory and precise input constraints, our approach to hardware fuzzing offers a powerful and efficient tool for identifying potential hardware vulnerabilities and defects.

/ Master of Science /

In the world of technology, computer chips play a crucial role in almost everything we do. These chips are designed to perform specific tasks and are used in a variety of devices, such as smartphones, computers, and gaming consoles. It is crucial that these chips are free of bugs, because even a small flaw can lead to disastrous consequences such as system crashes, data loss, or security breaches. However, testing computer chips for bugs is a challenging and labor-intensive task, especially for complex designs like CPUs, which carry out a wide range of operations and are made up of many intricate components, making it difficult to identify and fix issues during testing. To overcome these challenges, engineers and researchers have developed various testing methods, including simulation, formal verification, and emulation verification. These methods are effective in identifying most types of design flaws, but they may not cover every conceivable scenario or exceptional case that could cause a CPU to malfunction. To address these limitations, we have developed a new testing method that leverages software fuzzing: a technique in which millions of random inputs are given to a program to expose unexpected behavior in the design. We use state-of-the-art tools to analyze the CPU's design after converting it into an equivalent software model; this approach is called hardware fuzzing. We use a special memory system called a sparse memory matrix to hold the inputs and state of the CPU during testing, which increases the efficiency of the fuzzing process by 609x compared to other hardware fuzzing techniques, meaning we can test the CPU much faster and more accurately than before. To further optimize the process, we constrained the fuzzer's inputs to include only valid test scenarios, which eliminated the fuzzer's crash overhead, improved the accuracy of the testing results, and reduced the time and resources required to analyze potential vulnerabilities. The method is implemented using open-source tools, so anyone can use it to test their CPU designs quickly and cost-effectively.
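The sparse memory idea is straightforward to picture in software. A minimal sketch, assuming a simple dict-backed byte store (the thesis's actual matrix layout may differ): only addresses the core actually touches consume space, so a full 32-bit address space can back the fuzzer without allocating 4 GiB up front.

```python
class SparseMemory:
    """Byte-addressable memory backed by a dict: untouched addresses
    cost nothing and read back as a default fill value."""

    def __init__(self, fill: int = 0) -> None:
        self._cells: dict[int, int] = {}
        self._fill = fill

    def read(self, addr: int) -> int:
        return self._cells.get(addr, self._fill)

    def write(self, addr: int, value: int) -> None:
        self._cells[addr] = value & 0xFF  # keep one byte per cell

    def word(self, addr: int) -> int:
        """Little-endian 32-bit read, as a RISC-V core would issue."""
        return sum(self.read(addr + i) << (8 * i) for i in range(4))

# Hypothetical usage while replaying one fuzzer-generated input:
mem = SparseMemory()
mem.write(0x8000_0000, 0x93)       # first byte of an instruction
print(hex(mem.word(0x8000_0000)))  # 0x93
```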
5

Systematic Analysis and Methodologies for Hardware Security

Moein, Samer 18 December 2015 (has links)
With the increase in globalization of Integrated Circuit (IC) design and production, hardware trojans have become a serious threat to manufacturers as well as consumers. These trojans can be intentionally or accidentally embedded in ICs, making a system vulnerable to hardware attacks, and the use of ICs in critical applications makes their effect an even more serious problem. Moreover, the presence of untrusted foundries and designs cannot be eliminated, since the need for ICs is growing exponentially and the use of third-party software tools to design circuits is now common; in addition, developing a trusted foundry for fabrication involves a huge investment. Hardware trojan detection techniques are therefore essential.

Very Large Scale Integration (VLSI) system designers must now consider the security of a system against internal and external hardware attacks. Many hardware attacks rely on system vulnerabilities, and an attacker may rely on deprocessing and reverse engineering to study the internal structure of a system and reveal its functionality in order to steal secret keys or copy the system. Hardware security is thus a major challenge for the hardware industry. Many hardware attack mitigation techniques have been proposed: some help system designers build secure systems that can resist hardware attacks during the design stage, while others protect the system against attacks during operation.

In this dissertation, the idea of quantifying hardware attacks, hardware trojans, and hardware trojan detection techniques is introduced. We analyze and classify hardware attacks into risk levels based on three dimensions: Accessibility, Resources, and Time (ART). We propose a methodology and algorithms to aid the attacker/defender in selecting/predicting the hardware attacks that could be used against, or threaten, a system based on the attacker/defender capabilities. Because many of these attacks depend on hardware trojans embedded in the system, we propose a comprehensive hardware trojan classification based on trojan attributes divided into eight categories. An adjacency matrix is generated from the internal relationships between attributes within a category and the external relationships between attributes in different categories. We propose a methodology to generate a trojan life-cycle from attributes determined by an attacker/defender to build/investigate a trojan. Trojan identification and severity are studied to provide a systematic way to compare trojans, and trojan detection identification and coverage are studied to provide a systematic way to compare detection techniques and measure their effectiveness relative to trojan severity. Finally, we classify hardware attack mitigation techniques based on the hardware attack risk levels and match these techniques to the attacks they can counter, to help defenders select appropriate techniques to protect their systems against potential hardware attacks.
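To illustrate the adjacency-matrix idea with a toy example (the attributes and relationship values below are invented for illustration; the dissertation defines eight categories with many more attributes), a candidate trojan life-cycle is a set of attributes, and it is plausible only if every pair of chosen attributes is marked as related in the matrix:

```python
import itertools

# Hypothetical attributes drawn from different categories.
attrs = ["insertion:fabrication", "abstraction:gate-level",
         "trigger:time-based", "payload:leak-information"]

# R[i][j] = 1 if attribute j is compatible with attribute i.
# Values are invented purely for illustration.
R = [
    [0, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
]

def plausible(combo: tuple) -> bool:
    """A combination of attributes forms a plausible trojan only if
    every pair of its attributes is related in the matrix."""
    return all(R[i][j] for i, j in itertools.combinations(combo, 2))

# Enumerate 3-attribute candidates an attacker/defender might examine.
for combo in itertools.combinations(range(len(attrs)), 3):
    if plausible(combo):
        print([attrs[i] for i in combo])
```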
6

tithu

udeuid, aiaqja 29 May 2017 (has links)
7

campus test

udeuid, aiaqja 31 May 2017 (has links)
8

Load balancing strategies for distributed computer systems

Butt, Wajeeh U. N. January 1993 (has links)
This study investigates various load balancing strategies to improve the performance of distributed computer systems. A static task allocation scheme and a number of dynamic load balancing algorithms are proposed, and their performance is evaluated through simulations. First, for static load balancing, a precedence-constrained scheduling heuristic is defined to effectively allocate task systems with high communication-to-computation ratios onto a given set of processors. Second, the dynamic load balancing algorithms are studied using a queueing-theoretic model. For each algorithm, a different load index is used to estimate host loads; these estimates feed simple task placement heuristics that determine the probabilities for transferring tasks between each pair of hosts in the system, and the resulting probabilities are used to perform dynamic load balancing in a distributed computer system. Later, these probabilities are adjusted to include the effects of inter-host communication costs. Finally, network partitioning strategies are proposed to reduce the communication overhead of load balancing algorithms in a large distributed system environment. Several host-grouping strategies are suggested to improve the performance of load balancing algorithms by limiting the exchange of load-information messages to smaller groups of hosts while restricting the transfer of tasks to distant remote hosts, which involves high communication costs. The effectiveness of the above algorithms is evaluated by simulations, and the model developed for these simulations can be used in both static and dynamic load balancing environments.
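A hedged sketch of the kind of probability computation the abstract describes (the thesis's actual load indices and cost-adjustment formulas are more detailed; this heuristic is illustrative): an overloaded host spreads its excess toward less-loaded hosts, with each candidate discounted by the communication cost of reaching it.

```python
def transfer_probabilities(loads, src, comm_cost, alpha=1.0):
    """Probability of sending a task from host `src` to each other
    host, proportional to the load gap and discounted by the
    communication cost (alpha tunes how strongly cost matters)."""
    weights = {}
    for dst, load in enumerate(loads):
        if dst == src:
            continue
        gap = max(loads[src] - load, 0.0)   # only offload downhill
        weights[dst] = gap / (1.0 + alpha * comm_cost[src][dst])
    total = sum(weights.values())
    if total == 0:                          # src not overloaded: keep task
        return {dst: 0.0 for dst in weights}
    return {dst: w / total for dst, w in weights.items()}

# Hypothetical three-host system: host 0 is busy, host 2 is far away.
loads = [0.9, 0.3, 0.2]
cost = [[0, 1, 5],
        [1, 0, 5],
        [5, 5, 0]]
print(transfer_probabilities(loads, src=0, comm_cost=cost))
# Nearby host 1 receives most of the transfer probability even
# though host 2 is slightly less loaded, because host 2 costs more.
```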
9

Design techniques for enhancing the performance of frame buffer systems

Makris, Alexander January 1997 (has links)
The 2D and 3D graphics support for PCs and workstations is becoming a very challenging field. The need to continuously support real-time image generation at higher frame rates and resolutions implies that all levels of the graphics generation process must continuously improve. New hardware algorithms need to be devised and existing ones optimised for better performance; these algorithms must exploit parallelism in every possible way, and new hardware architectures and memory configurations must accompany them to support this parallelism. This thesis focuses on new hardware techniques, of both architectural and algorithmic nature, to accelerate the 2D and 3D graphics performance of computer systems. Some of these techniques operate at the frame buffer access level, where images are stored in video memory and then displayed on the screen; others operate at the rasterisation level, where basic primitives such as lines, triangles, and polygons are drawn. Novel rasterisation algorithms are invented and compared with traditional ones in terms of hardware complexity and performance, and their basic models have been implemented in VHDL and in other software languages. New frame buffer architectures are introduced and analysed that can significantly improve the overall performance of a graphics system and are compatible with a number of graphics systems in terms of their requirements. During the development of this thesis, special consideration was given to the hardware implementation (e.g. at the VHDL register-transfer level) of the described architectures and algorithms. Both the software and hardware models and their test environments were implemented in a way that maximises the accuracy of the results, to ensure that an actual hardware implementation would be possible and would produce the same results without any surprises.
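For context on the traditional rasterisation algorithms such comparisons start from, the classic Bresenham line algorithm is the usual baseline: it tracks an integer error term only, which is what makes it cheap to implement in hardware. A standard sketch follows (the thesis's novel algorithms are not reproduced here):

```python
def bresenham(x0: int, y0: int, x1: int, y1: int):
    """Classic Bresenham line rasterisation covering all octants,
    using only integer additions and comparisons per pixel."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                      # combined error term
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:                   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:                   # step in y
            err += dx
            y0 += sy
    return pixels

print(bresenham(0, 0, 6, 3))
# [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3), (6, 3)]
```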
10

A formal approach to hardware analysis

Traub, Niklas Gerard January 1986 (has links)
No description available.
