71

Duck Hunt FPGA game, a project on UML and digital design

Nguyen, Vuong D. 15 September 2016
Field-Programmable Gate Arrays (FPGAs) are rarely associated with video games. Software video games can be built using the Unified Modeling Language (UML) and high-level languages such as the Extensible Hypertext Markup Language (XHTML), Cascading Style Sheets (CSS), JavaScript, and jQuery; FPGA video games, by contrast, require building complex hardware. The goal of this project is to create an FPGA video game by combining UML with digital design.

Starting at the hardware level has advantages, such as greater control, which gives more freedom in writing design and functional specifications; the disadvantages include having to write device drivers. Using the Rational Unified Process (RUP) as the development process, a Duck Hunt FPGA game is created that demonstrates how FPGA game development differs from software video game development.
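Purely as an illustration of the UML-to-hardware workflow the abstract describes (the project's actual state machine is not given here, so every state and event name below is hypothetical), game logic like Duck Hunt's is often captured first as a UML state diagram; a minimal Python sketch of such a machine, which would later map to next-state logic in a hardware FSM:

```python
# Hypothetical sketch: a UML-style state machine for a Duck Hunt-like game.
# States and events are invented for illustration; none come from the thesis.

IDLE, FLYING, HIT, MISSED = "IDLE", "FLYING", "HIT", "MISSED"

# Transition table: (current_state, event) -> next_state.
# In hardware this would become the next-state logic of a Verilog FSM.
TRANSITIONS = {
    (IDLE,   "spawn"):     FLYING,
    (FLYING, "shot_hit"):  HIT,
    (FLYING, "shot_miss"): FLYING,
    (FLYING, "timeout"):   MISSED,
    (HIT,    "reset"):     IDLE,
    (MISSED, "reset"):     IDLE,
}

def step(state, event):
    """Advance the game FSM by one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = IDLE
for event in ["spawn", "shot_miss", "shot_hit", "reset"]:
    state = step(state, event)
    print(event, "->", state)
```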
72

Comparative analysis of load balancing algorithms in cloud computing

Tomar, Mohit 07 April 2017
Cloud computing is an emerging trend in Information Technology (IT) environments with immense infrastructure and resources, and load balancing is an integral aspect of it: efficient load balancing ensures effective resource utilization. There are two types of load balancers, static and dynamic, and although both are widely used in industry, they differ in performance. In this project, the performance of the most widely used static and dynamic load balancers, round robin and throttled respectively, is compared. Specifically, the project examines whether the throttled algorithm takes less time than the round robin algorithm to access data in cloud computing. The results show that the throttled algorithm does take less time, and that the difference stems from a flaw in the implementation of the round robin algorithm.
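To make the contrast concrete, the following is a minimal sketch of the two policies (illustrative only; the thesis evaluated them in a cloud simulator, and the VM names, loads, and threshold below are invented): round robin hands out requests in fixed rotation regardless of load, while throttled assigns a request only to a VM currently below its threshold.

```python
# Illustrative sketch of the two allocation policies; VM counts and
# thresholds are made up, not taken from the thesis's experiments.
from itertools import cycle

class RoundRobin:
    """Static policy: hand requests to VMs in fixed rotation, ignoring load."""
    def __init__(self, vms):
        self._ring = cycle(vms)
    def pick(self, load):
        return next(self._ring)

class Throttled:
    """Dynamic policy: pick the first VM whose active requests are under a cap."""
    def __init__(self, vms, cap=3):
        self.vms, self.cap = vms, cap
    def pick(self, load):
        for vm in self.vms:
            if load[vm] < self.cap:
                return vm
        return None  # all VMs saturated: the request must queue

vms = ["vm0", "vm1", "vm2"]
load = {"vm0": 3, "vm1": 1, "vm2": 0}   # current active requests per VM
print(RoundRobin(vms).pick(load))        # vm0, even though it is busiest
print(Throttled(vms).pick(load))         # vm1, the first VM under the cap
```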
73

Vehicle license plate detection and recognition

Ning, Guanghan 27 September 2016
In this work, we develop a license plate detection method using an SVM (Support Vector Machine) classifier with HOG (Histogram of Oriented Gradients) features. The system performs window searching at different scales, analyzes the HOG features with an SVM, and locates bounding boxes using a mean shift method. Edge information is used to accelerate the time-consuming scanning process.

Our detection results show that this method is relatively insensitive to variations in illumination, license plate patterns, camera perspective, and background. We tested the method on 200 real-life images captured on Chinese highways under different weather and lighting conditions, and achieved a detection rate of 100%.

After detection, alignment is performed on the plate candidates. Conceptually, this alignment method searches the neighborhood of the detected bounding box for the optimum edge position, where the regions outside the plate differ most, in RGB color, from the regions inside it. This accurately aligns the bounding box to the edges of the plate so that the subsequent segmentation and recognition can be performed accurately and reliably.

The system performs license plate segmentation using global alignment on the binarized plate. A global model based on the layout of license plates is proposed to segment them; the model searches for the optimum positions at which every character is separated without being chopped into pieces. Finally, the characters are recognized by another SVM classifier with a feature size of 576, comprising raw features together with vertical and horizontal scanning features.

Our character recognition results show that 99% of digits are recognized successfully, while letters achieve a recognition rate of 95%.

The license plate recognition system was then incorporated into an embedded system for parallel computing; several TS7250 boards and an auxiliary board are used to simulate the process of vehicle retrieval.
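As a rough sketch of the detection stage, under assumed window size, stride, and training data (none of which come from the thesis), the HOG-plus-SVM sliding-window idea can be written with scikit-image and scikit-learn:

```python
# Hedged sketch of HOG + linear-SVM window scanning; the window shape,
# step, and threshold are placeholders, not the thesis's actual values.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN_H, WIN_W, STEP = 32, 96, 8  # assumed plate-shaped window and stride

def hog_vec(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train(pos_patches, neg_patches):
    """Fit a linear SVM on labeled grayscale patches (plates vs. background)."""
    X = [hog_vec(p) for p in pos_patches] + [hog_vec(p) for p in neg_patches]
    y = [1] * len(pos_patches) + [0] * len(neg_patches)
    return LinearSVC().fit(X, y)

def detect(image, clf, thresh=0.5):
    """Scan one scale of a grayscale image; mean shift over the hits would
    then merge overlapping detections into final bounding boxes."""
    hits = []
    for top in range(0, image.shape[0] - WIN_H, STEP):
        for left in range(0, image.shape[1] - WIN_W, STEP):
            patch = image[top:top + WIN_H, left:left + WIN_W]
            score = clf.decision_function([hog_vec(patch)])[0]
            if score > thresh:
                hits.append((top, left, score))
    return hits
```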
74

Performance/Accuracy Trade-offs of Floating-point Arithmetic on Nvidia GPUs: From a Characterization to an Auto-tuner

Surineni, Sruthikesh 09 March 2019
Floating-point computations produce approximate results, which can lead to inaccuracy and reproducibility problems. Existing work addresses two issues: the design of high-precision floating-point representations, and methods for trading accuracy against performance in central processing unit (CPU) applications. A comprehensive study of accuracy/performance trade-offs on modern graphics processing units (GPUs), however, is missing. This thesis covers the use of different floating-point precisions (single and double precision as defined in the IEEE 754 standard), the GNU Multiple Precision Arithmetic Library (GMP), and composite floating-point precision on a GPU, across a variety of synthetic and real-world benchmark applications. First, we analyze the support for single- and double-precision floating-point arithmetic on the considered GPU architectures and characterize the latencies of all floating-point instructions on the GPU. Second, we study the performance/accuracy trade-offs of different arithmetic precisions for addition, multiplication, division, and the natural exponential function. Third, we analyze the combined use of different arithmetic operations in three benchmark applications characterized by different instruction mixes and arithmetic intensities. Building on this analysis, we design a novel auto-tuner that selects the arithmetic precision of a GPU program to reach a better performance/accuracy trade-off, depending on the arithmetic operations and math functions the program uses and on the degree of multithreading of the code.
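The precision/accuracy trade-off the thesis characterizes on GPU kernels can be felt even in a toy CPU experiment; a NumPy sketch (illustrative only, not the thesis's benchmarks) of how single precision drifts from double under repeated accumulation:

```python
# Tiny illustration of precision vs. accuracy: accumulate the same series
# in float32 and float64 and compare. The value 0.1 is arbitrary, chosen
# because it is not exactly representable in binary floating point.
import numpy as np

N = 1_000_000
acc32, acc64 = np.float32(0.0), np.float64(0.0)
for _ in range(N):
    acc32 += np.float32(0.1)   # rounding error compounds at every add
    acc64 += np.float64(0.1)

print("float32 sum:", acc32)   # noticeably off from 100000
print("float64 sum:", acc64)   # off only in the last few digits
print("f32 abs error:", abs(float(acc32) - N * 0.1))
print("f64 abs error:", abs(float(acc64) - N * 0.1))
```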
75

Criticality Assessments for Improving Algorithmic Robustness

Jones, Thomas Berry 09 April 2019
Though computational models typically assume all program steps execute flawlessly, that does not imply all steps are equally important should a failure occur. In the "Constrained Reliability Allocation" problem, sufficient resources are guaranteed for operations that prompt eventual program termination on failure, but operations that only cause output errors are given a limited budget of some vital resource, insufficient to ensure correct operation for each of them.

In this dissertation, I present a novel representation of failures based on their timing and location, combined with criticality assessments, a method used to predict the behavior of systems operating outside their design criteria. I observe that strictly correct error measures hide interesting failure relationships, that failure importance is often determined by failure timing, and that recursion plays an important role in structuring output error. I employ these observations to improve the output error of two matrix multiplication methods through an economization procedure that moves failures from worse to better locations, providing a possible solution to the constrained reliability allocation problem. I show a 38% to 63% decrease in absolute value error on matrix multiplication algorithms, despite nearly identical failure counts between control and experimental studies. Finally, I show that efficient sorting algorithms are less robust at large scale than less efficient sorting algorithms.
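To illustrate how failure location drives output error (a toy sketch; the dissertation's failure model and economization procedure are more elaborate), one can corrupt a single intermediate product of a naive matrix multiplication at different steps and compare the damage:

```python
# Toy failure-injection sketch. The "failure" zeroes one intermediate
# product at a chosen step; matrices and the failure model are made up.
import numpy as np

def matmul_with_failure(A, B, fail_step=None):
    """Naive matmul; the step counter increments per scalar product A[i,k]*B[k,j]."""
    n = A.shape[0]
    C = np.zeros((n, n))
    step = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                prod = A[i, k] * B[k, j]
                if step == fail_step:
                    prod = 0.0            # the injected failure
                C[i, j] += prod
                step += 1
    return C

rng = np.random.default_rng(0)
A, B = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
exact = A @ B
# Same failure count (one), different locations -> different output error.
for s in (0, 200, 500):
    err = np.abs(matmul_with_failure(A, B, fail_step=s) - exact).sum()
    print(f"failure at step {s}: total abs error {err:.3f}")
```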
76

Techniques for Enhancing the Security of Future Smart Grids

Saed, Mustafa 20 April 2019
The smart grid is a new technology that uses sophisticated techniques for electrical transmission and distribution in order to provide excellent electrical service to customers and to allow them to manage their electricity consumption through two-way communication. The idea of the "Smart Grid" was most likely conceived by researchers and engineers at the U.S. Department of Energy who were concerned with increasing the functionality and intelligence of the contemporary electrical grid. These functionalities typically include knowledge about generation, the ability to automate substations, and methods of communicating with consumers.

Improvements in the performance of network and smart grid systems have significantly enriched their effectiveness and consistency. Unfortunately, these advances also pose new threats when the systems are not equipped with proper security measures, resulting in safety issues such as disconnection of the electrical power source. Even though addressing the security concerns of a massive and powerful system can be overwhelming, appropriate installation of electrical equipment can prevent cyber-attacks from harming essential functions.

The most effective security measures can be employed by every component of the smart grid communications network through an understanding of the practices and principles found in similar systems and industries.

This dissertation builds on prior work on smart grid security. It emphasizes protecting the two-way direct and indirect communication between smart meters and collectors through three cryptographic protocols based on PKI. Securing indirect communication is more difficult than securing direct communication, as readings (measurements) have to travel through other smart meters before reaching the collector. The introduced schemes satisfy the security requirements of confidentiality, integrity, and nonrepudiation. Furthermore, a risk analysis of the three designed security protocols for smart meters in smart grid networks is performed. Finally, a technique for verifying the security of the three protocols between smart meters, the central gateway (collector), and supervisory nodes (substation) is presented. The verification process relies on the CryptoVerif tool and proceeds in two phases. In the first phase, the protocols were manually investigated for security flaws, inconsistencies, and incorrect usage of cryptographic primitives. In the second phase, the protocols were analyzed using CryptoVerif, an automated formal-method-based analysis tool. Several efficiency improvements are presented as an outcome of these analyses. Future work will concentrate on simulating and integrating the three designed protocols and on securing the data readings (Smart Meter-Collector-Substation/Utility) before the public utility uploads them to the smart grid cloud. In addition, a new security technique to secure the smart grid cloud will be discussed.
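As a generic illustration of the kind of PKI primitive such protocols build on (a sketch only; the dissertation's three protocols are not reproduced in the abstract, and the message format below is hypothetical), a smart meter could sign each reading so the collector can verify integrity and nonrepudiation:

```python
# Minimal ECDSA signing sketch using the `cryptography` package. The reading
# format and field names are invented; the dissertation's actual protocols
# also address confidentiality and multi-hop (indirect) forwarding.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

meter_key = ec.generate_private_key(ec.SECP256R1())  # provisioned via PKI
reading = b"meter=1234;kwh=42.7;ts=2019-04-20T12:00:00Z"

# The meter signs the reading before sending it (directly or via other meters).
signature = meter_key.sign(reading, ec.ECDSA(hashes.SHA256()))

# The collector verifies against the meter's certified public key.
try:
    meter_key.public_key().verify(signature, reading,
                                  ec.ECDSA(hashes.SHA256()))
    print("reading accepted: integrity and origin verified")
except InvalidSignature:
    print("reading rejected: tampered or wrong signer")
```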
77

Low Power Techniques for Video Codec Motion Compensation

Badran-Louca, Serena 14 August 2015
A major milestone in the evolution of video coding standards is the well-known H.264/MPEG-4 Advanced Video Coding (AVC), followed in January 2013 by H.265/MPEG-H High Efficiency Video Coding (HEVC). Both standards achieve substantial improvements in bit-rate efficiency over their predecessors and are the standards of choice in many applications such as HD-DVD and HD digital television.

With the proliferation of handheld devices, the emphasis on reducing power consumption and increasing battery life is growing, making power efficiency the main goal of any decoder implementation. Most current power solutions, however, focus solely on reducing memory accesses, the largest power drain. Aspects that receive less attention include memory access efficiency, memory power-down, and pipeline efficiency. Power in these areas can be reduced with new architectures for memory access arbitration and reference data scheduling. As shown in this work, these techniques yield an 87.67% reduction in off-chip memory power and an 86.9% reduction in the number of memory accesses while the memory is powered up.
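A toy software model of the idea behind reference data scheduling (the thesis proposes hardware architectures, not this Python; the tile size, motion vectors, and reuse policy below are invented): neighboring motion-compensated blocks overlap in the reference frame, so caching fetched tiles avoids re-reading the same off-chip data.

```python
# Toy model of motion-compensation reference fetches, counting how many
# would go off-chip with and without reuse of already-fetched tiles.
accesses_without_reuse = 0
accesses_with_reuse = 0
cache = set()

def fetch_block(x, y):
    """Fetch one 4x4 reference-frame tile, counting off-chip accesses."""
    global accesses_without_reuse, accesses_with_reuse
    tile = (x // 4, y // 4)
    accesses_without_reuse += 1
    if tile not in cache:          # scheduled reuse: only a miss goes off-chip
        accesses_with_reuse += 1
        cache.add(tile)

# Neighboring blocks with similar motion vectors hit overlapping tiles.
motion_vectors = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
for bx in range(0, 32, 4):
    for mvx, mvy in motion_vectors:
        fetch_block(bx + mvx, mvy)

print("off-chip fetches, no reuse:  ", accesses_without_reuse)
print("off-chip fetches, with reuse:", accesses_with_reuse)
```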
78

Antikernel: A decentralized secure hardware-software operating system architecture

Zonenberg, Andrew D. 22 August 2015
Security of monolithic kernels, and even microkernels, relies on a large and complex body of code (including both software and hardware components) being entirely bug-free. Most contemporary operating systems can be completely compromised by a bug anywhere in this codebase, from the network stack to the CPU pipeline's handling of privilege levels, regardless of whether a particular application uses that feature or not. Even formally verified software is vulnerable to failure when the hardware, or the hardware-software interface, has not been verified.

This thesis describes Antikernel, a novel operating system architecture consisting of both hardware and software components and designed to be fundamentally more secure than the existing state of the art. In order to make formal verification easier, and to improve parallelism, the Antikernel system is highly modular and consists of many independent hardware state machines (one or more of which may be a general-purpose CPU running application or systems software) connected by a packet-switched network-on-chip (NoC).

The Antikernel architecture is unique in that there is no "all-powerful" software with the ability to read or modify arbitrary data on the system, gain low-level control of the hardware, and so on. All software is unprivileged; the concept of "root" or "kernel mode" simply does not exist, so there is no possibility of malicious software achieving such capabilities.

The prototype Antikernel system was written in a mixture of Verilog, C, and MIPS assembly language for the actual operating system, plus a large body of C++ in debug/support tools which are used for development but do not run on the target system. The prototype was verified with a combination of simulation (Xilinx ISim), formal model checking (using the MiniSAT solver integrated with yosys), and hardware testing (using a batch processing cluster consisting of Xilinx Spartan-3A, Spartan-6, Artix-7, and Kintex-7 FPGAs).
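A loose software analogy for the architecture (purely illustrative; the real system is Verilog hardware on a packet-switched NoC, and the node names below are invented): independent state machines that interact only by exchanging packets through a router, with no node holding global privileges.

```python
# Illustrative-only model: every "node" owns its state and can only send
# messages; nothing can read or write another node's memory directly.
from collections import deque

class Node:
    def __init__(self, name):
        self.name, self.inbox, self.store = name, deque(), {}
    def handle(self, router):
        while self.inbox:
            src, op, key, val = self.inbox.popleft()
            if op == "write":
                self.store[key] = val          # each node enforces its own policy
            elif op == "read":
                router.send(self.name, src, "reply", key, self.store.get(key))
            elif op == "reply":
                print(f"{self.name} got {key}={val} from {src}")

class Router:
    """Packet-switched interconnect: the only way nodes communicate."""
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}
    def send(self, src, dst, op, key, val=None):
        self.nodes[dst].inbox.append((src, op, key, val))

ram, cpu = Node("ram"), Node("cpu")
net = Router([ram, cpu])
net.send("cpu", "ram", "write", "x", 42)   # a request, not direct memory access
net.send("cpu", "ram", "read", "x")
for node in (ram, cpu):
    node.handle(net)
```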
79

Optimal investigation of a HVDC transmission system

Eleftheratos, Cleanthis N. January 1981
This thesis describes an investigation into the design of an optimal control system for an AC-DC power transmission system. The steady-state operation of the plant is considered first, and the relationships between some important parameters of the system are established. The dynamic performance of the system is then considered, and an optimal controller is designed using Pontryagin's principle. Several computer programs were written to analyse and confirm the mathematical models developed in the thesis, and good agreement was obtained between the computed values and results from the literature over a range of operating and transient conditions.
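For readers unfamiliar with the method, the general form of Pontryagin's principle that such a controller design rests on can be stated as follows (a generic statement; the thesis's actual cost functional and plant model are not given in this abstract):

```latex
% Generic statement of Pontryagin's minimum principle; the thesis's actual
% cost functional and AC-DC plant dynamics are not reproduced here.
\begin{align*}
  J &= \int_{t_0}^{t_f} L\bigl(x(t), u(t)\bigr)\,dt,
     \qquad \dot{x} = f\bigl(x(t), u(t)\bigr), \\
  H(x, u, \lambda) &= L(x, u) + \lambda^{\top} f(x, u), \\
  \dot{\lambda} &= -\frac{\partial H}{\partial x},
     \qquad u^{*}(t) = \arg\min_{u} H\bigl(x^{*}(t), u, \lambda(t)\bigr).
\end{align*}
```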
80

Security through network-wide diversity assignment

O'Donnell, Adam J.; Sethu, Harish. January 2005

Thesis (Ph.D.), Drexel University, 2005. Includes abstract and vita. Includes bibliographical references (leaves 91-98).
