About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Efficient, sound formal verification for analog/mixed-signal circuits

Fisher, Andrew N. 26 January 2016
The increasing demand for smaller, more efficient circuits has created a need for both digital and analog designs to scale down. Digital technologies have been successful in meeting this challenge, but analog circuits have lagged behind because smaller transistor sizes have a disproportionately negative effect on them. Since many applications require small, low-power analog circuits, the trend has been to take advantage of digital's ability to scale by replacing as much of the analog circuitry as possible with digital counterparts, yielding what are known as digitally-intensive analog/mixed-signal (AMS) circuits. Though such circuits have helped the scaling problem, they have further complicated verification. This dissertation improves on techniques for AMS property specification and develops sound, efficient extensions to formal AMS verification methods. The language for analog/mixed-signal properties (LAMP) offers a simple, intuitive way of specifying AMS properties: it provides a procedural method for describing properties that is more straightforward than temporal-logic-like languages. However, LAMP is still a nascent language and is limited in the types of properties it can describe. This dissertation extends LAMP by adding statements to ignore transient periods and to reset the property check when the environment conditions change. After specifying a property, one needs to verify that the circuit satisfies it. An efficient method for formally verifying AMS circuits is to use the restricted polyhedral class of zones. Zones have simple operations for exploring the reachable state space, but they are only applicable to circuit models that use constant rates. To extend zones to more general models, this dissertation provides the theory and implementation needed to soundly handle models with ranges of rates. As a second improvement to the state representation, this dissertation describes how octagons can be adapted to model checking of AMS circuit models. Though zones have efficient algorithms, this comes at the cost of over-approximating the reachable state space; octagons have similarly efficient algorithms while adding flexibility that reduces the necessary over-approximations. Finally, the full methodology is demonstrated on two examples. The first is a switched-capacitor integrator that has been studied in the context of transforming the original formal model to use only single-rate assignments. The property of not saturating is written in LAMP, the circuit model is learned, and the property is checked against a faulty and a correct circuit. In addition, it is shown that the zone extension, and its implementation with octagons, recovers all previous conclusions for the switched-capacitor integrator without the need to translate the model. In particular, the method applies to all the models produced and does not require the soundness check that the translational approach needs in order to accept positive verification results. As a second example, the full tool flow is demonstrated on a digital C-element driven by a pair of RC networks, creating an AMS circuit. The RC networks are chosen so that the inputs to the C-element are ordered; LAMP is used to codify this behavior, and it is verified that the input signals change in the correct order for the provided SPICE simulation traces.
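By way of illustration only (a generic sketch of the standard zone formulation in Python, not code from this dissertation): a zone over n variables is commonly represented as a difference-bound matrix (DBM), where entry M[i][j] bounds x_i - x_j and index 0 stands for the constant zero; canonicalization is an all-pairs shortest-path tightening, and octagons extend the same machinery to constraints of the form ±x ± y ≤ c.

import math

def canonicalize(m):
    """Tighten a DBM in place (Floyd-Warshall); return False if the zone is empty."""
    n = len(m)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    return all(m[i][i] >= 0 for i in range(n))

INF = math.inf

# Zone over (0, x, y): x <= 4, x >= 1 (encoded as 0 - x <= -1), y - x <= 2.
zone = [[0, -1, INF],
        [4, 0, INF],
        [INF, 2, 0]]
print(canonicalize(zone))  # True: the zone is non-empty
print(zone[2][0])          # 6: the tightened (derived) bound y <= 6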
92

Improving processor efficiency through thermal modeling and runtime management of hybrid cooling strategies

Kaplan, Fulya 10 July 2017
One of the main challenges in building future high-performance systems is the ability to maintain safe on-chip temperatures in the presence of high power densities. Handling such high power densities necessitates novel cooling solutions that are significantly more efficient than their existing counterparts. A number of advanced cooling methods have been proposed to address the temperature problem in processors; however, tradeoffs exist among the performance, cost, and efficiency of those cooling methods, and these tradeoffs depend on the target system's properties. Hence, no single cooling solution is optimal for every system. This thesis claims that reaching exascale computing requires a dramatic improvement in energy efficiency, and that achieving this improvement requires a temperature-centric co-design of the cooling and computing subsystems. Such co-design requires detailed system-level thermal modeling, design-time optimization, and runtime management techniques that are aware of the underlying processor architecture and application requirements. To this end, this thesis first proposes compact thermal modeling methods to characterize the complex thermal behavior of cutting-edge cooling solutions, chiefly Phase Change Material (PCM)-based cooling, liquid cooling, and thermoelectric cooling (TEC), as well as hybrid designs that combine these. The proposed models are modular, and they enable fast and accurate exploration of a large design space. Comparisons against multi-physics simulations and measurements on testbeds validate the accuracy of the models (less than 1°C error on average) and demonstrate significant reductions in simulation time (up to four orders of magnitude). The thesis then introduces temperature-aware optimization techniques that maximize the energy efficiency of a given system as a whole, including both computing and cooling energy. The proposed optimization techniques approach the temperature problem from various angles, tackling major sources of inefficiency. One important angle is to understand application power and performance characteristics and to design management techniques that match them. For workloads that require short bursts of intense parallel computation, the thesis proposes using PCM-based cooling in cooperation with a novel Adaptive Sprinting technique. By tracking the PCM state and incorporating this information into runtime decisions, Adaptive Sprinting uses the PCM heat-storage capability more efficiently, achieving a 29% performance improvement over existing sprinting policies. In addition to application characteristics, high heterogeneity in on-chip heat distribution is an important factor affecting efficiency. Hot spots occur at different locations on the chip with varying intensities; thus, designing a uniform cooling solution to handle worst-case hot spots significantly reduces cooling efficiency. The hybrid cooling techniques proposed as part of this thesis address this issue by combining the strengths of different cooling methods and localizing the cooling effort over hot spots. Specifically, the thesis introduces LoCool, a cooling-system optimizer that minimizes cooling power under temperature constraints for hybrid-cooled systems using TECs and liquid cooling. Finally, the scope of this work is not limited to existing advanced cooling solutions; it also extends to emerging technologies and their potential benefits and tradeoffs.
One such technology is the integrated flow cell array, in which fuel is pumped through on-chip microchannels, providing both cooling and on-chip power generation. This thesis explores a broad range of design parameters, including maximum chip temperature, leakage power, and generated power, for flow cell arrays in order to maximize the benefits of integrating this technology with computing systems. Through thermal modeling and runtime management techniques, and by exploring the design space of emerging cooling solutions, this thesis provides significant improvements in processor energy efficiency.
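For readers unfamiliar with compact thermal models, the sketch below shows, in heavily simplified form and purely as an illustration (it is not one of the models from this thesis, and PCM phase change and TEC physics are omitted), what such a model computes: each block gets a heat capacity, a conductance to ambient, and conductances to its neighbors, and temperatures are advanced by forward Euler.

import numpy as np

def thermal_step(T, P, G, C, T_amb, g_amb, dt):
    """One forward-Euler step of C*dT/dt = P - g_amb*(T - T_amb) + lateral flow."""
    lateral = G @ T - G.sum(axis=1) * T      # net heat flowing in from neighbors
    dT = (P - g_amb * (T - T_amb) + lateral) / C
    return T + dt * dT

# Two blocks: a hot core thermally coupled to a cooler cache block.
T = np.array([45.0, 40.0])                   # temperatures (degrees C)
P = np.array([10.0, 2.0])                    # power inputs (W)
G = np.array([[0.0, 0.5], [0.5, 0.0]])       # inter-block conductances (W/K)
C = np.array([20.0, 30.0])                   # heat capacities (J/K)
for _ in range(1000):
    T = thermal_step(T, P, G, C, T_amb=25.0, g_amb=0.4, dt=0.1)
print(T)                                     # steady-state temperature estimate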
93

Cooperative high-performance computing with FPGAs - matrix multiply case-study

Munafo, Robert 03 July 2018
In high-performance computing, there is great opportunity for systems that use FPGAs to handle communication while also performing computation on data in transit in an "altruistic" manner; that is, using resources for computation that might otherwise be used for communication, in a way that improves overall system performance and efficiency. We provide a specific definition of Computing in the Network that captures this opportunity. We then outline overall requirements and guidelines for cooperative computing that include this ability, and make suggestions for specific computing capabilities to be added to the networking hardware in a system. We then explore algorithms running on a network so equipped for a few specific computing tasks: dense matrix multiplication, sparse matrix transposition, and sparse matrix multiplication. For the first of these, we give limits on problem size and estimates of the performance attainable with present-day FPGA hardware.
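A toy sketch of the computing-in-the-network idea for the dense case (illustrative only; the thesis targets FPGA networking hardware, and the function below is hypothetical): each network node owns a block-column of B and accumulates partial products as row-blocks of A stream past it, so communication doubles as computation.

import numpy as np

def streaming_matmul(A, B, block=2):
    """Tiled matmul where each 'network node' computes on data in transit."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for k in range(0, n, block):             # node k owns block-column k of B
        Bk = B[:, k:k+block]
        for i in range(0, n, block):         # row-blocks of A stream past node k
            C[i:i+block, k:k+block] = A[i:i+block, :] @ Bk
    return C

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
assert np.allclose(streaming_matmul(A, B), A @ B)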
94

Exploring Data Compression and Random-access Reduction to Mitigate the Bandwidth Wall for Manycore Architectures

Nguyen, Tri Minh 31 October 2018
The performance gap between computer processors and memory bandwidth is severely limiting the throughput of modern and future multi-core and manycore architectures. To handle this growing gap, commercial processors such as the Intel Xeon Phi and NVIDIA or AMD GPUs have needed expensive memory solutions such as high-bandwidth memory (HBM) and 3D-stacked memory to satisfy the bandwidth demand of the growing core count over each product generation. Without a scalable solution to the memory bandwidth issue, throughput-oriented computation cannot improve. This problem is widely known as the bandwidth wall.

Data compression and random-access reduction are promising approaches to increase bandwidth without raising costs. This thesis makes three specific contributions to the state of the art. First, to reduce cache misses, we propose an on-chip cache compression method that drastically increases compression performance and cache hit rate over prior work. Second, to improve direct compression of off-chip bandwidth and make it more scalable, we propose a novel link compression framework that exploits the on-chip caches themselves as a massive and scalable compression dictionary. Last, to overcome the poor random-access performance of non-volatile memory (NVM) and make it more attractive as a DRAM replacement with crash consistency, we propose a multi-undo logging scheme that seamlessly logs memory writes sequentially to maximize NVM I/O operations per second (IOPS).

As a common principle, this thesis seeks to overcome the bandwidth wall for manycore architectures not through expensive memory technologies but by assessing and exploiting workload behavior, and not by burdening programmers with specialized semantics but by implementing software-transparent architectural improvements.
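For flavor, here is one classic cache-line compression idea, base plus narrow deltas, sketched below; it is not the scheme proposed in this thesis, but it illustrates where compressibility comes from: the eight 8-byte words of a 64-byte line often lie near a common base value.

def compress_line(words, delta_bytes=2):
    """Store a line as one base word plus small signed deltas, if they fit."""
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)
    deltas = [w - base for w in words]
    if all(-limit <= d < limit for d in deltas):
        return (base, delta_bytes, deltas)    # 8 + 8*delta_bytes bytes total
    return None                               # incompressible: keep the raw line

def decompress_line(compressed):
    base, _, deltas = compressed
    return [base + d for d in deltas]

# Eight pointer-like 8-byte words clustered near a common base.
line = [0x1000, 0x1008, 0x1010, 0x1018, 0x1020, 0x1028, 0x1030, 0x1038]
c = compress_line(line)                       # 24 bytes instead of 64
assert c is not None and decompress_line(c) == line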
95

Design of a dynamic e-commerce system with engineering applications

Arikere, Ravi Kishan 25 November 2002
The primary purpose of this thesis was to design and develop a prototype e-commerce system in which dynamic parameters are included in the decision-making process and execution of an online transaction. The system developed and implemented takes into account previous usage history, priority, and associated engineering capabilities. It was built on a three-tiered client/server architecture, with the Internet browser as the interface. The middle-tier web server was implemented using Active Server Pages, which form a link between the client system and other servers. A relational database management system formed the data component of the three-tiered architecture; it includes a data-warehousing capability that extracts needed information from the stored data about customers and their orders. The system organizes and analyzes the data generated during a transaction to formulate a model of the client's behavior during and after that transaction, which is then used for decisions such as pricing and order rescheduling in the client's subsequent transactions. Among other things, the system brings predictability to the transaction execution process, which is highly desirable in the current competitive scenario.
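A purely hypothetical sketch of the kind of dynamic decision described (the function, its parameters, and the pricing rule are invented for illustration and are not taken from the thesis): the middle tier might derive a quote from a client's usage history and priority.

def dynamic_price(base_price, past_orders, priority):
    """Hypothetical pricing rule: reward usage history, charge for rush priority."""
    loyalty_discount = min(0.10, 0.01 * past_orders)    # 1% per past order, capped at 10%
    priority_premium = 0.05 if priority == "rush" else 0.0
    return round(base_price * (1 - loyalty_discount + priority_premium), 2)

print(dynamic_price(100.0, past_orders=7, priority="rush"))   # 98.0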
96

Remote experimental station for engineering education

Doddapuneni, Muralidhar 02 December 2002
This thesis provides a distance-learning laboratory for students in an electrical and computer engineering department, in which the instructor can conduct experiments on one computer and send the results to students at remote computers. The output of the experiment conducted by the instructor is sampled using a successive-approximation analog-to-digital (A/D) converter. A microcontroller collects the samples using a high-speed queued serial peripheral interface clock and transmits the data to an IBM-compatible personal computer over a serial port interface, where the samples are processed using Fast Fourier Transforms and graphed. The client/server application developed for this work transfers the acquired samples over a Transmission Control Protocol/Internet Protocol (TCP/IP) network, with an operational Graphical User Interface (GUI), to the remote computers, where the samples are processed and presented to students. The application was tested on all Windows platforms and at various Internet connection speeds (56k modem, Digital Subscriber Line (DSL), Local Area Network (LAN)). The results were analyzed, and an appropriate methodology for the Remote Experimental Station was formulated.
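An illustrative sketch of the data path described, with a socketpair standing in for the instructor-to-student TCP link and a synthetic sine wave standing in for the A/D converter output (this is not the thesis's application, which used a GUI and real hardware):

import socket, struct
import numpy as np

def pack_samples(samples):
    """Length-prefixed little-endian float frame for the wire."""
    return struct.pack(f"<I{len(samples)}f", len(samples), *samples)

def unpack_samples(blob):
    (n,) = struct.unpack_from("<I", blob)
    return np.array(struct.unpack_from(f"<{n}f", blob, 4))

server, client = socket.socketpair()          # stands in for the TCP/IP link
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t)           # simulated A/D converter output
server.sendall(pack_samples(signal))
received = unpack_samples(client.recv(65536))
spectrum = np.abs(np.fft.rfft(received))      # FFT on the student's side
print(np.argmax(spectrum))                    # 50: the dominant frequency bin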
97

Three-dimensional image analysis using confocal microscopy

Duranza, Sonia 22 May 1998
This thesis introduces imaging algorithms for three-dimensional data analysis and classification using confocal microscopy. The third dimension, depth information, is provided through the optical sectioning property of the confocal microscope. The theme of this thesis is to develop imaging techniques that extend beyond the traditional two-dimensional (2-D) spatial coordinate system into an augmented three-dimensional (3-D) world in which the analysis, interpretation, and eventual classification of data are greatly enhanced. In developing the proposed 3-D algorithms, three main objectives were sought: (1) establish proper 3-D mathematical extensions and practical implementations of standard 2-D formulations; (2) ensure that the classification process overcomes the burden imposed by dependence on size and orientation, through application of the principal component transform and the log-spherical plot; and (3) address issues such as memory management and accelerated processing to reach the final objective of effective data recognition and classification.
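As a generic illustration of objective (2) (a sketch of the standard principal component transform, not this thesis's implementation): aligning a 3-D point cloud with its directions of greatest variance removes the dependence on the object's original orientation.

import numpy as np

def pca_align(points):
    """Rotate a 3-D point cloud into its principal axes, largest variance first."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                       # descending variance
    return centered @ eigvecs[:, order]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * [5.0, 2.0, 0.5]         # elongated object
rotated = cloud @ np.linalg.qr(rng.normal(size=(3, 3)))[0]  # arbitrary rotation
aligned = pca_align(rotated)
print(aligned.std(axis=0))   # roughly [5, 2, 0.5]: axis order restored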
98

Automated synthesis of a reduced-parameter model for 3D digital audio

Faller, Kenneth John, II 07 June 1996
Head-Related Impulse Responses (HRIRs) are used in signal processing to implement the synthesis of spatialized audio. They represent the modification that sound undergoes from its source to the listener's eardrums. HRIRs are somewhat different for each listener and require expensive specialized equipment for their individual measurement. Therefore, the development of a method to obtain customized HRIRs without specialized equipment is extremely desirable. A customizable representation of HRIRs can be created by modeling them in terms of an appropriate set of time delays and a resonant frequency. Previously, this was achieved manually, by trial and error. In this research, an automated algorithm for determining the appropriate delays and resonant frequency needed to model an HRIR was developed, implemented, and evaluated. This provides an objective, repeatable way to determine the parameters of the HRIR model. The automated process provided an average accuracy of 96.9% in the analysis of 2160 HRIRs.
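A minimal sketch of one ingredient such an automated fit needs, estimating an HRIR's onset delay by cross-correlation against a reference (the example data are synthetic and hypothetical; this is not the algorithm developed in this research):

import numpy as np

def estimate_delay(hrir, reference):
    """Return the lag (in samples) at which hrir best aligns with reference."""
    corr = np.correlate(hrir, reference, mode="full")
    return np.argmax(corr) - (len(reference) - 1)

reference = np.zeros(128)
reference[0] = 1.0                                   # idealized impulse
true_delay = 23                                      # samples (hypothetical)
hrir = np.roll(reference, true_delay) * 0.8          # delayed, attenuated copy
hrir += 0.01 * np.random.default_rng(1).normal(size=128)   # measurement noise
print(estimate_delay(hrir, reference))               # 23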
99

Stochastic Simulation Framework for a Data Mule Network in the Amazon Delta

January 2015
This report investigates the improvement in transmission throughput when fountain codes are used in opportunistic data routing for a proposed delay-tolerant network connecting remote, isolated communities in the Amazon region of Brazil to the main city of that area. To extend healthcare facilities to remote and isolated communities on the banks of the Amazon river, the network [7] uses regularly scheduled boats as data mules to carry data from one settlement to another. Frequent thunderstorms and rainstorms, the poor state of infrastructure, and the harsh geographical terrain all increase the chances of messages not being delivered to their intended destination. These communities have access to medical facilities only through sporadic visits from a medical team based in the main city of the region, Belem. With the proposed network, records of routine clinical examinations, such as ultrasounds of pregnant women, could be sent to doctors in Belem for evaluation. However, the lack of modern communication infrastructure in these communities, unpredictable boat schedules due to delays and breakdowns, and high transmission failure rates caused by the harsh environment of the region all mandate the design of robust delay-tolerant routing algorithms. The work presented here incorporates the unpredictability of the Amazon riverine scenario into the simulation model, accounting for mechanical failures in boats that lead to delays and breakdowns, possible decreases in transmission speed due to rain, and individual packet losses. Extensive simulation results are presented to evaluate the proposed approach and to verify that the proposed solution [7] could serve as a viable mode of communication, given the lack of available options in the region. While the simulation results focus on remote healthcare applications in the Brazilian Amazon, we envision that this approach may also be used for other remote applications, such as distance education, and other similar scenarios. (Master's Thesis, Computer Science, 2015)
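A toy sketch of why fountain codes suit this setting (an LT-style encoder with a naive degree distribution, not the report's simulator): the sender can mint an endless stream of XOR-combined packets, and a boat that collects any sufficiently large subset lets the destination recover the data, with no feedback channel required.

import random

def lt_encode(blocks, seed):
    """Produce one encoded packet: (seed, XOR of a random subset of source blocks)."""
    rng = random.Random(seed)
    degree = rng.randint(1, len(blocks))           # toy degree distribution
    chosen = rng.sample(range(len(blocks)), degree)
    payload = 0
    for i in chosen:
        payload ^= blocks[i]
    return seed, payload                           # the receiver re-derives `chosen` from seed

message_blocks = [0x48, 0x65, 0x6C, 0x6C]          # four one-byte source blocks
packets = [lt_encode(message_blocks, s) for s in range(8)]
print(packets)   # a decoder seeing enough packets peels back the XORs to recover all blocks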
100

Development and Implementation of Physical Layer Kernels for Wireless Communication Protocols

January 2016
Historically, wireless communication devices have been developed to process one specific waveform. In contrast, a modern cellular phone supports multiple waveforms corresponding to the LTE, WCDMA (3G), and 2G standards, and the selection of the network is controlled by software running on a general-purpose processor, not by the user. Now, instead of selecting from a set of complete radios as in software-controlled radio, what if the software could select the building blocks based on user needs? This is the new software-defined flexible radio, which would enable users to construct wireless systems that fit their needs rather than forcing them to choose from a small set of pre-existing protocols. Developing and implementing flexible protocols requires flexible hardware very similar to a Software Defined Radio (SDR). In this thesis, the Intel T2200 board is chosen as the SDR platform; it is a heterogeneous platform with an ARM core, a CEVA DSP, and several accelerators. A wide range of protocols is mapped onto this platform and their performance evaluated, including two OFDM-based protocols (WiFi-Lite-A, WiFi-Lite-B), one DFT-spread-OFDM-based protocol (SCFDM-Lite), and one single-carrier protocol (SC-Lite). The transmitter and receiver blocks of the different protocols are first mapped onto the ARM core of the T2200 board. The timing results show that the IFFT, FFT, and Viterbi decoder blocks take most of the transmitter and receiver execution time, so in the next step these blocks are mapped onto the CEVA DSP. This mapping yields significant execution-time savings: 60% for WiFi-Lite-A, 64% for WiFi-Lite-B, and 71.5% for SCFDM-Lite. No savings are reported for SC-Lite, since it was not mapped onto the CEVA DSP. Further significant reductions in execution time are achieved for the WiFi-Lite-A and WiFi-Lite-B protocols by implementing the entire transmitter and receiver chains on the CEVA DSP; for WiFi-Lite-A, the savings are as large as 90%, because the timing overhead of ARM-CEVA communication is completely eliminated. Finally, over-the-air testing was done for the WiFi-Lite-A and WiFi-Lite-B protocols: data was sent using one Intel T2200 WBS board and received using another, and the received frames were decoded with no errors, validating the over-the-air communication. (Master's Thesis, Engineering, 2016)
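A minimal sketch of the OFDM transmitter kernel at the heart of these protocols, IFFT plus cyclic prefix (generic textbook form with assumed parameters n_fft=64 and cp_len=16; this is not the T2200 implementation):

import numpy as np

def ofdm_modulate(symbols, n_fft=64, cp_len=16):
    """Map one symbol per subcarrier, IFFT to time domain, prepend cyclic prefix."""
    assert len(symbols) == n_fft
    time_domain = np.fft.ifft(symbols) * np.sqrt(n_fft)
    return np.concatenate([time_domain[-cp_len:], time_domain])

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=(64, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
tx = ofdm_modulate(qpsk)
# Receiver side: drop the cyclic prefix and FFT to recover the subcarrier symbols.
rx = np.fft.fft(tx[16:]) / np.sqrt(64)
assert np.allclose(rx, qpsk)
print(len(tx))   # 80 samples per OFDM symbol (64 data + 16 cyclic prefix)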
