41

Efficient Multi-ported Memories for FPGAs

LaForest, Charles Eric 15 February 2010 (has links)
Multi-ported memories are challenging to implement on FPGAs since the provided block RAMs typically have only two ports. In this dissertation we present a thorough exploration of the design space of FPGA multi-ported memories by evaluating conventional solutions to this problem, and we introduce a new design that efficiently combines block RAMs into multi-ported memories with arbitrary numbers of read and write ports and true random access to any memory location, while achieving significantly higher operating frequencies than conventional approaches. For example, we build a 256-location, 32-bit, 12-ported (4-write, 8-read) memory that operates at 281 MHz on Altera Stratix III FPGAs while consuming an area equivalent to 3679 ALMs: a 43% speed improvement and 84% area reduction over a pure ALM implementation, and a 61% speed improvement over a pure "multipumped" implementation, although the pure multipumped implementation is 7.2-fold smaller.
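The abstract does not spell out how the block RAMs are combined, but a well-known technique in this design space is bank replication steered by a live value table (LVT): each write port owns a bank, and a small table records, per address, which bank was written last. The Go sketch below is purely a behavioral illustration of that bookkeeping, with invented type names and port counts; it is not claimed to be the thesis's exact design, and it omits the per-read-port replication an FPGA mapping would need for simultaneous reads.

```go
package main

import "fmt"

// lvtMemory is a behavioral model of one well-known way to get many write
// ports out of simpler RAMs: give each write port its own bank and keep a
// "live value table" (LVT) that records, per address, which bank was written
// last. An FPGA mapping would additionally replicate each bank per read port;
// that physical detail is omitted here.
type lvtMemory struct {
	banks [][]uint32 // one bank per write port
	lvt   []int      // per address: index of the bank holding the live value
}

func newLVTMemory(writePorts, depth int) *lvtMemory {
	m := &lvtMemory{banks: make([][]uint32, writePorts), lvt: make([]int, depth)}
	for i := range m.banks {
		m.banks[i] = make([]uint32, depth)
	}
	return m
}

// Write on port p touches only bank p; the LVT remembers that bank p now
// holds the most recent value for addr.
func (m *lvtMemory) Write(p, addr int, data uint32) {
	m.banks[p][addr] = data
	m.lvt[addr] = p
}

// Read on any port consults the LVT to steer the access to the bank that
// was written last, giving true random access across all ports.
func (m *lvtMemory) Read(addr int) uint32 {
	return m.banks[m.lvt[addr]][addr]
}

func main() {
	m := newLVTMemory(4, 256) // e.g. 4 write ports, 256 locations
	m.Write(0, 17, 0xAAAA)
	m.Write(3, 17, 0xBBBB) // the later write, on a different port, wins
	fmt.Printf("mem[17] = %#x\n", m.Read(17))
}
```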
42

FPGA interconnection networks with capacitive boosting in strong and weak inversion

Eslami, Fatemeh 22 August 2012 (has links)
Designers of Field-Programmable Gate Arrays (FPGAs) are always striving to improve the speed of their designs. The propagation delay of FPGA interconnection networks is a major challenge and continues to grow with newer technologies. FPGA interconnection networks are implemented using NMOS pass-transistor-based multiplexers followed by buffers. The threshold voltage drop across an NMOS device degrades the high logic value and results in unbalanced rising and falling edges, static power consumption due to crowbar currents, and reduced noise margins. In this work, circuit design techniques to construct interconnection circuits with capacitive boosting are proposed. By using capacitive boosting in FPGA interconnection networks, signal transitions are accelerated and the crowbar currents of downstream buffers are reduced. In addition, buffers can be non-skewed or slightly skewed to improve the noise immunity of the interconnection network. Results indicate that by using the presented circuit design technique, the propagation delay can be reduced by at least 10% versus prior art at the expense of a slight increase in silicon area. In addition, in a bid to reduce power consumption in reconfigurable arrays, operation in the weak inversion region has been suggested. Current programmable interconnections cannot be directly used in this region due to very poor propagation delay and sensitivity to Process-Voltage-Temperature (PVT) variations. This work also focuses on designing a common structure for FPGA interconnection networks that can operate in both strong and weak inversion. We propose to use capacitive boosting together with a new circuit design technique, called Twins transmission gates, in implementing FPGA interconnect multiplexers. We also propose to use capacitive boosting in designing buffers. This way, the operating region of the interconnection circuitry is shifted away from weak inversion toward strong inversion, resulting in improved speed and enhanced tolerance to PVT variations. Simulation results indicate that using capacitive boosting to implement the interconnection network can have a significant influence on delay and tolerance to variations. The interconnection network with capacitive boosting is at least 34% faster than prior art in weak inversion. / Graduate
43

On designing coarse grain reconfigurable arrays to operate in weak inversion

Ross, Dian Marie 17 December 2012 (has links)
Field Programmable Gate Arrays (FPGAs) support the reconfigurable computing paradigm by providing an integrated circuit hardware platform that facilitates software-like reconfigurability. The addition of an embedded microprocessor and peripherals to traditional FPGA Combinational Logic Blocks (CLBs) interleaved with interconnections has effectively resulted in a programmable system-on-chip. FPGAs are used to support flexible implementations of Application Specific Integrated Circuit (ASIC) functions. Because FPGAs are reconfigurable, they are often used in place of ASICs during the circuit design process. FPGAs are also used when only a small number of ICs are required: ASICs necessitate large manufacturing runs to be economically viable; for smaller runs the use of FPGAs is an economic alternative. Application domains of interest, such as intelligent guidance systems, medical devices, and sensors, often require low-power, inexpensive calculation of transcendental functions. COordinate Rotation DIgital Computer (CORDIC) is an iterative algorithm used to emulate hardware-expensive multipliers, such as Multiply/ACcumulate (MAC) units, with only shift and add operations. However, because CORDIC is a sequential algorithm, characterized as having the latency of a serial multiplier, techniques that speed up its computational performance have many applications. To this end, three implementations of standard CORDIC, (i) unrolled hardwired, (ii) unrolled programmable, and (iii) rolled programmable, were implemented on four Xilinx FPGA families: Virtex-4, -5, and -6, and Spartan-6. Although hardwired unrolled was found to have the greatest speed at the expense of no runtime flexibility, and rolled programmable was found to have the greatest flexibility and lowest silicon area consumption at the expense of the longest propagation delay, improvements to CORDIC implementations were still sought. Three parallelized CORDIC techniques, P-CORDIC, Flat-CORDIC, and Para-CORDIC, were implemented on the same four FPGA families. P-CORDIC and Flat-CORDIC were shown to have the lowest latency under various conditions; Para-CORDIC was found to perform well in deeply pipelined, high-throughput circuits. Design rules for when to use standard versus precomputation CORDIC techniques are presented. To address the low-power requirements of many applications of interest, the Unfolded Multiplexor-LRB (UMUX-LRB), patented by Sima et al., was analyzed in weak inversion across four transistor technology nodes (180nm, 130nm, 90nm, and 65nm). Previous work was also expanded from strong inversion across the 180nm, 130nm, and 90nm technology nodes to also include 65nm. The UMUX-LRB interconnection network is based upon the Xilinx commercial interconnection network. Therefore, this network (MUX-LRB), and another static circuit technique, CMOS Transmission Gates (CMOS-TG), were profiled across all four technology nodes to provide a baseline of comparison. This analysis found the UMUX-LRB to have the smallest and most balanced rising and falling edge propagation delays, in addition to having the greatest reliability for temperature and process variation. / Graduate
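Since CORDIC itself is a standard, published algorithm, a short software sketch may help readers unfamiliar with the shift-and-add recurrence the abstract refers to. The Go program below is a minimal rotation-mode CORDIC for sine and cosine; floating point stands in for the fixed-point arithmetic (and the multiplications by 2^-i for the right shifts) that a hardware implementation would use. It is illustrative only and is not code from the thesis.

```go
package main

import (
	"fmt"
	"math"
)

// cordicSinCos computes sin and cos of angle (radians, within CORDIC's
// convergence range of roughly ±1.74 rad) using n rotation-mode iterations.
// In hardware the multiplications by 2^-i become arithmetic right shifts,
// so each iteration needs only shifts and additions; floats are used here
// purely for readability.
func cordicSinCos(angle float64, n int) (sin, cos float64) {
	// Precompute the elementary rotation angles atan(2^-i) and the gain K.
	atan := make([]float64, n)
	k := 1.0
	for i := 0; i < n; i++ {
		atan[i] = math.Atan(math.Ldexp(1, -i))
		k *= 1 / math.Sqrt(1+math.Ldexp(1, -2*i))
	}
	x, y, z := 1.0, 0.0, angle
	for i := 0; i < n; i++ {
		d := 1.0
		if z < 0 {
			d = -1.0 // rotate toward zero residual angle
		}
		x, y = x-d*y*math.Ldexp(1, -i), y+d*x*math.Ldexp(1, -i)
		z -= d * atan[i]
	}
	// Undo the accumulated gain of the pseudo-rotations.
	return y * k, x * k
}

func main() {
	s, c := cordicSinCos(math.Pi/6, 24)
	fmt.Printf("sin=%.6f cos=%.6f\n", s, c) // expect ≈ 0.5 and ≈ 0.866
}
```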
44

ANALYSIS OF SERIALIZATION FORMATS IN MAP-BASED WEB GIS APPLICATIONS : Protocol Buffers vs. FlatBuffers

Rönkkö, Johan January 2018 (has links)
This work contributes to the choice of serialization format. The choice of serialization format is critical for web applications that send and receive large amounts of data, since it affects both the reduction in data size and how quickly a client and server can process that data. The work evaluates the binary serialization formats Protocol Buffers and FlatBuffers in web-based geographic information systems. Previous research has predicted that FlatBuffers should be more efficient than Protocol Buffers, but scientific evidence is lacking. Experiments were conducted in which the serialization formats were tested in the Go programming language, with the communication protocols HTTP and WebSocket, and with the network speed limited to 800, 200, and 50 Mbit/s. The experiment showed that it does not matter which serialization format is used when the network speed is limited to 800 Mbit/s, and that Protocol Buffers performed better when the network speed is limited to 200 and 50 Mbit/s. Future work can deepen knowledge of the serialization formats' behavior at different network speeds and in different development environments, as well as implement rendering tools based on the schema files in this work.
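As a rough sketch of the kind of measurement described above, the Go snippet below shows a serialization timing harness: it reports the mean encode latency and the encoded size, the two quantities that determine how a format behaves at different network speeds. The Feature type, the Serializer interface, and the JSON placeholder are all invented for the example; the thesis's actual experiments used generated Protocol Buffers and FlatBuffers bindings over HTTP and WebSocket, which are not reproduced here.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Feature is a hypothetical stand-in for one map feature in a web-GIS payload.
type Feature struct {
	ID     uint64    `json:"id"`
	Coords []float64 `json:"coords"`
}

// Serializer abstracts a wire format. In the thesis the two candidates were
// Protocol Buffers and FlatBuffers; their generated Go bindings would be
// wrapped to satisfy this interface.
type Serializer interface {
	Name() string
	Marshal(payload []Feature) ([]byte, error)
}

// jsonSerializer is only a placeholder so the harness compiles and runs;
// it is not one of the formats compared in the thesis.
type jsonSerializer struct{}

func (jsonSerializer) Name() string                        { return "json (placeholder)" }
func (jsonSerializer) Marshal(p []Feature) ([]byte, error) { return json.Marshal(p) }

// benchmark reports mean encode latency and encoded size for one format,
// the two quantities a client/server round trip depends on.
func benchmark(s Serializer, payload []Feature, rounds int) {
	var size int
	start := time.Now()
	for i := 0; i < rounds; i++ {
		buf, err := s.Marshal(payload)
		if err != nil {
			panic(err)
		}
		size = len(buf)
	}
	fmt.Printf("%s: %v per message, %d bytes\n",
		s.Name(), time.Since(start)/time.Duration(rounds), size)
}

func main() {
	payload := make([]Feature, 1000)
	for i := range payload {
		payload[i] = Feature{ID: uint64(i), Coords: []float64{57.78, 14.16}}
	}
	benchmark(jsonSerializer{}, payload, 100)
}
```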
45

Widening stakeholder involvement : exploiting interactive 3D visualisation and protocol buffers in geo-computing

McCreadie, Christopher Andrew January 2014 (has links)
Land use change has an impact on regional sustainability, which can be assessed using social, economic and environmental indicators. Stakeholder engagement tools provide a platform that can demonstrate the possible future impacts of land use change, to better inform stakeholder groups of the impact of policy changes or plausible climatic variations. To date, some engagement tools are difficult to use or understand and lack user interaction, whilst other tools demonstrate model environments with a tightly coupled user interface, resulting in poor performance. The research and development described herein relates to the development and testing of a visualisation engine for rendering the output of an Agent Based Model (ABM) as a 3D Virtual Environment via a loosely-coupled, data-driven communications protocol called Protocol Buffers. The tool, named Rural Sustainability Visualisation Tool (R.S.V.T), is primarily aimed at enhancing non-expert knowledge and understanding of the effects of land use change, driven by farmer decision making, on the sustainability of a region. Communication protocols are evaluated and Protocol Buffers, a binary-based communications protocol, is selected, based on speed of object serialization and data transfer, to pass messages from the ABM to the 3D Virtual Environment. Early comparative testing of R.S.V.T and its 2D counterpart RepastS shows that R.S.V.T and its loosely-coupled approach offer an increase in performance when rendering land use scenes. The flexibility of Protocol Buffers and MongoDB is also shown to have positive performance implications for storing and running loosely-coupled model simulations. A 3D graphics Application Programming Interface (API), commonly used in the development of computer games technology, is selected to develop the Virtual Environment. Multiple visualisation methods, designed to enhance stakeholder engagement and understanding, are developed and tested to determine their suitability in terms of both user preference and information retrieval. The application of a prototype is demonstrated using a case study based in the Lunan catchment in Scotland, which has water quality and biodiversity issues due to intense agriculture. The region is modelled using three scenario storylines that broadly describe plausible futures: Business As Might Be Usual (BAMBU), the Growth Applied Strategy (GRAS) and the Sustainable European Development Goal (SEDG). The performance of the tool is assessed, and it is found that R.S.V.T can run faster than its 2D equivalent when loosely coupled with a 3D Virtual Environment. The 3D Virtual Environment and its associated visualisation methods are assessed using non-expert stakeholder groups, and it is shown that 3D ABM output is generally preferred to 2D ABM output. Insights are also gained into the most appropriate visualisation techniques for agricultural landscapes. Finally, the benefit of taking a loosely-coupled approach to the visualisation of model data is demonstrated through the performance of Protocol Buffers during testing, showing it is capable of transferring large amounts of model data to a bespoke visual front-end.
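To make the loosely-coupled, data-driven pattern concrete, here is a minimal Go sketch in which the model side emits length-prefixed frames that a separate renderer can consume from a stream without sharing any in-memory state with the model. The AgentState record and the hand-rolled binary encoding are stand-ins invented to keep the sketch dependency-free; R.S.V.T itself serializes the model output with Protocol Buffers.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// AgentState is a hypothetical per-tick record an ABM might emit; the real
// R.S.V.T payload is a Protocol Buffers message, swapped here for a
// hand-rolled fixed-size encoding so the sketch needs no generated code.
type AgentState struct {
	AgentID uint32
	LandUse uint8 // e.g. an enum code for the parcel's land use
}

// writeFrame serializes one tick and length-prefixes it, so the renderer can
// pull whole frames off the stream without knowing anything about the model's
// internals -- the essence of the loosely-coupled, data-driven design.
func writeFrame(w io.Writer, states []AgentState) error {
	var body bytes.Buffer
	if err := binary.Write(&body, binary.LittleEndian, states); err != nil {
		return err
	}
	if err := binary.Write(w, binary.LittleEndian, uint32(body.Len())); err != nil {
		return err
	}
	_, err := w.Write(body.Bytes())
	return err
}

func main() {
	var stream bytes.Buffer // stands in for the socket feeding the 3D front-end
	tick := []AgentState{{AgentID: 1, LandUse: 2}, {AgentID: 2, LandUse: 0}}
	if err := writeFrame(&stream, tick); err != nil {
		panic(err)
	}
	fmt.Printf("frame of %d bytes ready for the renderer\n", stream.Len())
}
```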
46

Formal Modeling and Verification Methodologies for Quasi-Delay Insensitive Asynchronous Circuits

Sakib, Ashiq Adnan January 2019 (has links)
Pre-Charge Half Buffers (PCHB) and NULL Convention Logic (NCL) are two major commercially successful Quasi-Delay Insensitive (QDI) asynchronous paradigms, which are known for their low-power performance and inherent robustness. In industry, QDI circuits are synthesized from their synchronous counterparts using custom synthesis tools. Validation of the synthesized QDI implementation is a critical design prerequisite before fabrication. At present, validation schemes are mostly based on extensive simulation, which is good enough to detect shallow bugs but may fail to detect corner-case bugs. Hence, the development of formal verification methods for QDI circuits has long been desired. The very few formal verification methods that exist in the related field have major limiting factors. This dissertation presents different formal verification methodologies applicable to PCHB and NCL circuits, and aims at addressing the limitations of previous verification approaches. The developed methodologies can guarantee both safety (full functional correctness) and liveness (absence of deadlock), and are demonstrated using several increasingly larger sequential and combinational PCHB and NCL circuits, along with various ISCAS benchmarks. / National Science Foundation (Grant No. CCF-1717420)
47

Multi-Scale Response of Upland Birds to Targeted Agricultural Conservation

Evans, Kristine Oswald 12 May 2012 (has links)
As human populations rise exponentially, agricultural production systems must be adapted to sustain ecosystem function. Government-administered agricultural conservation programs may actualize greater gains in ecosystem services, including wildlife population gains, if conservation practices designed to target specific environmental outcomes are implemented strategically in agricultural landscapes. I evaluated multi-scale, multi-species, and multi-season avian population responses to a targeted native herbaceous buffer practice (CP33: Habitat Buffers for Upland Birds) under the continuous sign-up Conservation Reserve Program administered by the U.S. Department of Agriculture. CP33 is the first conservation practice targeted directly to support habitat and population recovery objectives of a national wildlife conservation initiative (Northern Bobwhite Conservation Initiative). I coordinated breeding season, fall, and winter point transect surveys for northern bobwhite (Colinus virginianus), priority early-succession birds, and overwintering birds on ≈1,150 buffered and non-buffered fields in 14 states (10 ecoregions) from 2006-2009. I also assessed northern bobwhite-landscape associations within each ecoregion to determine effects of landscape structure on observed northern bobwhite abundances. Breeding season and autumn northern bobwhite densities were 60-74% and 52% greater, respectively, over all survey points in the near term (1-4 years post-establishment). However, breeding season and autumn responses, and associations between northern bobwhite abundance and landscape structure, exhibited substantial regional variation, suggesting northern bobwhite conservation and management should be implemented on a regional basis. Breeding season densities of dickcissel (Spiza americana) and field sparrow (Spizella pusilla) were up to 190% greater on buffered fields, whereas overwintering densities of several Emberizid sparrow species were up to 2,707% greater on buffered fields. Species sensitive to patch area or those requiring vegetation structure different from that provided by buffers exhibited limited, but regionally and annually variable, responses to buffered habitats. Increased bird densities of several species in several seasons suggest that wildlife-friendly farming practices, delivered strategically and requiring minimal change in primary land use, can benefit species across broad landscapes when conservation practices are targeted toward specific recovery objectives. Targeted conservation systems combining multiple conservation practices to provide an array of ecosystem services may be a mechanism for meeting multifarious conservation objectives and enhancing biodiversity in agricultural landscapes.
48

Stochastic Resource Constrained Project Scheduling With Stochastic Task Insertion Problems

Archer, Sandra 01 January 2008 (has links)
The area of focus for this research is the Stochastic Resource Constrained Project Scheduling Problem (SRCPSP) with Stochastic Task Insertion (STI). The STI problem is a specific form of the SRCPSP, which may be considered a cross between two types of problems in the general form: the Stochastic Project Scheduling Problem and the Resource Constrained Project Scheduling Problem. The stochastic nature of this problem is in the occurrence/non-occurrence of tasks with deterministic duration. Researchers Selim (2002) and Grey (2007) laid the groundwork for the research on this problem. Selim (2002) developed a set of robustness metrics and used these to evaluate two initial baseline (predictive) scheduling techniques, optimistic (0% buffer) and pessimistic (100% buffer), where none or all of the stochastic tasks were scheduled, respectively. Grey (2007) expanded the research by developing a new partial buffering strategy for the initial baseline predictive schedule for this problem and found the partial buffering strategy to be superior to Selim's extreme buffering approach. The current research continues this work by focusing on resource aspects of the problem, new buffering approaches, and a new rescheduling method. If resource usage is important to project managers, then a set of metrics that describes changes to the resource flow would be important to measure between the initial baseline predictive schedule and the final as-run schedule. Two new sets of resource metrics were constructed regarding resource utilization and resource flow. Using these new metrics, as well as the Selim/Grey metrics, a new buffering approach was developed that used resource information to size the buffers. The resource-sized buffers did not show a significant improvement over Grey's 50% buffer used as a benchmark. The new resource metrics were used to validate that the 50% buffering strategy is superior to the 0% or 100% buffering by Selim. Recognizing that partial buffers appear to be the most promising initial baseline development approach for STI problems, and understanding that experienced project managers may be able to predict stochastic probabilities based on prior projects, the next phase of the research developed a new set of buffering strategies where buffers are inserted in proportion to the probability of occurrence. The results of this proportional buffering strategy were very positive, with the majority of the metrics (both robustness and resource), except for stability metrics, improved by using the proportional buffer. Finally, it was recognized that all research thus far for the SRCPSP with STI focused solely on the development of predictive schedules. Therefore, the final phase of this research developed a new reactive strategy that tested three different rescheduling points during schedule eventuation when a complete rescheduling of the latter portion of the schedule would occur. The results of this new reactive technique indicate that rescheduling improves the schedule performance in only a few metrics under very specific network characteristics (those networks with the least restrictive parameters). This research was conducted with extensive use of Base SAS v9.2 combined with SAS/OR procedures to solve project networks, solve resource flow problems, and implement reactive scheduling heuristics.
Additionally, Base SAS code was paired with Visual Basic for Applications in Excel 2003 to implement an automated Gantt chart generator that provided visual inspection for validation of the repair heuristics. The results of this research when combined with the results of Selim and Grey provide strong guidance for project managers regarding how to develop baseline predictive schedules and how to reschedule the project as stochastic tasks (e.g. unplanned work) do or do not occur. Specifically, the results and recommendations are provided in a summary tabular format that describes the recommended initial baseline development approach if a project manager has a good idea of the level and location of the stochasticity for the network, highlights two cases where rescheduling during schedule eventuation may be beneficial, and shows when buffering proportional to the probability of occurrence is recommended, or not recommended, or the cases where the evidence is inconclusive.
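As a rough illustration of the buffering strategies compared above, the Go snippet below contrasts the optimistic (0%), pessimistic (100%), and probability-proportional buffer sizes for a single stochastic task. The task fields and numbers are invented for illustration; the study itself evaluated these strategies over full project networks using the robustness and resource metrics described in the abstract.

```go
package main

import "fmt"

// StochasticTask is a hypothetical task that may or may not occur at
// schedule execution time; Duration is deterministic, Prob is the project
// manager's estimate of the probability of occurrence.
type StochasticTask struct {
	Name     string
	Duration float64 // e.g. in days
	Prob     float64 // probability of occurrence, 0..1
}

// bufferFor returns the time reserved in the baseline predictive schedule
// for one stochastic task under three of the strategies discussed above:
// the optimistic (0%) and pessimistic (100%) extremes, and a buffer sized
// in proportion to the probability of occurrence.
func bufferFor(t StochasticTask, strategy string) float64 {
	switch strategy {
	case "optimistic": // schedule none of the stochastic work
		return 0
	case "pessimistic": // schedule all of it
		return t.Duration
	default: // "proportional"
		return t.Prob * t.Duration
	}
}

func main() {
	t := StochasticTask{Name: "rework", Duration: 10, Prob: 0.3}
	for _, s := range []string{"optimistic", "pessimistic", "proportional"} {
		fmt.Printf("%-12s buffer: %.1f days\n", s, bufferFor(t, s))
	}
}
```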
49

Self-Esteem Buffers the Effect of Physical Symptoms on Negative Affect Less in Older Adults.

Chui, Helena, Diehl, M. January 2014 (has links)
n/a
50

Hedonic Valuation of Forested Riparian Buffers Along Rivers in Northwestern North Carolina

Vannoy, Mallory Drew 24 May 2021 (has links)
This revealed preference study estimates the implicit value associated with owning a home along a river and with tree coverage of riparian areas along rivers. The setting of this study is Ashe and Watauga Counties in Northwestern North Carolina and the two rivers that flow through those counties: the New River and the Watauga River. House sales form the basis of the hedonic models used to value these environmental characteristics. Homes that border a river sell for at least $28,000 more than otherwise similar homes that do not border a river. Riparian area tree coverage positively impacts river-bordering house prices, but only to a certain point. The results of this study are important for environmental organizations in this region working to safeguard the New and Watauga Rivers through riparian buffer installation and protection. / Master of Science / This study describes homeowner values of owning a home near a river, along with values associated with tree coverage of riparian areas along rivers. The setting of this study is Ashe and Watauga Counties in Northwestern North Carolina and the two rivers that flow through those counties: the New River and the Watauga River. Using home sales data, models estimate the value of these two environmental characteristics of home properties. This research found that homes bordering a river sell for at least $28,000 more than otherwise similar homes that do not border a river. Having any amount of tree coverage up to 90% in a riparian area increases home sale prices; therefore, homeowners positively value tree coverage in riparian areas up to a point. Tree coverage in riparian areas is beneficial for the protection of rivers and river-dependent wildlife. The results of this study are important for environmental organizations in this region working to safeguard the New and Watauga Rivers through riparian buffer installation and protection.
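For readers unfamiliar with hedonic models, a typical specification consistent with the findings above (a river-frontage premium and a tree-coverage effect that is positive only up to a point) might take the following form; this is an illustrative sketch, not the exact model estimated in the thesis.

```latex
P_i = \beta_0 + \beta_1\,\mathrm{RiverFront}_i + \beta_2\,\mathrm{TreeCover}_i + \beta_3\,\mathrm{TreeCover}_i^2 + \gamma^{\prime} X_i + \varepsilon_i
```

Here P_i is the sale price of home i, RiverFront is an indicator for bordering a river, TreeCover is the share of the riparian area under trees, and X_i collects structural and locational controls; a positive beta_2 together with a negative beta_3 reproduces the "positive, but only to a certain point" pattern reported above, with beta_1 corresponding to the river-frontage premium.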
