301

Digital instrumentation for the time integral-squared of a voltage and its error characteristics

Majithia, Jayantilal 05 1900 (has links)
A 16-level instrument is described in which the input voltage is sampled and quantised to yield direct decimal readouts of (1/T)∫v²dt and T. This is an improved version of an instrument previously constructed; the upper frequency limit has been extended from 5 Hz to about 2 kHz. The readout of the instrument can be in any code, the decimal code being the one implemented in the instrument described. The original error analysis has been extended: an extensive theoretical analysis of the overall error characteristics was carried out, and the results were confirmed experimentally. The instrument is capable of measuring the mean square value of a periodic waveform to within 2%. Normally distributed noise with a standard deviation between 1 V and 3 V can be measured with similar accuracy. The accuracy and the upper frequency limit are determined by the 'aperture time' of the sampling process. The errors arising in the sampling instrument depend on the number of levels used. / Thesis / Master of Engineering (MEngr)
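The measurement the abstract describes — sampling, quantising, and averaging v² over an interval — can be sketched in a few lines. This is an illustrative model only; the quantiser range and level count below are assumptions for demonstration, not the instrument's actual design values.

```python
import math

def mean_square(samples, quant_levels=16, v_max=1.0):
    """Estimate (1/T)*integral of v^2 dt from uniformly spaced samples,
    after quantising each sample to one of `quant_levels` levels
    spanning [-v_max, v_max], loosely mirroring a 16-level sampling
    instrument."""
    step = 2.0 * v_max / (quant_levels - 1)
    quantised = [round(v / step) * step for v in samples]
    return sum(v * v for v in quantised) / len(quantised)

# A 1 V amplitude sine sampled densely has mean square ~0.5 V^2.
n = 1000
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
ms = mean_square(sine)
```

With 16 levels spanning the full signal swing, the quantisation error contributes only a small bias to the mean-square estimate, consistent with the abstract's observation that error depends on the number of levels used.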
302

Practical Feedback and Instrumentation Enhancements for Performant Security Testing of Closed-source Executables

Nagy, Stefan 25 May 2022 (has links)
The Department of Homeland Security reports that over 90% of cyberattacks stem from security vulnerabilities in software, costing the U.S. $109 billion in damages in 2016 alone according to The White House. As NIST estimates that today's software contains 25 bugs for every 1,000 lines of code, the prompt discovery of security flaws is now vital to mitigating the next major cyberattack. Over the last decade, the software industry has overwhelmingly turned to a lightweight defect discovery approach known as fuzzing: automated testing that uncovers program bugs through repeated injection of randomly-mutated test cases. Academic and industry efforts have long exploited the semantic richness of open-source software to enhance fuzzing with fast and fine-grained code coverage feedback, as well as fuzzing-enhancing code transformations facilitated through lightweight compiler-based instrumentation. However, the world's increasing reliance on closed-source software (i.e., commercial, proprietary, and legacy software) demands analogous advances in automated security vetting beyond open-source contexts. Unfortunately, the semantic gaps between source code and opaque binary code leave fuzzing nowhere near as effective on closed-source targets. The difficulty of balancing coverage feedback speed and precision in binary executables leaves fuzzers frequently bottlenecked and orders of magnitude slower at uncovering security vulnerabilities in closed-source software. Moreover, the challenges of analyzing and modifying binary executables at scale leave closed-source software fuzzing unable to fully leverage the sophisticated enhancements that have long accelerated open-source software vulnerability discovery. As the U.S. Cybersecurity and Infrastructure Security Agency reports that closed-source software makes up over 80% of the top routinely exploited software today, combating the ever-growing threat of cyberattacks demands new practical, precise, and performant fuzzing techniques unrestricted by the availability of source code. This thesis answers the following research questions toward enabling fast, effective fuzzing of closed-source software: 1. Can common-case fuzzing insights be exploited to achieve low-overhead, fine-grained code coverage feedback irrespective of access to source code? 2. What properties of binary instrumentation are needed to extend performant fuzzing-enhancing program transformation to closed-source software fuzzing? In answering these questions, this thesis produces the following key innovations: A. The first code coverage techniques to enable fuzzing speed and code coverage greater than those of source-level fuzzing for closed-source software targets (chapter 3). B. The first instrumentation platform to extend both compiler-quality code transformation and compiler-level speed to closed-source fuzzing contexts (chapter 4). / Doctor of Philosophy / The Department of Homeland Security reports that over 90% of cyberattacks stem from security vulnerabilities in software, costing the U.S. $109 billion in damages in 2016 alone according to The White House. As NIST estimates that today's software contains 25 bugs for every 1,000 lines of code, the prompt discovery of security flaws is now vital to mitigating the next major cyberattack. Over the last decade, the software industry has overwhelmingly turned to lightweight defect discovery through automated testing, uncovering program bugs through the repeated injection of randomly-mutated test cases.
Academic and industry efforts have long exploited the semantic richness of open-source software (i.e., software whose full internals are publicly available, interpretable, and changeable) to enhance testing with fast and fine-grained exploration feedback, as well as testing-enhancing program transformations applied during the process by which program executables are generated. However, the world's increasing reliance on closed-source software (i.e., software whose internals are opaque to anyone but its original developer) like commercial, proprietary, and legacy programs demands analogous advances in automated security vetting beyond open-source contexts. Unfortunately, the challenges of understanding programs without their full source information leave testing nowhere near as effective on closed-source programs. The difficulty of balancing exploration feedback speed and precision in program executables leaves testing frequently bottlenecked and orders of magnitude slower at uncovering security vulnerabilities in closed-source software. Moreover, the challenges of analyzing and modifying program executables at scale leave closed-source software testing unable to fully leverage the sophisticated enhancements that have long accelerated open-source software vulnerability discovery. As the U.S. Cybersecurity and Infrastructure Security Agency reports that closed-source software makes up over 80% of the top routinely exploited software today, combating the ever-growing threat of cyberattacks demands new practical, precise, and performant software testing techniques unrestricted by the availability of programs' source code. This thesis answers the following research questions toward enabling fast, effective fuzzing of closed-source software: 1. Can common-case testing insights be exploited to achieve low-overhead, fine-grained exploration feedback irrespective of access to programs' source code? 2. What properties of program modification techniques are needed to extend performant testing-enhancing program transformations to closed-source programs? In answering these questions, this thesis produces the following key innovations: A. The first techniques enabling testing of closed-source programs with speed and exploration surpassing that of open-source programs (chapter 3). B. The first platform to extend high-speed program transformations from open-source programs to closed-source ones (chapter 4).
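The coverage-guided fuzzing loop this thesis builds on can be sketched minimally: keep a corpus, mutate inputs, and retain any mutant that reaches new coverage. The `target` program and its coverage labels below are hypothetical stand-ins for illustration, not artifacts from the thesis.

```python
import random

def coverage_guided_fuzz(program, seed, rounds=200, rng=None):
    """Minimal coverage-guided fuzzing loop: keep a corpus, mutate a
    random byte of a random corpus entry, and retain any mutant that
    reaches new coverage. `program` must return (coverage_set, crashed).
    Purely a conceptual sketch of the feedback loop."""
    rng = rng or random.Random(0)
    cov, _ = program(seed)
    corpus, seen, crashes = [seed], set(cov), []
    for _ in range(rounds):
        parent = rng.choice(corpus)
        child = bytearray(parent)
        child[rng.randrange(len(child))] = rng.randrange(256)
        child = bytes(child)
        cov, crashed = program(child)
        if crashed:
            crashes.append(child)
        if cov - seen:                # new coverage: keep this input
            seen |= cov
            corpus.append(child)
    return corpus, crashes

# Hypothetical target whose second "branch" needs a high first byte.
def target(data):
    cov = {"entry"}
    if data and data[0] >= 128:
        cov.add("high-byte")
    return cov, False

corpus, crashes = coverage_guided_fuzz(target, b"\x00\x00")
```

The thesis's central problem is precisely the `program(...)` call here: on open-source targets, compilers insert the coverage bookkeeping cheaply at build time, while on closed-source binaries that feedback must be recovered from the executable itself.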
303

Time Dependent Uptake of Volatile Organic Compounds on Silica and the Observation and Quantification of Ambient Sesquiterpenes in Virginia

Frazier, Graham Owen 15 May 2023 (has links)
Correlation of adsorption behavior for a gas-surface system to the molecular structure of the gas-phase molecule is necessary to better constrain atmospheric modeling. Despite the variety of compounds emitted into the atmosphere, the role of molecular structure on the uptake probability, γ, for a compound adsorbing onto a surface is not well understood. A custom-designed flow manifold coupled to a mass spectrometer provides the means to analyze changes in γ for gases at environmentally relevant low-ppb concentrations. Studies have focused on the adsorption of two classes of compounds, singly substituted benzene derivatives and several terpenoid compounds, on a silica surface composed of hydroxyl groups and bridging oxygen species. Results for benzene derivatives show an enhancement of the initial γ value of all functionalized compounds relative to benzene, with this initial value influenced by both the dipole and volatility of a compound. The time-dependent behavior of γ has been shown to depend on the interactions occurring between the adsorbate and surface, with species capable of hydrogen-bond interactions exhibiting a greater number of adsorbed species at equilibrium. The initial γ value for terpenoid species was observed to increase from isoprene to sesquiterpenes, with several monoterpene isomers exhibiting distinct adsorption behavior based on structural differences. The sesquiterpene examined, β-caryophyllene, exhibited continuous uptake onto the silica surface, and such uptake behavior could contribute in part to the low concentrations of sesquiterpenes observed in the environment. Encapsulation of the hydroxyl groups resulted in decreased uptake of several aromatic and terpenoid species examined and indicates that hydroxyl groups represent the majority of adsorption sites for the systems examined.
Comparison of observed uptake behavior to adsorption models showed the best agreement with a model depicting monolayer formation with cooperative adsorption due to interactions between adsorbates. Formulations for predicting initial γ values based on the volatility and dipole of the compound are provided. Overall, these results represent a first step towards a better understanding of gas-surface interactions that occur in the environment. Sesquiterpenes represent one class of biogenic emissions not well constrained with regard to SOA influence due to their low volatility and low concentrations relative to more abundant terpenoid species. Ambient measurements of sesquiterpenes with the Semi-Volatile Thermal desorption Aerosol Gas chromatography (SV-TAG) instrument yielded total sesquiterpene concentrations ranging from 0.8 to 2 ppt, with no single isomer dominating. The sesquiterpene contribution to hydroxyl reactivity is negligible in comparison to more abundant terpenoid species, while ozone reactivity was dominated by two isomers and could contribute to atmospheric reactivity during periods of high emissions. These measurements represent a first step in better constraining the contribution of sesquiterpenes towards secondary organic aerosol formation. / Doctor of Philosophy / A large variety of compounds are emitted into the atmosphere from natural and human sources. The molecular structures of these compounds are complex, and differences in structure can alter the environmental fate of compounds, which can directly affect human health. While atmospheric modeling provides insight into the fate of compounds, it relies on experimentally determined values for accuracy. The probability that a gas-phase molecule "sticks" to a surface upon collision, defined as the uptake probability, represents one value that is not well understood.
My research has primarily focused on the development and testing of an instrument capable of observing gas uptake onto a surface at environmentally relevant gas concentrations. This has included understanding how differences in the structure of a molecule affect adsorption on a single surface. For example, α-pinene and β-pinene (both compounds emitted by plants and found in air fresheners!) exhibit distinct adsorption behavior that arises solely from the location of a single carbon-carbon double bond within otherwise identical structures. Similarly, making a "water-loving" surface into a "water-repelling" surface results in less α-pinene adsorbed, despite α-pinene being considered insoluble in water. The developed instrument represents a vital tool for improving the accuracy of atmospheric modeling. In addition to understanding gas adsorption onto a surface, my research has focused on the detection and quantification of one class of naturally emitted compounds in Virginia: sesquiterpenes. These compounds are emitted by plants for a variety of reasons, and their influence on particle formation is not well understood. Measurement of sesquiterpenes at two sites in Virginia provides a better understanding of the abundance of sesquiterpenes in geographic regions not previously analyzed. These measurements contribute to an increased understanding of the role of different compound classes on particle formation in the atmosphere.
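The time-dependent uptake picture described above — an initially high uptake coefficient that decays as surface sites fill — can be sketched with a simple first-order model. The single-exponential form and parameter values below are illustrative assumptions, not the dissertation's fitted cooperative-adsorption model.

```python
import math

def uptake_coefficient(gamma0, k_fill, times):
    """First-order sketch of time-dependent uptake onto a surface
    with a finite number of sites: as coverage grows, the observed
    uptake coefficient decays as gamma(t) = gamma0 * exp(-k_fill * t).
    gamma0 is the initial uptake probability; k_fill (1/s) is an
    assumed site-filling rate."""
    return [gamma0 * math.exp(-k_fill * t) for t in times]

gammas = uptake_coefficient(gamma0=1e-5, k_fill=0.01,
                            times=[0.0, 100.0, 300.0])
```

A species whose γ never decays to zero (continuous uptake, as observed here for β-caryophyllene) would not fit this saturating form, which is one way distinct adsorption behaviors show up in the time series.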
304

Design of a Micro Wireless Instrumented Payload for Unmanned Vehicle Testing

Hastings, Benjamin E. 06 October 2006 (has links)
The testing of unmanned vehicles presents a need for an independent device capable of accurately collecting position and orientation data. While commercial-off-the-shelf components could be pieced together to sense and record this information, the result would be an expensive, large, and heavy solution, not suitable for small or aerial vehicles. The micro wireless instrumented payload, or μWIP, was designed precisely for this purpose. The μWIP includes a GPS receiver, 3-axis accelerometer, 3-axis gyroscope, and 3-axis magnetometer, which are used to measure an unmanned vehicle's position and orientation. The device also uses a secure digital card for data storage and an 802.11b module to provide wireless connectivity. Additionally, the μWIP contains an on-board battery and the circuitry required to charge it. Firmware for the ARM7 processor was written to allow sensor calibration and data transmission, and a user interface was designed to run on a personal computer. The finished design is a tiny 3''x5''x1'' and weighs a mere 0.8 pounds including battery and antennas. It is capable of continuously streaming accurate GPS and inertial data over an 802.11b wireless network for over 5 hours. With a bill-of-materials cost of just over $600, the μWIP is also more cost effective than any alternative solution. This thesis details the hardware and software design of the μWIP, as well as the initial testing, calibration, and evaluation of the device. / Master of Science
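A device streaming GPS plus 9-axis inertial data needs a compact wire format. The layout below is a hypothetical sketch of such a telemetry packet, not the μWIP's actual protocol: all field names, widths, and ordering are assumptions.

```python
import struct

# Hypothetical wire format: little-endian, timestamp (uint32, ms),
# lat/lon (2 x float64, degrees), then 3-axis accel, gyro, and
# magnetometer readings (9 x float32).
PACKET_FMT = "<I2d9f"

def pack_sample(t_ms, lat, lon, accel, gyro, mag):
    """Serialize one GPS + inertial sample into a fixed-size packet."""
    return struct.pack(PACKET_FMT, t_ms, lat, lon, *accel, *gyro, *mag)

def unpack_sample(buf):
    """Inverse of pack_sample: recover the fields from a packet."""
    v = struct.unpack(PACKET_FMT, buf)
    return v[0], v[1], v[2], v[3:6], v[6:9], v[9:12]
```

At 56 bytes per sample, even a 100 Hz stream stays well under the throughput of an 802.11b link, which is consistent with the device streaming continuously for hours.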
305

A Prototype Device for Isolating and Wirelessly Transmitting Neural Action Potentials

Slominski, Eric Christopher 31 December 2003 (has links)
An electrophysiology research laboratory at the Wake Forest University School of Medicine in the Physiology/Pharmacology Department currently carries out memory research by recording neural signals from laboratory animals with a wire tethering the animal to nearby signal conditioning and recording equipment. A wireless neural signal recording system is desirable because it removes the cumbersome wires from the animal, allowing it to roam more freely. The result is an animal that is more able to behave as it would in its natural habitat, thus opening the possibility of testing procedures that are not possible with wired recording systems. While there are wireless neural recording systems in existence, this thesis presents a new approach to recording neural signals wirelessly. The firings of neurons in the hippocampus are manifested as action potentials or voltage "spikes" on the order of 100 to 400 µV in magnitude. Though the information content of the neural signal is carried by these action potentials, the spikes comprise a small fraction of the complete neural signal. A unique feature of the neural signal transceiver presented in this thesis is its ability to digitally isolate and transmit the action potentials, leaving out the remaining, unimportant part of the neural signal. This approach to recording neural signals makes efficient use of the limited bandwidth available with portable short-range wireless devices. This thesis will present the spike isolating neural transmitter, which was built using commercially available electronic components. Then, the proper function of assembly language code written for a PIC18F458 microcontroller will be demonstrated. Finally, a discussion of the performance of the neural signal transmitter will be provided. / Master of Science
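The general idea behind transmitting only the action potentials — isolate threshold-crossing excursions and discard the rest of the trace — can be sketched as follows. The threshold and window values are illustrative, not the PIC18F458 firmware's actual settings.

```python
def isolate_spikes(signal, threshold, window=4):
    """Threshold-based spike isolation: emit (index, snippet) for each
    excursion whose magnitude reaches `threshold`, then skip ahead by
    `window` samples so one spike yields one snippet. Only the
    snippets would be transmitted, saving wireless bandwidth."""
    spikes, i = [], 0
    while i < len(signal):
        if abs(signal[i]) >= threshold:
            spikes.append((i, signal[i:i + window]))
            i += window          # blank out the rest of this spike
        else:
            i += 1
    return spikes

# Flat baseline (in microvolts) with one ~300 uV spike at sample 6.
sig = [0, 5, -3, 2, 0, 4, 300, 250, 90, 10, 0, 2]
found = isolate_spikes(sig, threshold=100, window=4)
```

Here only 4 of 12 samples (plus an index) would go over the air, which is the bandwidth saving the thesis exploits at scale.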
306

An Experimental Study of Bifurcated (Weekend and Weekday) and Unitary (Past Week) Retrospective Assessments of Sleep

Sethi, Kevin J. 08 1900 (has links)
Discordance between weekday and weekend sleep schedules is common (Bonnet & Arand, 1995; Breslau, Roth, Rosenthal, & Andreski, 1997; Machado, Varella, & Andrade, 1998; Strauch & Meier, 1988; Tsai & Li, 2004). Brief retrospective self-report measures are essential for epidemiological research studies (Moul, Hall, Pilkonis, & Buysse, 2004), but self-reports are prone to error in recall, and the greater the variability in nightly sleep, the less reliable are retrospective reports (Babkoff, Weller, & Lavidor, 1996). More accurate self-report responses may be possible if measures prompt participants to consider variations in sleep schedules that are consistent (i.e., weekday and weekend sleep schedules). The current study experimentally examined whether Bifurcated (Weekday and Weekend) retrospective assessments of sleep are more accurate than Unitary (Past Week) assessments. Participants were randomly assigned to complete one of the two versions (Bifurcated vs. Unitary) of the Sleep Questionnaire. One hundred and thirty-one participants were included in the analyses. Results of a series of analyses demonstrated that the Bifurcated version of the Sleep Questionnaire provided more accurate and less variable estimates of total sleep time than the Unitary version of the Sleep Questionnaire. Differences between the versions of the Sleep Questionnaire for other sleep variables were less consistent, and the increased length of the Bifurcated version of the Sleep Questionnaire may have contributed to increased missing and unusable data in this group. Overall, the findings suggest that in both research and clinical work, retrospective measures that examine weekday and weekend sleep separately may offer advantages over retrospective measures that do not differentiate between weekday and weekend sleep.
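One concrete advantage of bifurcated reporting is that the two schedule-specific answers can be recombined into a weekly average with the correct 5:2 night weighting. The sketch below illustrates that arithmetic; the example durations are hypothetical, not study data.

```python
def weekly_tst(weekday_tst_min, weekend_tst_min):
    """Combine bifurcated (weekday/weekend) retrospective reports of
    total sleep time into one weekly average, weighting by the 5
    weekday and 2 weekend nights. A unitary 'past week' item forces
    the respondent to do this averaging mentally."""
    return (5 * weekday_tst_min + 2 * weekend_tst_min) / 7

# 6.5 h on weekdays, 8.5 h on weekends -> ~7.1 h weekly average.
avg = weekly_tst(390, 510)
```

When weekday and weekend schedules differ by two hours, as here, an unweighted mental average of the two would misstate the weekly figure — one plausible mechanism for the accuracy gains reported above.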
307

An Electrometer Design and Characterization for a CubeSat Neutral Pressure Instrument

Rohrer, Todd Edward Bloomquist 02 February 2017 (has links)
Neutral gas pressure measurements in low Earth orbit (LEO) can facilitate the monitoring of atmospheric gravity waves, which can trigger instabilities that severely disrupt radio frequency communication signals. The Space Neutral Pressure Instrument (SNeuPI) is a low-power instrument detecting neutral gas density in order to determine neutral gas pressure. SNeuPI consists of an ionization chamber and a logarithmic electrometer circuit. The Rev. 1 SNeuPI electrometer prototype does not function as designed. A Rev. 2 electrometer circuit must be designed and its performance characterized across specified operating temperature and input current ranges. This document presents a design topology for the Rev. 2 electrometer and a derivation of the theoretical circuit transfer function. Component selection and layout are discussed. A range of predicted operating input currents is calculated using modeled neutral density data for a range of local times, altitudes, and latitudes corresponding to the conditions expected for the Lower Atmosphere/Ionosphere Coupling Experiment (LAICE) CubeSat mission. Laboratory test setups for measurements performed both under vacuum and at atmospheric pressure are documented in detail. Test procedures are presented to characterize the performance of the Rev. 2 electrometer at a range of controlled operating temperatures. The results of these tests are then extrapolated in order to predict the operation of the circuit at specified temperatures outside of the range controllable under laboratory test conditions. The logarithmic conformance, accuracy, sensitivity, power consumption, and deviations from expected response of the circuit are characterized. The results validate the electrometer for use under its expected flight conditions. 
/ Master of Science / Neutral gas pressure measurements in low Earth orbit (LEO) can facilitate the monitoring of atmospheric gravity waves, oscillations that transfer energy from weather events or other disturbances through the atmosphere and can severely disrupt radio frequency communication signals. The Space Neutral Pressure Instrument (SNeuPI) is a low-power instrument detecting neutral gas density in order to determine neutral gas pressure. SNeuPI is part of the instrument payload for the Lower Atmosphere/Ionosphere Coupling Experiment (LAICE). LAICE is a CubeSat, a small satellite format utilizing commercial off-the-shelf (COTS) parts to minimize development cycle time and cost, developed with the goal of observing atmospheric gravity waves. SNeuPI utilizes an ionization chamber, which ionizes neutral gas molecules; an ion detector that outputs an electron current proportional to ion density; and an electrometer circuit, which outputs a voltage logarithmically related to the magnitude of the detector electron current. The Rev. 1 SNeuPI electrometer prototype does not function as designed. A Rev. 2 electrometer circuit must be designed and its performance characterized across specified operating temperature and input current ranges. This document presents a circuit design for the Rev. 2 electrometer and a mathematical derivation of the relationship between its input current and output voltage. Component selection and layout are discussed. A range of predicted operating input currents is calculated using modeled neutral density data for a range of local times, altitudes, and latitudes corresponding to the conditions expected for the LAICE mission. Laboratory test setups for measurements performed both under vacuum and at atmospheric pressure are documented in detail. Test procedures are presented to characterize the performance of the Rev. 2 electrometer at a range of controlled operating temperatures.
The results of these tests are then extrapolated in order to predict the operation of the circuit at specified temperatures outside of the range controllable under laboratory test conditions. The logarithmic conformance, accuracy, sensitivity, power consumption, and deviations from expected response of the circuit are characterized. The results validate the electrometer for use under its expected flight conditions.
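The logarithmic current-to-voltage relationship at the heart of such an electrometer can be illustrated with the ideal transfer function of a diode-feedback logarithmic amplifier, a common log-electrometer topology. The saturation current and thermal voltage below are generic illustrative values, not the Rev. 2 design's actual parameters.

```python
import math

def log_electrometer_vout(i_in, i_s=1e-14, v_t=0.02585):
    """Ideal diode-feedback logarithmic amplifier transfer function:
    V_out = V_T * ln(I_in / I_s), where I_s is the diode saturation
    current and V_T is the thermal voltage (~25.85 mV near 300 K).
    Both defaults are illustrative assumptions."""
    return v_t * math.log(i_in / i_s)

# Each decade of input current shifts the output by V_T*ln(10) ~ 59.5 mV.
dv = log_electrometer_vout(1e-9) - log_electrometer_vout(1e-10)
```

Because V_T scales linearly with absolute temperature, the decade-per-step output swing drifts with temperature — which is precisely why the document characterizes the circuit across a controlled temperature range and extrapolates beyond it.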
308

Integration and Evaluation of Unsteady Temperature Gages for Heat Flux Determination in High Speed Flows

Ruda, Mathew Louis 22 June 2022 (has links)
This study documents the integration and testing of a new variety of unsteady surface temperature gages designed to operate in high speed flow. Heat flux through the surface of the test article was determined from the unsteady temperature by applying a 3D reconstruction algorithm based on a Green's function approach. The surface temperature gages used in this work were 1.59 mm inserts designed to maximize material matching with the test article, in this case 316 stainless steel. A series of benchtop experiments were first performed to understand the individual properties of the gage and determine measurement uncertainty. Prior to testing, all temperature gages were calibrated using an environmental chamber. Gages were installed into slugs of several materials and subjected to a heated jet with a total temperature of 620 K to examine the effects of material mismatch. A shock tube with a notional operating Mach of 2.6 was used to determine the thermal response of the gages as a function of time. In both tests, reference Medtherm Schmidt-Boelter gages ensured consistent heat fluxes were applied across all runs. The time response of the entire electrical system was determined by subjecting the gage to a nanosecond-scale laser pulse. Two experimental campaigns were conducted in Virginia Tech's Hypersonic Wind Tunnel. First, gages were integrated into a flat plate test article and subjected to a notionally 2D Mach 3 flow. Tunnel total pressures and temperatures ranged from 793-876 kPa and 493-594 K, respectively. A reference 3.18 mm Medtherm Schmidt-Boelter gage was also installed for comparison. All temperature data were reconstructed using the algorithm to determine heat flux. The second test campaign utilized a flat-faced cylindrical test article in a notionally axisymmetric Mach 6 flow environment. Flow total pressures and temperatures ranged from 8375-8928 kPa and 485.5-622 K, respectively.
The Fay-Riddell analytical method was applied to the resulting temperature traces in order to infer the heat flux at the stagnation point for comparison with the reconstructed heat flux. This experiment was complemented with steady, 3D CFD in order to understand the temperature variation across the test article. Both campaigns demonstrate good agreement between the heat flux reconstructed from surface temperatures measured using the new gage, reference measurements, and simulations/analytical methods. The importance of material matching is highlighted throughout this study. The performance of this gage is shown to exceed the current state-of-the-art, opening the possibility for future analysis of phenomena present in high-speed flow. / Doctor of Philosophy / At very fast speeds, it is important to understand how the temperatures of surfaces change with time. Traditional devices which can measure surface temperatures have a number of weaknesses, and to address these a new type of surface temperature device has been designed. By using computational methods, one can determine how much energy is being transferred through the surface by measuring how the surface temperature changes over time. A series of laboratory experiments were conducted to understand how this new instrument compares to the current state-of-the-art. Two experimental campaigns were then conducted to test the temperature gages. The first experiment used a simple flat plate geometry in a flow 3 times the speed of sound to serve as a benchmark test case, as the flow over a flat plate is well understood. The second test utilized a flat-faced cylindrical test article in a flow 6 times the speed of sound. The results of this test were compared to exact solutions and flow simulations. The result of this study is a well-quantified tool to study how energy flows through a body subjected to very high speed flow, which will enable further study of the complicated thermal environments experienced at high speeds.
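The temperature-to-heat-flux step can be illustrated with the classic 1D Cook-Felderman reconstruction for a semi-infinite solid. Note this is only an illustrative stand-in: the dissertation itself applies a 3D Green's-function reconstruction, which this simple 1D scheme does not reproduce.

```python
import math

def cook_felderman(temps, dt, rho_c_k):
    """One-dimensional semi-infinite heat-flux reconstruction from a
    surface-temperature trace (the classic Cook-Felderman scheme):
    q_n = 2*sqrt(rho*c*k/pi) * sum_i dT_i / (sqrt(t_n - t_{i-1})
          + sqrt(t_n - t_i)).
    `rho_c_k` is the substrate property product rho*c*k."""
    coef = 2.0 * math.sqrt(rho_c_k / math.pi)
    q = [0.0]
    for n in range(1, len(temps)):
        t_n = n * dt
        s = 0.0
        for i in range(1, n + 1):
            s += (temps[i] - temps[i - 1]) / (
                math.sqrt(t_n - (i - 1) * dt) + math.sqrt(t_n - i * dt))
        q.append(coef * s)
    return q

# Sanity check: constant unit heat flux into a semi-infinite solid with
# rho*c*k = 1 gives the surface temperature T_s(t) = 2*sqrt(t/pi); the
# reconstruction should recover q ~ 1 away from the first few samples.
dt = 0.01
temps = [2.0 * math.sqrt(n * dt / math.pi) for n in range(101)]
q = cook_felderman(temps, dt, 1.0)
```

The explicit dependence on rho*c*k also makes plain why the gage inserts were designed to match the 316 stainless test article: a mismatched substrate property product biases every reconstructed flux value.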
309

Application of a Non-intrusive Optical Non-spherical Particle Sizing Sensor at Turboshaft Engine Inlet

Antous, Brittney Louise 20 April 2023 (has links)
Master of Science / Particulate ingestion has been an ongoing issue in the aviation industry as aircraft are required to operate in hostile environments. Ingesting particulates such as sand or dust can erode and damage engine components. This damage will affect the life cycle of parts and compromise the safety of the aircraft, making the issue both costly and dangerous. In order to combat these issues, a particle sensor with the ability to monitor in-stream particulate size, shape, and mass flow rate is necessary. Our team with the Advanced Propulsion and Power Laboratory developed a non-intrusive optical sensor that is able to characterize non-spherical particles. This sensor has been used in various applications through the years; most recently, the sensor was demonstrated at the Virginia Tech M250 engine inlet. This was the first time that the sensor was directly attached to an engine's inlet and subjected to engine conditions. For this validation, highly erosive, coarse quartz was used. Utilizing a laser and cameras, the sensor is able to deduce the particles' average shape and size distributions. From those measurements, the mass flow rate of the particles can be calculated. The work presented in this thesis shows that particle ingestion rates can be measured to an acceptably high accuracy, and that refinement of the processing techniques can provide spatially resolved measurements of particle characteristics as well.
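The size-distribution-to-mass-flow step can be sketched simply: sum the per-particle masses seen in a sampling interval and divide by the interval. The spherical-volume assumption and quartz density below are illustrative simplifications — the sensor described above explicitly accounts for non-spherical shape.

```python
import math

def particle_mass_flow(diameters_um, sample_time_s,
                       density_kg_m3=2650.0):
    """Convert per-particle diameter measurements (micrometres) from
    an optical sizing sensor into a mass flow rate (kg/s), assuming
    spherical volume equivalence. The default density is that of
    quartz, used here as an illustrative value."""
    total_mass = 0.0
    for d_um in diameters_um:
        d_m = d_um * 1e-6
        total_mass += density_kg_m3 * math.pi * d_m ** 3 / 6.0
    return total_mass / sample_time_s
```

Because mass scales with diameter cubed, a small error in the sized diameter triples (in relative terms) in the inferred mass flow — one reason accurate shape and size characterization matters for ingestion-rate measurements.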
310

A Novel Concussion Detection System Utilizing Pressure Transducers

Uysal, Troy O 01 August 2024 (has links) (PDF)
The goal of this study was to build and test a concussion detection device using fluid pressure. The study investigated the efficacy of a pressure-based device and attempted to compare the pressure metric to well-documented acceleration metrics. An impact delivery system, head model, and data recording system were developed to simulate concussive impacts and record pressure and acceleration data. All recorded pressures fell under the 15 psi operating range, corresponding to impacts significantly below the concussive acceleration thresholds established in the literature. Because of this, the study's findings neither prove nor disprove the effectiveness of a pressure-based model; further investigation is required to understand the relationship between pressure and impacts.
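The acceleration-metric comparison described above amounts to checking peak resultant acceleration against a literature threshold. The sketch below illustrates that check; the 80 g default is one commonly cited (and debated) literature value used purely as an illustrative assumption, and the sample values are hypothetical.

```python
def exceeds_threshold(ax, ay, az, threshold_g=80.0):
    """Return the peak resultant linear acceleration (in g) over a
    recording, and whether it meets an assumed concussive threshold.
    ax, ay, az are per-sample axis accelerations in g."""
    resultants = [(x * x + y * y + z * z) ** 0.5
                  for x, y, z in zip(ax, ay, az)]
    peak = max(resultants)
    return peak, peak >= threshold_g

# A sub-threshold impact, like those the study actually produced.
peak, hit = exceeds_threshold([3.0, 60.0], [4.0, 0.0], [0.0, 0.0])
```

A pressure-based detector would need impacts that cross this kind of acceleration threshold to calibrate against, which is why the study's sub-threshold impacts left the pressure-acceleration relationship unresolved.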
