431 |
A Low Voltage, Low Power 4th Order Continuous-time Butterworth Filter for Electroencephalography Signal Recognition / Mulyana, Ridwan S. 25 October 2010 (has links)
No description available.
|
432 |
Investigation of Optimum Operating Conditions for Recirculating Sand Filters / Weng, Yonghui 01 1900 (has links)
Recirculating Sand Filters (RSFs) provide a compact method of secondary treatment for septic systems and lagoons, are relatively easy to operate, and require little maintenance. Together, these characteristics render RSFs particularly appropriate for small communities and municipalities, as they offer a number of economic and operational advantages over conventional technologies. A preliminary study investigating RSF effluent quality, conducted jointly by McMaster University, the Great Lakes Sustainability Fund (GLSF) and the Ontario Ministry of the Environment (MOE) in 1999-2001, conducted pilot-scale experiments and demonstrated that municipal sewage can be successfully treated year-round by RSFs. The preliminary study recommended that further work be conducted to investigate the selection of media size, dosing frequency, recycle ratio, and hydraulic loading rate.
The primary objective of this study was to develop design and operating conditions under Ontario climatic conditions with respect to media size, dosing frequency, recycle ratio and hydraulic loading rate by conducting further pilot-scale studies. Three pilot-scale RSFs, operating in parallel, were loaded intermittently with septic tank effluent to evaluate the effects of the above-mentioned operating parameters on the removal of total suspended solids (TSS), 5-day carbonaceous biochemical oxygen demand (cBOD5), total ammonia-nitrogen (TAN) and total nitrogen (TN). Alum addition was also implemented to evaluate the removal of total phosphorus (TP). The effluent objectives for this study were based on the MOE general secondary treatment level requirements of monthly averages based on a minimum of four weekly samples. The four-phase experimental program began in April 2004 and ended in June 2005. Three media sizes were investigated, with d10 of 2.6, 5 and 7.7 mm. The applied hydraulic loading rates were 0.2 and 0.4 m/day. Dosing frequencies of 24 and 48 times/day were tested. Recycle ratios of 300% and 500% were also evaluated.
It was found that the RSF operating with 2.6 mm media, a 500% recycle ratio and a dosing frequency of 24 times/day under a hydraulic loading rate of 0.2 m/day produced the best quality effluent, and achieved the effluent objectives required by the MOE. These operating criteria, however, must still be investigated under cold weather conditions to ensure acceptable year-round performance in Ontario. With proper addition of alum, the TP effluent objective was achieved under the optimum operating conditions. / Thesis / Master of Applied Science (MASc)
|
433 |
Training of Neural Networks Using the Smooth Variable Structure Filter with Application to Fault Detection / Ahmed, Ryan 04 1900 (has links)
Artificial neural networks (ANNs) are an information processing paradigm inspired by the human brain. ANNs have been used in numerous applications to provide complex nonlinear input-output mappings, and they have the ability to adapt and learn from observed data.
The training of neural networks is an important area of research. Training techniques must provide high accuracy and fast convergence while avoiding premature convergence to local minima.
In this thesis, a novel training method is proposed. This method is based on the relatively new Smooth Variable Structure Filter (SVSF) and is formulated for feedforward multilayer perceptron training. The SVSF is a state and parameter estimation method that is based on the sliding mode concept and works in a predictor-corrector fashion. The SVSF applies a discontinuous corrective term to estimate states and parameters. Its advantages include guaranteed stability, robustness, and a fast rate of convergence.
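As an illustration of the predictor-corrector structure described above, the following is a minimal sketch of one SVSF correction step for the simplified case in which every state is directly measured (H = I); the gamma and psi values and the function name are illustrative assumptions, not the thesis's exact formulation for multilayer perceptron weights.

```python
import numpy as np

def svsf_step(x_pred, z, e_post_prev, gamma=0.1, psi=0.5):
    """One SVSF corrector step (simplified full-measurement case, H = I).

    x_pred      : a priori state/parameter estimate from the predictor
    z           : current measurement vector
    e_post_prev : a posteriori error from the previous step
    gamma       : convergence-rate (memory) coefficient, 0 < gamma < 1
    psi         : smoothing boundary-layer width replacing the pure sign function
    """
    e_prior = z - x_pred                               # a priori measurement error
    sat = np.clip(e_prior / psi, -1.0, 1.0)            # saturated (smoothed) switching term
    K = (np.abs(e_prior) + gamma * np.abs(e_post_prev)) * sat
    x_post = x_pred + K                                # discontinuous corrective action
    e_post = z - x_post                                # stored for the next iteration
    return x_post, e_post
```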
The proposed training technique is applied to three real-world benchmark problems and to a fault detection application in a Ford diesel engine.
The SVSF-based training technique shows excellent generalization capability and a fast speed of convergence. / Thesis / Master of Applied Science (MASc)
|
434 |
Prediction and Measurement of Thermal Exchanges within Pyranometers / Smith, Amie Michelle 10 November 1999 (has links)
The Eppley Precision Spectral Pyranometer (PSP) is a shortwave radiometer that is widely used in global networks to monitor solar irradiances at the earth's surface. Within the instrument, a blackened surface is in intimate thermal contact with the hot junction of a thermopile. The cold junction of the thermopile communicates thermally with the large thermal capacitance of the instrument body, which acts as a heat sink. Radiation arrives at the blackened surface through one or two hemispherical dome-shaped filters that limit the instrument response to the solar spectrum. The voltage developed by the thermopile is then interpreted in terms of the incident irradiance.
Measurements taken with the pyranometer are compared with results from theoretical models. Discrepancies between model results and measurements are used to isolate inaccuracies in the optical properties of the atmosphere used in the models. As the accuracy of the models increases, the reliability of the measurements must be examined to ensure that measurement accuracy keeps pace with model fidelity. The sources of error in the pyranometer are examined in order to determine the accuracy of the instrument.
Measurements obtained using the pyranometer are known to be influenced by environmental conditions such as ambient temperature, wind, and cloud cover [Bush et al., 1998]. It is surmised that at least some of the observed environmental variability in these data is due to parasitic thermal exchanges within the instrument [Haeffelin et al., 1999]. Thermal radiation absorbed and emitted by the filters, as well as that reflected and re-reflected among the internal surfaces, influences the net radiation at the detector surface and produces an offset from the signal that would result from the incident shortwave radiation alone. This work describes an ongoing effort to model these exchanges and to use experimental results to verify the model.
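A minimal sketch of the kind of offset correction this modelling effort is aimed at, assuming dome and case temperatures are available from auxiliary thermistors and that the coefficient k is instrument-specific and found by calibration; the variable names are illustrative, not the thesis's protocol.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def corrected_irradiance(v_thermopile, sensitivity, t_dome_K, t_case_K, k=1.0):
    """Shortwave irradiance with a first-order correction for the parasitic
    thermal (infrared) exchange between the domes and the detector."""
    e_raw = v_thermopile / sensitivity                    # W/m^2 from the calibration constant
    offset = k * SIGMA * (t_dome_K**4 - t_case_K**4)      # net thermal-exchange term
    return e_raw + offset
```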
The ultimate goal of the work described is to provide reliable protocols, based on an appropriate instrument model, for correcting measured shortwave irradiance for a variable thermal radiation environment. / Master of Science
|
435 |
Exploring Per-Input Filter Selection and Approximation Techniques for Deep Neural Networks / Gaur, Yamini 21 June 2019 (has links)
We propose a dynamic, input-dependent filter approximation and selection technique to improve the computational efficiency of Deep Neural Networks. The approximation techniques convert the 32-bit floating-point representations of filter weights in neural networks into lower-precision values by reducing the number of bits used to represent the weights. In order to calculate the per-input error between the trained full-precision filter weights and the approximated weights, a metric called Multiplication Error (ME) has been chosen. For convolutional layers, ME is calculated by subtracting the approximated filter weights from the original filter weights, convolving the difference with the input and calculating the grand sum of the resulting matrix. For fully connected layers, ME is calculated by subtracting the approximated filter weights from the original filter weights, performing matrix multiplication between the difference and the input and calculating the grand sum of the resulting matrix. ME is computed to identify approximated filters in a layer that result in low inference accuracy. In order to maintain the accuracy of the network, these filter weights are replaced with the original full-precision weights.
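A minimal NumPy sketch of the ME metric as described above, for a single 2-D filter; the truncation scheme, threshold, and function names are illustrative assumptions rather than the thesis's exact implementation.

```python
import numpy as np
from scipy.signal import correlate2d

def truncate_weights(w, n_bits):
    """One possible fixed-point truncation to n_bits of fractional precision."""
    scale = 2.0 ** (n_bits - 1)
    return np.floor(w * scale) / scale

def multiplication_error_conv(w_full, w_approx, x):
    """Conv-layer ME: slide the weight difference over the input (cross-correlation,
    as in CNN 'convolution') and take the grand sum of the resulting feature map."""
    diff = w_full - w_approx
    return correlate2d(x, diff, mode='valid').sum()

def multiplication_error_fc(w_full, w_approx, x):
    """FC-layer ME: multiply the weight difference by the input and take the grand sum."""
    return ((w_full - w_approx) @ x).sum()

def select_filters(filters, x, n_bits, threshold):
    """Per-input selection: keep full precision for filters whose ME is large."""
    out = []
    for w in filters:
        w_q = truncate_weights(w, n_bits)
        me = multiplication_error_conv(w, w_q, x)
        out.append(w if abs(me) > threshold else w_q)
    return out
```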
Prior work has primarily focused on input-independent (static) replacement of filters with low-precision weights, in which all the filter weights in the network are replaced by approximated filter weights. This results in a decrease in inference accuracy, and the decrease is larger for more aggressive approximation techniques. Our proposed technique aims to achieve higher inference accuracy by not approximating filters that generate high ME. Using the proposed per-input filter selection technique, LeNet achieves an accuracy of 95.6% on the MNIST dataset when truncating to 3 bits, a 3.34% drop from the original accuracy of 98.9%. With static filter approximation, by contrast, LeNet achieves an accuracy of 90.5%, an 8.5% drop from the original accuracy.
The aim of our research is to potentially use low-precision weights in deep learning algorithms to achieve high classification accuracy with less computational overhead. We explore various filter approximation techniques and implement a per-input filter selection and approximation technique that selects the filters to approximate during run-time. / Master of Science / Deep neural networks, much like the human brain, can learn important information about the data provided to them and can classify a new input based on the labels corresponding to the provided dataset. Deep learning technology is heavily employed in devices using computer vision, image and video processing, and voice detection. The computational overhead incurred in the classification process of DNNs prohibits their use in smaller devices. This research aims to improve network efficiency in deep learning by replacing 32-bit weights in neural networks with lower-precision weights in an input-dependent manner. Trained neural networks are numerically robust: different layers develop tolerance to minor variations in network parameters, so differences induced by low-precision calculations fall well within the tolerance limit of the network. However, for aggressive approximation techniques like truncating to 3 and 2 bits, inference accuracy drops severely. We propose a dynamic technique that, during run-time, identifies the approximated filters resulting in low inference accuracy for a given input and replaces those filters with the original filters to achieve high inference accuracy. The proposed technique has been tested for image classification on Convolutional Neural Networks using the MNIST and CIFAR-10 datasets and three networks: a 4-layer CNN, LeNet-5, and AlexNet.
|
436 |
Development of a Virtual Acoustic Showroom for Simulating Listening Environments and Audio Speakers / Collins, Christopher Michael 17 June 2004 (has links)
Virtual acoustic techniques can be used to create virtual listening environments for multiple purposes. Using multi-speaker reproduction, a physical environment can take on the acoustical appearance of another environment. Implementation of this environment auralization could change the way customers evaluate speakers in a retail store.
The objective of this research is to develop a virtual acoustic showroom using a multi-speaker system. The two main components of the virtual acoustic showroom are simulating living environments using the image source method, and simulating speaker responses using inverse filtering. The image source method is used to simulate realistic living environments by filtering the environment impulse response with frequency-dependent absorption coefficients of typical building materials. Psychoacoustic tests show that listeners can match virtual acoustic cues with appropriate virtual visual cues. Inverse filtering is used to "replace" the frequency response function of one speaker with another, allowing a single set of speakers to represent any number of other speakers. Psychoacoustic tests show that listeners could not distinguish the difference between the original speaker and the reference speaker that was mimicking the original. The two components of this system are shown to be accurate both empirically and perceptually. / Master of Science
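A minimal sketch of the inverse-filtering idea just described, assuming measured impulse responses for the reference and target speakers are available; the regularization term, FFT length, and function names are illustrative assumptions.

```python
import numpy as np

def replacement_filter(h_reference, h_target, n_fft=8192, eps=1e-6):
    """Filter that makes the reference speaker mimic the target speaker:
    G(f) = H_target(f) / H_reference(f), computed with regularized inversion."""
    H_ref = np.fft.rfft(h_reference, n_fft)
    H_tgt = np.fft.rfft(h_target, n_fft)
    G = H_tgt * np.conj(H_ref) / (np.abs(H_ref) ** 2 + eps)
    return np.fft.irfft(G, n_fft)

def render_virtual_speaker(signal, h_reference, h_target):
    """Pre-filter the program material so the reference speaker 'sounds like' the target."""
    g = replacement_filter(h_reference, h_target)
    return np.convolve(signal, g)[: len(signal)]
```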
|
437 |
Design and Analysis of a Grid Connected Photovoltaic Generation System with Active Filtering Function / Leslie, Leonard Gene Jr. 31 March 2003 (has links)
In recent years there has been a growing interest in moving away from large centralized power generation toward distributed energy resources. Solar energy generation presents several benefits for use as a distributed energy resource, especially as a peaking power source. One drawback of solar energy sources is the need for energy storage if the system is to be utilized for a significant percentage of the day. One way of avoiding adding energy storage to a solar generation system while still maintaining high system utilization is to design the power conversion subsystem to also provide harmonic and reactive compensation. When the sun is unavailable for generation, the system hardware can still be utilized to correct for harmonic and reactive currents on the distribution system. This dual-purpose operation both meets the power generation need and helps to mitigate the growing problem of harmonic and reactive pollution on the distribution system.
A control method is proposed for a system that provides approximately 1 kW of solar generation as well as up to 10 kVA of harmonic and reactive compensation simultaneously. The current control for the active filter was implemented with the synchronous reference frame method. The system and controller were designed and simulated, and the harmonic and reactive compensation part of the system was built and tested experimentally. Due to the delay inherent in the control system from the sensors, calculation time, and power stage dynamics, the system was unable to correct for higher-order harmonics. To allow the system to correct for all of the harmonics of concern, a hybrid passive-active approach was investigated by placing a set of inductors in series with the AC side of the load. A procedure was developed for properly sizing the inductors based on the harmonic residuals with the compensator in operation. / Master of Science
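A minimal sketch of the synchronous reference frame idea mentioned above: transform the measured load currents to the dq frame at the grid angle, low-pass filter to isolate the fundamental, and treat the remaining ripple as the harmonic/reactive compensation reference. The sampling rate, cut-off frequency, and function names are illustrative assumptions, not the thesis's controller.

```python
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    """Park transform (amplitude-invariant form) at grid angle theta (rad)."""
    d = (2/3) * (ia*np.cos(theta) + ib*np.cos(theta - 2*np.pi/3) + ic*np.cos(theta + 2*np.pi/3))
    q = -(2/3) * (ia*np.sin(theta) + ib*np.sin(theta - 2*np.pi/3) + ic*np.sin(theta + 2*np.pi/3))
    return d, q

def harmonic_reference(i_d, i_q, fs, f_cut=20.0):
    """Split dq currents into DC (fundamental) and ripple (harmonics) with a
    first-order low-pass filter; the ripple is the compensation command."""
    alpha = 2*np.pi*f_cut / (2*np.pi*f_cut + fs)
    d_dc = np.zeros_like(i_d)
    q_dc = np.zeros_like(i_q)
    for k in range(1, len(i_d)):
        d_dc[k] = d_dc[k-1] + alpha * (i_d[k] - d_dc[k-1])
        q_dc[k] = q_dc[k-1] + alpha * (i_q[k] - q_dc[k-1])
    return i_d - d_dc, i_q - q_dc   # harmonic (ripple) components to cancel
```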
|
438 |
A Low Cost Localization Solution Using a Kalman Filter for Data Fusion / King, Peter Haywood 06 June 2008 (links)
Position in the environment is essential in any autonomous system. As increased accuracy is required, the costs escalate accordingly. This paper presents a simple way to systematically integrate sensory data to provide a drivable and accurate position solution at a low cost.
The data fusion is handled by a Kalman filter tracking five states and an undetermined number of asynchronous measurements. This implementation allows the user to define additional adjustments to improve the overall behavior of the filter. The filter is tested using a suite of inexpensive sensors and then compared to a differential GPS position.
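A minimal sketch of the fusion structure described above, with an assumed five-state vector [x, y, heading, speed, yaw rate] (not necessarily the thesis's state layout); GPS and odometry updates are applied asynchronously as they arrive, and the noise values are illustrative.

```python
import numpy as np

class FusionFilter:
    """Extended Kalman filter sketch: state = [x, y, heading, speed, yaw_rate]."""
    def __init__(self):
        self.x = np.zeros(5)
        self.P = np.eye(5)
        self.Q = np.diag([0.05, 0.05, 0.01, 0.1, 0.01])   # illustrative process noise

    def predict(self, dt):
        x, y, th, v, w = self.x
        self.x = np.array([x + v*np.cos(th)*dt, y + v*np.sin(th)*dt, th + w*dt, v, w])
        F = np.eye(5)                                      # Jacobian of the motion model
        F[0, 2], F[0, 3] = -v*np.sin(th)*dt, np.cos(th)*dt
        F[1, 2], F[1, 3] =  v*np.cos(th)*dt, np.sin(th)*dt
        F[2, 4] = dt
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, z, H, R):
        """Generic correction; call with whichever measurement arrives, in any order."""
        y = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(5) - K @ H) @ self.P

# Example measurement models: GPS gives position, wheel odometry gives speed.
H_GPS  = np.array([[1., 0, 0, 0, 0], [0, 1., 0, 0, 0]])
H_ODOM = np.array([[0, 0, 0, 1., 0]])
```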
The output of the filter is indeed a drivable solution that tracks the reference position remarkably well. This approach takes advantage of the short-term accuracy of odometry measurements and the long-term fix of a GPS unit. A maximum deviation of two meters from the reference is shown for a complex path lasting over two minutes and more than 100 meters in length. / Master of Science
|
439 |
Automated Fluorescence Microscopy Determination of Mycobacterium Tuberculosis Count via Vessel Filtering / Claybon, Swazoo III 20 June 2017 (has links)
Tuberculosis (TB), a deadly infectious disease caused by the bacillus Mycobacterium tuberculosis (MTB), is the leading infectious disease killer globally, ranking in the top 10 overall causes of death despite being curable with a timely diagnosis and the correct treatment [3]. As such, eradicating tuberculosis (TB) is one of the targets of the Sustainable Development Goals (SDGs) for global health as approved by the World Health Assembly (WHA) in 2014 [2,3].
This work describes an automated method of screening for TB and determining the severity, or count, of the infection in patients from images of fluorescent TB on a sputum smear. Using images from a previously published dataset [9], the algorithm applies a vessel filter that uses the second-derivative information in an image by examining the eigenvalues of the Hessian matrix. Finally, by filtering for size and using background subtraction techniques, each bacillus is effectively isolated in the image.
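A minimal sketch of the Hessian-eigenvalue (vesselness) filtering step described above, followed by size filtering of the surviving connected components; the Gaussian scale, thresholds, and function names are illustrative assumptions, not the thesis's tuned values.

```python
import numpy as np
from scipy import ndimage

def vesselness(img, sigma=2.0, beta=0.5, c=15.0):
    """Frangi-style response from the eigenvalues of the Gaussian-smoothed Hessian;
    emphasizes bright, elongated (bacillus-like) structures."""
    Hrr = ndimage.gaussian_filter(img, sigma, order=(2, 0))   # second derivative along rows
    Hcc = ndimage.gaussian_filter(img, sigma, order=(0, 2))   # second derivative along columns
    Hrc = ndimage.gaussian_filter(img, sigma, order=(1, 1))   # mixed derivative
    tmp = np.sqrt((Hrr - Hcc) ** 2 + 4 * Hrc ** 2)
    la = 0.5 * (Hrr + Hcc + tmp)
    lb = 0.5 * (Hrr + Hcc - tmp)
    l1 = np.where(np.abs(la) < np.abs(lb), la, lb)            # smaller-magnitude eigenvalue
    l2 = np.where(np.abs(la) < np.abs(lb), lb, la)            # larger-magnitude eigenvalue
    Rb = np.abs(l1) / (np.abs(l2) + 1e-12)                    # blob-vs-line ratio
    S = np.sqrt(l1 ** 2 + l2 ** 2)                            # second-order structureness
    v = np.exp(-Rb**2 / (2*beta**2)) * (1 - np.exp(-S**2 / (2*c**2)))
    v[l2 > 0] = 0.0                                           # keep bright-on-dark structures only
    return v

def count_bacilli(img, v_thresh=0.2, min_area=20, max_area=400):
    """Threshold the vesselness map and count size-plausible connected components."""
    mask = vesselness(img) > v_thresh
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum((areas >= min_area) & (areas <= max_area)))
```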
The primary objective was to develop an image processing algorithm in Python that can accurately detect Mycobacteria bacilli in an image for a later deployment in an automated microscope that can improve the timeliness of accurate screenings for acid-fast bacilli (AFB) in a high-volume healthcare setting. Major findings include comparable average and overall object level precision, recall, and F1-score results as compared to the support vector machine (SVM) based algorithm from Chang et al. [9]. Furthermore, this work's algorithm is more accurate on the field level infectiousness accuracy, based on F1-score results, and has a high visual semantic accuracy. / Master of Science / Tuberculosis (TB), a deadly infectious disease caused by the bacillus Mycobacterium tuberculosis (MTB), is the leading infectious disease killer globally, ranking in the top 10 overall causes of death despite being curable with a timely diagnosis and correct treatment [3]. Furthermore, 3.2 billion are part of an at risk population for contracting tuberculosis, yet 90 % of TB related deaths occur in countries across Africa and other Low and Middle Income Countries (LMICs) [2]. This occurrence is, at least in part, due to a lack of the skilled human resources in LMIC laboratories necessary to scan large numbers of patient specimens and properly screen for TB.
Sputum smear microscopy of acid-fast bacilli (AFB) is essential in the screening of TB in high-prevalence countries. With the high rates of TB found in LMICs, there is a need to develop affordable, time-efficient alternatives for lab technicians to effectively screen large volumes of patients. This work describes the development of an automated method of screening and determining the severity, or count, of the TB infection in patients via images of fluorescent TB on a sputum smear using images from a previously published dataset [9].
The primary objective of this study was to write a program that can accurately detect tuberculosis in an image for a later deployment in an automated microscope that can improve the timeliness of accurate screening for AFB in a high-volume healthcare setting. Major findings include improved accuracy compared to that of Chang et al.’s machine learning algorithm that was used on this dataset [9].
|
440 |
Analysis of the Effects of Privacy Filter Use on Horizontal Deviations in Posture of VDT Operators / Probst, George T. 12 July 2000 (has links)
The visual display terminal (VDT) is an integral part of the modern office. An issue of concern associated with the use of the VDT is maintaining privacy of on-screen materials. Privacy filters are products designed to restrict the viewing angle to documents displayed on a VDT, so that the on-screen material is not visible to persons other than the VDT operator. Privacy filters restrict the viewing angle either by diffraction or diffusion of the light emitted from the VDT. Constrained posture is a human factors engineering problem that has been associated with VDT use. The purpose of this research was to evaluate whether the use of privacy filters affected: 1) the restriction of postures associated with VDT use, 2) operator performance, and 3) subjective ratings of display issues, posture, and performance.
Nine participants performed three types of tasks: word processing, data entry, and Web browsing. Each task was performed under three filter conditions: no filter, diffraction filter, and diffusion filter. Participants were videotaped during the tasks using a camera mounted above the VDT workstation. The videotape was analyzed and horizontal head deviation was measured at 50 randomly selected points during each task. Horizontal head deviation was measured as the angle between an absolute reference line, which bisects the center of the VDT screen, and a reference point located at the center of the participant's head. The standard deviation of head deviation was evaluated across filter type and task type. Accuracy- and/or time-based measures were used to evaluate performance within each task. Participants used a seven-point scale to rate the following: readability, image quality, brightness, glare, posture restriction, performance, and discomfort.
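As a sketch of the deviation measure described above, assuming digitized overhead-camera coordinates for the screen-center reference line and the head center (names and coordinate conventions are illustrative, not the thesis's video-analysis procedure):

```python
import numpy as np

def horizontal_deviation_deg(head_xy, screen_center_xy, reference_dir_xy):
    """Angle between the line bisecting the VDT screen and the line from the
    screen center to the participant's head center, in degrees."""
    v = np.asarray(head_xy, float) - np.asarray(screen_center_xy, float)
    r = np.asarray(reference_dir_xy, float)
    ang = np.degrees(np.arctan2(v[1], v[0]) - np.arctan2(r[1], r[0]))
    return (ang + 180.0) % 360.0 - 180.0     # wrap to (-180, 180]

def deviation_spread(samples_deg):
    """Spread of posture over a task: standard deviation of the sampled deviations."""
    return float(np.std(samples_deg, ddof=1))
```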
The results indicated that the interaction between task and filter type affected the standard deviation of horizontal head deviation (a measure of the average range of horizontal deviation). The standard deviation of horizontal deviation was significantly larger within the Web browsing task under the no filter and diffusion filter conditions as compared to the diffraction filter condition.
Filter type affected subjective ratings of the following: readability, image quality, brightness, posture restriction, and discomfort. The diffraction filter resulted in lower readability, image quality, and brightness ratings than the diffusion and no filter conditions. Participants reported that the ability to change postures was significantly decreased by the use of the diffraction filter as compared to the no filter and diffusion filter conditions. The diffraction filter resulted in an increase in reported discomfort as compared to the no filter condition. The interaction between filter and task type affected subjective ratings of performance. Participants reported a decrease in the rating of perceived performance under the diffraction filter / Web browsing condition as compared to the no filter / word processing, diffusion filter / Web browsing, and diffusion filter / data entry conditions. A decrease in the rating of perceived performance was reported in the diffraction filter / data entry condition as compared to the no filter / word processing and diffusion filter / Web browsing conditions. Neither the diffraction nor the diffusion filter affected performance within any of the tasks, based on the objective performance measures used in the experiment. / Master of Science
|