41 | Economic evaluation of small wind generation ownership under different electricity pricing scenarios (Jose, Anita Ann)
Master of Science / Department of Electrical and Computer Engineering / Anil Pahwa / As the Smart Grid trend takes hold, various techniques for making the existing grid smarter are being considered. The price of electricity is a major factor affecting both the electric utility and the numerous consumers connected to the grid, so setting the right price of electricity for each time of day is an important decision. Consumers' response to price changes affects peak demand as well as their own annual energy bills. Owning a small wind generator under a price-based demand response program could be a viable option. The economic evaluation of owning a small wind generator under two such pricing schemes, namely Critical Peak Pricing (CPP) and Time of Use (TOU), is the main focus of this research. Analysis shows that adopting either pricing scheme leaves the consumer's annual energy bill essentially unchanged. Taking the installed cost of the turbine into account, it may not be economical for a residential homeowner to own a small wind turbine under either pricing scheme, given the conditions assumed.
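A toy TOU bill calculation illustrates the kind of economic evaluation described above. All rates, consumption figures, turbine output, and installed cost below are invented for the sketch and are not taken from the thesis.

```python
# Hypothetical TOU annual-bill comparison with and without a small wind
# turbine. Every number here is an illustrative assumption.
PEAK_RATE = 0.22      # $/kWh during peak hours (hypothetical)
OFF_PEAK_RATE = 0.06  # $/kWh during all other hours (hypothetical)

daily_peak_kwh = 12.0      # household use during peak hours
daily_off_peak_kwh = 18.0  # household use off peak
wind_peak_kwh = 4.0        # turbine output credited during peak hours
wind_off_peak_kwh = 6.0    # turbine output credited off peak

def annual_bill(peak_kwh, off_kwh):
    """Annual energy bill from constant daily TOU consumption."""
    return 365 * (peak_kwh * PEAK_RATE + off_kwh * OFF_PEAK_RATE)

bill_without_wind = annual_bill(daily_peak_kwh, daily_off_peak_kwh)
bill_with_wind = annual_bill(daily_peak_kwh - wind_peak_kwh,
                             daily_off_peak_kwh - wind_off_peak_kwh)
savings = bill_without_wind - bill_with_wind

# Weigh annual savings against a simple payback on installed cost.
installed_cost = 12000.0   # $, hypothetical installed turbine cost
payback_years = installed_cost / savings
print(f"annual savings: ${savings:.2f}, simple payback: {payback_years:.1f} years")
```

With numbers of this order, the simple payback runs to decades, which is consistent with the abstract's conclusion that ownership may not be economical under the assumed conditions.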

42 | Analysis of pavement condition data employing Principal Component Analysis and sensor fusion techniques (Rajan, Krithika)
Master of Science / Department of Electrical and Computer Engineering / Dwight D. Day / Balasubramaniam Natarajan / This thesis presents an automated pavement crack detection and classification system built on image processing and pattern recognition algorithms. Pavement crack detection is important to Departments of Transportation around the country because it is directly related to the maintenance of pavement quality. Manual inspection and analysis of pavement distress is the prevalent method for monitoring pavement quality; however, inspecting miles of highway sections and analyzing each one is a cumbersome and time-consuming process. Hence, there has been research into automating crack detection. In this thesis, an automated crack detection and classification algorithm is presented. The algorithm is built around the statistical tool of Principal Component Analysis (PCA). Applying PCA to images yields the primary features of cracks, on the basis of which cracked images are distinguished from non-cracked ones.

The algorithm consists of three levels of classification: a) pixel level, b) subimage (32 × 32 pixels) level, and c) image level. Initially, at the lowest level, pixels are classified as cracked or non-cracked using adaptive thresholding. The classified pixels are then grouped into subimages to reduce processing complexity. Following the grouping process, the classification of subimages is validated against the decision of a Bayes classifier. Finally, image-level classification is performed based on a subimage profile generated for the image. Following this stage, cracks are further classified as sealed or unsealed depending on the number of sealed and unsealed subimages; this classification is based on the Fourier transform of each subimage. The proposed algorithm detects cracks aligned both longitudinally and transversely with respect to the wheel path with high accuracy. The algorithm can also be extended to detect block cracks, which comprise a pattern of cracks in both alignments.
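The pixel-level adaptive thresholding described above can be sketched as follows. The block size, the threshold rule (local mean minus a multiple of the local standard deviation), and the synthetic image are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def classify_pixels(image, block=32, k=1.5):
    """Block-wise adaptive thresholding (illustrative sketch).

    Cracks appear darker than the surrounding pavement, so a pixel is
    marked 'cracked' when it falls well below its subimage mean.
    """
    h, w = image.shape
    mask = np.zeros_like(image, dtype=bool)
    for r in range(0, h, block):
        for c in range(0, w, block):
            sub = image[r:r + block, c:c + block]
            thresh = sub.mean() - k * sub.std()   # local adaptive threshold
            mask[r:r + block, c:c + block] = sub < thresh
    return mask

# Synthetic 64x64 pavement patch: bright background with a dark crack line.
rng = np.random.default_rng(0)
img = rng.normal(200, 5, (64, 64))
img[:, 30] = 80                     # vertical "crack"
crack_mask = classify_pixels(img)
print(crack_mask[:, 30].mean())
```

Because the threshold adapts to each subimage, the same rule tolerates lighting variation across the pavement surface, which a single global threshold would not.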

43 | A transient solver for current density in thin conductors for magnetoquasistatic conditions (Petersen, Todd H.)
Doctor of Philosophy / Department of Electrical and Computer Engineering / Kenneth H. Carpenter / A computer simulation of transient current density distributions in thin conductors was developed using a time-stepped implementation of the integral equation method on a finite element mesh. A study of current distributions in thin conductors was carried out using AC analysis methods. The study of the AC current density distributions was used to develop a circuit theory model for the thin conductor which was then used to determine the nature of its transient response. This model was used to support the design and evaluation of the transient current density solver.
A circuit model for strip lines was made using the Partial Inductance Method to allow for simulations with the SPICE circuit solver. Magnetic probes were designed and tested that allow for physical measurements of voltages induced by the magnetic field generated by the current distributions in the strip line. A comparison of the measured voltages to simulated values from SPICE was done to validate the SPICE model. This model was used to validate the finite-integration model for the same strip line.
Formulation of the transient current density distribution problem is accomplished by the superposition of a source current and an eddy current distribution on the same space. The mathematical derivation and implementation of the time-stepping algorithm to the finite element model is explicitly shown for a surface mesh with triangular elements. A C++ computer program was written to solve for the total current density in a thin conductor by implementing the time-stepping integral formulation.
The finite element implementation was evaluated with respect to mesh size: finite element meshes of increasing node density were simulated for the same structure until a smooth current density distribution profile was observed. The transient current density solver was validated by comparing its simulations with AC conduction and transient response simulations of the SPICE model. Transient responses are compared for inputs at different frequencies and for varying time steps. The program is then applied to thin conductors of irregular shape.
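The circuit-theory side of the transient analysis can be illustrated with backward-Euler time stepping of a series R-L model, the kind of lumped model used here to anticipate a thin conductor's transient response. The component values and step input are illustrative, and the actual solver operates on a finite element mesh.

```python
import numpy as np

# Backward-Euler time stepping of L di/dt + R i = v(t):
#   i[n+1] = (i[n] + dt * v[n+1] / L) / (1 + dt * R / L)
R, L = 1.0, 1e-3          # ohms, henries (illustrative values)
dt = 1e-5                 # time step, s
steps = 500
V = 1.0                   # unit step input applied at t = 0

i = np.zeros(steps + 1)
for n in range(steps):
    i[n + 1] = (i[n] + dt * V / L) / (1.0 + dt * R / L)

tau = L / R               # analytic time constant, 1 ms
print(i[-1])              # approaches V/R = 1 A after ~5 tau
```

Backward Euler is unconditionally stable, which matters for the stiff systems that arise when a fine mesh produces widely separated time constants.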

44 | Design and application of fiber optic daylighting systems (Werring, Christopher G.)
Master of Science / Department of Architectural Engineering and Construction Science / Rhonda Wilkinson / Until recently, sunlight was the primary source of indoor illumination, making perimeter fenestration essential and shaping the layout of buildings. Improvements in electric fixtures, light sources, control systems, electronic ballasts, and dimming technology have influenced standard design practices to such a degree that admitting natural sunlight into a room is often seen as a liability. In the current climate of rising energy prices and growing environmental awareness, energy conservation and resource preservation are topics of governmental policy discussion in every nation. Governmental, institutional, social, and economic incentives have emerged to guide the development and adoption of advanced daylighting techniques that reduce electric lighting loads in buildings used primarily during the day. A growing body of research demonstrates numerous benefits for health, occupant satisfaction, worker productivity, and product sales associated with natural lighting and exposure to sunlight. However, incorporating natural light into a lighting strategy is still complicated and risky: the intensity, variability, and thermal load associated with sunlight can significantly impact mechanical systems and lead to serious occupant comfort issues if additional steps are not taken to attenuate or control direct sunlight.

Fiber optic daylighting systems represent a new and innovative means of bringing direct sunlight into a building while maintaining the controllability and ease of application usually reserved for electric lighting: natural light is collected and channeled through optical fibers to luminaires within the space. This technology can bring sunlight much deeper into buildings without affecting space layout or inviting the glare, lighting variability, and heat gain issues that complicate most daylighting strategies. As products become commercially available and increasingly economically viable, these systems have the potential to conserve significant amounts of energy and improve indoor environmental quality across a variety of common applications.

45 | Computer vision system for identifying road signs using triangulation and bundle adjustment (Krishnan, Anupama)
Master of Science / Department of Electrical and Computer Engineering / Christopher L. Lewis / This thesis describes the development of an automated computer vision system that identifies and inventories road signs from imagery acquired by the Kansas Department of Transportation's road profiling system, which takes images every 26.4 feet on highways throughout the state. Statistical models characterizing the typical size, color, and physical location of signs are used to help identify signs in the imagery. First, two phases of a computationally efficient K-Means clustering algorithm are applied to the images to achieve over-segmentation; the novel second phase ensures over-segmentation without excessive computation. Extremely large and very small segments are rejected, and the remaining segments are classified based on color. Finally, the frame-to-frame trajectories of sign-colored segments are analyzed using triangulation and bundle adjustment to determine their physical location relative to the road video log system. Objects having the appropriate color and physical placement are entered into a sign database. To develop the statistical models used for classification, a representative set of images was segmented and manually labeled to determine the joint probabilistic models characterizing the color and location typical of road signs. Receiver Operating Characteristic curves were generated and analyzed to adjust the thresholds for class identification. The system was tested and its performance characteristics are presented.
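The triangulation step can be illustrated with a minimal two-view linear (DLT) triangulation. The camera poses, the reuse of the 26.4-foot frame spacing as the camera baseline, and the identity intrinsics are hypothetical simplifications, not the thesis pipeline.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two frames."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector of A, homogeneous point
    return X[:3] / X[3]

# Two hypothetical camera poses 26.4 ft apart along the road, with identity
# intrinsics for simplicity; the "sign" sits at a known 3-D point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-26.4], [0.0], [0.0]])])
X_true = np.array([30.0, 5.0, 100.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)   # recovers the 3-D sign location
```

Bundle adjustment then refines such linear estimates jointly over many frames by minimizing reprojection error, which is why the frame-to-frame trajectories matter.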

46 | Calibration of permittivity sensors to measure contaminants in water and in biodiesel fuel (Shultz, Sarah)
Master of Science / Department of Biological & Agricultural Engineering / Naiqian Zhang / Four permittivity probes have been developed and tested to measure contaminants in water and in biodiesel fuel. An impedance meter was also used to measure the same contaminants. The pollutants measured in water were nitrate salts (potassium nitrate, calcium nitrate, and ammonium nitrate) and atrazine. The contaminants measured in biodiesel were water, glycerol, and glyceride. Each sensor measured the gain and phase of a sample with a known concentration of one of these pollutants.
The resulting signals were analyzed using stepwise regression, partial least squares regression, artificial neural networks, and wavelet transformation followed by stepwise regression to predict the concentration of the contaminant from changes in the gain and phase data measured by the sensor. The same methods were used to predict the molecular weight of the nitrate salts. The reliability of the probes and of the regression methods was compared using the coefficient of determination and the root mean square error. The frequencies selected by stepwise regression were studied to determine whether any frequencies were more useful than others in detecting the contaminants.

The results showed that the probes were able to predict the concentration and the molecular weight of nitrates in water very accurately, with R2-values as high as 1.00 for the training data and 0.999 for the validation data for both concentration and molecular weight predictions. The atrazine measurements were somewhat promising: training R2-values were as high as 1.00 in some cases, but many validation values were low, often below 0.400. The results for the biodiesel tests were also good; the highest training R2-value was 1.00 and the highest validation R2-value was 0.966.
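A minimal sketch of the stepwise-regression idea follows, assuming synthetic gain readings at 20 hypothetical frequencies. The greedy forward-selection rule shown is one common variant of stepwise regression, not necessarily the exact procedure used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_freqs = 60, 20
X = rng.normal(size=(n_samples, n_freqs))   # gain readings at 20 frequencies
# Only frequencies 3 and 11 carry information about the concentration.
y = 2.0 * X[:, 3] - 1.5 * X[:, 11] + 0.05 * rng.normal(size=n_samples)

def forward_stepwise(X, y, n_select):
    """Greedy forward selection: repeatedly add the column that most
    reduces the residual sum of squares of a least-squares fit."""
    selected = []
    for _ in range(n_select):
        best_j, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            A = np.column_stack([X[:, selected + [j]], np.ones(len(y))])
            coef = np.linalg.lstsq(A, y, rcond=None)[0]
            rss = np.sum((y - A @ coef) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected

chosen = forward_stepwise(X, y, n_select=2)
print(sorted(chosen))   # the informative frequencies stand out
```

Inspecting which columns survive selection is exactly the kind of analysis the abstract describes for judging which measurement frequencies are most useful.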

47 | Mathematical models for prediction and optimal mitigation of epidemics (Chowdhury, Sohini Roy)
Master of Science / Department of Electrical and Computer Engineering / William H. Hsu / Caterina M. Scoglio / Early detection of livestock diseases and the development of cost-optimal mitigation strategies are becoming a global necessity. Foot and Mouth Disease (FMD) is considered one of the most serious livestock diseases owing to its high rate of transmission and extreme economic consequences. Thus, it is imperative to improve parameterized mathematical models for predictive and preventive purposes. In this work, a meta-population based stochastic model is implemented to assess FMD infection dynamics and to curb economic losses in countries with underdeveloped livestock disease surveillance databases. Our model predicts the spatio-temporal evolution of FMD over a weighted contact network in which the weights are characterized by the effects of wind and the movement of animals and humans. FMD incidence data from countries such as Turkey, Iran, and Thailand are used to calibrate and validate our model, and its predictive performance is compared with that of baseline models. Additionally, learning-based prediction models can be used to detect the time of onset of an epidemic outbreak. Such models are computationally simple, and they may be trained to predict infection in the absence of background data representing the dynamics of disease transmission, which is otherwise necessary for predictions using spatio-temporal models. Thus, we comparatively study the predictive performance of our spatio-temporal model against neural networks and autoregressive models. Bayesian networks combined with Monte Carlo simulations are used to approximate the gold standard.
Next, cost-effective mitigation strategies are simulated using the theoretical concept of infection network fragmentation. Based on the theoretical reduction in the total number of infected animals, several simulated mitigation strategies are proposed, and their cost-effectiveness, measured as the percentage reduction in the total number of infected animals per million US dollars, is analyzed. We infer that the cost-effectiveness measures of mitigation strategies implemented using our spatio-temporal predictive model have a narrower range and higher granularity than those of strategies formulated using learning-based prediction models.

Finally, we derive optimal mitigation strategies using Fuzzy Dominance Genetic Algorithms (FDGA). We use the concept of hierarchical fuzzy dominance to minimize the total number of infected animals, the direct cost incurred in implementing mitigation strategies, the number of animals culled, and the number of animals vaccinated to mitigate an epidemic. This method has the potential to aid economic policy development for countries that have lost their FMD-free status.
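As a rough illustration of the meta-population idea, here is a deterministic two-patch SIR sketch with a contact-network weight coupling the patches, standing in for animal movement and wind-borne spread. All parameters are invented, and the thesis model is stochastic and far more detailed.

```python
import numpy as np

beta, gamma, w = 0.4, 0.1, 0.05       # transmission, recovery, coupling
S = np.array([990.0, 1000.0])         # susceptible animals per patch
I = np.array([10.0, 0.0])             # outbreak seeded only in patch 0
R = np.array([0.0, 0.0])
N = S + I + R
dt, steps = 0.1, 2000                 # Euler integration over 200 time units

W = np.array([[1.0, w], [w, 1.0]])    # weighted contact network
for _ in range(steps):
    force = beta * (W @ (I / N))      # infection pressure felt by each patch
    new_inf = force * S * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(R / N)   # final epidemic size in each patch
```

Even the weak coupling w seeds patch 1, which is the mechanism mitigation by network fragmentation attacks: cutting or weakening edges lowers the infection pressure reaching uninfected patches.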

48 | Modeling, forecasting and resource allocation in cognitive radio networks (Akter, Lutfa)
Doctor of Philosophy / Department of Electrical and Computer Engineering / Balasubramaniam Natarajan / With the explosive growth of wireless systems and services, bandwidth has become a treasured commodity. Traditionally, licensed frequency bands were exclusively reserved for use by the primary license holders (primary users), whereas unlicensed frequency bands allow spectrum sharing. Recent spectrum measurements indicate that many licensed bands remain relatively unused most of the time. Therefore, allowing secondary users (users without a license to operate in the band) to operate with minimal or no interference to primary users is one way of sharing spectrum to increase efficiency. Recently, the Federal Communications Commission (FCC) opened up licensed bands for opportunistic use by secondary users. A cognitive radio (CR) is one enabling technology for systems supporting opportunistic use. A cognitive radio adapts to the environment it operates in by sensing the spectrum and quickly deciding on appropriate frequency bands and transmission parameters in order to achieve certain performance goals. A cognitive radio network (CRN) refers to a network of cognitive radios (secondary users).

In this dissertation, we consider a competitive CRN with multiple channels available for opportunistic use by multiple secondary users. We also assume that multiple secondary users may coexist in a channel and that each secondary user (SU) can use multiple channels to satisfy its rate requirement. In this context, we first introduce an integrated modeling and forecasting tool that provides an upper-bound estimate of the number of secondary users that may demand access to each channel at the next instant. Assuming a continuous-time Markov chain model for both primary and secondary user activity, we propose a Kalman filter based approach for estimating the number of primary and secondary users. These estimates are in turn used to predict the number of primary and secondary users at a future time instant. We extend the modeling and forecasting framework to the case in which SU traffic is governed by an Erlangian process. Second, assuming that scheduling is complete and SUs have identified the channels to use, we propose two quality of service (QoS) constrained resource allocation frameworks. Our measures of QoS include signal to interference plus noise ratio (SINR) / bit error rate (BER) and total rate requirement. In the first framework, we determine the minimum transmit power that SUs should employ in order to maintain a certain SINR and use that result to calculate the optimal rate allocation strategy across channels. The rate allocation problem is formulated as a maximum flow problem in graph theory, and we also propose a simple heuristic to determine the rate allocation. In the second framework, both transmit power and rate per channel are simultaneously optimized via a bi-objective optimization problem formulation. Unlike prior efforts, we transform the BER requirement constraint into a convex constraint in order to guarantee optimality of the resulting solutions. Third, we borrow ideas from social behavioral models such as the Homo Egualis (HE), Homo Parochius (HP), and Homo Reciprocans (HR) models and apply them to the resource management solutions to maintain fairness among SUs in a competitive CRN setting. Finally, we develop distributed user-based approaches, grounded in dual decomposition theory and game theory, to solve the proposed resource allocation frameworks. In summary, this body of work represents significant, groundbreaking advances in the analysis of competitive CRNs.
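The Kalman filter based estimation step can be sketched in scalar form, assuming a random-walk state model and Gaussian sensing noise. These assumptions, and all values below, are illustrative rather than the dissertation's CTMC-based formulation.

```python
import numpy as np

# Scalar Kalman filter tracking the (continuous-valued) number of active
# users on a channel from noisy spectrum-sensing counts.
rng = np.random.default_rng(7)
steps = 200
true_n = 5.0 + np.cumsum(rng.normal(0, 0.2, steps))   # slowly drifting truth
z = true_n + rng.normal(0, 1.0, steps)                # noisy measurements

Q, Rm = 0.04, 1.0          # process and measurement noise variances
x_hat, P = 0.0, 10.0       # initial estimate and covariance
est = np.empty(steps)
for k in range(steps):
    # Predict under the random-walk model, then update with measurement z[k].
    P = P + Q
    K = P / (P + Rm)                 # Kalman gain
    x_hat = x_hat + K * (z[k] - x_hat)
    P = (1.0 - K) * P
    est[k] = x_hat

rmse = np.sqrt(np.mean((est[50:] - true_n[50:]) ** 2))
print(rmse)   # filtered error is well below the raw sensing noise
```

Rounding the filtered estimate up gives the kind of upper-bound demand figure the forecasting tool feeds into scheduling.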

49 | Characterization of the electrical and physical properties of scandium nitride grown using hydride vapor phase epitaxy (Richards, Paul)
Master of Science / Department of Electrical and Computer Engineering / Andrew Rys / In semiconductor manufacturing, it is important to understand the physical and electrical characteristics of newly proposed semiconductors in order to determine their usefulness. Many techniques are used to achieve this goal, such as x-ray diffraction, Hall effect measurements, and scanning electron microscopy. With these tests, the usefulness of a semiconductor can be determined, opening more possibilities for growth in industry.

The purpose of the present study was to examine scandium nitride (ScN) grown on various substrates using the hydride vapor phase epitaxy (HVPE) method, and to determine the physical and electrical properties of the samples. The study also sought to answer two questions: 1) can any trends be found in the results, and 2) what possible applications could scandium nitride have in the future?

A set of scandium nitride samples was selected. Each sample was examined under the scanning electron microscope for contaminants from the growth procedure, such as chlorine, and checked for the good current conduction needed for Hall effect measurements.

The thickness of the scandium nitride layer was measured using the scanning electron microscope, and from this thickness the Hall effect measurement values were computed. The crystallographic plane of each sample was determined using x-ray diffraction. The test results revealed several trends in the scandium nitride. Many of the samples were found to have aluminum nitride (AlN) contamination, which led to much higher resistivity and much lower mobility regardless of the thickness of the scandium nitride layer. The data from the samples were then used to suggest improvements to the growth process.
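The Hall effect computation implied above follows standard relations between Hall voltage, film thickness, carrier concentration, and mobility. The current, field, Hall voltage, thickness, and resistivity below are hypothetical values, not measurements from the thesis.

```python
# Hall measurement reduction for a thin film (illustrative values only).
q = 1.602e-19        # elementary charge, C

I = 1e-3             # drive current, A
B = 0.5              # magnetic field, T
V_H = 2.0e-4         # measured Hall voltage, V
t = 1.0e-6           # film thickness from SEM cross-section, m
rho = 3.0e-5         # resistivity, ohm*m (hypothetical)

n = I * B / (q * V_H * t)        # carrier concentration, m^-3
mu = 1.0 / (q * n * rho)         # Hall mobility, m^2/(V*s)

print(f"n  = {n:.3e} m^-3")
print(f"mu = {mu * 1e4:.1f} cm^2/(V*s)")
```

Both quantities scale inversely with the film thickness through n, which is why an accurate SEM thickness measurement is needed before the Hall values can be computed.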

50 | Wireless reflectance pulse oximeter design and photoplethysmographic signal processing (Li, Kejia)
Master of Science / Department of Electrical and Computer Engineering / Steven Warren / Pulse oximetry, a noninvasive circulatory system monitoring technique, has been widely adopted in clinical and homecare applications for the determination of heart rate and blood oxygen saturation, where measurement locations are typically limited to fingertips and earlobes. Prior research indicates a variety of additional clinical parameters that can be derived from a photoplethysmogram (PPG), the fundamental time-domain signal yielded by a pulse oximeter sensor. The gap between this research potential and practical device applications can be decreased by improvements in device design (e.g., sensor performance and geometry, sampling fidelity and reliability, etc.) and PPG signal processing.
This thesis documents research focused on a novel pulse oximeter design and the accompanying PPG signal processing and interpretation. The filter-free reflectance design adopted in the module supplements new methods for signal sampling, control, and processing, with a goal to acquire high-fidelity raw data that can provide additional physiologic data for state-of-health analyses. Effective approaches are also employed to improve signal stability and quality, including shift-resistant baseline control, an anti-aliasing sampling frequency, light emitting diode intensity autoregulation, signal saturation inhibition, etc. MATLAB interfaces provide data visualization and processing for multiple applications. A feature detection algorithm (decision-making rule set) is presented as the latest application, which brings the element of intelligence into the pulse oximeter design by enabling onboard signal quality verification.
Two versions of the reflectance sensor were designed, built, calibrated, and used for data acquisition. Raw data, composed of four channels of signals sampled at 240 Hz with 12-bit precision, successfully stream to a personal computer via a serial connection or wireless link. Owing to the optimized large-area sensor and the intensity autoregulation mechanism, PPG signal acquisition from measurement sites other than fingertips and earlobes, e.g., the wrist, becomes viable while retaining signal quality, e.g., signal-to-noise ratio. With appropriate thresholds, the feature detection algorithm can successfully indicate motion occurrence, signal saturation, and signal quality level. Overall, experimental results from a variety of subjects and body locations in multiple applications demonstrate high-quality PPGs, prototype reliability, and promise for further research.
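A minimal sketch of extracting heart rate from a PPG record at the prototype's 240 Hz sampling rate, assuming an idealized sinusoidal pulsatile component. The simple peak rule here is far less involved than the thesis's decision-making rule set.

```python
import numpy as np

fs = 240                              # sampling rate used by the prototype, Hz
t = np.arange(0, 10, 1 / fs)          # 10 s record
hr_hz = 1.2                           # 72 beats per minute
ppg = np.sin(2 * np.pi * hr_hz * t)   # idealized pulsatile component

# A sample is a peak if it exceeds both neighbors and an amplitude threshold.
interior = ppg[1:-1]
peaks = np.where((interior > ppg[:-2]) & (interior > ppg[2:]) &
                 (interior > 0.5))[0] + 1

beats = len(peaks)
hr_bpm = 60.0 * beats / 10.0          # beats per minute over the 10 s window
print(hr_bpm)
```

On real PPGs the amplitude threshold and neighbor test would need the kind of quality checks the abstract describes (motion, saturation), since artifacts easily masquerade as beats.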