211
Spectrum Sensing, Spectrum Monitoring, and Security in Cognitive Radios
Soltanmohammadi, Erfan, 10 June 2014
Spectrum sensing is a key function of cognitive radios and is used to determine whether a primary user is present in the channel or not. In this dissertation, we formulate and solve the generalized likelihood ratio test (GLRT) for spectrum sensing when both the primary user transmitter and the secondary user receiver are equipped with multiple antennas. We do not assume any prior information about the channel statistics or the primary user's signal structure. Two cases are considered: when the secondary user knows the noise energy and when it does not. The final test statistics derived from the GLRT are based on the eigenvalues of the sample covariance matrix.
In-band spectrum sensing in overlay cognitive radio networks requires that the secondary users (SUs) periodically suspend their communication in order to determine whether the primary user (PU) has started to utilize the channel. In contrast, in spectrum monitoring the SU can detect the emergence of the PU from its own receiver statistics, such as the receiver error count (REC). We investigate the problem of spectrum monitoring in the presence of fading, where the SU employs diversity combining to mitigate the effects of channel fading. We show that a decision statistic based on the REC alone does not provide good performance, and we then introduce new decision statistics based on the REC and the combiner coefficients. The new decision statistic is shown to achieve significant improvement in the case of maximal ratio combining (MRC).
Next, we consider the problem of cooperative spectrum sensing in cognitive radio networks (CRNs) in the presence of misbehaving radios. We propose a novel approach based on the iterative expectation-maximization (EM) algorithm to detect the presence of the primary users, to classify the cognitive radios, and to compute their detection and false-alarm probabilities. We also consider the problem of centralized binary hypothesis testing in a CRN consisting of multiple classes of cognitive radios, where the cognitive radios are classified according to the probability density function (PDF) of their received data at the fusion center (FC) under each hypothesis.
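The abstract does not reproduce the final test statistics. As a minimal illustration of an eigenvalue-based detector of the kind described (not necessarily the dissertation's exact GLRT statistic), the sketch below forms the sample covariance matrix of the multi-antenna snapshots and compares the ratio of its largest to smallest eigenvalue against a threshold; the threshold value is arbitrary here.

    import numpy as np

    def eigenvalue_ratio_detector(Y, threshold):
        """Blind eigenvalue-based spectrum sensing (illustrative sketch).

        Y : (M, N) array of N snapshots from M receive antennas.
        Returns True if a primary user is declared present.
        """
        N = Y.shape[1]
        R = (Y @ Y.conj().T) / N                 # sample covariance matrix
        eigvals = np.linalg.eigvalsh(R)          # real eigenvalues, ascending order
        statistic = eigvals[-1] / eigvals[0]     # max/min eigenvalue ratio
        return statistic > threshold

    # Toy usage: 4 antennas, 500 noise-only samples (no primary user present).
    rng = np.random.default_rng(0)
    noise = (rng.standard_normal((4, 500)) + 1j * rng.standard_normal((4, 500))) / np.sqrt(2)
    print(eigenvalue_ratio_detector(noise, threshold=1.8))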

212
A System Approach to Investing In Uncertain Markets
Khademi, Iman, 11 June 2014
We consider the problem of trend-following in the US stock market and propose a combined economic and technical model to approach it. A bank of linear and nonlinear, discrete-time, low-pass filters with different sampling rates is used to generate timing signals for US stock market indexes such as the NASDAQ Composite and the S&P 500. These timing signals help us find the appropriate times to step into or out of the market. Back-testing and real-time implementation results, along with a risk analysis, validate our model.
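The composition of the filter bank is not detailed in this abstract. As a minimal sketch of the idea, the example below uses a single pair of first-order discrete-time low-pass filters (exponential moving averages) whose crossover yields a long/out-of-market timing signal; the smoothing factors are illustrative, not taken from the dissertation.

    import numpy as np

    def ema(prices, alpha):
        """First-order discrete-time low-pass filter (exponential moving average)."""
        out = np.empty(len(prices))
        out[0] = prices[0]
        for t in range(1, len(prices)):
            out[t] = alpha * prices[t] + (1.0 - alpha) * out[t - 1]
        return out

    def timing_signal(prices, fast_alpha=0.2, slow_alpha=0.05):
        """+1 (long) when the fast filter is above the slow one, 0 (out of the market) otherwise."""
        return (ema(prices, fast_alpha) > ema(prices, slow_alpha)).astype(int)

    # Toy usage on a synthetic index series.
    rng = np.random.default_rng(1)
    index = 1000 + np.cumsum(rng.normal(0.5, 5.0, size=250))
    print(timing_signal(index)[-5:])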
Depending on the trend of the market, we may adopt a long or a short position. If we conclude that the market is in an uptrend (rising prices), we buy shares of a stock in order to sell them at a higher price in the future (long position). On the other hand, in a market downtrend (falling prices), we may borrow a number of shares and sell them outright in order to repurchase them at a lower price in the future (short selling). The purpose of market timing is to recognize the current trend of the market and to find the appropriate times to step into or out of the market.
We do not consider market timing for the stocks of individual companies, due to the high sensitivity of their daily prices to news, the performance of their competitors, the condition of the economic sector they belong to, and many other sources of randomness. Instead, we consider the timing problem for large market indexes such as the NASDAQ Composite and the S&P 500, which are weighted averages of the prices of many companies from several economic sectors. Therefore, we use the daily index value and volume (the total number of shares traded) of a large market index in place of those of an individual company. Such timing signals are suitable for investing in exchange-traded funds (ETFs).

213
Localization and Security Algorithms for Wireless Sensor Networks and the Usage of Signals of Opportunity
Chacon Rojas, Gustavo Andres, 09 May 2014
In this dissertation we consider the problem of localization of wireless devices in environments and applications where GPS (Global Positioning System) is not a viable option. The first part of the dissertation studies a novel positioning system based on narrowband radio frequency (RF) signals of opportunity, and develops near-optimum estimation algorithms for localization of a mobile receiver. It is assumed that a reference receiver (RR) with known position is available to aid with the positioning of the mobile receiver (MR). The new positioning system is reminiscent of GPS and involves two similar estimation problems. The first is localization using estimates of time-difference of arrival (TDOA). The second is TDOA estimation based on the narrowband signals received at the RR and the MR. In both cases, near-optimum estimation algorithms are developed in the sense of maximum likelihood estimation (MLE) under some mild assumptions, and both algorithms compute approximate MLEs in the form of a weighted least-squares (WLS) solution. The proposed positioning system is illustrated with simulation studies based on FM radio signals. The numerical results show that the position errors are comparable to those of other positioning systems, including GPS.
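The weighted least-squares formulation itself is not given in the abstract. As a rough sketch of TDOA-based positioning, the example below linearizes the range-difference equations around an initial guess and applies Gauss-Newton iterations with a plain (identity-weighted) least-squares update; the 2-D setting and the anchor positions are assumptions made for illustration.

    import numpy as np

    def tdoa_position(anchors, tdoa, c=3e8, iters=20):
        """Estimate a 2-D position from TDOA measurements (illustrative sketch).

        anchors : (K, 2) known transmitter positions; anchor 0 is the reference.
        tdoa    : (K-1,) arrival-time differences relative to anchor 0, in seconds.
        """
        x = np.mean(anchors, axis=0)                        # initial guess: anchor centroid
        rd_meas = c * np.asarray(tdoa)                      # measured range differences
        for _ in range(iters):
            d = np.linalg.norm(anchors - x, axis=1)         # distances from anchors to x
            rd_pred = d[1:] - d[0]                          # predicted range differences
            u = (x - anchors) / d[:, None]                  # unit vectors from anchors to x
            J = u[1:] - u[0]                                # Jacobian of the range differences
            dx, *_ = np.linalg.lstsq(J, rd_meas - rd_pred, rcond=None)
            x = x + dx
        return x

    # Toy usage: four anchors, one true position, noiseless TDOAs.
    anchors = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0], [1000.0, 1000.0]])
    true_x = np.array([420.0, 310.0])
    d = np.linalg.norm(anchors - true_x, axis=1)
    print(tdoa_position(anchors, (d[1:] - d[0]) / 3e8))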
Next, we present a novel algorithm for localization of wireless sensor networks (WSNs) called distributed randomized gradient descent (DRGD), and prove that in the case of noise-free distance measurements the algorithm converges and provides the true locations of the nodes. For noisy distance measurements, the convergence properties of DRGD are discussed and a bound on the location estimation error is obtained. In contrast to several recently proposed methods, DRGD does not require that blind nodes be contained in the convex hull of the anchor nodes, and it can accurately localize the network with only a few anchors. The performance of DRGD is evaluated through extensive simulations and compared with three other algorithms, namely relaxation-based second-order cone programming (SOCP), simulated annealing (SA), and semi-definite programming (SDP). Similar to DRGD, SOCP and SA are distributed algorithms, whereas SDP is centralized. The results show that DRGD successfully localizes the nodes in all cases, whereas in many cases SOCP and SA fail. We also present a modification of DRGD for mobile WSNs and demonstrate its efficacy for localization of mobile networks with several simulation results. We then extend this method to secure localization in the presence of outlier distance measurements or distance-spoofing attacks. In this case, we present a centralized algorithm to estimate the positions of the nodes in WSNs where outlier distance measurements may be present.
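DRGD's randomization and step-size rules are not described in this abstract, so the sketch below shows only a generic distributed gradient-descent localization update, in which a node refines its position estimate from its neighbors' current position estimates and the measured distances to them; it is not the DRGD algorithm itself.

    import numpy as np

    def local_update(x_i, neighbor_positions, measured_dists, step=0.1):
        """One gradient step on node i's local cost sum_j (||x_i - x_j|| - d_ij)^2,
        using only information available from its neighbors."""
        grad = np.zeros_like(x_i)
        for x_j, d_ij in zip(neighbor_positions, measured_dists):
            diff = x_i - x_j
            dist = np.linalg.norm(diff)
            if dist > 1e-12:
                grad += 2.0 * (dist - d_ij) * diff / dist
        return x_i - step * grad

    # Toy usage: one blind node and three anchors with noise-free distances.
    anchors = [np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])]
    true_pos = np.array([3.0, 4.0])
    dists = [np.linalg.norm(true_pos - a) for a in anchors]
    x = np.array([8.0, 8.0])
    for _ in range(200):
        x = local_update(x, anchors, dists)
    print(x)    # approaches the true position (3, 4)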

214
Channel Estimation and Symbol Detection In Massive MIMO Systems Using Expectation Propagation
Ghavami, Kamran, 24 May 2017
The advantages envisioned from using large antenna arrays have made massive multiple-input multiple-output systems (also known as massive MIMO) a promising technology for future wireless standards. Despite the advantages that massive MIMO systems provide, increasing the number of antennas introduces new technical challenges that need to be resolved. In particular, symbol detection is one of the key challenges in massive MIMO. Obtaining accurate channel state information (CSI) for the extremely large number of channels involved is a difficult task and consumes significant resources. Therefore, for massive MIMO systems, coherent detectors must be able to cope with highly imperfect CSI. More importantly, non-coherent schemes, which do not rely on CSI for symbol detection, become very attractive.
Expectation propagation (EP) has recently been proposed as a low-complexity algorithm for symbol detection in massive MIMO systems, where its performance was evaluated on the premise that perfect channel state information (CSI) is available at the receiver. However, in practical systems exact CSI is not available, due to a variety of reasons including channel estimation errors, quantization errors, and aging. In this work we study the performance of EP in the presence of imperfect CSI due to channel estimation errors and show that in this case the EP detector experiences significant performance loss. Moreover, the EP detector shows a higher sensitivity to channel estimation errors in the high signal-to-noise ratio (SNR) region, where the rate of its performance improvement decreases. We investigate this behavior of the EP detector and propose a modified EP detector for colored noise which utilizes the correlation matrix of the channel estimation error. Simulation results verify that the modified algorithm is robust against imperfect CSI and that its performance is significantly improved over the EP algorithm, particularly in the higher SNR regions; for the modified detector, the slope of the symbol error rate (SER) versus SNR curves is similar to that of the perfect-CSI case.
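The modified EP detector is not spelled out in this abstract. As a simpler illustration of the same ingredient, using the channel-estimation-error statistics to model the effective noise as colored, the sketch below shows a linear MMSE detector that folds an assumed error covariance into the noise term; the system sizes and error model are hypothetical.

    import numpy as np

    def robust_mmse_detect(y, H_hat, C_err, N0, Es=1.0, points=np.array([-1.0 + 0j, 1.0 + 0j])):
        """Linear MMSE detection that treats the channel-estimation error as colored noise.

        y     : (M,) received vector.
        H_hat : (M, K) estimated channel matrix.
        C_err : (M, M) covariance of the residual term (H - H_hat) @ x.
        N0    : thermal-noise variance per receive antenna.
        """
        M = H_hat.shape[0]
        C = C_err + N0 * np.eye(M)                       # effective (colored) noise covariance
        G = Es * H_hat.conj().T @ np.linalg.inv(Es * (H_hat @ H_hat.conj().T) + C)
        x_soft = G @ y
        # Hard decision: nearest constellation point per stream (BPSK here for brevity).
        return points[np.argmin(np.abs(x_soft[:, None] - points[None, :]), axis=1)]

    # Toy usage: 8 receive antennas, 2 BPSK streams, imperfect CSI.
    rng = np.random.default_rng(2)
    M, K, sigma_e2, N0 = 8, 2, 0.05, 0.1
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    E = np.sqrt(sigma_e2 / 2) * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
    x = rng.choice([-1.0, 1.0], size=K).astype(complex)
    y = H @ x + np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    C_err = K * sigma_e2 * np.eye(M)                     # i.i.d. estimation errors: scaled identity
    print(robust_mmse_detect(y, H - E, C_err, N0), x)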
Next, an algorithm based on expectation propagation is proposed for noncoherent symbol detection in large-scale SIMO systems. It is verified through simulation that, in terms of SER, the proposed detector outperforms the pilot-based coherent MMSE detector for blocks as small as two symbols. This makes the proposed detector suitable for fast-fading channels with very short coherence times. In addition, the SER performance of this detector converges to that of the optimum ML receiver as the block size increases. Finally, it is shown that for Rician fading channels, knowledge of the fading parameters is not required to achieve these SER gains.
A channel estimation method was recently proposed for multi-cell massive MIMO systems based on the eigenvalue decomposition of the correlation matrix of the received vectors (EVD-based). This algorithm, however, is sensitive to the size of the antenna array as well as to the number of samples used in the evaluation of the correlation matrix. As the final work in this dissertation, we present a noncoherent channel estimation and symbol detection scheme for multi-cell massive MIMO systems based on expectation propagation. The proposed algorithm is initialized with the channel estimate from the EVD-based method. Simulation results show that after a few iterations, the EP-based algorithm significantly outperforms the EVD-based method in both channel estimation and symbol error rate. Moreover, the EP-based algorithm is not sensitive to the antenna array size or to inaccuracies in the sample correlation matrix.

215
Micromagnetic Modeling of Write Heads for High-Density and High-Data-Rate Perpendicular Recording
Bai, Daniel Zhigang, 01 August 2004
In this dissertation, three-dimensional dynamic micromagnetic modeling based on the Landau-Lifshitz equation with Gilbert damping has been used to study the magnetic processes of thin-film write heads for high-density and high-data-rate perpendicular magnetic recording.
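For reference, the governing dynamics named here can be written in the standard Gilbert form, with gamma the gyromagnetic ratio, alpha the Gilbert damping constant, M_s the saturation magnetization, and H_eff the effective field:

    \frac{\partial \mathbf{M}}{\partial t} = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}} + \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}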
In the extremely narrow track width regime, for example around or below 100 nm, the head field is found to suffer significant loss relative to the ideal 4πMs value for perpendicular recording. In the meantime, the remanent head field becomes significant, posing a potential issue of head remanence erasure.
Using micromagnetic modeling, various novel head designs have been investigated. For an overall head dimension of around one micron, the shape and structure of the head yoke are found to greatly affect the head magnetization reversal performance, and therefore the field rise time, especially at moderate driving currents. Laminating the head across its thickness, both in the yoke and in the pole tip, yields excellent field reversal speed; more importantly, it suppresses the remanent field very well, making lamination a simple and effective approach to robust near-zero remanence. A single-pole head design with a stitched pole tip and a recessed side yoke can produce a significantly enhanced head field compared to a traditional single-pole head. Various head design parameters have been examined via micromagnetic modeling.
Using the dynamic micromagnetic model, the magnetization reversal processes at data rates beyond 1 Gbit/s have been studied. The excitation of spin waves during the head field reversal and the subsequent energy dissipation were found to be important in dictating the field rise time. Both the drive current rise time and the Gilbert damping constant affect the field reversal speed.
The effect of the soft underlayer (SUL) on both the write and the read processes has been studied via micromagnetic modeling. Although it is relatively easy to fulfill the requirement of magnetic imaging in writing, the SUL deteriorates the readback performance and lowers the achievable linear recording density. Various parameters have been investigated and solutions have been proposed.
The effect of stress in magnetostrictive thin films has been studied both analytically and by simulation. The micromagnetic model has been extended to incorporate the stress-induced anisotropy effect. Simulations were performed both on a magnetic thin film under stress, to show the static domains, and on a conceptual write head design that utilizes the stress-induced anisotropy to achieve better performance.
A self-consistent model based on energy minimization has been developed to model both the magnetization and the stress-strain states of a magnetic thin film.

216
Experimental Study of Frequency Oscillations in Islanded Power System
Wellman, Kevin Daniel, 14 June 2017
Since the introduction of power electronics to the grid, the power system has changed quickly. Fault detection and removal are performed more accurately and with faster response times, and non-inertia-driven loads have been added. This means that stability must remain a main topic of concern in order to maintain a stable, synchronized grid.
In this thesis, a laboratory was designed, constructed, and tested for the purpose of studying transient stability in power systems. Many different options were considered and researched, but the focus of this thesis is to describe the options chosen. The lab must be safe to operate and work around, be flexible enough to perform many different types of experiments, and accurately simulate a power system.
The completed lab was then tested to observe the impact of a power system stabilizer (PSS) on an unsynchronized generator connected to a static load. The lab performed as designed, which allows for the introduction of more machines to create the IEEE 14-bus grid.

217
A Frequency Hopping Method to Detect Replay Attacks
Tang, Guofu, 24 January 2017
The application of information technology in networked control systems introduces potential threats to future industrial control systems. Malicious attacks undermine the security of networked control systems and can cause huge economic losses. This thesis studies a particular cyber attack called the replay attack, which is motivated by the Stuxnet worm allegedly used against nuclear facilities in Iran. To detect replay attacks, this thesis injects a narrow-band signal into the control signal and adopts a spectrum estimation approach to test the estimation residue. In order to keep attackers from learning the injected signal, frequency hopping is employed to conceal the frequency of the narrow-band signal. The proposed detection method is illustrated and examined through simulation studies, which show good detection rates and security.
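The thesis's exact detection statistic is not given in this abstract. The sketch below illustrates the general idea under simple assumptions: the controller injects a small sinusoidal narrow-band signal whose frequency hops according to a secret seed, and the monitor checks the estimation residue for energy at the current hop frequency (a replayed recording would carry a stale hop, or none at all). The hop set, amplitude, and threshold are illustrative only.

    import numpy as np

    def hop_frequency(seed, epoch, candidates=(0.0625, 0.125, 0.1875, 0.25)):
        """Pick the watermark frequency (cycles/sample) for this epoch from a secret seed.
        The candidates are chosen to fall on exact FFT bins for a 512-sample window."""
        return np.random.default_rng(seed + epoch).choice(candidates)

    def watermark(n, freq, amplitude=0.1):
        """Narrow-band signal added on top of the control signal."""
        return amplitude * np.sin(2 * np.pi * freq * np.arange(n))

    def detect_replay(residue, freq, threshold=10.0):
        """Declare a replay if the residue lacks energy at the current hop frequency."""
        spectrum = np.abs(np.fft.rfft(residue)) ** 2
        bin_idx = np.argmin(np.abs(np.fft.rfftfreq(len(residue)) - freq))
        return spectrum[bin_idx] < threshold * np.median(spectrum)

    # Toy usage: an honest residue echoes the watermark; a replayed one does not.
    n, seed, epoch = 512, 1234, 7
    f = hop_frequency(seed, epoch)
    noise = np.random.default_rng(0).normal(0, 0.05, n)
    honest_residue = watermark(n, f) + noise      # current watermark survives in the residue
    replayed_residue = noise                      # recorded data lacks the current watermark
    print(detect_replay(honest_residue, f), detect_replay(replayed_residue, f))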

218
A Mixed Consensus and Fuzzy Approach to Position Control of Four-Wheeled Vehicles
Hasheminezhad, Bita, 24 January 2017
Autonomous driving is a growing domain of intelligent transportation systems that uses communication to autonomously control cooperative vehicles. This thesis presents a multi-agent solution to the platoon control problem. First, an adaptive controller is applied to the linearized longitudinal dynamics of each vehicle to ensure that the vehicles are able to track their reference velocities. Then, an agent-based consensus approach is studied which enables multiple vehicles to drive together, with each vehicle safely following its predecessor at a close distance. To deal with unexpected events, a fuzzy controller is added to the reference signal of the consensus controller. Simulation results are provided to validate the effectiveness of the approach in normal situations and in cases where an agent brakes suddenly or receives a wrong reference signal.
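The consensus protocol, adaptive law, and fuzzy rules are not specified in this abstract. The sketch below illustrates only the predecessor-following consensus idea with a simple spacing-and-velocity feedback law; the gains kp and kv and the desired gap are hypothetical values chosen for illustration.

    import numpy as np

    def platoon_step(pos, vel, v_ref, dt=0.05, kp=1.0, kv=2.0, gap=10.0):
        """One simulation step of a predecessor-following consensus law.

        Each follower accelerates to close its spacing error to the vehicle ahead and
        to match that vehicle's velocity; the leader simply tracks the reference speed.
        """
        acc = np.zeros_like(vel)
        acc[0] = kv * (v_ref - vel[0])                     # leader: velocity tracking
        for i in range(1, len(pos)):
            spacing_error = (pos[i - 1] - pos[i]) - gap    # keep a constant gap to the predecessor
            acc[i] = kp * spacing_error + kv * (vel[i - 1] - vel[i])
        return pos + dt * vel, vel + dt * acc

    # Toy usage: a 4-vehicle platoon starting from rest with uneven spacing.
    pos, vel = np.array([40.0, 25.0, 14.0, 0.0]), np.zeros(4)
    for _ in range(2000):
        pos, vel = platoon_step(pos, vel, v_ref=20.0)
    print(np.round(-np.diff(pos), 1), np.round(vel, 1))    # gaps tend to 10 m, speeds to 20 m/s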

219
Scheduling and Tuning Kernels for High-performance on Heterogeneous Processor Systems
Fang, Ye, 26 January 2017
Accelerated parallel computing techniques using devices such as GPUs and Xeon Phis (along with CPUs) offer promising solutions for extending the cutting edge of high-performance computer systems. A significant performance improvement can be achieved when suitable workloads are handled by the accelerator, while traditional CPUs handle the workloads that are not well suited for accelerators. The combination of multiple types of processors in a single computer system is referred to as a heterogeneous system.
This dissertation addresses tuning and scheduling issues in heterogeneous systems. The first section presents work on tuning scientific workloads on three different types of processors: multi-core CPUs, the Xeon Phi massively parallel processor, and NVIDIA GPUs; common tuning methods and platform-specific tuning techniques are presented. Analysis is then performed to demonstrate the performance characteristics of the heterogeneous system on different input data. This section of the dissertation is part of the GeauxDock project, which prototyped several state-of-the-art bioinformatics algorithms and delivered a fast molecular docking program.
The second section of this work studies the performance model of the GeauxDock computing kernel. Specifically, the work extracts features from the input data set and the target systems, and then uses various regression models to predict the computation time. This helps explain why a certain processor is faster for certain sets of tasks, and it also provides the essential information needed for scheduling on heterogeneous systems.
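The features and regression models used in the dissertation are not listed in this abstract. As a minimal sketch of the approach, the example below fits an ordinary least-squares model that predicts kernel run time from a few hypothetical task features; the feature names and the numbers are made up for illustration.

    import numpy as np

    # Hypothetical training data: each row holds features of one docking task
    # (e.g., ligand atom count, receptor grid points, rotatable bonds), and
    # runtime holds the measured kernel time in milliseconds on one processor.
    features = np.array([[24, 1.2e5, 4],
                         [48, 2.0e5, 7],
                         [36, 1.6e5, 5],
                         [60, 2.5e5, 9],
                         [30, 1.4e5, 6]], dtype=float)
    runtime = np.array([3.1, 6.0, 4.4, 7.9, 3.9])

    # Ordinary least-squares fit with an intercept term.
    X = np.hstack([features, np.ones((len(features), 1))])
    coef, *_ = np.linalg.lstsq(X, runtime, rcond=None)

    def predict_runtime(task_features):
        """Predicted kernel time (ms) for a new task on the same processor."""
        return float(np.append(np.asarray(task_features, dtype=float), 1.0) @ coef)

    print(round(predict_runtime([40, 1.8e5, 6]), 2))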
In addition, this dissertation investigates a high-level task scheduling framework for heterogeneous processor systems in which the pros and cons of the different heterogeneous processors can complement each other, so that higher performance can be achieved on heterogeneous computing systems. A new scheduling algorithm with four innovations is presented: Ranked Opportunistic Balancing (ROB), Multi-subject Ranking (MR), Multi-subject Relative Ranking (MRR), and Automatic Small Tasks Rearranging (ASTR). The new algorithm consistently outperforms previously proposed algorithms, with better scheduling results, lower computational complexity, and more consistent results over a range of performance prediction errors.
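The ROB, MR, MRR, and ASTR mechanisms are not described in this abstract, so they are not reproduced here. For orientation only, the sketch below shows a generic earliest-finish-time baseline that consumes per-processor runtime predictions of the kind discussed above; it is a common greedy heuristic, not the dissertation's algorithm.

    def greedy_schedule(tasks, processors, predict):
        """Assign each task to the processor with the earliest predicted finish time.

        tasks      : list of task identifiers.
        processors : list of processor names, e.g. ["cpu", "gpu"].
        predict    : callable (task, processor) -> predicted run time.
        """
        ready_time = {p: 0.0 for p in processors}
        plan = []
        for task in tasks:
            best = min(processors, key=lambda p: ready_time[p] + predict(task, p))
            start = ready_time[best]
            ready_time[best] = start + predict(task, best)
            plan.append((task, best, start))
        return plan, max(ready_time.values())    # schedule and its makespan

    # Toy usage with a hypothetical prediction table (seconds).
    pred = {("t1", "cpu"): 4.0, ("t1", "gpu"): 1.0,
            ("t2", "cpu"): 2.0, ("t2", "gpu"): 3.0,
            ("t3", "cpu"): 5.0, ("t3", "gpu"): 1.5}
    plan, makespan = greedy_schedule(["t1", "t2", "t3"], ["cpu", "gpu"], lambda t, p: pred[(t, p)])
    print(plan, makespan)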
Finally, this work extends the heterogeneous task scheduling algorithm to handle power capping. It demonstrates that a power-aware scheduler significantly improves power efficiency and saves energy. This suggests that, in addition to performance benefits, heterogeneous systems may have certain advantages in overall power efficiency.

220
A Performance Model and Optimization Strategies for Automatic GPU Code Generation of PDE Systems Described by a Domain-Specific Language
Hu, Yue, 23 August 2016
Stencil computations are a class of algorithms operating on multi-dimensional arrays, also called grid functions (GFs), which update array elements using their nearest neighbors. This type of computation forms the basis for computer simulations across almost every field of science, such as computational fluid dynamics. Its mostly regular data access patterns potentially enable it to take advantage of a GPU's high computation and data bandwidth. However, manual GPU programming is time-consuming and error-prone, and it requires in-depth knowledge of GPU architecture and programming. To overcome the difficulties of manual programming, a number of stencil frameworks have been developed that automatically generate GPU code from user-written stencil code, usually in a domain-specific language. Previous stencil frameworks demonstrate the feasibility of this approach, but real stencil applications introduce a set of unprecedented challenges.
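As a minimal illustration of a stencil update (not taken from the dissertation), the sketch below applies a 5-point nearest-neighbor average to a 2-D grid function.

    import numpy as np

    def five_point_stencil(u):
        """One Jacobi-style sweep: each interior point becomes the average of its
        four nearest neighbors (a classic 5-point stencil on a 2-D grid function)."""
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
        return v

    # Toy usage: smooth a random grid while keeping its boundary values fixed.
    grid = np.random.default_rng(3).random((6, 6))
    print(np.round(five_point_stencil(grid), 2))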
This dissertation is based on the Chemora stencil framework and aims to better handle real stencil applications, especially large stencil calculations. Large calculations usually consist of dozens of GFs with a variety of stencil patterns, resulting in an extremely large number of possible code-generation choices. First, we propose an algorithm that maps a calculation onto one or more kernels by minimizing off-chip memory accesses while maintaining relatively high thread-level parallelism. Second, we propose an efficiency-based buffering algorithm which operates by scoring a change in the buffering strategy of a GF using a performance estimate and the resource usage. Let b (here, 5) denote the number of buffering strategies the framework supports; with this algorithm, a near-optimal solution can be found in (b-1)N(N+1)/2 steps, instead of b^N steps, for a calculation with N GFs. Third, we wrote a set of microbenchmarks to explore and measure performance-critical GPU microarchitecture features and parameters for better performance modeling. Finally, we propose an analytic performance model to predict the execution time.
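To make the scale of that reduction concrete, take a hypothetical calculation with N = 20 grid functions (in the dozens, as mentioned above) and b = 5 strategies: exhaustive search would examine 5^20, roughly 9.5 x 10^13, combinations, while the efficiency-based algorithm needs (5-1)*20*21/2 = 840 steps.

    b, N = 5, 20
    print(b**N, (b - 1) * N * (N + 1) // 2)    # 95367431640625 vs. 840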