OPTIMIZATION OF THE GENETIC ALGORITHM IN THE SHEHERAZADE WARGAMING SIMULATOR. Momen, Faisal. January 2011.
Stability and Support Operations (SASO) continue to play an important role in modern military exercises. The Sheherazade simulation system was designed to facilitate SASO-type mission planning exercises by rapidly generating and evaluating hundreds of thousands of alternative courses of action (COAs). The system comprises a coevolution engine, which employs a genetic algorithm (GA) to generate the COAs for each side in a multi-sided conflict, and a wargamer, which models subjective factors such as regional attitudes and faction animosities to evaluate the COAs' effectiveness. This dissertation extends earlier work on Sheherazade in three ways: 1) the GA and coevolution framework have been parallelized for improved performance on current multi-core platforms; 2) the effects of various algorithm parameters, both general and specific to Sheherazade, have been analyzed; and 3) alternative search techniques reflecting recent developments in the field have been evaluated for their capacity to improve the quality of the results.
Improving the Error Floor Performance of LDPC Codes with Better Codes and Better Decoders. Nguyen, Dung Viet. January 2012.
Error correcting codes are used in virtually all communication systems to ensure reliable transmission of information. In 1948, Shannon established an upper bound on the maximum rate at which information can be transmitted reliably over a noisy channel. Transmitting information reliably at a rate close to this theoretical limit, known as the channel capacity, has been the goal of channel coding scientists ever since. The rediscovery of low-density parity-check (LDPC) codes in the 1990s brought renewed excitement to the coding community. LDPC codes are interesting because they can approach channel capacity under sub-optimum decoding algorithms whose complexity is linear in the code length. Unsurprisingly, LDPC codes quickly gained popularity in practical applications such as magnetic storage, wireless, and optical communications. One of the most important and challenging problems in LDPC code research is the study and analysis of the error floor phenomenon: an abrupt degradation in the frame error rate performance of LDPC codes in the high signal-to-noise ratio region. The error floor is harmful because it prevents the LDPC decoder from reaching very low probabilities of decoding failure, an important requirement for many applications. Not long after the rediscovery of LDPC codes, scientists established that the error floor is caused by certain harmful structures, most commonly known as trapping sets, in the Tanner representation of a code. Since then, the study of the error floor has mostly consisted of three major problems: 1) estimating the error floor; 2) constructing LDPC codes with low error floors; and 3) designing decoders that are less susceptible to error floor.
Although some parts of this dissertation can be used as important elements in error floor estimation, our main contributions are a novel method for constructing LDPC codes with low error floor and a novel class of low complexity decoding algorithms that can collectively alleviate error floor. These contributions are summarized as follows. A method to construct LDPC codes with low error floors on the binary symmetric channel is presented. Codes are constructed so that their Tanner graphs are free of certain small trapping sets. These trapping sets are selected from the Trapping Set Ontology for the Gallager A/B decoder. They are selected based on their relative harmfulness for a given decoding algorithm. We evaluate the relative harmfulness of different trapping sets for the sum-product algorithm by using the topological relations among them and by analyzing the decoding failures on one trapping set in the presence or absence of other trapping sets. We apply this method to construct structured LDPC codes. To facilitate the discussion, we give a new description of structured LDPC codes whose parity-check matrices are arrays of permutation matrices. This description uses Latin squares to define a set of permutation matrices that have disjoint support and to derive a simple necessary and sufficient condition for the Tanner graph of a code to be free of four-cycles. A new class of bit flipping algorithms for LDPC codes over the binary symmetric channel is proposed. Compared to the regular (parallel or serial) bit flipping algorithms, the proposed algorithms employ one additional bit at a variable node to represent its "strength." The introduction of this additional bit allows an increase in the guaranteed error correction capability. An additional bit is also employed at a check node to capture information which is beneficial to decoding. A framework for failure analysis and selection of two-bit bit flipping algorithms is provided. 
The main component of this framework is the (re)definition of trapping sets, which are the most "compact" Tanner graphs that cause decoding failures of an algorithm. A recursive procedure to enumerate trapping sets is described. This procedure is the basis for selecting a collection of algorithms that work well together. It is demonstrated that decoders which employ a properly selected group of the proposed algorithms operating in parallel can offer high speed and low error floor decoding.
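The classic single-bit flip rule that these two-bit algorithms generalize fits in a few lines. The sketch below is illustrative only: it uses the serial (one flip per iteration) variant and a small (7,4) Hamming parity-check matrix as a stand-in for an LDPC code, both chosen for brevity rather than taken from the dissertation.

```python
import numpy as np

def serial_bit_flip(H, y, max_iters=50):
    """Serial bit-flipping decoding over the binary symmetric channel.

    H: (m, n) parity-check matrix over GF(2); y: hard-decision received word.
    Each iteration flips the single bit involved in the largest number of
    unsatisfied checks, stopping when the syndrome is zero."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2              # 1 marks an unsatisfied check
        if not syndrome.any():
            break                         # valid codeword reached
        unsat = H.T @ syndrome            # unsatisfied checks per variable node
        j = int(np.argmax(unsat))
        if unsat[j] == 0:
            break                         # stuck: no bit looks suspicious
        x[j] ^= 1                         # flip the worst offender
    return x

# Toy example: (7,4) Hamming code, single channel error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)
received[2] = 1                           # flip one bit of the all-zero codeword
decoded = serial_bit_flip(H, received)
```

On a real LDPC code, the error configurations on which a loop like this stalls or oscillates are precisely the trapping sets discussed above.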
Geometric Modeling and Optimization Over Regular Domains for Graphics and Visual Computing. Wan, Shenghua. 09 September 2013.
The effective construction of parametric representation of complicated geometric objects can facilitate many design,
analysis, and simulation tasks in Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), and Computer-Aided Engineering (CAE).
Given a 3D shape, the procedure of finding such a parametric representation upon a canonical domain is called geometric parameterization.
Regular geometric regions, such as polycubes and spheres, are desirable domains for parameterization.
Parametric representations defined upon regular geometric domains have many desirable mathematical properties
and can facilitate or simplify various surface/solid modeling and processing computation.
This dissertation studies the construction of parameterization on regular geometric domains and explores their applications in shape modeling and computer-aided design.
Specifically, we study (1) the surface parameterization on the spherical domain for closed genus-zero surfaces;
(2) the surface parameterization on the polycube domain for general closed surfaces;
and (3) the volumetric parameterization for 3D-manifolds embedded in 3D Euclidean space.
We propose novel computational models to solve these geometric problems.
Our computational models reduce to nonlinear optimizations with various geometric constraints.
Hence, we also need to explore effective optimization algorithms.
The main contributions of this dissertation are threefold.
(1) We developed an effective progressive spherical parameterization algorithm, with an efficient nonlinear optimization scheme subject to the spherical constraint.
Compared with state-of-the-art spherical mapping algorithms, our algorithm demonstrates greater efficiency,
lower distortion, and guaranteed bijectiveness, and we show its applications in spherical harmonic decomposition and shape analysis.
(2) We propose the first topology-preserving polycube domain optimization algorithm, which simultaneously optimizes the polycube domain together
with the parameterization to balance mapping distortion against domain simplicity.
We develop effective nonlinear geometric optimization algorithms dealing with variables with and without derivatives.
This polycube parameterization algorithm can benefit regular quadrilateral mesh generation and cross-surface parameterization.
(3) We develop a novel quaternion-based optimization framework for 3D frame field construction and volumetric parameterization computation.
We demonstrate that our constructed 3D frame field is smoother than those produced by state-of-the-art algorithms,
and is effective in guiding low-distortion volumetric parameterization and high-quality hexahedral mesh generation.
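As a concrete illustration of optimization under a spherical constraint (the simplest of the constraints above), the sketch below runs projected gradient descent on a toy repulsion energy: take an unconstrained gradient step, then re-project every point onto the unit sphere. The energy and point set are made up for illustration and are not the dissertation's objective.

```python
import numpy as np

def project_to_sphere(X):
    """Re-project each row of X onto the unit sphere (the constraint set)."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def spread_on_sphere(X, steps=2000, lr=0.1):
    """Projected gradient descent under a spherical constraint.

    Minimizes the sum of pairwise dot products (a repulsion energy):
    unconstrained gradient step, then re-projection onto the sphere."""
    X = project_to_sphere(np.asarray(X, dtype=float))
    for _ in range(steps):
        S = X.sum(axis=0)
        grad = S - X                 # d/dx_i of sum_{j != i} x_i . x_j
        X = project_to_sphere(X - lr * grad)
    return X

# Four points pushed apart on the sphere settle into a configuration
# whose centroid is at the origin (the energy minimum).
X0 = np.array([[1.0, 0.1, 0.0], [-0.2, 1.0, 0.1],
               [0.1, -0.3, 1.0], [0.5, 0.4, -0.8]])
X = spread_on_sphere(X0)
```

The same step-then-reproject pattern extends to the vertex positions of a genus-zero mesh, with a spring or distortion energy replacing the toy objective.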
Spectrum Sensing, Spectrum Monitoring, and Security in Cognitive Radios. Soltanmohammadi, Erfan. 10 June 2014.
Spectrum sensing is a key function of cognitive radios and is used to determine whether a primary user is present in the channel or not. In this dissertation, we formulate and solve the generalized likelihood ratio test (GLRT) for spectrum sensing when both the primary user transmitter and the secondary user receiver are equipped with multiple antennas. We do not assume any prior information about the channel statistics or the primary user's signal structure. Two cases are considered: when the secondary user knows the noise energy and when it does not. The final test statistics derived from the GLRT are based on the eigenvalues of the sample covariance matrix. In-band spectrum sensing in overlay cognitive radio networks requires that the secondary users (SUs) periodically suspend their communication in order to determine whether the primary user (PU) has started to utilize the channel. In contrast, in spectrum monitoring the SU can detect the emergence of the PU from its own receiver statistics, such as the receiver error count (REC). We investigate the problem of spectrum monitoring in the presence of fading, where the SU employs diversity combining to mitigate channel fading effects. We show that a decision statistic based on the REC alone does not provide good performance, and we then introduce new decision statistics based on the REC and the combiner coefficients. The new decision statistic is shown to achieve significant improvement in the case of maximal ratio combining (MRC). Next, we consider the problem of cooperative spectrum sensing in cognitive radio networks (CRNs) in the presence of misbehaving radios. We propose a novel approach based on the iterative expectation maximization (EM) algorithm to detect the presence of the primary users, to classify the cognitive radios, and to compute their detection and false alarm probabilities.
We also consider the problem of centralized binary hypothesis testing in a cognitive radio network consisting of multiple classes of cognitive radios, where the radios are classified according to the probability density function (PDF) of their received data, as observed at the fusion center, under each hypothesis.
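A common blind eigenvalue-based statistic (not necessarily the exact GLRT statistic derived in this dissertation) is the ratio of the largest to the smallest eigenvalue of the sample covariance matrix: under noise only the eigenvalues are nearly equal, while a primary-user signal inflates the top eigenvalue. The sketch below uses a made-up equal-gain channel and unit-variance noise.

```python
import numpy as np

def eigenvalue_ratio_statistic(Y):
    """Blind detection statistic from the sample covariance eigenvalues.

    Y: (num_antennas, num_samples) complex baseband samples at the SU receiver.
    Returns the max/min eigenvalue ratio of the sample covariance matrix."""
    R = Y @ Y.conj().T / Y.shape[1]      # sample covariance matrix
    eig = np.linalg.eigvalsh(R).real     # ascending order
    return eig[-1] / eig[0]

rng = np.random.default_rng(0)
M, N = 4, 1000                           # antennas, samples
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
h = np.ones((M, 1))                      # idealized flat channel from PU to SU array
s = rng.standard_normal((1, N))          # unit-power PU signal
t_h0 = eigenvalue_ratio_statistic(noise)          # PU absent: ratio near 1
t_h1 = eigenvalue_ratio_statistic(h @ s + noise)  # PU present: ratio inflated
```

In practice the threshold separating the two hypotheses is set from the noise-only distribution of the statistic for a target false alarm probability.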
A System Approach to Investing In Uncertain Markets. Khademi, Iman. 11 June 2014.
We consider the problem of trend-following in the US stock market and propose a
combined economic and technical model to approach this problem. A bank of
linear and nonlinear, discrete-time, low-pass filters with different sampling rates
is used to generate timing signals for US stock market indexes such as NASDAQ
Composite and S&P 500. These timing signals help us find the appropriate times
to step in or out of the market. Back-testing and real-time implementation results
along with the risk analysis validate our model.
According to the trend of the market, we may adopt a long or short position.
If we conclude that the market is in an uptrend (rising prices) then, we buy some
shares of a stock to sell them for a higher price in the future (long position). On the
other hand, in a market downtrend (falling prices), we may borrow a number of
shares and sell them outright to repurchase them for a lower price in the future (short
selling). The purpose of the market timing is to recognize the current trend of the
market and to find the appropriate times to step in or out of the market.
We do not consider market timing for the stocks of individual companies due to
the high sensitivity of daily prices to news, the performance of their competitors,
the conditions of the economic sector they belong to, and many other sources of
randomness. Instead, we consider the timing problem for the large market indexes
such as NASDAQ Composite and S&P 500 that are weighted averages of the price
of many companies from several economic sectors. Therefore, we use the daily
index value and volume (total number of trades) for a large market index in place
of an individual company. Such timing signals would be suitable for investing in
exchange traded funds (ETFs).
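A minimal example of a timing signal built from two low-pass filters is a moving-average crossover: stay in the market while the fast average of the index is above the slow one. The window lengths and the long/flat (rather than long/short) convention below are illustrative assumptions, not the thesis's filter design.

```python
import numpy as np

def timing_signal(prices, fast=10, slow=50):
    """Long/flat timing from two low-pass (moving-average) filters.

    Returns +1 (be in the market) wherever the fast moving average of the
    index is above the slow one, else 0; a long/short variant would use -1."""
    p = np.asarray(prices, dtype=float)

    def sma(x, w):                        # causal simple moving average
        out = np.full_like(x, np.nan)
        c = np.cumsum(np.insert(x, 0, 0.0))
        out[w - 1:] = (c[w:] - c[:-w]) / w
        return out

    sig = (sma(p, fast) > sma(p, slow)).astype(int)
    sig[:slow - 1] = 0                    # warm-up period is undefined: stay out
    return sig

# Synthetic index: 100 rising days followed by 100 falling days.
prices = np.concatenate([np.linspace(100, 200, 100), np.linspace(200, 100, 100)])
signal = timing_signal(prices)
```

A real system would combine several such filters at different sampling rates and add risk controls, as the abstract describes.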
Localization and Security Algorithms for Wireless Sensor Networks and the Usage of Signals of Opportunity. Chacon Rojas, Gustavo Andres. 09 May 2014.
In this dissertation we consider the problem of localization of wireless devices in environments
and applications where GPS (Global Positioning System) is not a viable
option. The first part of the dissertation studies a novel positioning system based
on narrowband radio frequency (RF) signals of opportunity, and develops near optimum
estimation algorithms for localization of a mobile receiver. It is assumed that a
reference receiver (RR) with known position is available to aid with the positioning
of the mobile receiver (MR). The new positioning system is reminiscent of GPS and
involves two similar estimation problems. The first is localization using estimates
of time-difference of arrival (TDOA). The second is TDOA estimation based on the
received narrowband signals at the RR and the MR. In both cases near optimum
estimation algorithms are developed in the sense of maximum likelihood estimation
(MLE) under some mild assumptions, and both algorithms compute approximate
MLEs in the form of a weighted least-squares (WLS) solution. The proposed positioning
system is illustrated with simulation studies based on FM radio signals.
The numerical results show that the position errors are comparable to those of other
positioning systems, including GPS.
Next, we present a novel algorithm for localization of wireless sensor networks
(WSNs) called distributed randomized gradient descent (DRGD), and prove that in
the case of noise-free distance measurements, the algorithm converges and provides
the true location of the nodes. For noisy distance measurements, the convergence
properties of DRGD are discussed and an error bound on the location estimation
error is obtained. In contrast to several recently proposed methods, DRGD does
not require that blind nodes be contained in the convex hull of the anchor nodes,
and can accurately localize the network with only a few anchors. Performance of
DRGD is evaluated through extensive simulations and compared with three other algorithms,
namely the relaxation-based second order cone programming (SOCP), the
simulated annealing (SA), and the semi-definite programming (SDP) procedures. Similar
to DRGD, SOCP and SA are distributed algorithms, whereas SDP is centralized.
The results show that DRGD successfully localizes the nodes in all the cases, whereas
in many cases SOCP and SA fail. We also present a modification of DRGD for mobile
WSNs and demonstrate the efficacy of DRGD for localization of mobile networks with
several simulation results. We then extend this method for secure localization in the
presence of outlier distance measurements or distance spoofing attacks. In this case
we present a centralized algorithm to estimate the position of the nodes in WSNs,
where outlier distance measurements may be present.
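The flavor of gradient descent localization can be seen on a single blind node with noise-free range measurements. DRGD itself is distributed and randomized; this centralized single-node toy is only a sketch, and the anchor layout is made up.

```python
import numpy as np

def localize_node(anchors, dists, x0, steps=2000, lr=0.05):
    """Gradient descent on the squared range-error cost for one blind node.

    anchors: (k, 2) known positions; dists: measured distances to each anchor.
    Minimizes sum_i (||x - a_i|| - d_i)^2 over the node position x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        diff = x - anchors                          # (k, 2) offsets to anchors
        r = np.linalg.norm(diff, axis=1)            # current distances
        # gradient of each term: 2 (r_i - d_i) * (x - a_i) / r_i
        grad = 2 * ((r - dists) / np.maximum(r, 1e-9))[:, None] * diff
        x = x - lr * grad.sum(axis=0)
    return x

anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_pos = np.array([0.3, 0.4])
dists = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free ranges
est = localize_node(anchors, dists, x0=[0.5, 0.5])
```

In the noise-free case the cost has its global minimum at the true position, which is the setting in which DRGD's convergence is proved above.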
Channel Estimation and Symbol Detection In Massive MIMO Systems Using Expectation Propagation. Ghavami, Kamran. 24 May 2017.
The advantages envisioned from using large antenna arrays have made massive multiple-input multiple-output (massive MIMO) systems a promising technology for future wireless standards. Despite the advantages that massive MIMO systems provide, increasing the number of antennas introduces new technical challenges that need to be resolved. In particular, symbol detection is one of the key challenges in massive MIMO. Obtaining accurate channel state information (CSI) for the extremely large number of channels involved is a difficult task and consumes significant resources. Therefore, for massive MIMO systems, coherent detectors must be able to cope with highly imperfect CSI. More importantly, non-coherent schemes, which do not rely on CSI for symbol detection, become very attractive.
Expectation propagation (EP) has recently been proposed as a low-complexity algorithm for symbol detection in massive MIMO systems, where its performance is evaluated on the premise that perfect channel state information is available at the receiver. However, in practical systems exact CSI is not available, for a variety of reasons including channel estimation errors, quantization errors, and aging. In this work we study the performance of EP in the presence of imperfect CSI due to channel estimation errors and show that in this case the EP detector experiences significant performance loss. Moreover, the EP detector shows a higher sensitivity to channel estimation errors in the high signal-to-noise ratio (SNR) region, where the rate of its performance improvement decreases. We investigate this behavior of the EP detector and propose a modified EP detector for colored noise that utilizes the correlation matrix of the channel estimation error. Simulation results verify that the modified algorithm is robust against imperfect CSI, that its performance is significantly improved over the EP algorithm, particularly in the higher SNR regions, and that for the modified detector the slope of the symbol error rate (SER) vs. SNR curves is similar to the case of perfect CSI.
Next, an algorithm based on expectation propagation is proposed for noncoherent symbol detection in large-scale SIMO systems. It is verified through simulation that, in terms of SER, the proposed detector outperforms the pilot-based coherent MMSE detector for blocks as small as two symbols. This makes the proposed detector suitable for fast-fading channels with very short coherence times. In addition, the SER performance of this detector converges to that of the optimum ML receiver as the block size increases. Finally, it is shown that for Rician fading channels, knowledge of the fading parameters is not required for achieving the SER gains.
A channel estimation method was recently proposed for multi-cell massive MIMO systems based on the eigenvalue decomposition of the correlation matrix of the received vectors (EVD-based). This algorithm, however, is sensitive to the size of the antenna array as well as the number of samples used in evaluating the correlation matrix. As the final work in this dissertation, we present a noncoherent channel estimation and symbol detection scheme for multi-cell massive MIMO systems based on expectation propagation. The proposed algorithm is initialized with the channel estimate from the EVD-based method. Simulation results show that after a few iterations, the EP-based algorithm significantly outperforms the EVD-based method in both channel estimation and symbol error rate. Moreover, the EP-based algorithm is not sensitive to the antenna array size or to inaccuracies of the sample correlation matrix.
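An EP detector is too long to sketch here, but the imperfect-CSI setup described above can be illustrated with a plain linear MMSE detector: detect BPSK symbols once with the true channel and once with a noisy channel estimate. The dimensions, SNR, and estimation-error level below are arbitrary assumptions, not values from the dissertation.

```python
import numpy as np

def lmmse_detect(y, H, noise_var):
    """Linear MMSE detection of BPSK symbols: W = (H^H H + sigma^2 I)^{-1} H^H."""
    K = H.shape[1]
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(K), H.conj().T)
    return (W @ y).real > 0                      # hard BPSK decisions

rng = np.random.default_rng(1)
M, K, N = 64, 8, 2000                            # receive antennas, users, symbols
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
bits = rng.integers(0, 2, (K, N))
x = 2.0 * bits - 1                               # BPSK symbols
snr = 10.0
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * snr)
y = H @ x + noise

# Hypothetical channel estimation error added to the true channel.
err = 0.3 * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape)) / np.sqrt(2)
ser_perfect = np.mean(lmmse_detect(y, H, 1 / snr) != bits)
ser_imperfect = np.mean(lmmse_detect(y, H + err, 1 / snr) != bits)
```

The gap between the two SER values is the kind of degradation the modified EP detector above is designed to close by exploiting the estimation-error correlation matrix.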
Experimental Study of Frequency Oscillations in Islanded Power System. Wellman, Kevin Daniel. 14 June 2017.
Since the introduction of power electronics to the grid, the power system has changed quickly. Fault detection and removal are performed more accurately and with quicker response times, and loads that are not inertia-driven have been added. This means stability must remain a main topic of concern in order to maintain a stable, synchronized grid.
In this thesis a lab was designed, constructed, and tested for the purpose of studying transient stability in power systems. Many different options were considered and researched, but the focus of this thesis is to describe the options chosen. The lab must be safe to operate and work around, have the flexibility to perform many different types of experiments, and accurately simulate a power system.
The created lab was then tested to observe the impact of a power system stabilizer (PSS) on an unsynchronized generator connected to a static load. The lab performed as designed, which allows for the introduction of more machines to create the IEEE 14-bus grid.
A Frequency Hopping Method to Detect Replay Attacks. Tang, Guofu. 24 January 2017.
The application of information technology in networked control systems introduces potential threats to future industrial control systems. Malicious attacks undermine the security of a networked control system and can cause huge economic losses. This thesis studies a particular cyber attack called the replay attack, motivated by the Stuxnet worm allegedly used against the nuclear facilities in Iran. To detect replay attacks, this thesis injects a narrow-band signal into the control signal and adopts a spectrum estimation approach to test the estimation residue. To keep attackers from learning the injected signal, frequency hopping is employed to vary the frequency of the narrow-band signal. The detection method proposed in this thesis is illustrated and examined through simulation studies, which show a good detection rate and good security.
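The detection idea can be sketched as follows: the defender injects a sinusoidal watermark whose frequency hops over a schedule known only to the controller, then checks the residue spectrum for energy at the current bin; a replayed recording, captured before the current hop, fails the check. The bin numbers, noise level, and threshold below are made-up illustrative values, not the thesis's design.

```python
import numpy as np

def watermark_present(residue, freq_bin, factor=5.0):
    """Test the estimation residue for a tone at the secret frequency bin."""
    spec = np.abs(np.fft.rfft(residue))
    return spec[freq_bin] > factor * np.median(spec)   # peak vs. noise floor

rng = np.random.default_rng(2)
N = 512
t = np.arange(N)
hop_schedule = [37, 91, 53]          # pseudo-random bins shared with the plant
live, replayed = [], []
for fbin in hop_schedule:
    tone = np.sin(2 * np.pi * fbin * t / N)            # injected narrow-band signal
    live.append(watermark_present(tone + 0.2 * rng.standard_normal(N), fbin))
    # a replayed recording was captured before this hop and lacks the tone
    replayed.append(watermark_present(0.2 * rng.standard_normal(N), fbin))
```

Because the attacker does not know the hopping schedule, a replayed trace cannot contain the right tone at the right time, which is what makes the residue test discriminative.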
A Mixed Consensus and Fuzzy Approach to Position Control of Four-Wheeled Vehicles. Hasheminezhad, Bita. 24 January 2017.
Autonomous driving is a growing domain of intelligent transportation systems that utilizes
communications to autonomously control cooperative vehicles. This thesis presents
a multi-agent solution to the platoon control problem. First, an adaptive controller based on the
linearized longitudinal dynamics of a vehicle is applied to ensure that the vehicles can track
their reference velocities. Then, an agent-based consensus approach is studied that enables
multiple vehicles to drive together, with each vehicle safely following its predecessor at a
close distance. To deal with unexpected events, a fuzzy controller is added to the
reference signal of the consensus controller. Simulation results are provided to validate
the effectiveness of the approach in normal situations and in cases where an agent brakes
suddenly or receives a wrong reference signal.
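The predecessor-following consensus idea can be sketched with first-order agents: each vehicle steers toward its predecessor's position minus a desired gap, while the leader holds its position. The gains, gaps, and initial positions below are invented for illustration, and the sketch ignores the thesis's adaptive velocity-tracking layer and fuzzy correction.

```python
import numpy as np

def platoon_consensus(x0, desired_gap, steps=400, dt=0.05, gain=1.0):
    """First-order consensus on positions in a predecessor-follower chain.

    Each vehicle i > 0 moves to reduce its spacing error with respect to
    vehicle i-1; vehicle 0 (the leader) holds its position."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        u = np.zeros_like(x)
        u[1:] = gain * (x[:-1] - desired_gap - x[1:])  # spacing error to predecessor
        x = x + dt * u                                  # Euler integration step
    return x

# Four vehicles with uneven initial spacing converge to a 5 m gap.
x_final = platoon_consensus(x0=[0.0, -3.0, -9.0, -20.0], desired_gap=5.0)
```

In the full system the consensus output serves as the reference for each vehicle's low-level velocity controller rather than moving the vehicle directly.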