151 |
Construction Of High-Rate, Reliable Space-Time Codes
Raj Kumar, K 06 1900 (has links) (PDF)
No description available.
|
152 |
Timing Offset And Frequency Offset Estimation In An OFDM System
Prabhakar, A 07 1900 (has links) (PDF)
No description available.
|
153 |
On Adaptive Filtering Using Delayless IFIR Structure: Analysis, Experiments And Application To Active Noise Control And Acoustic Echo Cancellation
Venkataraman, S 09 1900 (has links) (PDF)
No description available.
|
154 |
Design And Analysis Of Microstrip Ring Antennas For Multi-frequency Operations
Behera, Subhrakanta 06 1900 (has links) (PDF)
In this research, we attempted several modifications to microstrip ring/loop antennas to design multi-frequency antennas through systematic approaches. Such multi-frequency antennas can be useful in building compact terminals that operate across multiple wireless standards. One of the primary contributions is the use of a capacitive feed arrangement that enables simultaneous excitation of multiple concentric rings from an underlying transmission line. The combined antenna operates in the same resonant bands as the individual rings and avoids some of the bands at harmonic frequencies.
A similar feeding arrangement is used to obtain dual-band characteristics from a single ring, with improved bandwidth. This is made possible by symmetrically widening two adjacent sides of a square ring antenna and attaching an open stub to the inner edge of the side opposite the feed line. Replacing the side carrying the stub with fractal segments results in similar performance; fractal geometries have been widely associated with multi-functional antennas. Parametric studies show that the ratio of the resonant frequencies can range from 1.5 to 2.0, which offers flexibility in systematically designing dual-band antennas with a desired pair of resonant frequencies.
An analysis technique based on multi-port network modeling (MNM) has been proposed to accurately predict the input characteristics of these antennas. This approach can make use of the ordered nature of fractal geometries to simplify computations. Several prototype antennas have been fabricated and tested successfully to validate simulation and analytical results.
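As a rough illustration of the design starting point (not the MNM analysis used in the thesis), the lowest resonances of a thin ring/loop can be estimated from the rule that the mean perimeter equals an integer number of guided wavelengths; the dimensions and effective permittivity below are assumed values.

```python
C0 = 3e8  # free-space speed of light (m/s)

def ring_resonances(perimeter_m, eps_eff, n_modes=3):
    # First-order rule for a microstrip ring/loop: resonance when the mean
    # perimeter equals n guided wavelengths, i.e. P = n * c / (f * sqrt(eps_eff)).
    return [n * C0 / (perimeter_m * eps_eff ** 0.5) for n in range(1, n_modes + 1)]

# Two concentric square rings (assumed 30 mm and 20 mm sides, eps_eff = 3.2):
for side_m in (0.030, 0.020):
    freqs_ghz = [round(f / 1e9, 2) for f in ring_resonances(4 * side_m, eps_eff=3.2)]
    print(side_m, freqs_ghz)
```

The capacitive feed, the widened sides, the stub, and the fractal segments described above shift and select among such resonances; the MNM analysis is what predicts the input characteristics accurately.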
|
155 |
Performance Analysis Of Post Detection Integration Techniques In The Presence Of Model Uncertainties
Chandrasekhar, J 06 1900 (has links) (PDF)
In this thesis, we analyze the performance of Post Detection Integration (PDI) techniques used for the detection of weak DS/CDMA signals in the presence of uncertainty in the frequency, the noise variance, and the data bits. Such weak-signal detection problems arise, for example, in the first step of code acquisition for applications such as Global Navigation Satellite System (GNSS) based position localization. Typically, in such applications, a combination of coherent and post-coherent integration stages is used to improve the reliability of signal detection. We show that the feasibility of fully coherent processing is limited by the presence of unknown data bits and/or frequency uncertainty. We analyze the performance of the two conventional PDI techniques, namely the Non-coherent PDI (NC-PDI) and the Differential PDI (D-PDI), in the presence of noise and data-bit uncertainty, to establish their robustness for weak-signal detection. We show that the NC-PDI technique is robust to uncertainty in the data bits, but a fundamental detection limit exists due to uncertainty in the noise variance. The D-PDI technique, on the other hand, is robust to uncertainty in the noise variance, but its performance degrades in the presence of unknown data bits. We also analyze the following variants of the NC-PDI and D-PDI techniques: the quadratic NC-PDI, the non-quadratic NC-PDI, D-PDI with the real component (D-PDI (Real)), and D-PDI with the absolute value (D-PDI (Abs)). We show that the likelihood-ratio-based test statistic derived in the presence of data bits is non-robust under noise uncertainty.
We propose two novel PDI techniques as a solution to the above shortcomings of the conventional PDI methods. The first is a cyclostationarity-based sub-optimal PDI technique that exploits the periodicity introduced by the data bits; we establish the exact mathematical relationship between the D-PDI and cyclostationarity-based signal detection methods. The second is a modified PDI technique that is robust against both noise and data-bit uncertainties; we derive two variants of it, tailored for data and pilot channels, respectively. We characterize the performance of the conventional and proposed PDI techniques in terms of their false-alarm and detection probabilities and compare them through receiver operating characteristic (ROC) curves. We derive the sample complexity of the test statistic required to achieve a given detection and false-alarm performance in the presence of model uncertainties. We validate the theoretical results and illustrate the improved performance of the proposed PDI techniques through Monte Carlo simulations.
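As a minimal sketch of the conventional test statistics analyzed above, assuming y holds the complex coherent-integration outputs for one search hypothesis (thresholds, and the proposed cyclostationary and modified statistics, are not reproduced here):

```python
import numpy as np

def nc_pdi(y):
    # Quadratic non-coherent PDI: sum of squared magnitudes. Insensitive to
    # data-bit sign flips, but its threshold depends on the noise variance.
    return np.sum(np.abs(y) ** 2)

def d_pdi(y, variant="real"):
    # Differential PDI: products of consecutive coherent outputs largely cancel
    # a residual frequency offset and the noise-variance scaling, but each
    # data-bit transition flips the sign of the corresponding product.
    z = y[1:] * np.conj(y[:-1])
    return np.sum(np.real(z)) if variant == "real" else np.abs(np.sum(z))
```

Comparing such statistics against thresholds chosen for a target false-alarm probability is what produces the ROC curves referred to above.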
|
156 |
Performance Modelling Of TCP-Controlled File Transfers In Wireless LANs, And Applications In AP-STA Association
Pradeepa, B K 03 1900 (links) (PDF)
Our work focuses on performance modelling of TCP-controlled file transfers in infrastructure-mode IEEE 802.11 wireless networks, and on the application of these models in developing association schemes. A comprehensive set of analytical models is used to study the behaviour of TCP-controlled long and short file transfers in IEEE 802.11 WLANs; the results provide insight into the performance of TCP-controlled traffic in a variety of network environments. First, we consider several WLAN stations associated at rates r1, r2, ..., rk with an Access Point (AP). Each station (STA) is downloading a long file, using TCP, from a local server located on the LAN to which the AP is attached. We assume that a TCP ACK is produced after the reception of d packets at an STA. We model these simultaneous TCP-controlled transfers using a semi-Markov process. Our analytical approach leads to a procedure to compute the aggregate download throughput as well as per-STA throughputs numerically, and the results match simulations very well. Performance analysis of TCP-controlled long file transfers in an infrastructure-mode WLAN is available in the literature, with one of the main assumptions being an equal window size for all TCP connections. We extend the analysis to TCP-controlled long file uploads and downloads with different TCP windows. Our approach is based on the semi-Markov process considered above, but with arbitrary window sizes. We present simulation results to show the accuracy of the analytical model.
Then, we obtain an association policy for STAs in an IEEE 802.11 WLAN that explicitly takes into account an aspect of practical importance: TCP-controlled short file downloads interspersed with read times (motivated by web browsing). Our approach has two steps. First, we use the analytical model mentioned above to obtain the aggregate download throughput. Second, we present a 2-node closed queueing network model to approximate the expected download time of an average-sized file for a user who shares the AP with other users associated at a multiplicity of rates. These analytical results motivate the proposed association policy, called the Estimated Delay based Association (EDA) policy: associate with the AP at which the expected file download time is the least. Simulations indicate that, for a web-browsing type traffic scenario, EDA outperforms previously proposed policies; the extent of improvement ranges from 12.8% to 46.4% for a 9-AP network.
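The resulting association rule is simple once the per-AP delay estimate is available; in the sketch below, expected_download_time is assumed to implement the closed-queueing-network approximation described above.

```python
def eda_associate(candidate_aps, expected_download_time):
    # Estimated Delay based Association (EDA): join the AP at which the
    # expected file download time, given the users already associated
    # there and their PHY rates, is the least.
    return min(candidate_aps, key=expected_download_time)
```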
We extend the performance model by considering file sizes drawn from heavy-tailed distributions, which we represent using a mixture of exponential distributions (following Cox's method). We again provide a closed queueing network model to approximate the expected download time of an average-sized file for a user who shares the AP with other users associated at a multiplicity of rates. Further, we analyze TCP-controlled bulk file transfers in a single-station WLAN with nonzero propagation delay between the file server and the WLAN. Our approach is to model the flow of packets as a closed queueing network (a BCMP network) with three service centres: one each for the Access Point and the STA, and a third for the propagation delay. The service rates of the first two are obtained by analyzing the WLAN MAC. We extend this work to obtain throughputs in multirate scenarios. Simulations show that our approach predicts the observed throughputs with a high degree of accuracy.
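As a sketch of how such a closed queueing network can be evaluated, the following is exact mean value analysis (MVA) for a fixed population (e.g., a TCP window of N packets) circulating through two queueing centres (AP and STA) and one delay centre (propagation). The service times are assumed inputs; in the thesis they come from the 802.11 MAC analysis.

```python
def mva_closed_network(service_times, delay, N):
    # Exact MVA for a single-class closed network: service_times are the mean
    # service times of the queueing centres, delay is the infinite-server
    # (propagation) centre, and N is the circulating population.
    K = len(service_times)
    q = [0.0] * K                                   # mean queue lengths
    X, R = 0.0, [0.0] * K
    for n in range(1, N + 1):
        R = [service_times[k] * (1.0 + q[k]) for k in range(K)]   # response times
        X = n / (sum(R) + delay)                                  # throughput
        q = [X * R[k] for k in range(K)]
    return X, R

# Illustrative numbers only: 1 ms at the AP, 1 ms at the STA, 20 ms propagation
# delay, and a window of 20 packets.
throughput, _ = mva_closed_network([1e-3, 1e-3], delay=20e-3, N=20)
print(round(throughput), "packets/s")
```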
|
157 |
Low Power Receiver Architecture And Algorithms For Low Data Rate Wireless Personal Area Networks
Dwivedi, Satyam 12 1900 (has links) (PDF)
Sensor nodes in a sensor network are power-constrained, and the transceiver electronics of a node consume a significant share of its total power. This thesis proposes a receiver architecture and algorithms that reduce the power consumption of the receiver. The work ranges from designing the low-power receiver architecture to experimentally verifying its functioning.
Concepts proposed in the thesis are:
Low-power adaptive architecture: A baseband digital receiver design is proposed that changes its sampling frequency and bit-width based on interference detection and SNR estimation. The approach is based on a look-up table (LUT) in the digital section of the receiver, and an interference detector and an SNR estimator suited to this approach are proposed. The settings of the different sections of the digital receiver change as the sampling frequency and bit-width vary, but the changes ensure that the desired BER is still achieved. Overall, the receiver does less processing when conditions are benign and more processing when they are not. It is shown that the power consumption of the digital baseband can be reduced by 85% (about 7 times) when there is no interference and the SNR is high; thus the proposed design meets our requirement of low-power hardware. The design is coded in Verilog HDL, and power and area estimation is done using Synopsys tools.
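The adaptation can be pictured as a table look-up on the detector outputs; the rates, bit-widths, and SNR threshold below are illustrative assumptions, not the values used in the design.

```python
# Hypothetical LUT mapping (interference detected?, SNR regime) to
# (sampling frequency in MHz, datapath bit-width).
LUT = {
    (False, "high"): (2.0, 4),    # benign conditions: least processing, lowest power
    (False, "low"):  (4.0, 6),
    (True,  "high"): (4.0, 8),
    (True,  "low"):  (8.0, 10),   # hostile conditions: full rate and precision
}

def select_mode(interference_detected, snr_db, snr_threshold_db=10.0):
    # Pick the cheapest (sampling frequency, bit-width) pair that still
    # meets the target BER under the estimated channel conditions.
    regime = "high" if snr_db >= snr_threshold_db else "low"
    return LUT[(interference_detected, regime)]

print(select_mode(False, 15.0))   # -> (2.0, 4)
```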
Faster simulation methodologies: Physical-layer simulations are usually carried out on a baseband-equivalent model of the signal in the receiver chain. Simulating physical-layer algorithms on bandpass signals for BER evaluation is very time consuming, yet bandpass simulations are needed to capture the effect of quantization on the bandpass signal in the receiver. We have developed a variance-measuring simulation methodology that reduces simulation time by a factor of 10.
Low-power, low-area, non-coherent, non-data-aided joint tracking and acquisition algorithm: Correlation is a widely used operation, particularly in receiver synchronization algorithms, but it requires multipliers, which are area- and power-hungry blocks. A very low-power, low-area joint tracking and acquisition algorithm is developed that uses no multipliers to synchronize, and even avoids squaring and adding signals to achieve non-coherence. The algorithm is also non-data-aided and does not require a ROM to store the sequence. It reduces the area/power of existing comparable algorithms by 90%.
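For illustration only: correlation becomes multiplier-free once the operands are reduced to their signs, since each product is then just a sign agreement. This generic sketch is not the algorithm developed in the thesis, which additionally avoids squaring for non-coherence and does not store a reference sequence.

```python
def sign_correlate(samples, reference):
    # 1-bit (sign) correlation: count sign agreements minus disagreements.
    # In hardware each "product" reduces to an XOR of sign bits feeding a
    # counter, so no multipliers are required.
    score = 0
    for x, r in zip(samples, reference):
        score += 1 if (x >= 0) == (r >= 0) else -1
    return score
```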
Experimental setup for performance evaluation of the receiver: The developed baseband architecture and algorithms are experimentally verified on a wireless test setup consisting of an FPGA board, vector signal generators, oscilloscopes, a spectrum analyzer, and a discrete-component RF board. Packet error and packet loss measurements are made under varying channel conditions. Many practical issues relating to the wireless test-setup infrastructure were encountered and resolved.
|
158 |
Resource Management In Cellular And Mobile Opportunistic Networks
Singh, Chandramani Kishore 11 1900 (has links) (PDF)
In this thesis we study several resource management problems in two classes of wireless networks. The thesis is in two parts, the first concerned with game-theoretic approaches for cellular networks, and the second with control-theoretic approaches for mobile opportunistic networks.
In Part I of the thesis, we first investigate optimal association and power control for the uplink of multichannel multicell cellular networks, in which each channel is used by exactly one base station (BS), i.e., one cell. Users have minimum signal-to-interference-ratio (SINR) requirements and associate with the BSs at which the least transmission powers are required. We formulate the problem as a non-cooperative game among users, propose a distributed association and power update algorithm, and show its convergence to a Nash equilibrium of the game. We consider network models with discrete mobiles (yielding an atomic congestion game) as well as a continuum of mobiles (yielding a population game). We find that the equilibria need not be Pareto efficient, nor need they be system optimal. To address the lack of system optimality, we propose pricing mechanisms. We show that these prices weakly enforce system optimality in general, and strongly enforce it in special settings. We also show that these mechanisms can be implemented in a distributed fashion.
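A sketch of one sweep of such a distributed update, in the spirit of the scheme described above: each user picks the BS (and hence channel) at which its SINR target can currently be met with the least power, and sets its power to exactly meet the target given the current interference. The channel gains, noise powers, and SINR targets are assumed inputs.

```python
import numpy as np

def association_and_power_update(p, assoc, G, sigma2, gamma):
    # One sweep over the users. G[b, i]: gain from user i to BS b (each BS owns
    # its own channel); sigma2[b]: noise power at BS b; gamma[i]: user i's
    # minimum SINR requirement; p, assoc: current powers and associations.
    B, N = G.shape
    for i in range(N):
        best_b, best_p = assoc[i], np.inf
        for b in range(B):
            interference = sigma2[b] + sum(
                G[b, j] * p[j] for j in range(N) if j != i and assoc[j] == b)
            required = gamma[i] * interference / G[b, i]   # power to meet the target
            if required < best_p:
                best_b, best_p = b, required
        assoc[i], p[i] = best_b, best_p
    return p, assoc
```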
Next, we consider the hierarchical problems of user association and BS placement, where BSs may belong to the same (or cooperating) service provider or to competing service providers. Users transmit with constant power and associate with the base stations that yield better SINRs. We formulate the association problem as a game among users; it determines the cell corresponding to each BS. Some intriguing observations we report are: (i) displacing a BS a little in one direction may displace the boundary of the corresponding cell in the opposite direction; (ii) the cell corresponding to a BS may be a union of disconnected sub-cells. We then study the placement of BSs so as to maximize the service providers' revenues; the service providers must take into account the mobiles' behaviour induced by the placement decisions. We consider the cases of a single frequency band and of disjoint frequency bands of operation, as well as networks in which BSs employ successive interference cancellation (SIC) decoding. We observe that, in all scenarios considered, the BS locations are closer to each other in the competitive case than in the cooperative case.
Finally, we study cooperation among cellular service providers. We consider networks in which communications involving different BSs do not interfere. If service providers jointly deploy and pool their resources, such as spectrum and BSs, and agree to serve each other's customers, their aggregate payoff increases substantially. The potential of such cooperation can, however, be realized only if the service providers intelligently determine whom they should cooperate with, how they should deploy and share their resources, and how they should share the aggregate payoff. We first assume that the service providers can arbitrarily share the aggregate payoff; a rational basis for payoff sharing is then imperative for the stability of the coalitions. We study cooperation using the theory of transferable-payoff coalitional games. We show that the optimal cooperation strategy, which involves the acquisition of channels and the deployment and allocation of BSs to customers, is the solution of a concave or an integer optimization problem. We then show that the grand coalition is stable: if all the service providers cooperate, there is an operating point offering each service provider a share that eliminates the possibility of any subset of service providers splitting from the grand coalition, and this operating point also maximizes the service providers' aggregate payoff. These stabilizing payoff shares are computed by solving the dual of the above optimization problem. Moreover, the optimal cooperation strategy and the stabilizing payoff shares can be obtained in polynomial time using distributed computations and limited exchange of confidential information among the service providers. We then extend the analysis to the scenario where service providers may not be able to share their payoffs, modelling cooperation as a nontransferable-payoff coalitional game. We again show that there exists a cooperation strategy that leaves no incentive for any subset of service providers to split from the grand coalition. To compute this cooperation strategy and the corresponding payoffs, we relate the game and its core to an exchange market and its equilibrium. Finally, we extend the formulations and the results to the case where customers are also decision makers in coalition formation.
In Part II of this thesis, we consider the problem of optimal message forwarding in mobile opportunistic wireless networks. A message originates at a node (the source) and has to be delivered to another node (the destination). Several other nodes in the network can assist in relaying the message at the expense of additional transmission energy. We study the trade-off between delivery delay and energy consumption. First, we consider mobile opportunistic networks employing two-hop relaying. Because of the intermittent connectivity, the source may not have perfect knowledge of the delivery status at every instant. We formulate the problem as a stochastic control problem with partial information and study structural properties of the optimal policy. We also propose a simple suboptimal policy and compare its performance against that of the optimal control with perfect information, which bounds the performance of the proposed partial-information policy. We also discuss a few other related open-loop policies.
Finally, we investigate the case where a message has to be delivered to several destinations, but we are concerned with the delay until a certain fraction of them receive the message; here the network employs epidemic relaying. We first assume that, at every instant, all the nodes know the number of relays carrying the packet and the number of destinations that have received it. We formulate the problem as a controlled continuous-time Markov chain and derive the optimal forwarding policy. As observed earlier, the intermittent connectivity in the network means that the nodes may not have the required perfect knowledge of the system state. To address this issue, we obtain an ODE (i.e., deterministic fluid) approximation for the optimally controlled Markov chain. This fluid approximation also yields an asymptotically optimal deterministic policy. We evaluate the performance of this policy over finite networks and demonstrate that it performs close to the optimal closed-loop policy. We also briefly discuss the case where message forwarding is accomplished via two-hop relaying.
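A rough sketch of the kind of fluid (ODE) approximation involved, with x(t) the number of relays carrying the message, y(t) the number of destinations that have received it, and u(t) a forwarding control in [0, 1]; the exact dynamics and the optimal control derived in the thesis are not reproduced here.

```python
import numpy as np

def epidemic_fluid(beta, n_relays, n_dest, control, T, dt=1e-3):
    # Euler integration of a controlled epidemic-relaying fluid model.
    x, y = 1.0, 0.0                          # the source holds the only copy
    for t in np.arange(0.0, T, dt):
        u = control(t, x, y)                 # 1: keep copying to relays, 0: stop
        dx = u * beta * x * (n_relays - x)   # new relay copies under the control
        dy = beta * x * (n_dest - y)         # deliveries to remaining destinations
        x, y = x + dx * dt, y + dy * dt
    return x, y

# Example control: stop copying to relays once half the destinations are covered.
x_T, y_T = epidemic_fluid(beta=0.01, n_relays=50, n_dest=10,
                          control=lambda t, x, y: 1.0 if y < 5 else 0.0, T=20.0)
```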
|
159 |
Design And Development Of Solutions To Some Of The Networking Problems In Hybrid Wireless Superstore Networks
Shankaraiah, * 09 1900 (has links) (PDF)
Hybrid Wireless Networks (HWNs) are composite networks comprising different technologies, possibly with overlapping coverage. Users with multimode terminals in HWNs can initiate connectivity that best suits their attributes and the requirements of their applications. Hybrid wireless networks involve many complexities due to varying data rates, frequencies of operation, resource availability, and QoS requirements, as well as mobility management across different technologies.
A superstore is a very large retail store that serves as a one-stop shopping destination by offering a wide variety of goods, ranging from groceries to appliances. It also provides services of all types, such as banking, photo centres, and catering. Examples of superstores include Tesco (hypermarkets, United Kingdom) and Carrefour (hypermarkets, France).
Generally, a mobile customer communicates with the superstore server using transactions. A transaction corresponds to a finite number of interactive processes between the customer and the superstore server. Examples of superstore transactions include product browsing, technical-details inquiry, financial transactions, and billing.
This thesis aims to design and develop the following schemes to solve some of the problems indicated above for a hybrid wireless superstore network:
1. Transaction-based bandwidth management.
2. Transaction-based resource management.
3. Transaction-based Quality of Service management.
4. Transaction-based topology management.
We present these schemes, the simulations carried out, and the results obtained, in brief, below.
Transaction-based bandwidth management
The designed Transaction-Based Bandwidth Management Scheme (TB-BMS) operates at the application level and intelligently allocates bandwidth by monitoring profit-oriented sensitivity variations in the transactions, which are linked with profit profiles created over the type, time, and history of transactions. The scheme mainly consists of transaction classifier, bandwidth determination, and transaction scheduling modules. We deploy the scheme on the downlink of the HWN, since the uplink carries only simple queries from customers to the superstore server. The scheme uses a transaction scheduling algorithm that decides how to schedule an outgoing transaction based on its priority while making efficient use of the available bandwidth.
Not all superstore transactions have the same profit-sensitive information, data size, and operation type. We therefore classify superstore transactions into four levels based on profit, data size, operation type, and the sensitivity of the information they handle. The aim of the transaction classification module is to determine the transaction sensitivity level (TSL) of a given transaction.
The bandwidth determination module estimates the bandwidth requirement of each transaction. The transaction scheduling module schedules the transactions according to the availability of bandwidth and the TSL of each transaction: in every slot, it schedules the highest-priority transactions first, keeping lower-priority transactions pending, and moves on to the next priority level only once all higher-priority transactions have been served. We have simulated the hybrid wireless superstore network environment with WiFi and GSM technologies, using four TSL levels and transactions with different bandwidth requirements.
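A minimal sketch of the per-slot scheduling step (the transaction records and the bandwidth budget are hypothetical; TSL 1 denotes the most sensitive level):

```python
def schedule_slot(transactions, slot_bandwidth):
    # Serve transactions in order of TSL (highest priority first); a transaction
    # that does not fit in the remaining budget stays pending for a later slot.
    served, pending, remaining = [], [], slot_bandwidth
    for t in sorted(transactions, key=lambda t: t["tsl"]):
        if t["bw"] <= remaining:
            served.append(t)
            remaining -= t["bw"]
        else:
            pending.append(t)
    return served, pending

served, pending = schedule_slot(
    [{"id": 1, "tsl": 1, "bw": 2.0}, {"id": 2, "tsl": 3, "bw": 1.5},
     {"id": 3, "tsl": 2, "bw": 1.0}],
    slot_bandwidth=3.0)
```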
The performance results show that the proposed scheme considerably improves bandwidth utilization by reducing transaction blocking and accommodating more essential transactions at the peak time of the business.
Transaction-based resource management
Next, we propose the transaction-based resource management scheme (TB-RMS), which allocates the required resources among the various customer services based on the priority of the transactions. The scheme mainly consists of transaction classifier, resource estimation, and transaction scheduling modules. It also uses a downlink transaction scheduling algorithm that decides how to schedule an outgoing transaction based on its priority while making efficient use of the available resources.
TB-RMS is similar to the TB-BMS scheme, except that it estimates resources such as buffer space, bandwidth, and processing time for each transaction, rather than bandwidth alone.
The performance results indicate that the proposed TB-RMS scheme considerably improves the resource utilization by reducing transaction blocking and accommodating more essential transactions at the peak time.
Transaction-based Quality of Service management
In the third segment, we propose a policy-based, transaction-aware QoS management architecture for downlink QoS management. We derive a policy for estimating QoS parameters such as delay, jitter, bandwidth, and transaction loss for every transaction before it is scheduled on the downlink. We use Policy-based Transaction QoS Management (PTQM) to achieve transaction-based QoS management. Policies are rules that govern transaction behaviour, usually implemented in the form of if(condition) then(action) rules.
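A toy encoding of such if(condition) then(action) rules is sketched below; the transaction fields, thresholds, and actions are illustrative assumptions, not the policies specified in the thesis.

```python
# Each policy is a (condition on a transaction, action if it holds) pair,
# evaluated in order; the first matching rule wins.
policies = [
    (lambda tr: tr["tsl"] == 1 and tr["delay_budget_ms"] < 50, "schedule_on_wifi_premium"),
    (lambda tr: tr["loss_tolerant"],                           "schedule_best_effort"),
    (lambda tr: True,                                          "schedule_on_gsm_default"),
]

def evaluate(transaction):
    for condition, action in policies:
        if condition(transaction):
            return action

print(evaluate({"tsl": 1, "delay_budget_ms": 20, "loss_tolerant": False}))
```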
The QoS management scheme is fully centralized and is based on client-server interaction. Each mobile terminal is connected to a server via WiFi or GSM. The master policy controller (MPDF) connects to the policy controller of the WiFi network (WPDF) and to the GSM policy controller (PDF).
We consider a simulation environment similar to that of the earlier schemes. The results show that policy-based transaction QoS management improves performance and utilizes network resources efficiently at the peak time of the superstore business.
Transactions-Aware Topology Management(TATM)
Finally, we propose a topology management scheme for the superstore hybrid wireless network. Wireless topology management controls the activities and features of a wireless network connection: it may govern the selection of an available access point, authentication and association with it, and the setup of other parameters of the wireless connection.
The proposed topology management scheme consists of a transaction classifier, a resource estimation module, a network availability and status module, and a transaction-aware topology management module. The goal of the TATM scheme is to select the best network among those available for providing the transaction response (or execution).
We have simulated a hybrid wireless superstore network with five WiFi and two GSM networks. The performance results indicate that the transaction-aware topology management scheme utilizes the available resources efficiently and distributes the transaction load across the WiFi and GSM networks according to their capacities.
|
160 |
Nonstationary Techniques For Signal Enhancement With Applications To Speech, ECG, And Nonuniformly-Sampled Signals
Sreenivasa Murthy, A January 2012 (has links) (PDF)
For time-varying signals such as speech and audio, short-time analysis becomes necessary to compute specific signal attributes and to keep track of their evolution. The standard technique is the short-time Fourier transform (STFT), using which one decomposes a signal in terms of windowed Fourier bases. An advancement over STFT is the wavelet analysis in which a function is represented in terms of shifted and dilated versions of a localized function called the wavelet. A specific modeling approach particularly in the context of speech is based on short-time linear prediction or short-time Wiener filtering of noisy speech. In most nonstationary signal processing formalisms, the key idea is to analyze the properties of the signal locally, either by first truncating the signal and then performing a basis expansion (as in the case of STFT), or by choosing compactly-supported basis functions (as in the case of wavelets). We retain the same motivation as these approaches, but use polynomials to model the signal on a short-time basis (“short-time polynomial representation”). To emphasize the local nature of the modeling aspect, we refer to it as “local polynomial modeling (LPM).”
We pursue two main threads of research in this thesis: (i) Short-time approaches for speech enhancement; and (ii) LPM for enhancing smooth signals, with applications to ECG, noisy nonuniformly-sampled signals, and voiced/unvoiced segmentation in noisy speech.
Improved iterative Wiener filtering for speech enhancement
A constrained iterative Wiener filter solution for speech enhancement was proposed by Hansen and Clements. Sreenivas and Kirnapure improved the performance of the technique by imposing codebook-based constraints in the process of parameter estimation; the key advantage is that the optimal parameter search space is confined to the codebook. These nonstationary signal enhancement solutions assume stationary noise. In practical applications, however, noise is not stationary, and updating the noise statistics becomes necessary. We present a new approach to reliable noise estimation based on spectral subtraction: we first estimate the signal spectrum and subtract it from the noisy spectrum to estimate the noise power spectral density, and then smooth the estimated noise spectrum to ensure reliability. The key contributions are: (i) adaptation of the technique to non-stationary noise; (ii) a new initialization procedure for faster convergence and higher accuracy; (iii) experimental determination of the optimal LP-parameter space; and (iv) objective criteria and speech recognition tests for performance comparison.
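A sketch of the noise-tracking step described above, for one analysis frame; the window, the flooring constant, and the smoothing factor are assumptions.

```python
import numpy as np

def update_noise_psd(noisy_frame, signal_psd_est, noise_psd_prev,
                     alpha=0.9, nfft=512):
    # Spectral-subtraction style noise tracking: subtract the current signal PSD
    # estimate from the noisy PSD, floor the result, and smooth it recursively
    # across frames so the estimate can follow slowly varying noise.
    windowed = noisy_frame * np.hanning(len(noisy_frame))
    noisy_psd = np.abs(np.fft.rfft(windowed, nfft)) ** 2
    raw = np.maximum(noisy_psd - signal_psd_est, 1e-10)
    return alpha * noise_psd_prev + (1.0 - alpha) * raw
```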
Optimal local polynomial modeling and applications
We next address the problem of fitting a piecewise-polynomial model to a smooth signal corrupted by additive noise. Since the signal is smooth, it can be represented using low-order polynomial functions provided that they are locally adapted to the signal. We choose the mean-square error as the criterion of optimality. Since the model is local, it preserves the temporal structure of the signal and can also handle nonstationary noise. We show that there is a trade-off between the adaptability of the model to local signal variations and robustness to noise (bias-variance trade-off), which we solve using a stochastic optimization technique known as the intersection of confidence intervals (ICI) technique. The key trade-off parameter is the duration of the window over which the optimum LPM is computed.
Within the LPM framework, we address three problems: (i) Signal reconstruction from noisy uniform samples; (ii) Signal reconstruction from noisy nonuniform samples; and (iii) Classification of speech signals into voiced and unvoiced segments.
The generic signal model is
x(t_n) = s(t_n) + d(t_n),  0 ≤ n ≤ N - 1.
In problems (i) and (iii) above, t_n = nT (uniform sampling); in (ii), the samples are taken at nonuniform instants. The signal s(t) is assumed to be smooth, i.e., it should admit a local polynomial representation. The problem in (i) and (ii) is to estimate s(t) from x(t_n); i.e., we are interested in optimal signal reconstruction on a continuous domain starting from uniform or nonuniform samples.
We show that, in both cases, the bias and variance take a general form, and the mean square error (MSE) is the sum of the squared bias and the variance. Here, L is the length of the window over which the polynomial fitting is performed; the bias is governed by a function f of s(t), typically comprising its higher-order derivatives, the order itself depending on the order of the polynomial; and the variance is governed by a function g of the noise variance. The bias and variance have complementary characteristics with respect to L. Directly optimizing the MSE would give a value of L that involves the functions f and g. The function g may be estimated, but f is not known since s(t) is unknown; hence, it is not practical to compute the minimum-MSE (MMSE) solution. Therefore, we obtain an approximate result by solving the bias-variance trade-off in a probabilistic sense using the ICI technique. We also propose a new approach to optimally select the ICI technique's parameters, based on a new cost function that is the sum of the probability of false alarm and the area covered by the confidence interval. In addition, we address issues related to optimal model-order selection, the search space for window lengths, the accuracy of noise estimation, etc.
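A minimal sketch of the two ingredients, assuming a rectangular window: a local polynomial fit at an instant t0 (which applies equally to nonuniform sampling instants) and the ICI rule for choosing among a set of window lengths. The confidence parameter is an assumed value.

```python
import numpy as np

def lpm_estimate(t, x, t0, L, order=2):
    # Least-squares fit of a low-order polynomial to the noisy samples falling
    # in a window of length L centred at t0; the fitted value at t0 is the
    # local polynomial estimate of s(t0).
    mask = np.abs(t - t0) <= L / 2.0
    coeffs = np.polyfit(t[mask] - t0, x[mask], order)
    return np.polyval(coeffs, 0.0)

def ici_select(estimates, stds, gamma=2.0):
    # Intersection of confidence intervals: estimates[k] and stds[k] correspond
    # to increasing window lengths L_k. Keep enlarging the window while the
    # confidence intervals still share a common point; return the last such index.
    lo, hi, best = -np.inf, np.inf, 0
    for k, (m, s) in enumerate(zip(estimates, stds)):
        lo, hi = max(lo, m - gamma * s), min(hi, m + gamma * s)
        if lo > hi:
            break
        best = k
    return best
```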
The next issue addressed is that of voiced/unvoiced segmentation of speech signals. Speech segments show different spectral and temporal characteristics depending on whether the segment is voiced or unvoiced, and most speech processing techniques process the two types of segment differently. The challenge lies in making detection techniques robust in the presence of noise. We propose a new technique for voiced/unvoiced classification based on the fact that voiced segments have a certain degree of regularity, whereas unvoiced segments do not possess any smoothness. In order to capture the regularity in voiced regions, we employ the LPM; the key idea is that regions where the LPM is inaccurate are more likely to be unvoiced than voiced. Within this framework, we formulate a hypothesis testing problem based on the accuracy of the LPM fit and devise a test statistic for performing V/UV classification. Since the technique is based on LPM, it is capable of adapting to nonstationary noise. We present Monte Carlo results to demonstrate the accuracy of the proposed technique.
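A schematic of the decision rule implied above, applied to short analysis windows; the window length, polynomial order, and threshold are stand-ins for the hypothesis test and statistic developed in the thesis.

```python
import numpy as np

def classify_vuv(frames, order=3, threshold=0.3):
    # A short window that a local polynomial fit explains well (small normalized
    # fit error) is taken as voiced; a poorly fit, noise-like window as unvoiced.
    labels = []
    for x in frames:
        n = np.arange(len(x))
        fit = np.polyval(np.polyfit(n, x, order), n)
        nmse = np.sum((x - fit) ** 2) / (np.sum(x ** 2) + 1e-12)
        labels.append("voiced" if nmse < threshold else "unvoiced")
    return labels
```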
|