11 |
Redução de perdas de sistemas de distribuição através do dimensionamento ótimo de bancos de capacitores via entropia cruzada / Losses reduction of distribution systems through optimal dimensioning of capacitor banks via cross entropy. Fabrício Bonfim Rodrigues de Oliveira, 21 November 2016 (has links)
Distribution systems are responsible for providing electricity to residential, industrial and commercial consumers under quality standards regulated by the Brazilian National Electric Energy Agency (ANEEL). Utilities therefore monitor their systems to check the voltage profile of the grid and the system's technical losses. The latter is an extremely important performance criterion, as it represents wasted energy and a decrease in the utility's revenue capacity. There is thus strong interest in providing electricity within the specifications stated by ANEEL with the lowest possible electrical losses. Techniques such as topology reconfiguration, reconductoring, and the allocation of capacitors and distributed generators are usually proposed in technical studies. In particular, capacitor allocation aims to identify the number, location and type of the capacitor banks (CBs) to be placed in the system in order to minimize losses, taking implementation and operating costs into account. Computational methods are used to determine the best CB configuration, and metaheuristics have been applied to this problem, with the objective of minimizing the technical losses of the distribution system. This work proposes a solution approach using the Cross Entropy metaheuristic, implemented in Python, to reduce the losses of electrical systems modeled in the OpenDSS program. The developed approach proved to be a valuable analysis tool for distribution systems, providing extremely satisfactory results.
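The mechanics of such a CE loop for capacitor allocation are easy to sketch. The Python fragment below is a minimal illustration only: the bus count, the candidate bank sizes, and the system_cost function (a stand-in for an OpenDSS power-flow evaluation) are invented here, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: choose one capacitor size (kvar) per candidate bus.
SIZES = np.array([0, 150, 300, 600])          # 0 means no bank installed
N_BUSES = 10

def system_cost(choice):
    # Stand-in for an OpenDSS power-flow run returning losses plus bank
    # costs; a real study would call the simulator here instead.
    kvar = SIZES[choice]
    compensation = 0.9 * np.sum(np.sqrt(kvar))    # diminishing returns
    install_cost = 2.0 * np.sum(kvar > 0)
    return 100.0 - compensation + install_cost    # 100.0 = base losses

# Cross entropy over independent categorical distributions, one per bus.
probs = np.full((N_BUSES, len(SIZES)), 1.0 / len(SIZES))
N_SAMPLES, N_ELITE, SMOOTH = 200, 20, 0.7

for _ in range(50):
    samples = np.stack(
        [rng.choice(len(SIZES), size=N_SAMPLES, p=probs[b]) for b in range(N_BUSES)],
        axis=1)                                   # shape (N_SAMPLES, N_BUSES)
    costs = np.array([system_cost(s) for s in samples])
    elite = samples[np.argsort(costs)[:N_ELITE]]  # lowest-cost configurations
    for b in range(N_BUSES):                      # move probs toward elite freqs
        freq = np.bincount(elite[:, b], minlength=len(SIZES)) / N_ELITE
        probs[b] = SMOOTH * freq + (1 - SMOOTH) * probs[b]

best = probs.argmax(axis=1)
print("capacitor size per bus (kvar):", SIZES[best])
print("cost of that configuration:", round(system_cost(best), 2))
```

The smoothing factor keeps the categorical distributions from collapsing prematurely onto a single configuration, which is the usual safeguard in discrete CE implementations.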
|
12 |
Stochastic Modelling and Intervention of the Spread of HIV/AIDS. Asrul Sani, Unknown Date (has links)
Since the first cases of HIV/AIDS were recognised in the early 1980s, a large number of mathematical models have been proposed. However, the mobility of people among regions, which has an obvious impact on the spread of the disease, has received little attention in modelling studies. One of the main reasons is that models for the spread of the disease in multiple populations are very complex and, as a consequence, can easily become intractable. In this thesis we provide various new results pertaining to the spread of the disease in mobile populations, including epidemic intervention in multiple populations. We first develop stochastic models for the spread of the disease in a single heterosexual population, considering both constant and varying population sizes. In particular, we consider a class of continuous-time Markov chains (CTMCs). We establish deterministic and Gaussian diffusion analogues of these stochastic processes by applying the theory of density dependent processes. A range of numerical experiments show how well the deterministic and Gaussian counterparts approximate the dynamic behaviour of the processes. We derive threshold parameters, known as basic reproduction numbers, for both cases: above the threshold the disease is uniformly persistent, while below it the disease-free equilibrium is locally attractive. We find that the threshold conditions for constant and varying population sizes have the same form. To take into account the mobility of people among regions, we extend the stochastic models to multiple populations. Various stochastic models for multiple populations are formulated as CTMCs. The deterministic and Gaussian diffusion counterparts of the corresponding stochastic processes for the multiple populations are also established. Threshold parameters for the persistence of the disease in the multiple-population models are derived by applying the concept of next-generation matrices. The results of this study can serve as a basic framework for formulating and analysing more realistic stochastic models for the spread of HIV in mobile heterogeneous populations, classifying all individuals by age, risk and level of infectivity, while also considering different modes of disease transmission. Assuming an accurate mathematical model for the spread of HIV/AIDS, another question that we address in this thesis is how to control the spread of the disease in a mobile population. Most previous studies focus on identifying the most significant parameters in a model. In contrast, we pose these problems as optimal epidemic intervention problems. The study is largely motivated by the fact that more and more local governments allocate budgets over a certain period of time to combat the disease in their areas. The question is how to allocate this limited budget to minimise the number of new HIV cases, say at a country level, over a finite time horizon as people move among regions. The mathematical models developed in the first part of this thesis serve as the dynamic constraints of the optimal control problems. We also introduce a novel approach to solving quite general optimal control problems using the Cross-Entropy (CE) method. The effectiveness of the CE method is demonstrated through several illustrative examples in optimal control; the main application is the optimal epidemic intervention problems discussed above.
These are highly non-linear and multidimensional problems, and many existing numerical techniques for solving such optimal control problems suffer from the curse of dimensionality. However, we find that the CE technique is very efficient in solving them. The numerical results obtained via the CE method suggest that the optimal trajectories are highly synchronised among patches but do not depend much on the structure of the models. Instead, the model parameters (such as the time horizon, the available budget and the infection rates) strongly affect the form of the solution.
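To make the CE approach to such budget-allocation problems concrete, here is a heavily simplified sketch. The two-patch dynamics, rates and budget below are invented for illustration and are not the thesis's models; CE samples control trajectories, projects them onto the budget constraint, and refits a Gaussian to the elite samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-patch SIR-type model with movement; u[k, t] is the intervention
# effort in patch k at time t. All rates below are illustrative only.
T, N_PATCH, BUDGET = 20, 2, 5.0
beta = np.array([0.30, 0.25])          # infection rates per patch
move = 0.05                            # symmetric movement rate

def new_infections(u):
    s = np.array([0.99, 0.99]); i = np.array([0.01, 0.01]); total = 0.0
    for t in range(T):
        eff_beta = beta * np.exp(-u[:, t])        # effort damps transmission
        inf = eff_beta * s * i
        total += inf.sum()
        s, i = s - inf, i + inf - 0.1 * i         # 0.1 = removal rate
        i = i + move * (i[::-1] - i)              # mixing between patches
    return total

# CE over nonnegative control trajectories, projected onto the budget.
mu = np.full((N_PATCH, T), BUDGET / (N_PATCH * T))
sigma = np.full((N_PATCH, T), 0.5)
N, ELITE = 300, 30

for _ in range(60):
    x = np.clip(rng.normal(mu, sigma, (N, N_PATCH, T)), 0, None)
    x *= BUDGET / np.maximum(x.sum(axis=(1, 2), keepdims=True), 1e-12)
    scores = np.array([new_infections(xi) for xi in x])
    el = x[np.argsort(scores)[:ELITE]]
    mu, sigma = el.mean(axis=0), el.std(axis=0) + 1e-3

u = np.clip(mu, 0, None)
u *= BUDGET / max(u.sum(), 1e-12)
print("new infections under CE policy:", round(new_infections(u), 4))
print("uncontrolled baseline:        ", round(new_infections(np.zeros_like(u)), 4))
```

The projection step is a crude way of handling the budget constraint; the thesis's optimal control formulation is considerably richer.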
|
14 |
Cross entropy-based analysis of spacecraft control systems. Mujumdar, Anusha Pradeep, January 2016
Space missions increasingly require sophisticated guidance, navigation and control algorithms, whose development relies on verification and validation (V&V) techniques to ensure mission safety and success. A crucial element of V&V is the assessment of control system robust performance in the presence of uncertainty. In addition to estimating average performance under uncertainty, it is critical to determine the worst case performance. Industrial V&V approaches typically employ mu-analysis in the early control design stages, and Monte Carlo simulations on high-fidelity full engineering simulators at advanced stages of the design cycle. While highly capable, such techniques leave a critical gap between pessimistic worst case estimates found using analytical methods and the optimistic outlook often presented by Monte Carlo runs. Conservative worst case estimates are problematic because they can demand a controller redesign that is not justified if the poor performance is unlikely to occur. Gaining insight into the probability associated with the worst case performance is valuable in bridging this gap. Due to the complexity of industrial-scale systems, V&V techniques must be capable of efficiently analysing non-linear models in the presence of significant uncertainty, and they must be computationally tractable. It is also desirable that such techniques demand little engineering effort before each analysis, so that they can be applied widely in industrial systems. Motivated by these factors, this thesis proposes and develops an efficient algorithm based on the cross entropy simulation method. The proposed algorithm efficiently estimates the probabilities associated with various performance levels, from nominal performance up to degraded performance values, resulting in a curve of probabilities associated with the various performance values. Such a curve is termed the probability profile of performance (PPoP), and is introduced as a tool that offers insight into a control system's performance, principally the probability associated with the worst case performance. The cross entropy-based robust performance analysis is implemented here on various industrial systems in European Space Agency-funded research projects. The implementation on autonomous rendezvous and docking models for the Mars Sample Return mission constitutes the core of the thesis. The proposed technique is also implemented on high-fidelity models of the Vega launcher, as well as on a generic long-coasting launcher upper stage. In summary, this thesis (a) develops an algorithm based on the cross entropy simulation method to estimate the probability associated with the worst case, (b) proposes the cross entropy-based PPoP tool to gain insight into system performance, (c) presents results of the robust performance analysis of three space industry systems using the proposed technique in conjunction with existing methods, and (d) proposes an integrated template for conducting robust performance analysis of linearised aerospace systems.
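The flavour of such CE-based probability estimation can be sketched in a few lines. Below, a toy quadratic performance index over two Gaussian uncertain parameters stands in for a full closed-loop simulator; all names and constants are illustrative, and the sampler's covariance is kept fixed for simplicity. Sweeping the performance level traces out a PPoP-style curve.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy performance index J(theta) over uncertain parameters theta ~ N(0, I);
# the quadratic form is purely illustrative.
def J(theta):
    return (1.5 * theta[..., 0] + 0.8 * theta[..., 1]) ** 2 + 0.3 * theta[..., 1] ** 2

DIM, N = 2, 2000

def ce_tail_prob(level, iters=15, rho=0.1):
    """CE importance-sampling estimate of P(J(theta) >= level)."""
    mu = np.zeros(DIM)                                 # mean of tilted sampler
    for _ in range(iters):
        x = rng.normal(mu, 1.0, (N, DIM))
        j = J(x)
        gamma = min(level, np.quantile(j, 1 - rho))    # intermediate level
        w = np.exp(stats.norm.logpdf(x, 0.0, 1.0).sum(axis=1)
                   - stats.norm.logpdf(x, mu, 1.0).sum(axis=1))
        sel = j >= gamma
        if not sel.any():
            break
        mu = (w[sel, None] * x[sel]).sum(axis=0) / w[sel].sum()  # CE update
        if gamma >= level:
            break
    x = rng.normal(mu, 1.0, (N, DIM))
    w = np.exp(stats.norm.logpdf(x, 0.0, 1.0).sum(axis=1)
               - stats.norm.logpdf(x, mu, 1.0).sum(axis=1))
    return float(np.mean(w * (J(x) >= level)))

# Probability profile of performance: tail probability vs. performance level.
for level in [1, 4, 9, 16, 25]:
    print(f"P(J >= {level:2d}) ~ {ce_tail_prob(level):.3e}")
```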
|
15 |
全世界を対象とした人為起源の物質・エネルギーフロー勘定表の構築手法とその適用 / A method for constructing accounts of anthropogenic material and energy flows for the entire world, and its application. 藤森, 真一郎 (Fujimori, Shinichiro), 23 March 2009 (has links)
Kyoto University (京都大学) / 0048 / New system, doctorate by coursework / Doctor of Engineering / Kō No. 14559 / Kōgaku No. 3027 / 新制||工||1451 (University Library) / 26911 / UT51-2009-D271 / Department of Urban and Environmental Engineering, Graduate School of Engineering, Kyoto University / (Examiners) Prof. 松岡 譲 (Yuzuru Matsuoka), Prof. 森澤 眞輔 (Shinsuke Morisawa), Assoc. Prof. 倉田 学児 (Gakuji Kurata) / Qualified under Article 4, Paragraph 1 of the Degree Regulations
|
16 |
Simulation ranking and selection procedures and applications in network reliability design. Kiekhaefer, Andrew Paul, 01 May 2011
This thesis presents three novel contributions to the application and development of ranking and selection procedures. Ranking and selection is an important topic in the discrete-event simulation literature, concerned with the use of statistical approaches to select the best system, or set of best systems, from a set of simulated alternatives. It comprises three different approaches: subset selection, indifference zone selection, and multiple comparisons. The methodology in this thesis focuses primarily on the first two: subset selection and indifference zone selection.
Our first contribution regards the application of existing ranking and selection procedures to an important body of literature known as system reliability design. If a system can be modeled as a network of arcs and nodes, then the difficult problem of determining the most reliable network configuration, given a set of design constraints, is an optimization problem that we refer to as the network reliability design problem. We first present a novel solution approach for one type of network reliability design problem where total enumeration of the solution space is feasible and desirable. This approach focuses on improving the efficiency of the evaluation of system reliabilities as well as quantifying the probability of correctly selecting the true best design based on the estimated expected system reliabilities, using ranking and selection procedures; both are novel ideas in the system reliability design literature. Altogether, this method eliminates the guesswork previously associated with this design problem while achieving significant runtime improvements over the existing methodology.
Our second contribution regards the development of a new optimization framework for the network reliability design problem that is applicable to any topological and terminal configuration as well as solution sets of any size. This framework focuses on improving the efficiency of the evaluation and comparison of system reliabilities, while providing more robust performance and a more user-friendly procedure in terms of input-parameter selection. This is accomplished through the introduction of two novel statistical sampling procedures based on the concepts of ranking and selection: Sequential Selection of the Best Subset and Duplicate Generation. Altogether, this framework achieves the same convergence and solution quality as the baseline cross-entropy approach, but achieves runtime and sample-size improvements on the order of 450% to 1500% over the example networks tested.
Our final contribution extends the general ranking and selection literature with novel procedures for the problem of selecting the k best systems, where system means and variances are unknown and potentially unequal. We present three new ranking and selection procedures: a subset selection procedure, an indifference zone selection procedure, and a combined two-stage subset selection and indifference zone selection procedure. All procedures are backed by proofs of their theoretical guarantees as well as empirical results on the probability of correct selection. We also investigate the effect of various parameters on each procedure's overall performance.
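To illustrate what a subset-selection step looks like in practice, here is a simplified, Bonferroni-style sketch in the spirit of the procedures discussed. It is not one of the thesis's procedures (those come with their own proofs); the system means and variances below are invented, and the goal is that the retained subset contain the true best system with high probability.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical simulated systems: unknown, unequal means and variances.
TRUE_MEANS = [0.90, 0.88, 0.87, 0.80, 0.75]
TRUE_STDS  = [0.05, 0.08, 0.03, 0.10, 0.06]
N0, ALPHA  = 30, 0.05          # first-stage sample size, allowed error rate

def simulate(i, n):
    # Stand-in for n replications of a discrete-event simulation of system i.
    return rng.normal(TRUE_MEANS[i], TRUE_STDS[i], n)

k = len(TRUE_MEANS)
samples = [simulate(i, N0) for i in range(k)]
means = np.array([s.mean() for s in samples])
vars_ = np.array([s.var(ddof=1) for s in samples])

# Retain system i unless some j beats it by more than the pairwise margin,
# with a Bonferroni-adjusted t quantile over the k-1 comparisons.
t = stats.t.ppf(1 - ALPHA / (k - 1), df=N0 - 1)
subset = [
    i for i in range(k)
    if all(means[i] >= means[j] - t * np.sqrt((vars_[i] + vars_[j]) / N0)
           for j in range(k) if j != i)
]
print("retained systems:", subset)   # aims to contain the true best (index 0)
```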
|
17 |
Optimization Algorithms for Deterministic, Stochastic and Reinforcement Learning SettingsJoseph, Ajin George January 2017 (has links) (PDF)
Optimization is a very important field with diverse applications in the physical, social and biological sciences and in various areas of engineering. It appears widely in machine learning, information retrieval, regression, estimation, operations research and a wide variety of computing domains. The subject is deeply studied both theoretically and experimentally, and several algorithms are available in the literature. These algorithms, which can be executed (sequentially or concurrently) on a computing machine, explore the space of input parameters to seek high-quality solutions to the optimization problem, with the search mostly guided by certain structural properties of the objective function. In certain situations, the setting might additionally demand the “absolute optimum” or solutions close to it, which makes the task even more challenging.
In this thesis, we propose an optimization algorithm which is “gradient-free”, i.e., it does not employ any knowledge of the gradient or higher-order derivatives of the objective function, but rather utilizes the objective function values themselves to steer the search. The proposed algorithm is particularly effective in a black-box setting, where a closed-form expression of the objective function is unavailable and the gradient or higher-order derivatives are hard to compute or estimate. Our algorithm is inspired by the well-known cross entropy (CE) method. The CE method is a model-based search method for continuous/discrete multi-extremal optimization problems in which the objective function has minimal structure. It searches the statistical manifold of the parameters identifying the probability distribution/model defined over the input space, seeking the degenerate distribution concentrated on the global optima (assumed to be finite in number). In the early part of the thesis, we propose a novel stochastic approximation version of the CE method for the unconstrained optimization problem where the objective function is real-valued and deterministic. The basis of the algorithm is a stochastic process of model parameters which is probabilistically dependent on the past history, where we reuse all the previous samples obtained in the process up to the current instant via discounted averaging. This approach saves overall computational and storage cost. Our algorithm is incremental in nature and possesses attractive features such as stability, computational and storage efficiency, and better accuracy. We further investigate, both theoretically and empirically, the asymptotic behaviour of the algorithm and find that it exhibits global optimum convergence for a particular class of objective functions.
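A toy rendering of this discounted-averaging idea follows; the objective, constants and update schedule are illustrative only, and the actual algorithm with its convergence analysis is in the thesis. Instead of a large batch per iteration, small batches are drawn and past samples fade geometrically through discounted moment sums.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x):                                  # deterministic toy objective
    return np.sum((x - 1.0) ** 2, axis=-1)

DIM, BATCH, RHO, STEP = 5, 10, 0.1, 0.05
mu, var = np.zeros(DIM), np.full(DIM, 9.0)   # Gaussian model parameters
gamma = None                                  # running elite-level threshold
S0, S1, S2 = 0.0, np.zeros(DIM), np.zeros(DIM)  # discounted moment sums

# Small batches per iteration; previous samples are never discarded but
# fade through the exponentially discounted sums S0, S1, S2.
for n in range(2000):
    x = rng.normal(mu, np.sqrt(var), (BATCH, DIM))
    fx = f(x)
    q = np.quantile(fx, RHO)                  # batch rho-quantile of f
    gamma = q if gamma is None else gamma + STEP * (q - gamma)
    w = (fx <= gamma).astype(float)[:, None]  # elite indicator
    S0 = (1 - STEP) * S0 + STEP * w.sum()
    S1 = (1 - STEP) * S1 + STEP * (w * x).sum(axis=0)
    S2 = (1 - STEP) * S2 + STEP * (w * x * x).sum(axis=0)
    if S0 > 1e-8:                             # refit Gaussian to elite mass
        mu = S1 / S0
        var = np.maximum(S2 / S0 - mu ** 2, 1e-8)

print("estimated optimum:", np.round(mu, 3))  # should approach all-ones
```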
Further, we extend the algorithm to the simulation/stochastic optimization problem. In stochastic optimization, the objective function has a stochastic characteristic, and the underlying probability distribution is in most cases hard to comprehend and quantify. This begets a more challenging optimization problem, whose difficulty stems primarily from the hardness of computing the objective function values for various input parameters with absolute certainty. In this case, one can only hope to obtain noise-corrupted objective function values. Settings of this kind arise when the objective function is evaluated using a continuously evolving dynamical system or through a simulation. We propose a multi-timescale stochastic approximation algorithm, integrating an additional timescale to accommodate the noisy measurements and attenuate the effects of the noise asymptotically. We find that if the objective function and the measurement noise are well behaved and the timescales are compatible, then our algorithm can generate high-quality solutions.
In the later part of the thesis, we propose algorithms for reinforcement learning/Markov decision processes (MDPs) using the optimization techniques developed earlier. MDPs can be considered a generalized framework for modelling planning under uncertainty. We provide a novel algorithm for the problem of prediction in reinforcement learning, i.e., estimating the value function of a given stationary policy of a model-free MDP (with large state and action spaces) using the linear function approximation architecture. Here, the value function is defined as the long-run average of the discounted transition costs. The resource requirement of the proposed method, in terms of computational and storage cost, scales quadratically in the size of the feature set. The algorithm is an adaptation of the multi-timescale variant of the CE method proposed earlier in the thesis for simulation optimization. We also provide both theoretical and empirical evidence to corroborate the credibility and effectiveness of the approach.
In the final part of the thesis, we consider a modified version of the control problem in a model-free MDP with large state and action spaces. The control problem most commonly addressed in the literature is to find an optimal policy which maximizes the value function, i.e., the long-run average of the discounted transition payoffs. Contemporary methods also presume access to a generative model/simulator of the MDP, with the hidden premise that observations of the system behaviour in the form of sample trajectories can be obtained with ease from the model. Here, we consider a modified version where the cost function to be optimized is a real-valued (possibly non-convex) performance function of the value function, and where the optimal policy must be sought without presuming access to the generative model. We propose a stochastic approximation algorithm for this control problem. The only information we presuppose available to the algorithm is a sample trajectory generated using an a priori chosen behaviour policy. The algorithm is data (sample trajectory) efficient, stable, robust, and computationally and storage efficient. We provide a proof of convergence of our algorithm to a high-performing policy relative to the behaviour policy.
|
18 |
La programmation DC et la méthode Cross-Entropy pour certaines classes de problèmes en finance, affectation et recherche d’informations : codes et simulations numériques / The DC programming and the cross-entropy method for some classes of problems in finance, assignment and search theory. Nguyen, Duc Manh, 24 February 2012
In this thesis we develop deterministic and heuristic approaches for solving certain classes of optimization problems in finance, assignment and information search. These are large-scale nonconvex optimization problems. Our approaches are based on DC (difference of convex functions) programming with DCA, and on the cross-entropy (CE) method. Using formulation/reformulation techniques, we derive DC formulations of the considered problems so that DCA can be applied to obtain their solutions. In addition, depending on the structure of the feasible sets of the considered problems, we design appropriate families of distributions so that the cross-entropy method can be applied efficiently. All the proposed methods have been implemented in MATLAB and C/C++ to confirm their practical value and enrich our research work.
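The DCA iteration at the heart of this approach is simple to illustrate on a one-dimensional toy problem of our own (not from the thesis): write f = g - h with g and h convex, then repeatedly linearize h at the current point and solve the remaining convex subproblem.

```python
import numpy as np

# DCA on the toy DC decomposition f(x) = (x^2 - 1)^2 = g(x) - h(x), with
# g(x) = x^4 + 1 (convex) and h(x) = 2x^2 (convex). Each DCA step solves
# min_x g(x) - h'(x_k) * x, which here has a closed-form solution.
def dca(x0, iters=30):
    x = x0
    for _ in range(iters):
        # argmin of x^4 + 1 - 4*x_k*x:  4x^3 = 4*x_k,  so x = cbrt(x_k)
        x = np.cbrt(x)
    return x

for x0 in (3.0, -0.2, 0.5):
    x = dca(x0)
    print(f"start {x0:+.2f} -> x* = {x:+.4f}, f(x*) = {(x**2 - 1)**2:.2e}")
```

From any nonzero start the iterates converge to one of the global minimizers x = ±1, which illustrates why DCA is attractive for nonconvex problems even though, in general, it guarantees only critical points.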
|
19 |
Estimation de la disponibilité par simulation, pour des systèmes incluant des contraintes logistiques / Availability estimation by simulations for systems including logistics. Rai, Ajit, 09 July 2018 (has links)
RAM (Reliability, Availability and Maintainability) analysis forms an integral part of the estimation of life cycle costs (LCC) of passenger rail systems. These systems are highly reliable and involve complex logistics. Standard Monte Carlo simulations are rendered useless for efficient estimation of RAM metrics due to the issue of rare events: failures of these complex systems are rare, and thus call for efficient simulation techniques. Importance sampling (IS) is an advanced class of variance-reduction techniques that can overcome the limitations of standard simulation. IS techniques can accelerate simulations, meaning less variance in the estimation of RAM metrics for the same computational budget as a standard simulation. However, IS involves changing the probability laws (change of measure) that drive the mathematical models of the systems during simulations, and the optimal IS change of measure is usually unknown, even though theoretically a perfect one (the zero-variance IS change of measure) exists. In this thesis, we focus on IS techniques and their application to estimate two RAM metrics: reliability (for static networks) and steady-state availability (for dynamic systems). The thesis focuses on finding and/or approximating the optimal IS change of measure to efficiently estimate RAM metrics in a rare-event context.
The contribution of the thesis is broadly divided into two main axes: first, we propose an adaptation of the approximate zero-variance IS method to estimate the reliability of static networks and show its application to real passenger rail systems; second, we propose a multi-level cross-entropy optimization scheme that can be used during pre-simulation to obtain CE-optimized IS rates for the transitions of Markovian stochastic Petri nets (SPNs), and use them in the main simulations to estimate the steady-state unavailability of highly reliable Markovian systems with complex logistics. Results from these methods show huge variance reduction and gain compared to MC simulations.
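As a rough illustration of the second axis, the sketch below applies CE-optimized importance sampling to a tiny birth-death reliability model (three components, one repairman; all rates invented), rather than the full SPN machinery of the thesis. It estimates the rare-event probability at the heart of regenerative unavailability estimation: total failure before full repair, starting from the first failure.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy highly reliable Markovian system: N identical components with failure
# rate LAM each and a single repairman with rate MU. State = number failed.
N, LAM, MU = 3, 1e-3, 1.0

def run(q):
    """One embedded-chain path from state 1; returns (hit, likelihood, path)."""
    k, L, path = 1, 1.0, []
    while 0 < k < N:
        p = (N - k) * LAM / ((N - k) * LAM + MU)   # true failure-jump prob.
        up = rng.random() < q[k]                   # jump under tilted prob. q[k]
        L *= (p / q[k]) if up else ((1 - p) / (1 - q[k]))
        path.append((k, up))
        k += 1 if up else -1
    return k == N, L, path

# CE iterations: push the tilted per-state jump probabilities toward the
# zero-variance change of measure via likelihood-ratio-weighted counts.
q = np.full(N, 0.5)                                # initial tilt per state
for _ in range(10):
    up_w, tot_w = np.zeros(N), np.zeros(N)
    for _ in range(2000):
        hit, L, path = run(q)
        if hit:                                    # weight successful paths by L
            for s, up in path:
                tot_w[s] += L
                up_w[s] += L * up
    mask = tot_w > 0
    q[mask] = np.clip(up_w[mask] / tot_w[mask], 0.05, 0.95)

# Final importance-sampling estimate under the CE-optimized measure.
est = np.mean([hit * L for hit, L, _ in (run(q) for _ in range(20000))])
print("P(total failure before full repair) ~", est)
```

Boosting the failure-jump probabilities makes the rare path common under the sampling measure, while the likelihood ratio L keeps the estimator unbiased; crude Monte Carlo would need millions of runs to see this event at all.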
|
20 |
New Selection Criteria for Tone Reservation Technique Based on Cross-Entropy Algorithm in OFDM Systems. Chiu, Min-han, 24 August 2011
This thesis considers the use of the tone reservation (TR) technique in orthogonal frequency division multiplexing (OFDM) systems. Nonlinear distortion is usually introduced by the high-power amplifiers (HPAs) used in wireless communication systems, so reducing the resulting inter-modulation distortion (IMD) in OFDM systems is essential. In addition to the original peak-to-average power ratio (PAPR) reduction criterion, we propose a signal-to-distortion-plus-noise power ratio (SDNR) criterion and a distortion power plus inverse of signal power (DIS) criterion. Based on these criteria, the cross-entropy (CE) algorithm is introduced to determine desired values of the peak reduction carriers (PRCs) and thereby improve the bit error rate (BER) of nonlinearly distorted signals. Computational complexity is always a major concern for PAPR-reduction techniques; therefore, real-valued PRCs and the modified transform decomposition (MTD) method are introduced to dramatically decrease the complexity of the inverse fast Fourier transform (IFFT) operation with only slight performance loss. Simulation results show that the proposed criteria provide better BER performance at lower computational complexity.
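A bare-bones version of CE-driven PRC selection under the plain PAPR criterion is sketched below. The subcarrier count, reserved-tone indices and CE constants are illustrative only; the thesis's SDNR and DIS criteria would replace papr_db as the score, and its MTD method would replace the full IFFT.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy tone-reservation setup: N subcarriers, a few reserved for peak
# reduction. CE searches over real-valued PRC amplitudes to minimize PAPR.
N = 64
PRC_IDX = np.array([4, 13, 27, 41, 55])                   # reserved tones
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)  # QPSK symbols
data[PRC_IDX] = 0                                         # data avoids PRC tones

def papr_db(c):
    X = data.copy()
    X[PRC_IDX] = c[: len(PRC_IDX)] + 1j * c[len(PRC_IDX):]
    x = np.fft.ifft(X) * np.sqrt(N)                       # time-domain signal
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

D = 2 * len(PRC_IDX)                 # real + imaginary parts of each PRC
mu, sigma = np.zeros(D), np.full(D, 2.0)
N_S, N_E = 200, 20                   # samples and elites per CE iteration

for _ in range(40):
    c = rng.normal(mu, sigma, (N_S, D))
    scores = np.array([papr_db(ci) for ci in c])
    elite = c[np.argsort(scores)[:N_E]]                   # lowest-PAPR samples
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print(f"PAPR without PRCs:          {papr_db(np.zeros(D)):.2f} dB")
print(f"PAPR with CE-optimized PRCs: {papr_db(mu):.2f} dB")
```

Restricting the PRCs to real values, as the thesis proposes, would halve the CE search dimension D and is one source of its complexity savings.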
|