971 |
The Simulation of Model Selection Method for General Adaptive Penalized Least Squares and Comparison with Other Methods 陳柏錞 Unknown Date (has links)
In regression analysis, if the relationship between the response variable and the explanatory variables is nonlinear, B-splines can be used to model the relationship nonparametrically. A B-spline is a piecewise polynomial joined at knots, so knot selection is crucial in B-spline regression: a good estimate should ideally be obtained with only a small number of well-placed knots. Huang (2013) proposes a method for adaptive estimation in which knots are selected based on penalized least squares; this method is abbreviated as APLS (adaptive penalized least squares) in this thesis. A more general version of APLS, abbreviated as GAPLS (generalized APLS), is proposed here, and its estimation performance is examined under various settings. Simulation studies are carried out to compare GAPLS with a knot selection method based on BIC (Bayesian information criterion). The simulation results show that both B-spline-based methods approximate the regression function well and differ mainly in the number of knots selected: fewer knots are chosen by the BIC approach than by GAPLS, so the BIC approach is recommended when a small number of knots is strictly required, while GAPLS remains a good alternative otherwise.
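The penalized least squares idea behind knot-based B-spline fitting can be illustrated with a generic penalized spline fit. The sketch below is only a stand-in under stated assumptions: it uses SciPy's B-spline design matrix, a rich set of equally spaced candidate knots, and a second-difference roughness penalty with an arbitrary penalty weight; the APLS/GAPLS knot-selection rules themselves are not reproduced here.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # nonlinear truth plus noise

degree, n_interior = 3, 20                                       # many equally spaced candidate knots
knots = np.concatenate([[0.0] * (degree + 1),
                        np.linspace(0, 1, n_interior + 2)[1:-1],
                        [1.0] * (degree + 1)])
n_basis = len(knots) - degree - 1
B = BSpline.design_matrix(x, knots, degree).toarray()            # n x n_basis B-spline design matrix

D = np.diff(np.eye(n_basis), n=2, axis=0)                        # second-difference roughness penalty
lam = 1.0                                                        # penalty weight (tuning parameter)
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)         # penalized least squares solution
fit = B @ coef
print("residual standard deviation:", np.std(y - fit))
```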
|
972 |
Forecasting Mid-Term Electricity Market Clearing Price Using Support Vector Machines 2014 May 1900 (has links)
In a deregulated electricity market, offering the appropriate amount of electricity at the right time with the right bidding price is of paramount importance. Forecasting the electricity market clearing price (MCP) predicts the future electricity price from given forecasts of electricity demand, temperature, sunshine, fuel cost, precipitation and other related factors. Many techniques are currently available for short-term electricity MCP forecasting, but very little has been done in the area of mid-term electricity MCP forecasting, which addresses a time frame of one to six months. Mid-term electricity MCP forecasting is essential for mid-term planning and decision making, such as generation plant expansion and maintenance scheduling, reallocation of resources, bilateral contracts and hedging strategies.
Six mid-term electricity MCP forecasting models are proposed and compared in this thesis: 1) a single support vector machine (SVM) forecasting model, 2) a single least squares support vector machine (LSSVM) forecasting model, 3) a hybrid SVM and auto-regressive moving average with external input (ARMAX) forecasting model, 4) a hybrid LSSVM and ARMAX forecasting model, 5) a multiple SVM forecasting model and 6) a multiple LSSVM forecasting model. PJM interconnection data are used to test the proposed models. Cross-validation is used to optimize the control parameters and the selection of training data for the six proposed mid-term electricity MCP forecasting models. Three evaluation measures, mean absolute error (MAE), mean absolute percentage error (MAPE) and mean square root error (MSRE), are used to analyze forecasting accuracy. According to the experimental results, the multiple SVM forecasting model works best among all six proposed models. The proposed multiple-SVM-based mid-term electricity MCP forecasting model contains a data classification module and a price forecasting module: the classification module first assigns the input data to corresponding price zones, and the forecasting module then forecasts the electricity price with four SVMs designed in parallel. Compared with the other five models proposed in this thesis, this model gives the best improvement in forecasting accuracy for both peak prices and the overall system.
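As a rough illustration of the single-SVM forecasting step, the sketch below trains a support vector regressor with cross-validated parameter tuning and reports MAE and MAPE. It is a minimal sketch under stated assumptions: the features and prices are synthetic stand-ins for the PJM inputs (demand, temperature, fuel cost, and so on), and the hyperparameter grid is arbitrary rather than the one used in the thesis.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                                         # synthetic demand, temperature, fuel cost, ...
y = 40 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=500)    # synthetic "market clearing price"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
grid = GridSearchCV(SVR(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": ["scale", 0.1], "epsilon": [0.1, 1.0]},
                    cv=5, scoring="neg_mean_absolute_error")
grid.fit(X_tr, y_tr)                                                  # cross-validated parameter tuning
pred = grid.predict(X_te)
print("MAE :", mean_absolute_error(y_te, pred))
print("MAPE:", mean_absolute_percentage_error(y_te, pred))
```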
|
973 |
Multivariate data analysis using spectroscopic data of fluorocarbon alcohol mixtures / Nothnagel, C. Nothnagel, Carien January 2012 (has links)
Pelchem, a commercial subsidiary of Necsa (South African Nuclear Energy Corporation), produces a range of commercial fluorocarbon products while driving research and development initiatives to support the fluorine product portfolio. One such initiative is to develop improved analytical techniques to analyse product composition during development and to quality assure products.
Generally the C–F type products produced by Necsa are in a solution of anhydrous HF and cannot be analysed directly with traditional techniques without derivatisation. A technique such as vibrational spectroscopy, which can analyse these products directly without further preparation, therefore has a distinct advantage. However, spectra of mixtures of similar compounds are complex and not suitable for traditional quantitative regression analysis.
Multivariate data analysis (MVA) can be used in such instances to exploit the complex nature of the spectra and extract quantitative information on the composition of mixtures.
A selection of fluorocarbon alcohols was made to act as representatives for fluorocarbon compounds. Experimental design theory was used to create a calibration range of mixtures of these compounds. Raman and infrared (NIR and ATR–IR) spectroscopy were used to generate spectral data of the mixtures, and these data were analysed with MVA techniques through the construction of regression and prediction models. Selected samples from the mixture range were chosen to test the predictive ability of the models.
The analysis and regression models (PCR, PLS2 and PLS1) gave good model fits (R2 values larger than 0.9). Raman spectroscopy was the most efficient technique and gave high prediction accuracy (at 10% accepted standard deviation), provided the minimum mass of a component exceeded 16% of the total sample.
The infrared techniques also performed well in terms of fit and prediction. The NIR spectra were subject to signal saturation as a result of the long path length of the sample cells used, and this was shown to be the main reason for the loss in efficiency of this technique compared with Raman and ATR–IR spectroscopy.
It was shown that multivariate data analysis of spectroscopic data of the selected fluorocarbon compounds can be used to quantitatively analyse mixtures, with the possibility of further optimization of the method. The study was a representative study indicating that the combination of MVA and spectroscopy can be used successfully in the quantitative analysis of other fluorocarbon compound mixtures. / Thesis (M.Sc. (Chemistry))--North-West University, Potchefstroom Campus, 2012.
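The PLS calibration and prediction workflow can be sketched as below. This is a minimal illustration under stated assumptions: synthetic mixture "spectra" built from random pure-component profiles stand in for the measured Raman/IR data, and scikit-learn's PLSRegression (used here in PLS2 mode, all components predicted at once) stands in for the chemometric software used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n_samples, n_channels, n_species = 60, 300, 3
C = rng.dirichlet(np.ones(n_species), size=n_samples)              # mixture compositions (rows sum to 1)
S = np.abs(rng.normal(size=(n_species, n_channels)))               # pure-component "spectra"
X = C @ S + rng.normal(scale=0.01, size=(n_samples, n_channels))   # mixture spectra plus noise

X_tr, X_te, C_tr, C_te = train_test_split(X, C, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, C_tr)                # PLS2: all components modelled at once
C_hat = pls.predict(X_te)
print("R2 per component:", r2_score(C_te, C_hat, multioutput="raw_values"))
```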
|
975 |
Estimation And Hypothesis Testing In Stochastic Regression Sazak, Hakan Savas 01 December 2003 (has links) (PDF)
Regression analysis is very popular among researchers in various fields, but almost all researchers use the classical methods, which assume that X is nonstochastic and that the error is normally distributed. In real-life problems, however, X is generally stochastic and the error can be nonnormal. The maximum likelihood (ML) estimation technique, which is known to have optimal features, is very problematic in situations where the distribution of X (the marginal part) or of the error (the conditional part) is nonnormal.
The modified maximum likelihood (MML) technique, which asymptotically gives estimators equivalent to the ML estimators, makes it possible to carry out estimation and hypothesis testing under nonnormal marginal and conditional distributions. In this study we show that MML estimators are highly efficient and robust. Moreover, the test statistics based on the MML estimators are much more powerful and robust than those based on the least squares (LS) estimators that are most commonly used in the literature. Theoretically, MML estimators are asymptotically minimum variance bound (MVB) estimators, but simulation results show that they are highly efficient even for small sample sizes. In this thesis, the Weibull and Generalized Logistic distributions are used for illustration, and the results given are based on these distributions.
As a future study, the MML technique can be applied to other types of distributions, and the procedures based on bivariate data can be extended to multivariate data.
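The kind of Monte Carlo comparison described can be set up as in the sketch below. This is only a sketch of the simulation baseline under stated assumptions: X is drawn from a Weibull distribution, the errors from a type I generalized logistic distribution, and only the ordinary least squares slope is evaluated; the MML estimators themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
beta0, beta1, n, n_rep = 1.0, 2.0, 30, 5000
slopes = np.empty(n_rep)
for r in range(n_rep):
    x = rng.weibull(1.5, size=n)                          # stochastic regressor (nonnormal marginal part)
    u = -np.log(rng.uniform(size=n) ** (-1 / 2.0) - 1)    # generalized logistic (b = 2) error, positively skewed
    y = beta0 + beta1 * x + u                             # the nonzero error mean only shifts the intercept
    slopes[r] = np.polyfit(x, y, 1)[0]                    # LS slope estimate

print("mean LS slope:", slopes.mean(), "empirical variance:", slopes.var())
```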
|
976 |
A transaction cost analysis of the informatization of government performance management: the case of the Government Program Management Information Network (GPMnet) / Information and communication technologies (ICTs) and government performance management: A case study of GPMnet in Taiwan 謝叔芳, Hsieh, Hsu Fang Unknown Date (has links)
Since the government reinvention movement of the 1980s, performance management and information and communication technologies have become important tools for improving government performance. Against this background, Taiwan completed the integration of the Government Program Management Information Network (GPMnet) in 2005 to support the execution of performance management work. However, because information technology covers a very broad range of aspects and its effects are wide-ranging, it has provoked debate among optimistic, pessimistic and pragmatic positions, and the effectiveness of its use still awaits further evaluation. Building on the relevant literature, this study adopts a transaction cost theory approach: it first uses a questionnaire survey to understand the attitudes and behavioral preferences of GPMnet users, and then draws on interview data to further analyze how information and communication technologies increase and decrease the costs of government performance management.
The study adopts a mixed methodology for its research design, collecting and analyzing both quantitative and qualitative data. For the quantitative data, a questionnaire survey was conducted with GPMnet users as the unit of analysis, and 148 valid responses were collected. For the qualitative data, eight GPMnet users were selected for interviews according to four permission roles (program handling, supervision, joint review, and research and evaluation) in order to understand the experiences and views of users with different permissions.
For data analysis, the questionnaire data were analyzed with the partial least squares method. The survey results show that the perceived transaction costs of using the GPMnet system have a significant negative relationship with attitudes and with subjective system performance, while the hypotheses relating uncertainty, asset specificity and frequency of use to transaction costs were not supported by the empirical data. The interview data further indicate that, under the current institutional environment (different agencies running different information systems, GPMnet comprising multiple subsystems, and paper-based processes still in place), using GPMnet to carry out performance management work adds to the administrative cost burden. In actual use, however, the system's ability to retain past data, provide clearly defined fields, transmit documents online, control progress and proactively disclose information reduces the transaction costs of administrative work. At the same time, learning time that is not worth the cost, time-consuming communication, proofreading, information overload, an unfriendly interface and system instability are negative effects that increase the transaction costs of performance management work.
Finally, for academic research this study suggests that the observed variables of the structural model should be designed more carefully and that information system evaluation theory should pay more attention to the cost perspective. On the practical side, electronic performance management should be implemented comprehensively, and data backups should be carried out in the GPMnet system environment to reduce excessive information load. / Governments invest much more attention, time, and money in performance management and evaluation in the public sector today than ever before. To better utilize agency program management systems under the Executive Yuan, the Research, Development and Evaluation Commission (RDEC) has completed the planning of the "Policy Program Management Information System" (Government Program network, GPMnet). The system is a common service platform created to integrate various policy implementation management information systems to enhance the performance of different agencies in program management. However, the performance of GPMnet itself needs to be evaluated. To evaluate the system, this study presents an empirical analysis based on a transaction cost approach, an approach that has often been used to support the idea that information and communication technology has a positive impact on the economic system.
The data were collected using a mixed methodology, combining quantitative data from 148 users with eight interviews based on a semi-structured questionnaire. Partial least squares was used to analyze the quantitative data. According to the research findings, information-related problems represent only some of the elements contributing to the transaction costs; these costs also emerge from institutional factors that contribute to their growth. Studies of the consequences associated with ICT design and implementation that are based on transaction cost theory should therefore consider the costs of ICTs.
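As a toy illustration of the quantitative part, the sketch below relates a block of "perceived transaction cost" survey items to a "subjective system performance" item. It is only a sketch under stated assumptions: the Likert-scale responses are synthetic (with n = 148 matching the reported sample size), and scikit-learn's PLSRegression is used as a simplified stand-in for the partial least squares path modelling applied in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n = 148                                                   # sample size reported in the study
cost = rng.normal(size=n)                                 # latent "perceived transaction cost"
X = np.clip(np.round(3 + cost[:, None] + rng.normal(scale=0.8, size=(n, 4))), 1, 5)   # four cost items
y = np.clip(np.round(3 - 0.6 * cost + rng.normal(scale=0.8, size=n)), 1, 5)           # performance item

pls = PLSRegression(n_components=1).fit(X, y)
print("R2 of the PLS model:", pls.score(X, y))
print("cost composite vs performance r:", np.corrcoef(X.mean(axis=1), y)[0, 1])       # expected negative
```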
|
977 |
A study of statistical method on estimating rare event in IC Current 彭亞凌, Peng, Ya Ling Unknown Date (has links)
Rare current values lying 4 to 6 standard deviations away from the expected value are one of the keys to the quality of present-day integrated circuit design. As accuracy requirements rise, however, simulating circuit data with the Monte Carlo method becomes increasingly impractical because it is so time consuming, while the parametric extrapolation and regression approaches used in the past suffer from variables that are hard to collect and from bias in the tail estimates as operating voltages shrink; for these reasons, estimating tail current values is difficult. This thesis therefore introduces statistical methods to improve the estimation of rare current values: the observations are first transformed toward approximate normality with a Box-Cox transformation to improve estimation of the tail of the distribution, and the rare current values are then estimated by weighted regression, in which the explanatory variable is the empirical cumulative probability under a log or z-score transformation and down-weighting is used to emphasize the information carried by the extreme observations. In addition, for the case where a complete set of variables can be collected, the weighted regression is instead run with circuit data as the explanatory variables. Extreme value theory is also adopted as an estimation method.
The methods are first compared by computer simulation, with the population distribution assumed to be normal, Student's t, or Gamma and mean squared error used as the evaluation criterion; the simulation results confirm the feasibility of the weighted regression approach. The sample-screening scheme for the empirical study is then chosen with reference to the simulation results. The data come from a technology company in Hsinchu, and the empirical results show that weighted regression combined with the Box-Cox transformation can accurately estimate extreme current values at left- and right-tail probabilities of 10^(-4), 10^(-5), 10^(-6) and 10^(-7) from a sample of one hundred thousand observations. For the right tail, the weighted regression is more accurate with a log-transformed explanatory variable, whereas for the left tail a z-score transformation works better, and if circuit information can be collected as explanatory variables, the left-tail estimates are the most accurate of all. Screening only the outer 1% of the sample and using the full data set each have advantages and disadvantages for the accuracy of the different methods, and both can be considered. In addition, extreme value theory with a 1% threshold provides stable, moderately accurate estimates of current values at different operating voltages and tends to be most accurate for short-range extrapolation. / To obtain the tail distribution of current beyond 4 to 6 sigma is nowadays a key issue in integrated circuit (IC) design, and computer simulation is a popular tool for estimating the tail values. Since creating rare events via simulation is time consuming, linear extrapolation methods (such as regression analysis) are often applied to enhance efficiency. However, past work has shown that the tail values are likely to behave differently when the operating voltage becomes lower. In this study, a statistical method is introduced to deal with the low-voltage case. The data are evaluated via the Box-Cox (power) transformation to see whether they need to be transformed into normally distributed data, followed by weighted regression to extrapolate the tail values. Specifically, the independent variable is the empirical CDF with a logarithm or z-score transformation, and down-weighting is used to emphasize the information in the extreme observations. In addition to regression analysis, extreme value theory (EVT) is also adopted in the research.
The computer simulation and data sets from a famous IC manufacturer in Hsinchu are used to evaluate the proposed method with respect to mean squared error. In the computer simulation, the data are assumed to be generated from normal, Student's t, or Gamma distributions. For the empirical data, there are 10^8 observations, and tail values with probabilities 10^(-4), 10^(-5), 10^(-6) and 10^(-7) are set as the study goal given that only 10^5 observations are available. Compared to the traditional methods and EVT, the proposed method has the best performance in estimating the tail probabilities. If the IC current is produced from a regression equation and the information on the independent variables can be provided, the weighted regression achieves the best estimation for the left-tailed rare events. EVT can also produce accurate estimates provided that the tail probabilities to be estimated and the observations available are on a similar scale, e.g., probabilities of 10^(-5) to 10^(-7) vs. 10^5 observations.
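The right-tail extrapolation idea can be sketched as follows: Box-Cox transform the sample, regress the upper order statistics on a z-score transform of the empirical CDF with down-weighting toward the bulk, extrapolate to the target tail probability, and invert the transform. This is a minimal sketch under stated assumptions: lognormal data stand in for the simulated currents, and the number of tail points and the weight profile are arbitrary choices, not those used in the thesis.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(5)
sample = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)        # stand-in for simulated current data

y_bc, lam = stats.boxcox(sample)                                 # transform toward normality
y_sorted = np.sort(y_bc)
n = y_sorted.size
p_emp = (np.arange(1, n + 1) - 0.5) / n                          # empirical CDF (plotting positions)

k = 1000                                                         # number of upper-tail points in the fit
z = stats.norm.ppf(p_emp[-k:])                                   # z-score transform of the empirical CDF
w = np.linspace(1.0, 5.0, k)                                     # down-weight the bulk, emphasise extremes
b1, b0 = np.polyfit(z, y_sorted[-k:], 1, w=np.sqrt(w))           # weighted least-squares line

p_target = 1e-6                                                  # right-tail probability of interest
y_target = b0 + b1 * stats.norm.ppf(1 - p_target)
print("extrapolated 1e-6 upper quantile:", inv_boxcox(y_target, lam))
```

Because the logarithm of a lognormal sample is exactly normal, the Box-Cox step here should select a near-log transform, and the extrapolated quantile can be checked against the known lognormal quantile.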
|
978 |
Three Essays on Estimation and Testing of Nonparametric ModelsMa, Guangyi 2012 August 1900 (has links)
In this dissertation, I focus on the development and application of nonparametric methods in econometrics. First, a constrained nonparametric regression method is developed to estimate a function and its derivatives subject to shape restrictions implied by economic theory. The constrained estimators can be viewed as a set of empirical likelihood-based reweighted local polynomial estimators. They are shown to be weakly consistent and to have the same first-order asymptotic distribution as the unconstrained estimators. When the shape restrictions are correctly specified, the constrained estimators can achieve a large degree of finite-sample bias reduction and thus outperform the unconstrained estimators. The constrained nonparametric regression method is applied to the estimation of the daily option pricing function and the state-price density function.
Second, a modified Cumulative Sum of Squares (CUSQ) test is proposed to test for structural changes in the unconditional volatility in a time-varying coefficient model. The proposed test is based on nonparametric residuals from local linear estimation of the time-varying coefficients. Asymptotic theory is provided to show that the new CUSQ test has a standard null distribution and diverges at the standard rate under the alternatives. Compared with a test based on least squares residuals, the new test enjoys correct size and good power properties. This is because, by estimating the model nonparametrically, one can circumvent the size distortion caused by potential structural changes in the mean. Empirical results from both simulation experiments and real-data applications are presented to demonstrate the test's size and power properties.
Third, an empirical study testing the Purchasing Power Parity (PPP) hypothesis is conducted in a functional-coefficient cointegration model, which is consistent with equilibrium models of exchange rate determination in the presence of transactions costs in international trade. Supporting evidence for PPP is found in the recent floating exchange rate era. The cointegration relation between the nominal exchange rate and price levels varies conditional on real exchange rate volatility. The cointegration coefficients are more stable and numerically closer to the value implied by PPP theory when real exchange rate volatility is relatively low.
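For the second essay, the basic CUSQ statistic can be computed as in the sketch below, in the spirit of the Inclan-Tiao cumulative sum of squares test; the essay's modification based on local linear (nonparametric) residuals is not reproduced here, and the residuals are simulated.

```python
import numpy as np

def cusq_statistic(resid):
    """sqrt(n/2) * max_k |C_k - k/n|, where C_k is the normalized cumulative sum of squares."""
    e2 = np.asarray(resid) ** 2
    n = e2.size
    C = np.cumsum(e2) / e2.sum()
    D = C - np.arange(1, n + 1) / n
    return np.sqrt(n / 2.0) * np.max(np.abs(D))

rng = np.random.default_rng(6)
stable = rng.normal(scale=1.0, size=500)                         # constant variance (null)
shifted = np.concatenate([rng.normal(scale=1.0, size=250),       # variance doubles mid-sample
                          rng.normal(scale=2.0, size=250)])
print("no break  :", cusq_statistic(stable))                     # typically below the ~1.36 5% critical value
print("with break:", cusq_statistic(shifted))                    # typically well above 1.36
```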
|
979 |
In silico tools in risk assessment: of industrial chemicals in general and non-dioxin-like PCBs in particular Stenberg, Mia January 2012 (has links)
Industrial chemicals produced in or imported into the European Union in volumes above 1 tonne annually necessitate a registration within REACH. A common problem concerning these chemicals is deficient information and lack of data for assessing the hazards posed to human health and the environment. Animal studies for the type of toxicological information needed are both expensive and time consuming, and an ethical aspect is added to that, so alternative methods to animal testing are requested. REACH has called for an increased use of in silico tools for non-testing data, such as structure-activity relationships (SARs), quantitative structure-activity relationships (QSARs), and read-across. The main objective of the studies underlying this thesis is to explore and refine the use of in silico tools in a risk assessment context for industrial chemicals, in particular to relate properties of the molecular structure to the toxic effect of the chemical substance by using principles and methods of computational chemistry. The initial study was a survey of all industrial chemicals, from which the Industrial chemical map was created, and a part of this map including chemicals of potential concern was identified. Secondly, the environmental pollutants polychlorinated biphenyls (PCBs) were examined, in particular the non-dioxin-like PCBs (NDL-PCBs). A set of 20 NDL-PCBs was selected to represent the 178 PCB congeners with three to seven chlorine substituents. The selection procedure was a combined process including statistical molecular design, for a representative selection, and expert judgement, to include congeners of specific interest. The 20 selected congeners were tested in vitro in as many as 17 different assays. The data from the screening process were turned into interpretable toxicity profiles with multivariate methods and used to investigate potential classes of NDL-PCBs. It was shown that NDL-PCBs cannot be treated as one group of substances with similar mechanisms of action. Two groups of congeners were identified: a group of generally lower-chlorinated congeners with a higher degree of ortho substitution showed higher potency in more assays (including all neurotoxic assays), while a second group included abundant congeners with a similar toxic profile that might contribute to a common toxic burden. To investigate the structure-activity pattern of the PCBs' effect on DAT in rat striatal synaptosomes, ten additional congeners were selected and tested in vitro. NDL-PCBs were shown to be potent inhibitors of DAT binding. The congeners with the highest DAT-inhibiting potency were tetra- and penta-chlorinated with 2-3 chlorine atoms in the ortho position. The model was not able to distinguish the congeners with activities in the lower μM range, which could be explained by a relatively unspecific response for the lower ortho-chlorinated PCBs. / The European chemicals legislation REACH stipulates that chemicals produced or imported in quantities above 1 tonne per year must be registered and risk assessed; an estimated 30,000 chemicals fall under this requirement. The problem, however, is that the data and information available are often insufficient for a risk assessment. Effect data have largely come from animal studies, but animal testing is both costly and time consuming, and an ethical aspect comes into play as well. REACH has therefore asked for an investigation of the possibility of using in silico tools to provide the requested data and information.
In silico roughly means "in the computer" and refers to computational models and methods used to obtain information about the properties and toxicity of chemicals. The aim of the thesis is to explore and refine the use of in silico tools for generating information for the risk assessment of industrial chemicals. The thesis describes quantitative models, developed with chemometric methods, for predicting the toxic effects of specific chemicals. In the first study (I), 56,072 organic industrial chemicals were examined. Multivariate methods were used to create a map of the industrial chemicals describing their chemical and physical properties. The map was used for comparisons with known and potential environmentally hazardous chemicals. The best-known environmental pollutants turned out to have similar principal properties and grouped together in the map, and by studying that part of the map in detail, more potentially hazardous chemical substances could be identified. In studies two to four (II-IV) the environmental pollutant PCB was examined in detail. Twenty PCBs were selected so that they represented, structurally and physicochemically, the 178 PCB congeners with three to seven chlorine substituents. The toxicological effects of these 20 PCBs were investigated in 17 different in vitro assays. The toxicological profiles of the 20 tested congeners were established, that is, which congeners have similar adverse effects and which differ, and these profiles were used for classification of the PCBs. Quantitative models were developed for prediction, that is, to predict the effects of not yet tested PCBs, and to gain further knowledge about the structural properties that give rise to undesirable effects in humans and the environment; this is information that can be used in a future risk assessment of non-dioxin-like PCBs. The final study (IV) is a structure-activity study investigating the inhibitory effect of the non-dioxin-like PCBs on the neurotransmitter dopamine in the brain.
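The "chemical map" idea, placing chemicals in a low-dimensional property space via principal component analysis of molecular descriptors, can be sketched as below. The descriptor matrix and the cluster structure are synthetic and invented purely for illustration; they only mimic the situation where known pollutants group together in the map.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
descriptors = np.vstack([rng.normal(loc=0.0, scale=1.0, size=(500, 8)),   # "ordinary" chemicals
                         rng.normal(loc=2.5, scale=0.5, size=(20, 8))])   # a pollutant-like cluster

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(descriptors))
print("PC score ranges:", scores.min(axis=0), scores.max(axis=0))
print("pollutant-like cluster centroid in the map:", scores[-20:].mean(axis=0))
```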
|
980 |
Disturbance monitoring in distributed power systems Glickman, Mark January 2007 (has links)
Power system generators are interconnected in a distributed network to allow sharing of power. If one of the generators cannot meet the power demand, spare power is diverted from neighbouring generators. However, this approach also allows for propagation of electric disturbances. An oscillation arising from a disturbance at a given generator site will affect the normal operation of neighbouring generators and might cause them to fail. Hours of production time will be lost in the time it takes to restart the power plant. If the disturbance is detected early, appropriate control measures can be applied to ensure system stability. The aim of this study is to improve existing algorithms that estimate the oscillation parameters from acquired generator data to detect potentially dangerous power system disturbances. When disturbances occur in power systems (due to load changes or faults), damped oscillations (or "modes") are created. Modes which are heavily damped die out quickly and pose no threat to system stability. Lightly damped modes, by contrast, die out slowly and are more problematic. Of more concern still are "negatively damped" modes which grow exponentially with time and can ultimately cause the power system to fail. Widespread blackouts are then possible. To avert power system failures it is necessary to monitor the damping of the oscillating modes. This thesis proposes a number of damping estimation algorithms for this task. If the damping is found to be very small or even negative, then additional damping needs to be introduced via appropriate control strategies. This thesis presents a number of new algorithms for estimating the damping of modal oscillations in power systems. The first of these algorithms uses multiple orthogonal sliding windows along with least-squares techniques to estimate the modal damping. This algorithm produces results which are superior to those of earlier sliding window algorithms (that use only one pair of sliding windows to estimate the damping). The second algorithm uses a different modification of the standard sliding window damping estimation algorithm - the algorithm exploits the fact that the Signal to Noise Ratio (SNR) within the Fourier transform of practical power system signals is typically constant across a wide frequency range. Accordingly, damping estimates are obtained at a range of frequencies and then averaged. The third algorithm applied to power system analysis is based on optimal estimation theory. It is computationally efficient and gives optimal accuracy, at least for modes which are well separated in frequency.
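The basic sliding-window damping estimate that the thesis builds on can be sketched as follows: for a mode with damping sigma, the Fourier magnitude at the modal frequency computed over two windows offset in time decays by exp(-sigma * offset), so the log-ratio gives a damping estimate. The signal, window length and offset below are invented for illustration, and the multi-window least-squares and SNR-averaging refinements proposed in the thesis are not shown.

```python
import numpy as np

fs, sigma, f0 = 100.0, 0.3, 1.6                        # sample rate (Hz), damping (1/s), mode frequency (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(8)
x = np.exp(-sigma * t) * np.cos(2 * np.pi * f0 * t) + 0.01 * rng.normal(size=t.size)

win, offset = 500, 300                                 # window length and window offset, in samples
k = int(round(f0 * win / fs))                          # DFT bin closest to the modal frequency
X1 = np.abs(np.fft.rfft(x[:win])[k])
X2 = np.abs(np.fft.rfft(x[offset:offset + win])[k])
sigma_hat = np.log(X1 / X2) / (offset / fs)            # log-ratio of window magnitudes gives the damping
print("true damping:", sigma, " estimated:", sigma_hat)
```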
|