1221

Origins of genetic variation and population structure of foxsnakes across spatial and temporal scales

ROW, JEFFREY 11 January 2011 (has links)
Understanding the events and processes responsible for patterns of within-species diversity provides insight into major evolutionary themes such as adaptation, species distributions, and ultimately speciation itself. Here, I combine ecological, genetic and spatial perspectives to evaluate the roles that both historical and contemporary factors have played in shaping the population structure and genetic variation of foxsnakes (Pantherophis gloydi). First, I determine the likely impact of habitat loss on population distribution through radio-telemetry (32 individuals) at two locations varying in habitat patch size. As predicted, individuals had similar habitat use patterns but restricted their movements to patches of suitable habitat at the more disturbed site. Occurrence records spread across a fragmented region were also non-randomly distributed and located close to patches of usable habitat, suggesting that habitat distribution limits population distribution. Next, I combined habitat suitability modeling with population genetics (589 individuals, 12 microsatellite loci) to infer how foxsnakes disperse through a mosaic of natural and altered landscape features. Boundary regions between genetic clusters consisted of low-suitability habitat (e.g. agricultural fields). Island populations were grouped into a single genetic cluster, suggesting that open water presents less of a barrier than non-suitable terrestrial habitat. Isolation-by-distance models had a stronger correlation with genetic data when they included resistance values derived from habitat suitability maps, suggesting that habitat degradation limits dispersal for foxsnakes. At larger temporal and spatial scales, I quantified patterns of genetic diversity and population structure using mitochondrial (101 cytochrome b sequences) and microsatellite (816 individuals, 12 loci) DNA and used approximate Bayesian computation to test competing models of demographic history. Supporting my predictions, models in which populations underwent size declines and splitting events consistently had more support than models in which small founding populations expanded to stable populations. Based on the timing, the most likely cause was the cooling of temperatures and infilling of deciduous forest since the Hypsithermal. On a smaller scale, evidence suggested that anthropogenic habitat loss has caused further decline and fragmentation. Mitochondrial DNA structure did not correspond to fragmented populations, and the majority of foxsnakes had an identical haplotype, suggesting a past bottleneck or selective sweep. / Thesis (Ph.D, Biology) -- Queen's University, 2011-01-11 10:40:52.476
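The demographic model comparison mentioned above rests on approximate Bayesian computation (ABC), in which competing scenarios are simulated many times and ranked by how often their summary statistics fall within a tolerance of the observed data. A minimal rejection-ABC sketch of that idea is below; the toy simulators, the summary statistic and the tolerance are illustrative stand-ins, not the coalescent models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_decline(n=100):
    """Toy stand-in for a 'large population that later declined' scenario."""
    return rng.normal(loc=5.0, scale=1.0, size=n).mean()

def simulate_founder_expansion(n=100):
    """Toy stand-in for a 'small founding population that expanded' scenario."""
    return rng.normal(loc=3.0, scale=2.0, size=n).mean()

observed_stat = 4.8          # e.g. an observed mean diversity statistic (illustrative)
tolerance = 0.2
models = {"decline": simulate_decline, "founder_expansion": simulate_founder_expansion}

# Rejection ABC for model choice: the posterior model probability is approximated
# by each model's share of the accepted simulations (those within the tolerance).
accepted = {name: 0 for name in models}
for _ in range(5000):
    name = rng.choice(list(models))
    if abs(models[name]() - observed_stat) < tolerance:
        accepted[name] += 1

total = sum(accepted.values())
for name, count in accepted.items():
    print(name, round(count / total, 3))
```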
1222

Regularization in reinforcement learning

Farahmand, Amir-massoud Unknown Date
No description available.
1223

Liberalisation of trade in services: enhancing the temporary movement of natural persons (Mode 4), a least developed countries' perspective

Edna Katushabe Mubiru January 2009 (has links)
The purpose of this research is to examine the impact of liberalisation of trade in services on African LDCs by highlighting the importance of services trade through Mode 4 (temporary movement of natural persons). The paper will examine the nature of liberalisation of this Mode under the existing GATS framework, critically analyse the constraints on engaging in negotiations, specifically the national barriers that are hindering this movement, and make suggestions on ways of improving the nature of commitments on the movement of natural persons under Mode 4 to favour LDCs as laid down in Article VI of the GATS.
1224

Algorithmes d'approximation parcimonieuse inspirés d'Orthogonal Least Squares pour les problèmes inverses / Sparse approximation algorithms inspired by Orthogonal Least Squares for inverse problems

Soussen, Charles 28 November 2013 (has links) (PDF)
This manuscript summarizes my research activity at CRAN between 2005 and 2013. The projects carried out fall within the fields of inverse problems in signal and image processing, sparse approximation, hyperspectral image analysis, and 3D image reconstruction. I detail in particular the work concerning the design, analysis and use of sparse approximation algorithms for inverse problems characterized by an ill-conditioned dictionary. In a first chapter, I present the heuristic algorithms designed to minimize mixed L2-L0 criteria. These are "bidirectional" greedy algorithms defined as extensions of the Orthogonal Least Squares (OLS) algorithm. Their development is motivated by the good empirical behaviour of OLS and its derived versions when the dictionary is an ill-conditioned matrix. The second chapter is an applied part in atomic force microscopy, where the algorithms of the first chapter are used with a particular dictionary in order to segment signals automatically. This segmentation ultimately provides a 2D map of various electrostatic and bio-mechanical parameters. The third chapter is a theoretical part aimed at analysing the greedy algorithms OMP (Orthogonal Matching Pursuit) and OLS. A first analysis of exact recovery by OLS in k iterations is proposed. In addition, an in-depth comparison of the exact recovery conditions once a certain number of iterations have already been performed sheds light on the better behaviour of OLS (compared with OMP) for ill-conditioned problems. In a fourth chapter, I outline some methodological and applied perspectives in the field of sparse analysis related to the previous chapters.
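The greedy algorithms discussed in the first chapter extend Orthogonal Least Squares (OLS), which at each iteration adds the dictionary atom that most reduces the residual after a full least-squares refit on the enlarged support. A minimal numpy sketch of plain forward OLS on an ill-conditioned Gaussian-atom dictionary is given below; the function and variable names are mine, not the manuscript's.

```python
import numpy as np

def ols_sparse_approx(H, y, k):
    """Forward Orthogonal Least Squares: greedily pick k columns of the
    dictionary H, refitting the least-squares coefficients at every step."""
    n, m = H.shape
    support = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(m):
            if j in support:
                continue
            cols = support + [j]
            coef, _, _, _ = np.linalg.lstsq(H[:, cols], y, rcond=None)  # full refit
            err = np.linalg.norm(y - H[:, cols] @ coef)
            if err < best_err:
                best_j, best_err = j, err
        support.append(best_j)
    coef, _, _, _ = np.linalg.lstsq(H[:, support], y, rcond=None)
    return support, coef

# Ill-conditioned dictionary: strongly overlapping Gaussian atoms
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
centers = np.linspace(0.0, 1.0, 50)
H = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.1) ** 2)
x_true = np.zeros(50)
x_true[[10, 25, 40]] = [1.0, -0.7, 0.5]
y = H @ x_true + 0.01 * rng.standard_normal(100)

support, coef = ols_sparse_approx(H, y, k=3)
print(support, np.round(coef, 2))
```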
1225

General Adaptive Penalized Least Squares 模型選取方法之模擬與其他方法之比較 / The Simulation of Model Selection Method for General Adaptive Penalized Least Squares and Comparison with Other Methods

陳柏錞 Unknown Date (has links)
In regression analysis, when the relationship between variables is nonlinear, B-spline regression provides a nonparametric way to build the model. A B-spline function is a piecewise polynomial with knots, and choosing appropriate knot locations has an important influence on the B-spline estimate. While seeking a good B-spline estimator, we also want to achieve the desired performance with only a few knots, so Huang (2013) proposed a knot selection method, APLS (adaptive penalized least squares). In this thesis we apply this method under some more general settings, assess whether these settings give better estimation, and compare the modified method with a knot selection approach based on BIC (Bayesian information criterion). We call the generalized APLS method GAPLS. The simulation results show that both B-spline-based approximation methods perform well, differing only slightly in the number of knots selected; therefore, if there is a strict requirement to select fewer knots, we recommend the BIC-based knot selection approach, while GAPLS is otherwise also a good choice. / In regression analysis, if the relationship between the response variable and the explanatory variables is nonlinear, B-splines can be used to model the nonlinear relationship. Knot selection is crucial in B-spline regression. Huang (2013) proposes a method for adaptive estimation, where knots are selected based on penalized least squares. This method is abbreviated as APLS (adaptive penalized least squares) in this thesis. In this thesis, a more general version of APLS is proposed, which is abbreviated as GAPLS (generalized APLS). Simulation studies are carried out to compare the estimation performance of GAPLS and of a knot selection method based on BIC (Bayesian information criterion). The simulation results show that both methods perform well and that fewer knots are selected using the BIC approach than using GAPLS.
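For orientation, the BIC-based knot selection that the simulations benchmark against can be sketched as follows: fit cubic B-spline regressions with increasing numbers of equally spaced interior knots and keep the fit with the smallest BIC. The equally spaced knots and synthetic data are assumptions of this sketch, not the APLS/GAPLS procedure itself.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(4 * np.pi * x) + 0.3 * rng.standard_normal(x.size)

def bic_spline(x, y, n_knots, degree=3):
    """Cubic B-spline least-squares fit with equally spaced interior knots,
    scored by BIC (n_knots + degree + 1 coefficients)."""
    knots = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]   # interior knots only
    spl = LSQUnivariateSpline(x, y, knots, k=degree)
    rss = float(np.sum((y - spl(x)) ** 2))
    n, p = x.size, n_knots + degree + 1
    bic = n * np.log(rss / n) + p * np.log(n)
    return bic, spl

fits = {m: bic_spline(x, y, m) for m in range(1, 21)}
best_m = min(fits, key=lambda m: fits[m][0])
print("number of interior knots chosen by BIC:", best_m)
```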
1226

Lasso顯著性檢定與向前逐步迴歸變數選取方法之比較 / A Comparison between Lasso Significance Test and Forward Stepwise Selection Method

鄒昀庭, Tsou, Yun Ting Unknown Date (has links)
Variable selection for regression models is an important topic. In 1996 Tibshirani proposed the Least Absolute Shrinkage and Selection Operator (Lasso), whose main feature is that variable selection is completed automatically during estimation. Because Lasso itself does not involve statistical inference, the Lasso significance test proposed by Lockhart et al. in 2014 was an important breakthrough. Since the construction of the Lasso significance test is similar to traditional forward stepwise regression, this study continues the comparison of the two variable selection methods in Lockhart et al. (2014) and proposes improving traditional forward stepwise regression with the bootstrap. Finally, the variable selection performance of five methods is compared: Lasso, the Lasso significance test, traditional forward stepwise regression, forward stepwise regression with variable sets chosen by AIC, and forward stepwise regression improved by the bootstrap. We find that although the Lasso significance test rarely commits Type I errors, it is overly conservative in selecting variables; forward stepwise regression improved by the bootstrap is similarly unlikely to commit Type I errors yet is bolder in selecting variables, and can therefore be regarded as an ideal improvement of the method. / Variable selection of a regression model is an essential topic. In 1996, Tibshirani proposed a method called Lasso (Least Absolute Shrinkage and Selection Operator), which completes the matter of selecting the variable set while estimating the parameters. However, the original version of Lasso does not provide a way for making inference. Therefore, the significance test for Lasso proposed by Lockhart et al. in 2014 is an important breakthrough. Based on the similarity of the construction of statistics between the Lasso significance test and the forward selection method, and continuing the comparisons between the two methods in Lockhart et al. (2014), we propose an improved version of the forward selection method using the bootstrap. In the second half of our research, we compare the variable selection results of Lasso, the Lasso significance test, forward selection, forward selection by AIC, and forward selection by bootstrap. We find that although the Type I error probability for the Lasso significance test is small, the testing method is too conservative in including new variables. On the other hand, the Type I error probability for forward selection by bootstrap is also small, yet it is more aggressive in including new variables. Therefore, based on our simulation results, forward selection improved by the bootstrap is a rather ideal variable selection method.
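One way to read the bootstrap-improved forward selection compared above is as a stability rule: a candidate variable enters only if it improves a penalized fit criterion in most bootstrap resamples. The sketch below is a hypothetical rendering of that idea (BIC as the criterion, a 90% stability threshold), not the thesis's exact procedure.

```python
import numpy as np

def bic(X, y, cols):
    """BIC of an OLS fit (with intercept) on the chosen columns."""
    n = len(y)
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    return n * np.log(rss / n) + A.shape[1] * np.log(n)

def forward_stepwise_bootstrap(X, y, n_boot=200, keep_frac=0.9, rng=None):
    """Forward selection in which a candidate enters only if it lowers the BIC
    in at least `keep_frac` of bootstrap resamples (a hypothetical criterion)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, p = X.shape
    selected = []
    while len(selected) < p:
        # best candidate on the full data
        cand = min((j for j in range(p) if j not in selected),
                   key=lambda j: bic(X, y, selected + [j]))
        wins = 0
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            if bic(X[idx], y[idx], selected + [cand]) < bic(X[idx], y[idx], selected):
                wins += 1
        if wins / n_boot < keep_frac:
            break                      # candidate not stable enough; stop selecting
        selected.append(cand)
    return selected

rng = np.random.default_rng(2)
X = rng.standard_normal((150, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(150)
print(forward_stepwise_bootstrap(X, y, rng=rng))   # expected to keep columns 0 and 3
```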
1227

Forecasting Mid-Term Electricity Market Clearing Price Using Support Vector Machines

2014 May 1900 (has links)
In a deregulated electricity market, offering the appropriate amount of electricity at the right time with the right bidding price is of paramount importance. Forecasting the electricity market clearing price (MCP) is a prediction of the future electricity price based on given forecasts of electricity demand, temperature, sunshine, fuel cost, precipitation and other related factors. Currently, many techniques are available for short-term electricity MCP forecasting, but very little has been done in the area of mid-term electricity MCP forecasting, which covers a time frame of one to six months. Developing mid-term electricity MCP forecasting is essential for mid-term planning and decision making, such as generation plant expansion and maintenance schedules, reallocation of resources, bilateral contracts and hedging strategies. Six mid-term electricity MCP forecasting models are proposed and compared in this thesis: 1) a single support vector machine (SVM) forecasting model, 2) a single least squares support vector machine (LSSVM) forecasting model, 3) a hybrid SVM and auto-regressive moving average with external input (ARMAX) forecasting model, 4) a hybrid LSSVM and ARMAX forecasting model, 5) a multiple SVM forecasting model and 6) a multiple LSSVM forecasting model. PJM interconnection data are used to test the proposed models. A cross-validation technique was used to optimize the control parameters and the selection of training data for the six proposed mid-term electricity MCP forecasting models. Three evaluation metrics, mean absolute error (MAE), mean absolute percentage error (MAPE) and mean square root error (MSRE), are used to analyze the forecasting accuracy. According to the experimental results, the multiple SVM forecasting model performed the best among all six proposed forecasting models. The proposed multiple-SVM-based mid-term electricity MCP forecasting model contains a data classification module and a price forecasting module. The data classification module first pre-processes the input data into corresponding price zones, and the forecasting module then forecasts the electricity price with four SVMs designed in parallel. Compared with the other five forecasting models proposed in this thesis, this model best improves the forecasting accuracy for both peak prices and the overall system.
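The multiple-SVM structure described above — a data classification module that assigns each input to a price zone, followed by zone-specific regressors — can be sketched with scikit-learn as follows. The three input features, the quantile-based zones and the synthetic prices are illustrative assumptions, not the PJM data or the parallel four-SVM design used in the thesis.

```python
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 600
# Illustrative drivers: demand, temperature, fuel cost (stand-ins for the thesis inputs)
X = rng.uniform([20_000, -10.0, 2.0], [45_000, 35.0, 6.0], size=(n, 3))
price = 0.002 * X[:, 0] + 0.8 * np.maximum(X[:, 1] - 25.0, 0.0) ** 2 \
        + 5.0 * X[:, 2] + rng.normal(0.0, 3.0, n)

# Data classification module: label each sample with a quantile-based price zone
zones = np.digitize(price, np.quantile(price, [0.25, 0.5, 0.75]))
classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
classifier.fit(X, zones)

# Price forecasting module: one SVR per zone, trained on that zone's samples
regressors = {}
for z in np.unique(zones):
    mask = zones == z
    regressors[z] = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
    regressors[z].fit(X[mask], price[mask])

def forecast(x_new):
    """Route each new sample to the SVR of its predicted price zone."""
    z_hat = classifier.predict(x_new)
    return np.array([regressors[z].predict(x.reshape(1, -1))[0]
                     for z, x in zip(z_hat, x_new)])

X_test = rng.uniform([20_000, -10.0, 2.0], [45_000, 35.0, 6.0], size=(5, 3))
print(np.round(forecast(X_test), 2))
```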
1228

Multivariate data analysis using spectroscopic data of fluorocarbon alcohol mixtures / Nothnagel, C.

Nothnagel, Carien January 2012 (has links)
Pelchem, a commercial subsidiary of Necsa (South African Nuclear Energy Corporation), produces a range of commercial fluorocarbon products while driving research and development initiatives to support the fluorine product portfolio. One such initiative is to develop improved analytical techniques to analyse product composition during development and to quality-assure products. Generally the C–F type products produced by Necsa are in a solution of anhydrous HF and cannot be directly analysed with traditional techniques without derivatisation. A technique such as vibrational spectroscopy, which can analyse these products directly without further preparation, has a distinct advantage. However, spectra of mixtures of similar compounds are complex and not suitable for traditional quantitative regression analysis. Multivariate data analysis (MVA) can be used in such instances to exploit the complex nature of spectra to extract quantitative information on the composition of mixtures. A selection of fluorocarbon alcohols was made to act as representatives for fluorocarbon compounds. Experimental design theory was used to create a calibration range of mixtures of these compounds. Raman and infrared (NIR and ATR–IR) spectroscopy were used to generate spectral data of the mixtures, and these data were analysed with MVA techniques through the construction of regression and prediction models. Selected samples from the mixture range were chosen to test the predictive ability of the models. The regression models (PCR, PLS2 and PLS1) gave good model fits (R2 values larger than 0.9). Raman spectroscopy was the most efficient technique and gave high prediction accuracy (at 10% accepted standard deviation), provided the minimum mass of a component exceeded 16% of the total sample. The infrared techniques also performed well in terms of fit and prediction. The NIR spectra were subject to signal saturation as a result of using long-path-length sample cells, which was shown to be the main reason for the loss in efficiency of this technique compared to Raman and ATR–IR spectroscopy. It was shown that multivariate data analysis of spectroscopic data of the selected fluorocarbon compounds could be used to quantitatively analyse mixtures, with the possibility of further optimisation of the method. The study was a representative study indicating that the combination of MVA and spectroscopy can be used successfully in the quantitative analysis of other fluorocarbon compound mixtures. / Thesis (M.Sc. (Chemistry))--North-West University, Potchefstroom Campus, 2012.
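A hedged sketch of the kind of multivariate calibration described above — PLS regression mapping spectra to mixture composition — is given below using scikit-learn; the simulated Gaussian bands and three-component mixtures are placeholders for the actual Raman and infrared spectra.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
wavenumbers = np.linspace(500.0, 3500.0, 400)

def band(center, width):
    """A single Gaussian 'band' on the wavenumber axis."""
    return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

# Pure-component "spectra" for a three-component mixture (illustrative only)
pure = np.vstack([band(900, 40) + 0.6 * band(2900, 60),
                  band(1200, 50) + 0.4 * band(3100, 80),
                  band(1600, 45) + 0.5 * band(2700, 70)])

n = 120
fractions = rng.dirichlet(np.ones(3), size=n)      # mixture compositions sum to 1
spectra = fractions @ pure + 0.01 * rng.standard_normal((n, wavenumbers.size))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, fractions, random_state=0)
pls = PLSRegression(n_components=3)                 # PLS2: all fractions at once
pls.fit(X_tr, y_tr)
print("R^2 on held-out mixtures:", round(r2_score(y_te, pls.predict(X_te)), 3))
```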
1230

以基因演算法優化最小二乘支持向量機於坐標轉換之研究 / Coordinate Transformation Using Genetic Algorithm Based Least Square Support Vector Machine

黃鈞義 Unknown Date (has links)
Because they are based on different geodetic datums, two coordinate systems currently coexist in the Taiwan region: TWD67 (Taiwan Datum 1967) and TWD97 (Taiwan Datum 1997). In practice, coordinates must be transformed between the two datums. Many transformation methods are available, such as the six-parameter transformation and support vector machine (SVM) transformation. The least squares support vector machine (LSSVM) is one SVM algorithm and is a nonlinear model. LSSVM requires few parameters and can handle small samples, nonlinearity, high dimensionality and local minima, and it has already been applied successfully to fields such as image classification and statistical regression. This study uses LSSVM with different kernel functions — linear (LIN), polynomial (POLY) and radial basis function (RBF) — to transform coordinates between TWD97 and TWD67. A genetic algorithm is used to tune the system parameters of the RBF kernel (hereafter RBF+GA) and to find a better parameter combination for the transformation. Simulated and real cadastral data are used to test the transformation accuracy of LSSVM and of the six-parameter transformation. The results show that the transformation accuracy of RBF+GA in every test area is better than that of RBF before parameter optimization, and also higher than that of the six-parameter transformation. After parameter optimization, the accuracy improvement of RBF+GA relative to RBF is as follows: (1) simulated test area: with reference-to-check-point ratios of 1:1, 2:1, 3:1, 1:2 and 1:3, the improvement rates are 15.2%, 21.9%, 33.2%, 12.0% and 11.7%, respectively; (2) real test areas: the improvement rates for the Hualien, Taichung and Taipei test areas are 20.1%, 32.4% and 22.5%, respectively. / There are two coordinate systems with different geodetic datums in the Taiwan region, i.e., TWD67 (Taiwan Datum 1967) and TWD97 (Taiwan Datum 1997). In order to maintain the consistency of cadastral coordinates, it is necessary to transform from one coordinate system to another. There are many coordinate transformation methods, such as the 2-dimensional 6-parameter transformation and the support vector machine (SVM). The least squares support vector machine (LSSVM) is one type of SVM algorithm and is also a non-linear model. LSSVM needs only a few parameters to solve non-linear, high-dimensional problems, and it has been successfully applied to the fields of image classification and statistical regression. The goal of this paper is to apply LSSVM with different kernel functions (POLY, LIN, RBF) to cadastral coordinate transformation between TWD67 and TWD97. A genetic algorithm is used to find an appropriate set of system parameters for LSSVM with the RBF kernel to transform the cadastral coordinates. Simulated and real data sets are used to test the performance and coordinate transformation accuracies of LSSVM with different kernel functions and of the 6-parameter transformation. According to the test results, after optimizing the RBF parameters (RBF+GA), the transformation accuracies of RBF+GA are better than those of RBF, and even better than those of the 6-parameter transformation. Compared with the transformation accuracies using RBF, the accuracy improvement rates of RBF+GA are: (1) the simulated data sets: when the ratio of reference points to check points is 1:1, 2:1, 3:1, 1:2 and 1:3, the accuracy improvement rates are 15.2%, 21.9%, 33.2%, 12.0% and 11.7%, respectively; (2) the real data sets: the accuracy improvement rates of RBF+GA for the Hualien, Taichung and Taipei data sets are 20.1%, 32.4% and 22.5%, respectively.
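As a rough illustration of the RBF+GA pipeline described above, the sketch below solves the standard LSSVM dual linear system for regression and uses a tiny genetic algorithm to search over the regularization parameter and RBF width. The fitness function, population settings and synthetic planar coordinates are my own assumptions, not the study's cadastral data or GA configuration.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """LSSVM regression: solve the dual system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha + b

def ga_tune(X, y, Xv, yv, n_gen=20, pop_size=20, rng=None):
    """Toy genetic algorithm over (log10 gamma, log10 sigma); fitness is hold-out RMSE."""
    if rng is None:
        rng = np.random.default_rng(5)
    pop = rng.uniform([-1.0, -2.0], [4.0, 1.0], size=(pop_size, 2))
    def rmse(p):
        predict = lssvm_fit(X, y, 10.0 ** p[0], 10.0 ** p[1])
        return float(np.sqrt(np.mean((predict(Xv) - yv) ** 2)))
    for _ in range(n_gen):
        scores = np.array([rmse(p) for p in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]                # truncation selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + 0.1 * rng.standard_normal(children.shape)   # mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmin([rmse(p) for p in pop])]
    return 10.0 ** best[0], 10.0 ** best[1]

# Synthetic, normalized planar coordinates; the target is one shifted axis (illustrative only)
rng = np.random.default_rng(6)
X = rng.uniform(0.0, 1.0, size=(80, 2))
y = 200.0 + 50.0 * X[:, 0] + 10.0 * np.sin(3.0 * X[:, 1]) + rng.normal(0.0, 0.05, 80)
gamma, sigma = ga_tune(X[:60], y[:60], X[60:], y[60:])
model = lssvm_fit(X[:60], y[:60], gamma, sigma)
print("tuned gamma, sigma:", round(gamma, 3), round(sigma, 3))
print("prediction for one check point:", round(float(model(X[60:61])[0]), 3))
```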
