91

Variable Selection and Function Estimation Using Penalized Methods

Xu, Ganggang December 2011 (has links)
Penalized methods are becoming more and more popular in statistical research. This dissertation covers two major applications of penalized methods: variable selection and nonparametric function estimation. The following two paragraphs give brief introductions to the two topics.

Infinite variance autoregressive models are important for modeling heavy-tailed time series. We use a penalty method to conduct model selection for autoregressive models with innovations in the domain of attraction of a stable law with index α ∈ (0, 2). We show that by combining the least absolute deviation loss function with the adaptive lasso penalty, we can consistently identify the true model. At the same time, the resulting coefficient estimator converges at a rate of n^(−1/α). The proposed approach gives a unified variable selection procedure for both finite and infinite variance autoregressive models.

While automatic smoothing parameter selection for nonparametric function estimation has been extensively researched for independent data, much less is known for clustered and longitudinal data. Although leave-subject-out cross-validation (CV) has been widely used, its theoretical properties are unknown and its minimization is computationally expensive, especially when there are multiple smoothing parameters. Focusing on penalized modeling methods, we show that leave-subject-out CV is optimal in the sense that its minimization is asymptotically equivalent to minimization of the true loss function. We develop an efficient Newton-type algorithm to compute the smoothing parameters that minimize the CV criterion. Furthermore, we derive a simplification of the leave-subject-out CV that leads to a more efficient algorithm for selecting the smoothing parameters; the simplified CV criterion is asymptotically equivalent to the original and thus enjoys the same optimality property. The CV criterion also provides a completely data-driven approach to selecting the working covariance structure in generalized estimating equations for longitudinal data analysis. Our results apply to additive models, linear varying-coefficient models, and nonlinear models with data from exponential families.
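To make the leave-subject-out idea concrete for the second topic, here is a minimal sketch: with clustered data, all observations from one subject are held out together when scoring a candidate smoothing parameter. The spline basis, ridge-type penalty, and toy data are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def spline_basis(x, knots):
    """Truncated-power cubic spline basis: 1, x, x^2, x^3, (x - k)_+^3 for each knot."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def fit_penalized(B, y, lam, penalty):
    """Ridge-type penalized least squares: solve (B'B + lam * D) beta = B'y."""
    return np.linalg.solve(B.T @ B + lam * penalty, B.T @ y)

def leave_subject_out_cv(x, y, subject, lams, knots):
    """Return the smoothing parameter minimizing error when whole subjects are held out."""
    penalty = np.diag([0.0] * 4 + [1.0] * len(knots))  # penalize only the truncated terms
    scores = []
    for lam in lams:
        sse = 0.0
        for s in np.unique(subject):
            train, test = subject != s, subject == s
            beta = fit_penalized(spline_basis(x[train], knots), y[train], lam, penalty)
            sse += np.sum((y[test] - spline_basis(x[test], knots) @ beta) ** 2)
        scores.append(sse / len(y))
    return lams[int(np.argmin(scores))]

# Toy clustered data: 20 subjects, 10 observations each, with subject-specific shifts.
rng = np.random.default_rng(0)
subject = np.repeat(np.arange(20), 10)
x = rng.uniform(0, 1, size=200)
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=20)[subject] + rng.normal(0, 0.3, size=200)
knots = np.linspace(0.1, 0.9, 8)
best_lam = leave_subject_out_cv(x, y, subject, np.logspace(-4, 2, 13), knots)
```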
92

Bootstrap bandwidth selection in kernel hazard rate estimation / S. Jansen van Vuuren

Van Vuuren, Stefan Jansen January 2011 (has links)
The purpose of this study is to thoroughly discuss kernel hazard function estimation, both in the complete sample case as well as in the presence of random right censoring. Most of the focus is on the very important task of automatic bandwidth selection. Two existing selectors, least-squares cross-validation as described by Patil (1993a) and Patil (1993b), as well as the bootstrap bandwidth selector of Gonzalez-Manteiga, Cao and Marron (1996) will be discussed. The bandwidth selector of Hall and Robinson (2009), which uses bootstrap aggregation (or 'bagging'), will be extended to and evaluated in the setting of kernel hazard rate estimation. We will also make a simple proposal for a bootstrap bandwidth selector. The performance of these bandwidth selectors will be compared empirically in a simulation study. The findings and conclusions of this study are reported. / Thesis (M.Sc. (Statistics))--North-West University, Potchefstroom Campus, 2011.
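A minimal sketch (assumptions throughout, not the thesis code) of a bootstrap bandwidth selector for a kernel hazard rate estimator in the complete-sample case: the estimate at a pilot bandwidth plays the role of the truth inside the bootstrap, and the bandwidth minimizing the bootstrap estimate of integrated squared error is selected.

```python
import numpy as np

def kernel_hazard(t_grid, times, h):
    """Nearest-rank (Watson-Leadbetter type) estimator: sum_i K_h(t - T_(i)) / (n - i + 1)."""
    times = np.sort(times)
    n = len(times)
    weights = 1.0 / (n - np.arange(n))            # 1/(n - i + 1) for the ordered times
    u = (t_grid[:, None] - times[None, :]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    return (K * weights).sum(axis=1) / h

def bootstrap_bandwidth(times, h_grid, h_pilot, t_grid, n_boot=200, seed=0):
    """Pick h minimizing the bootstrap estimate of integrated squared error."""
    rng = np.random.default_rng(seed)
    target = kernel_hazard(t_grid, times, h_pilot)  # pilot estimate acts as the "truth"
    mise = np.zeros(len(h_grid))
    for _ in range(n_boot):
        resample = rng.choice(times, size=len(times), replace=True)
        for j, h in enumerate(h_grid):
            diff = kernel_hazard(t_grid, resample, h) - target
            mise[j] += np.trapz(diff**2, t_grid)
    return h_grid[int(np.argmin(mise / n_boot))]

# Toy example: exponential lifetimes, whose true hazard is constant.
rng = np.random.default_rng(1)
times = rng.exponential(scale=2.0, size=300)
t_grid = np.linspace(0.1, 4.0, 80)
h_star = bootstrap_bandwidth(times, np.linspace(0.1, 1.5, 15), h_pilot=0.8, t_grid=t_grid)
```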
94

Progressive Validity Metamodel Trust Region Optimization

Thomson, Quinn Parker 26 February 2009 (has links)
The goal of this work was to develop metamodels for the MDO framework piMDO and to contribute new research in metamodeling strategies. The theory of existing metamodels is presented and implementation details are given. A new trust region scheme --- metamodel trust region optimization (MTRO) --- was developed. This method uses a progressively increasing level of minimum metamodel validity in order to reduce the number of sample points required for the optimization. Higher levels of validity require denser point distributions, but the shrinking size of the region during the optimization mitigates the increase in the number of points required. New metamodeling strategies include inherited optimal Latin hypercube sampling, hybrid Latin hypercube sampling, and kriging with BFGS. MTRO performs better than traditional trust region methods for single-discipline problems and is competitive with other MDO architectures when used with a CSSO algorithm. Advanced metamodeling methods proved to be inefficient within trust region methods.
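A minimal sketch, under stated assumptions, of a surrogate-based trust region loop in the spirit of MTRO: sample inside the current region, fit a cheap metamodel, step to its minimizer, and resize the region by comparing predicted and actual improvement. The quadratic surrogate, sampling rule, and thresholds are illustrative, not piMDO's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def true_objective(x):                     # stand-in for the expensive analysis
    return np.sum(x**2) + 0.3 * np.sin(5 * x[0])

def fit_quadratic(X, y):
    """Least-squares quadratic surrogate y ~ c + b'x + a'(x^2), no cross terms."""
    A = np.column_stack([np.ones(len(X)), X, X**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    d = X.shape[1]
    return lambda x: coef[0] + coef[1:1 + d] @ x + coef[1 + d:] @ x**2

def surrogate_trust_region(x0, radius=1.0, iters=30, n_samples=12, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = true_objective(x)
    for _ in range(iters):
        X = x + rng.uniform(-radius, radius, size=(n_samples, len(x)))  # samples in the region
        model = fit_quadratic(X, np.array([true_objective(p) for p in X]))
        bounds = [(xi - radius, xi + radius) for xi in x]
        cand = minimize(model, x, method="L-BFGS-B", bounds=bounds).x
        pred_drop = fx - model(cand)
        actual_drop = fx - true_objective(cand)
        rho = actual_drop / pred_drop if pred_drop > 0 else -1.0
        if rho > 0.1:                       # accept the step
            x, fx = cand, true_objective(cand)
        radius *= 1.5 if rho > 0.75 else (0.5 if rho < 0.25 else 1.0)  # resize the region
    return x, fx

x_opt, f_opt = surrogate_trust_region(np.array([2.0, -1.5]))
```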
96

知識創新學習環境量表之編製 / The development of the knowledge building environment scale

林奎宇, Lin, Kuei Yu Unknown Date (has links)
The purpose of this study was to develop the Knowledge Building Environment Scale (KBES), which measures the degree to which a learning environment supports knowledge building. Three independent samples were used to validate the reliability and validity of the scale. First, sample A (n = 332) was used to extract the factors through exploratory factor analysis, yielding a three-factor scale comprising an 'idea' factor, an 'agent' (self-directed learner) factor, and a 'community' factor. Second, a series of competing models was established and evaluated by confirmatory factor analysis on sample B (n = 536); among these, the second-order single-factor (hierarchical) model proved the most parsimonious, and the scale showed good reliability and validity. Finally, cross-validation on sample C (n = 536) confirmed the stability and predictive power of the hierarchical model. The KBES can serve relevant institutions as a tool for evaluating learning environments in both teaching and research.
97

Charge Density Analysis of Low-Valent Tetrels

Niepötter, Benedikt 15 January 2016 (has links)
No description available.
98

Classificação automática de modulações mono e multiportadoras utilizando método de extração de características e classificadores SVM / Automatic classification of single-carrier and multicarrier modulations using feature extraction and SVM classifiers

Amoedo, Diego Alves 19 July 2017 (has links)
Cognitive radio is a new technology that aims to solve the spectrum underutilization problem through spectrum sensing, whose objective is to detect the so-called spectrum holes. Automatic modulation classification plays an important role in this scenario, since it provides information about primary users with the goal of aiding spectrum sensing tasks. In this dissertation, we propose a methodology for multiclass and hierarchical classification of modulated signals using support vector machines (SVM) with a set of predefined parameters. In the literature, other works deal with automatic modulation classification with SVM and other classifiers; however, few take a deep look at classifier design. SVM is known for its high discrimination capacity, but its performance is very sensitive to the parameters used when building the classifiers. By using a predefined set of parameters, we seek to analyze the behavior of the classifier broadly and to investigate the influence of parameter changes on the constitution of the classifiers. In addition, we use one-versus-all and one-versus-one decompositions, error-correcting output codes, and hierarchical decomposition. Finally, nine types of modulation (AM, FM, BPSK, QPSK, 16QAM, 64QAM, GMSK, OFDM and WCDMA) are used. The modulation types and decomposition techniques considered cover nearly all of the decomposition techniques and modulation classes present in the literature.
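As a rough illustration of the classifier-design idea (not the dissertation's actual pipeline or features), the sketch below trains one-versus-one and one-versus-all SVM decompositions with a single predefined kernel and cost setting on synthetic stand-in feature vectors for the nine modulation classes.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
classes = ["AM", "FM", "BPSK", "QPSK", "16QAM", "64QAM", "GMSK", "OFDM", "WCDMA"]
centers = rng.normal(scale=3.0, size=(len(classes), 6))      # one feature cluster per class
X = np.vstack([c + rng.normal(size=(200, 6)) for c in centers])
y = np.repeat(np.arange(len(classes)), 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

base = SVC(kernel="rbf", C=10.0, gamma=0.1)                  # predefined parameters, not tuned
for name, clf in [("one-vs-one", OneVsOneClassifier(base)),
                  ("one-vs-all", OneVsRestClassifier(base))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```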
99

Méthodes statistiques pour la modélisation des facteurs influençant la distribution et l’abondance de populations : application aux rapaces diurnes nichant en France / Statistical methods for modelling the distribution and abundance of populations : application to raptors breeding in France

Le Rest, Kévin 19 December 2013 (has links)
In the context of global biodiversity loss, more and more surveys are carried out over broad spatial extents and long time periods in order to understand the processes driving the distribution, abundance, and trends of populations at the relevant biological scales. These studies make it possible to assess population status quantitatively and to establish appropriate management plans consistent with those scales. The statistical analysis of such datasets, however, raises a number of difficulties. Typically, generalized linear models (GLMs) are used to link the variable of interest (e.g. species presence/absence or counts) with external variables suspected of influencing it (e.g. climate and habitat variables). The main unresolved difficulty concerns how to select these explanatory variables when the data are spatially structured. This thesis explores several solutions and proposes an easily applicable method based on a cross-validation procedure that accounts for spatial dependence. The robustness of the method is assessed through simulations and several case studies, including count data with higher-than-expected variability (overdispersion). Particular attention is also given to modelling methods for data with more zeros than expected (zero-inflation). The last part of the thesis uses these methodological lessons to model the distribution, abundance, and trends of diurnal raptors breeding in France.
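A minimal sketch of the kind of spatially aware selection procedure described above: candidate covariate sets for a count GLM are scored by cross-validation in which whole spatial blocks, rather than random observations, are held out. The block construction, the Poisson GLM, and the toy data are illustrative assumptions, not the thesis's exact method.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import PoissonRegressor
from sklearn.metrics import mean_poisson_deviance

def spatial_blocks(coords, n_per_side=3):
    """Assign each site to a coarse grid cell; whole cells are held out together."""
    edges = [np.quantile(coords[:, d], np.linspace(0, 1, n_per_side + 1)[1:-1]) for d in range(2)]
    return np.searchsorted(edges[0], coords[:, 0]) * n_per_side + np.searchsorted(edges[1], coords[:, 1])

def block_cv_deviance(X, y, blocks):
    """Mean held-out Poisson deviance over spatial blocks for one candidate covariate set."""
    dev, model = 0.0, PoissonRegressor(alpha=1e-4, max_iter=300)
    for b in np.unique(blocks):
        tr, te = blocks != b, blocks == b
        model.fit(X[tr], y[tr])
        dev += mean_poisson_deviance(y[te], np.clip(model.predict(X[te]), 1e-6, None)) * te.sum()
    return dev / len(y)

# Toy landscape: counts driven by 2 of 5 candidate covariates plus a smooth spatial trend.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(400, 2))
X = rng.normal(size=(400, 5))
eta = 0.8 * X[:, 0] - 0.6 * X[:, 2] + 0.3 * np.sin(coords[:, 0])   # spatially structured signal
y = rng.poisson(np.exp(eta))
blocks = spatial_blocks(coords)

best = min(((block_cv_deviance(X[:, list(s)], y, blocks), s)
            for r in range(1, 4) for s in combinations(range(5), r)), key=lambda t: t[0])
print("selected covariates:", best[1])
```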
100

基於眼動軌跡之閱讀模式分析 / Classification of reading patterns based on gaze information

張晉文, Chang, Chin Wen Unknown Date (has links)
Reading is one of the ways we acquire knowledge, and different reading patterns lead to different reading efficiency. The objective of this research is to classify reading patterns from fixation data using machine learning techniques. In our experiment, a low-cost eye tracker records eye movements during the reading process, and a dispersion-based algorithm identifies fixations in the recorded data. Fixation features, including duration, path length, landing position, and direction, are extracted for classification. Five categories of reading pattern are defined and investigated in this study: speed reading, slow reading, in-depth reading, skim-and-skip, and keyword spotting. Thirty subjects participated in the experiment; they were instructed to read different articles in specific styles designated by the experimenter so that labels could be assigned to the collected data. Feature selection was performed by analyzing cross-validated predictions on the training data pooled across subjects. Using the eye movements of six randomly selected subjects as test data in each of five validation rounds, the average classification accuracies for the five categories were 78.24%, 74.19%, 93.75%, 87.96%, and 96.20%.
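The dispersion-based fixation detection step can be illustrated with a small sketch of the classic I-DT idea; the thresholds and sampling rate below are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np

def idt_fixations(x, y, t, max_dispersion=30.0, min_duration=0.1):
    """Return (start_time, duration, cx, cy) for windows whose spread stays below threshold."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = np.searchsorted(t, t[i] + min_duration)       # smallest window covering min_duration
        if j >= n:
            break
        def dispersion(a, b):
            return (x[a:b].max() - x[a:b].min()) + (y[a:b].max() - y[a:b].min())
        if dispersion(i, j + 1) <= max_dispersion:
            while j + 1 < n and dispersion(i, j + 2) <= max_dispersion:
                j += 1                                     # grow the window while it stays compact
            fixations.append((t[i], t[j] - t[i], x[i:j + 1].mean(), y[i:j + 1].mean()))
            i = j + 1
        else:
            i += 1                                         # slide past a saccade sample
    return fixations

# Toy gaze trace at ~60 Hz: two fixations separated by a horizontal saccade at t = 0.5 s.
t = np.arange(0, 1.0, 1 / 60)
x = np.where(t < 0.5, 200, 600) + np.random.default_rng(4).normal(0, 3, len(t))
y = np.full_like(t, 300) + np.random.default_rng(5).normal(0, 3, len(t))
print(idt_fixations(x, y, t))
```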
