471

Modelling and forecasting economic time series with single hidden-layer feedforward autoregressive artificial neural networks

Rech, Gianluigi January 2001 (has links)
This dissertation consists of three essays. In the first essay, A Simple Variable Selection Technique for Nonlinear Models, written in cooperation with Timo Teräsvirta and Rolf Tschernig, I propose a variable selection method based on a polynomial expansion of the unknown regression function and an appropriate model selection criterion. The hypothesis of linearity is tested by a Lagrange multiplier test based on this polynomial expansion. If linearity is rejected, a kth-order general polynomial is used as a base for estimating all submodels by ordinary least squares, and the combination of regressors yielding the lowest value of the model selection criterion is selected. The second essay, Modelling and Forecasting Economic Time Series with Single Hidden-layer Feedforward Autoregressive Artificial Neural Networks, proposes a unified framework for artificial neural network modelling. Linearity is tested and the regressors are selected by the methodology developed in essay I. The number of hidden units is determined by a procedure based on a sequence of Lagrange multiplier (LM) tests, and serial correlation of the errors and parameter constancy are checked by LM tests as well. A Monte Carlo study, the two classical lynx and sunspot series, and an application to the monthly S&P 500 index return series are used to demonstrate the performance of the overall procedure. In the third essay, Forecasting with Artificial Neural Network Models (in cooperation with Marcelo Medeiros), the methodology developed in essay II, the most popular methods for artificial neural network estimation, and the linear autoregressive model are compared in terms of forecasting performance on 30 time series from different subject areas. Early stopping, pruning, information-criterion pruning, cross-validation pruning, weight decay, and Bayesian regularization are considered. The findings are that (1) the linear models very often outperform the neural network ones, and (2) the modelling approach to neural networks developed in this thesis compares well with the other neural network modelling methods considered here. / Diss. Stockholm: Handelshögskolan, 2002. Spikblad saknas (announcement sheet missing).
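A minimal sketch of the variable selection step in essay I, assuming a third-order polynomial expansion and BIC as the model selection criterion; the synthetic data, function names, and criterion choice are illustrative and not taken from the thesis. The essay's procedure also precedes this step with an LM linearity test, which is not reproduced here.

```python
import itertools
import numpy as np

def poly_terms(X, order):
    """All monomial columns built from the columns of X up to the given order (no intercept)."""
    cols = []
    for k in range(1, order + 1):
        for combo in itertools.combinations_with_replacement(range(X.shape[1]), k):
            cols.append(np.prod(X[:, list(combo)], axis=1))
    return np.column_stack(cols)

def bic(y, Z):
    """BIC of an OLS fit of y on Z plus an intercept."""
    n = len(y)
    Z1 = np.column_stack([np.ones(n), Z])
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    sigma2 = np.mean((y - Z1 @ beta) ** 2)
    return n * np.log(sigma2) + Z1.shape[1] * np.log(n)

def select_regressors(y, X, order=3):
    """Subset of columns of X whose polynomial expansion gives the lowest BIC."""
    best_crit, best_subset = np.inf, ()
    for r in range(1, X.shape[1] + 1):
        for subset in itertools.combinations(range(X.shape[1]), r):
            crit = bic(y, poly_terms(X[:, list(subset)], order))
            if crit < best_crit:
                best_crit, best_subset = crit, subset
    return best_subset, best_crit

# Toy example: the response depends (nonlinearly) on regressors 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))                 # e.g. four candidate lags
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=400)
print(select_regressors(y, X, order=3))       # expected: subset (0, 2)
```

Exhaustive search over regressor subsets is feasible here because the number of candidate lags is small.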
472

A Cognitive Radio Tracking System for Indoor Environments

Kushki, Azadeh 26 February 2009 (has links)
Advances in wireless communication have enabled mobility of personal computing devices equipped with sensing and computing capabilities. This has motivated the development of location-based services (LBS) that are implemented on top of existing communication infrastructures to cater to changing user contexts. To enable and support the delivery of LBS, accurate, reliable, and real-time user location information is needed. This thesis introduces a cognitive dynamic system for tracking the position of mobile users using received signal strength (RSS) in Wireless Local Area Networks (WLAN). The main challenge in WLAN positioning is the unpredictable nature of the RSS-position relationship. Existing systems rely on a set of training samples collected at a set of anchor points with known positions in the environment to characterize this relationship. The first contribution of this thesis is the use of nonparametric kernel density estimation for minimum mean square error positioning using the RSS training data. This formulation enables the rigorous study of state-space filtering in the context of WLAN positioning. The outcome is the Nonparametric Information (NI) filter, a novel recursive position estimator that incorporates both RSS measurements and a dynamic model of pedestrian motion during estimation. In contrast to traditional Kalman filtering approaches, the NI filter does not require explicit knowledge of the RSS-position relationship and is therefore well suited to the WLAN positioning problem. The use of the dynamic motion model by the NI filter leads to the design of a cognitive dynamic tracking system. This design harnesses the benefits of feedback and position predictions from the filter to guide the selection of anchor points and radio sensors used during estimation. Experimental results using real measurements from an office environment demonstrate the effectiveness of proactive determination of sensing and estimation parameters in mitigating difficulties that arise from the unpredictable nature of the indoor radio environment. In particular, the results indicate that the proposed cognitive design achieves an improvement of 3.19 m (56%) in positioning error relative to memoryless positioning alone.
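As a point of reference for the first contribution, the sketch below illustrates a memoryless kernel-based MMSE position estimate from RSS fingerprints, using a Gaussian product kernel over anchor-point training data. The bandwidth, path-loss model, and layout in the toy example are assumptions, not values from the thesis, and the NI filter's motion model and cognitive feedback loop are not reproduced here.

```python
import numpy as np

def kde_position_estimate(rss, train_rss, train_pos, h=4.0):
    """
    Memoryless MMSE position estimate from a WLAN RSS observation.

    rss       : (d,) RSS vector observed from d access points (dBm)
    train_rss : (m, d) RSS fingerprints recorded at the anchor points
    train_pos : (m, 2) known x/y coordinates of the anchor points
    h         : Gaussian kernel bandwidth in dB (assumed value)
    """
    # Kernel weight of each anchor point given the observation
    sq = np.sum((train_rss - rss) ** 2, axis=1)
    w = np.exp(-sq / (2.0 * h ** 2))
    w /= w.sum()
    # MMSE estimate = posterior mean of position under the kernel density
    return w @ train_pos

# Toy example with synthetic fingerprints from 3 access points
rng = np.random.default_rng(1)
train_pos = rng.uniform(0, 20, size=(100, 2))             # anchor points in a 20 m x 20 m area
aps = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 20.0]])   # access point locations

def mean_rss(p):
    """Simple log-distance path-loss model, illustrative only."""
    d = np.linalg.norm(aps - p, axis=1) + 0.1
    return -30.0 - 20.0 * np.log10(d)

train_rss = np.array([mean_rss(p) for p in train_pos]) + rng.normal(0, 2, size=(100, 3))
true_pos = np.array([12.0, 7.0])
obs = mean_rss(true_pos) + rng.normal(0, 2, size=3)
print(kde_position_estimate(obs, train_rss, train_pos))   # should land near (12, 7)
```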
474

Testing for spatial correlation and semiparametric spatial modeling of binary outcomes with application to aberrant crypt foci in colon carcinogenesis experiments

Apanasovich, Tatiyana Vladimirovna 01 November 2005 (has links)
In an experiment to understand colon carcinogenesis, all animals were exposed to a carcinogen while half the animals were also exposed to radiation. Spatially, we measured the existence of aberrant crypt foci (ACF), namely morphologically changed colonic crypts that are known to be precursors of colon cancer development. The biological question of interest is whether the locations of these ACFs are spatially correlated: if so, this indicates that damage to the colon due to carcinogens and radiation is localized. Statistically, the data take the form of binary outcomes (corresponding to the existence of an ACF) on a regular grid. We develop score-type methods based upon the Matérn and conditional autoregressive (CAR) correlation models to test for spatial correlation in such data, while allowing for nonstationarity. Because of a technical peculiarity of the score-type test, we also develop robust versions of the method. The methods are compared to a generalization of Moran's test for continuous outcomes, and are shown via simulation to have the potential for increased power. When applied to our data, the methods indicate the existence of spatial correlation, and hence indicate localization of damage. Assuming that there are correlations in the locations of the ACF, the questions are how large these correlations are, and whether the correlation structures differ when an animal is exposed to radiation. To understand the extent of the correlation, we cast the problem as a spatial binary regression, where binary responses arise from an underlying Gaussian latent process. We model these marginal probabilities of ACF semiparametrically, using fixed-knot penalized regression splines and single-index models. We fit the models using pairwise pseudolikelihood methods. Assuming that the underlying latent process is strongly mixing, known to be the case for many Gaussian processes, we prove asymptotic normality of the methods. The penalized regression splines have penalty parameters that must converge to zero asymptotically: we derive rates for these parameters that do and do not lead to an asymptotic bias, and we derive the optimal rate of convergence for them. Finally, we apply the methods to the data from our experiment.
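For reference, the classical Moran's I statistic, to which the proposed score-type tests are compared (via a generalization for binary outcomes), can be computed on a regular grid as sketched below. The rook-contiguity weights and the crude normal approximation to the null variance are simplifying assumptions, not the authors' generalized test.

```python
import numpy as np
from scipy import stats

def morans_i(grid):
    """
    Moran's I for a binary (0/1) outcome on a regular grid with rook
    (4-neighbour) contiguity weights; returns the statistic and a rough
    normal-approximation p-value.
    """
    y = grid.astype(float)
    n = y.size
    z = y - y.mean()
    num, W = 0.0, 0.0
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < rows and 0 <= b < cols:
                    num += z[i, j] * z[a, b]
                    W += 1.0
    I = (n / W) * num / np.sum(z ** 2)
    # Crude approximation: E[I] = -1/(n-1), Var[I] ~ 1/W (illustrative only,
    # not the exact randomization variance).
    zscore = (I - (-1.0 / (n - 1))) / np.sqrt(1.0 / W)
    return I, 2 * stats.norm.sf(abs(zscore))

# Toy example: ACF presence/absence on a 20 x 30 grid with a clustered patch
rng = np.random.default_rng(2)
grid = (rng.random((20, 30)) < 0.1).astype(int)
grid[5:9, 10:15] = 1          # localized damage -> positive spatial correlation
print(morans_i(grid))
```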
475

Empirical likelihood and extremes

Gong, Yun 17 January 2012 (has links)
In 1988, Owen introduced empirical likelihood as a nonparametric method for constructing confidence intervals and regions. Since then, empirical likelihood has been studied extensively in the literature due to its generality and effectiveness. It is well known that empirical likelihood has several attractive advantages compared to competitors such as the bootstrap: it determines the shape of confidence regions automatically using only the data; it straightforwardly incorporates side information expressed through constraints; and it is Bartlett correctable. The main part of this thesis extends the empirical likelihood method to several interesting and important statistical inference situations. This thesis has four components. The first component (Chapter II) proposes a smoothed jackknife empirical likelihood method to construct confidence intervals for the receiver operating characteristic (ROC) curve, in order to overcome the computational difficulty that arises when there are nonlinear constraints in the maximization problem. The second component (Chapters III and IV) proposes smoothed empirical likelihood methods to obtain interval estimates for the conditional Value-at-Risk, with the volatility modelled by an ARCH/GARCH model and by a nonparametric regression respectively, which have applications in financial risk management. The third component (Chapter V) derives the empirical likelihood for intermediate quantiles, which play an important role in the statistics of extremes. Finally, the fourth component (Chapters VI and VII) presents two additional results: in Chapter VI, we show that, when the third moment is infinite, the Student's t-statistic may be preferable to the sample mean standardized by the true standard deviation; in Chapter VII, we present a method for testing a subset of parameters for a given parametric model of stationary processes.
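A minimal sketch of Owen's (1988) empirical likelihood for a univariate mean, the baseline that the thesis extends; the chi-square calibration and the toy exponential sample are illustrative only.

```python
import numpy as np
from scipy import optimize, stats

def el_log_ratio(x, mu0):
    """-2 log empirical likelihood ratio for the mean mu0 (Owen, 1988)."""
    z = np.asarray(x, dtype=float) - mu0
    if z.max() <= 0 or z.min() >= 0:
        return np.inf                       # mu0 outside the convex hull of the data
    eps = 1e-10
    lo = -1.0 / z.max() + eps               # keep 1 + lam * z_i > 0 for all i
    hi = -1.0 / z.min() - eps
    g = lambda lam: np.sum(z / (1.0 + lam * z))
    lam = optimize.brentq(g, lo, hi)        # solve the profiling equation for lambda
    return 2.0 * np.sum(np.log1p(lam * z))

def el_confidence_interval(x, level=0.95):
    """EL confidence interval for the mean, inverting the chi-square calibration."""
    crit = stats.chi2.ppf(level, df=1)
    f = lambda mu: el_log_ratio(x, mu) - crit
    xbar = np.mean(x)
    lower = optimize.brentq(f, np.min(x) + 1e-6, xbar)
    upper = optimize.brentq(f, xbar, np.max(x) - 1e-6)
    return lower, upper

rng = np.random.default_rng(3)
sample = rng.exponential(scale=2.0, size=80)   # skewed data, true mean = 2
print(el_confidence_interval(sample))
```

Note how the interval is determined entirely by the data; no variance estimate or symmetry assumption enters, which is the "shape determined automatically" property mentioned above.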
476

Résumé des Travaux en Statistique et Applications des Statistiques / Summary of Research in Statistics and Applications of Statistics

Clémençon, Stéphan 01 December 2006 (has links) (PDF)
This report briefly presents the main lines of my research activity since my doctoral thesis [53], which aimed principally at extending the use of recent advances in computational harmonic analysis for adaptive nonparametric estimation in the i.i.d. setting (such as wavelet analysis) to statistical estimation for Markovian data. As explained in [123], results on concentration-of-measure properties (i.e., probability and moment inequalities over certain functional classes suited to nonlinear approximation) are indispensable for exploiting these analytical tools in a probabilistic framework and for obtaining statistical estimation procedures whose convergence rates improve on those of earlier methods. In [53] (see also [54], [55] and [56]), an analysis technique based on renewal theory, the so-called 'regenerative' method (see [185]), which consists of splitting the trajectories of a Harris recurrent Markov chain into asymptotically i.i.d. segments, was used extensively to establish the required probabilistic results, the long-run behaviour of Markov processes being governed by renewal processes (which randomly define the segments of the trajectory). Once an estimator is constructed, it is important to be able to quantify the uncertainty inherent in the resulting estimate (measured by specific quantiles, the variance, or appropriate functionals of the distribution of the statistic under consideration). In this respect, and beyond the extreme simplicity of its implementation (it amounts to drawing i.i.d. samples from the original sample and recomputing the statistic on the new, bootstrap sample), the bootstrap has major theoretical advantages over the Gaussian asymptotic approximation (the bootstrap distribution automatically captures the second-order structure in the Edgeworth expansion of the distribution of the statistic). It therefore seemed natural to consider the problem of extending the traditional bootstrap procedure to Markovian data. Through work carried out in collaboration with Patrice Bertail, the regenerative method proved not only to be a powerful analytical tool for establishing limit theorems and inequalities, but also to yield practical methods for statistical estimation: the proposed generalization of the bootstrap consists of resampling a random number of regenerative data blocks (or approximations thereof) so as to mimic the renewal structure underlying the data. This approach also turned out to be relevant for many other statistical problems. The first part of the report is thus essentially devoted to presenting the principle of renewal-based statistical methods for Harris Markov chains. The second part of the report is devoted to the construction and study of statistical methods for learning to rank objects, rather than merely classifying them (i.e., assigning them a label), in a supervised setting.
This difficult problem is of crucial importance in many application areas, ranging from the construction of indicators for medical diagnosis to information retrieval (search engines), and it raises ambitious theoretical and algorithmic questions that have not yet been resolved satisfactorily. One possible approach is to reduce the problem to the classification of pairs of observations, as suggested by a criterion widely used in the applications mentioned above (the AUC criterion) for assessing the relevance of an ordering. In work carried out in collaboration with Gábor Lugosi and Nicolas Vayatis, several results were obtained in this direction, requiring the study of U-processes: the novel aspect of the problem lies in the fact that the natural risk estimator here takes the form of a U-statistic. However, in many applications such as information retrieval, only the ordering of the most relevant objects really matters, and the search for criteria corresponding to such problems (known as local ranking problems), and for algorithms that construct rules yielding optimal rankings with respect to them, constitutes a crucial challenge in this area. Several developments along these lines have been carried out in a series of works (still in progress) in collaboration with Nicolas Vayatis. Finally, the third part of the report reflects my interest in applications of probabilistic concepts and statistical methods. Given my initial training, I was naturally led to consider applications in finance first. Although historical approaches generally attract little enthusiasm in this field, I gradually became convinced of the important role that nonparametric statistical methods can play in analysing the massive (very high-dimensional, high-frequency) data available in finance, in order to detect hidden structures and to exploit them, for example for market risk evaluation or portfolio management. This point of view is illustrated by a brief presentation of the work carried out in this direction in collaboration with Skander Slim in the third part. In recent years I have had the opportunity to meet applied mathematicians and scientists working in other fields who can also benefit from advances in probabilistic modelling and statistical methods. I was thus able to address applications in toxicology, more precisely the problem of assessing the risk of dietary contamination, during my year of secondment at the Institut National de la Recherche Agronomique within the Metarisk unit, a multidisciplinary unit entirely devoted to dietary risk analysis. For instance, I was able to use my expertise in Markovian modelling to propose a stochastic model describing the temporal evolution of the quantity of a contaminant present in the body (so as to account both for the accumulation due to successive intakes and for the contaminant's own pharmacokinetics governing the elimination process), together with appropriate statistical inference methods, in work carried out in collaboration with Patrice Bertail and Jessica Tressou.
This line of research is ongoing, and one may hope that it will eventually provide a basis for recommendations in the field of public health. In addition, I am currently fortunate to work with Hector de Arazoza, Bertran Auvert, Patrice Bertail, Rachid Lounes and Viet-Chi Tran on the stochastic modelling of the HIV epidemic from the epidemiological data recorded for the Cuban population, which constitute one of the best-documented databases on the evolution of an epidemic of this type. Although the project essentially aims at obtaining a numerical model (allowing short-term forecasts of the incidence of the epidemic, for example in order to plan the production of the required quantity of antiretrovirals), it has led us to address ambitious theoretical questions, ranging from the existence of a quasi-stationary measure describing the long-run evolution of the epidemic to problems related to the incompleteness of the available epidemiological data. It is unfortunately impossible to discuss these questions here without risking misrepresenting them; a presentation of the mathematical problems encountered in this project would deserve a report of its own.
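The regenerative bootstrap described in the first part can be sketched for the simplest case of a chain with an accessible atom, where visits to the atom split the trajectory into i.i.d. blocks. The reflected random walk, the choice of the mean as the target functional, and the stopping rule for the resampled length below are assumptions made for illustration, not the exact procedure of the cited works.

```python
import numpy as np

def regeneration_blocks(path, atom):
    """Split a chain trajectory into blocks between successive visits to the atom."""
    hits = [t for t, s in enumerate(path) if s == atom]
    return [path[hits[i]:hits[i + 1]] for i in range(len(hits) - 1)]

def regenerative_bootstrap_mean(path, atom, n_boot=999, rng=None):
    """
    Bootstrap distribution of the trajectory mean obtained by resampling whole
    regeneration blocks until the resampled length reaches the original one.
    """
    rng = np.random.default_rng(rng)
    blocks = regeneration_blocks(np.asarray(path), atom)
    n = sum(len(b) for b in blocks)
    out = []
    for _ in range(n_boot):
        total, acc = 0, []
        while total < n:                        # draw a random number of blocks
            b = blocks[rng.integers(len(blocks))]
            acc.append(b)
            total += len(b)
        out.append(np.concatenate(acc).mean())
    return np.array(out)

# Toy example: random walk on {0,...,4} reflected at the ends, atom = state 0
rng = np.random.default_rng(4)
path = [0]
for _ in range(5000):
    step = rng.choice([-1, 1])
    path.append(min(4, max(0, path[-1] + step)))
boot = regenerative_bootstrap_mean(path, atom=0, n_boot=500, rng=5)
print(np.mean(path), np.percentile(boot, [2.5, 97.5]))   # point estimate and 95% interval
```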
477

模糊抽樣調查及無母數檢定 / Fuzzy Sampling Survey with Nonparametric Tests

林國鎔, Lin, Guo-Rong Unknown Date (has links)
本文主要的目的是藉由The Geometer's Sketchpad (GSP)軟體的設計，幫助我們得到一組連續型模糊樣本。另外對於模糊數的無母數檢定我們提供了一個較為一般的方法，可以針對梯型、三角型，區間型的模糊樣本同時進行處理。 藉由利用GSP. 軟體所設計的模糊問卷，可以較清楚地紀錄受訪者的感覺，此外我們所提供之對於模糊數的無母數檢定方法比其他方法較為有效力。 在未來的研究裡，我們仍有一些問題需要解決，呈述如下：當所施測的樣本數很大時，如何有效率的在網路上紀錄受測者所建構的隸屬度函數？ / The purpose of this paper is to develop a methodology for obtaining continuous fuzzy data by using the software The Geometer's Sketchpad (GSP). We also propose a general method for nonparametric tests with fuzzy data that can deal with trapezoid, triangular, and interval-valued data simultaneously. Using a fuzzy questionnaire designed with GSP can help respondents record their thoughts more precisely. Additionally, our method for nonparametric tests with fuzzy data is more powerful than others. Research issues for further investigation include the following: how can the membership functions constructed by respondents be recorded efficiently online, especially when the sample size is large?
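One simplified way to carry out a nonparametric two-sample comparison of trapezoidal fuzzy responses is to defuzzify each response by its centroid and apply an ordinary rank test to the crisp scores, as sketched below. This centroid shortcut is an illustration only and is narrower than the general method for fuzzy data proposed in the thesis.

```python
import numpy as np
from scipy import stats

def trapezoid_centroid(a, b, c, d):
    """Centroid (centre of gravity) of a trapezoidal fuzzy number with a <= b <= c <= d."""
    denom = 3.0 * (c + d - a - b)
    if denom == 0.0:                      # degenerate (crisp) number
        return float(a)
    return (c**2 + c*d + d**2 - a**2 - a*b - b**2) / denom

def fuzzy_rank_sum_test(group1, group2):
    """
    Two-sample nonparametric test on trapezoidal fuzzy responses:
    defuzzify each response by its centroid, then apply the Wilcoxon
    rank-sum (Mann-Whitney) test to the crisp scores.
    """
    x = [trapezoid_centroid(*t) for t in group1]
    y = [trapezoid_centroid(*t) for t in group2]
    return stats.mannwhitneyu(x, y, alternative="two-sided")

# Toy example: each respondent gives a trapezoidal rating on a 0-10 scale.
# Interval and triangular answers are special cases (b == c, or a == b and c == d).
group1 = [(2, 3, 4, 5), (1, 2, 2, 4), (3, 4, 5, 6), (2, 2, 3, 3), (4, 5, 6, 7)]
group2 = [(5, 6, 7, 8), (6, 7, 7, 9), (4, 5, 6, 8), (7, 8, 9, 9), (5, 6, 8, 9)]
print(fuzzy_rank_sum_test(group1, group2))
```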
478

Temporal and Spatial Analysis of Monogenetic Volcanic Fields

Kiyosugi, Koji 01 January 2012 (has links)
Achieving an understanding of the nature of monogenetic volcanic fields depends on identification of the spatial and temporal patterns of volcanism in these fields, and their relationships to structures mapped in the shallow crust and inferred in the deep crust and mantle through interpretation of geochemical, radiometric and geophysical data. We investigate the spatial and temporal distributions of volcanism in the Abu Monogenetic Volcano Group, Southwest Japan. An E-W-elongated volcano distribution, identified by a nonparametric kernel method, is found to be consistent with the spatial extent of P-wave velocity anomalies in the lower crust and upper mantle, supporting the idea that the spatial density map of volcanic vents reflects the geometry of a mantle diapir. Estimated basalt supply to the lower crust is constant. This observation and the spatial distribution of volcanic vents suggest stable magma productivity and an essentially constant two-dimensional size of the source mantle diapir. We mapped conduits, dike segments, and sills in the San Rafael sub-volcanic field, Utah, where the shallowest part of a Pliocene magmatic system is exceptionally well exposed. The distribution of conduits matches the major features of the dike distribution, including development of clusters and distribution of outliers. The comparison of the San Rafael conduit distribution with the distributions of volcanoes in several recently active volcanic fields supports the use of statistical models, such as nonparametric kernel methods, in probabilistic hazard assessment for distributed volcanism. We developed a new recurrence rate calculation method that uses a Monte Carlo procedure to better reflect and understand the impact of radiometric age uncertainties on the uncertainty of recurrence rate estimates for volcanic activity in the Abu, Yucca Mountain Region, and Izu-Tobu volcanic fields. Results suggest that the recurrence rates of volcanic fields can change by more than one order of magnitude on time scales of several hundred thousand to several million years. This suggests that the magma generation rate beneath volcanic fields may change over these time scales. Also, recurrence rate varies by more than one order of magnitude between these volcanic fields, consistent with the idea that distributed volcanism may be influenced by both the rate of magma generation and the potential for dike interaction during ascent.
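A minimal sketch of a Monte Carlo recurrence-rate calculation that propagates radiometric age uncertainty, assuming each event age is reported as a mean with a 1-sigma error and the rate is taken as (number of events − 1) divided by the sampled age span; the ages, errors, and rate definition in the toy example are assumptions, not the thesis data or its exact procedure.

```python
import numpy as np

def recurrence_rate_mc(ages, sigmas, n_sims=10000, rng=None):
    """
    Monte Carlo distribution of the volcanic recurrence rate (events per unit time)
    when each event age is known only as a radiometric mean +/- 1-sigma error.
    Rate is taken as (N - 1) / (max age - min age) for each simulated age set.
    """
    rng = np.random.default_rng(rng)
    ages = np.asarray(ages, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    n = len(ages)
    rates = np.empty(n_sims)
    for k in range(n_sims):
        sampled = rng.normal(ages, sigmas)        # one plausible set of event ages
        span = sampled.max() - sampled.min()
        rates[k] = (n - 1) / span
    return rates

# Toy example: 12 vents with ages in ka (thousands of years) and 1-sigma errors
rng = np.random.default_rng(6)
true_ages = np.sort(rng.uniform(100.0, 900.0, size=12))
errors = rng.uniform(10.0, 60.0, size=12)
rates = recurrence_rate_mc(true_ages, errors, n_sims=5000, rng=7)
print(np.percentile(rates, [5, 50, 95]))          # events per ka: 5th/50th/95th percentiles
```

The width of the resulting percentile interval shows directly how much of the uncertainty in the recurrence rate is inherited from the age determinations.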
479

基於Penalized Spline的信賴帶之比較與改良 / Comparison and Improvement for Confidence Bands Based on Penalized Spline

游博安, Yu, Po An Unknown Date (has links)
迴歸分析中，若變數間有非線性(nonlinear)的關係，此時我們可以用B-spline線性迴歸，一種無母數的方法，建立模型。Penalized spline是B-spline方法的一種改良，其想法是增加一懲罰項，避免估計函數時出現過度配適的問題。本文中，考慮三種方法：(a) Marginal Mixed Model approach, (b) Conditional Mixed Model approach, (c) 貝氏方法建立信賴帶，其中，我們對第一二種方法內的估計式作了一點調整，另外，懲罰項中的平滑參數也是我們考慮的問題。我們發現平滑參數確實會影響信賴帶，所以我們使用cross-validation來選取平滑參數。在調整的cross-validation下，Marginal Mixed Model的信賴帶估計不平滑的函數效果較好，Conditional Mixed Model的信賴帶估計平滑函數的效果較好，貝氏的信賴帶估計函數效果較差。 / In regression analysis, we can use B-splines to estimate the regression function nonparametrically when it is nonlinear. Penalized splines have been proposed to improve the performance of B-splines by including a penalty term to prevent over-fitting. In this article, we compare confidence bands constructed by three estimation methods: (a) the Marginal Mixed Model approach, (b) the Conditional Mixed Model approach, and (c) a Bayesian approach. We modify the first two methods slightly. In addition, the selection of the smoothing parameter in the penalty is considered. We find that the smoothing parameter substantially affects the confidence bands, so we use cross-validation to choose it. Finally, under the restricted cross-validation, the Marginal Mixed Model approach performs better for less smooth regression functions, the Conditional Mixed Model approach performs better for smooth regression functions, and the Bayesian approach performs poorly.
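A sketch of a penalized spline fit (truncated-line basis) with the smoothing parameter chosen by k-fold cross-validation; the basis, knot placement, penalty form, and synthetic data are assumptions for illustration and do not reproduce the mixed-model or Bayesian confidence-band constructions compared in the thesis.

```python
import numpy as np

def pspline_design(x, knots):
    """Truncated-line penalized-spline basis: [1, x, (x - kappa_k)_+]."""
    return np.column_stack([np.ones_like(x), x] +
                           [np.clip(x - k, 0.0, None) for k in knots])

def pspline_fit(x, y, knots, lam):
    """Penalized least squares: minimize ||y - Xb||^2 + lam * ||b_knots||^2."""
    X = pspline_design(x, knots)
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalize only the knot coefficients
    return np.linalg.solve(X.T @ X + lam * D, X.T @ y)

def cv_score(x, y, knots, lam, n_folds=5, rng=None):
    """k-fold cross-validation score for a given smoothing parameter."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(x))
    err = 0.0
    for hold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, hold)
        beta = pspline_fit(x[train], y[train], knots, lam)
        pred = pspline_design(x[hold], knots) @ beta
        err += np.sum((y[hold] - pred) ** 2)
    return err / len(x)

# Toy example: choose lambda by cross-validation on a nonlinear signal
rng = np.random.default_rng(8)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=200)
knots = np.linspace(0.05, 0.95, 15)
lams = 10.0 ** np.arange(-4, 3, dtype=float)
scores = [cv_score(x, y, knots, lam, rng=9) for lam in lams]
print(lams[int(np.argmin(scores))], np.round(scores, 4))
```

Too small a smoothing parameter lets the fit chase the noise, too large a value flattens genuine curvature; the cross-validation score makes that trade-off explicit, which is why the choice also matters for the width and coverage of confidence bands.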
480

Διορθωμένη-για-κίνδυνο κατάταξη απόδοσης των ελληνικών μετοχικών αμοιβαίων κεφαλαίων / Risk-adjusted performance ranking of Greek equity mutual funds

Δημητρακόπουλος, Ιωάννης 30 March 2009 (has links)
Σε αυτήν την έρευνα, κατασκευάσαμε την διορθωμένη για κίνδυνο κατάταξη αποδόσεων για την περίπτωση των ελληνικών μετοχικών αμοιβαίων κεφαλαίων. Η διορθωμένη για κίνδυνο απόδοση μετρά τη ποσότητα του κινδύνου και εκφράζεται γενικά ως αριθμός ή κατάταξη. Οι διορθωμένες για κίνδυνο αποδόσεις εφαρμόζονται σε μεμονωμένα αξιόγραφα, επενδυτικά κεφάλαια και σε χαρτοφυλάκια. Η εμμονή ορίζεται ως ένα φαινόμενο όπου η σχετική (κατάταξη) απόδοση τείνει να επαναλαμβάνεται σε διαδοχικά χρονικά διαστήματα. Εφαρμόσαμε διάφορα τεστ προκειμένου να αξιολογηθεί η παρουσία ή όχι της εμμονής. Τα εμπειρικά αποτελέσματα μας έδειξαν ότι η εμμονή γίνεται πιο αδύναμη σε μακροπρόθεσμο χρονικό ορίζοντα. / In this research we construct the ranking of risk-adjusted returns for the Greek equity mutual fund market. Risk-adjusted return is a concept that refines an investment's return by measuring how much risk was involved in producing that return; it is generally expressed as a number or a rating. Risk-adjusted returns are applied to individual securities, investment funds, and portfolios. Persistence is defined as a phenomenon where relative (ranked) performance tends to repeat across successive time intervals. We apply various tests in order to assess the presence or absence of persistence. Our analysis documents that persistence becomes weaker as the investment horizon increases.
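As an illustration of the kind of risk-adjusted ranking and persistence check described above, the sketch below ranks funds by Sharpe ratio in two successive subperiods and tests whether the rankings repeat using a Spearman rank correlation. The Sharpe ratio, the two-period split, and the simulated returns are assumptions, not the measures, tests, or data used in the thesis.

```python
import numpy as np
from scipy import stats

def sharpe_ratio(returns, rf=0.0):
    """Annualized Sharpe ratio from monthly returns (one example of a risk-adjusted measure)."""
    excess = np.asarray(returns) - rf
    return np.sqrt(12.0) * excess.mean() / excess.std(ddof=1)

def persistence_test(returns, split=None, rf=0.0):
    """
    Rank funds by Sharpe ratio in two successive subperiods and test whether
    the rankings repeat, using Spearman rank correlation.
    returns : (T, n_funds) array of monthly fund returns
    """
    returns = np.asarray(returns)
    T, n_funds = returns.shape
    split = split or T // 2
    first = np.array([sharpe_ratio(returns[:split, j], rf) for j in range(n_funds)])
    second = np.array([sharpe_ratio(returns[split:, j], rf) for j in range(n_funds)])
    rho, pval = stats.spearmanr(first, second)
    return rho, pval

# Toy example: 20 funds, 72 months, half the funds given a persistent skill premium
rng = np.random.default_rng(10)
skill = np.where(np.arange(20) < 10, 0.004, 0.0)          # monthly premium for 10 funds
rets = rng.normal(0.005, 0.04, size=(72, 20)) + skill
print(persistence_test(rets))                             # positive rho suggests persistence
```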
