421

Comparison Of Missing Value Imputation Methods For Meteorological Time Series Data

Aslan, Sipan 01 September 2010 (has links) (PDF)
Dealing with missing data in spatio-temporal time series constitutes an important branch of the general missing data problem. Because the statistical properties of time-dependent data are characterized by the sequentiality of observations, any interruption of consecutiveness in a time series causes severe problems. To make reliable analyses in this setting, missing data must be handled cautiously without disturbing the series' statistical properties, chiefly its temporal and spatial dependencies. In this study we compare several imputation methods for completing missing values in spatio-temporal meteorological time series. Several imputation methods are assessed on their performance for artificially created missing data in monthly total precipitation and monthly mean temperature series obtained from climate stations of the Turkish State Meteorological Service. The artificially created missing data are estimated using six methods. Single Arithmetic Average (SAA), Normal Ratio (NR) and NR Weighted with Correlations (NRWC) are the three simple methods used in the study. In addition, we use two computationally intensive methods: a Multi-Layer Perceptron type Neural Network (MLPNN) and a Markov Chain Monte Carlo approach based on the Expectation-Maximization algorithm (EM-MCMC). We also propose a modification of the EM-MCMC method in which the results of the simple imputation methods are used as auxiliary variables. Besides an accuracy measure based on squared errors, we propose the Correlation Dimension (CD) technique, an important subject of nonlinear dynamic time series analysis, for evaluating imputation performance.
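As a concrete illustration of the simpler station-based estimators named above, the following is a minimal Python sketch of Normal Ratio imputation, optionally weighted by inter-station correlations (one common NRWC-style weighting; the station values are hypothetical and the exact weighting used in the thesis may differ).

```python
import numpy as np

def normal_ratio_impute(target_mean, neighbor_values, neighbor_means, correlations=None):
    """Estimate a missing value at a target station from simultaneous
    observations at neighboring stations using the Normal Ratio method.

    target_mean     : long-term mean of the target station
    neighbor_values : observations at the neighbors for the missing time step
    neighbor_means  : long-term means of the neighbors
    correlations    : optional target-neighbor correlations; if given, the
                      ratio-adjusted neighbor values are weighted by r**2
                      (one common weighting; other choices exist)
    """
    neighbor_values = np.asarray(neighbor_values, dtype=float)
    neighbor_means = np.asarray(neighbor_means, dtype=float)
    # Scale each neighbor observation by the ratio of long-term means.
    adjusted = (target_mean / neighbor_means) * neighbor_values
    if correlations is None:
        weights = np.ones_like(adjusted)          # plain NR: simple average
    else:
        weights = np.asarray(correlations) ** 2   # NRWC-style weighting
    return float(np.sum(weights * adjusted) / np.sum(weights))

# Toy example with three hypothetical neighboring stations
estimate = normal_ratio_impute(
    target_mean=55.0,
    neighbor_values=[60.0, 48.0, 52.0],
    neighbor_means=[58.0, 50.0, 54.0],
    correlations=[0.9, 0.8, 0.85],
)
print(f"imputed value: {estimate:.2f}")
```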
422

Computational modeling, stochastic and experimental analysis with thermoelastic stress analysis for fiber reinforced polymeric composite material systems

Johnson, Shane Miguel 05 May 2010 (has links)
Many studies applying Thermoelastic Stress Analysis (TSA) and infrared thermography to Fiber Reinforced Polymeric materials (FRPs) are concerned with surface detection of "hot spots" in order to locate and infer damage. Such experimental analyses usually yield only qualitative relations, and correlations between stress state and damage severity cannot be obtained. This study introduces quantitative experimental methodologies for TSA and Digital Image Correlation to expand the use of remote sensing technologies for characterizing static behavior, detecting static damage initiation, and tracking fatigue damage in FRPs. Three major experimental studies are conducted and coupled with nonlinear anisotropic material modeling: static testing and TSA of hybrid bio-composite material systems, a new stochastic model for fatigue damage of FRPs, and fracture analysis of FRP single-lap joints. Experimental calibration techniques are developed to validate the proposed macromechanical and micromechanical nonlinear anisotropic modeling frameworks under multi-axial states of stress. The High Fidelity Generalized Method of Cells (HFGMC), a sophisticated micromechanical model developed for the analysis of multi-phase composites with nonlinear elastic and elastoplastic constituents, is employed in this study to analyze hybrid bio-composites. Macro-mechanical nonlinear anisotropic models and a linear orthotropic model for fracture behavior using the Extended Finite Element Method (XFEM) are also considered and compared with the HFGMC method. While micromechanical and FE results are helpful for correlating with quasi-static behavior, analyzing damage progression after initiation is not straightforward and involves severe energy dissipation. This is especially true for fatigue damage evolution, such as that of composite joints, which is associated with uncertainty and randomness. Toward that goal, stochastic Markov chain fatigue damage models are used to predict cumulative damage with new damage indices, using full-field TSA image analysis algorithms developed for continuously acquired measurements during fatigue loading of S2-Glass/E733FR unidirectional single-lap joints. Static damage initiation is also investigated experimentally with TSA in single-lap joints with thick adherends, providing new design limits. The computational modeling, stochastic and experimental methods developed in this study have a wide range of applications for static, fracture and fatigue damage of different FRP material and structural systems.
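As a rough illustration of the stochastic Markov-chain fatigue-damage idea referenced above (not the thesis's calibrated model), the sketch below simulates cumulative damage as a unidirectional Markov chain over discretized damage states, with hypothetical transition probabilities.

```python
import numpy as np

def simulate_fatigue_damage(transition_matrix, n_cycles, start_state=0, seed=0):
    """Simulate a damage-state trajectory of a discrete-time Markov chain.

    transition_matrix : (k, k) row-stochastic matrix; state k-1 is failure
    n_cycles          : number of load cycles (one transition per cycle)
    Returns the sequence of visited damage states.
    """
    rng = np.random.default_rng(seed)
    k = transition_matrix.shape[0]
    states = [start_state]
    for _ in range(n_cycles):
        current = states[-1]
        states.append(rng.choice(k, p=transition_matrix[current]))
    return np.array(states)

# Hypothetical 4-state unidirectional chain: 0 (intact) ... 3 (failed, absorbing)
P = np.array([
    [0.995, 0.005, 0.000, 0.000],
    [0.000, 0.990, 0.010, 0.000],
    [0.000, 0.000, 0.980, 0.020],
    [0.000, 0.000, 0.000, 1.000],
])
trajectory = simulate_fatigue_damage(P, n_cycles=2000)
print("final damage state:", trajectory[-1])
```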
423

L'uso delle reti sociali per la costruzione di campioni probabilistici: possibilità e limiti per lo studio di popolazioni senza lista di campionamento / Using social networks to build probability samples: possibilities and limits for studying populations without a sampling frame

VITALINI, ALBERTO 04 March 2011 (has links)
Snowball sampling is considered a non-probability sampling method whose representativeness can be judged only on subjective grounds. On the other hand, it is often the only method that is practically usable for populations without a sampling frame. The thesis is divided into two parts. The first, theoretical, part describes some attempts proposed in the literature to bring snowball-type sampling into the fold of probability sampling; among these, Respondent Driven Sampling is noteworthy: a sampling design meant to combine snowball sampling with a mathematical model that weights the sampled units so as to compensate for the non-randomness of selection and thus permit statistical inference. The second, empirical, part investigates the performance of RDS both through simulations and through a web survey of a virtual community on the Internet, for which the network structure and some demographic characteristics of every individual are known. The RDS estimates computed from the simulation and web-survey data are compared with the true population values, and potential sources of bias (in particular those related to the random-recruitment assumption) are analyzed. / Populations without a sampling frame are inherently hard to sample with conventional sampling designs. Often the only practical way of obtaining the sample is to follow social links from some initially identified respondents to add more research participants. These kinds of link-tracing designs make the sample liable to various forms of bias and make it extremely difficult to generalize the results to the population studied. This thesis is divided into two parts. The first part describes some attempts to build a statistical theory of link-tracing designs and illustrates in depth Respondent-Driven Sampling, a link-tracing sampling design that should allow researchers to make asymptotically unbiased estimates, under certain conditions, in populations without a sampling frame. The second part investigates the performance of RDS by simulating sampling from a virtual community on the Internet, for which both the network structure of the population and demographic traits of each individual are available. In addition to simulations, this thesis tests RDS through a web survey of the same population. RDS estimates from the simulations and the web survey are compared to true population values, and potential sources of bias (in particular those related to the random recruitment assumption) are discussed.
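As a minimal illustration of the weighting idea behind RDS estimation, the following Python sketch implements an inverse-degree (RDS-II style) estimator on made-up data; this is a common estimator in the RDS literature and not necessarily the exact one evaluated in the thesis.

```python
import numpy as np

def rds_ii_estimate(values, degrees):
    """Estimate a population mean from an RDS sample by weighting each
    respondent inversely to their reported network degree (RDS-II style).

    values  : trait measured on each sampled respondent (e.g., 0/1 indicator)
    degrees : each respondent's self-reported number of network contacts
    """
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(degrees, dtype=float)  # inverse-degree weights
    return float(np.sum(weights * values) / np.sum(weights))

# Toy sample: high-degree respondents are over-represented by the chain
# referral process, so their contribution is down-weighted.
trait  = [1, 0, 1, 1, 0, 1]
degree = [25, 3, 40, 10, 4, 30]
print("naive mean:", np.mean(trait))
print("RDS-II estimate:", round(rds_ii_estimate(trait, degree), 3))
```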
424

Résumé des Travaux en Statistique et Applications des Statistiques / Summary of Research in Statistics and Applications of Statistics

Clémençon, Stéphan 01 December 2006 (has links) (PDF)
This report briefly presents the main lines of my research activity since my doctoral thesis [53], whose principal aim was to extend the use of recent advances in computational harmonic analysis for adaptive nonparametric estimation in the i.i.d. setting (such as wavelet analysis) to statistical estimation for Markovian data. As explained in [123], results on concentration-of-measure properties (i.e., probability and moment inequalities over certain functional classes suited to nonlinear approximation) are indispensable for exploiting these analysis tools in a probabilistic framework and obtaining statistical estimation procedures whose convergence rates outperform those of earlier methods. In [53] (see also [54], [55] and [56]), an analysis method based on renewal theory, the so-called 'regenerative' method (see [185]), which consists in splitting the trajectories of a Harris recurrent Markov chain into asymptotically i.i.d. segments, was used extensively to establish the required probabilistic results, the long-run behavior of Markov processes being governed by renewal processes (which randomly define the segments of the trajectory). Once an estimator has been constructed, it is then important to quantify the uncertainty inherent in the resulting estimate (measured by specific quantiles, the variance, or appropriate functionals of the distribution of the statistic under consideration). In this respect, and beyond the extreme simplicity of its implementation (it merely amounts to drawing i.i.d. resamples from the original sample and recomputing the statistic on the new sample, the bootstrap sample), the bootstrap has major theoretical advantages over the Gaussian asymptotic approximation (the bootstrap distribution automatically captures the second-order structure in the Edgeworth expansion of the distribution of the statistic). It was natural for me to consider the problem of extending the traditional bootstrap procedure to Markovian data. Through work carried out in collaboration with Patrice Bertail, the regenerative method proved to be not only a powerful analytical tool for establishing limit theorems and inequalities, but also a source of practical methods for statistical estimation: the proposed generalization of the bootstrap consists in resampling a random number of regenerative data blocks (or approximations of them) so as to mimic the renewal structure underlying the data. This approach also turned out to be relevant for many other statistical problems. The first part of the report therefore essentially presents the principle of renewal-based statistical methods for Harris Markov chains. The second part of the report is devoted to the construction and study of statistical methods for learning to rank objects, rather than merely to classify them (i.e., assign them a label), in a supervised setting.
This difficult problem is of crucial importance in many application domains, ranging from the design of indicators for medical diagnosis to information retrieval (search engines), and raises ambitious theoretical and algorithmic questions that have not yet been resolved satisfactorily. One possible approach reduces the problem to the classification of pairs of observations, as suggested by a criterion widely used in the applications mentioned above (the AUC criterion) for assessing the relevance of an ordering. In work carried out in collaboration with Gabor Lugosi and Nicolas Vayatis, several results were obtained in this direction, requiring the study of U-processes: the novel aspect of the problem lies in the fact that the natural estimator of the risk here takes the form of a U-statistic. However, in many applications such as information retrieval, only the ordering of the most relevant objects really matters, and devising criteria corresponding to such problems (called local ranking problems), together with algorithms for constructing rules that yield optimal rankings with respect to them, is a crucial challenge in this field. Several developments along these lines have been carried out in a series of works (still ongoing) in collaboration with Nicolas Vayatis. Finally, the third part of the report reflects my interest in applications of probabilistic concepts and statistical methods. Owing to my initial training, I was naturally led to consider applications in finance first. Although approaches based on historical data do not usually generate much enthusiasm in this field, I gradually became convinced of the important role that nonparametric statistical methods can play in analyzing the massive (very high-dimensional, high-frequency) data available in finance, in order to detect hidden structures and exploit them, for example for market risk assessment or portfolio management. This point of view is illustrated by the brief presentation, in this third part, of work carried out along these lines in collaboration with Skander Slim. In recent years I have had the opportunity to meet applied mathematicians and scientists working in other fields that can also benefit from advances in probabilistic modeling and statistical methods. I was thus able to tackle applications in toxicology, more precisely the problem of assessing the risk of dietary contamination, during my year on secondment at the Institut National de la Recherche Agronomique in the Metarisk unit, a multidisciplinary unit entirely devoted to dietary risk analysis. For example, I used my expertise in Markovian modeling to propose a stochastic model describing the temporal evolution of the quantity of contaminant present in the body (so as to account both for the accumulation due to successive intakes and for the contaminant-specific pharmacokinetics governing the elimination process), together with suitable statistical inference methods, in work carried out in collaboration with Patrice Bertail and Jessica Tressou.
This line of research is still ongoing, and one may hope that it will eventually provide a basis for recommendations in public health. I am also fortunate to be working at present with Hector de Arazoza, Bertran Auvert, Patrice Bertail, Rachid Lounes and Viet-Chi Tran on the stochastic modeling of the HIV epidemic from the epidemiological data recorded on the Cuban population, which constitute one of the best-documented databases on the evolution of an epidemic of this type. Although this project aims essentially at building a numerical model (allowing short-term forecasts of the incidence of the epidemic, so that, for example, the production of the necessary quantity of antiretroviral drugs can be planned), it has led us to address ambitious theoretical questions, ranging from the existence of a quasi-stationary measure describing the long-run evolution of the epidemic to problems related to the incompleteness of the available epidemiological data. Unfortunately, I cannot discuss these questions here without the risk of misrepresenting them; a presentation of the mathematical problems encountered in this project would deserve a report of its own.
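As a minimal illustration of the regenerative block idea presented in the first part of the report, here is a Python sketch under simplifying assumptions (a finite-state chain with a known atom state); it is not the approximate-block scheme developed with Patrice Bertail, only the basic cut-and-resample mechanism.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_chain(P, n, start=0):
    """Simulate n steps of a finite-state Markov chain with transition matrix P."""
    states = [start]
    for _ in range(n - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

def regenerative_blocks(path, atom):
    """Cut the trajectory into blocks between successive visits to the atom."""
    hits = np.flatnonzero(path == atom)
    return [path[hits[i]:hits[i + 1]] for i in range(len(hits) - 1)]

def block_bootstrap_mean(blocks, n_boot=500):
    """Resample whole regenerative blocks to approximate the sampling
    distribution of the mean of the chain."""
    means = []
    for _ in range(n_boot):
        resampled = [blocks[i] for i in rng.integers(0, len(blocks), size=len(blocks))]
        means.append(np.concatenate(resampled).mean())
    return np.array(means)

# Hypothetical 3-state chain; state 0 plays the role of the regeneration atom.
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
path = simulate_chain(P, n=5000)
blocks = regenerative_blocks(path, atom=0)
boot = block_bootstrap_mean(blocks)
print("mean:", path.mean(), "bootstrap std. error:", boot.std())
```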
425

電子化服務傳遞之協同式定價模式研究 / iPrice: A Collaborative Pricing Model for e-Service Bundle Delivery

張瑋倫, Chang, Wei-Lun Unknown Date (has links)
Information goods pricing is an essential and emerging topic in the era of the information economy. Myriad researchers have devoted considerable attention to developing and testing methods of information goods pricing. Nevertheless, certain shortcomings remain to be overcome. This study brings together several concepts that have recently attracted attention in other disciplines, namely collaborative prototyping, prospect theory, ERG theory, and maintenance, drawn from design, economics, psychology, and software engineering respectively. It proposes a novel conceptual framework for information goods pricing with three advantages: (1) it provides a collaborative process that can generate several prototypes through trial and error during pricing; (2) it incorporates the beliefs of consumer and producer by maximizing utility and profit; and (3) it offers an appropriate service bundle by interacting with the consumer and discovering actual needs. Because of the unique cost structure and product characteristics of information goods, conventional pricing strategies are unfeasible and a differential pricing strategy is crucial. Nevertheless, few models exist for pricing information goods in the e-service industry. This study proposes a novel collaborative pricing model in which customers are active participants in determining product prices and adopt prices and services that meet their changing needs. It also shows that the collaborative pricing model generates an optimal bundle price at equilibrium with optimal profit and utility. Theoretical proofs and practical implications justify the pricing model, which is essential for future information goods pricing in the information economy. Moreover, we apply iCare e-service delivery as an exemplar and scenario for our system. The objective of iCare is to provide quality e-services to elderly people anywhere and anytime. The new pricing method goes beyond the current iCare e-service delivery process by furnishing personalized and collaborative bundles. The iPrice system for pricing information goods fills a gap in the previous literature, which considers only consumers or providers. Unlike existing work, iPrice integrates these distinct concepts to yield more benefit to consumers and more profit to providers. Thus, iPrice also provides a roadmap for future research on information goods pricing.
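The abstract does not give the model's formulation, so the following is only a toy Python sketch of the general idea of searching for a bundle price that balances consumer utility against provider profit; the functional forms, the numbers, and the simple grid search are hypothetical and are not the iPrice model itself.

```python
import numpy as np

def consumer_utility(price, valuation=100.0):
    """Toy linear utility: what the bundle is worth to the consumer, net of price."""
    return max(valuation - price, 0.0)

def provider_profit(price, cost=40.0):
    """Toy profit: price received minus the cost of delivering the bundle."""
    return max(price - cost, 0.0)

def negotiate_price(prices):
    """Pick the price that maximizes the product of utility and profit,
    a Nash-bargaining-style compromise between the two sides."""
    scores = [consumer_utility(p) * provider_profit(p) for p in prices]
    return prices[int(np.argmax(scores))]

candidate_prices = np.linspace(40, 100, 121)
p_star = negotiate_price(candidate_prices)
print(f"negotiated bundle price: {p_star:.2f}")  # midpoint of cost and valuation here
```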
426

Additive Latent Variable (ALV) Modeling: Assessing Variation in Intervention Impact in Randomized Field Trials

Toyinbo, Peter Ayo 23 October 2009 (has links)
In order to personalize or tailor treatments to maximize impact among different subgroups, there is a need to model not only the main effects of an intervention but also the variation in intervention impact by baseline individual-level risk characteristics. To this end, a suitable statistical model allows researchers to answer a major research question: who benefits from, or is harmed by, this intervention program? Commonly in social and psychological research, the baseline risk may be unobservable and must be estimated from observed indicators that are measured with error; it may also have a nonlinear relationship with the outcome. Most existing nonlinear structural equation models (SEMs) developed to address such problems employ polynomial or fully parametric nonlinear functions to define the structural equations. These methods are limited because they require functional forms to be specified beforehand, and even when the models include higher-order polynomials there may be problems when the focus of interest relates to the function over its whole domain. This study develops a more flexible statistical modeling technique for assessing complex relationships between a proximal/distal outcome and (1) baseline characteristics measured with error and (2) the baseline-treatment interaction, such that the shapes of these relationships are data-driven and need not be determined a priori. In the ALV model structure, the nonlinear components of the regression equations are represented as a generalized additive model (GAM) or a generalized additive mixed-effects model (GAMM). Replication study results show that the ALV model estimates of the underlying relationships in the data are sufficiently close to the true pattern. The ALV modeling technique allows researchers to assess how an intervention affects individuals differently as a function of baseline risk that is itself measured with error, and to uncover complex relationships in the data that might otherwise be missed. Although the ALV approach is computationally intensive, it relieves users of the need to decide on functional forms before the model is run. It can be extended to examine complex nonlinearity between growth factors and distal outcomes in a longitudinal study.
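As a minimal illustration of the additive-model idea at the core of the ALV structure, here is a generic regression-spline fit in Python on simulated data; it treats baseline risk as observed (the ALV model treats it as latent and measured with error, which this sketch ignores) and is not the thesis's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def spline_basis(x, knots):
    """Truncated linear spline basis: [1, x, (x - k1)_+, ..., (x - kK)_+]."""
    cols = [np.ones_like(x), x] + [np.clip(x - k, 0, None) for k in knots]
    return np.column_stack(cols)

# Simulated trial: the outcome depends nonlinearly on baseline risk, and the
# treatment effect itself varies with baseline risk (treatment x risk interaction).
n = 500
risk = rng.uniform(-2, 2, n)                  # observed here; ALV treats it as latent
treat = rng.integers(0, 2, n)                 # randomized treatment indicator
y = np.sin(risk) + treat * (0.5 * risk**2 - 0.5) + rng.normal(0, 0.3, n)

knots = np.linspace(-1.5, 1.5, 7)
B = spline_basis(risk, knots)
X = np.column_stack([B, treat[:, None] * B])  # additive main effect + interaction
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Estimated treatment effect as a smooth, data-driven function of baseline risk
grid = np.linspace(-2, 2, 5)
effect = spline_basis(grid, knots) @ beta[B.shape[1]:]
print(np.round(effect, 2))   # varies with risk rather than being a constant
```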
427

Energy storage sizing for improved power supply availability during extreme events of a microgrid with renewable energy sources

Song, Junseok 11 October 2012 (has links)
A new Markov chain based energy storage model is proposed to evaluate the power supply availability of microgrids with renewable energy generation for critical loads. Since critical loads require above-average availability to ensure reliable operation during extreme events, e.g., natural disasters, renewable energy generation has been considered as a way to diversify sources. However, the low availability and high variability of renewable energy sources make it challenging to achieve the required availability for critical loads. Hence, adding energy storage systems to renewable energy generation becomes vital for ensuring that enough power can be generated during natural disasters. Although adding energy storage systems instantly increases power supply availability, there is another critical aspect that should be carefully considered: energy storage must be sized to meet a given availability target in order to avoid oversizing or undersizing capacity, two undesirable conditions that lead to increased system cost and inadequate availability, respectively. This dissertation develops a power supply availability framework for renewable energy generation in a given location and suggests the optimal size of energy storage for the availability required to power critical loads. In particular, a new Markov chain based energy storage model is presented to represent the energy states of the storage system, which provides an understanding of how the charge and discharge rates of the storage affect the system's power output. Practical applications of the model are exemplified using electric vehicles with photovoltaic roofs. Moreover, the minimal cut sets method is used to analyze the effects of microgrid architectures on the availability characteristics of the microgrid power supply in the presence of renewable energy sources and energy storage. In addition, design considerations for energy storage power electronics interfaces and a comparison of various energy storage methods are also presented.
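As a rough sketch of the Markov-chain storage idea (a discretized state-of-charge chain with hypothetical charge/discharge probabilities, not the dissertation's calibrated model), the following Python example estimates the long-run probability that the storage is empty.

```python
import numpy as np

def soc_transition_matrix(n_levels, p_charge, p_discharge):
    """Birth-death Markov chain over discretized state-of-charge levels.
    p_charge    : probability the renewable source charges one level up
    p_discharge : probability the load draws one level down
    """
    P = np.zeros((n_levels, n_levels))
    for i in range(n_levels):
        up = p_charge if i < n_levels - 1 else 0.0
        down = p_discharge if i > 0 else 0.0
        P[i, min(i + 1, n_levels - 1)] += up
        P[i, max(i - 1, 0)] += down
        P[i, i] += 1.0 - up - down
    return P

def stationary_distribution(P, n_iter=10_000):
    """Approximate the stationary distribution by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iter):
        pi = pi @ P
    return pi

P = soc_transition_matrix(n_levels=11, p_charge=0.30, p_discharge=0.25)
pi = stationary_distribution(P)
print("long-run probability the storage is empty:", round(pi[0], 4))
```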
428

A computational framework for the solution of infinite-dimensional Bayesian statistical inverse problems with application to global seismic inversion

Martin, James Robert, Ph. D. 18 September 2015 (has links)
Quantifying uncertainties in large-scale forward and inverse PDE simulations has emerged as a central challenge facing the field of computational science and engineering. The promise of modeling and simulation for prediction, design, and control cannot be fully realized unless uncertainties in models are rigorously quantified, since this uncertainty can potentially overwhelm the computed result. While statistical inverse problems can be solved today for smaller models with a handful of uncertain parameters, the task is computationally intractable with contemporary algorithms for complex systems characterized by large-scale simulations and high-dimensional parameter spaces. In this dissertation, I address issues regarding the theoretical formulation, numerical approximation, and algorithms for the solution of infinite-dimensional Bayesian statistical inverse problems, and apply the entire framework to a problem in global seismic wave propagation. Classical (deterministic) approaches to solving inverse problems attempt to recover the “best-fit” parameters that match given observation data, as measured in a particular metric. In the statistical inverse problem, we go one step further and return not only a point estimate of the best medium properties, but also a complete statistical description of the uncertain parameters. The result is a posterior probability distribution that describes our state of knowledge after learning from the available data and provides a complete description of parameter uncertainty. In this dissertation, a computational framework for such problems is described that wraps around existing forward solvers for a given physical problem, as long as they are appropriately equipped. A collection of tools, insights, and numerical methods can then be applied to solve the problem and interrogate the resulting posterior distribution, which describes our final state of knowledge. We demonstrate the framework with numerical examples, including inference of a heterogeneous compressional wavespeed field for a problem in global seismic wave propagation with 10⁶ parameters.
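As a small, generic illustration of the statistical inverse problem described above, here is a finite-dimensional linear-Gaussian example in Python in which the posterior is available in closed form; the dissertation's infinite-dimensional, nonlinear seismic setting is far more involved, and the forward operator below is a random stand-in rather than a PDE solve.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized linear forward model: d = A m + noise
n_param, n_obs = 20, 10
A = rng.normal(size=(n_obs, n_param))          # forward operator (stand-in for a PDE solve)
m_true = rng.normal(size=n_param)              # "true" medium parameters
noise_std = 0.1
d = A @ m_true + noise_std * rng.normal(size=n_obs)

# Gaussian prior N(0, gamma_pr^2 I) and Gaussian noise N(0, noise_std^2 I)
gamma_pr = 1.0
prior_prec = np.eye(n_param) / gamma_pr**2
noise_prec = np.eye(n_obs) / noise_std**2

# Closed-form Gaussian posterior: covariance and mean (the MAP point)
post_cov = np.linalg.inv(A.T @ noise_prec @ A + prior_prec)
post_mean = post_cov @ A.T @ noise_prec @ d

# The posterior gives both a point estimate and a full uncertainty description.
print("posterior mean (first 3):", np.round(post_mean[:3], 3))
print("pointwise posterior std (first 3):", np.round(np.sqrt(np.diag(post_cov))[:3], 3))
```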
429

On some special-purpose hidden Markov models / Einige Erweiterungen von Hidden Markov Modellen für spezielle Zwecke

Langrock, Roland 28 April 2011 (has links)
No description available.
430

Assessing the Effect of Prior Distribution Assumption on the Variance Parameters in Evaluating Bioequivalence Trials

Ujamaa, Dawud A. 02 August 2006 (has links)
Bioequivalence determines whether two drugs are alike. The three kinds of bioequivalence are Average, Population, and Individual Bioequivalence. These bioequivalence criteria can be evaluated using aggregate or disaggregate methods. Considerable work assessing bioequivalence in a frequentist framework exists, but the advantages of Bayesian methods for bioequivalence have been explored only recently. Variance parameters are essential to any of these existing Bayesian bioequivalence metrics. Usually, model parameters are assigned either informative or vague prior distributions. Bioequivalence inference may be sensitive to the prior distribution placed on the variances, and there have recently been questions about the routine use of inverse gamma priors for variance parameters. In this paper we examine the effect that changing the prior distribution of the variance parameters has on Bayesian models for assessing bioequivalence and the carry-over effect. We explore our method with real data sets from the FDA.
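As a small illustration of the kind of prior sensitivity at issue, the following Python sketch compares two "vague" inverse gamma priors on a variance in a conjugate normal model; the data are made up and the model is far simpler than the bioequivalence models examined in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Made-up within-subject differences (log scale), mean assumed known to be 0,
# so the variance has a conjugate inverse gamma posterior.
diffs = rng.normal(loc=0.0, scale=0.25, size=12)
n, ss = len(diffs), np.sum(diffs**2)

# Two common "vague" inverse gamma priors IG(a, b) for the variance
priors = {"IG(0.001, 0.001)": (0.001, 0.001), "IG(1, 1)": (1.0, 1.0)}

for name, (a, b) in priors.items():
    a_post, b_post = a + n / 2, b + ss / 2          # conjugate update
    post = stats.invgamma(a_post, scale=b_post)
    lo, hi = post.ppf([0.025, 0.975])
    print(f"{name}: posterior 95% interval for the variance = ({lo:.4f}, {hi:.4f})")
```

With only a dozen observations the two intervals differ noticeably, which is the sensitivity the paper investigates on real trial data.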
