
Investor sentiment and the mean-variance relationship: European evidence

Wang, Wenzhao 09 March 2020
This paper investigates the impact of investor sentiment on the mean-variance relationship in 14 European stock markets. Applying three approaches to defining investors’ neutrality and determining high- and low-sentiment periods, we find that individual investors’ increased presence and trading during high-sentiment periods undermine the risk-return tradeoff. More importantly, we report that investors’ optimism (pessimism) is determined more by their normal sentiment state, represented by the all-period average sentiment level, than by the neutrality value set in sentiment surveys.
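The empirical design in this entry and the two that follow is essentially the same regression run within different regimes: returns are regressed on a conditional-volatility estimate separately over high- and low-sentiment (or bullish/bearish, or overnight/intraday) periods, and the sign of the slope is compared across regimes. Below is a minimal sketch of that regime-split regression on simulated data, using an EWMA variance proxy and a random regime flag in place of the papers' survey-based sentiment measures and GARCH-type specifications; every number and name in it is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly market excess returns and a sentiment regime flag.
# In the papers the flag comes from sentiment surveys; here it is a
# hypothetical random indicator used purely for illustration.
T = 600
returns = rng.normal(0.005, 0.04, T)
high_sentiment = rng.random(T) > 0.5

# Conditional variance proxy: exponentially weighted moving variance.
lam = 0.94
cond_var = np.empty(T)
cond_var[0] = returns.var()
for t in range(1, T):
    cond_var[t] = lam * cond_var[t - 1] + (1 - lam) * returns[t - 1] ** 2

def mean_variance_slope(r, v):
    """OLS slope of r_t on v_t: the risk-return tradeoff coefficient."""
    X = np.column_stack([np.ones_like(v), v])
    beta, *_ = np.linalg.lstsq(X, r, rcond=None)
    return beta[1]

# A positive slope in low-sentiment periods alongside a flat or negative
# slope in high-sentiment periods is the pattern the papers document.
print("high-sentiment slope:",
      mean_variance_slope(returns[high_sentiment], cond_var[high_sentiment]))
print("low-sentiment slope: ",
      mean_variance_slope(returns[~high_sentiment], cond_var[~high_sentiment]))
```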

The mean-variance relation and the role of institutional investor sentiment

Wang, Wenzhao 09 March 2020
This paper investigates the role of institutional investor sentiment in the mean–variance relation. We find that market returns are negatively (positively) related to the market’s conditional volatility over bullish (bearish) periods. The evidence indicates that institutional investors, too, are sentiment traders.

The mean–variance relation: A 24-hour story

Wang, Wenzhao 07 October 2021
This paper investigates the mean-variance relation during different time periods within the trading day. We reveal a positive mean-variance relation when the stock market is closed (i.e., overnight), but this positive relation is distorted when the market is open (i.e., intraday). The evidence offers a new explanation for the weak risk-return tradeoff observed in stock markets.

Graphical assessment of the prediction capability of response surface designs

Giovannitti-Jensen, Ann January 1987
A response surface analysis is concerned with the exploration of a system in order to determine the behavior of the response of the system as levels of certain factors which influence the response are changed. It is often of particular interest to predict the response in some region of the allowable factor values and to find the optimal operating conditions of the system. In an experiment to search for the optimum response of a surface it is advantageous to predict the response with equal, or nearly equal, precision at all combinations of the levels of the variables which represent locations which are the same distance from the center of the experimental region. Measures of the quality of prediction at locations on the surface of a hypersphere are presented in this thesis. These measures are used to form a graphical method of assessing the overall prediction capability of an experimental design throughout the region of interest. Rotatable designs give equal variances of predicted values corresponding to locations on the same sphere. In this case, the center of the sphere coincides with the center of the rotatable design. However, there is a need for a method to quantify the prediction capability on spheres for non-rotatable designs. The spherical variance is a measure of the average prediction variance at locations on the surface of a sphere. The spherical variance obtained with a design provides an assessment of how well the response is being estimated on the average at locations which are the same distance from the region center. This thesis introduces two measures which describe the dispersion in the variances of the predicted responses at all locations on the surface of a sphere. These prediction variance dispersion (PVD) measures are used to evaluate the ability of a design to estimate the response with consistent precision at locations which are the same distance from the region center. The PVD measures are used in conjunction with the spherical variance to assess the prediction capability of a design. A plot of the spherical variance and the maximum and minimum prediction variances for locations on a sphere against the radius of the sphere gives a comprehensive picture of the behavior of the prediction variances throughout a region, and, hence, of the quality of the predicted responses, obtained with a particular design. Such plots are used to investigate and compare the prediction capabilities of certain response surface designs currently available to the researcher. The plots are also used to investigate the robustness of a design under adverse experimental conditions and to determine the effects of taking an additional experimental run on the quality of the predicted responses. / Ph. D.
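As a sketch of how such plots can be computed, the snippet below evaluates the scaled prediction variance N f(x)'(X'X)^{-1} f(x) on circles of increasing radius for a deliberately non-rotatable first-order design, reporting the average (the spherical variance) and the minimum and maximum values at each radius. The design and model are illustrative assumptions, not examples taken from the thesis.

```python
import numpy as np

# A deliberately non-rotatable first-order design in two factors.
D = np.array([[-1, -1], [1, -1], [-1, 1], [0, 0]], float)
N = len(D)
X = np.column_stack([np.ones(N), D])   # model terms: 1, x1, x2
XtX_inv = np.linalg.inv(X.T @ X)

def spv(x1, x2):
    """Scaled prediction variance N * f(x)'(X'X)^{-1} f(x)."""
    f = np.array([1.0, x1, x2])
    return N * f @ XtX_inv @ f

for r in (0.25, 0.5, 1.0, 1.5):
    theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
    values = np.array([spv(r * np.cos(t), r * np.sin(t)) for t in theta])
    # The average approximates the spherical variance; the min/max spread
    # is the dispersion a rotatable design would collapse to zero.
    print(f"r={r:.2f}  avg={values.mean():.3f}  "
          f"min={values.min():.3f}  max={values.max():.3f}")
```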

Design and analysis for a two level factorial experiment in the presence of dispersion effects

Mays, Darcy P. 10 October 2005
Standard response surface methodology experimental designs for estimating location models assume homogeneous variance throughout the design region. With heterogeneity of variance, however, these standard designs are no longer optimal. Using the D- and Q-optimality criteria, this dissertation proposes a two-stage experimental design procedure that gives more efficient designs than the standard ones when heterogeneous variance exists. Several multiple-variable location models, with and without interactions, are considered. For each, the first stage estimates the heterogeneous variance structure, and the second stage then augments the first stage to produce a D- or Q-optimal design for fitting the location model under the estimated variance structure. A potential instability of the variance estimates in the first stage, however, can lower the efficiency of the two-stage procedure. This problem can be addressed, and the efficiency of the procedure enhanced, if certain mild assumptions concerning the variance structure are made and formulated as a prior distribution to produce a Bayes estimator. With homogeneous variance, designs are analyzed using ordinary least squares; with heterogeneous variance, the correct analysis uses weighted least squares. This dissertation also examines the effects of analysis by weighted least squares and compares this approach to the proposed two-stage procedure. / Ph. D.
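For the analysis question raised at the end of the abstract, the sketch below contrasts ordinary and weighted least squares on simulated data with a known heteroscedastic variance structure; in the dissertation's setting the weights would instead come from first-stage variance estimates, and all values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# One factor, variance increasing across the design region (heteroscedastic).
x = np.linspace(-1, 1, 40)
sigma = 0.2 + 0.8 * (x + 1)      # assumed (known) standard-deviation structure
y = 1.0 + 2.0 * x + rng.normal(0, sigma)

X = np.column_stack([np.ones_like(x), x])

# OLS ignores the variance structure.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# WLS weights each run by the reciprocal variance, 1/sigma_i^2.
W = np.diag(1.0 / sigma**2)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print("OLS estimate:", beta_ols)
print("WLS estimate:", beta_wls)   # typically closer to (1.0, 2.0) on average
```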

A graphical approach for evaluating the potential impact of bias due to model misspecification in response surface designs

Vining, G. Geoffrey January 1988
The basic purpose of response surface analysis is to generate a relatively simple model to serve as an adequate approximation for a more complex phenomenon. This model then may be used for other purposes, for example, prediction or optimization. Since the proposed model is only an approximation, the analyst almost always faces the potential of bias due to model misspecification. The ultimate impact of this bias depends upon the choice both of the experimental design and of the region for conducting the experiment. This dissertation proposes a graphical approach for evaluating the impact of bias upon response surface designs. Essentially, it builds on the work of Giovannitti-Jensen (1987) and Giovannitti-Jensen and Myers (1988), who developed a graphical technique for displaying a design's prediction variance capabilities, and extends the concept: (1) to the prediction bias due to model misspecification; (2) to the prediction bias due to the presence of a single outlier; and (3) to a mean squared error of prediction. Several common first- and second-order response surface designs are evaluated through this approach. / Ph. D.
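A standard way to make this bias concrete is the alias matrix A = (X1'X1)^{-1} X1'X2, which maps omitted model terms into the fitted ones; the prediction bias at a point x is then f1(x)'A b2 - f2(x)'b2. The sketch below computes this quantity for a hypothetical 2^2 factorial whose fitted first-order model omits the interaction term; the interaction coefficient is an assumed value chosen purely for illustration.

```python
import numpy as np

# Design: 2^2 factorial; fitted model is first-order, truth adds x1*x2.
D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], float)
X1 = np.column_stack([np.ones(4), D])        # fitted terms: 1, x1, x2
X2 = (D[:, 0] * D[:, 1]).reshape(-1, 1)      # omitted term: x1*x2
A = np.linalg.inv(X1.T @ X1) @ X1.T @ X2     # alias matrix

beta2 = np.array([0.5])                      # hypothetical interaction effect

def prediction_bias(x):
    """Bias of the fitted-model prediction at x: f1'A b2 - f2'b2."""
    f1 = np.array([1.0, x[0], x[1]])
    f2 = np.array([x[0] * x[1]])
    return f1 @ A @ beta2 - f2 @ beta2

# For this orthogonal design A is zero, so the bias at each point comes
# entirely from the omitted interaction term itself: -x1*x2*beta2.
for x in [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]:
    print(x, prediction_bias(np.array(x)))
```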

VAR(p) models

Chukunyere, Amenan Christiane 31 July 2019
This thesis studies a family of methods for jointly modelling several time series. We use these methods to predict the behavior of five US time series and to highlight the dynamic links that might exist between them. To do this, we use the order-p vector autoregressive models proposed by Sims (1980), which are a multivariate generalization of the Box and Jenkins models. First, we define a set of concepts and statistical tools that will be useful for understanding the notions used later in the thesis. The models and the method of Box and Jenkins are then presented, and the method is applied to each of the five series to obtain univariate models. Finally, we present the VAR(p) models, fit them to a vector series whose components are the five aforementioned series, and discuss the added value of the multivariate analysis compared to the five univariate analyses.
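As a concrete illustration of the VAR(p) machinery, the sketch below simulates a bivariate VAR(1) and recovers its coefficients by least squares on stacked lags. The coefficient values are arbitrary assumptions, and a real analysis, like the one in the thesis, would also use information criteria to select the lag order p.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a bivariate VAR(1): y_t = c + A1 y_{t-1} + e_t.
A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
c = np.array([0.1, -0.2])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = c + A1 @ y[t - 1] + rng.normal(0, 0.1, 2)

def fit_var(y, p):
    """Least-squares estimation of a VAR(p) via stacked lag regressors."""
    T, k = y.shape
    Z = np.column_stack([np.ones(T - p)] +
                        [y[p - j - 1:T - j - 1] for j in range(p)])
    B, *_ = np.linalg.lstsq(Z, y[p:], rcond=None)
    return B  # row 0: intercept; rows 1.. : lag coefficient blocks (transposed)

B = fit_var(y, p=1)
print("estimated intercept:", B[0])
print("estimated A1:\n", B[1:3].T)
```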

Optimization Techniques for Multi-object Detection and Tracking on Live-cell Fluorescence Microscopy Images and Their Applications

Wang, Mengfan 24 July 2024
Fluorescence microscopy is a pivotal imaging technique for visualizing biological processes and has been extensively utilized in live-cell morphology analysis. Despite its utility, the associated object detection and tracking tasks still face challenges due to large data scales, inferior data quality, and insufficient annotations, leading to a reliance on adaptive thresholding. Current adaptive thresholding approaches have two significant limitations: first, they cannot handle the heteroscedasticity of image data well and produce biased outputs; second, they treat the frames of time-series imaging data independently and produce inconsistent detections over time. We introduce two novel optimization techniques to address these limitations and enhance detection and tracking results in live-cell imaging. The first, ConvexVST, is a convex optimization approach that transforms heteroscedastic data into homoscedastic data, making them more tractable for subsequent analysis. The second, Joint Thresholding, is a graph-based approach that finds the optimal adaptive thresholds while maintaining temporal consistency. Our methods demonstrate superior performance across various object detection and tracking tasks. Specifically, when applied to microglia imaging data, our techniques enable the acquisition of more complete cell morphology and more accurate detection of microglia tips. Furthermore, by integrating these techniques with existing frameworks, we propose an advanced pipeline for embryonic cell detection and tracking in light-sheet microscopy images that substantially outperforms state-of-the-art peer methods and sets a new benchmark in the field. / Doctor of Philosophy / Fluorescence microscopy is an important imaging tool for observing biological processes and is widely used to study live-cell structures and activities. However, detecting and tracking objects in these images can be difficult because of the large amount of data, poor image quality, and lack of accurate annotations. This leads to a reliance on basic image segmentation approaches, which try to distinguish foreground from background by setting intensity thresholds. These methods have two main problems: they do not handle varying noise in image data well, resulting in inaccurate outputs, and they analyze each frame in a sequence of images independently, causing inconsistencies over time. To solve these issues, we developed two new techniques that improve detection performance in live-cell imaging. The first, ConvexVST, makes the noise levels in image data more uniform, simplifying the subsequent analysis. The second, Joint Thresholding, finds the best intensity thresholds while maintaining consistency across frames over time. Our methods have shown significant improvements in detecting and tracking objects. For example, when applied to images of microglia (a type of brain cell), they provide more complete cell shapes and more accurate detection of cell structures. Additionally, by combining these techniques with existing frameworks, we created an advanced pipeline for detecting and tracking embryonic cells that outperforms current leading methods.
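ConvexVST itself is formulated as a convex program fitted to the camera's noise model; as a classical stand-in that shows what variance stabilization accomplishes, the sketch below applies the Anscombe transform to Poisson-distributed fluorescence counts and checks that the variance becomes roughly constant across intensity levels. This illustrates the general idea only and is not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Poisson-dominated fluorescence counts: variance grows with intensity,
# so a fixed threshold behaves differently in dim and bright regions.
intensity = np.array([5.0, 50.0, 500.0])
samples = rng.poisson(intensity, size=(10000, 3))

def anscombe(x):
    """Classical Anscombe transform: near-unit variance for Poisson data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

print("raw variances:        ", samples.var(axis=0))            # ~ intensity
print("stabilized variances: ", anscombe(samples).var(axis=0))  # all ~ 1
```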

Measuring the DRG classification system performance in the Czech Republic

Nový, Petr January 2016
No description available.

Numerical methods for homogenization: applications to random media

Costaouec, Ronan 23 November 2011
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. In practice, however, computing these properties is a difficult task, because it involves solving partial differential equations with stochastic coefficients that are additionally posed on the whole space. In this work, we address this difficulty in two different ways. The standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. The works of Part II focus on the case when the material can be seen as a small random perturbation of a periodic material. We then show, both numerically and theoretically, that computing the effective properties in this case is much less costly than in the general case.
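As a toy illustration of the variance reduction idea in Part I, consider the one-dimensional case, where the effective coefficient is the harmonic mean A* = 1/E[1/a] of the random coefficient. The sketch below estimates E[1/a] by plain Monte Carlo and by antithetic variates; the coefficient law is an arbitrary assumption, and the thesis itself treats multidimensional corrector problems rather than this closed-form case.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1D model: coefficient a(U) = 1 + 2*U with U ~ Uniform(0, 1).
# In 1D the effective coefficient is the harmonic mean, A* = 1 / E[1/a].
def inv_a(u):
    return 1.0 / (1.0 + 2.0 * u)

n = 10000
u = rng.random(n)

plain = inv_a(u)                           # standard Monte Carlo
anti = 0.5 * (inv_a(u) + inv_a(1.0 - u))   # antithetic pairs (U, 1-U)

# Note: each antithetic sample uses two evaluations of inv_a, so a fair
# cost comparison would halve n for the antithetic estimator.
print("plain MC:      mean", plain.mean(), " var", plain.var())
print("antithetic MC: mean", anti.mean(), " var", anti.var())
# Exact value: E[1/a] = (1/2) * ln(3) ~ 0.5493
```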
