591

Failure analysis of globe valve

Park, Kibin January 1996 (has links)
No description available.
592

Root Mean Square-Delay Spread Characteristics for Outdoor to Indoor Wireless Channels in the 5 GHz Band

Kurri, Prasada Reddy 26 July 2011 (has links)
No description available.
593

Bayesian inference on dynamics of individual and population hepatotoxicity via state space models

Li, Qianqiu 24 August 2005 (has links)
No description available.
594

Minimum disparity inference for discrete ranked set sampling data

Alexandridis, Roxana Antoanela 12 September 2005 (has links)
No description available.
595

Modeling The Output From Computer Experiments Having Quantitative And Qualitative Input Variables And Its Applications

Han, Gang 10 December 2008 (has links)
No description available.
596

Multidimensional Khintchine-Marstrand-type Problems

Easwaran, Hiranmoy 29 August 2012 (has links)
No description available.
597

Simulation & Analysis of Peer-to-Peer Network Quality for Measurement Scheduling : Online algorithms, Application for Network QoS Monitoring

Lindstedt, Gustaf January 2019 (has links)
With the growing dependency on Internet connectivity in our daily lives, monitoring connection quality to ensure a good quality of service has become increasingly important. The CheesePi project aims to build a platform for monitoring connection quality from the home user's perspective, and as peer-to-peer technologies become more prevalent, quality-of-service monitoring between peers grows in importance. This thesis analyses the problem of scheduling connection-quality measurements between peers in a network. A method for scheduling measurements is presented that uses statistical models of the individual links in the network, built from previous measurement data. The method applies the ADWIN1 adaptive-windowing algorithm over the models and assigns each link a priority based on the relative window sizes. Evaluated against a round-robin scheduler through simulation, the method is shown to provide better scheduling than round-robin in most cases, in terms of achieving the most "information gain" per measurement iteration. The results show that after a sudden change in a network link, the scheduler prioritises measurements for that link and therefore converges its view of the network to the new stable state more quickly than round-robin scheduling. The scheduling method was developed to be practically applicable to the CheesePi project and could be deployed in real systems running the CheesePi platform. The thesis also evaluates two online algorithms for the mean and variance with respect to how they react to changes in the data source from which the samples are taken.
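The abstract above mentions an evaluation of two online algorithms for the mean and variance without naming them here. Welford's algorithm is one standard choice for this kind of streaming computation; as a hedged illustration (not necessarily one of the two algorithms the thesis evaluated), it can be sketched as:

```python
class OnlineStats:
    """Welford's online algorithm: single-pass, numerically stable
    running mean and sample variance over a stream of measurements."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n          # incremental mean update
        self.m2 += delta * (x - self.mean)   # uses both old and new mean

    @property
    def variance(self):
        # Unbiased sample variance; undefined for fewer than 2 samples.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

Because each update touches only three scalars, such an estimator reacts to a change in the data source only gradually, which is precisely the motivation for pairing it with an adaptive-windowing scheme like ADWIN1.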
598

Implementation of mean-variance and tail optimization based portfolio choice on risky assets

Djehiche, Younes, Bröte, Erik January 2016 (has links)
An asset manager's goal is to provide a high return relative to the risk taken, and the manager thus faces the challenge of choosing an optimal portfolio. Many mathematical methods have been developed to achieve a good balance between these attributes, using different risk measures. In this thesis, we test a relatively simple and common approach, the Markowitz mean-variance method, and a more quantitatively demanding approach, the tail optimization method. Using an active portfolio based on data provided by the Swedish fund management company Enter Fonder, we implement these approaches and compare the results. We analyze how each method weighs the underlying assets in order to obtain an optimal portfolio.
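The Markowitz step described above admits a closed form in the simplest case. The sketch below computes the global minimum-variance portfolio; the covariance numbers are illustrative, not the Enter Fonder data, and the thesis's actual optimization may include constraints (e.g. no short-selling) not shown here.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights.

    Closed form: w = Σ^{-1} 1 / (1' Σ^{-1} 1), i.e. minimize w'Σw
    subject to the weights summing to one (short-selling allowed).
    """
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # Σ^{-1} 1 without forming the inverse
    return w / w.sum()              # normalize so weights sum to 1

# Illustrative 2-asset covariance matrix (annualized variances/covariance).
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
w = min_variance_weights(cov)  # tilts toward the lower-variance asset
```

Tail optimization, by contrast, minimizes a downside-risk measure such as CVaR and generally requires solving a linear or scenario-based program rather than a closed-form expression.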
599

Investment Behaviour of Canadian Life Insurance Companies: A Mean-Variance Approach

Krinsky, Itzhak 10 1900 (has links)
In recent years, considerable effort has been directed toward establishing the nature of the investment behaviour of life insurance companies. In this dissertation, an extended portfolio analysis model was developed for the simultaneous determination of the efficient composition of the insurance and investment activities of a life insurance company. This was done within a model that takes advantage of existing finance foundations and of the concepts and techniques of modern demand system analysis.

Unlike current models, which use quadratic programming techniques and focus on constructing efficient sets, we used a utility maximization approach. A two-parameter portfolio model was constructed utilizing elements of utility theory and of the theory of insurance. The model provides the proportions of assets held on the balance sheet as well as the liabilities used to raise the necessary capital.

The model has sufficient empirical content to yield hypotheses about life insurance portfolio behaviour and was therefore tested using appropriate econometric techniques. A comparative static analysis yielded elasticities of substitution between financial assets and liabilities. The estimation of these elasticities in the context of a flexible functional form model forms a central part of this dissertation. More specifically, by utilizing a mean-variance portfolio framework and a general Box-Cox utility function, we were able to model an insurance company's demand for assets and liabilities. On empirical grounds we found that, in general, the square root quadratic utility function best fits the data. We also evaluated the square root quadratic approximation by showing that, broadly speaking, it yields signs for the elasticities of substitution that are consistent with the theory.

A by-product of the model developed is the ability to compare stock and mutual life insurance companies. The common belief that mutual companies follow a riskier path in the way they conduct their business was supported by the results of this study.

The results obtained from the study are of significant importance, since life insurance companies have substantial obligations to millions of households in the economy. Furthermore, despite the extraordinary decline in the importance of the life insurance industry in the bond and mortgage markets during the sixties and the seventies, the industry is still a major supplier of funds to those markets. / Doctor of Philosophy (PhD)
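The utility maximization approach the abstract contrasts with efficient-set construction can be illustrated with the textbook mean-variance case. This is only a sketch of the generic optimum, not the dissertation's joint insurance/investment model or its Box-Cox utility specification; the risk-aversion parameter and return figures are assumptions.

```python
import numpy as np

def mv_utility_weights(mu, cov, gamma):
    """Unconstrained maximizer of mean-variance utility.

    Maximizes U(w) = w'mu - (gamma/2) w'Σw, whose first-order
    condition mu - gamma Σ w = 0 gives w* = Σ^{-1} mu / gamma.
    """
    return np.linalg.solve(cov, mu) / gamma

# Illustrative expected returns, covariance, and risk aversion.
mu = np.array([0.10, 0.05])
cov = np.diag([0.04, 0.09])
w = mv_utility_weights(mu, cov, gamma=2.0)
```

Comparative statics on such a first-order condition (differentiating w* with respect to returns and risk) is what yields the elasticities of substitution between assets that the abstract describes.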
600

The use of temporally aggregated data on detecting a structural change of a time series process

Lee, Bu Hyoung January 2016 (has links)
A time series process can be influenced by an interruptive event that starts at a certain time point, so a structural break in either the mean or the variance may occur between the periods before and after the event time. However, the traditional statistical tests for two independent samples, such as the t-test for a mean difference and the F-test for a variance difference, cannot be used directly to detect such structural breaks, because two independent random samples almost never exist within a single time series. As alternative methods, the likelihood ratio (LR) test for a mean change and the cumulative sum (CUSUM) of squares test for a variance change have been widely employed in the literature. Another point of interest is temporal aggregation of a time series. Most published time series data are temporally aggregated from original observations at a small time unit into cumulative records at a large time unit. However, it is known that temporal aggregation has substantial effects on process properties, because it transforms a high-frequency nonaggregate process into a low-frequency aggregate process. In this research, we investigate the effects of temporal aggregation on the LR test and the CUSUM test through the ARIMA model transformation. First, we derive the proper transformation of ARIMA model orders and parameters when a time series is temporally aggregated. For the LR test for a mean change, the test statistic depends on the model parameters and errors, which must be changed when an AR(p) process transforms, upon mth-order temporal aggregation, into an ARMA(P,Q) process. Using this property, we propose a modified LR test for aggregated series. Through Monte Carlo simulations and empirical examples, we show that aggregation shifts the null distribution of the modified LR test statistic to the left; hence, the test power increases as the order of aggregation increases.

For the CUSUM test for a variance change, we show that two aggregation terms appear in the test statistic and have negative effects on test results when an ARIMA(p,d,q) process transforms, upon mth-order temporal aggregation, into an ARIMA(P,d,Q) process. We then propose a modified CUSUM test that controls these terms, which are interpreted as the aggregation effects. Through Monte Carlo simulations and empirical examples, the modified CUSUM test shows better performance and higher power to detect a variance change in an aggregated time series than the original CUSUM test. / Statistics
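The unmodified CUSUM-of-squares test that the abstract builds on can be sketched as follows. This is the standard Inclán–Tiao form of the statistic, not the modified test proposed in the thesis, and the critical-value comparison is omitted.

```python
import numpy as np

def cusum_of_squares(x):
    """Inclán–Tiao CUSUM-of-squares statistic for a variance change.

    Computes C_k, the cumulative share of squared observations, and
    returns (sqrt(n/2) * max_k |C_k - k/n|, argmax k). Under constant
    variance C_k grows linearly in k, so |C_k - k/n| stays near zero;
    a variance break makes the statistic large near the break point.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = np.cumsum(x ** 2) / np.sum(x ** 2)  # cumulative share C_k
    d = c - np.arange(1, n + 1) / n          # deviation from linear growth
    k = int(np.argmax(np.abs(d)) + 1)        # candidate break point
    return np.sqrt(n / 2.0) * np.max(np.abs(d)), k
```

On a deterministic series whose level jumps mid-sample, the maximum deviation lands exactly at the jump; it is the two extra aggregation terms entering `c` under mth-order temporal aggregation that the thesis's modified test is designed to control.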
