1

Development of a Block Processing Carrier to Noise Ratio Estimator for the Global Positioning System

Sayre, Michelle Marie 10 December 2003 (has links)
No description available.
2

Extreme behavior and VaR of Short-term interest rate of Taiwan

Chiang, Ming-Chu 21 July 2008 (has links)
This study empirically analyzes the extreme behavior of changes in Taiwan's short-term interest rate and the impact of deregulation policies and financial turmoil on that behavior. A better knowledge of short-term interest rate properties, such as heavy tails, asymmetry, and uneven tail fatness between the right and left tails, provides insight into the extreme behavior of the short-term interest rate as well as a more accurate estimation of interest rate risk. The predictive performance of filtered and unfiltered VaR (Value at Risk) models is also examined to suggest suitable models for managing interest rate risk. By applying Extreme Value Theory (EVT), tail behavior is analyzed and tested, and VaR is calculated based on parametric and non-parametric EVT models. The empirical findings show that, first, the distribution of rate changes is heavy-tailed, indicating that actual risk would be underestimated under a normality assumption. Second, the unconditional distribution is consistent with heavier-tailed distributions such as an ARCH process or Student's t. Third, the right tail of the distribution of rate changes is significantly heavier than the left one, indicating that the probability and magnitude of rises in the rate could be higher than those of drops in the rate. Fourth, the amount of tail fatness in the distribution of rate changes increases after 1999, and the key factors behind the structural break in the tail index are the interest rate policies taken by the central bank of Taiwan rather than the deregulation policies in the money market. Fifth, based on the two break points found in the right- and left-tail indices, a long sample of CP rates should not be treated as a sample from a single distribution. Sixth, the dependence and heteroscedasticity of the data series should be taken into account when applying EVT to improve the accuracy of VaR forecasts. Finally, the EVT models predict VaR accurately before 2001, while the benchmark models, HS and GARCH, are generally superior to the EVT models after 2001. Among the EVT models, MRE and CHE are relatively consistent and reliable in VaR prediction.
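
The peaks-over-threshold machinery behind such EVT-based VaR estimates can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example, not the thesis's models or data: it fits a generalized Pareto distribution to exceedances over a high threshold of a simulated heavy-tailed series and inverts the tail formula for a right-tail VaR. The `evt_var` function, threshold choice, and simulated data are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import genpareto

def evt_var(changes, threshold_quantile=0.95, alpha=0.99):
    """Peaks-over-threshold VaR for the right tail of a series of changes.

    Fits a generalized Pareto distribution to exceedances over a high
    threshold u and inverts the tail estimator
        VaR_alpha = u + (beta/xi) * (((n/N_u) * (1 - alpha))**(-xi) - 1).
    Assumes the fitted shape parameter xi is positive (heavy right tail).
    """
    x = np.asarray(changes)
    u = np.quantile(x, threshold_quantile)        # high threshold u
    exceed = x[x > u] - u                         # exceedances over u
    n, n_u = len(x), len(exceed)
    xi, _, beta = genpareto.fit(exceed, floc=0)   # shape xi, scale beta
    return u + (beta / xi) * ((n / n_u * (1 - alpha)) ** (-xi) - 1)

# Illustrative use on simulated heavy-tailed rate changes (not CP-rate data).
rng = np.random.default_rng(0)
changes = rng.standard_t(df=4, size=5000) * 0.05
print(f"99% right-tail VaR: {evt_var(changes):.4f}")
```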
3

Estimateur neuronal de ratio pour l'inférence de la constante de Hubble à partir de lentilles gravitationnelles fortes

Campeau-Poirier, Ève 12 1900 (has links)
The two main methods to measure the Hubble constant, the current expansion rate of the Universe, find different values. One of them relies heavily on the cosmological model currently accepted to describe the cosmos, and the other on a direct measurement. The disagreement thus arouses suspicions about the existence of new physics outside this model. If another method, independent of the two in conflict, supported one of the two values, it would guide cosmologists' efforts to resolve the tension. Strong gravitational lensing is among the candidate methods. This phenomenon occurs when a light source aligns with a massive object along a telescope's line of sight. When crossing the curved space-time in the vicinity of the mass, the light deviates from its trajectory along several paths, resulting in a distorted and magnified image. In the case of a point light source, two or four images stand out clearly. If this source is also variable, each of its fluctuations appears at different moments on each image, because each path has a different length. The time delays between the image signals depend intimately on the Hubble constant. This approach faces many challenges. First, it requires several days for specialists to perform the Markov Chain Monte Carlo (MCMC) analysis that evaluates the parameters of a single lensing system at a time. With the thousands of lensing systems that the Rubin Observatory is expected to detect in the coming years, this approach is impractical. It also introduces simplifications that risk biasing the inference, which contravenes the objective of shedding light on the discrepancy between the Hubble constant measurements. This thesis presents a simulation-based inference strategy to address these issues. Several previous studies have accelerated lens modeling through machine learning. Our approach complements their efforts by training a neural ratio estimator to determine the distribution of the Hubble constant from lens-modeling products and time-delay measurements. The neural ratio estimator results agree with those of the traditional analysis on simple simulations, have acceptable statistical consistency, are unbiased, and are obtained significantly faster.
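
To make the neural ratio estimation idea concrete, here is a minimal, hypothetical sketch (in PyTorch) of the underlying technique: a classifier is trained to distinguish joint samples (x, H0) from shuffled marginal pairs, and its logit then approximates the log likelihood-to-evidence ratio used to build a posterior over the Hubble constant. The toy `simulate` function, the prior range, the network architecture, and all numbers are assumptions for illustration; the thesis's actual estimator operates on lens-modeling products and measured time delays.

```python
import torch
import torch.nn as nn

# Toy simulator: maps a Hubble-constant draw to a noisy "time delay" summary.
# This stands in for the lens-modeling products and delay measurements; the
# real mapping in the thesis is far more involved.
def simulate(h0):
    return 70.0 / h0 + 0.05 * torch.randn_like(h0)

prior = torch.distributions.Uniform(60.0, 80.0)

# Classifier over (x, H0) pairs; its logit approximates log p(x|H0) - log p(x).
classifier = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                           nn.Linear(64, 64), nn.ReLU(),
                           nn.Linear(64, 1))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    h0 = prior.sample((256, 1))
    x = simulate(h0)
    h0_marginal = h0[torch.randperm(256)]          # break the pairing
    joint = torch.cat([x, h0], dim=1)              # label 1: from p(x, H0)
    marginal = torch.cat([x, h0_marginal], dim=1)  # label 0: from p(x)p(H0)
    logits = classifier(torch.cat([joint, marginal]))
    labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# After training, an unnormalized posterior over H0 for an observed x is
# prior(H0) * exp(classifier([x, H0])), evaluated on a grid of H0 values.
```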
4

On unequal probability sampling designs

Grafström, Anton January 2010 (has links)
The main objective in sampling is to select a sample from a population in order to estimate some unknown population parameter, usually a total or a mean of some variable of interest. When the units in the population do not have the same probability of being included in a sample, it is called unequal probability sampling. The inclusion probabilities are usually chosen to be proportional to some auxiliary variable that is known for all units in the population. When unequal probability sampling is applicable, it generally gives much better estimates than sampling with equal probabilities. This thesis consists of six papers that treat unequal probability sampling from a finite population of units. A random sample is selected according to some specified random mechanism called the sampling design. For unequal probability sampling there exist many different sampling designs. The choice of sampling design is important since it determines the properties of the estimator that is used. The main focus of this thesis is on evaluating and comparing different designs. It is often preferable to select samples of a fixed size, and hence the focus is on such designs. It is also important that a design has a simple and efficient implementation in order to be used in practice by statisticians. Some effort has been made to improve the implementation of some designs. In Paper II, two new implementations are presented for the Sampford design. In general, a sampling design should also have a high level of randomization; a measure of the level of randomization is entropy. In Paper IV, eight designs are compared with respect to their entropy. A design called adjusted conditional Poisson has maximum entropy, but it is shown that several other designs are very close in terms of entropy. A specific situation called real-time sampling is treated in Paper III, where a new design called correlated Poisson sampling is evaluated. In real-time sampling the units pass the sampler one by one; since each unit passes only once, the sampler must decide immediately for each unit whether or not it should be sampled. The correlated Poisson design is shown to have much better properties than traditional methods such as Poisson sampling and systematic sampling.
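
As a concrete, hypothetical illustration of unequal probability sampling, the sketch below implements plain Poisson sampling with inclusion probabilities proportional to an auxiliary variable, together with the Horvitz-Thompson estimator of a total. It is a baseline design with random sample size, not one of the fixed-size designs (such as Sampford or adjusted conditional Poisson) studied in the thesis, and the function names and data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def inclusion_probs(aux, n):
    """Inclusion probabilities proportional to a known auxiliary variable,
    capped at 1 and iteratively rescaled so they sum to the expected
    sample size n (assumes n is well below the population size)."""
    pi = n * aux / aux.sum()
    while (pi > 1).any():
        pi = np.minimum(pi, 1.0)                # cap units that exceed 1
        free = pi < 1
        deficit = n - pi.sum()                  # mass removed by capping
        pi[free] += deficit * aux[free] / aux[free].sum()
    return pi

def poisson_sample(pi):
    """Poisson sampling: independent Bernoulli draws, random sample size."""
    return np.nonzero(rng.random(len(pi)) < pi)[0]

# Horvitz-Thompson estimate of the population total of y.
aux = rng.lognormal(size=200)                    # auxiliary variable, known for all units
y = 3.0 * aux + rng.normal(scale=0.5, size=200)  # study variable
pi = inclusion_probs(aux, n=30)
s = poisson_sample(pi)
print("HT estimate:", (y[s] / pi[s]).sum(), "true total:", y.sum())
```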
5

Evaluating the use of clock frequency ratio estimators in the playout from video distribution networks / Utvärdering av klockfrekvensratiosuppskattare i videoutspelning från ett distributionsnätverk

Myresten, Emil January 2023 (has links)
As traditional TV broadcasters use the Internet to transport video streams, they often employ third-party distribution networks to ensure that the Quality of Service of the packet stream remains high. In the last step of such a distribution network, a playout scheduler schedules the packets so that their intervals are as close as possible to the intervals with which they were initially sent by the source. This is done to minimize the packet delay variation experienced by the final destination. Because the source and the distribution network are not always synchronized to the same reference clock, reconstructing the packet intervals back to their initial values is subject to clock skew: the clocks run at different frequencies. In the presence of clock skew, each packet interval is reconstructed with a slight error, which accumulates throughout the packet stream. This thesis evaluates how clock frequency ratio estimators can be implemented as part of the playout scheduler, allowing it to better reconstruct the packet intervals in the face of clock skew. Two clock frequency ratio estimators presented in the literature are implemented as part of playout schedulers, and their use in the context of a video distribution network is evaluated and compared against other playout schedulers. In all, four of the considered playout schedulers employ clock frequency ratio estimation and four do not. The playout schedulers are tested on a test bed consisting of two unsynchronized computers, physically separated into a source and a destination connected via Ethernet, to ensure the presence of clock skew. The source generates a video stream, which is sent to the destination. The destination is responsible for packet interval reconstruction and for the data collection that allows comparison of the eight playout schedulers. Each playout scheduler is evaluated under three network scenarios, each with increasing amounts of packet delay variation added to the packet stream. The results show that the Cumulative Ratio Scaling with Warm-up scheduler, which employs a clock frequency ratio estimator based on accumulating inter-packet times, performs well under all three network scenarios. Its behaviour is predictable, and the frequency ratio estimate appears to converge towards the true clock frequency ratio as more packets arrive at the playout scheduler. While this playout scheduler is not perfect, its behaviour shows promise for future extension.
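
A rough sketch of the cumulative-ratio idea described above follows; it is a simplified, hypothetical rendering (class name, warm-up handling, and timestamp semantics are assumptions), not the thesis's implementation. The estimator divides elapsed receiver time by elapsed source time and uses that ratio to rescale each source-side packet interval when scheduling playout.

```python
class CumulativeRatioScheduler:
    """Playout sketch: estimates the receiver/source clock-frequency ratio
    from accumulated send and arrival times, then rescales each source-side
    packet interval before scheduling the next playout instant."""

    def __init__(self, warmup=50):
        self.warmup = warmup          # packets to wait before trusting the ratio
        self.first_send = None
        self.first_arrival = None
        self.count = 0
        self.ratio = 1.0              # assume equal clock rates until estimated
        self.prev_send = None
        self.next_playout = None

    def on_packet(self, send_ts, arrival_ts):
        """send_ts: source timestamp (e.g. carried in the packet header);
        arrival_ts: local receiver clock. Returns the scheduled playout time."""
        if self.first_send is None:
            self.first_send, self.first_arrival = send_ts, arrival_ts
            self.prev_send = send_ts
            self.next_playout = arrival_ts
            return self.next_playout
        self.count += 1
        if self.count >= self.warmup:
            # Cumulative estimate: elapsed local time over elapsed source time.
            self.ratio = (arrival_ts - self.first_arrival) / (send_ts - self.first_send)
        interval = (send_ts - self.prev_send) * self.ratio   # reconstructed gap
        self.prev_send = send_ts
        self.next_playout += interval
        return self.next_playout

# Example: feed (source-timestamp, local-arrival-time) pairs as packets arrive.
# sched = CumulativeRatioScheduler()
# playout_time = sched.on_packet(send_ts, arrival_ts)
```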
