151

Adaptive filtering for maritime target tracking from an airborne radar

Zimmer, Loïc. January 2018.
Maritime target tracking from an airborne radar faces many issues due to the features of the environment, the targets to be tracked and the movement of the radar platform. A single tracking algorithm is therefore not always able to reach the best possible performance in every situation it encounters; it needs to adapt itself to the environment and to the observed targets in order to remain as efficient as possible. Adaptability is thus a key issue in radar tracking. Several implementations of Bayesian estimation theory, commonly called filters, have been used in the literature to estimate target trajectories as precisely as possible. Depending on the situation and the assumptions considered, some of them are expected to perform better than others. This thesis looks deeper into the tracking techniques found in the literature and compares them in order to define more precisely the advantages of each over the others, which makes it possible to choose the method most likely to provide the best performance in a given situation. In particular, the nonlinear conversion between the Cartesian coordinates in which the state vector is defined and the spherical coordinates used for the measurements is investigated. A measure of nonlinearity is introduced, studied and used to compare the extended Kalman filter and the particle filter. The size of the detected maritime targets is a special feature that makes it possible to draw a maneuverability-based classification, which in turn enables the tracking technique to be adapted. Joint tracking and classification (JTC) has already been described in the literature with a specific measurement model. This thesis makes this model more realistic by using a random distribution of the reflection point on the target's shape.
The tracking method is modified to take this new measurement model into account and simulations are run. The modified JTC algorithm proves to be more efficient than the JTC structure presented in the literature. Finally, this thesis shows that nonlinearity is a paramount issue that must be considered when implementing an efficient self-adaptable radar tracking algorithm, especially for extended targets.
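The Cartesian-to-spherical measurement conversion that the abstract highlights can be sketched in a few lines. The helper names below are illustrative, not the thesis code; the extended Kalman filter linearises exactly this kind of conversion through its Jacobian:

```python
import numpy as np

def cartesian_to_spherical(p):
    """Convert a Cartesian position [x, y, z] to the spherical
    quantities a radar measures: range, azimuth, elevation."""
    x, y, z = p
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)
    el = np.arcsin(z / r)
    return np.array([r, az, el])

def conversion_jacobian(p, eps=1e-6):
    """Numerical (central-difference) Jacobian of the conversion;
    an EKF uses such a Jacobian to linearise the measurement model."""
    J = np.zeros((3, 3))
    for j in range(3):
        dp = np.zeros(3)
        dp[j] = eps
        J[:, j] = (cartesian_to_spherical(p + dp)
                   - cartesian_to_spherical(p - dp)) / (2 * eps)
    return J
```

The nonlinearity of this mapping grows as the target approaches the radar, which is one reason a particle filter can outperform the EKF in some geometries.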
152

Map-aided localization for autonomous driving using a particle filter

Eriksson, Simon. January 2020.
Vehicles losing their GPS signal is a considerable issue for autonomous vehicles and can be a danger to people in their vicinity. To circumvent this issue, a particle filter localization technique using pre-generated offline Open Street Map (OSM) maps was investigated in a software simulation of Scania's heavy-duty trucks. The localization technique runs in real time and provides a way to localize the vehicle safely if the starting position is known. Although access to global localization was limited, the particle filter still succeeded in localizing the vehicle in the vicinity of the correct road segment by creating a graph of the map information and matching the vehicle's sensor-data trajectory against it. The mean error of the particle filter localization technique in optimal conditions is 16 m, which is 20% less than an optimally tuned dead-reckoning solution and about 50% larger than that of a Global Positioning System. The final product shows potential for expansion but requires more investigation before real-world deployment.
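A bootstrap particle filter of the kind the abstract describes can be sketched minimally in one dimension. This is illustrative only: the real system matches trajectories against an OSM road graph, whereas here the "map" is just noisy position fixes, and all names and noise levels are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_1d(controls, measurements, n=500,
                       motion_noise=0.5, meas_noise=2.0):
    """Minimal bootstrap particle filter for 1-D position with a
    known starting position (here 0), odometry inputs `controls`,
    and noisy position fixes `measurements`."""
    particles = np.zeros(n)
    for u, z in zip(controls, measurements):
        # Predict: apply odometry with additive motion noise.
        particles = particles + u + rng.normal(0.0, motion_noise, n)
        # Update: weight particles by the Gaussian measurement likelihood.
        w = np.exp(-0.5 * ((z - particles) / meas_noise) ** 2)
        w /= w.sum()
        # Systematic resampling concentrates particles on likely states.
        pointers = (np.arange(n) + rng.random()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(w), pointers), n - 1)
        particles = particles[idx]
    return particles.mean()

# Straight drive: ten unit steps with position fixes at the true position.
est = particle_filter_1d([1.0] * 10, [float(i + 1) for i in range(10)])
```

The posterior mean `est` ends up near the true final position of 10; the known start is what lets the filter get by without global localization, as in the thesis.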
153

DeePMOS: Deep Posterior Mean-Opinion-Score for Speech Quality Assessment : DNN-based MOS Prediction Using a Posterior / DeePMOS: Deep Posterior Mean-Opinion-Score för talkvalitetsbedömning : DNN-baserad MOS-prediktion med hjälp av en posterior

Liang, Xinyu. January 2024.
This project focuses on deep neural network (DNN)-based non-intrusive speech quality assessment, specifically the challenge of predicting the mean opinion score (MOS) with interpretable posterior distributions. The conventional approach of providing a single point estimate of the MOS lacks interpretability and does not capture the uncertainty inherent in subjective assessments. This thesis introduces DeePMOS, a novel framework capable of producing MOS predictions in the form of posterior distributions, offering a more nuanced and understandable representation of speech quality. DeePMOS adopts a CNN-BLSTM architecture with multiple prediction heads to model Gaussian and Beta posterior distributions. For robust training, we use a combination of maximum-likelihood learning, stochastic gradient noise, and a student-teacher learning setup to handle limited and noisy training data. Results showcase DeePMOS's competitive performance, with DeePMOS-B in particular achieving state-of-the-art utterance-level performance. The significance lies in providing accurate predictions along with a measure of confidence, enhancing transparency and reliability, which opens avenues for application in domains such as telecommunications and audio-processing systems. Future work could explore additional posterior distributions, evaluate the model on high-quality datasets, and incorporate listener-dependent scores.
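The maximum-likelihood training of a Gaussian posterior head can be illustrated with the per-sample objective such a head minimises. This is a sketch of the standard Gaussian negative log-likelihood, not the DeePMOS code; the head is assumed to output a mean and a log standard deviation:

```python
import numpy as np

def gaussian_nll(mu, log_sigma, target):
    """Negative log-likelihood of a target MOS under a Gaussian
    posterior N(mu, sigma^2) predicted by the network head.
    Confident (small-sigma) but wrong predictions are penalised
    heavily, which is what makes the predicted sigma a usable
    measure of confidence."""
    sigma = np.exp(log_sigma)
    return (0.5 * np.log(2 * np.pi) + log_sigma
            + 0.5 * ((target - mu) / sigma) ** 2)
```

Averaged over a batch and backpropagated, this loss trains the head to widen its posterior exactly where the subjective ratings are noisy.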
154

Essays on oil price fluctuations and macroeconomic activity

Mètoiolè Somé, Dommèbèiwin Juste. 08 1900.
In this thesis, I am interested in the effects of fluctuations in oil prices on macroeconomic activity, depending on the underlying cause of these fluctuations. The economic models used in this thesis include Dynamic Stochastic General Equilibrium (DSGE) models and Vector Autoregressive (VAR) models. Several studies have examined the effects of oil price fluctuations on the main macroeconomic variables, but very few of these studies have explicitly linked the effects of oil price fluctuations to their origin. Yet it is widely accepted in more recent studies that oil price increases may have very different effects depending on the underlying cause of the increase.
My thesis, structured in three chapters, focuses on the sources of fluctuations in oil prices and their impact on macroeconomic activity in general, and on the Canadian economy in particular. The first chapter investigates how oil supply shocks, aggregate demand shocks, and precautionary oil demand shocks affect Canada's economy, within an estimated Dynamic Stochastic General Equilibrium (DSGE) model. The estimation is conducted using Bayesian methods, with Canadian quarterly data from 1983Q1 to 2010Q4. The results suggest that the dynamic effects of oil price shocks on Canadian macroeconomic variables vary according to their sources. In particular, a 10% increase in the real price of oil driven by positive foreign aggregate demand shocks has a significant positive effect of about 0.4% on Canada's real GDP upon impact, and the effect remains positive over time. In contrast, an increase in the real price of oil driven by negative foreign oil supply shocks or by positive precautionary oil demand shocks has an insignificant effect on Canada's real GDP upon impact but causes a slightly significant decline afterwards. The intuition is that a positive innovation in aggregate demand tends to increase the demand for Canada's overall exports, whereas oil supply disruptions in foreign countries or positive precautionary oil demand shocks increase the uncertainty about future oil prices, which leads firms to postpone irreversible investment expenditures and tends to reduce Canada's real GDP. Furthermore, among the identified oil shocks, foreign aggregate demand shocks have been relatively more important in explaining the variations of most Canadian macroeconomic variables over the estimation period. The second chapter examines the links between oil demand and supply shocks and labor market adjustments in Canadian manufacturing industries using a panel structural VAR model. The model is estimated with disaggregated annual data at the industry level from 1975 to 2008. The results show that a positive aggregate demand shock increases both labor demand and the price of labor over a 20-year period. A negative oil supply shock has a relatively small negative effect upon impact, but the effect turns positive after the first year. In contrast, a positive precautionary oil demand shock has a negative impact at all horizons. The paper also examines how the responses to the different types of oil shocks vary from industry to industry; the results suggest that industries with higher net trade exposure or oil intensity are more vulnerable to oil price increases driven by oil supply shocks and aggregate demand shocks. The third chapter examines the welfare implications of introducing competitive storage on the global oil market, using a three-country DSGE model with two oil-importing countries and one oil-exporting country. The welfare gains are measured by the consumption compensating variation under two alternative monetary policy rules. The main results indicate that the introduction of oil storage has negative welfare effects for each of the two oil-importing countries and positive welfare effects for the oil-exporting country, whatever the monetary policy rule. Including the exchange rate depreciation in the monetary policy rules slightly reduces the welfare costs for both oil-importing countries. Finally, the magnitude of the welfare effects depends on the steady-state level of oil storage and is mainly driven by oil storage shocks.
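The dynamic responses the chapters report are impulse response functions of VAR-type models. For a VAR(1), the response at horizon h to a shock e is simply A^h e, which a short sketch makes concrete (the coefficient matrix here is made up for illustration, not an estimate from the thesis):

```python
import numpy as np

# Hypothetical VAR(1) coefficient matrix for two variables
# (e.g. an oil-price variable and real GDP); values are invented.
A = np.array([[0.5, 0.2],
              [0.1, 0.7]])

def impulse_response(A, shock, horizon):
    """Impulse responses of y_t = A @ y_{t-1} + e_t: the response
    h periods after a one-off shock is A applied h times to it."""
    irf = [np.asarray(shock, dtype=float)]
    for _ in range(horizon):
        irf.append(A @ irf[-1])
    return np.array(irf)

# Unit shock to the first variable, traced over 20 periods.
irf = impulse_response(A, [1.0, 0.0], 20)
```

Because the eigenvalues of this A lie inside the unit circle, the responses die out, which is the stationarity underlying statements like "the effect remains positive over time" but eventually fades.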
155

Hazard functions and macroeconomic dynamics

Yao, Fang. 24 January 2011.
The Calvo assumption (Calvo, 1983) is widely used in the macroeconomic literature to model market frictions that limit the ability of economic agents to re-optimize their control variables.
In spite of its virtues, the Calvo assumption also implies singular adjustment behavior at the firm level as well as a restrictive aggregation mechanism for the whole economy. In this study, I examine the implications of the Calvo assumption for macroeconomic dynamics. To do so, I extend the Calvo assumption to a more general case based on the concept of the statistical hazard function. Two applications of this approach are studied in the DSGE framework. In the first essay, I apply the approach to a New Keynesian model and demonstrate that the tractability gained from the Calvo pricing assumption is costly in terms of inflation dynamics. The second essay estimates the aggregate price reset hazard function using the theoretical framework constructed in the first essay, and shows that the constant hazard function implied by the Calvo assumption is strongly rejected by the aggregate data. In the third essay, I further explore the implications of the empirically based hazard function for inflation persistence and monetary policy. I find that the empirically plausible aggregate price reset hazard function can generate simulated data consistent with the inflation gap persistence found in US CPI data. Based on these results, I conclude that the price reset hazard function plays a crucial role in generating inflation dynamics. The last essay applies the same modeling approach to an RBC model with employment rigidity. I find that, when a more general stochastic adjustment process is introduced, the employment dynamics vary with a parameter that determines the monotonic property of the hazard function: the volatility of employment is increasing, but its persistence is decreasing, in the value of the parameter.
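The relation between a hazard function and price-spell durations can be sketched directly: given a per-period reset hazard h_j, the probability that a price survives past period k is the product of (1 - h_j) over the elapsed periods, and the Calvo assumption is the special case of a constant hazard. The numbers below are hypothetical, chosen only to contrast the two shapes:

```python
import numpy as np

def survival(hazard):
    """Survival function implied by a per-period reset hazard:
    S_0 = 1 and S_k = prod_{j<k} (1 - h_j)."""
    h = np.asarray(hazard, dtype=float)
    return np.concatenate(([1.0], np.cumprod(1.0 - h)))

calvo = survival([0.25] * 8)                   # constant hazard (Calvo)
rising = survival(np.linspace(0.05, 0.6, 8))   # increasing hazard
```

Under the constant hazard, survival decays geometrically regardless of a price's age; under the increasing hazard, old prices are reset with higher probability, which is the kind of departure from Calvo the estimated hazard function captures.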
156

Advanced signal processing techniques for multi-target tracking

Daniyan, Abdullahi. January 2018.
The multi-target tracking problem essentially involves the recursive joint estimation of the states of an unknown and time-varying number of targets present in a tracking scene, given a series of observations. The problem is made more challenging because the sequence of observations is noisy and can be corrupted by missed detections and false alarms/clutter, and because the detected observations are indistinguishable from clutter. Whether the targets of interest are point or extended (in terms of spatial extent) poses further technical challenges. The random finite set approach provides an elegant and rigorous framework for handling the multi-target tracking problem. In a random finite set formulation, both the multi-target state and the multi-target observation are modelled as finite-set-valued random variables, that is, random variables that are random both in the number of elements and in the values of the elements themselves. Compared with other approaches, the random finite set approach has the desirable characteristic of being free of explicit data association prior to tracking, and a framework known as finite set statistics is available for dealing with random finite sets. In this thesis, advanced signal processing techniques are employed to enhance existing, and develop new, random finite set based multi-target tracking algorithms for both point and extended targets, with the aim of improving tracking performance in cluttered environments. To this end, firstly, a new and efficient Kalman-gain-aided sequential Monte Carlo probability hypothesis density (KG-SMC-PHD) filter and a cardinalised particle probability hypothesis density (KG-SMC-CPHD) filter are proposed.
These filters employ the Kalman-gain approach during the weight update to correct predicted particle states by minimising the mean square error between the estimated measurement and the actual measurement received at a given time, in order to arrive at a more accurate posterior. The technique identifies and selects those particles belonging to a particular target from a given PHD for state correction during weight computation. The proposed KG-SMC-CPHD filter provides a better estimate of the number of targets and, besides the improved tracking accuracy, requires fewer particles. Simulation results confirm the improved tracking performance under several evaluation measures. Secondly, the KG-SMC-(C)PHD filters are particle filter (PF) based and, as with PFs, they require a process known as resampling to avoid the problem of degeneracy. This thesis proposes a new resampling scheme to address a problem with the systematic resampling method, which has a high tendency to resample very low-weight particles, especially when a large number of resampled particles is required, which in turn affects state estimation. Thirdly, the KG-SMC-(C)PHD filters proposed in this thesis perform filtering rather than tracking; that is, they provide only point estimates of target states and do not connect estimates of target trajectories from one time step to the next. A new post-processing step using game theory, named the GTDA method, is proposed as a solution to this filtering-tracking problem. The method is employed in the KG-SMC-(C)PHD filter as a post-processing technique and is evaluated using both simulated data and real data obtained with the NI-USRP software-defined radio platform in a passive bistatic radar system. Lastly, a new technique for the joint tracking and labelling of multiple extended targets is proposed.
To achieve multiple extended target tracking with this technique, models for the target measurement rate, the kinematic component and the target extension are defined and jointly propagated in time under the generalised labelled multi-Bernoulli (GLMB) filter framework, a random finite set based filter. In particular, a Poisson mixture variational Bayesian (PMVB) model is developed to simultaneously estimate the measurement rates of multiple extended targets, while the target extension is modelled using B-splines. The proposed method is evaluated with various performance metrics to demonstrate its effectiveness in tracking multiple extended targets.
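The systematic resampling method whose weakness the thesis targets can be sketched in a few lines. This is the standard baseline scheme, not the thesis's proposed replacement: one uniform offset generates n evenly spaced pointers into the cumulative distribution of the weights:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Standard systematic resampling: draw a single uniform offset
    and take n evenly spaced pointers through the CDF of the
    (normalised) weights, returning the indices of the survivors."""
    n = len(weights)
    cdf = np.cumsum(weights)
    pointers = (np.arange(n) + rng.random()) / n
    # Clamp guards against the CDF summing to slightly under 1.0.
    return np.minimum(np.searchsorted(cdf, pointers), n - 1)

rng = np.random.default_rng(1)
w = np.array([0.7, 0.1, 0.1, 0.1])
idx = systematic_resample(w, rng)
```

A particle with weight 0.7 receives two or three of the four slots here, as expected; the failure mode the thesis addresses appears when many resampled particles are requested and the evenly spaced pointers inevitably land on runs of very low-weight particles.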
157

伴隨估計風險時的動態資產配置 / Dynamic asset allocation with estimation risk

湯美玲, Tang, Mei Ling. Unknown Date.
This dissertation consists of two essays on dynamic asset allocation, dealing with estimation risk under different uncertainties in the mean-variance framework. The first essay concerns the estimation errors that arise from disregarding uncertain inflation, given the need to estimate high-dimensional input parameters for portfolio optimization. The study presents simplified and valid criteria, referred to as the EGP-IMG model, based on the multi-group framework and capable of pricing inflation risk in a world of uncertainty. Empirical studies show that the proposed model provides a smart way of picking worldwide ETFs and serves to reduce the cost and time of constructing a global portfolio when facing a large number of investment products. The effect of Bayesian estimation on mitigating estimation risk when the decision maker relies on historical sample moments for the input parameters is examined as well. The results indicate that portfolios implementing the Stein estimation and shrinkage estimators outperform those applying the historical sample estimators.
This implicitly demonstrates that obtaining stable estimates of means and covariances matters more in the MV framework than obtaining the newest up-to-date parameter estimates for improving portfolio performance. Although short-sales constraints might intuitively be expected to hurt, in practice they help lift portfolio performance when the estimates of the return moments are volatile. The second essay addresses the difficulty that the probability distribution of a portfolio's returns may not have finite moments in a lognormal-securities market, which makes it hard to obtain closed-form solutions for the optimal portfolio under the mean-variance framework. In a lognormal-securities market, this study systematically derives a simple optimization rule that accounts for estimation risk, obtained by means of asymptotic properties when short sales are not allowed, and shows how to apply it to asset allocation in practice to construct a global portfolio. A numerical example specifies the detailed procedure and shows that the optimal portfolio under estimation risk is not equivalent to the one obtained by ignoring estimation risk. In addition, empirical simulations show that portfolios based on the proposed simple rule deliver better out-of-sample performance than the benchmarks.
158

Estimación óptima de secuencias caóticas con aplicación en comunicaciones

Luengo García, David. 23 November 2006.
This thesis studies the optimal estimation of chaotic signals generated by iterating one-dimensional maps and contaminated by additive white Gaussian noise, from the point of view of the two most common frameworks in statistical inference: maximum likelihood (ML) and Bayesian. Due to the high computational cost of the optimum estimators, several suboptimal but computationally efficient estimators are proposed that attain performance similar to the optimum ones. Additionally, the estimation of the parameters of a chaotic map is analysed, exploiting the known relation between consecutive samples of the chaotic sequence. Finally, the application of the developed estimators to the design of receivers for two different chaotic communication schemes, chaotic switching and symbolic (chaotic) coding, is considered.
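The ML estimation problem the abstract describes can be illustrated with the logistic map: under additive white Gaussian noise, the ML estimate of the sequence is the trajectory closest in squared error to the observations. The brute-force grid search below is a deliberately naive stand-in for the efficient estimators developed in the thesis, and all names and values are illustrative:

```python
import numpy as np

def logistic_trajectory(x0, n):
    """Iterate the chaotic logistic map x_{k+1} = 4 x_k (1 - x_k)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

def ml_estimate_x0(obs, grid=10000):
    """Under AWGN, the ML estimate of the initial condition x0
    minimises the squared error between the observed sequence and
    the noiseless trajectory generated from x0."""
    cands = np.arange(1, grid) / grid
    errs = [np.sum((logistic_trajectory(c, len(obs)) - obs) ** 2)
            for c in cands]
    return cands[int(np.argmin(errs))]

rng = np.random.default_rng(2)
true_x0 = 0.3517
obs = logistic_trajectory(true_x0, 6) + rng.normal(0.0, 0.01, 6)
est = ml_estimate_x0(obs)
```

The sensitivity to initial conditions is what makes this estimation hard: nearby candidates diverge within a few iterations, so the error surface is highly irregular, which is precisely why efficient suboptimal estimators are worth developing.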
159

資訊檢索之學術智慧 / Research Intelligence Involving Information Retrieval

杜逸寧, Tu, Yi-Ning Unknown Date (has links)
This research seeks to identify emerging topics for researchers and to pinpoint research intelligence in academic papers. It reveals the connection between the topics investigated in conference papers and those in journal papers, which spares researchers much of the time and effort of scanning the full literature. To detect emerging research topics, the study uses Bayesian estimation to estimate the impact that authors and publications may have on a topic, and discovers candidate emerging topics from the combination of impactful authors and publications.
Finally, the research develops measurement tools that assess the research potential of these candidate topics in order to identify the emerging ones. A large set of papers on data mining and information retrieval was selected from well-known databases, and the analysis shows that the topics covered by conference papers in a given year often lead to similar topics in journal papers in the subsequent year, and vice versa. The study also combines several existing algorithms into a new detection procedure that helps researchers spot new trends and extract academic intelligence from conferences and journals. Bayesian estimation and citation analysis are used to construct the prior distribution and likelihood function for the impact of authors and publications on a topic, since topics published by impactful authors and publications attract more attention and are more valuable than others; researchers can then assess the potential of the candidate emerging topics. Although the recommended topics narrow the search space, some of them may be so popular that all of the impactful authors and publications already discuss them, so measurement tools or indices are needed. Existing methods either focus only on the frequency of subjects, ignoring their novelty, which is critical and goes beyond frequency analysis, or focus on novelty alone without considering a topic's potential. Moreover, a method based solely on the published-frequency curve can confirm a topic's importance only after the topic has matured, making it a lagging indicator. This research therefore proposes a set of new indices for emerging-topic detection: the novelty index (NI) and the published volume index (PVI). These indices are then used to determine the detection point (DP) of emerging topics.
The detection point (DP) is not the exact time at which a topic starts to emerge; rather, it marks the point in the topic's life cycle at which its research potential, in both novelty and popularity, is highest. Unlike the absolute-frequency method, which finds the exact emerging period of a topic, the PVI uses accumulated relative frequency to detect the moment of highest research potential in the life cycle. Following the detection points, their intersection decides the worthiness of a new topic. Readers following the algorithms presented in this thesis will be able to determine the novelty and life span of an emerging topic in their own field. The proposed methods improve on the limitations of the impact factor proposed by ISI; in addition, they use the impact of authors and publications on a topic to assess a paper's potential before it has accumulated any citations, overcoming the limitation of Google Scholar's approach, which can confirm a paper's importance only after it has been retrieved many times. We suggest that the topic-oriented perspective of these methods can genuinely help researchers find valuable topics and papers.
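One way to make the Bayesian-estimation idea concrete is to model an author's (or publication's) impact as the probability that one of their papers attracts citations, with a conjugate Beta prior updated by observed counts. The Beta-Binomial model and all numbers below are illustrative assumptions, not the prior and likelihood actually constructed in the thesis.

```python
def posterior_impact(cited, published, alpha=1.0, beta=1.0):
    """Posterior mean of a Beta(alpha, beta) prior on citation probability
    after observing `cited` successes out of `published` papers
    (standard Beta-Binomial conjugate update)."""
    return (alpha + cited) / (alpha + beta + published)

# Hypothetical counts: author A has 30 papers, 24 of them cited;
# author B has only 4 papers, all cited. The prior tempers small samples.
impact_a = posterior_impact(24, 30)
impact_b = posterior_impact(4, 4)
```

The prior shrinks small-sample estimates toward 1/2, so a prolific author with a long citation record and a newcomer with a few lucky papers are not treated identically, which is the kind of regularisation a lagging frequency count cannot provide.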
160

Estimation of a class of nonlinear time series models.

Sando, Simon Andrew January 2004 (has links)
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitude, observed in additive noise, is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and a number of estimation schemes are available. The fundamental difficulty in estimating the parameters of these signals is their nonlinear character, which leads to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well-known problem of the unobservability of the true phase curve in noise. The currently most popular methods involve phase differencing followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal-to-noise ratios, their performance worsens at low signal-to-noise ratios, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that most methods estimate the phase coefficients sequentially: the highest-order parameter is estimated first, its contribution is removed by demodulation, and the same procedure is applied to the next parameter, and so on. This is problematic because errors in estimating the high-order parameters impair the ability to estimate the lower-order parameters correctly. As a result, statistical analysis of the parameter estimates is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by developing full-parameter iterative refinement techniques: given a possibly biased initial estimate of the phase coefficients, we construct computationally efficient iterative refinement schemes that produce statistically efficient estimators at low signal-to-noise ratios.
Updating is done in a multivariable manner to remove the inaccuracies and biases caused by sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes presented, which include likelihood, least-squares and Bayesian estimation schemes. Other aspects of the full estimation problem, namely error in the time variable, non-constant amplitude, and unknown model order, are also considered.
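A joint (all-coefficients-at-once) baseline for polynomial-phase estimation can be sketched by unwrapping the observed phase and fitting the polynomial by ordinary least squares in one multivariable step, avoiding the sequential highest-order-first procedure. The quadratic phase, noise level, and sample size are illustrative assumptions; this baseline is not the refinement scheme developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(n) / n
coeffs = np.array([1.0, 20.0, 5.0])   # phase(t) = 1 + 20 t + 5 t^2 (radians)
phase = coeffs[0] + coeffs[1] * t + coeffs[2] * t ** 2
signal = np.exp(1j * phase) + 0.01 * (rng.standard_normal(n)
                                      + 1j * rng.standard_normal(n))

# Unwrap the noisy phase, then fit all coefficients jointly by least squares.
unwrapped = np.unwrap(np.angle(signal))
design = np.column_stack([np.ones(n), t, t ** 2])
est, *_ = np.linalg.lstsq(design, unwrapped, rcond=None)
```

Phase unwrapping is reliable here because the per-sample phase increment stays well below pi; at low signal-to-noise ratios unwrapping errors accumulate, which is exactly the regime where such a baseline degrades and iterative refinement becomes attractive.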
