181

A multi-wavelength study of a sample of galaxy clusters / Susan Wilson

Wilson, Susan January 2012 (has links)
In this dissertation we aim to perform a multi-wavelength analysis of galaxy clusters. We discuss various methods for clustering in order to determine the physical parameters of galaxy clusters required for this type of study. A selection of galaxy clusters was chosen from four papers (Popesso et al. 2007b; Yoon et al. 2008; Loubser et al. 2008; Brownstein & Moffat 2006) and restricted by redshift and galactic latitude to yield a sample of 40 galaxy clusters with 0.0 < z < 0.15. Data mining using the Virtual Observatory (VO) and a literature survey provided background information about each of the galaxy clusters in our sample with respect to optical, radio and X-ray data. Using Kaye's Mixture Model (KMM) and the Gaussian Mixture Model (GMM), we determine the most likely cluster member candidates for each source in our sample. We compare the results obtained to SIMBAD's hierarchy method. We show that the GMM provides a very robust method to determine member candidates, but in order to ensure that the right candidates are chosen we apply a selection of outlier tests to our sources. We arrive at a method based on a combination of the GMM, the Q-Q plot and the Rosner test that provides a robust and consistent way of determining galaxy cluster members. Comparison between the calculated physical parameters (velocity dispersion, radius, mass and temperature) and values obtained from the literature shows that the majority of our galaxy clusters agree within a 3σ range. Inconsistencies are thought to be due to dynamically active clusters that have substructure or are undergoing mergers, making galaxy member identification difficult. Six correlations between different physical parameters in the optical and X-ray wavebands were consistent with published results. Comparing the velocity dispersion with the X-ray temperature, we found a relation of σ ∝ T^0.43, compared with the σ ∝ T^0.5 obtained by Bird et al. (1995). The X-ray luminosity-temperature and X-ray luminosity-velocity dispersion relations gave L_X ∝ T^2.44 and L_X ∝ σ^2.40, which lie within the uncertainties of the results given by Rozgacheva & Kuvshinova (2010). These results all suggest that our method for determining galaxy cluster members is efficient and that its application to higher-redshift sources can be considered. Further studies on galaxy clusters with substructure must be performed in order to improve this method. In future work, the physical parameters obtained here will be further compared to X-ray and radio properties in order to determine a link between bent radio sources and the galaxy cluster environment. / MSc (Space Physics), North-West University, Potchefstroom Campus, 2013
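For readers unfamiliar with the membership step described above, the sketch below shows the general idea: a one-dimensional Gaussian mixture fit to recession velocities, followed by a posterior-probability cut and simple sigma clipping, using scikit-learn. It is only an illustration under assumed data; the velocities, thresholds and the sigma-clipping stand-in for the Q-Q plot/Rosner screening are hypothetical, not the thesis's actual pipeline.

```python
# A minimal sketch (not the thesis code) of GMM-based cluster-member selection:
# fit a 1D Gaussian mixture to recession velocities, keep galaxies assigned to the
# narrow (cluster) component with high posterior probability, then sigma-clip.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
velocities = np.concatenate([
    rng.normal(12000, 600, 80),   # cluster members (km/s), hypothetical
    rng.normal(20000, 3000, 20),  # fore-/background interlopers
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(velocities)
resp = gmm.predict_proba(velocities)
cluster_comp = np.argmin(gmm.covariances_.ravel())   # narrowest component ~ the cluster
members = resp[:, cluster_comp] > 0.8                # posterior-probability cut

# Simple iterative sigma clipping as a stand-in for the Q-Q plot / Rosner screening
v = velocities[members].ravel()
for _ in range(5):
    mu, sigma = v.mean(), v.std(ddof=1)
    keep = np.abs(v - mu) < 3 * sigma
    if keep.all():
        break
    v = v[keep]

print(f"members kept: {v.size}, velocity dispersion = {v.std(ddof=1):.0f} km/s")
```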
182

Double-bounded dichotomous choice contingent valuation: a study of starting point bias in willingness-to-pay

吳孟勳 Unknown Date (has links)
Studies of willingness-to-pay (WTP) often suffer from the bias introduced by extreme respondents who are willing to pay any price or unwilling to pay any price at all. To overcome this problem, the three-component model proposed by Tsai (2005) is adopted: respondents are classified as willing to pay any price, unwilling to pay any price, or willing to pay a reasonable price. The willingness-to-pay of those who are willing to pay a reasonable price is then modeled by an accelerated failure time (AFT) model. Because different starting bids may induce different degrees of starting point bias or anchoring, we propose a unified model that allows us to examine starting point bias and the anchoring effect simultaneously. Willingness-to-pay for a new drug that lowers the risk of cardiovascular disease in hypertension patients, drawn from the longitudinal follow-up survey CVDFACTS, is investigated using the new model. Through the use of the model we are able to detect the effects of starting point bias and adjust for them accordingly. Our analysis indicates that male respondents with higher education levels are inclined to pay a higher price for the new treatment. We also find that starting point bias does exist in this dataset, and that adjusting for it yields a higher estimated willingness-to-pay.
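The abstract does not spell out the estimation mechanics; the following sketch illustrates one common way such a model can be set up, a log-normal AFT likelihood for the WTP interval implied by the two bids, written with NumPy/SciPy. All data, covariates, bid values and the log-normal choice are assumptions for illustration and do not reproduce the thesis's three-component model or its starting-point-bias adjustment.

```python
# A minimal sketch of a log-normal AFT likelihood for double-bounded WTP data:
# each respondent contributes the probability that log(WTP) = x'beta + sigma*eps
# falls in the interval implied by the first and follow-up bids. Simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
x = np.column_stack([np.ones(n), rng.integers(0, 2, n)])   # intercept + e.g. "male"
true_beta, true_sigma = np.array([5.0, 0.3]), 0.5
wtp = np.exp(x @ true_beta + true_sigma * rng.standard_normal(n))

bid1 = rng.choice([80.0, 150.0, 250.0], n)                 # hypothetical starting bids
bid2 = np.where(wtp >= bid1, 2 * bid1, 0.5 * bid1)         # follow-up bid
lower = np.where(wtp >= bid2, bid2, np.where(wtp >= bid1, bid1, 1e-8))
upper = np.where(wtp >= np.maximum(bid1, bid2), np.inf,
                 np.where(wtp >= np.minimum(bid1, bid2), np.maximum(bid1, bid2),
                          np.minimum(bid1, bid2)))

def negloglik(theta):
    beta, sigma = theta[:2], np.exp(theta[2])
    mu = x @ beta
    hi = np.where(np.isinf(upper), 1.0, norm.cdf((np.log(upper) - mu) / sigma))
    lo = norm.cdf((np.log(lower) - mu) / sigma)
    return -np.sum(np.log(np.clip(hi - lo, 1e-12, None)))

fit = minimize(negloglik, x0=np.array([4.0, 0.0, 0.0]), method="Nelder-Mead")
print("beta_hat:", fit.x[:2], "sigma_hat:", np.exp(fit.x[2]))
```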
183

Application of the LM model to volatility smiles, and the pricing and analysis of structured products: a case study of exchange-rate-linked notes

陳益利, Chen, Yi Li Unknown Date (has links)
This thesis consists of two parts. The first part takes the heavily traded foreign exchange (FX) option market and TAIEX options as examples and, building on the Lognormal Mixture model (LM model) proposed by Brigo and Mercurio in 2000, captures the characteristic volatility smile observed in option markets. The second part is an application to product pricing, focusing on an exchange-rate-linked structured note issued in mainland China. In the first part we apply three models: the LM model (Lognormal Mixture Model), the Shifting LM model (Shifting Lognormal Mixture Model) and the LMDM model (Lognormal Mixture with Different Means Model). We assess how accurately each calibrates the volatility smile in the FX option and TAIEX option markets. The results show that all three models can effectively capture the volatility smile, with the LMDM model performing best: its errors are smallest in both volatility calibration and option pricing. The second part takes the Bank of China "Huijubao 0709G" CAD-denominated note linked to the USD/CAD exchange rate as its example of an exchange-rate-linked structured product. A closed-form solution is obtained with the Garman and Kohlhagen (1983) FX option model and used to analyse the issuer's initial profit, after which Monte Carlo simulation is used to analyse the investor's payoff at maturity. The sensitivities and hedging parameters of the product are also analysed.
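As background to the LM approach mentioned above: in a lognormal-mixture model the European option price is a weighted sum of Black-Scholes prices, one per mixture component, and backing implied volatilities out of those prices produces a smile across strikes. The sketch below illustrates this with illustrative weights and component volatilities; it is not the thesis's calibration to FX or TAIEX data.

```python
# A minimal sketch of how a lognormal-mixture (LM) model produces a volatility
# smile: mixture price = sum_i w_i * BlackScholes(sigma_i), then invert for the
# implied volatility at each strike. Parameters are illustrative.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def lm_call(S, K, T, r, weights, sigmas):
    # Mixture price: weighted sum of Black-Scholes prices, one per component
    return sum(w * bs_call(S, K, T, r, s) for w, s in zip(weights, sigmas))

S, T, r = 100.0, 0.5, 0.01
weights, sigmas = [0.6, 0.4], [0.10, 0.25]          # two lognormal components

for K in [80, 90, 100, 110, 120]:
    price = lm_call(S, K, T, r, weights, sigmas)
    iv = brentq(lambda v: bs_call(S, K, T, r, v) - price, 1e-4, 2.0)
    print(f"K={K:3d}  price={price:6.2f}  implied vol={iv:.3f}")
```

Running the loop shows the implied volatility rising away from the at-the-money strike, which is the smile the models above are calibrated to reproduce.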
184

Analysis of contingent valuation survey data with “Don’t Know” responses

王昱博, Wang, Yu Bo Unknown Date (has links)
This paper investigates how to deal with “Don’t Know” (DK) responses in contingent valuation surveys; such responses must be taken into consideration when they do not occur completely at random. The data we use are collected from the fifth cycle of the CardioVascular Disease risk FACtor Two-township Study (CVDFACTS), a series of long-term surveys conducted by the Institute of Biomedical Sciences, Academia Sinica. Previous approaches to DK responses have not been satisfactory: they either focus on only certain types of DK respondents (Wang (1997)) or separate the imputation of DK responses from the WTP estimation (Caudill and Groothuis (2005)), which deprives the estimates of some desirable properties. In this paper we introduce an integrated method to cope with the incomplete data caused by DK responses. Besides being statistically more efficient, the single-step method guarantees that the estimates of the WTP model are maximum likelihood estimates, a property inherited from the EM algorithm. Furthermore, by incorporating the three-component mixture model (Tsai (2005)), information from extreme respondents is kept out of the imputation of DK inclinations. In this hypertension dataset, the mechanism of the DK responses is “don’t know at random” rather than completely at random, which means that an analysis that drops the DK responses is biased. Using our method, the difference between the DK-dropped and DK-included analyses is clearly revealed, confirming our suspicion that dropping DK responses leads to a biased result whenever DK is not completely at random.
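As a rough illustration of the single-step idea (imputation of DK answers and estimation of the response model inside one EM loop), the toy sketch below treats each DK answer as a missing yes/no under a missing-at-random assumption and refits a weighted logistic model at each iteration. It is deliberately simplified: the covariates and data are simulated, and it omits the three-component mixture and the "don't know at random" mechanism that are central to the paper.

```python
# Toy EM for a binary WTP response with "Don't Know" answers treated as missing:
# E-step imputes P(yes | x) for DK respondents, M-step refits a weighted logistic
# regression, so imputation and estimation happen in one loop. Simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([rng.normal(size=n), rng.integers(0, 2, n)])   # e.g. age (std.), male
true_p = 1 / (1 + np.exp(-(0.5 + 1.0 * X[:, 0] + 0.8 * X[:, 1])))
y = (rng.uniform(size=n) < true_p).astype(float)
dk = rng.uniform(size=n) < 0.2                                     # 20% answer "Don't Know"

model = LogisticRegression()
model.fit(X[~dk], y[~dk])                                          # initialise on observed answers

for _ in range(20):                                                # EM iterations
    w_yes = model.predict_proba(X[dk])[:, 1]                       # E-step: imputed P(yes | x)
    X_aug = np.vstack([X[~dk], X[dk], X[dk]])
    y_aug = np.concatenate([y[~dk], np.ones(dk.sum()), np.zeros(dk.sum())])
    w_aug = np.concatenate([np.ones((~dk).sum()), w_yes, 1 - w_yes])
    model.fit(X_aug, y_aug, sample_weight=w_aug)                   # M-step: weighted refit

print("coefficients:", model.intercept_, model.coef_)
```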
185

Abdominal aortic aneurysm inception and evolution - A computational model

Grytsan, Andrii January 2016 (has links)
Abdominal aortic aneurysm (AAA) is characterized by a bulge in the abdominal aorta. AAA development is mostly asymptomatic, but such a bulge may suddenly rupture, which is associated with a high mortality rate. Unfortunately, there is no medication that can prevent an AAA from expanding or rupturing. Therefore, patients with a detected AAA are monitored until a treatment indication is reached, such as a maximum AAA diameter of 55 mm or an expansion rate of 1 cm/year. Models of AAA development may help to understand the disease progression and to inform decision-making on a patient-specific basis. AAA growth and remodeling (G&R) models are rather complex, and before that challenge is undertaken, sound clinical validation is required. In Paper A, an existing thick-walled model of growth and remodeling of one layer of an AAA slice was extended to a two-layered model, which better reflects the layered structure of the vessel wall. A parameter study was performed to investigate the influence of the mechanical properties and G&R parameters of such a model on aneurysm growth. In Paper B, the model from Paper A was extended to an organ-level model of AAA growth. Furthermore, the model was incorporated into a Fluid-Solid-Growth (FSG) framework. A patient-specific geometry of the abdominal aorta is used to illustrate the model's capabilities. In Paper C, the evolution of patient-specific biomechanical characteristics of the AAA was investigated. Four patients with five to eight Computed Tomography-Angiography (CT-A) scans at different time points were analyzed, and several non-trivial statistical correlations were found between the analyzed parameters. In Paper D, the effect of different growth kinematics on AAA growth was investigated. Transversely isotropic in-thickness growth was the most suitable assumption for AAA growth, while fully isotropic growth and transversely isotropic in-plane growth produced unrealistic results. In addition, modeling the change in tissue volume improved the wall-thickness prediction, but still overestimated the thinning of the wall during aneurysm expansion.
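For readers unfamiliar with the growth-kinematics terminology in Paper D, the short sketch below writes down the standard kinematic forms of the growth deformation gradient for the three assumptions being compared; the volume-growth factor and wall normal are illustrative, and this is not code from the papers.

```python
# Standard growth deformation gradients: the same volumetric growth factor theta
# distributed isotropically, only through the wall thickness, or only in-plane.
import numpy as np

theta = 1.3                      # volumetric growth factor (30% volume increase)
n = np.array([0.0, 0.0, 1.0])    # unit normal to the vessel wall (illustrative)
I = np.eye(3)
N = np.outer(n, n)

Fg_isotropic = theta ** (1.0 / 3.0) * I            # fully isotropic growth
Fg_thickness = I + (theta - 1.0) * N               # transversely isotropic, in-thickness
Fg_in_plane = np.sqrt(theta) * (I - N) + N         # transversely isotropic, in-plane

for name, Fg in [("isotropic", Fg_isotropic),
                 ("in-thickness", Fg_thickness),
                 ("in-plane", Fg_in_plane)]:
    print(f"{name:12s} det(Fg) = {np.linalg.det(Fg):.3f}")   # all equal theta
```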
186

Development of a probabilistic classification model for mapping snow cover in Hydro-Québec watersheds using passive microwave data

Teasdale, Mylène 09 1900 (has links)
Every day, decisions must be made about the amount of hydroelectricity produced in Quebec. These decisions are based on predictions of water inflow into the watersheds produced by hydrological models. These models take into account several factors, including the presence or absence of snow on the ground. This information is critical during the spring melt for anticipating future inflows, since between 30 and 40% of the flood volume may come from the melting of the snow cover. Forecasters therefore need to monitor the snow cover on a daily basis in order to adjust their forecasts for the melt. Methods for mapping snow on the ground are currently used at the Institut de recherche d'Hydro-Québec (IREQ), but they have some shortcomings. The main goal of this master's thesis is to use passive microwave remote sensing data (the vertically polarized brightness temperature gradient ratio, GTV) within a statistical approach to produce snow/no-snow maps and to quantify the classification uncertainty. To do this, the GTV is used to compute a daily probability of snow via Gaussian mixture models in a Bayesian framework. These probabilities are then modeled using linear regression on the logits, and snow cover maps are produced. The models' results were validated qualitatively and quantitatively, and their integration at Hydro-Québec was discussed.
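The core classification step described above can be illustrated compactly: fit a two-component Gaussian mixture to the GTV and take the posterior probability of the snow component as the daily snow probability. The sketch below uses scikit-learn with hypothetical GTV values; it is not the IREQ data or the thesis's full logit-regression pipeline.

```python
# A minimal sketch: two-component Gaussian mixture on GTV values, with the
# posterior probability of the "snow" component used as the snow probability.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
gtv = np.concatenate([rng.normal(-0.4, 0.15, 300),    # snow-covered pixels (hypothetical)
                      rng.normal(0.2, 0.10, 200)])    # snow-free pixels
gmm = GaussianMixture(n_components=2, random_state=0).fit(gtv.reshape(-1, 1))

snow_comp = np.argmin(gmm.means_.ravel())             # assume the lower-GTV mode is snow
p_snow = gmm.predict_proba(gtv.reshape(-1, 1))[:, snow_comp]

snow_map = p_snow > 0.5                               # hard map; p_snow carries the uncertainty
print(f"snow pixels: {snow_map.sum()} / {gtv.size}, mean P(snow) = {p_snow.mean():.2f}")
```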
187

Table tennis event detection and classification

Oldham, Kevin M. January 2015 (has links)
It is well understood that multiple video cameras and computer vision (CV) technology can be used in sport for match officiating, statistics and player performance analysis. A review of the literature reveals a number of existing solutions, both commercial and theoretical, within this domain. However, these solutions are expensive and often complex in their installation. The hypothesis for this research states that by considering only changes in ball motion, automatic event classification is achievable with low-cost monocular video recording devices, without the need for 3-dimensional (3D) positional ball data and representation. The focus of this research is a rigorous empirical study of low-cost single consumer-grade video camera solutions applied to table tennis, confirming that monocular CV based detected ball location data contains sufficient information to enable key match-play events to be recognised and measured. In total a library of 276 event-based video sequences, using a range of recording hardware, were produced for this research. The research has four key considerations: i) an investigation into an effective recording environment with minimum configuration and calibration, ii) the selection and optimisation of a CV algorithm to detect the ball from the resulting single source video data, iii) validation of the accuracy of the 2-dimensional (2D) CV data for motion change detection, and iv) the data requirements and processing techniques necessary to automatically detect changes in ball motion and match those to match-play events. Throughout the thesis, table tennis has been chosen as the example sport for observational and experimental analysis since it offers a number of specific CV challenges due to the relatively high ball speed (in excess of 100 km/h) and small ball size (40 mm in diameter). Furthermore, the inherent rules of table tennis show potential for a monocular based event classification vision system. As the initial stage, a proposed optimum location and configuration of the single camera is defined. Next, the selection of a CV algorithm is critical in obtaining usable ball motion data. It is shown in this research that segmentation processes vary in their ball detection capabilities and location outputs, which ultimately affects the ability of automated event detection and decision-making solutions. Therefore, a comparison of CV algorithms is necessary to establish confidence in the accuracy of the derived location of the ball. As part of the research, a CV software environment has been developed to allow robust, repeatable and direct comparisons between different CV algorithms. An event-based method of evaluating the success of a CV algorithm is proposed. Comparison of CV algorithms is made against the novel Efficacy Metric Set (EMS), producing a measurable Relative Efficacy Index (REI). Within the context of this low-cost, single camera ball trajectory and event investigation, experimental results provided show that the Horn-Schunck Optical Flow algorithm, with a REI of 163.5, is the most successful method when compared to a discrete selection of CV detection and extraction techniques gathered from the literature review. Furthermore, evidence-based data from the REI also suggests switching to the Canny edge detector (a REI of 186.4) for segmentation of the ball when in close proximity to the net.
In addition to and in support of the data generated from the CV software environment, a novel method is presented for producing simultaneous data from 3D marker based recordings, reduced to 2D and compared directly to the CV output to establish comparative time-resolved data for the ball location. It is proposed here that a continuous scale factor, based on the known dimensions of the ball, is incorporated at every frame. Using this method, comparison results show a mean accuracy of 3.01 mm when applied to a selection of nineteen video sequences and events. This tolerance is within 10% of the diameter of the ball and is accounted for by the limits of image resolution. Further experimental results demonstrate the ability to identify a number of match-play events from a monocular image sequence using a combination of the suggested optimum algorithm and ball motion analysis methods. The results show a promising application of 2D based CV processing to match-play event classification with an overall success rate of 95.9%. The majority of failures occur when the ball, during returns and services, is partially occluded by either the player or racket, due to the inherent problem of using a monocular recording device. Finally, the thesis proposes further research and extensions for developing and implementing monocular-based CV processing of motion-based event analysis and classification in a wider range of applications.
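The central idea, classifying match-play events from changes in 2D ball motion rather than 3D reconstruction, can be sketched independently of any particular CV algorithm. The toy example below uses a synthetic trajectory and flags bounces as sign changes in the frame-to-frame vertical velocity; the trajectory, frame rate and thresholding are assumptions, not the thesis's detection pipeline.

```python
# Toy event detection from a 2D ball trajectory: once per-frame positions are
# available from the CV stage, bounces can be flagged as downward-to-upward
# flips in the estimated vertical velocity.
import numpy as np

fps = 120
t = np.arange(0, 1.0, 1 / fps)
x = 2.0 * t                                                        # travel along the table (m)
y = np.abs(0.4 * np.cos(2 * np.pi * 1.5 * t)) * np.exp(-1.0 * t)   # decaying bounce profile

pos = np.column_stack([x, y])
vel = np.diff(pos, axis=0) * fps                                   # per-frame velocity estimate

bounce = (vel[:-1, 1] < 0) & (vel[1:, 1] > 0)                      # downward-to-upward flip
events = np.where(bounce)[0] + 1
print("candidate bounce frames:", events)
```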
188

Mixture of beta mixed models: a Bayesian approach

Zerbeto, Ana Paula 14 December 2018 (has links)
Mixture models are very effective for analyzing data composed of different subpopulations with unknown allocations or exhibiting asymmetry, multimodality or kurtosis. This work proposes to combine the beta probability distribution and mixed-model techniques within the mixture-model framework, making them suitable for analyzing data that take values in a known, restricted interval and that also have a grouped or hierarchical structure. Linear beta mixture models with random effects, with constant and varying dispersion, were specified, as well as a nonlinear model with constant dispersion. A Bayesian approach using Markov chain Monte Carlo (MCMC) methods was adopted. Simulation studies were designed to evaluate the inferential results of these models with respect to the accuracy of point estimation of the parameters, the performance of information criteria in selecting the number of mixture components, and the diagnosis of identifiability obtained with the data cloning algorithm. The performance of the models was very promising, mainly because of the good accuracy of the point estimates and the absence of evidence of lack of identifiability. Three real data sets from the areas of health, marketing and education were studied using the proposed techniques. Both the simulation studies and the applications to real data produced very satisfactory results, demonstrating the usefulness of the developed models for the stated objectives as well as their potential for wider application. The methodology presented here can also be applied and extended to other mixture models.
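As a minimal Bayesian illustration of the simplest ingredient discussed above, the sketch below fits a two-component beta mixture to data on (0, 1) by MCMC. It assumes the PyMC library is available, uses illustrative priors and simulated data, and omits the random effects and varying dispersion that distinguish the thesis's models.

```python
# A minimal two-component beta mixture estimated by MCMC (illustrative priors,
# simulated data, no random effects or varying dispersion).
import numpy as np
import pymc as pm

rng = np.random.default_rng(4)
y = np.concatenate([rng.beta(2, 8, 150), rng.beta(9, 3, 100)])   # two subpopulations in (0, 1)

with pm.Model():
    w = pm.Dirichlet("w", a=np.ones(2))                  # mixture weights
    a = pm.Gamma("a", alpha=2.0, beta=0.5, shape=2)      # beta shape parameters
    b = pm.Gamma("b", alpha=2.0, beta=0.5, shape=2)
    pm.Mixture("y", w=w, comp_dists=pm.Beta.dist(alpha=a, beta=b, shape=2), observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=4)

print(idata.posterior["w"].mean(dim=("chain", "draw")).values)   # posterior mean weights
```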
189

Model-based clustering and model selection for binned data

Wu, Jingwen 28 January 2014 (has links)
This thesis studies Gaussian mixture model-based clustering approaches and model selection criteria for binned data clustering. Fourteen binned-EM algorithms and fourteen bin-EM-CEM algorithms are developed for fourteen parsimonious Gaussian mixture models. These new algorithms combine the computation-time advantages of binned data with the simpler parameter estimation of parsimonious Gaussian mixture models. The complexities of the binned-EM and bin-EM-CEM algorithms are calculated and compared with the complexities of the EM and CEM algorithms, respectively. In order to select the right model, one which fits the data well and satisfies the clustering precision requirements within a reasonable computation time, the AIC, BIC, ICL, NEC and AWE criteria are extended to binned data clustering when the proposed binned-EM and bin-EM-CEM algorithms are used. The advantages of the different proposed methods are illustrated through experimental studies.
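The model-selection step can be illustrated with standard (unbinned) EM as implemented in scikit-learn: fit Gaussian mixtures with different numbers of components and choose the one minimising an information criterion such as BIC. The sketch below uses simulated data and is not the binned-EM/bin-EM-CEM machinery of the thesis.

```python
# Selecting the number of mixture components with BIC, using ordinary EM on
# simulated 2D data with three true clusters.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (200, 2)),
               rng.normal(4, 1, (150, 2)),
               rng.normal([0, 6], 1, (100, 2))])          # three true clusters

bics = {}
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(X)
    bics[k] = gmm.bic(X)                                  # lower BIC = preferred model

best_k = min(bics, key=bics.get)
print({k: round(v, 1) for k, v in bics.items()}, "-> selected k =", best_k)
```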
190

Two-phase flows in gas-evolving electrochemical applications

Wetind, Ruben January 2001 (has links)
No description available.
