About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

Construção de ferramenta computacional para estimação de custos na presença de censura utilizando o método da Ponderação pela Probabilidade Inversa

Sientchkovski, Paula Marques, January 2016
Introduction: Cost data needed for cost-effectiveness analysis (CEA) are often obtained from primary longitudinal studies. In this context, censoring is common: cost data become unavailable after a certain point because individuals leave the study before it is finished. The idea of Inverse Probability Weighting (IPW) has been studied extensively in the literature on this problem, but the availability of computational tools for this context is unknown. Objective: To build computational tools in Excel and R for estimating costs by the IPW method proposed by Bang and Tsiatis (2000), in order to handle censoring in cost data. Methods: Spreadsheets were created in Excel and programs written in R, and hypothetical data sets covering a range of situations were used, with the aim of giving the researcher a better understanding of this estimator and of the interpretation of its results. Results: By making the IPW method easy to apply, the tools developed proved helpful for estimating costs in the presence of censoring, making it possible to compute the ICER from the resulting cost estimates. Conclusion: Besides giving the researcher a practical understanding of the method, the tools allow it to be applied on a larger scale and can be considered a satisfactory answer to the difficulties that censoring poses for CEA.
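As a rough illustration of the estimator this abstract describes, the sketch below implements the simple inverse-probability-weighted mean-cost estimator of Bang and Tsiatis (2000) in Python rather than the thesis's Excel/R tools. All data values are hypothetical, and the Kaplan-Meier step is a minimal version that ignores tie-handling subtleties.

```python
# Sketch of the simple IPW mean-cost estimator:
#   mean cost = (1/n) * sum over uncensored i of M_i / K(T_i),
# where K is the Kaplan-Meier survival function of the *censoring* time.
def km_censoring_survival(times, uncensored):
    """Kaplan-Meier estimate of K(t) = P(C > t), the censoring survival.
    For the censoring distribution, an 'event' is a censored observation.
    Returns K evaluated just before each subject's follow-up time."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk, surv = n, 1.0
    K = {}
    for i in order:
        K[i] = surv               # left limit K(T_i-)
        if not uncensored[i]:     # a censoring event occurred at this time
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
    return K

def ipw_mean_cost(costs, times, uncensored):
    """Average of completely observed costs, each weighted by the inverse
    probability of remaining uncensored up to its follow-up time."""
    K = km_censoring_survival(times, uncensored)
    n = len(costs)
    return sum(costs[i] / K[i] for i in range(n) if uncensored[i]) / n
```

With no censoring the weights are all 1 and the estimator reduces to the ordinary sample mean, which is a quick sanity check on any implementation.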

Évaluation du système vestibulaire et ses fonctions chez les personnes sourdes avec ou sans implant cochléaire

Maheu, Maxime 05 1900
No description available.

Effects of Age and Hearing Loss on Perception of Dynamic Speech Cues

Szeto, Mei-Wa Tam, 07 November 2008
Older listeners, both with and without hearing loss, often complain of difficulty understanding conversational speech. One reason for such difficulty may be a decreased ability to process the rapid changes in intensity, frequency, or timing that serve to differentiate speech sounds. Two important cues for the identification of stop consonants are the duration of the interruption of airflow (i.e., closure duration) and the rapid spectral changes following the release of closure. Many researchers have shown that age and hearing loss affect a listener's cue weighting strategies and the trading relationship between spectral and temporal cues. The study of trading relationships between speech cues enables researchers to investigate how much different listeners rely on each cue. Listeners with hearing loss have been shown to use different cue weighting strategies and trading relationships than listeners with normal hearing, differences attributed to their decreased ability to process spectral information. While it is established that processing of temporal information deteriorates with age, it is not known whether the speech processing difficulties of older listeners are due solely to the effects of hearing loss or to separate age-related effects as well. The present study addresses this question by comparing the performance of three groups of listeners (young with normal hearing, older with normal hearing, and older with impaired hearing) on a series of psychoacoustic and speech identification tasks using synthetic word pairs ("slit" and "split") in which spectral and temporal cues are altered systematically.
Results of the present study suggest different cue weighting strategies and trading relationships for all three groups of listeners, with older listeners with hearing loss showing the least effect of spectral cue changes and young listeners with normal hearing showing the greatest effect of spectral cue changes. Results are consistent with previous studies showing that older listeners with and without hearing loss seem to weight spectral information less heavily than young listeners with normal hearing. Each listener group showed a different pattern of cue weighting strategies when spectral and temporal cues varied.
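To make cue weighting and trading concrete, here is a purely illustrative logistic model, not the study's fitted model: all weights and the scaling of the cues are hypothetical. A listener who weights the spectral cue less needs a longer silence to reach the 50% decision boundary, which is the trading relationship the study measures.

```python
# Hypothetical cue-weighting model for "slit"/"split" identification.
# A temporal cue (silence duration) and a spectral cue (burst level)
# combine linearly and pass through a logistic function.
import math

def p_split(silence_ms, burst_level, w_temporal, w_spectral, bias=-5.0):
    """Probability of a 'split' response under hypothetical cue weights."""
    z = bias + w_temporal * silence_ms / 10.0 + w_spectral * burst_level
    return 1.0 / (1.0 + math.exp(-z))

def silence_threshold_ms(w_temporal, w_spectral, burst_level, bias=-5.0):
    """Silence duration at which P('split') = 0.5 (the decision boundary).
    Solving z = 0 for silence_ms shows the trading relation directly."""
    return 10.0 * (-bias - w_spectral * burst_level) / w_temporal
```

Comparing `silence_threshold_ms` for a large versus a small `w_spectral` at the same burst level mimics the study's finding that listeners who rely less on spectral information require a stronger temporal cue.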

時空數列分析在蔬菜價格變動與產銷策略之研究 / Spatial Time Series Analysis and its Application: A Production-Marketing Strategy for Vegetable Prices

譚光榮 (Tan, Kuang-Jung), Unknown Date
The supply of vegetables is highly inelastic. Once the crop is harvested, the quantity produced determines the selling price, and the substitution effect among similar vegetables also strongly influences price movements. If future price changes for similar vegetables could be forecast in advance, the production of each vegetable could be planned accordingly. In this thesis we apply spatial time series analysis to an economic setting that is not a spatial system in the usual sense. Taking three common vegetables in Taiwan as an example, we build STARMA (space-time ARMA) models and univariate ARIMA models from wholesale prices and compare their short-term forecasting performance. Finally, we discuss the relationship between price changes and production-marketing strategy.
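A STARMA model extends an ordinary ARMA model with spatially lagged terms. A minimal sketch of a first-order space-time autoregressive one-step forecast follows, next to a plain AR(1); the row-normalized weight matrix `W` (expressing substitutability between vegetables) and the coefficients are made up for illustration.

```python
# STAR(1,1) one-step forecast:
#   x_hat[i] = phi10 * x_prev[i] + phi11 * sum_j W[i][j] * x_prev[j]
# The second term is the spatially lagged price of substitute vegetables.
def star_forecast(x_prev, W, phi10, phi11):
    """One-step STAR(1,1) forecast for a vector of prices."""
    n = len(x_prev)
    spatial_lag = [sum(W[i][j] * x_prev[j] for j in range(n)) for i in range(n)]
    return [phi10 * x_prev[i] + phi11 * spatial_lag[i] for i in range(n)]

def ar_forecast(x_prev, phi1):
    """Univariate AR(1) forecast applied to each series separately."""
    return [phi1 * x for x in x_prev]
```

With three vegetables priced [10, 12, 8] and each vegetable's neighbors weighted 0.5, the STAR forecast mixes a vegetable's own lagged price with its substitutes' prices, which is exactly the substitution effect the abstract describes.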

The use of weights to account for non-response and drop-out

Höfler, Michael, Pfister, Hildegard, Lieb, Roselind, Wittchen, Hans-Ulrich, 19 February 2013
Background: Empirical studies in psychiatric research and other fields often show substantially high refusal and drop-out rates. Non-participation and drop-out may introduce a bias whose magnitude depends on how strongly its determinants are related to the respective parameter of interest. Methods: When most information is missing, the standard approach is to estimate each respondent’s probability of participating and assign each respondent a weight that is inversely proportional to this probability. This paper contains a review of the major ideas and principles regarding the computation of statistical weights and the analysis of weighted data. Results: A short software review for weighted data is provided and the use of statistical weights is illustrated through data from the EDSP (Early Developmental Stages of Psychopathology) Study. The results show that disregarding different sampling and response probabilities can have a major impact on estimated odds ratios. Conclusions: The benefit of using statistical weights in reducing sampling bias should be balanced against increased variances in the weighted parameter estimates.
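The standard approach the abstract reviews can be sketched in a few lines: each respondent is weighted by the inverse of an estimated response probability for their stratum. The strata and probabilities below are hypothetical.

```python
# Inverse-probability weighting for non-response: respondents from
# under-represented strata (low response probability) get larger weights.
def response_weights(strata, response_prob):
    """Weight each respondent by 1 / P(response) for their stratum."""
    return [1.0 / response_prob[s] for s in strata]

def weighted_prevalence(outcomes, weights):
    """Weighted proportion of a binary outcome among respondents."""
    return sum(o * w for o, w in zip(outcomes, weights)) / sum(weights)
```

In the toy test, the weighted prevalence (2/3) differs from the unweighted one (1/2), illustrating the abstract's point that disregarding different sampling and response probabilities can noticeably change estimates, while the inflated weights also increase the variance of the estimate.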

Geo-spatial Object Detection Using Local Descriptors

Aytekin, Caglar, 01 August 2011
There is an increasing trend towards object detection from aerial and satellite images. Most widely used object detection algorithms are based on local features: first, local features are detected and described in an image; then a representation of each image is formed from these local features for supervised learning; and these representations are used during classification. In this thesis, the Harris and SIFT algorithms are used as local feature detectors and the SIFT approach is used as a local feature descriptor. Using these tools, the Bag of Visual Words algorithm is examined in order to represent an image by a histogram of visual words. Finally, an SVM classifier is trained using positive and negative samples from a training set. In addition to the classical bag of visual words approach, two novel extensions are proposed. In the first, the visual words are weighted in proportion to their importance for the positive samples; the important features are those occurring more often in the object and less often in the background. In the second, principal component analysis is applied after forming the histograms in order to remove undesired redundancy and noise in the data and to reduce its dimension, yielding better classification performance. Based on the test results, it can be argued that the proposed approach is capable of detecting a number of geo-spatial objects, such as airplanes or ships, with reasonable performance.
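The first extension (importance-weighted visual words) might be sketched as follows. The importance score used here, the fraction of a word's occurrences that come from positive images, is one plausible reading of "occurring more in the object and less in the background", not necessarily the thesis's exact formula; the word ids are hypothetical.

```python
# Weighted bag-of-visual-words: each histogram bin is scaled by how
# strongly that visual word is associated with positive (object) images.
from collections import Counter

def word_importance(pos_words, neg_words, eps=1e-9):
    """importance(w) = occurrences in positives / total occurrences."""
    pos, neg = Counter(pos_words), Counter(neg_words)
    vocab = set(pos) | set(neg)
    return {w: pos[w] / (pos[w] + neg[w] + eps) for w in vocab}

def weighted_histogram(words, importance, vocab):
    """Bag-of-visual-words histogram with importance-scaled bins."""
    counts = Counter(words)
    return [counts[w] * importance.get(w, 0.0) for w in vocab]
```

A word appearing only in background images gets weight near zero, so its bin is effectively suppressed before the histogram reaches the SVM.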

On the effect of INQUERY term-weighting scheme on query-sensitive similarity measures

Kini, Ananth Ullal, 12 April 2006
Cluster-based information retrieval systems often use a similarity measure to compute the association among text documents. In this thesis, we focus on a class of similarity measures called Query-Sensitive Similarity (QSS) measures. Recent studies have shown QSS measures to positively influence the outcome of a clustering procedure. These studies used QSS measures in conjunction with the ltc term-weighting scheme. Several term-weighting schemes have since superseded ltc and demonstrated better retrieval performance. We test whether introducing one of these schemes, INQUERY, offers any benefit over ltc when used in the context of QSS measures. The testing procedure uses the Nearest Neighbor (NN) test to quantify the clustering effectiveness of QSS measures under each term-weighting scheme. The NN tests are applied to standard test document collections and the results are tested for statistical significance. Analyzing the NN test results relative to those obtained for ltc, we find several instances where the INQUERY scheme improves the clustering effectiveness of QSS measures. To apply the NN test, we designed a software test framework, Ferret, by complementing the features provided by dtSearch, a search engine. The framework automates the generation of NN coefficients by processing standard test document collection data. We provide an insight into the construction and working of the Ferret test framework.
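For reference, the two term-weighting schemes compared here can be sketched as follows. The INQUERY belief formula shown is the commonly published one (0.4 + 0.6 · tf-component · idf-component); the thesis may use a variant, so treat the constants as assumptions.

```python
# Two term-weighting schemes: INQUERY's belief weight and SMART's ltc
# (logarithmic tf, idf, cosine normalization over the document vector).
import math

def inquery_weight(tf, df, dl, avg_dl, N):
    """INQUERY belief weight for one term in one document.
    tf: term frequency, df: document frequency, dl: document length,
    avg_dl: average document length, N: collection size."""
    T = tf / (tf + 0.5 + 1.5 * dl / avg_dl)          # dampened tf
    I = math.log((N + 0.5) / df) / math.log(N + 1)    # scaled idf
    return 0.4 + 0.6 * T * I

def ltc_weights(tfs, dfs, N):
    """ltc weights for all terms of one document (cosine-normalized)."""
    raw = [(1 + math.log(tf)) * math.log(N / df) for tf, df in zip(tfs, dfs)]
    norm = math.sqrt(sum(w * w for w in raw))
    return [w / norm for w in raw]
```

Note the qualitative difference: INQUERY dampens term frequency by document length (longer documents need more occurrences for the same weight), while ltc relies on cosine normalization of the whole vector.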

Valuing ecosystem services - linking ecology and policy

Noring, Maria, January 2014
Ecosystem services constitute a precondition for human welfare and survival. This concept has also become increasingly popular among both scientists and policymakers. Several initiatives have been taken to identify and value ecosystem services. Several services are threatened, and it has been concluded that in order to better manage ecosystem services they need to be further investigated and valued. By measuring them using a common metric—monetary value—they can be more easily compared and included in decision-making tools. This thesis contributes to this goal by presenting values for several ecosystem services and also including them in decision-making tools. Starting with a discussion of the concept of ecosystem services, this thesis aims to present values for certain ecosystem services and to illustrate the use of these values in systems-analysis tools such as cost-benefit analyses (CBA) and a weighting set. Links between ecology, economics and policy are discussed within a broader framework of ecosystem services. Five papers are included, in which two contingent valuation studies (CV) have been used to find values for different ecosystem services. One valuation study is focused on the effects from tributyltin (TBT) in Swedish marine waters. In addition, a quantitative assessment framework has been developed in order to simplify analysis of environmental status, progress in environmental surveillance and the relevance of different measures. It is suggested that the framework should also be used when assessing the impacts of other substances affecting the environment. The second valuation study investigates the risk of an oil spill in northern Norway. The results have been included in two CBAs and a weighting set. The first CBA compares costs for remediation of polluted sediments, caused by TBT, with the benefits of reducing TBT levels. The second CBA compares costs and benefits for reducing the probability of an oil spill. 
The weighting set includes monetary values for a number of impact categories, where marine toxicity is based on the TBT valuation study. One study also examines the inclusion of environmental costs in life cycle costing (LCC) in different sectors in Sweden. Results show that respondents consider ecosystem values to be important. The values of Swedish marine waters and of coastal areas outside Lofoten-Vesterålen in Norway have been identified and quantified in terms of biodiversity, habitat, recreation and scenery. In the Norwegian case, an ongoing debate on oil and gas exploration affected the number of protest bids found in the study. Based on the costs and benefits of limiting impacts on ecosystem services derived from the valuation studies, the CBAs show that the suggested measures are most likely beneficial for society, and the results contribute to policy recommendations. A weighting set has been updated with new values through value transfer; this weighting set is compatible with LCA. The final study shows that companies and public organisations make only limited use of environmental costs (internal and external). In this thesis the ecosystem service concept is used both as an introduction and as a guiding thread for the reader, framing the studies undertaken. The concept can be useful because it emphasises the importance of these services to humans. By finding and presenting values of ecosystem services, such services are more easily incorporated into decision-making.
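The CBAs described above compare discounted streams of costs and benefits. A minimal net-present-value sketch, with the discount rate and all figures purely hypothetical:

```python
# Cost-benefit analysis in miniature: discount each year's net benefit
# and check whether the total is positive for society.
def npv(cash_flows, rate):
    """Net present value of (benefit - cost) flows; year 0 comes first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def beneficial(costs, benefits, rate):
    """True if discounted benefits exceed discounted costs."""
    flows = [b - c for b, c in zip(benefits, costs)]
    return npv(flows, rate) > 0
```

A remediation measure costing 100 today that yields a valued benefit of 110 next year passes at a 4% discount rate but would fail at, say, 12%, which is why the choice of discount rate matters in such analyses.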

Using a weighted bootstrap approach to identify risk factors associated with the sexual activity of entering first-year students at UWC

Brydon, Humphrey, January 2013
Magister Scientiae - MSc / This thesis looks at the effect that introducing various techniques (weighting, bootstrapping and variable selection) has on the accuracy of the modelling process when using logistic regression. The data used in the modelling process concern the sexual activity of entering first-year students at the University of the Western Cape; by constructing logistic regression models on these data, predictor variables or factors associated with the sexual activity of these students are identified. The sample weighting technique utilized in this thesis assigned each student a weight based on gender and racial representation in the sample relative to the population of entering first-years. Sample weighting is shown to produce a more effective modelling process than modelling without weighting, and the bootstrapping procedure is shown to produce more accurate logistic regression models. Utilizing more than 200 bootstrap samples did not necessarily produce models that were more accurate than using 200 bootstrap samples. It is, however, concluded that a weighted bootstrap modelling procedure results in more accurate models than a procedure without this intervention. The forward, backward, stepwise, Newton-Raphson and Fisher variable selection methods are used. The Newton-Raphson and Fisher methods are found not to be effective in a logistic modelling process, whereas the forward, backward and stepwise methods all produce very similar results.
Six predictor variables or factors are identified with respect to the sexual activity of the specified students: the age of the student; whether they consume alcohol; their racial grouping; whether an HIV test has been taken; the importance of religion in influencing their sexual behaviour; and whether they smoke. Conclusions are reached with respect to improvements that could be made to the HIV prevention programme at UWC with reference to the sexual activity of entering first-years.
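A minimal sketch of the weighted-bootstrap idea the thesis combines with logistic regression: resample with probability proportional to post-stratification weights, then summarize a statistic over the replicates. The data, weights, and the 200-replicate default (echoing the thesis's finding that more than 200 samples gave no clear gain) are illustrative; the full procedure would fit a logistic model on each replicate rather than a simple mean.

```python
# Weighted bootstrap: draw replicates with selection probability
# proportional to each observation's weight, compute a statistic on
# each replicate, and report the mean and a percentile interval.
import random

def weighted_bootstrap(data, weights, stat, n_boot=200, seed=42):
    """Return (bootstrap mean of stat, (2.5%, 97.5%) percentile bounds)."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = rng.choices(data, weights=weights, k=len(data))
        reps.append(stat(sample))
    reps.sort()
    lo, hi = reps[int(0.025 * n_boot)], reps[int(0.975 * n_boot) - 1]
    return sum(reps) / n_boot, (lo, hi)
```

With binary outcomes weighted 1:3 toward the positive class, the bootstrap estimate of the mean settles near 0.75 rather than the unweighted 0.5, showing how the weighting corrects for sample composition before the bootstrap variability is assessed.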
