581

Um estudo sobre a produtividade total dos fatores em setores de diferentes intensidades tecnológicas / A study of total factor productivity in sectors of different technological intensity

Souza, José Antonio de 26 November 2009 (has links)
The basic hypothesis and guiding thread of this dissertation is that different methods of estimating production functions produce different results when applied to sectors of different technological intensity. The dissertation focuses on determining total factor productivity in several industries: four sectors of high technological intensity and four of low technological intensity were selected to assess this hypothesis. Production functions were estimated and their residuals used to compute productivity. The correlation between residuals and explanatory variables inherent in this procedure, and the simultaneity, omitted-variable, and selection biases it implies, were taken into account. One goal of the study was to identify whether a particular method is more suitable for estimating production functions in industries of low or high technological intensity. Several estimation methods were examined, including those of Olley & Pakes and Levinsohn & Petrin. The results show that, for industries of both low and high technological intensity, the Olley & Pakes estimates are marginally better than those of Levinsohn & Petrin, but not by enough to discard the latter method. The sensitivity of the results to the different methods suggests that all of them should be consulted. In addition, the work finds that the sectors studied experienced productivity decline or stagnation from 1996 to 2005.
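To make the residual-based procedure concrete, here is a minimal sketch of the naive baseline that the dissertation's methods improve upon: a Cobb-Douglas production function estimated by OLS on simulated firm data, with the residual taken as log-TFP. The Olley & Pakes and Levinsohn & Petrin corrections for simultaneity (which use investment and intermediate inputs as proxies) are not reproduced; all data and coefficients below are illustrative.

```python
import numpy as np

# Simulated firm-level data: log output (y), log capital (k), log labor (l).
rng = np.random.default_rng(0)
n = 500
k = rng.normal(4.0, 1.0, n)
l = rng.normal(2.0, 0.8, n)
tfp = rng.normal(0.0, 0.3, n)            # unobserved log-productivity
y = 0.5 + 0.35 * k + 0.55 * l + tfp      # Cobb-Douglas in logs

# OLS estimate of the production function: y = b0 + bk*k + bl*l + e.
# (Naive: ignores the simultaneity bias the dissertation addresses.)
X = np.column_stack([np.ones(n), k, l])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The residual is the naive log-TFP estimate.
log_tfp_hat = y - X @ beta
print("estimated elasticities:", beta[1:])
print("mean log-TFP residual:", log_tfp_hat.mean())
```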
582

Regresní a korelační analýza časového vývoje počtu dopravních nehod při přepravě nebezpečných látek ve vybraném regionu. / Regression and correlation analysis of the time evolution of the number of traffic accidents in the transport of dangerous substances in a selected region.

VÁVRA, Martin January 2012 (has links)
The aim of this thesis was to conduct a statistical survey and measure the statistical dependences of the time evolution of traffic accident rates in the transport of dangerous solid, liquid, and gaseous substances, including their total number and cases involving leakage of these substances, in a selected region (the Czech Republic). The purpose was to verify two basic hypotheses, H1 and H2, and five sub-hypotheses, H11 through H15. Methods of descriptive and mathematical statistics were used, in particular regression and correlation analysis for hypothesis H1; to verify hypothesis H2, a nonparametric normality test was applied. Verification of H1 and its sub-hypotheses H11 through H15 demonstrated a linear regression with negative correlation in the development of traffic accidents in dangerous-substance transport over annual time units (2002 to 2011). Verification of H2 demonstrated normality in the distribution of the number of such accidents within individual months of the monitored period 2007 to 2011. The contributions of this thesis are the proposed sequence of statistical methods for examining the research topic and the application of those methods to the number of traffic accidents in dangerous-substance transport. Based on these results, follow-up research is suggested: a survey of prevention measures or other factors behind the negative correlation, statistical surveys and dependence measurements in individual regions of the Czech Republic, or investigation of the theoretical distribution of the number of such accidents over a different time unit.
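As an illustration of the H1- and H2-type analyses described above, the sketch below runs a linear regression with correlation on hypothetical annual accident counts and a normality test on hypothetical monthly counts. The thesis does not name its specific normality test; Shapiro-Wilk is used here as a stand-in, and all numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical annual counts of dangerous-goods transport accidents, 2002-2011.
years = np.arange(2002, 2012)
accidents = np.array([118, 110, 104, 99, 95, 90, 84, 80, 76, 70])

# Linear regression of counts on time; a negative slope with a strongly
# negative correlation coefficient mirrors the H1-type finding.
res = stats.linregress(years, accidents)
print(f"slope={res.slope:.2f}, r={res.rvalue:.3f}, p={res.pvalue:.4f}")

# Normality check of monthly counts (H2-type verification); Shapiro-Wilk
# is an assumed stand-in for the unnamed test in the thesis.
monthly = np.array([7, 9, 8, 10, 6, 9, 11, 8, 7, 9, 10, 8])
stat, p = stats.shapiro(monthly)
print(f"Shapiro-Wilk: W={stat:.3f}, p={p:.3f}")
```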
583

Estimation non-paramétrique du quantile conditionnel et apprentissage semi-paramétrique : applications en assurance et actuariat / Nonparametric estimation of conditional quantile and semi-parametric learning : applications on insurance and actuarial data

Knefati, Muhammad Anas 19 November 2015 (has links)
The thesis consists of two parts: one devoted to the estimation of conditional quantiles and one to supervised learning. The conditional-quantile part is organized into three chapters. Chapter 1 introduces local linear regression and presents the methods most used in the literature to estimate the smoothing parameter. Chapter 2 reviews existing nonparametric estimation methods for the conditional quantile and compares them in numerical experiments on simulated and real data. Chapter 3 proposes a new conditional quantile estimator based on kernels that are asymmetric in x; under certain hypotheses, this estimator outperforms the usual ones. The supervised-learning part also comprises three chapters. Chapter 4 introduces statistical learning and the basic concepts used in this part. Chapter 5 reviews conventional methods of supervised classification. Chapter 6 proposes a method for transferring a semiparametric learning model; its performance is demonstrated in numerical experiments on morphometric and credit-scoring data.
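The following is a minimal sketch of a kernel-based conditional quantile estimator of the general kind discussed in Chapters 2 and 3: a Nadaraya-Watson-weighted conditional CDF, inverted at level tau. It uses a symmetric Gaussian kernel on simulated data; the asymmetric-in-x kernels on which the thesis's new estimator relies are not reproduced.

```python
import numpy as np

def cond_quantile(x0, x, y, tau=0.5, h=0.5):
    """Kernel conditional tau-quantile of y given x = x0: invert a
    Nadaraya-Watson-weighted empirical CDF. Gaussian kernel; the thesis's
    asymmetric-kernel refinement is not reproduced here.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # kernel weights around x0
    w /= w.sum()
    order = np.argsort(y)
    cdf = np.cumsum(w[order])                # weighted conditional CDF
    idx = min(np.searchsorted(cdf, tau), len(y) - 1)
    return y[order][idx]

rng = np.random.default_rng(1)
x = rng.uniform(0, 3, 400)
y = np.sin(x) + rng.normal(0, 0.2, 400)
print(cond_quantile(1.5, x, y, tau=0.9))     # upper conditional quantile at x0=1.5
```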
584

Contributions à l'estimation de quantiles extrêmes. Applications à des données environnementales / Some contributions to the estimation of extreme quantiles. Applications to environmental data.

Methni, Jonathan El 07 October 2013 (has links)
This thesis is set in the context of extreme-value statistics and makes two main contributions. In the recent literature, a model of tail distributions has been introduced that encompasses both Pareto-type distributions and Weibull tail-distributions, thereby modeling the two main types of decay of the survival function. An estimator of extreme quantiles has been deduced from this model, but it depends on two unknown parameters, making it useless in practical situations. The first contribution of this thesis is to propose estimators of these parameters. Plugging them into the previous extreme-quantile estimator allows extreme quantiles of Pareto-type and Weibull tail-distributions to be estimated in a unified way. The asymptotic distributions of the three new estimators are established and their efficiency is illustrated in a simulation study and on a real data set of exceedances of the Nidd river in Yorkshire, England. The second contribution is the introduction and estimation of a new risk measure, the Conditional Tail Moment, defined as the moment of order a>0 of the loss distribution above the quantile of order p in (0,1) of the survival function. Estimating the Conditional Tail Moment makes it possible to estimate all risk measures based on conditional moments, such as the Value-at-Risk, the Conditional Tail Expectation, the Conditional Value-at-Risk, the Conditional Tail Variance, and the Conditional Tail Skewness. The focus here is on estimating these risk measures for extreme losses, i.e. when p tends to 0 as the sample size increases, under the assumptions that the loss distribution is heavy-tailed and depends on a covariate. The estimation method combines nonparametric kernel methods with extreme-value statistics. The asymptotic distribution of the estimators is established, and their finite-sample behavior is illustrated both on simulated data and on a real data set of daily rainfall in the Cévennes-Vivarais region of France.
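The Conditional Tail Moment as defined above lends itself to a simple empirical sketch: estimate the level exceeded with probability p, then average the a-th power of the losses beyond it. The extreme-value extrapolation and covariate-dependent kernel smoothing that the thesis actually develops are not shown; data and parameters are illustrative.

```python
import numpy as np

def conditional_tail_moment(losses, a=1.0, p=0.05):
    """Empirical Conditional Tail Moment: E[X^a | X > x_p], where x_p is
    the level exceeded with probability p (the (1-p)-quantile). Naive
    empirical version; no extreme-value extrapolation or covariate.
    """
    threshold = np.quantile(losses, 1.0 - p)
    tail = losses[losses > threshold]
    return np.mean(tail ** a)

rng = np.random.default_rng(2)
losses = rng.pareto(3.0, 10_000) + 1.0                 # heavy-tailed losses

cte = conditional_tail_moment(losses, a=1.0, p=0.01)   # Conditional Tail Expectation
ctm2 = conditional_tail_moment(losses, a=2.0, p=0.01)  # second tail moment
print(f"CTE={cte:.3f}, Conditional Tail Variance={ctm2 - cte**2:.3f}")
```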
585

Ecologia trófica de anfíbios anuros: relações filogenéticas em diferentes escalas / Trophic ecology of anuran amphibians: phylogenetic relationships at different scales

Amado, Talita Ferreira 17 April 2014 (has links)
Understanding the origin and maintenance of current biodiversity, and the mechanisms that operate on it, is a major goal of ecology. Species ecology can be influenced by different factors at different scales. There are three approaches to the ecological differences between species: the first holds that differences result from current processes acting on niche characteristics (e.g. diet, time, space); the second that differences are explained by random patterns of speciation, extinction, and dispersal; the third that historical events explain the formation and composition of species in communities. This study evaluates the influence of phylogenetic relationships in determining ecological characteristics in amphibians (globally) and thereby tests whether ecological differences between anuran species are the result of ancient pre-existing divergences or of more recent ecological interactions. A further objective is to verify whether ecological, historical, or current characteristics determine the size of species' geographical distributions. Diet data for the trophic-ecology analysis were collected from the published literature. A nonparametric MANOVA was performed to test for phylogenetic effects in diet shifts across anuran history, with the aim of identifying the main factors that allow the coexistence of anuran species and the main nodes of the amphibian phylogeny responsible for the differences currently observed in their trophic niches. A phylogenetic regression was performed to analyze whether niche breadth, body size, and evolutionary age determine the size of the geographical distribution of Amazonian amphibians. New contributions to the knowledge of the major ecological patterns of anurans are discussed under a phylogenetic perspective.
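As a rough illustration of a nonparametric, permutation-based MANOVA of the kind mentioned above, the sketch below tests whether diet-composition vectors differ between two clades by permuting group labels. It is a hypothetical stand-in: the statistic and data are invented, and the phylogenetic non-independence that the study must handle is ignored here.

```python
import numpy as np

def permanova_like(X, groups, n_perm=999, seed=0):
    """Permutation test for multivariate group differences, in the spirit
    of a nonparametric MANOVA. Statistic: ratio of mean between-group to
    mean within-group squared Euclidean distance; null distribution from
    label permutations. Phylogenetic structure is not accounted for.
    """
    rng = np.random.default_rng(seed)
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    off = ~np.eye(len(groups), dtype=bool)              # mask out the diagonal

    def stat(g):
        same = g[:, None] == g[None, :]
        return d[~same].mean() / d[same & off].mean()

    observed = stat(groups)
    perms = [stat(rng.permutation(groups)) for _ in range(n_perm)]
    p = (1 + sum(s >= observed for s in perms)) / (n_perm + 1)
    return observed, p

# Toy diet-composition matrix: 20 species in each of two clades, 4 prey axes.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (20, 4)), rng.normal(0.8, 1.0, (20, 4))])
groups = np.array([0] * 20 + [1] * 20)
print(permanova_like(X, groups))
```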
586

O uso de quase U-estatísticas para séries temporais uni e multivariadas / The use of quasi U-statistics for univariate and multivariate time series

Valk, Marcio 17 August 2018 (has links)
Advisor: Aluísio de Souza Pinheiro. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. / Classification and clustering of time series are problems widely explored in the current literature, and many techniques have been proposed to solve them. However, the necessary restrictions generally make the procedures specific and applicable only to a certain class of time series, and many of these approaches are empirical. We present methods for classification and clustering of time series based on quasi U-statistics (Pinheiro et al. (2009) and Pinheiro et al. (2010)). Metrics based on tools well known in the time-series literature, including the periodogram and the sample autocorrelation, are used as kernels of the U-statistics. Three main situations are considered: univariate series, multivariate series, and series with outliers. The asymptotic normality of the proposed tests is demonstrated for a wide class of metrics and models. The methods are also studied by simulation and illustrated by application to a real data set.
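To illustrate the kind of spectral metric that can serve as a kernel in such quasi U-statistics, the sketch below computes a log-periodogram distance between time series and shows that it separates series with different dynamics. The metric and data are illustrative assumptions, not the thesis's actual kernels or test statistics.

```python
import numpy as np

def periodogram_distance(x, y):
    """Mean squared log-periodogram distance between two equal-length
    series: a simple spectral dissimilarity of the kind usable as a
    U-statistic kernel (illustrative choice)."""
    def periodogram(z):
        z = (z - z.mean()) / z.std()
        return (np.abs(np.fft.rfft(z)) ** 2 / len(z))[1:]  # drop frequency 0
    px, py = periodogram(x), periodogram(y)
    return np.mean((np.log(px) - np.log(py)) ** 2)

rng = np.random.default_rng(4)

def ar1(phi, n=256):
    """Simulate an AR(1) path with coefficient phi."""
    e = rng.normal(0, 1, n)
    z = np.zeros(n)
    for t in range(1, n):
        z[t] = phi * z[t - 1] + e[t]
    return z

a, b, c = ar1(0.8), ar1(0.8), ar1(-0.5)
# Same dynamics -> small distance; different dynamics -> large distance.
print(periodogram_distance(a, b), periodogram_distance(a, c))
```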
587

Associação entre diabetes mellitus e demência: estudo neuropatológico / Association between diabetes mellitus and dementia: a neuropathologic study

Maria Niures Pimentel dos Santos Matioli 05 September 2016 (has links)
The scientific literature has debated the existence of an association between diabetes mellitus (DM) and dementia, Alzheimer's disease (AD), and vascular dementia (VaD). DM is a known risk factor for cerebrovascular disease and VaD, but there is still no consensus on the real role of DM in the development of AD neuropathology. Objectives: to investigate the association between DM and dementia and between DM and the neuropathology (NP) of AD and VaD. Methods: Data were collected from the cases included in the Brain Bank of the Brazilian Aging Brain Study Group between 2004 and 2015. Cases were divided into two groups: non-diabetics and diabetics. The clinical diagnosis of dementia was determined by scores >= 1.0 on the Clinical Dementia Rating (CDR) and >= 3.42 on the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Etiological diagnoses of dementia were determined by neuropathological examination using immunohistochemistry. The proportions of dementia, AD, and VaD cases among non-diabetics and diabetics were investigated, as well as the relationship between DM and neuritic plaques (NPq), neurofibrillary tangles (NFT), and vascular neuropathology. The statistical analyses applied were the Mann-Whitney test and multiple linear regression for quantitative variables, and the chi-square test and multiple logistic regression for categorical variables. Results: The total sample included 1,037 subjects: 758 (73.1%) non-diabetics and 279 (26.9%) diabetics. Dementia was present in 27.8% of diabetics. DM did not increase the frequency of dementia (OR: 1.22; 95% CI: 0.81-1.82; p=0.34). DM was not associated with NFT (p=0.81), NPq (p=0.31), the infarct group (p=0.94), cerebral amyloid angiopathy (p=0.42), or hyaline arteriolosclerosis (p=0.07). After adjustment for demographic variables and vascular risk factors, DM was not associated with AD or vascular neuropathology. Conclusion: DM is not associated with dementia or with the neuropathology of AD and VaD.
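For the association measure reported above, here is a minimal sketch of an unadjusted odds ratio with a Wald 95% confidence interval from a 2x2 table. The counts are hypothetical, chosen only to loosely echo the sample proportions in the abstract, and do not reproduce the study's adjusted logistic-regression models.

```python
import numpy as np

# Hypothetical 2x2 counts: diabetic status by dementia status
# (illustrative only, not the study's actual cell counts).
dem_diab, nodem_diab = 78, 201        # diabetics with / without dementia
dem_nodiab, nodem_nodiab = 190, 568   # non-diabetics with / without dementia

# Unadjusted odds ratio and Wald 95% CI computed on the log scale.
or_hat = (dem_diab * nodem_nodiab) / (nodem_diab * dem_nodiab)
se_log = np.sqrt(1/dem_diab + 1/nodem_diab + 1/dem_nodiab + 1/nodem_nodiab)
lo, hi = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log)
print(f"OR={or_hat:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```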
588

Escala cartográfica linear: estratégias de ensino-aprendizagem junto aos estudantes de geografia do IGDEMA/UFAL - 2013 / Linear cartographic scale: teaching/learning strategies with students of geography of the IGDEMA/UFAL 2013

Umbelino Oliveira de Andrade 16 March 2015 (has links)
A significant proportion of the undergraduates in the Geography courses of the IGDEMA/UFAL have difficulty learning cartography, particularly linear cartographic scale. Very few papers have identified similar situations in other universities in Brazil and proposed mitigating alternatives, and those emphasize teaching-degree courses. In this context, this work aimed to develop a procedure for optimizing the learning of linear cartographic scale through prior student awareness and motivation, and through bilateral commitments, in a teaching/learning process applied to second-term Geography undergraduates of the IGDEMA/UFAL in 2013/2. The theoretical framework comprised one concept from pedagogical psychology (the trilateral educational process), two concepts from socio-constructivist theory (internalization of higher psychological functions and the zone of proximal development), and andragogy theory. Consistent with this objective and framework, the expository teaching method was used, adapted to the target pedagogical process. The process involved a prior-assessment phase (preparatory exposition and practice, followed by dialogue) and a final-assessment phase (more concentrated expositions and practices). Because it is preponderant, the final assessment had to meet planning requirements and administrative procedures in order to minimize the relative unreliability of its scores, and then undergo the two mandatory steps of validation. The first step, verification of validity, was carried out through a qualitative process assessing the representativeness of its content with respect to the domain of cartographic scale and its learning; the second, verification of reliability, was carried out through statistical analysis of the internal consistency of its items. As the final assessment met these validation requirements, its learning measures were considered reliable for the tests of differences applied jointly with the corresponding measures from the prior assessment, yielding the level of success of the applied pedagogical process. As a result, the comparison of the two assessments did not indicate the expected evolution in students' grades. Among the causes of this result, for a large share of the students, are the following: the applied process proved too ambitious; the practice of varied exercises, even with the aid of worked calculations, proved a challenge; and conversions of cartographic scale proved problematic. The conclusion is that this teaching/learning process needs to be partly revised: pedagogical procedures are still needed for students who remain dependent owing to limiting factors, particularly an insufficient mathematical foundation.
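Since the learning target is linear cartographic scale itself, a short worked example may help: a scale of 1:N means one map unit represents N ground units, so conversions reduce to a multiplication and a unit change. The helper functions below are hypothetical illustrations, not material from the thesis.

```python
# Linear cartographic scale: 1:N means 1 map unit represents N ground units.

def ground_distance_m(map_cm, scale_denominator):
    """Ground distance in metres for a map measurement in centimetres."""
    return map_cm * scale_denominator / 100.0   # 100 cm per metre

def map_distance_cm(ground_m, scale_denominator):
    """Map distance in centimetres for a ground distance in metres."""
    return ground_m * 100.0 / scale_denominator

print(ground_distance_m(4.0, 50_000))   # 4 cm on a 1:50,000 map -> 2000.0 m
print(map_distance_cm(1500.0, 25_000))  # 1.5 km on a 1:25,000 map -> 6.0 cm
```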
589

Modelagem não-paramétrica da dinâmica da taxa de juros instantânea utilizando contratos futuros da taxa média dos depósitos interfinanceiros de 1 dia (DI1) / Nonparametric modeling of the instantaneous interest rate dynamics using One-Day Interbank Deposit (DI1) futures contracts

Diaz, José Ignacio Valencia 26 August 2013 (has links)
Prediction models based on nonparametric estimation are under continuous development and have been permeating the quantitative community. Their main feature is that they do not take the form of the probability density functions as known a priori, but let the data build the densities themselves. This work implements the nonparametric pooled estimators of Sam and Jiang (2009) for the drift and diffusion functions of the short-rate diffusion process, using yield series of different maturities provided by One-Day Interbank Deposit futures contracts (DI1). The estimators are built from kernel functions and optimized over a particular kernel shape, here the Epanechnikov kernel, and a smoothing parameter (bandwidth). Empirical experience indicates that the smoothing parameter is critical for finding the probability density function that yields the optimal estimate in terms of MISE (mean integrated squared error) when testing the model with the traditional k-fold cross-validation method. Exceptions arise when the series are not of appropriate length: the structural break in the diffusion process of the Brazilian short rate since 2006 requires shortening the series at the cost of reducing the predictive power of the model. This structural break reflects the maturing of the Brazilian market as it converges toward mature markets, and it largely explains the unsatisfactory performance of the proposed estimator.
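A minimal sketch of the kernel machinery described above: Nadaraya-Watson estimators of the short rate's drift and squared diffusion using the Epanechnikov kernel, evaluated on a simulated mean-reverting path standing in for a DI1-implied rate series. The pooling across maturities from Sam and Jiang (2009) and the k-fold cross-validated bandwidth choice are not reproduced; the bandwidth and data here are illustrative.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel, the kernel shape adopted in the dissertation."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def nw_drift_diffusion(r, dt, grid, h):
    """Nadaraya-Watson estimates of the drift and squared diffusion of a
    short rate observed at intervals dt: mu(g) ~ E[dX | X=g]/dt and
    sigma^2(g) ~ E[dX^2 | X=g]/dt. Single-series sketch only."""
    x, dx = r[:-1], np.diff(r)
    mu, sig2 = [], []
    for g in grid:
        w = epanechnikov((x - g) / h)
        s = w.sum()
        if s == 0.0:                 # no observations within the bandwidth
            mu.append(np.nan)
            sig2.append(np.nan)
            continue
        w /= s
        mu.append((w * dx).sum() / dt)
        sig2.append((w * dx ** 2).sum() / dt)
    return np.array(mu), np.array(sig2)

# Simulated mean-reverting path standing in for a DI1-implied short rate.
rng = np.random.default_rng(5)
dt, n = 1 / 252, 2000
r = np.empty(n)
r[0] = 0.11
for t in range(1, n):
    r[t] = r[t - 1] + 2.0 * (0.10 - r[t - 1]) * dt + 0.02 * np.sqrt(dt) * rng.normal()

grid = np.linspace(r.min(), r.max(), 5)
mu_hat, sig2_hat = nw_drift_diffusion(r, dt, grid, h=0.01)
print(mu_hat, sig2_hat)
```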
590

Optimum Savitzky-Golay Filtering for Signal Estimation

Krishnan, Sunder Ram January 2013 (has links) (PDF)
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein's lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein's unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and the bandwidth/smoothing parameter. This is a classic problem in statistics, and certain algorithms relying on the derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (all of these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum-MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially adaptive regression. We observe that the parameters are chosen so as to trade off the squared bias and variance quantities that constitute the MSE. We also indicate the advantages accruing from incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the application of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data. The denoising algorithms are compared with other standard, performant methods available in the literature, both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of the optimum parameters of a linear, shift-invariant filter. This interpretation draws motivation from the hallmark paper of Savitzky and Golay and from Schafer's recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay showed, in their original Analytical Chemistry article, that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed.
They provided tables of impulse-response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing the filter's impulse-response length/3 dB bandwidth, which are inversely related. We observe that the MMSE solution is such that the chosen S-G filter has a longer impulse response (equivalently, a smaller cutoff frequency) at relatively flat portions of the noisy signal, so as to smooth noise, and vice versa at locally fast-varying portions, so as to capture the signal patterns. We also provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, the filter coefficients can be pre-computed, and the whole problem of delta-feature computation becomes efficient; indeed, we observe an advantage by a factor of 10^4 from using S-G filtering over actual LS polynomial fitting and evaluation. Thereafter, we study experimentally the properties of first- and second-order derivative S-G filters of certain orders and lengths. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters with those of the conventional approach followed in HTK, where Furui's regression formula is used. The recognition accuracies in both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement; the accuracies also improve with longer filter lengths for a particular order. In terms of computation latency, S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas these must be obtained sequentially with the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually motivated loss functions such as Itakura-Saito (IS). We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and the unbiased estimator of the risk corresponding to the IS distortion is derived using an approximate Taylor-series analysis under a high signal-to-noise-ratio assumption. The exposition is general, since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most of the common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
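Because S-G smoothing and derivative (delta-feature) filtering are both plain convolutions with precomputed coefficients, they are easy to demonstrate; the sketch below uses SciPy's savgol_filter on a simulated noisy sinusoid. The window length and polynomial order are illustrative choices, not the thesis's SURE-optimized parameters.

```python
import numpy as np
from scipy.signal import savgol_filter

# Savitzky-Golay filtering: LS polynomial fitting over a sliding window,
# implemented as convolution with precomputed coefficients.
rng = np.random.default_rng(6)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + rng.normal(0, 0.2, t.size)

# Smoothing (zeroth-derivative) S-G filter.
smoothed = savgol_filter(noisy, window_length=31, polyorder=3)

# First-derivative S-G filter (bandpass, per the thesis's observation);
# delta is the sample spacing so the derivative is taken w.r.t. t.
deriv1 = savgol_filter(noisy, window_length=31, polyorder=3, deriv=1,
                       delta=t[1] - t[0])

true_deriv = 2 * np.pi * 3 * np.cos(2 * np.pi * 3 * t)
print("smoothing MSE:", np.mean((smoothed - clean) ** 2))
print("derivative MSE:", np.mean((deriv1 - true_deriv) ** 2))
```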
