161

Kernel smoothing dos dados de chuva no Nordeste / Kernel smoothing of rainfall data in Northeast Brazil

BARBOSA, Nyedja Fialho Morais 22 March 2013
Northeastern Brazil has great climatic diversity and is considered a very complex region, attracting the interest of researchers from around the world. Rainfall over the region is seasonal, behaving most intensely over three internal zones in different periods of the year, each lasting about three months, and it is strongly influenced by El Niño, La Niña, and other phenomena acting over the tropical Pacific and Atlantic basins. In this work the mathematical-computational technique of Kernel Smoothing interpolation was applied to rainfall data over Northeastern Brazil collected from 1904 to 1998 at 2,283 conventional weather stations located in all states of the Northeast. The calculations were run on the GPU cluster "Cluster Neumann" of the Graduate Program in Biometry and Applied Statistics, Department of Statistics and Informatics, UFRPE, using the "Kernel" software written in C and CUDA. This tool made it possible to interpolate more than 26 million rainfall measurements over the entire Northeast, to generate rainfall-intensity maps for the whole region, to produce estimates in areas with missing data, and to compute precipitation statistics for the Northeast at both the general and seasonal scope. From the interpolations it was possible to identify, within the study period, the driest and wettest years, the spatial distribution of rainfall in each month, and the characteristic rainfall patterns during El Niño and La Niña episodes.
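
As a rough illustration of the interpolation technique described above, a Nadaraya-Watson kernel smoother can be written in a few lines. This is a hypothetical sketch, not the authors' C/CUDA implementation; the station coordinates, rainfall values, and bandwidth are invented for the example.

```python
import numpy as np

def kernel_smooth(stations, values, grid, bandwidth):
    """Nadaraya-Watson kernel smoothing with a Gaussian kernel.

    stations: (n, 2) array of station coordinates (lon, lat)
    values:   (n,) array of rainfall measurements
    grid:     (m, 2) array of points to interpolate onto
    bandwidth: kernel bandwidth, in the same units as the coordinates
    """
    # Squared distances between every grid point and every station
    d2 = ((grid[:, None, :] - stations[None, :, :]) ** 2).sum(axis=2)
    # Gaussian kernel weights
    w = np.exp(-0.5 * d2 / bandwidth**2)
    # Weighted average of station values at each grid point
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy example with made-up stations and annual rainfall totals (mm)
stations = np.array([[-35.0, -8.0], [-38.5, -3.7], [-44.3, -2.5]])
values = np.array([1200.0, 800.0, 1600.0])
grid = np.array([[-37.0, -5.0], [-40.0, -7.0]])
print(kernel_smooth(stations, values, grid, bandwidth=2.0))
```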
162

Testes de hipoteses para dados funcionais baseados em distancias: um estudo usando splines / Distance-based hypothesis tests for functional data: a study using splines

Souza, Camila Pedroso Estevam de 25 April 2008
Advisor: Ronaldo Dias / Master's thesis - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Computação Cientifica / Abstract: Advances in modern technology have facilitated the collection and analysis of high-dimensional data, or data that are repeated measurements of the same subject. When the data are recorded densely over time, often by machine, they are typically termed functional or curve data, with one observed curve (or function) per subject. The statistical analysis of a sample of n such curves is commonly termed functional data analysis, or FDA. Conceptually, functional data are continuously defined; in practice, of course, they are usually observed at discrete points. There is no general requirement that the data be smooth, but smoothness or other regularity is often a key aspect of the analysis, and in some cases derivatives of the observed functions will be important. In this project different smoothing techniques are presented and discussed, mainly those based on spline functions. Note: the complete abstract is available in the full electronic thesis. / Master's in Statistics (Nonparametric Statistics)
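
A minimal sketch of the spline-based smoothing the abstract refers to, using SciPy's B-spline smoother on noisy discrete observations of a curve; the sample curve and smoothing parameter are assumptions for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Noisy discrete observations of one functional datum (one curve)
t = np.linspace(0, 1, 100)
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.2, size=t.size)

# Fit a smoothing cubic B-spline; s controls the smoothness/fit trade-off
tck = splrep(t, y, k=3, s=2.0)
smooth = splev(t, tck)          # reconstructed smooth curve
deriv = splev(t, tck, der=1)    # first derivative, often of interest in FDA
```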
163

Filtro de difusão anisotrópica anômala como método de melhoramento de imagens de ressonância magnética nuclear ponderada em difusão / Anisotropic anomalous diffusion filter as an image enhancement method for diffusion-weighted nuclear magnetic resonance imaging

Antonio Carlos da Silva Senra Filho 25 July 2013
Smoothing methods based on diffusion processes are often used as a preliminary step in image-processing pipelines. Although anomalous diffusion is a well-known physical process, it has not been applied to image smoothing the way classical diffusion has. This dissertation proposes, implements, and evaluates anomalous diffusion filters, both isotropic and anisotropic, as an enhancement method for diffusion-weighted images (DWI) and diffusion tensor images (DTI) in magnetic resonance imaging (MRI), generalizing isotropic and anisotropic diffusion through the concept of anomalous diffusion in image processing. The two-dimensional diffusion equations were implemented computationally and applied to MRI images to evaluate their potential as enhancement filters, using DTI and DWI acquisitions from healthy volunteers. The study found that methods based on anomalous diffusion improve the quality of DWI and DTI image processing as measured by the signal-to-noise ratio (SNR) and the structural similarity index (SSIM), and optimal filter parameters were determined for the different images and situations evaluated, as a function of the control parameters, particularly the anomalous parameter, called q. The results suggest not only better quality in the processed DTI and DWI images, but also a possible reduction in the number of acquisition repetitions needed in the MRI sequence for a predetermined SNR.
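
For intuition, a Perona-Malik-style anisotropic diffusion step is sketched below, with the edge-stopping diffusivity raised to a hypothetical anomalous exponent q. This is an illustrative guess at the filter family described, not the dissertation's actual formulation; kappa, q, the step size, and the iteration count are all invented.

```python
import numpy as np

def anomalous_anisotropic_step(img, kappa=0.1, q=1.2, dt=0.2):
    """One explicit diffusion step on a 2-D image.

    The diffusivity exp(-(|grad|/kappa)^2) is raised to q as a stand-in
    for the 'anomalous' generalization (q = 1 recovers the classical
    Perona-Malik anisotropic filter).
    """
    # Finite differences toward each of the four neighbors
    n = np.roll(img, -1, axis=0) - img
    s = np.roll(img, 1, axis=0) - img
    e = np.roll(img, -1, axis=1) - img
    w = np.roll(img, 1, axis=1) - img
    flux = 0.0
    for g in (n, s, e, w):
        c = np.exp(-(g / kappa) ** 2) ** q  # edge-stopping diffusivity
        flux = flux + c * g
    return img + dt * flux

# Smooth a noisy toy image for a few iterations
img = np.random.default_rng(1).normal(size=(64, 64))
for _ in range(20):
    img = anomalous_anisotropic_step(img)
```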
164

Uma abordagem para a previsão de carga crítica do sistema elétrico brasileiro / An approach for critical load forecasting of the Brazilian power system

Barreto, Mateus Neves, 1989- 03 July 2014
Advisors: Takaaki Ohishi, Ricardo Menezes Salgado / Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: The Brazilian Power System supplies around 97% of national energy demand. Because of the broad Brazilian territory, a large-scale transmission system is required, owing to the long distances between the hydropower plants and the main concentration of demand in southeastern Brazil. To ensure secure and economical operation of the Brazilian Power System, the operation of the generation and transmission system is analyzed under critical load conditions; the idea is to prepare the system to withstand the most severe load conditions. The critical load curve is calculated for each month with hourly (or finer) discretization: it is composed of the minimum load observed in the month from the first to the eighth hour, and of the maximum load observed in the month for the remaining hours. Using demand histories belonging to agents of the Brazilian power sector, it was possible to build a five-year record (60 months) of critical load curves. These data were made available by the National System Operator (ONS) as part of a research project, through a decision-support system named SysPrev. This dissertation proposes three models to forecast the critical load curve: two based on Artificial Neural Networks and one based on Holt-Winters (HW) exponential smoothing. All models produced satisfactory results, and the exponential smoothing model stood out against the other two, reaching mean absolute errors near 3%. This is explained by the fact that the historical series of critical load curves exhibit trend and seasonality, and the HW model is designed specifically for time series with these characteristics. / Master's in Electrical Engineering (Computer Engineering)
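
As a sketch of the Holt-Winters technique that performed best above, the additive seasonal recursions can be coded directly. The smoothing constants and the seasonality period below are assumptions for the example, not values from the dissertation.

```python
def holt_winters_additive(y, period, alpha=0.3, beta=0.05, gamma=0.2):
    """Additive Holt-Winters smoothing; returns one-step-ahead forecasts."""
    # Naive initialization from the first two seasons
    level = sum(y[:period]) / period
    trend = (sum(y[period:2 * period]) - sum(y[:period])) / period**2
    season = [y[i] - level for i in range(period)]
    forecasts = []
    for t, obs in enumerate(y):
        s = season[t % period]
        forecasts.append(level + trend + s)  # forecast before seeing obs
        prev_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % period] = gamma * (obs - level) + (1 - gamma) * s
    return forecasts

# Toy series with an assumed period of 4
y = [10, 14, 8, 12, 11, 15, 9, 13, 12, 16, 10, 14]
print(holt_winters_additive(y, period=4)[-4:])
```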
165

[en] NON GAUSSIAN STATE SPACE MODELS FOR COUNT DATA: THE DURBIN AND KOOPMAN METHODOLOGY / [pt] MODELOS DE ESPAÇO DE ESTADO NÃO GAUSSIANOS PARA DADOS DE CONTAGEM: METODOLOGIA DURBIN-KOOPMAN

MAYTE SUAREZ FARINAS 15 February 2006
[en] The aim of this thesis is to present and investigate the methodology of Durbin and Koopman (DK) for estimating non-Gaussian state space time series models, within the context of structural models. DK's approach is based on evaluating the likelihood by efficient Monte Carlo simulation, by means of importance sampling and variance-reduction techniques such as antithetic variables and control variables. It also integrates techniques known from the Gaussian case, such as the Kalman filter smoother and the simulation smoothing algorithm. Once the model hyperparameters are estimated, the state, which encapsulates the model's components, is estimated by evaluating its posterior mode. Approximations are then proposed to evaluate the mean and variance of the predictive distribution. Applications using the Poisson model are considered.
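
A stripped-down illustration of the importance-sampling idea at the core of the DK methodology: the non-Gaussian likelihood is estimated as a weighted average over draws from a Gaussian proposal. The local-level Poisson model, the proposal (the prior itself, rather than DK's mode-centered Gaussian approximation), and the sample sizes are invented for the example, and the antithetic/control-variable refinements are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed Poisson counts (toy data)
y = np.array([3, 5, 4, 7, 6, 8, 5, 9])
n = y.size

def log_poisson(y, theta):
    """Log p(y | theta) for Poisson counts with log-intensity theta,
    up to the -log(y!) constant, summed over time."""
    return np.sum(y * theta - np.exp(theta), axis=-1)

# Gaussian random-walk prior on the latent state theta_t (local level)
sigma_eta = 0.3
n_draws = 5000
eta = rng.normal(scale=sigma_eta, size=(n_draws, n))
theta = np.cumsum(eta, axis=1)  # draws from the importance density g

# Importance-sampling estimate of the likelihood: p(y) = E_g[p(y | theta)]
# when g is the prior; computed stably on the log scale
logw = log_poisson(y, theta)
loglik = np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
print("estimated log-likelihood:", loglik)
```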
166

Use of repeated tests and rolling breath averages affects the precision of quantifying the VO2 response profile in moderate intensity cycling.

Pedini, Daniela Marie 08 1900
The purpose of this study was to determine whether working in the field of deaf education, as opposed to general education, results in a higher level of technology integration. A secondary goal was to determine whether deaf educators who are deaf integrate technology at a higher level than their hearing counterparts. The instrument chosen for this study was the LoTi Technology Use Profile, a tool used to explore the role of technology in the classroom. A total of 92 participants were included in the study, of which 48 were regular educators and 44 were deaf educators. The participants were selected from a population pool in which teachers were presumably predisposed to using technology, based on their attendance at a technology training session in the form of a conference or a class. Deaf educators as a whole did not perform as well as general educators on the LoTi scales. Given that the technology-minded general educators who comprised the sample population of this study scored exceptionally high on the LoTi scales, further research is needed to ensure comparability between the two groups. The findings of the current study do suggest, though, that deaf educators who are deaf have the potential to integrate technology to a greater degree than deaf educators who are hearing. Thus, a primary recommendation is to conduct a national LoTi survey of typical, rather than technology-minded, deaf educators as a comparison to the 2004 national survey of typical general educators.
167

[en] METHODOLOGY FOR IMPLEMENTATION OF SYSTEMS TO FORECAST DEMAND: A CASE STUDY IN A CHEMICALS DISTRIBUTOR / [pt] METODOLOGIA PARA IMPLEMENTAÇÃO DE SISTEMAS DE PREVISÃO DE DEMANDA: UM ESTUDO DE CASO EM UM DISTRIBUIDOR DE PRODUTOS QUÍMICOS

LAURA GONÇALVES CARVALHO 25 March 2011
[en] This thesis develops and implements a methodology for sales forecasting and order batch sizing at a wholesale distributor of chemical products. To that end, it applies short-term quantitative demand-forecasting techniques and measures of forecast-error variance, capable of projecting past patterns onto a future scenario, in order to support business decisions in the application of the methodology. Applying the methodology will let the company formalize a currently subjective process, yielding greater accuracy in sales forecasts, lower inventory costs, and a more concrete basis for allocating financial resources.
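
The abstract does not name the specific techniques, so the sketch below pairs one common short-term forecaster (simple exponential smoothing) with the usual forecast-error measures; both the method choice and the demand series are assumptions for illustration.

```python
def ses_forecasts(demand, alpha=0.4):
    """One-step-ahead simple exponential smoothing forecasts."""
    level = demand[0]
    out = []
    for d in demand:
        out.append(level)                    # forecast made before seeing d
        level = alpha * d + (1 - alpha) * level
    return out

demand = [120, 135, 128, 150, 142, 160, 155]
fc = ses_forecasts(demand)
errors = [d - f for d, f in zip(demand, fc)]
mse = sum(e * e for e in errors) / len(errors)          # error-variance proxy
mape = sum(abs(e) / d for e, d in zip(errors, demand)) / len(errors)
print(f"MSE={mse:.1f}  MAPE={mape:.1%}")
```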
168

Exploring online brand choice at the SKU level : the effects of internet-specific attributes

WANG, Yanan 01 January 2004
E-commerce research shows that existing studies of online consumer choice behavior have focused on comparative studies of channel or store choice (online versus offline) or of online store choice (different e-tailers); relatively little effort has been devoted to consumers' online brand choice behavior within a single e-tailer. The goal of this research is to model online brand choice, including generating loyalty variables, setting up a base model, and exploring the effects of Internet-specific attributes, i.e., order delivery, webpage display, and order confirmation, on online brand choice at the SKU level. Specifically, this research adopts the Multinomial Logit Model (MNL) as the estimation method. To minimize model bias, the refined smoothing constants for the loyalty variables (brand loyalty, size loyalty, and SKU loyalty) are generated using the Nonlinear Estimation Algorithm (NEA). The findings suggest that SKU loyalty is a better predictor of online brand choice than brand loyalty and size loyalty. While webpage display has little effect on brand choice, order delivery has a positive effect, and online order confirmation turns out to be helpful in choice estimation. Moreover, online consumers are not sensitive to the net price of the alternatives, but are quite sensitive to price promotion. These results have meaningful implications for marketing promotions in the online environment and suggestions for future research.
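
Loyalty variables in this literature are typically exponential smoothers of past purchases (in the style of Guadagni and Little), fed into MNL choice probabilities. The sketch below shows that construction; the smoothing constant, initial value, and purchase history are invented for the example.

```python
import math

def loyalty_series(purchases, brand, alpha=0.8, init=0.5):
    """Exponentially smoothed loyalty of one brand over a purchase history.

    loyalty_t = alpha * loyalty_{t-1} + (1 - alpha) * 1{purchase_t == brand}
    """
    loyalty, out = init, []
    for p in purchases:
        out.append(loyalty)  # loyalty entering choice occasion t
        loyalty = alpha * loyalty + (1 - alpha) * (1.0 if p == brand else 0.0)
    return out

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities from alternative utilities."""
    m = max(utilities)
    expu = [math.exp(u - m) for u in utilities]  # shift for stability
    z = sum(expu)
    return [e / z for e in expu]

history = ["A", "A", "B", "A", "C", "A"]
print(loyalty_series(history, "A"))
print(mnl_probabilities([0.4, 0.1, -0.2]))
```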
169

The Association Between the Establishment of Audit Committees Composed of Outside Directors and a Change in the Objectivity of the Management Results-Reporting Function: an Empirical Investigation Into Income Smoothing Patterns

Roubi, Raafat Ramadan 12 1900
The purpose of this research was to empirically examine the effect of the establishment of outside audit committees on the objectivity of the management results-reporting practices of those companies that established such committees in response to the New York Stock Exchange mandate effective June 30, 1978. Management income-smoothing behavior is taken as a measurable surrogate for the objectivity of the management results-reporting practices. This research involved testing one research problem. The research question asks, "Will the establishment of outside audit committees by companies that had no such committees prior to the New York Stock Exchange mandate effective June 30, 1978, be associated with a decrease in the degree of smoothing in the net income series for the period after that date relative to the degree of smoothing prior to that date?" Answering this question required the selection of an experimental and a control group, each composed of fifty New York Stock Exchange listed firms. Linear and semi-log regression models were used to measure each firm's degree of income smoothing (defined as reducing the variability of a net income series about its trend line). The change in mean square errors of the experimental and control groups was compared using the chi-square and median tests. Neither the chi-square nor the median test found a statistically significant increase in the objectivity of the management results-reporting function for the firms that established outside audit committees in response to the NYSE mandate effective June 30, 1978.
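
The smoothing measure described, variability of net income about its trend line, reduces to the mean squared error of a trend regression. A minimal sketch under that reading, with a made-up income series:

```python
import numpy as np

def trend_mse(income):
    """Mean squared error of a net income series about its linear trend.

    Lower values indicate a smoother (less variable) series.
    """
    t = np.arange(len(income), dtype=float)
    slope, intercept = np.polyfit(t, income, 1)   # fitted trend line
    residuals = income - (slope * t + intercept)
    return float(np.mean(residuals**2))

income_pre = np.array([10.0, 12.5, 11.0, 14.0, 13.0])   # before committee
income_post = np.array([13.5, 14.2, 15.1, 15.8, 16.4])  # after committee
print(trend_mse(income_pre), trend_mse(income_post))
```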
170

Projekce úmrtnostních tabulek a jejich vplyv na implicitní hodnotu pojišťovny / Projection of mortality tables and their influence on insurance embedded value

Filka, Jakub January 2016
We study the development of mortality tables in the Czech Republic from 1950 to the present. Our aim is to examine six basic models that can potentially be used to describe mortality for people over 60 years of age. The models investigated range from the generally accepted Gompertz-Makeham model to the logistic models of Thatcher and Kannisto; we also introduce the Coale-Kisker and Heligman-Pollard models. Our analysis concentrates mostly on the models' ability to project to the highest ages. Especially for women, whose data do not show as much dispersion as men's, there is a visible trend that is described better by the logistic models than by the Gompertz-Makeham model, which tends to overestimate the probabilities of dying at the highest ages. Keywords: projection of mortality tables, Gompertz-Makeham, logistic models
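
To make the comparison concrete, the Gompertz-Makeham and Kannisto hazards can be evaluated side by side. The parameter values below are invented for illustration; they show how the logistic (Kannisto) form flattens at extreme ages while Gompertz-Makeham keeps growing exponentially, the overestimation the abstract describes.

```python
import math

def gompertz_makeham(x, a=2e-5, b=0.11, c=5e-4):
    """Force of mortality mu(x) = a*exp(b*x) + c."""
    return a * math.exp(b * x) + c

def kannisto(x, a=2e-5, b=0.11):
    """Logistic (Kannisto) hazard: bounded above by 1 as age grows."""
    g = a * math.exp(b * x)
    return g / (1 + g)

for age in (60, 80, 100, 110):
    print(age, round(gompertz_makeham(age), 4), round(kannisto(age), 4))
```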
