151

Regressão binária nas abordagens clássica e Bayesiana / Binary regression in the classical and Bayesian approaches

Amélia Milene Correia Fernandes 16 December 2016 (has links)
The objective of this work is to study the binary regression model under the classical (frequentist) and Bayesian approaches, using the probit, logit, complementary log-log, Box-Cox transformation, and skew-probit link functions. In the classical approach we present the model assumptions and the fitting procedure, and we assess the precision of the estimated parameters by constructing confidence intervals and hypothesis tests. In the Bayesian approach we carry out a comparative study of two methodologies. In the first, we consider non-informative prior densities and use the Metropolis-Hastings algorithm to fit the model. In the second, we introduce auxiliary variables so that the posterior distribution has a known form, which simplifies the implementation of the Gibbs sampler. The introduction of these auxiliary variables, however, can produce correlated draws, which makes it necessary to group the unknown quantities into blocks to reduce the autocorrelation. A simulation study shows that, in the classical setting, the AIC and BIC criteria can be used to choose the best model, and it evaluates whether the coverage of the asymptotic confidence interval agrees with what asymptotic theory predicts. In the Bayesian setting, we find that the use of auxiliary variables yields a more efficient algorithm according to the mean squared error (MSE), mean absolute percentage error (MAPE), and symmetric mean absolute percentage error (SMAPE) criteria. As illustrations we present two applications with real data. The first uses the variation of the Ibovespa index and the variation of the daily closing exchange rate of the American dollar from 2013 to 2016. The second uses an educational dataset (INEP-2013), focusing on the variables that influence whether a student passes.
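The auxiliary-variable scheme described in this abstract is, for the probit link, the classic data-augmentation Gibbs sampler of Albert and Chib (1993): a latent normal variable is drawn for each observation, after which the coefficient vector has a conjugate normal update. A minimal Python sketch under a flat prior on the coefficients, with simulated data (all variable names are illustrative, not taken from the thesis):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulated probit data (illustrative, not the thesis dataset)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-0.5, 1.0, -1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)        # posterior covariance under a flat prior
beta = np.zeros(p)
draws = []

for it in range(2000):
    # 1) z_i | beta, y_i ~ N(x_i' beta, 1), truncated to (0, inf) if y_i = 1
    #    and to (-inf, 0) if y_i = 0; bounds below are in standardized units
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
    # 2) beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1})
    mean = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(mean, XtX_inv)
    if it >= 500:                       # discard burn-in draws
        draws.append(beta)

print(np.mean(draws, axis=0))           # posterior means, close to beta_true
```

The draws of z and beta are correlated across iterations, which is exactly why the thesis discusses blocking the unknowns to reduce autocorrelation.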
152

Regression models to assess the thermal performance of Brazilian low-cost houses: consideration of opaque envelope / Modelos de regressão para avaliação do desempenho térmico de habitações de interesse social: considerações da envolvente opaca

Ana Paula Oliveira Favretto 26 January 2016 (has links)
This study examines the potential to conduct building thermal performance simulation (BPS) of unconditioned low-cost housing during the early design stages. By creating a set of regression models (meta-models) based on EnergyPlus simulations, this research aims to promote and simplify BPS in the building envelope design process. The meta-models are adapted to three Brazilian cities: Curitiba, São Paulo, and Manaus, and provide decision support to designers through rapid feedback that links early design decisions to the building's thermal performance. The low-cost housing unit studied is a detached one-story house of approximately 51 m², comprising two bedrooms, a combined kitchen and living room, and one bathroom; this representative configuration is based on collected data about the most common residence options in several Brazilian cities. The naturally ventilated residence is simulated with the Airflow Network module in EnergyPlus, using the average wind pressure coefficients provided by the software. The parametric simulations vary the house orientation; the U-value, heat capacity, and absorptance of the external walls and the roof; the heat capacity of the internal walls; the window-to-wall ratio; the window type (slider or casement); and the existence and dimensions of horizontal and/or vertical shading devices. The models predict the resulting total degree-hours of discomfort in a year due to heat and cold, based on the comfort limits defined by the adaptive method for naturally ventilated residences in ANSI/ASHRAE Standard 55. The methodology consists of (a) analyzing a set of Brazilian low-cost housing projects and defining a geometric model that represents them; (b) determining a list of design parameters relevant to thermal comfort and defining the value ranges to be considered; (c) defining the input data for the 10,000 parametric simulations used to create and test the meta-models for each analyzed climate; (d) simulating thermal performance with EnergyPlus; (e) using 60% of the simulated cases to develop the regression models; and (f) using the remaining 40% of the data to validate the meta-models. Except for the heat discomfort regression models for Curitiba and São Paulo, the meta-models show R² values above 0.9, indicating accurate predictions when compared with the discomfort computed from the EnergyPlus output data. Application tests show that the meta-models have great potential to guide designers' decisions during early design.
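A meta-model of this kind can be fit and validated with the 60/40 split the abstract describes. A minimal sketch in Python with scikit-learn, where the file and column names are made-up stand-ins for the design parameters, not names from the thesis:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical table of EnergyPlus runs: design parameters -> annual
# degree-hours of heat discomfort (names are illustrative)
df = pd.read_csv("parametric_runs_manaus.csv")   # one row per simulated case
features = ["orientation", "wall_u_value", "wall_heat_capacity",
            "wall_absorptance", "roof_u_value", "roof_absorptance",
            "window_to_wall_ratio", "shading_depth"]
X, y = df[features], df["heat_degree_hours"]

# 60% of the simulated cases to fit the meta-model, 40% held out to validate
X_tr, X_val, y_tr, y_val = train_test_split(X, y, train_size=0.6, random_state=1)

meta_model = LinearRegression().fit(X_tr, y_tr)
print("validation R^2:", r2_score(y_val, meta_model.predict(X_val)))
```

An R² above 0.9 on the held-out 40% is the accuracy threshold the abstract reports for most of its models.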
153

Prediktivní modelování v oblasti řízení kreditních rizik / Predictive Modeling in Credit Risk Management

Švastalová, Iva January 2012 (has links)
The diploma thesis focuses on predictive modeling in credit risk management. Banks and financial institutions use such models to estimate the probability of a client's default in order to decide which clients will be accepted and which rejected. The theoretical part introduces credit scoring and describes discrete choice models; the linear probability model, the probit model, and the logit model are described in detail. The logit model is then used to predict a client's default. The practical part covers the statistical description of the dataset and its preparation before the development of the credit scoring model. This is followed by the estimation of the model on a testing sample, its testing, and the estimation of the model on the full sample, with a description of the individual calculation steps and the outputs of the SPSS program.
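The logit model used here relates the default probability to client characteristics through the logistic function, P(default) = 1 / (1 + e^(-x'β)). A minimal sketch with statsmodels — the thesis itself works in SPSS, and the file and feature names below are invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical application data: default = 1 if the client defaulted, else 0
df = pd.read_csv("loan_applications.csv")   # illustrative file name
X = sm.add_constant(df[["income", "age", "loan_amount", "months_employed"]])
y = df["default"]

logit = sm.Logit(y, X).fit()
print(logit.summary())                      # coefficients, z-tests, pseudo R^2

# Scorecard-style use: predicted default probability and an accept/reject cutoff
df["pd_hat"] = logit.predict(X)
df["decision"] = np.where(df["pd_hat"] < 0.10, "accept", "reject")  # cutoff is illustrative
```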
154

Statistická klasifikace pomocí zobecněných lineárních modelů. / Statistical Classification by means of generalized linear models

Sladká, Vladimíra January 2010 (has links)
The goal of this thesis is to introduce the theory of generalized linear models, namely the probit and logit models, which are widely used for processing medical data. In our concrete case these models are applied to a data file obtained at the teaching hospital in Brno. The aim is to statistically analyze the immune response of child patients as a function of twelve selected genes and to find out which combinations of these genes influence the septic state of the patients.
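Both link functions mentioned here fit in the same GLM framework with a binomial family; only the link changes. A hedged sketch of such a comparison in Python — the gene column names are placeholders, and comparing the fits by AIC is one reasonable choice, not necessarily the thesis's procedure:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sepsis_genes.csv")         # illustrative file name
genes = [f"gene_{i}" for i in range(1, 13)]  # twelve placeholder gene covariates
X = sm.add_constant(df[genes])
y = df["septic"]                             # 1 = septic state observed, 0 = not

# The same binomial GLM fitted with the two links discussed in the thesis
probit = sm.GLM(y, X, family=sm.families.Binomial(sm.families.links.Probit())).fit()
logit = sm.GLM(y, X, family=sm.families.Binomial(sm.families.links.Logit())).fit()

print("probit AIC:", probit.aic, " logit AIC:", logit.aic)
print(probit.summary())          # which gene coefficients are significant
```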
155

Zavedení a aplikace obecného regresního modelu / The Introduction and Application of General Regression Model

Hrabec, Pavel January 2015 (has links)
This thesis summarizes the general linear regression model in detail, including test statistics for coefficients, submodels, and predictions, with particular attention to tests for outliers and high-leverage points. It also describes how to include categorical variables in a regression model. The model was applied to describe the saturation of photographs of bread, where the input variables were the type of flour, the type of additive, and the flour concentration. After identifying the outliers it was possible to create a mathematical model with a high coefficient of determination, which will be useful to experts in the food industry for preliminary identification of the possible composition of bread.
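The outlier and leverage diagnostics described here derive from the hat matrix H = X(X'X)⁻¹X': large diagonal entries h_ii flag high-leverage points, and large studentized residuals flag outliers. A brief sketch with statsmodels, using illustrative variable names and common rule-of-thumb cutoffs rather than the thesis's exact procedure:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bread_photos.csv")   # illustrative file name

# Categorical inputs enter the model through dummy coding, via C(...)
model = smf.ols("saturation ~ C(flour_type) + C(additive) + concentration",
                data=df).fit()

infl = model.get_influence()
leverage = infl.hat_matrix_diag              # h_ii, diagonal of the hat matrix
student = infl.resid_studentized_external    # externally studentized residuals

p, n = model.df_model + 1, int(model.nobs)
flagged = (leverage > 2 * p / n) | (abs(student) > 2)   # rule-of-thumb cutoffs
print(df[flagged])                           # candidate outliers / leverage points
print("R^2:", model.rsquared)
```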
156

FROM THE WAYNE STATE TOLERANCE CURVE TO MACHINE LEARNING: A NEW FRAMEWORK FOR ANALYZING HEAD IMPACT KINEMATICS

Breana R Cappuccilli (12174029) 20 April 2022 (has links)
Despite the alarming incidence rate and potential for debilitating outcomes of sports-related concussion, the underlying mechanisms of injury remain to be fully explained. Since as early as 1950, researchers have aimed to characterize head impact biomechanics through in-lab and in-game investigations. The ever-growing body of literature in this area supports an inherent connection between head kinematics during impact and injury outcomes. Even so, the traditional metrics of peak acceleration, time window, and the Head Injury Criterion (HIC) have outlived their potential; more sophisticated analysis techniques are required to advance the understanding of concussive versus subconcussive impacts. The work presented in this thesis was motivated by the exploration of advanced approaches to (1) the experimental theory and design of impact reconstructions and (2) the characterization of kinematic profiles for model building. These two areas of investigation resulted in refined, systematic approaches to head impact analysis that should begin to replace outdated standards and metrics.
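For reference, the HIC metric the abstract calls outdated is defined as HIC = max over windows [t1, t2] of (t2 − t1)·[(1/(t2 − t1)) ∫ a(t) dt]^2.5, with acceleration in g and the window commonly capped at 15 ms (HIC15). A sketch under those assumptions, with a synthetic pulse rather than real impact data:

```python
import numpy as np

def head_injury_criterion(t, a, max_window=0.015):
    """HIC from a resultant head acceleration trace.

    t: time in seconds, a: acceleration in g (1-D arrays);
    max_window: window cap in seconds (HIC15 by default).
    """
    # Cumulative trapezoidal integral of a(t), so any window
    # integral is a difference of two entries
    ca = np.concatenate([[0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))])
    hic = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (ca[j] - ca[i]) / dt       # mean acceleration over the window
            hic = max(hic, dt * avg ** 2.5)
    return hic

# Illustrative trace: a 10 ms half-sine pulse peaking at 80 g
t = np.linspace(0, 0.02, 400)
a = 80 * np.sin(np.pi * t / 0.01) * (t < 0.01)
print(head_injury_criterion(t, a))
```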
157

Eco-Efficiency and Eco-Productivity Assessments of the States in the United States: A Two-Stage Non-parametric Analysis

Demiral, Elif E., Sağlam, Ümit 01 December 2021 (has links)
This study implements radial and non-radial Data Envelopment Analysis (DEA) models to assess the eco-efficiency and eco-productivity of the 50 states of the United States in 2018. The models are based on three inputs (capital stock, employment, and energy consumption), a single desirable output (real gross domestic product), and a single undesirable output (CO2 emissions). The radial DEA models reveal that at least 32 states operate efficiently. Five states perform at the most productive scale size, whereas 17 states have considerable potential to boost their productive efficiency by enlarging available resources, and 28 states are overinvested in their inputs given their current output levels. The non-radial DEA models show that, overall, the states' capital efficiency is very high, whereas their energy and emission efficiencies are very low, and their eco-productivity is higher than their eco-efficiency. In the second stage of the analysis, non-parametric statistical tests and Tobit regressions are conducted for further investigation. According to the non-parametric tests, high capital stock, labor force, and energy usage do not affect a state's productive efficiency; however, states with low carbon dioxide emissions have significantly higher eco-efficiency and eco-productivity. The Tobit regression results show that nuclear power and renewable energy consumption significantly affect the states' relative efficiencies.
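A radial DEA score of the kind reported here can be computed with the input-oriented CCR envelopment linear program: minimize θ subject to Xλ ≤ θx₀, Yλ ≥ y₀, λ ≥ 0, solved once per state. A compact sketch with scipy — the data arrays are placeholders, and treating the undesirable CO2 output as an extra input is one common modeling choice, not necessarily the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_oriented(X, Y):
    """Input-oriented CCR efficiency score for each DMU.

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Returns efficiency scores theta in (0, 1].
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for k in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(1 + n); c[0] = 1.0                  # minimize theta
        # Inputs:  sum_j lambda_j X[j,i] - theta * X[k,i] <= 0
        A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
        # Outputs: -sum_j lambda_j Y[j,r] <= -Y[k,r]
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                      bounds=[(0, None)] * (1 + n))
        scores[k] = res.fun
    return scores

# Placeholder data: 50 states x (capital, employment, energy, CO2-as-input), GDP
rng = np.random.default_rng(7)
X = rng.uniform(1, 10, size=(50, 4))
Y = rng.uniform(1, 10, size=(50, 1))
print(ccr_input_oriented(X, Y).round(3))
```

States with a score of 1 lie on the efficient frontier, matching the paper's count of efficiently operating states.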
158

Atmospheric behaviors and control measures of persistent organic pollutants: case studies on polybrominated diphenyl ethers and pentachlorophenol / 残留性有機汚染物質の大気挙動と制御方策:ポリ臭素化ジフェニルエーテルとペンタクロロフェノールの事例研究

Nguyen, Thanh Dien 23 September 2016 (has links)
Kyoto University / 0048 / New-system doctoral course / Doctor of Engineering / Degree No. Kō 19986 / Engineering Doctorate No. 4230 / Call no. 新制||工||1654 (University Library) / 33082 / Kyoto University Graduate School of Engineering, Department of Urban and Environmental Engineering / Examining committee: Professor Shin-ichi Sakai (chair), Professor Minoru Yoneda, Associate Professor Yasuhiro Hirai / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
159

Decomposing Residential Monthly Electric Utility Bill Into HVAC Energy Use Using Machine Learning

Yakkali, Sai Santosh 02 August 2019 (has links)
No description available.
160

The role of video game quality in financial markets

Surminski, Nikolai January 2023 (has links)
Product quality is an often-overlooked factor in the financial analysis of video games. Quality measurements have been shown to be a reliable predictor of sales while also directly influencing performance in financial markets. If markets are efficient in reflecting new information, the perception of video game quality should lead to a rational price response. This thesis examines the market reaction to this information set. The release structure of the video game industry allows direct observation of an isolated quality effect through third-party reviews: these reviews form an objective measurement of game quality with no other revealing characteristics, since all other information is released before them. The opportunity to exploit this unique setting motivates analysis through multiple empirical designs. Results from a multivariate regression model show a statistically significant positive effect of higher quality on short-term returns across all models; the release of a lower-quality game reduces returns only for high-profile games. Both of these results are confirmed by a rules-based trading strategy. The effects subside over longer holding periods and at higher exposure. This thesis finds sufficient evidence that video game quality should be an important factor in the analysis of video game companies; at the same time, the effects persist only in the short term, consistent with an efficient response by financial investors to new information.
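A short-term market reaction of this kind is typically estimated by regressing announcement-window abnormal returns on the review score plus controls. A hedged sketch of such a multivariate regression — the file and column names are invented, and the 3-day window and controls are illustrative choices, not the thesis's exact specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical event panel: one row per game release by a listed publisher
df = pd.read_csv("game_releases.csv")    # illustrative file name
# car_3d: cumulative abnormal return over a 3-day window around review release
# metascore: third-party review aggregate; high_profile: 1 for AAA titles

model = smf.ols("car_3d ~ metascore + high_profile + metascore:high_profile"
                " + firm_size", data=df).fit(cov_type="HC1")
print(model.summary())                   # robust t-tests on the quality effect
```

The interaction term mirrors the thesis's finding that lower quality hurts returns only for high-profile games.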
