601 |
Decision making methods for water resources management under deep uncertainty. Roach, Thomas Peter, January 2016.
Substantial anthropogenic change to the Earth's climate is modifying patterns of rainfall, river flow, glacial melt and groundwater recharge across the planet, undermining many of the stationarity assumptions on which water resources infrastructure has historically been managed. This hydrological uncertainty is creating a potentially vast range of possible futures that could threaten the dependability of vital regional water supplies. Combined with increased urbanisation and rapidly growing regional populations, it is putting pressure on finite water resources. One of the greatest international challenges facing decision makers in the water industry is the growing influence of these "deep" climate change and population growth uncertainties on the long-term balance of supply and demand, which necessitates adaptive action. Water companies and utilities worldwide are under pressure to modernise their management frameworks and approaches to decision making in order to identify more sustainable and cost-effective water management adaptations that are reliable in the face of uncertainty. The aim of this thesis is to compare and contrast a range of existing Decision Making Methods (DMMs) for possible application to Water Resources Management (WRM) problems, to analyse critically, on real-life case studies, their suitability for handling uncertainties relating to climate change and population growth, and to use the knowledge generated in this way to develop a new, resilience-based WRM planning methodology. This involves a critical evaluation of the advantages and disadvantages of a range of methods and metrics developed to improve on current engineering practice, ultimately compiling a list of recommendations for a future framework for WRM adaptation planning under deep uncertainty. The thesis contributes to the growing research and literature in this area in several distinct ways. Firstly, it qualitatively reviews a range of DMMs for potential application to WRM adaptation problems using a set of developed criteria. Secondly, it quantitatively assesses two promising and contrasting DMMs on two suitable real-world case studies, to compare aspects highlighted by the qualitative review and to evaluate the adaptation outputs at a practical engineering level. Thirdly, it develops and reviews a range of new potential performance metrics that could quantitatively define system resilience, helping to answer the water industry's question of how best to build more resilience into future water resource adaptation planning. This leads to the creation and testing of a novel resilience-driven methodology for optimal water resource planning, combining the best-performing aspects identified in the quantitative case study work with the best-performing metric from the resilience metric investigation. Ultimately, based on the results obtained, a list of recommendations is compiled on how to improve existing methodologies for future WRM planning under deep uncertainty. These recommendations include incorporating more complex simulation models into the planning process, utilising multi-objective optimisation algorithms, improving uncertainty characterisation and assessment, conducting an explicit robustness examination, and incorporating additional performance metrics to increase the clarity of the strategy assessment process.
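To make the notion of a quantitative resilience metric concrete, a minimal Python sketch of one classic reliability/resilience pair in the style of Hashimoto et al. (1982) is given below. It is purely illustrative: the supply series, demand level and the metric itself are assumptions of this example, not the metrics developed in the thesis.

```python
import numpy as np

def reliability_resilience(supply, demand):
    """Hashimoto-style metrics for a water-supply time series.

    reliability: fraction of periods in which supply meets demand.
    resilience:  probability that a failure period is immediately
                 followed by a recovery (supply >= demand again).
    """
    failure = supply < demand                      # True where the system fails
    reliability = 1.0 - failure.mean()
    recoveries = np.sum(failure[:-1] & ~failure[1:])   # failure at t, success at t+1
    n_failures = np.sum(failure[:-1])
    resilience = recoveries / n_failures if n_failures else 1.0
    return reliability, resilience

# Invented monthly data: stochastic supply against a constant demand.
rng = np.random.default_rng(42)
supply = rng.normal(loc=105.0, scale=12.0, size=240)   # Ml/d, hypothetical
demand = np.full(240, 100.0)                           # Ml/d, hypothetical

rel, res = reliability_resilience(supply, demand)
print(f"reliability = {rel:.3f}, resilience = {res:.3f}")
```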
|
602 |
A PROPOSAL TO DISCLOSE THE PREFERENCES OF SPECIALIST COMMITTEES VIA AHP METHOD: AN APPLICATION TO THE ELECTRICAL SECTOR. BRUNO AGRÉLIO RIBEIRO, 11 December 2017.
Decision-making processes involving many criteria are often complex problems. In general, such problems seek to reconcile conflicting interests, so single solutions tend not to reflect the preferences of all agents involved in the process. This is the case for the problem of selecting models for generating stochastic scenarios of Natural Inflow Energy (NIE), which are the input to the medium-term hydrothermal dispatch calculation in the operational planning of the Brazilian Electric System (BES). This work proposes an extension of a well-established multicriteria decision support method, the AHP, so that it becomes able to reveal the preferences of committees of experts and, from these revealed preferences, to derive more suitable solutions for each committee. The proposed methodology is applied in the context of the BES to help different segments of the sector (academia, industry and the regulator) identify which scenario-generation model best fits each segment's preferences. To this end, the experts from these three sectors were grouped and, from the revealed preferences of each committee, a new ranking of the scenario-generation models was proposed. The preferences revealed for the academia and industry committees corroborated the conjectures about these sectors' predilections, namely fidelity in the representation of the moments for academia and the ability to replicate the variance for industry, whereas the hypothesized predilection for replicating deficits could not be verified for the regulator's committee. Among the new solutions obtained, the best-ranked model for academia and industry was the multiplicative PAR(p), and for the regulator the PAR(p) Boot-MC.
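For orientation, the AHP step underlying the method extended above works as follows: each expert supplies a reciprocal pairwise-comparison matrix over the criteria, committee judgments can be aggregated (for example by element-wise geometric means), and priority weights are read off the principal eigenvector. A minimal sketch, with an invented three-criterion matrix on Saaty's 1-9 scale:

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights from a reciprocal pairwise-comparison matrix:
    the principal right eigenvector, normalized to sum to 1."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)        # Saaty's consistency index
    return w, ci

# Hypothetical judgments over 3 criteria (e.g. moment fidelity,
# variance replication, deficit replication -- illustrative names).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

w, ci = ahp_priorities(A)
print("weights:", np.round(w, 3), " CI:", round(ci, 4))

# Committee aggregation (AIJ): an element-wise geometric mean of the
# individual matrices preserves reciprocity before calling ahp_priorities.
```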
|
603 |
Collective decision making under qualitative possibilistic uncertainty: principles and characterization. Essghaier, Fatma, 29 September 2016.
This thesis raises the question of collective decision making under possibilistic uncertainty. We propose several collective qualitative decision rules and show that, in the context of a possibilistic representation of uncertainty, the use of an egalitarian pessimistic collective utility function allows us to get rid of the Timing Effect. Going a step further, we prove that if both the agents' preferences and the collective ranking of the decisions satisfy Dubois and Prade's axioms (1995, 1998) and some additional axioms relative to collective choice, in particular Pareto unanimity, then egalitarian collective aggregation is compulsory. The picture is then completed by the proposition and characterization of an optimistic counterpart of this pessimistic decision rule. Our axiomatic system can be seen as an ordinal counterpart of Harsanyi's theorem (1955). We prove this result in a formalism that is based on the von Neumann and Morgenstern framework (1948) and compares possibilistic lotteries. Besides, we propose a first attempt to characterize collective qualitative decision rules in Savage's formalism (1972), where decisions are represented by acts rather than by lotteries. From an algorithmic standpoint, we consider strategy optimization in possibilistic decision trees using the decision rules characterized in the first part of this work. We provide an adaptation of the Dynamic Programming algorithm for criteria that satisfy the property of monotonicity, and propose a Multi-Dynamic Programming algorithm and a Branch and Bound algorithm for those that are not monotonic. Finally, we provide an empirical comparison of the different algorithms proposed. We measure execution CPU times, which increase linearly with the size of the tree and remain affordable on average even for large trees. We then study the accuracy of approximating the exact algorithms by Dynamic Programming: for the U-max ante criterion the Multi-Dynamic Programming approximation is not good, but this is not dramatic since this algorithm is polynomial (and efficient in practice); for the U+min ante decision rule, however, the Dynamic Programming approximation is good, and it should be possible to avoid a full Branch and Bound enumeration to find optimal strategies.
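To illustrate the backward-induction step that Dynamic Programming performs on a possibilistic decision tree, the sketch below folds a small invented tree under the pessimistic qualitative utility, computed as the minimum over outcomes of max(1 - possibility, utility) on a common [0, 1] scale. The tree and all degrees are assumptions of the example:

```python
# Nodes: ("chance", [(possibility, subtree), ...]), ("decision", [subtree, ...])
# or ("leaf", utility). Possibility degrees and utilities share the scale [0, 1].

def u_pes(node):
    """Pessimistic qualitative utility, folded bottom-up (Dynamic Programming).
    Chance node: min over branches of max(1 - possibility, U(subtree)).
    Decision node: max over available choices (the agent picks the best)."""
    kind, content = node
    if kind == "leaf":
        return content
    if kind == "chance":
        return min(max(1.0 - p, u_pes(sub)) for p, sub in content)
    return max(u_pes(sub) for sub in content)   # decision node

# Invented tree: choose between a safe act and a risky one.
safe = ("chance", [(1.0, ("leaf", 0.6)), (0.4, ("leaf", 0.5))])
risky = ("chance", [(1.0, ("leaf", 1.0)), (0.8, ("leaf", 0.1))])
root = ("decision", [safe, risky])

print(u_pes(safe), u_pes(risky), u_pes(root))
# safe:  min(max(0, .6), max(.6, .5)) = 0.6
# risky: min(max(0, 1.), max(.2, .1)) = 0.2  -> the pessimistic agent picks 'safe'
```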
|
604 |
Decision Theory in the automotive market. SARMENTO, Rafaella Azevedo de Lucena, 31 January 2011.
Previous issue date: 2011. Sponsor: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). Citation: Azevedo de Lucena Sarmento, Rafaella; Menezes Campello de Souza, Fernando. Decision Theory in the automotive market. 2011. Master's dissertation, Programa de Pós-Graduação em Engenharia de Produção, Universidade Federal de Pernambuco, Recife, 2011.
|
605 |
Considerations on the relation between heavy-tailed distributions and conflict of information in Bayesian inference. Santos Junior, James Dean Oliveira dos, 13 March 2007.
Advisors: Veronica Andrea Gonzales-Lopez, Laura Leticia Ramos Rifo. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
Previous issue date: 2006. Abstract: In Bayesian inference we deal with information coming from the data and with prior information. Occasionally, one or more outliers can cause a conflict between the sources of information. Basically, resolving a conflict between the sources of information means finding a set of restrictions under which one of the sources dominates, in a certain sense, the others. Distributions widely accepted as heavy-tailed have been used in the literature for this purpose. In this work, we show the relations between some results of the theory of conflicts and heavy-tailed distributions. We also show how conflicts can be resolved in the location case using subexponential models, and how the credence measure can be used to resolve problems in the scale case. Master's degree in Statistics (Bayesian Inference).
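The effect described above is easy to reproduce numerically: with a conflicting outlier, a normal prior forces an awkward compromise, while a heavy-tailed Student-t prior lets the data dominate. A minimal grid-approximation sketch (all numbers invented):

```python
import numpy as np
from scipy import stats

theta = np.linspace(-15, 25, 4001)       # uniform grid over the location parameter
y = 18.0                                  # single observation, far from the prior mean

def posterior_mean(prior_pdf):
    like = stats.norm.pdf(y, loc=theta, scale=1.0)   # y | theta ~ N(theta, 1)
    post = prior_pdf * like
    post /= post.sum()                    # normalize on the uniform grid
    return (theta * post).sum()

normal_prior = stats.norm.pdf(theta, loc=0.0, scale=1.0)
t_prior = stats.t.pdf(theta, df=2, loc=0.0, scale=1.0)   # heavy tails

print("normal prior    ->", round(posterior_mean(normal_prior), 2))  # ~9.0: compromise
print("Student-t prior ->", round(posterior_mean(t_prior), 2))       # close to y: data wins
```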
|
606 |
Bayesian inference for extremes. Bernardini, Diego Fernando de, 1986-. 15 August 2018.
Advisor: Laura Leticia Ramos Rifo. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
Previous issue date: 2010. Abstract: We begin this work by presenting a brief introduction to extreme value theory, focusing on the behavior of the random variable that represents the maximum of a sequence of independent and identically distributed random variables. The Extremal Types Theorem (or Fisher-Tippett Theorem) is the fundamental tool for studying the asymptotic behavior of these maxima: it allows data consisting of a sequence of observed maxima of a given phenomenon or random process to be modelled through a class of distributions known as the Generalized Extreme Value (GEV) family. The Gumbel distribution, associated with the maximum of distributions such as the Normal or Gamma, among others, is a particular case of this family. We are therefore interested in making inference about the parameters of this family; specifically, the comparison between the Gumbel and GEV models is the main focus of this work. In Chapter 1 we study, in the context of classical inference, maximum likelihood estimation of these parameters and a likelihood ratio test suitable for testing the null hypothesis associated with the Gumbel model against the hypothesis representing the complete GEV model. We proceed, in Chapter 2, with a brief review of Bayesian inference theory, obtaining inferences for the parameters of interest in terms of their posterior distribution, and we also study the predictive distribution for future values. With respect to model comparison, we first study the Bayes factor and the posterior Bayes factor in the Bayesian context. Next we study the Full Bayesian Significance Test (FBST), a significance test particularly suitable for precise hypotheses, such as the one characterizing the Gumbel model. Furthermore, we study two other criteria for comparing models, the BIC (Bayesian Information Criterion) and the DIC (Deviance Information Criterion). We examine these evidence measures specifically in the context of the comparison between the Gumbel and GEV models, as well as the predictive distribution, credible intervals and posterior inference for the return levels associated with fixed return periods. Chapter 1 and part of Chapter 2 provide the basic theoretical foundations of this work and are strongly based on Coles (2001) and O'Hagan (1994). In Chapter 3 we present the well-known Metropolis-Hastings algorithm for simulating probability distributions, and the particular algorithm used in this work to obtain simulated samples from the posterior distribution of the parameters of interest. In the next chapter we formulate the modelling of the observed maxima, presenting the likelihood function and setting the prior distribution for the parameters. Two applications are presented in Chapter 5. The first deals with observations of the quarterly maxima of unemployment rates in the United States of America between the first quarter of 1994 and the first quarter of 2009. In the second application we study the semiannual maxima of sea levels at Newlyn, in the southwest of England, between 1990 and 2007. Finally, a brief discussion is presented in Chapter 6. Master's degree in Statistics.
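As an illustration of the Chapter 3 ingredient, the sketch below runs a random-walk Metropolis sampler on the GEV log-posterior with flat priors on (mu, log sigma, xi); it is not the thesis's particular algorithm or prior choice, and the block maxima are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

def gev_loglik(x, mu, sigma, xi):
    """GEV log-likelihood (xi != 0 branch); -inf outside the support."""
    if sigma <= 0 or abs(xi) < 1e-6:      # avoid the Gumbel limit numerically
        return -np.inf
    z = 1.0 + xi * (x - mu) / sigma
    if np.any(z <= 0):
        return -np.inf
    return np.sum(-np.log(sigma) - (1.0 + 1.0 / xi) * np.log(z) - z ** (-1.0 / xi))

# Simulated block maxima via the GEV inverse cdf (a stand-in for real maxima).
true_mu, true_sigma, true_xi = 6.0, 1.0, 0.1
u = rng.uniform(size=200)
x = true_mu + true_sigma * ((-np.log(u)) ** (-true_xi) - 1.0) / true_xi

# Random-walk Metropolis on (mu, log sigma, xi).
theta = np.array([np.mean(x), np.log(np.std(x)), 0.05])
step = np.array([0.1, 0.1, 0.05])
logp = gev_loglik(x, theta[0], np.exp(theta[1]), theta[2])
chain = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=3)
    logp_prop = gev_loglik(x, prop[0], np.exp(prop[1]), prop[2])
    if np.log(rng.uniform()) < logp_prop - logp:     # MH acceptance rule
        theta, logp = prop, logp_prop
    chain.append(theta.copy())

post = np.array(chain)[5000:]                         # drop burn-in
print("posterior means (mu, sigma, xi):",
      post[:, 0].mean(), np.exp(post[:, 1]).mean(), post[:, 2].mean())
```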
|
607 |
The consumer preference hierarchy in financial investment decisions. Hudson Antunes Bessa, 20 April 2016.
The academic literature on financial investor behavior is rather scarce. Research on decision making generally discusses tradeoffs in the purchase of products, and the investment decision process is little discussed. This thesis aims to help reduce this gap by discussing the determining factors in individual investors' decisions about financial products. The investment decision is complex and involves, among other things, the tradeoff between forgoing present consumption and the possibility of greater well-being in the future; in many situations there is also a real possibility of losing the invested funds. To investigate the pathways of this decision, in-depth interviews were conducted with executives linked to the investment fund industry and to the distribution of investment products at the largest Brazilian retail banks. The knowledge collected and the literature review supported the development of a survey questionnaire administered on a web platform to potential investors. The attributes profitability, possibility of loss (a proxy for risk), liquidity, management fees and the manager's recommendation were identified as the most relevant to the investor's decision. Stimuli were constructed and the decision utility decomposed using choice-based conjoint (CBC) analysis, which simulates a real decision. The results showed the manager's recommendation to be the most important attribute in forming a preference for an investment alternative, a result which by itself indicates that non-rational factors influence the decision. The impact of risk aversion and of the investor's cognitive style was then studied. The results show that the more risk-averse and the more intuitive investors are more susceptible to the manager's recommendation, and that these effects are independent of each other. The evidence suggests that the more intuitive investors use the manager to achieve cognitive comfort in the decision, while the more risk-averse use the manager to mitigate the feeling of risk associated with the product. A cluster analysis indicated that the sample can be segmented into two groups, one more inclined towards the manager's recommendation and the other towards product attributes; the manager's recommendation proved to be the strongest attribute distinguishing the groups. The results indicate that a market segmentation based on the propensity to follow the manager's recommendation can be effective in guiding a relationship strategy that seeks to increase long-term results.
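For readers unfamiliar with choice-based conjoint: part-worth utilities are typically recovered by maximum likelihood under a multinomial logit model of the observed choices. A compact sketch on simulated choice tasks; the attribute coding and true part-worths are invented for the example:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# 300 choice tasks, 3 alternatives each, 4 coded attributes
# (e.g. return, loss chance, fees, manager recommendation -- illustrative).
n_tasks, n_alts, n_attr = 300, 3, 4
X = rng.normal(size=(n_tasks, n_alts, n_attr))
beta_true = np.array([1.0, -1.5, -0.5, 2.0])

# Simulate choices from the multinomial logit model.
utils = X @ beta_true
probs = np.exp(utils) / np.exp(utils).sum(axis=1, keepdims=True)
choice = np.array([rng.choice(n_alts, p=p) for p in probs])

def neg_loglik(beta):
    v = X @ beta                                   # deterministic utilities
    v -= v.max(axis=1, keepdims=True)              # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_tasks), choice].sum()

fit = minimize(neg_loglik, np.zeros(n_attr), method="BFGS")
print("estimated part-worths:", np.round(fit.x, 2))  # should approach beta_true
```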
|
608 |
Analysis of local image descriptors in the context of near-duplicate detection. Bueno, Lucas Moutinho, 1986-. 19 August 2018.
Advisor: Ricardo da Silva Torres. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.
Previous issue date: 2011. Abstract: Local image descriptors are widely used in various applications for recognizing objects or scenes. Many local descriptors have been proposed in the literature to characterize points of interest in images, among them PCA-SIFT, SIFT, GLOH, SURF and DAISY. Points of interest in images are determined by detectors; examples are Harris-Affine, Hessian-Affine, Fast Hessian, MSER and DoG. The objective of this work is to investigate the use of local descriptors in the context of content-based near-duplicate image retrieval, using hundreds of thousands of images. Content-based image retrieval aims at finding images in a database using the content of another image as a query, typically by means of descriptors. Near-duplicate images result from the deformation of an original image by geometric or radiometric transformations or occlusions. Owing to the large number of points of interest computed on each of the hundreds of thousands of images in the database, exhaustive search techniques are not feasible on a large scale. Methods such as Multicurves, LSH and Min-Hash have therefore been designed to improve the speed of near-duplicate image retrieval. This work contributes to the state of the art in two main aspects. First, an analysis of local descriptors is carried out to evaluate their scalability. Second, an innovative system using Bayesian search is proposed to significantly decrease the number of points of interest used in near-duplicate image retrieval, without significant loss of accuracy. Master's degree in Computer Science.
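A small sketch of the local-descriptor pipeline described above: OpenCV's SIFT, brute-force matching and Lowe's ratio test yield a simple near-duplicate score for a pair of images. The ratio threshold and file names are illustrative, and the thesis's Bayesian indexing scheme is not reproduced here; at the scale discussed in the abstract, the brute-force matcher would be replaced by an approximate index such as LSH.

```python
import cv2

def near_duplicate_score(path_a, path_b, ratio=0.75):
    """Count SIFT matches that survive Lowe's ratio test; a high count
    suggests path_b is a near-duplicate (transformed copy) of path_a."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)   # keypoints, 128-d descriptors
    _, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)   # 2 nearest neighbours each
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)

# Hypothetical usage: the file names are placeholders.
# print(near_duplicate_score("original.jpg", "cropped_copy.jpg"))
```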
|
609 |
Bayesian logistic regression models for credit scoring. Webster, Gregg, January 2011.
The Bayesian approach to logistic regression modelling for credit scoring is useful when there are data quantity issues, as might occur when a bank opens in a new location or there is a change in the scoring procedure. Making use of prior information (available from coefficients estimated on other data sets, or from expert knowledge about the coefficients), a Bayesian approach is proposed to improve credit scoring models. To achieve this, a data set is split into two sets, "old" data and "new" data. Priors are obtained from a model fitted on the "old" data. This model is assumed to be a scoring model used by a financial institution in its current location. The financial institution is then assumed to expand into a new economic location where there is limited data. The priors from the model on the "old" data are combined in a Bayesian model with the "new" data to obtain a model that represents all the available information. The predictive performance of this Bayesian model is compared to that of a model which does not make use of any prior information. It is found that the use of relevant prior information improves predictive performance when the size of the "new" data is small; as the size of the "new" data increases, the importance of including prior information decreases.
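The construction lends itself to a compact sketch: a Gaussian prior centred on the "old" coefficients turns MAP estimation on the "new" data into a penalized logistic regression that shrinks towards the old scorecard rather than towards zero. The prior strength and data below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(7)

beta_old = np.array([-1.0, 0.8, -0.6])     # coefficients from the "old" scorecard
tau = 1.0                                   # prior std dev: smaller = trust "old" more

# Small "new" data set (the situation where the prior should help).
X_new = np.column_stack([np.ones(40), rng.normal(size=(40, 2))])
y_new = rng.binomial(1, expit(X_new @ np.array([-0.8, 1.0, -0.4])))

def neg_log_posterior(beta):
    eta = X_new @ beta
    loglik = np.sum(y_new * eta - np.log1p(np.exp(eta)))       # Bernoulli log-lik
    logprior = -0.5 * np.sum((beta - beta_old) ** 2) / tau**2  # N(beta_old, tau^2 I)
    return -(loglik + logprior)

beta_map = minimize(neg_log_posterior, beta_old, method="BFGS").x
print("MAP estimate:", np.round(beta_map, 3))
# tau -> infinity recovers plain maximum likelihood on the new data;
# tau -> 0 pins the model to the old coefficients.
```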
|
610 |
Modelling the spatial dynamics of non-state terrorism: world study, 2002-2013. Python, André, January 2017.
To this day, terrorism perpetrated by non-state actors persists as a worldwide threat, as exemplified by the recent lethal attacks in Paris, London, Brussels, and the ongoing massacres perpetrated by the Islamic State in Iraq, Syria and neighbouring countries. In response, states deploy various counterterrorism policies, the costs of which could be reduced through more efficient preventive measures. The literature has not applied statistical models able to account for complex spatio-temporal dependencies, despite their potential for explaining and preventing non-state terrorism at the sub-national level. In an effort to address this shortcoming, this thesis employs Bayesian hierarchical models, where the spatial random field is represented by a stochastic partial differential equation. The results show that lethal terrorist attacks perpetrated by non-state actors tend to be concentrated in areas located within failed states from which they may diffuse locally, towards neighbouring areas. At the sub-national level, the propensity of attacks to be lethal and the frequency of lethal attacks appear to be driven by antagonistic mechanisms. Attacks are more likely to be lethal far away from large cities, at higher altitudes, in less economically developed areas, and in locations with higher ethnic diversity. In contrast, the frequency of lethal attacks tends to be higher in more economically developed areas, close to large cities, and within democratic countries.
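For orientation on the SPDE representation mentioned above: Lindgren et al. (2011) link the solution of (kappa^2 - Delta)^(alpha/2) x(s) = W(s) to a Gaussian field with Matérn covariance, which is what makes the approach computationally tractable. A small sketch of the Matérn covariance itself (parameters invented; models of this kind are typically fitted with tools such as R-INLA rather than coded by hand):

```python
import numpy as np
from scipy.special import kv, gamma  # modified Bessel K, gamma function

def matern_cov(d, sigma2=1.0, kappa=1.0, nu=1.0):
    """Matern covariance at distances d; nu=1 corresponds to alpha=2
    in the 2-D SPDE (kappa^2 - Delta)^(alpha/2) x = W."""
    d = np.asarray(d, dtype=float)
    out = np.where(d == 0, sigma2, 0.0)         # zero-distance limit is sigma2
    pos = d > 0
    out[pos] = (sigma2 / (2 ** (nu - 1) * gamma(nu))
                * (kappa * d[pos]) ** nu * kv(nu, kappa * d[pos]))
    return out

# Practical range: correlation drops to roughly 0.1 at about sqrt(8 * nu) / kappa.
d = np.linspace(0.0, 5.0, 6)
print(np.round(matern_cov(d, sigma2=1.0, kappa=np.sqrt(8.0), nu=1.0), 3))
```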
|