521

Essays on investor trading activity in a limit order book market

Deji-Olowe, Adeola January 2014 (has links)
This thesis consists of three essays examining the impact and consequences of the trading behaviour of a finely disaggregated category of investors in an electronic limit order book equity market, the Malta Stock Exchange (MSE). The three essays in market microstructure are closely related and examine how investor heterogeneity affects the informational content of the limit order book, the informational content of individual trades, the price impact of investor trades, the aggressiveness of order submission strategies and the price discovery process within such a market.

The first essay investigates the role of the financial intermediary in the price discovery process in a limit order market. We address this issue by analysing the trades of brokers on the Malta Stock Exchange, comparing the profitability of their individual trades and the impact of these trades on the price discovery process. The results of a Weighted Price Contribution methodology indicate that the more active brokers, who dominate the market in terms of volume and amount traded, account for a significant portion of the price discovery process. We also find that the profitability of these brokers is directly proportional to the volume traded and to their relative share in the price discovery process. This appears to rule out the possibility of manipulative trades by these brokers in order to influence profitability.

The second essay examines the price impact of the order flow emanating from finely disaggregated classes of investors, with the aim of determining whether detectable differences exist in the extent to which orders emanating from particular groups of investors affect the evolution of stock prices. At the aggregate stock level, results indicate that price impact is inversely related to liquidity: the price impact of trades is of higher magnitude and significance in less liquid stocks. Significantly, we find that stocks with higher liquidity and trading volume adjust quickly to price changes, and the cumulative impact is realised earlier for these stocks. Similarly, for investor classes, our results show that the magnitude and significance of individual price impact increase as stock liquidity declines, showing that as liquidity increases in the order book, the impact of information asymmetry begins to diminish. Institutional investors consistently have the highest significant impact on the evolution of prices across all stocks.

The final essay examines how investors structure their order submission choices in response to changes in the limit order book and market conditions (such as order depth, volatility, returns, and the height of the limit order book). We identify seven distinct investor classes that differ in their trading requirements and in the information sets available to them, and we therefore expect these investors to adopt different strategies to maximise their trades. The results show variability in the submission strategies adopted by investors as the trade side changes from buy to sell. They also indicate that investors have to balance execution risk, the timely use of private information and the risk of being picked off by other informed investors. In analysing the varied responses of these investors, we find that the order submission strategies adopted are most responsive to the risk of non-execution.
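The Weighted Price Contribution measure referenced in the first essay is commonly computed as each broker's share of the cumulative price change, with each period weighted by its share of the total absolute price change. Below is a minimal sketch of that calculation, assuming hypothetical trade records with broker labels and signed price changes (the field names are illustrative, not taken from the thesis data).

```python
from collections import defaultdict

def weighted_price_contribution(trades):
    """Weighted Price Contribution in the Barclay & Warner (1993) style.

    `trades` is a list of dicts with keys 'period' (e.g. trading day),
    'broker', and 'price_change' (signed price change attributed to the
    trade).  Returns {broker: WPC share}.  Field names are illustrative.
    """
    total = defaultdict(float)      # signed price change per period
    by_broker = defaultdict(float)  # signed price change per (period, broker)
    for t in trades:
        total[t['period']] += t['price_change']
        by_broker[(t['period'], t['broker'])] += t['price_change']

    # Periods are weighted by their share of the summed absolute price
    # change, so noisy, near-zero periods carry little weight.
    abs_sum = sum(abs(v) for v in total.values())
    wpc = defaultdict(float)
    for (period, broker), change in by_broker.items():
        if total[period] == 0 or abs_sum == 0:
            continue
        weight = abs(total[period]) / abs_sum
        wpc[broker] += weight * (change / total[period])
    return dict(wpc)

# Toy example: two brokers over two trading days.
example = [
    {'period': 'd1', 'broker': 'A', 'price_change': 0.06},
    {'period': 'd1', 'broker': 'B', 'price_change': 0.02},
    {'period': 'd2', 'broker': 'A', 'price_change': -0.01},
    {'period': 'd2', 'broker': 'B', 'price_change': 0.03},
]
print(weighted_price_contribution(example))
```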
522

On P2P Networks and P2P-Based Content Discovery on the Internet

Memon, Ghulam 17 June 2014 (has links)
The Internet has evolved into a medium centered around content: people watch videos on YouTube, share their pictures via Flickr, and use Facebook to keep in touch with their friends. Yet the only globally deployed service to discover content - i.e., the Domain Name System (DNS) - does not discover content at all; it merely translates domain names into locations. The lack of persistent naming, in particular, makes content discovery, instead of domain discovery, challenging. Content Distribution Networks (CDNs), which augment DNSs with location-awareness, suffer from the same lack of persistent content names. Recently, several infrastructure-level solutions to this problem have emerged, but their fundamental limitation is that they fail to preserve the autonomy of network participants. Specifically, the storage requirements for resolution within each participant may not be proportional to their capacity. Furthermore, these solutions cannot be incrementally deployed. To the best of our knowledge, content discovery services based on peer-to-peer (P2P) networks are the only ones that support persistent content names. These services also come with the built-in advantage of scalability and deployability. However, P2P networks have been deployed in the real world only recently, and their real-world characteristics are not well understood. It is important to understand these real-world characteristics in order to improve performance and to propose new designs by identifying the weaknesses of existing ones. In this dissertation, we first propose a novel, lightweight technique for capturing P2P traffic. Using our captured data, we characterize several aspects of P2P networks and draw conclusions about their weaknesses. Next, we create a botnet to demonstrate the lethality of the weaknesses of P2P networks. Finally, we address the weaknesses of P2P systems to design a P2P-based content discovery service, which resolves the drawbacks of existing content discovery systems and can operate at Internet scale. This dissertation includes both previously published/unpublished and co-authored material.
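For context on how P2P systems can support persistent content names, the usual building block is a distributed hash table that maps a content name deterministically onto a responsible peer. The following is a minimal consistent-hashing sketch of that idea; the node names and the content key are hypothetical, and this is not the dissertation's actual design (real DHTs such as Chord or Kademlia add routing and replication).

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Minimal consistent-hashing ring: persistent content names map to
    nodes, and only a small fraction of keys move when nodes join or leave."""

    def __init__(self, nodes=()):
        self._ring = []                      # sorted list of (hash, node)
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def add_node(self, node, replicas=64):
        for i in range(replicas):            # virtual nodes smooth the load
            self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    def lookup(self, content_name):
        """Return the node responsible for a persistent content name."""
        h = self._hash(content_name)
        hashes = [point[0] for point in self._ring]
        idx = bisect_right(hashes, h) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["peer-1", "peer-2", "peer-3"])
print(ring.lookup("content:example-video"))   # same name -> same peer
```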
523

A visual analytics approach for passing strategies analysis in soccer using geometric features

Malqui, José Luis Sotomayor January 2017 (has links)
Passing strategies analysis has always been of interest for soccer research. Since the beginning of soccer, managers have used scouting, video footage, training drills and data feeds to collect information about tactics and player performance. However, passing strategies are dynamic and complex, which makes it hard to capture what is happening on the pitch and to understand the game. Furthermore, there is a growing demand for pattern detection and passing sequence analysis, popularized by FC Barcelona's tiki-taka. We propose an approach to abstract passing sequences and group them based on the geometry of the ball trajectory. To analyse passing strategies, we introduce an interactive visualization scheme to explore the frequency of usage, spatial location and time of occurrence of the sequences. The Frequency Stripes visualization provides an overview of the frequency of passing groups in three pitch regions: defense, middle and attack. A trajectory heatmap coordinated with a passing timeline allows for the exploration of the most recurrent passing shapes in the spatial and temporal domains. Results show eight common ball trajectories for three-pass sequences, which depend on player positioning and on the angle of the pass. We demonstrate the potential of our approach with data from several matches of the Brazilian championship under different case studies, and report feedback from a soccer expert.
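One simple way to describe the geometry of a ball trajectory, in the spirit of the approach above, is to measure the turning angles between consecutive passes and then group sequences with similar angle profiles. A hedged sketch under that assumption follows; the coordinates and the feature choice are illustrative, not the thesis's exact features.

```python
import math

def pass_angles(points):
    """Turning angles (degrees) along a ball trajectory given as a list of
    (x, y) pass end-points; a crude stand-in for trajectory-shape features."""
    angles = []
    for a, b, c in zip(points, points[1:], points[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            angles.append(0.0)
            continue
        cos = max(-1.0, min(1.0, dot / (n1 * n2)))
        angles.append(math.degrees(math.acos(cos)))
    return angles

# A three-pass sequence has four ball positions and two turning angles;
# sequences with similar angle profiles can then be grouped (e.g. k-means).
sequence = [(10, 30), (25, 42), (40, 38), (60, 50)]
print(pass_angles(sequence))
```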
524

A ironia como vocação: mais uma epistemologia das ciências sociais / Irony as vocation: one more epistemology of social science

Paulo Henrique Sette Ferreira Pires Granafei 14 August 2012 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The aim of this thesis is to get as close as possible to a logic of discovery for the social sciences. The narrative of these disciplines is neither neutral nor objective; rather, it seeks to produce, rhetorically, the effects of neutrality and objectivity, avoiding portraying agents as heroes, villains or victims. This follows from the social scientist's need to validate the narrative before an ideal, or potentially universal, audience that accommodates, in principle, all kinds of values. Such a plurality of world views makes it impossible to naively treat agents as heroes, villains or victims. As a consequence, the author of a social-science text seeks to simulate a God's-eye view, that of a supreme ironist who sees everything from above, beyond what the imperceptive participants of the account can see. A case study was conducted on the Brazilian debate over populism, in which four main points of controversy were identified: prototypical structures, imaginary structuration, intersubjective structure, and the dynamics of the relationship between grand theory and empirical research.
525

Uso de ontologia em serviço de contexto e descoberta de recursos para autoadaptação de sistemas. / The use of ontologies on context and discovery services for self-adaptation of applications.

Leila Negris Bezerra 13 July 2011 (has links)
Context-aware applications need mechanisms to retrieve information about their execution context. Based on the current context, such applications are able to self-adapt in order to provide the right information and services to their users. The usual approach in supporting infrastructures for context-aware applications provides resource-discovery services based on <key-value> pairs and discovery engines that perform only syntactic matching. This approach does not consider the possible semantic relations between the keywords used, so its limited semantic expressiveness leads to a discovery service with poor recall and accuracy. This work presents a different approach to the context and discovery service, which uses ontologies to represent the resources of the execution context and to capture the semantics of the user's query, thereby improving the discovery process for the self-adaptation of context-aware systems. The proposed approach also offers extension points to client applications through the use of other ontologies. The approach was integrated into the CDRF infrastructure, adding semantics to the services developed in that project. Example applications are also proposed to demonstrate the use of the new services.
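The gain from ontology-based discovery over syntactic <key-value> matching can be illustrated with a tiny is-a hierarchy: a request for a general resource type should be satisfied by resources of more specific types. Below is a minimal sketch assuming a hypothetical hierarchy; a real deployment would use OWL/RDF (e.g. rdflib) and richer relations, and this is not the CDRF implementation.

```python
# Tiny is-a hierarchy standing in for an ontology.  Names are illustrative.
IS_A = {
    "laser_printer": "printer",
    "inkjet_printer": "printer",
    "printer": "output_device",
}

def matches(resource_type, requested_type):
    """Semantic match: a resource satisfies a request if its type equals the
    requested type or is a (transitive) subclass of it."""
    t = resource_type
    while t is not None:
        if t == requested_type:
            return True
        t = IS_A.get(t)
    return False

# Purely syntactic key-value matching misses this; subsumption does not.
print(matches("laser_printer", "printer"))   # True  (semantic match)
print("laser_printer" == "printer")          # False (syntactic match)
```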
526

Um estudo sobre agrupamento de documentos textuais em processamento de informações não estruturadas usando técnicas de "clustering" / A study about arrangement of textual documents applied to unstructured information processing using clustering techniques

Wives, Leandro Krug January 1999 (has links)
Techniques for retrieving and analysing information, especially textual information, are of great importance today. After the Internet boom, problems that were already known in closed contexts began to concern the whole scientific community; within the scope of this work, the most important of these is information overload, caused by the sheer volume of data available to a person. To mitigate this problem, this work presents a study of methods for clustering textual objects (documents in ASCII format), in which objects are automatically organized into groups of similar objects, making them easier to locate, manipulate and analyse. From this study, a methodology for applying clustering is presented, with its several stages described. The stages were designed so that, once a stage has been completed, it does not need to be redone, allowing the following stage to be applied several times over the same data (with different parameters) independently. In addition to the methodology, a comparative study of several clustering algorithms is carried out, including a new, more efficient algorithm whose performance is confirmed in experiments on the proposed case studies. Other contributions of this work include the implementation of a text clustering tool that uses the methodology and the algorithms studied, and the use of a non-conventional (fuzzy) formula for computing similarities between objects, applied to textual information with satisfactory results.
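As an illustration of fuzzy document similarity combined with single-pass clustering, the sketch below uses a simple min/max overlap of relative term frequencies and a leader-style grouping. Both the formula and the threshold are assumptions for demonstration, not the thesis's actual similarity measure or algorithm.

```python
from collections import Counter

def fuzzy_similarity(doc_a, doc_b):
    """Toy fuzzy similarity: min/max ratio of relative term frequencies,
    summed over shared words and normalised by the joint vocabulary size."""
    ta, tb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    na, nb = sum(ta.values()), sum(tb.values())
    shared = set(ta) & set(tb)
    if not shared:
        return 0.0
    scores = [min(ta[w] / na, tb[w] / nb) / max(ta[w] / na, tb[w] / nb)
              for w in shared]
    return sum(scores) / len(ta.keys() | tb.keys())

def leader_clustering(docs, threshold=0.2):
    """Single-pass clustering: each document joins the first cluster whose
    leader is similar enough, otherwise it starts a new cluster."""
    clusters = []   # list of (leader, [members])
    for doc in docs:
        for leader, members in clusters:
            if fuzzy_similarity(leader, doc) >= threshold:
                members.append(doc)
                break
        else:
            clusters.append((doc, [doc]))
    return [members for _, members in clusters]

docs = ["stock price trading market", "market trading of stock prices",
        "protein enzyme assay", "enzyme assay for a target protein"]
print(leader_clustering(docs))
```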
527

Social entrepreneurship opportunities in China : a critical realist analysis

Hu, Xiaoti January 2016 (has links)
Social entrepreneurship (SE) has become a rapidly advancing domain of enquiry and holds a place in policy makers' consideration around the globe. Opportunities have been regarded as critical in SE, but are often portrayed in abstract and unspecified ways. Research on this topic remains relatively scarce, theory building is not yet established and integrated, and the dearth of empirical studies further constrains theoretical development in SE. Researchers have thus called for more exploration and a comprehensive theoretical understanding of SE opportunities. The purpose of this study is to explore SE opportunities through empirical investigation and theoretical development. As an exploratory study, it addresses two broad research questions: (1) what are SE opportunities? and (2) how do they emerge? To answer these questions, I draw on the broader entrepreneurship literature, which provides two main alternative explanations: opportunity discovery (nexus theory) and opportunity creation (effectuation theory). While the discovery/creation debate is still ongoing, recent theoretical advancement has shown a possible path for moving entrepreneurial opportunity research forward, suggesting that research should incorporate structure and agency simultaneously in studying opportunities. Following this path, this study contributes to SE opportunity research by providing a comprehensive understanding of SE opportunities; it also helps address the discovery/creation debate in the context of SE. To make this contribution, the study first adopts critical realism as a research philosophy as well as a methodology. Critical realism incorporates the effects of both structure and agency through its ontological assumption of three domains of reality, while providing an explanatory framework to assess competing theories. Second, the study selects China as the context for empirical work. As a relation-oriented society, China provides a useful context for studying the causal relations between the social structure (guanxi) and SE opportunity. China's institutional context and fast-growing social enterprise sector also provide a promising setting for exploratory research on SE opportunities. Based on critical realism, I used a three-step qualitative multi-case study to develop an explanatory framework in which guanxi and social capital theory provide theoretical explanations of the social structure and its causal powers, which lead to SE opportunity emergence in China. Data were collected from 45 interviews with Chinese social entrepreneurs, their employees and other key stakeholders in 36 organisations in Beijing, Hunan Province and Shanghai. My research findings show that SE opportunities develop in all three domains defined by critical realism. In the empirical domain (the world of human experience of social events), an SE opportunity can be described as discovered, created, or both discovered and created. In the actual domain (the social events under study), an SE opportunity consists of three internal and necessary constituents: unjust social equilibrium (USE), social entrepreneurs' beliefs (SEB), and social feasibility (SF). In the real domain (the deeper structures, causal powers and mechanisms that produce the social events), the emergence of SE opportunities can be seen as the result of a resource acquisition and mobilisation mechanism whereby USE, SEB and SF are identified or formed through social entrepreneurs' social capital embedded in guanxi.
Building on these findings, this study concludes with a theoretical framework that offers a comprehensive explanation of SE opportunity emergence in China. This study is the first attempt to apply critical realism to the study of opportunities in the context of SE in China. It contributes to the SE and general entrepreneurship literature by developing a theoretical framework of SE opportunity emergence that provides an alternative explanation for the existence of discovery and creation opportunities, and by extending our theoretical understandings of some key concepts of SE. This research further provides an example of the use of qualitative methods to apply critical realism in SE and general entrepreneurship research, which contributes to the development of relatively rigorous research design and research methods in studying complex social events.
529

The identification and characterisation of novel inhibitors of the 17β-HSD10 enzyme for the treatment of Alzheimer's disease

Guest, Patrick January 2016 (has links)
In 2015, an estimated 46.8 million people were living with dementia, a number predicted to increase to 74.7 million by 2030 and 131.5 million by 2050. Whilst there are numerous causes for the development of dementia, Alzheimer's disease is by far the most common, accounting for approximately 50-70% of all cases. Current therapeutic agents against Alzheimer's disease are palliative in nature, managing symptoms without addressing the underlying cause, and thus disease progression and patient death remain a certainty. Whilst the main underlying cause of Alzheimer's disease was originally thought to be an abnormal deposition of insoluble amyloid-β-peptide-derived plaques within the brain, the failure of several high-profile therapeutic agents, which were shown to reduce the plaque burden without improving cognition, has recently prompted a shift in focus to soluble oligomeric forms of amyloid-β peptide. Such soluble oligomers have been shown to be toxic in their own right and to precede plaque deposition. Soluble amyloid-β oligomers have been identified in various subcellular compartments, including the mitochondria, where they form a complex with the 17β-HSD10 enzyme, resulting in cytotoxicity. Interestingly, hallmarks of this toxicity have been shown to be dependent on the catalytic activity of the 17β-HSD10 enzyme, suggesting that two therapeutic approaches may hold merit in treating Alzheimer's disease: disrupting the interaction between the 17β-HSD10 enzyme and amyloid-β peptide, or directly inhibiting the catalytic activity of the 17β-HSD10 enzyme. In 2006, Frentizole was identified as a small molecule capable of disrupting the 17β-HSD10/amyloid interaction. The work described herein details the generation of a robust screening assay allowing the catalytic activity of the 17β-HSD10 enzyme to be measured in vitro. This assay was subsequently employed for small molecule screening using two methodologies: first, in a targeted approach using compounds derived from the Frentizole core scaffold, and second, in an explorative manner using a diverse library of compounds supplied by the National Cancer Institute. As a result, a range of novel small molecule inhibitors of the 17β-HSD10 enzyme have been identified and the most promising characterised in terms of potency and mechanism of action. De-selection assays were developed to allow the efficient triage of hit compounds, and work began on a cell-based assay to assess whether compounds of interest can reverse a disease-relevant phenotype in a cellular environment. As such, we now have a number of hit compounds which will form the basis for the generation of subsequent series of derivatives with improved potency and specificity, as well as the robust assays required to measure such criteria, potentially leading to the generation of novel therapeutic agents against Alzheimer's disease.
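Potency in enzyme-inhibition screens of this kind is commonly summarised as an IC50 obtained from a four-parameter logistic fit to dose-response data. The sketch below shows that fit with made-up activity values; it is an assumption-laden illustration of the general analysis, not data or a protocol from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Illustrative data: % residual enzyme activity at inhibitor concentrations (uM).
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
activity = np.array([98.0, 96.0, 90.0, 75.0, 52.0, 28.0, 12.0, 6.0])

# p0 gives the optimiser a sensible starting point (bottom, top, IC50, Hill).
params, _ = curve_fit(four_param_logistic, conc, activity,
                      p0=[0.0, 100.0, 1.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.2f} uM, Hill slope ~ {hill:.2f}")
```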
530

Estratégias de imputação e associação genômica com dados de sequenciamento para características de produção de leite na raça Gir / Imputation strategies and genome-wide association with sequence data for milk production traits in Gyr cattle

Nascimento, Guilherme Batista do [UNESP] 22 February 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Implementing next-generation sequencing (NGS) data in animal breeding programs represents the latest step in the use of genotypic data in genomic association models, since every polymorphism is considered in the associations between phenotypic records and sequence data. As with any new technology, calling the variants still poses computational and cost challenges for large-scale implementation. Faced with these challenges, this work sought ways to exploit the benefits of using NGS in genomic predictions and to overcome the limitations inherent to the process. Phenotypic and genotypic (Illumina Bovine HD BeadChip) records of 2,279 Gyr animals (Bos taurus indicus) were made available by Embrapa Gado de Leite (MG) and used for the genome-wide association analyses. In addition, sequence data from 53 animals of the 1000 Bulls Project formed the imputation reference population. To verify imputation efficiency, different scenarios were tested for imputation accuracy through leave-one-out analysis using only the sequence data; accuracies reached up to 84% in the scenario with all 51 animals available after quality control. The influence of low-frequency variants on imputation accuracy in different regions of the genome was also assessed. After choosing the best structure for the imputation reference population and applying quality control to the NGS and genomic data, it was possible to impute the 2,237 genotyped animals that passed quality control to sequence level and to perform genome-wide association analyses for milk yield (PL305), fat (PG305), protein (PP305) and total solids (PS305), measured at 305 days, in dairy Gyr animals. Deregressed breeding values (dEBV) were used as the response variable in a multiple regression model. Regions of 1 Mb containing 100 or more variants with a false discovery rate (FDR) below 0.05 were considered significant and submitted to enrichment analysis using MeSH (Medical Subject Headings) terms. The three significant regions (FDR < 0.05) for PS305 were observed on chromosomes 11, 12 and 28, and the single significant region for PG305 was on chromosome 6. These regions harbour variants associated with metabolic pathways of milk production that are absent from commercial genotyping panels and may point to candidate genes for selection. / CAPES/Embrapa agreement (call 15/2014)
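The association step described above can be illustrated with a minimal single-marker regression of deregressed breeding values on allele dosages, followed by a Benjamini-Hochberg FDR filter. The data below are simulated, and the model deliberately omits the covariates and genomic relationship corrections a real analysis would include; it is a sketch of the general technique, not the thesis's model.

```python
import numpy as np
from scipy import stats

def single_marker_gwas(dosages, debv, fdr=0.05):
    """Regress dEBV on each variant's allele dosage (0-2) and flag variants
    passing a Benjamini-Hochberg FDR threshold.  `dosages` has shape
    (n_animals, n_variants); this is only the bare association step."""
    n, m = dosages.shape
    pvals = np.empty(m)
    for j in range(m):
        slope, intercept, r, p, se = stats.linregress(dosages[:, j], debv)
        pvals[j] = p
    # Benjamini-Hochberg step-up procedure.
    order = np.argsort(pvals)
    ranked = pvals[order]
    passed = ranked <= (np.arange(1, m + 1) / m) * fdr
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True
    return pvals, significant

rng = np.random.default_rng(1)
geno = rng.integers(0, 3, size=(500, 200)).astype(float)   # simulated dosages
y = 0.4 * geno[:, 10] + rng.normal(size=500)                # one causal variant
pvals, sig = single_marker_gwas(geno, y)
print("significant variants:", np.flatnonzero(sig))
```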
