151

First-order numerical schemes for stochastic differential equations using coupling

Alnafisah, Yousef Ali January 2016 (has links)
We study a new method for the strong approximate solution of stochastic differential equations using coupling, and we prove order-one error bounds for the new scheme in Lp space, assuming invertibility of the diffusion matrix. We introduce and implement two couplings for this scheme, called the exact and the approximate coupling, obtaining good agreement with the theoretical bound. We also describe a method for the non-invertible case (the combined method) and investigate its convergence order, which is O(h^{3/4} √|log(h)|) under some conditions. Moreover, we compare the computational results for the combined method with its theoretical error bound and obtain good agreement between them. In the last part of this thesis we evaluate the performance of the multilevel Monte Carlo method using the new scheme with the exact coupling, and we compare the results with the trivial coupling for the same scheme.
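
To illustrate the general idea of a coupling in this setting (a minimal sketch, not the exact or approximate coupling constructed in the thesis), the snippet below drives a fine and a coarse Euler-Maruyama path of a scalar SDE with the same Brownian increments, i.e. the trivial coupling used as the multilevel Monte Carlo baseline. The drift, diffusion, payoff and level/sample choices are illustrative assumptions.

```python
import numpy as np

def coupled_euler_pair(mu, sigma, x0, T, n_fine, rng):
    """One fine path (n_fine steps) and one coarse path (n_fine/2 steps) of
    dX = mu(X) dt + sigma(X) dW, driven by the same Brownian increments
    (the trivial coupling used as a multilevel Monte Carlo baseline)."""
    h_f = T / n_fine
    x_f = x_c = x0
    for _ in range(n_fine // 2):
        dW1 = rng.normal(0.0, np.sqrt(h_f))
        dW2 = rng.normal(0.0, np.sqrt(h_f))
        # two fine steps
        x_f += mu(x_f) * h_f + sigma(x_f) * dW1
        x_f += mu(x_f) * h_f + sigma(x_f) * dW2
        # one coarse step using the summed increment
        x_c += mu(x_c) * 2 * h_f + sigma(x_c) * (dW1 + dW2)
    return x_f, x_c

def mlmc_estimate(mu, sigma, x0, T, payoff, levels, samples, seed=0):
    """Telescoping sum: E[P_L] = E[P_l0] + sum of level-wise corrections."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for lev, n_samples in zip(levels, samples):
        n_fine = 2 ** lev
        corr = 0.0
        for _ in range(n_samples):
            x_f, x_c = coupled_euler_pair(mu, sigma, x0, T, n_fine, rng)
            corr += payoff(x_f) - (payoff(x_c) if lev > levels[0] else 0.0)
        total += corr / n_samples
    return total

# Example: geometric Brownian motion, estimating E[X_T] with X_0 = 1.
est = mlmc_estimate(mu=lambda x: 0.05 * x, sigma=lambda x: 0.2 * x,
                    x0=1.0, T=1.0, payoff=lambda x: x,
                    levels=[2, 3, 4, 5], samples=[4000, 2000, 1000, 500])
print(est)
```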
152

Técnicas de agrupamento de dados para computação aproximativa / Data clustering techniques for approximate computing

Malfatti, Guilherme Meneguzzi January 2017 (has links)
Dois dos principais fatores do aumento da performance em aplicações single-thread – frequência de operação e exploração do paralelismo no nível das instruções – tiveram pouco avanço nos últimos anos devido a restrições de potência. Neste contexto, considerando a natureza tolerante a imprecisões (i.e.: suas saídas podem conter um nível aceitável de ruído sem comprometer o resultado final) de muitas aplicações atuais, como processamento de imagens e aprendizado de máquina, a computação aproximativa torna-se uma abordagem atrativa. Esta técnica baseia-se em computar valores aproximados ao invés de precisos, o que, por sua vez, pode aumentar o desempenho e reduzir o consumo energético ao custo de qualidade. No atual estado da arte, a forma mais comum de exploração da técnica é através de redes neurais (mais especificamente, o modelo Multilayer Perceptron), devido à capacidade destas estruturas de aprender funções arbitrárias e aproximá-las. Tais redes são geralmente implementadas em um hardware dedicado, chamado acelerador neural. Contudo, essa execução exige uma grande quantidade de área em chip e geralmente não oferece melhorias suficientes que justifiquem este espaço adicional. Este trabalho tem por objetivo propor um novo mecanismo para fazer computação aproximativa, baseado em reúso aproximativo de funções e trechos de código. Esta técnica agrupa automaticamente entradas e saídas de dados por similaridade e os armazena em uma tabela em memória controlada via software. A partir disto, os valores quantizados podem ser reutilizados através de uma busca a essa tabela, onde será selecionada a saída mais apropriada e, desta forma, a execução do trecho de código será substituída. A aplicação desta técnica é bastante eficaz, sendo capaz de alcançar uma redução, em média, de 97.1% em Energy-Delay-Product (EDP) quando comparada a aceleradores neurais. / Two of the major drivers of increased performance in single-thread applications - increase in operation frequency and exploitation of instruction-level parallelism - have advanced little in recent years due to power constraints. In this context, considering the intrinsic imprecision-tolerance (i.e., outputs may present an acceptable level of noise without compromising the result) of many modern applications, such as image processing and machine learning, approximate computation becomes a promising approach. This technique is based on computing approximate instead of accurate results, which can increase performance and reduce energy consumption at the cost of quality. In the current state of the art, the most common way of exploiting the technique is through neural networks (more specifically, the Multilayer Perceptron model), due to the ability of these structures to learn arbitrary functions and to approximate them. Such networks are usually implemented in a dedicated neural accelerator. However, this implementation requires a large amount of chip area and usually does not offer enough improvements to justify this additional cost. The goal of this work is to propose a new mechanism to address approximate computation, based on approximate reuse of functions and code fragments. This technique automatically groups input and output data by similarity and stores this information in a software-controlled memory. Based on these data, the quantized values can be reused through a lookup in this table, in which the most appropriate output will be selected and, therefore, execution of the original code will be replaced. 
Applying this technique is effective, achieving an average 97.1% reduction in Energy-Delay-Product (EDP) when compared to neural accelerators.
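
A minimal sketch of the general mechanism described above, assuming a k-means grouping, a fixed table size and a nearest-centroid lookup as stand-ins for the thesis's actual hardware/software mechanism:

```python
import numpy as np
from sklearn.cluster import KMeans

class ApproxReuseTable:
    """Approximate function reuse: group input vectors by similarity and
    replay the stored output of the nearest group instead of re-executing
    the code region. The table size (n_clusters) is an illustrative choice."""

    def __init__(self, func, n_clusters=16):
        self.func = func
        self.n_clusters = n_clusters
        self.centroids = None
        self.outputs = None

    def train(self, sample_inputs):
        # Cluster representative inputs and cache one exact output per cluster.
        km = KMeans(n_clusters=self.n_clusters, n_init=10).fit(sample_inputs)
        self.centroids = km.cluster_centers_
        self.outputs = [self.func(c) for c in self.centroids]

    def __call__(self, x):
        # Nearest-centroid lookup replaces execution of the original code.
        idx = np.argmin(np.linalg.norm(self.centroids - x, axis=1))
        return self.outputs[idx]

# Example: approximating an "expensive" kernel on 2-D inputs.
def kernel(v):
    return np.sin(v[0]) * np.cos(v[1])

rng = np.random.default_rng(0)
table = ApproxReuseTable(kernel, n_clusters=32)
table.train(rng.uniform(-np.pi, np.pi, size=(2000, 2)))
x = np.array([0.3, -1.2])
print(kernel(x), table(x))   # exact vs. approximate (reused) result
```

The quality/energy trade-off is then governed by the table size and by how well the clustered inputs cover the code region's input distribution.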
153

Resgatando a diversidade genética e história demográfica de povos nativos americanos através de populações mestiças do sul do Brasil e Uruguai / Rescuing the genetic diversity and demographic history of Native American peoples through mestizo populations of Southern Brazil and Uruguay

Tavares, Gustavo Medina January 2018 (has links)
Após a chegada dos conquistadores europeus, as populações nativas americanas foram dizimadas por diversas razões, como guerras e doenças, o que possivelmente levou diversas linhagens genéticas autóctones à extinção. Entretanto, durante essa invasão, houve miscigenação entre os colonizadores e os povos nativos e muitos estudos genéticos têm mostrado uma importante contribuição matrilinear nativa americana na formação da população colonial. Portanto, se muitos indivíduos na atual população urbana brasileira carregam linhagens nativas americanas no seu DNA mitocondrial (mtDNA), muito da diversidade genética nativa perdida durante o período colonial pode ter se mantido, por miscigenação, nas populações urbanas. Assim, essas populações representam, efetivamente, um importante reservatório genético de linhagens nativas americanas no Brasil e em outros países americanos, constituindo o reflexo mais fiel da diversidade genética pré-colombiana em populações nativas. Baseado nisso, este estudo teve como objetivos 1) comparar os padrões de diversidade genética de linhagens nativas americanas do mtDNA em populações nativas do Sul do Brasil e da população urbana (miscigenada) adjacente; e 2) comparar, através de Computação Bayesiana Aproximada (ABC), a história demográfica de ambas populações para chegar a uma estimativa do nível de redução do tamanho efetivo populacional (Ne) das populações indígenas aqui tratadas. Foram utilizados dados já publicados da região hipervariável (HVS-I) do mtDNA de linhagens nativas de 396 indivíduos Nativos Americanos (NAT) pertencentes aos grupos Guarani, Caingangue e Charrua e de 309 indivíduos de populações miscigenadas urbanas (URB) do Sul do Brasil e do Uruguai. As análises de variabilidade e estrutura genética, bem como testes de neutralidade, foram feitos no programa Arlequin 3.5 e a rede de haplótipos mitocondriais foi estimada através do método Median-Joining utilizando o programa Network 5.0. Estimativas temporais do tamanho populacional efetivo foram feitas através de Skyline Plot Bayesiano utilizando o pacote de programas do BEAST 1.8.4. Por fim, o programa DIYABC 2.1 foi utilizado para testar cenários evolutivos e para estimar o Ne dos nativos americanos pré- (Nanc) e pós-contato (Nnat), para assim se estimar o impacto da redução de variação genética causada pela colonização europeia. Os resultados deste estudo indicam que URB é a melhor preditora da diversidade nativa ancestral, possuindo uma diversidade substancialmente maior que NAT, pelo menos na região Sul do Brasil e no Uruguai (H = 0,96 vs. 0,85, Nhap = 131 vs. 27, respectivamente). Ademais, a composição de haplogrupos é bastante diferente entre as populações, sugerindo que a população nativa tenha tido eventos de gargalo afetando os haplogrupos B2 e C1 e super-representando o haplogrupo A2. Em relação à demografia histórica, observou-se que URB mantém sinais de expansão remetendo à entrada na América, contrastando com NAT, em que esses sinais estão erodidos, apenas retendo sinais de contração populacional recente. De acordo com as estimativas aqui geradas, o declínio populacional em NAT foi de cerca de 300 vezes (84 – 555). Em outras palavras, a população efetiva nativa americana nessa região corresponderia a apenas 0,33% (0,18% – 1,19%) da população ancestral, corroborando os achados de outros estudos genéticos e também os registros históricos. 
/ After the arrival of the European conquerors, the Native American populations were decimated due to multiple reasons, such as wars and diseases, which possibly led many autochthonous genetic lineages to extinction. However, during the European invasion of the Americas, colonizers and indigenous people admixed, and many genetic studies have shown an important Native American matrilineal contribution to the formation of the Colonial population. Therefore, if many individuals in the current urban population harbor Native American lineages in their mitochondrial DNA (mtDNA), much of the Native American genetic diversity that was lost during the Colonial Era may have been maintained by admixture in urban populations. In this case, these populations effectively represent an important reservoir of Native lineages in Brazil and other American countries, constituting the most accurate portrait of the pre-Columbian genetic diversity of Native populations. Based on this, the aims of the present study were 1) to compare the patterns of genetic diversity of Native American mtDNA lineages in Native populations from Southern Brazil and the surrounding admixed urban populations; and 2) to compare, using Approximate Bayesian Computation (ABC), the demographic history of both groups to estimate the level of reduction in the effective population size (Ne) for the indigenous groups studied here. We used previously published mtDNA hypervariable segment (HVS-I) data of indigenous origin from 396 Native American individuals (NAT) belonging to the Guarani, Kaingang, and Charrua groups, and 309 individuals from Southern Brazilian and Uruguayan admixed urban populations (URB). The analyses of variability and genetic structure, as well as the neutrality tests, were performed using Arlequin 3.5, and the mitochondrial haplotype network was estimated through the Median-Joining method available in Network 5.0. Time estimates of effective population size were obtained using the Bayesian Skyline Plot available in the BEAST 1.8.4 package. Finally, the DIYABC 2.1 software was used to test evolutionary scenarios and to estimate the pre- (Nanc) and post-contact (Nnat) Native American Ne, and thus the impact of the colonization process on Native American genetic variability. The results indicate that URB is the best predictor of ancestral Native diversity, having substantially greater genetic diversity than NAT, at least in the Southern Brazilian and Uruguayan regions (H = 0.96 vs. 0.85, Nhap = 131 vs. 27, respectively). Moreover, the haplogroup compositions are very distinct between these groups, suggesting that the Native population passed through bottleneck events affecting the haplogroups B2 and C1 and overrepresenting the haplogroup A2. Regarding demographic history, we observed that URB retains signals of population expansion dating back to the entry into the Americas. In contrast, these signals are eroded in NAT, which maintains only signals of recent population contraction. According to our estimates, the population decline in NAT was around 300-fold (84 – 555). In other words, the effective Native American population in this region would correspond to only 0.33% (0.18% – 1.19%) of the ancestral population, corroborating the findings of other genetic studies and historical records.
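
As an illustration of the rejection-ABC logic behind this kind of estimate (not the DIYABC scenarios or the coalescent machinery used in the study), the sketch below draws the post-/pre-contact size ratio from a prior, simulates summary statistics under a deliberately crude diversity model, and keeps draws whose statistics fall close to the observed values. The toy simulator, priors, mutation rate and tolerance are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_diversity(n_anc, ratio, mu=1e-3, sample_size=300):
    """Toy simulator: expected gene diversity under mutation-drift balance,
    H = theta / (1 + theta) with theta = 2 * N * mu for mtDNA, evaluated for
    the ancestral and the contracted (post-contact) population, plus binomial
    sampling noise. A crude stand-in for a coalescent simulator."""
    h = []
    for n in (n_anc, n_anc * ratio):
        theta = 2 * n * mu
        h_exp = theta / (1 + theta)
        h.append(rng.binomial(sample_size, h_exp) / sample_size)
    return np.array(h)   # [H proxy for ancestral, H for present-day Native]

# "Observed" summaries: urban (proxy for ancestral) vs. present-day Native.
observed = np.array([0.96, 0.85])

# Rejection ABC over the contraction ratio Nnat / Nanc.
n_draws, tolerance = 100_000, 0.01
ratios = 10 ** rng.uniform(-4, 0, n_draws)      # log-uniform prior on the ratio
n_ancs = 10 ** rng.uniform(3.5, 5.0, n_draws)   # prior on ancestral female Ne
accepted = []
for r, n in zip(ratios, n_ancs):
    sim = simulate_diversity(n, r)
    if np.max(np.abs(sim - observed)) < tolerance:
        accepted.append(r)

accepted = np.array(accepted)
print(f"posterior median ratio: {np.median(accepted):.4f} "
      f"({np.quantile(accepted, 0.025):.4f} - {np.quantile(accepted, 0.975):.4f})")
```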
154

Um método para deduplicação de metadados bibliográficos baseado no empilhamento de classificadores / A method for bibliographic metadata deduplication based on stacked generalization

Borges, Eduardo Nunes January 2013 (has links)
Metadados bibliográficos duplicados são registros que correspondem a referências bibliográficas semanticamente equivalentes, ou seja, que descrevem a mesma publicação. Identificar metadados bibliográficos duplicados em uma ou mais bibliotecas digitais é uma tarefa essencial para garantir a qualidade de alguns serviços como busca, navegação e recomendação de conteúdo. Embora diversos padrões de metadados tenham sido propostos, eles não resolvem totalmente os problemas de interoperabilidade porque, mesmo que exista um mapeamento entre diferentes esquemas de metadados, podem existir variações na representação do conteúdo. Grande parte dos trabalhos propostos para identificar duplicatas aplica uma ou mais funções sobre o conteúdo de determinados campos no intuito de captar a similaridade entre os registros. Entretanto, é necessário escolher um limiar que defina se dois registros são suficientemente similares para serem considerados semanticamente equivalentes ou duplicados. Trabalhos mais recentes tratam a deduplicação de registros como um problema de classificação de dados, em que um modelo preditivo é treinado para estimar a que objeto do mundo real um registro faz referência. O objetivo principal desta tese é o desenvolvimento de um método efetivo e automático para identificar metadados bibliográficos duplicados, combinando o aprendizado de múltiplos classificadores supervisionados, sem a necessidade de intervenção humana na definição de limiares de similaridade. Sobre o conjunto de treinamento são aplicadas funções de similaridade desenvolvidas especificamente para o contexto de bibliotecas digitais e com baixo custo computacional. Os escores produzidos pelas funções são utilizados para treinar múltiplos modelos de classificação heterogêneos, ou seja, a partir de algoritmos de diversos tipos: baseados em árvores, regras, redes neurais artificiais e probabilísticos. Os classificadores aprendidos são combinados através da estratégia de empilhamento, visando potencializar o resultado da deduplicação a partir do conhecimento heterogêneo adquirido individualmente pelos algoritmos de aprendizagem. O modelo de classificação final é aplicado aos pares candidatos ao casamento retornados por uma estratégia de blocagem de dois níveis bastante eficiente. A solução proposta é baseada na hipótese de que o empilhamento de classificadores supervisionados pode aumentar a qualidade da deduplicação quando comparado a outras estratégias de combinação. A avaliação experimental mostra que a hipótese foi confirmada quando o método proposto é comparado com a escolha do melhor classificador e com o voto da maioria. Ainda são analisados o impacto da diversidade dos classificadores no resultado do empilhamento e os casos de falha do método proposto. / Duplicated bibliographic metadata are semantically equivalent records, i.e., references that describe the same publication. Identifying duplicated bibliographic metadata in one or more digital libraries is an essential task to ensure the quality of some services such as search, navigation, and content recommendation. Although many metadata standards have been proposed, they do not completely solve interoperability problems because, even if there is a mapping between different metadata schemas, there may be variations in the content representation. Most of the work proposed to identify duplicated records uses one or more functions on some fields in order to capture the similarity between the records. 
However, we need to choose a threshold that defines whether two records are sufficiently similar to be considered semantically equivalent or duplicated. Recent studies deal with record deduplication as a data classification problem, in which a predictive model is trained to estimate the real-world object to which a record refers. The main goal of this thesis is the development of an effective and automatic method to identify duplicated bibliographic metadata, combining multiple supervised classifiers without any human intervention in the setting of similarity thresholds. We apply to the training set low-cost similarity functions specifically designed for the context of digital libraries. The scores returned by these functions are used to train multiple, heterogeneous classification models, i.e., using learning algorithms based on trees, rules, artificial neural networks and probabilistic models. The learned classifiers are combined by a stacked generalization strategy to improve the deduplication result through the heterogeneous knowledge acquired by each learning algorithm. The final model is applied to the pairs of records that are candidates for matching, which are selected by an efficient two-phase blocking strategy. The proposed solution is based on the hypothesis that stacking supervised classifiers can improve the quality of deduplication when compared to other combination strategies. The experimental evaluation shows that this hypothesis is confirmed by comparing the proposed method to selecting the best classifier and to the majority vote technique. We also analyze the impact of classifier diversity on the stacking results and the cases in which the proposed method fails.
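
A minimal sketch of stacked generalization applied to pair classification over similarity scores, using scikit-learn; the synthetic similarity features, the particular base learners and the logistic-regression meta-learner are illustrative choices rather than the configuration evaluated in the thesis:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Each candidate pair is described by per-field similarity scores (e.g., title,
# author list, venue), as produced by a blocking + similarity stage.
# Here the scores are synthetic: duplicates score high, non-duplicates low.
rng = np.random.default_rng(0)
n = 400
dup = np.clip(rng.normal(0.9, 0.08, size=(n, 3)), 0, 1)    # duplicate pairs
non = np.clip(rng.normal(0.35, 0.15, size=(n, 3)), 0, 1)   # non-duplicate pairs
X = np.vstack([dup, non])
y = np.array([1] * n + [0] * n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Heterogeneous base classifiers combined by stacked generalization: the
# meta-learner (logistic regression) is trained on the base models'
# out-of-fold predictions rather than on the raw similarity scores.
stack = StackingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("bayes", GaussianNB()),
        ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("pair-classification accuracy:", stack.score(X_te, y_te))
```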
156

Search path generation with UAV applications using approximate convex decomposition

Öst, Gustav January 2012 (has links)
This work focuses on the problem of area searching with UAVs, specifically developing algorithms that generate flight paths that are short without sacrificing flyability. For instance, very sharp turns compromise flyability, since fixed-wing aircraft cannot make very sharp turns. This thesis provides an analysis of different types of search methods, area decompositions, and combinations thereof. The search methods used are side-to-side searching and spiral searching. In side-to-side searching, the aircraft goes back and forth, making only 90-degree turns. Spiral searching covers the shape in a spiral pattern, starting on the outer perimeter and working its way in; the idea is that it should generate flight paths that are easy to fly, since all turns should have a large turn radius. Area decomposition is done to divide complex shapes into smaller, more manageable shapes. The report concludes that, with the implemented methods, the side-to-side scanning method without area decomposition yields good and, above all, very reliable results. The reliability stems from the fact that all turns are 90 degrees and that the algorithm never gets stuck or makes bad mistakes. Having only 90-degree turns results in just four different types of turns. This allows the airplane's behavior along the route to be predictable after flying the first four turns, although this assumes that the strength of the wind has a greater influence on the aircraft's flight characteristics than turbulence does. This is a very valuable feature for an operator in charge of a flight. The other tested methods and area decompositions often yield a shorter flight path; however, despite extensive adjustments to the algorithms, they never came to handle all cases in a satisfactory manner. These methods may also generate any kind of turn at any time, including turns of nearly 180 degrees. Such turns can lead to an airplane missing the intended flight path and thus failing to scan the intended area properly. Area decomposition proves to be really effective only when the area has many protrusions that stick out in different directions (think of a starfish shape). In these cases the side-to-side algorithm generates a path that has long legs over parts that are not in the search area. When the area is decomposed, the algorithm starts with, for example, one arm of the starfish at a time and then searches the rest of the arms and the body in turn.
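
A minimal sketch of side-to-side (boustrophedon) waypoint generation over a rectangular area with only 90-degree turns; the swath width and area bounds are illustrative parameters, and a real planner would additionally handle arbitrary polygons, wind and turn-radius constraints:

```python
from typing import List, Tuple

def side_to_side_path(x_min: float, x_max: float,
                      y_min: float, y_max: float,
                      swath: float) -> List[Tuple[float, float]]:
    """Waypoints for a back-and-forth sweep of an axis-aligned rectangle.
    Consecutive legs are separated by the sensor swath width, and the
    aircraft only ever makes 90-degree turns at the area boundary."""
    waypoints = []
    y = y_min
    heading_right = True
    while y <= y_max:
        if heading_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        heading_right = not heading_right
        y += swath
    return waypoints

# Example: a 1000 m x 600 m area covered with a 100 m swath.
for wp in side_to_side_path(0, 1000, 0, 600, swath=100):
    print(wp)
```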
157

The cognitive underpinnings of non-symbolic comparison task performance

Clayton, Sarah January 2016 (has links)
Over the past twenty years, the Approximate Number System (ANS), a cognitive system for representing non-symbolic quantity information, has been the focus of much research attention. Psychologists seeking to understand how individuals learn and perform mathematics have investigated how this system might underlie symbolic mathematical skills. Dot comparison tasks are commonly used as measures of ANS acuity; however, very little is known about the cognitive skills that are involved in completing these tasks. The aim of this thesis was to explore the factors that influence performance on dot comparison tasks and discuss the implications of these findings for future research and educational interventions. The first study investigated how the accuracy and reliability of magnitude judgements are influenced by the visual cue controls used to create dot array stimuli. This study found that participants' performances on dot comparison tasks created with different visual cue controls were unrelated, and that stimuli generation methods have a substantial influence on test-retest reliability. The studies reported in the second part of this thesis (Studies 2, 3, 4 and 5) explored the role of inhibition in dot comparison task performance. The results of these studies provide evidence that individual differences in inhibition may, at least partially, explain individual differences in dot comparison task performance. Finally, a large multi-study re-analysis of dot comparison data investigated whether individuals take account of numerosity information over and above the visual cues of the stimuli when comparing dot arrays. This analysis revealed that dot comparison task performance may not reflect numerosity processing independently from visual cue processing for all participants, particularly children. This novel evidence may provide some clarification for conflicting results in the literature regarding the relationship between ANS acuity and mathematics achievement. The present findings call into question whether dot comparison tasks should continue to be used as valid measures of ANS acuity.
158

Error handling and energy estimation for error resilient near-threshold computing / Gestion des erreurs et estimations énergétiques pour les architectures tolérantes aux fautes et proches du seuil

Ragavan, Rengarajan 22 September 2017 (has links)
Les techniques de gestion dynamique de la tension (DVS) sont principalement utilisées dans la conception de circuits numériques pour en améliorer l'efficacité énergétique. Cependant, la réduction de la tension d'alimentation augmente l'impact de la variabilité et des erreurs temporelles dans les technologies nanométriques. L'objectif principal de cette thèse est de gérer les erreurs temporelles et de formuler un cadre pour estimer la consommation d'énergie d'applications résistantes aux erreurs dans le contexte du régime proche du seuil (NTR) des transistors. Dans cette thèse, la détection et la correction d'erreurs basées sur la spéculation dynamique sont explorées dans le contexte de l'adaptation de la tension et de la fréquence d'horloge. Outre la détection et la correction des erreurs, certaines erreurs peuvent être également tolérées et les circuits peuvent calculer au-delà de leurs limites avec une précision réduite pour obtenir une plus grande efficacité énergétique. La méthode de détection et de correction d'erreur proposée atteint 71% d'overclocking avec seulement 2% de surcoût matériel. Ce travail implique une étude approfondie au niveau des portes logiques pour comprendre le comportement des portes sous l'effet de la modification de la tension d'alimentation, de la tension de polarisation et de la fréquence d'horloge. Une approche ascendante est prise en étudiant les tendances de l'énergie par rapport à l'erreur des opérateurs arithmétiques au niveau du transistor. En se basant sur le profilage des opérateurs, un flot d'outils est formulé pour estimer les paramètres d'énergie et d'erreur pour différentes configurations. Nous atteignons une efficacité énergétique maximale de 89% pour les opérateurs arithmétiques comme les additionneurs 8 bits et 16 bits au prix de 20% de bits défectueux en opérant en NTR. Un modèle statistique est développé pour les opérateurs arithmétiques afin de représenter leur comportement sous différents impacts de variabilité. Ce modèle est utilisé pour le calcul approximatif dans les applications qui peuvent tolérer une marge d'erreur acceptable. Cette méthode est ensuite explorée pour l'unité d'exécution d'un processeur VLIW. L'environnement proposé fournit une estimation rapide des indicateurs d'énergie et d'erreurs d'un programme de référence par compilation simple d'un programme C. Dans cette méthode d'estimation de l'énergie, la caractérisation des opérateurs se fait au niveau du transistor, et l'estimation de l'énergie se fait au niveau fonctionnel. Cette approche hybride rend l'estimation de l'énergie plus rapide et plus précise pour différentes configurations. Les résultats d'estimation pour différents programmes de référence montrent une précision de 98% par rapport à la simulation SPICE. / The dynamic voltage scaling (DVS) technique is primarily used in digital design to enhance energy efficiency by reducing the supply voltage of the design. However, reducing Vdd increases the impact of variability and timing errors in nanometre-scale designs. The main objective of this work is to handle timing errors and to formulate a framework for estimating the energy consumption of error-resilient applications in the near-threshold regime (NTR). In this thesis, dynamic-speculation-based error detection and correction is explored in the context of adaptive voltage and clock overscaling. 
Apart from error detection and correction, some errors can also be tolerated; in other words, circuits can be pushed beyond their limits to compute incorrectly in order to achieve higher energy efficiency. The proposed error detection and correction method achieves 71% overclocking with only 2% additional hardware cost. This work involves an extensive study of the design at gate level to understand the behaviour of gates under overscaling of supply voltage, bias voltage and clock frequency (collectively called operating triads). A bottom-up approach is taken, studying the energy-versus-error trends of basic arithmetic operators at the transistor level. Based on the profiling of the arithmetic operators, a tool flow is formulated to estimate energy and error metrics for different operating triads. We achieve a maximum energy efficiency of 89% for arithmetic operators such as 8-bit and 16-bit adders at the cost of 20% faulty bits by operating in NTR. A statistical model is developed for the arithmetic operators to represent their behaviour under different variability impacts. This model is used for approximate computing in error-resilient applications that can tolerate an acceptable margin of error. The method is further explored for the execution unit of a VLIW processor. The proposed framework provides a quick estimation of the energy and error metrics of a benchmark program by simple compilation with a C compiler. In the proposed energy estimation framework, the characterization of arithmetic operators is done at the transistor level, and the energy estimation is done at the functional level. This hybrid approach makes energy estimation faster and more accurate for different operating triads. The proposed framework estimates energy for different benchmark programs with 98% accuracy compared to SPICE simulation.
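
A minimal sketch of the table-driven estimation idea described above: per-operator energy and bit-error characterization indexed by an operating triad (supply voltage, bias voltage, clock frequency), combined with operator counts from a compiled benchmark. The characterization numbers and the triad grid are invented placeholders, not measurements from the thesis.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Operating triad: (supply voltage, body-bias voltage, clock frequency in MHz).
Triad = Tuple[float, float, int]

@dataclass
class OperatorProfile:
    energy_pj: float       # energy per operation at this triad (placeholder)
    bit_error_rate: float  # fraction of faulty output bits at this triad

# Hypothetical characterization tables, normally obtained from transistor-level
# (SPICE) simulation of each arithmetic operator at each operating triad.
CHARACTERIZATION: Dict[str, Dict[Triad, OperatorProfile]] = {
    "add16": {
        (1.0, 0.0, 500): OperatorProfile(2.10, 0.000),
        (0.6, 0.0, 500): OperatorProfile(0.52, 0.004),   # near-threshold, some faults
        (0.5, 0.3, 500): OperatorProfile(0.31, 0.020),
    },
    "mul16": {
        (1.0, 0.0, 500): OperatorProfile(9.80, 0.000),
        (0.6, 0.0, 500): OperatorProfile(2.40, 0.010),
        (0.5, 0.3, 500): OperatorProfile(1.45, 0.060),
    },
}

def estimate(op_counts: Dict[str, int], triad: Triad):
    """Functional-level estimate: total energy and mean bit-error rate for a
    benchmark, given how many times each operator type executes."""
    energy = sum(CHARACTERIZATION[op][triad].energy_pj * n
                 for op, n in op_counts.items())
    ops = sum(op_counts.values())
    err = sum(CHARACTERIZATION[op][triad].bit_error_rate * n
              for op, n in op_counts.items()) / ops
    return energy, err

# Operator counts as they might be extracted from a compiled benchmark.
counts = {"add16": 120_000, "mul16": 35_000}
for triad in [(1.0, 0.0, 500), (0.6, 0.0, 500), (0.5, 0.3, 500)]:
    e, r = estimate(counts, triad)
    print(f"triad={triad}: energy={e / 1e6:.3f} uJ, mean bit-error rate={r:.4f}")
```

Swapping in measured characterization tables and real operator counts would reproduce the kind of hybrid transistor-level/functional-level flow the abstract describes.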
159

Approximate inference in graphical models

Hennig, Philipp January 2011 (has links)
Probability theory provides a mathematically rigorous yet conceptually flexible calculus of uncertainty, allowing the construction of complex hierarchical models for real-world inference tasks. Unfortunately, exact inference in probabilistic models is often computationally expensive or even intractable. A close inspection in such situations often reveals that computational bottlenecks are confined to certain aspects of the model, which can be circumvented by approximations without having to sacrifice the model's interesting aspects. The conceptual framework of graphical models provides an elegant means of representing probabilistic models and deriving both exact and approximate inference algorithms in terms of local computations. This makes graphical models an ideal aid in the development of generalizable approximations. This thesis contains a brief introduction to approximate inference in graphical models (Chapter 2), followed by three extensive case studies in which approximate inference algorithms are developed for challenging applied inference problems. Chapter 3 derives the first probabilistic game tree search algorithm. Chapter 4 provides a novel expressive model for inference in psychometric questionnaires. Chapter 5 develops a model for the topics of large corpora of text documents, conditional on document metadata, with a focus on computational speed. In each case, graphical models help in two important ways: They first provide important structural insight into the problem; and then suggest practical approximations to the exact probabilistic solution.
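
As a small, generic illustration of approximate inference through local computations (not one of the thesis's case studies), the sketch below runs loopy belief propagation on a three-variable cyclic Markov random field and compares the approximate marginals with brute-force enumeration; the potentials are random placeholders.

```python
import numpy as np
from itertools import product

# Pairwise MRF on a 3-cycle of binary variables: p(x) proportional to the
# product of psi_ij(x_i, x_j). Loopy belief propagation uses only local
# message updates, giving approximate marginals; brute force gives exact ones.
edges = [(0, 1), (1, 2), (2, 0)]
rng = np.random.default_rng(1)
psi = {e: np.exp(rng.normal(0, 1, size=(2, 2))) for e in edges}

def potential(i, j):
    # psi indexed as [x_i, x_j] regardless of edge orientation
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

neighbors = {v: [u for e in edges for u in e if v in e and u != v] for v in range(3)}
# m[(i, j)]: message from variable i to variable j, a length-2 vector
msgs = {(i, j): np.ones(2) for i, j in list(psi) + [(j, i) for i, j in psi]}

for _ in range(50):                       # iterate message updates
    new = {}
    for (i, j) in msgs:
        incoming = np.ones(2)
        for k in neighbors[i]:
            if k != j:
                incoming *= msgs[(k, i)]
        m = potential(i, j).T @ incoming  # sum over x_i of psi(x_i, x_j) * incoming(x_i)
        new[(i, j)] = m / m.sum()
    msgs = new

bp_marginals = []
for v in range(3):
    b = np.ones(2)
    for k in neighbors[v]:
        b *= msgs[(k, v)]
    bp_marginals.append(b / b.sum())

# Exact marginals by brute-force enumeration of the 2^3 joint states.
joint = np.zeros((2, 2, 2))
for x in product([0, 1], repeat=3):
    p = 1.0
    for (i, j) in edges:
        p *= psi[(i, j)][x[i], x[j]]
    joint[x] = p
joint /= joint.sum()
exact = [joint.sum(axis=tuple(a for a in range(3) if a != v)) for v in range(3)]

for v in range(3):
    print(f"x{v}: loopy BP {bp_marginals[v].round(3)}  exact {exact[v].round(3)}")
```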
160

Inferring Viral Dynamics from Sequence Data

Ibeh, Neke January 2016 (has links)
One of the primary objectives of infectious disease research is uncovering the direct link that exists between viral population dynamics and molecular evolution. For RNA viruses in particular, evolution occurs at such a rapid pace that epidemiological processes become ingrained into gene sequences. Conceptually, this link is easy to make: as RNA viruses spread throughout a population, they evolve with each new host infection. However, developing a quantitative understanding of this connection is difficult. Thus, the emerging discipline of phylodynamics is centered on reconciling epidemiology and phylogenetics using genetic analysis. Here, we present two research studies that draw on phylodynamic principles in order to characterize the progression and evolution of the Ebola virus and the human immunodeficiency virus (HIV). In the first study, the interplay between selection and epistasis in the Ebola virus genome is elucidated through the ancestral reconstruction of a critical region in the Ebola virus glycoprotein. Hence, we provide a novel mechanistic account of the structural changes that led up to the 2014 Ebola virus outbreak. The second study applies an approximate Bayesian computation (ABC) approach to the inference of epidemiological parameters. First, we demonstrate the accuracy of this approach with simulated data. Then, we infer the dynamics of the Swiss HIV-1 epidemic, illustrating the applicability of this statistical method to the public health sector. Altogether, this thesis unravels some of the complex dynamics that shape epidemic progression, and provides potential avenues for facilitating viral surveillance efforts.
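
To complement the ABC sketch shown under record 153 with an epidemiological example, the snippet below infers the basic reproduction number R0 of a toy SIR epidemic by rejection ABC against simulated surveillance data; the deterministic SIR model, priors, summary statistics and tolerance are illustrative assumptions, not the phylodynamic machinery used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

def sir_prevalence(r0, gamma=0.1, n=10_000, i0=10, days=150):
    """Deterministic discrete-time SIR: daily infectious counts for a given R0.
    A stand-in for the stochastic epidemic/phylogenetic simulators used in
    phylodynamic ABC studies."""
    beta = r0 * gamma
    s, i = n - i0, i0
    series = []
    for _ in range(days):
        new_inf = beta * s * i / n
        rec = gamma * i
        s, i = s - new_inf, i + new_inf - rec
        series.append(i)
    return np.array(series)

def summaries(series):
    # Peak prevalence, time of peak, number of days above 50 infectious.
    return np.array([series.max(), series.argmax(), (series > 50).sum()])

# Pretend these came from surveillance data generated with an unknown R0.
true_r0 = 1.8
observed = summaries(sir_prevalence(true_r0) * rng.normal(1.0, 0.05, 150))

# Rejection ABC: sample R0 from its prior and keep draws whose simulated
# summaries land close to the observed ones (normalized distance).
draws = rng.uniform(1.0, 4.0, 10_000)
scale = np.abs(observed) + 1e-9
accepted = np.array([r0 for r0 in draws
                     if np.max(np.abs(summaries(sir_prevalence(r0)) - observed) / scale) < 0.1])

print(f"true R0 = {true_r0}, posterior mean = {accepted.mean():.2f} "
      f"(n accepted = {len(accepted)})")
```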
