21

Podobnostní vyhledávání v databázích hmotnostních spekter / Similarity search in Mass Spectra Databases

Novák, Jiří, January 2013
Shotgun proteomics is a widely known technique for the identification of protein and peptide sequences from an "in vitro" sample. A tandem mass spectrometer generates tens of thousands of mass spectra, which must be annotated with peptide sequences. For this purpose, a similarity search in a database of theoretical spectra generated from a database of known protein sequences can be utilized. Since the sizes of these databases have grown rapidly in recent years, there is a demand for various database indexing techniques. We investigate the capabilities of (non)metric access methods as database indexing techniques for fast, approximate similarity retrieval in mass spectra databases. We show that the method for peptide sequence identification is more than 100x faster than a sequential scan over the entire database, while more than 90% of spectra are correctly annotated with peptide sequences. Since the method is currently suitable only for small mixtures of proteins, we also utilize a precursor mass filter as the database indexing technique for complex mixtures of proteins. The precursor mass filter, followed by ranking of spectra with a modification of the parametrized Hausdorff distance, outperforms state-of-the-art tools in both the number of identified peptide sequences and the speed of search. The...
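
As a rough illustration of the retrieval pipeline sketched in this abstract, the snippet below first filters candidate theoretical spectra by precursor mass and then ranks the survivors with a simplified, symmetric Hausdorff-style peak distance. The function names, the data model (a spectrum as a precursor mass plus a sorted list of peak m/z values) and the tolerance values are hypothetical illustrations, not the thesis implementation.

# Minimal sketch: precursor-mass filtering followed by Hausdorff-style ranking.
from bisect import bisect_left

def precursor_filter(query_mass, candidates, tolerance=1.5):
    """Keep candidate spectra whose precursor mass is within the tolerance (Da)."""
    return [c for c in candidates if abs(c[0] - query_mass) <= tolerance]

def nearest_gap(peak, peaks):
    """Distance from one peak to the closest peak in a sorted peak list."""
    i = bisect_left(peaks, peak)
    best = float("inf")
    for j in (i - 1, i):
        if 0 <= j < len(peaks):
            best = min(best, abs(peak - peaks[j]))
    return best

def hausdorff_like(query_peaks, candidate_peaks):
    """Average of nearest-peak gaps in both directions (a simplified symmetric variant)."""
    forward = sum(nearest_gap(p, candidate_peaks) for p in query_peaks) / len(query_peaks)
    backward = sum(nearest_gap(p, query_peaks) for p in candidate_peaks) / len(candidate_peaks)
    return (forward + backward) / 2.0

# Usage with made-up numbers: rank surviving candidates by increasing distance to the query.
query = (1200.6, [114.1, 245.2, 376.1, 503.3])
database = [(1200.9, [114.0, 245.3, 376.2, 503.1]), (1450.2, [120.5, 300.7])]
survivors = precursor_filter(query[0], database)
ranking = sorted(survivors, key=lambda c: hausdorff_like(query[1], c[1]))
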
22

A global systematic review of forest management institutions: towards a new research agenda

Ndzifon Kimengsi, Jude, Owusu, Raphael, Charmakar, Shambhu, Manu, Gordon, Giessen, Lukas, 19 March 2024
Context: Globally, forest landscapes are rapidly transforming, and the role of institutions as mediators of their use and management appears constantly in the literature. However, global comparative reviews that enhance comprehension of how forest management institutions (FMIs) are conceptualized, and of the varying determinants of compliance, are lacking. There is also knowledge fragmentation regarding the methodological approaches that have been, and should be, prioritized in the new research agenda on FMIs. Objectives: We review the regional variations in the conceptualization of FMIs, analyze the determinants of compliance with FMIs, and assess the methodological gaps in the study of FMIs. Methods: A systematic review of 197 empirical studies (491 cases) on FMIs was performed, including a directed content analysis. Results: First, the FMI literature is growing; multi-case and multi-country studies characterize Europe/North America, Africa and Latin America more than Asia. Second, the structure-process conceptualization of FMIs predominates in Asia and Africa. Third, global-south regions report high rates of compliance with informal FMIs, while non-compliance was registered for Europe/North America in the formal domain. Finally, mixed-methods approaches have been the least employed so far; while the use of purely qualitative methods increased over time, the adoption of purely quantitative approaches decreased. Conclusion: Future research should empirically ground informality in the institutional set-up of Australia while also valorizing mixed-methods research globally. Crucially, future research should consider multidisciplinary and transdisciplinary approaches to explore the actor and power dimensions of forest management institutions.
23

Interleaved Prefetching Ray Traversal on CPUs

Meyer, Viktor, January 2021
Ray tracing is used in computer graphics to generate images. Rendering images with ray tracing involves testing large numbers of rays for intersection against geometry. Testing for ray-geometry intersection is more formally known as the Ray Shooting Problem (RSP) and has broad applications across multiple communities. Hierarchical acceleration structures are frequently employed to index geometry and increase processing speed. Such hierarchical structures make it almost impossible for central processing units to predict memory access and branching patterns. This project focuses on the Bounding Volume Hierarchy (BVH) structure and on improving its performance when querying large batches of first-hit ray-geometry intersections. The core contribution is an Interleaved Prefetching Ray Traversal (IPRT) algorithm that addresses these memory and branching issues. Five standardized test scenes with varying geometric complexity provide evaluation data. The experimental evaluation suggests that for incoherent rays, IPRT achieves 6.1-41.8% faster performance compared to Stackless Traversal. However, for fully coherent rays, performance is 68.8-149.1% slower. These results suggest that for select ray tracing workloads that exhibit low coherence, the IPRT algorithm is likely to outperform Stackless Traversal. A microarchitectural analysis affirms previous research: memory accesses and branching behavior are critical for performance. Surprisingly, addressing each component in isolation yields no significant performance improvement; it is paramount to address the two simultaneously, as the IPRT algorithm does.
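
The abstract does not include the algorithm itself, but the underlying idea, interleaving the traversals of several rays so that the memory latency of one ray's next node is hidden by work on the others, can be sketched conceptually as below. The node layout, the callback signatures and the round-robin scheme are assumptions made for illustration; in a real CPU implementation the marked line is where a software prefetch (e.g. _mm_prefetch in C/C++) would be issued, something plain Python cannot express.

# Conceptual sketch of interleaved BVH traversal over a batch of rays.
from collections import deque

def interleaved_traversal(rays, bvh_nodes, intersect_node, intersect_leaf):
    """Find the nearest hit for each ray; bvh_nodes is an array of nodes indexed by id, 0 = root."""
    results = [None] * len(rays)
    in_flight = deque((i, [0]) for i in range(len(rays)))  # (ray index, per-ray traversal stack)
    while in_flight:
        ray_id, stack = in_flight.popleft()
        node = bvh_nodes[stack.pop()]
        # --- on a real CPU, prefetch the next node id on the stack here, so its memory
        # --- access overlaps with the work done for the other in-flight rays
        if intersect_node(rays[ray_id], node):
            if node["is_leaf"]:
                hit = intersect_leaf(rays[ray_id], node)
                if hit is not None and (results[ray_id] is None or hit < results[ray_id]):
                    results[ray_id] = hit          # keep the nearest hit found so far
            else:
                stack.extend(node["children"])
        if stack:
            in_flight.append((ray_id, stack))       # rotate to the back: other rays run next
    return results
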
24

Spatial Indexing on Flash-based Solid State Drives / Indexação Espacial em Dispositivos de Estado Sólido baseados em Memória Flash

Carniel, Anderson Chaves, 21 December 2018
Spatial database systems widely employ spatial indexing structures to speed up the processing of spatial queries. Many of the spatial indices proposed in the literature, such as the R-tree, assume magnetic disks (i.e., HDDs) as the underlying storage device; they are termed disk-based spatial indices. On the other hand, several spatial database applications are increasingly using flash-based Solid State Drives (SSDs), and designing spatial indices for these storage devices has thus gained increasing attention. This is due to the fact that, compared to HDDs, SSDs offer smaller size, lighter weight, lower power consumption, better shock resistance, and faster reads and writes. Hence, specific indices for SSDs, termed flash-aware spatial indices, have been proposed in the literature to deal with the intrinsic characteristics of SSDs, such as the asymmetric costs of reads and writes. However, the research to date has not established a flash-aware spatial index that actually exploits all the benefits of SSDs. This PhD thesis advances the literature as follows. We first define a methodology to create spatial datasets for experimental evaluations. We also propose FESTIval, a versatile framework that provides a common and unique environment in which to execute experimental evaluations. These contributions served as a foundation for the performance analyses conducted throughout this PhD work. Using this foundation, we analyze the performance behavior of spatial indices on different storage devices, such as HDDs and SSDs. Further, we discuss the applicability of flash simulators in the evaluation of spatial indices. The findings of these experiments contributed to the proposal of eFIND, a generic and efficient framework for flash-aware spatial indexing. eFIND is generic because it can port a wide range of disk-based spatial indices to SSDs. eFIND is also efficient because it is based on a set of design goals that exploit SSD performance. Performance tests showed that, compared to the state of the art, eFIND improved the construction of ported disk-based spatial indices and the execution of spatial queries. For porting the R-tree (i.e., the eFIND R-tree), eFIND showed performance reductions from 43% to 77% for building spatial indices, and from 4% to 23% for executing spatial queries. For porting the xBR+-tree (i.e., the eFIND xBR+-tree), eFIND showed reductions from 28% to 83% for building spatial indices and up to 35% in spatial query processing.
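
The abstract describes eFIND only at the level of design goals. One central flash-aware idea, buffering index-page modifications in RAM and flushing them to the SSD in batches so the device sees fewer small random writes, can be sketched roughly as follows. The class and method names (WriteBuffer, apply, flush) and the flush threshold are hypothetical and not taken from eFIND.

# Rough sketch of a write buffer for a flash-aware index.
class WriteBuffer:
    def __init__(self, storage, flush_threshold=64):
        self.storage = storage            # object exposing read_page(id) / write_page(id, page)
        self.pending = {}                 # page id -> list of buffered modifications
        self.flush_threshold = flush_threshold

    def apply(self, page_id, modification):
        """Record a modification (e.g. 'insert entry e into node page_id') without touching the SSD."""
        self.pending.setdefault(page_id, []).append(modification)
        if sum(len(mods) for mods in self.pending.values()) >= self.flush_threshold:
            self.flush()

    def read(self, page_id):
        """Read a page and replay its buffered modifications, so readers see the current state."""
        page = self.storage.read_page(page_id)
        for modification in self.pending.get(page_id, []):
            page = modification(page)
        return page

    def flush(self):
        """Write all modified pages back in one batch and clear the buffer."""
        for page_id in self.pending:
            self.storage.write_page(page_id, self.read(page_id))
        self.pending.clear()
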
25

Algoritmos de remoção para a estrutura de indexação Onion-tree / Deletion algorithms for the Onion-tree indexing structure

Marrach, Debora Gonçalves Rodrigues, 27 August 2013
The Onion-tree is an efficient main-memory metric access method for similarity search. The Onion-tree already provides algorithms for insertion and for processing similarity queries (range queries and k-nearest neighbor queries). However, no algorithm has been proposed in the literature for removing elements from the Onion-tree. For this index to be incorporated into a database management system, at least one deletion algorithm must be proposed and implemented. This master's research focused primarily on the implementation and performance evaluation of the logical deletion algorithms proposed in (CARÉLO et al., 2011). The proposal presented in (CARÉLO et al., 2011) led to the implementation of three algorithms, called LogicalDelete, ReplaceReducing and ReplaceGrowing. The first algorithm applies logical deletion, while the other two are specializations that add special treatment for deleting elements in internal nodes whose children are exclusively leaf nodes. The ReplaceReducing algorithm allows reducing the radius of the node that contains the deleted element, whereas the ReplaceGrowing algorithm allows increasing this radius. In addition, physical deletion algorithms that can be applied at any level of the Onion-tree were proposed and evaluated: the ReorgAll algorithm rearranges all the elements in the hierarchy of the node that contains the deleted element, by physically removing the elements and reinserting them using the insertion algorithm; and the PromoteNode algorithm, which extends ReorgAll, promotes, when conditions allow, another node to replace the one that contains the deleted element. Experimental evaluation of LogicalDelete, ReplaceReducing and ReplaceGrowing showed that LogicalDelete is more cost-effective than ReplaceReducing and ReplaceGrowing in query processing after the deletion of elements. Experimental evaluation of the physical deletion algorithms showed that promoting a node to replace the removed node has advantages over simply reorganizing the hierarchy of the node that contains the deleted element. Besides presenting a lower cost for deleting elements, the PromoteNode algorithm also outperformed ReorgAll in query processing after removing elements. Compared with the logical deletion algorithm, for a large number of deletion operations, the ReorgAll and PromoteNode algorithms produced a performance gain of 21.6% in range query processing. However, in the same comparison, these algorithms have a much higher deletion cost.
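
The abstract contrasts logical deletion with physical reorganization; the contrast can be sketched roughly as below for a generic metric-tree node, where logical deletion only marks an entry as removed while physical deletion rebuilds the affected hierarchy by reinserting its surviving elements. The node representation and function names are hypothetical illustrations, not the Onion-tree implementation.

# Sketch contrasting logical deletion (tombstone marking) with physical deletion (reinsertion).
def logical_delete(node, element):
    """Mark the element as deleted; the tree structure and node radii stay untouched."""
    for entry in node["entries"]:
        if entry["element"] == element and not entry["deleted"]:
            entry["deleted"] = True
            return True
    return False

def collect_elements(node):
    """Gather every non-deleted element stored in the subtree rooted at 'node'."""
    elements = [e["element"] for e in node["entries"] if not e["deleted"]]
    for child in node["children"]:
        elements.extend(collect_elements(child))
    return elements

def physical_delete(tree, node, element, insert):
    """Remove the element physically by rebuilding the affected hierarchy via reinsertion."""
    survivors = [e for e in collect_elements(node) if e != element]
    node["entries"], node["children"] = [], []   # drop the old hierarchy under this node
    for survivor in survivors:
        insert(tree, survivor)                   # reinsert using the tree's insertion algorithm
    return survivors
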
26

Automated Performance Test Generation and Comparison for Complex Data Structures - Exemplified on High-Dimensional Spatio-Temporal Indices

Menninghaus, Mathias, 23 August 2018
There exist numerous approaches to index either spatio-temporal or high-dimensional data, but none of them is able to efficiently index hybrid data, i.e., data that is both spatio-temporal and high-dimensional. As the best high-dimensional indexing techniques can only index point data, not now-relative data, and the best spatio-temporal indexing techniques suffer from the curse of dimensionality, this thesis introduces the Spatio-Temporal Pyramid Adapter (STPA). The STPA maps spatio-temporal data to points, maps now-values to the median of the data set, and indexes them with the pyramid technique. For high-dimensional and spatio-temporal index structures, no generally accepted benchmark exists. Most index structures are evaluated only by custom benchmarks and compared to a tiny set of competitors. Benchmarks may be biased, since a structure may be designed to perform well in a certain benchmark, or a benchmark may not cover a certain specialty of the investigated structures. In this thesis, the Interface Based Performance Comparison (IBPC) technique is introduced. It automatically generates test sets with high code coverage on the system under test (SUT) on the basis of all functions defined by an interface that all competitors support. Every test set is run on every SUT, and the performance results are weighted by the achieved coverage and summed up. These weighted performance results are then used to compare the structures. An implementation of the IBPC, the Performance Test Automation Framework (PTAF), is compared to a classic custom benchmark, to a workload generator whose parameters are optimized by a genetic algorithm, and to a specific PTAF alternative that incorporates the specific behavior of the systems under test. This is done for a set of two high-dimensional spatio-temporal indices and twelve variants of the R-tree. The evaluation indicates that PTAF performs at least as well as the other approaches in terms of minimal test cases with maximized coverage. Several case studies on PTAF demonstrate its widespread abilities.
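
The coverage-weighted aggregation of performance results described in the abstract can be illustrated with the minimal sketch below. The data layout and the weighting formula (a coverage-weighted average of runtimes per system under test) are assumptions made for illustration, not the IBPC definition.

# Minimal sketch of coverage-weighted performance comparison: each test set yields a runtime
# and a code-coverage fraction per system under test (SUT); lower aggregated score is better.
def coverage_weighted_score(measurements):
    """measurements: list of (runtime_seconds, coverage_fraction) pairs for one SUT."""
    total_weight = sum(coverage for _, coverage in measurements)
    if total_weight == 0:
        return float("inf")               # no coverage achieved: nothing meaningful was measured
    return sum(runtime * coverage for runtime, coverage in measurements) / total_weight

def compare_suts(results):
    """results: dict mapping SUT name -> list of (runtime, coverage); returns names ranked best-first."""
    scores = {name: coverage_weighted_score(m) for name, m in results.items()}
    return sorted(scores, key=scores.get)

# Usage with made-up numbers: the variant with the lower weighted runtime ranks first.
ranking = compare_suts({
    "rtree_variant_a": [(0.42, 0.91), (0.55, 0.88)],
    "rtree_variant_b": [(0.36, 0.64), (0.61, 0.79)],
})
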
