1. Memory-efficient graph search applied to multiple sequence alignment. Zhou, Rong. 06 August 2005.
Graph search is used in many areas of computer science. It is well known that the scalability of graph-search algorithms such as A* is limited by their memory requirements. In this dissertation, I describe three complementary strategies for reducing the memory requirements of graph-search algorithms, especially for multiple sequence alignment (a central problem in computational molecular biology). These search strategies dramatically increase the range and difficulty of multiple sequence alignment problems that can be solved. The first strategy uses a divide-and-conquer method of solution reconstruction, and one of my contributions is to show that when divide-and-conquer solution reconstruction is used, a layer-by-layer strategy for multiple sequence alignment is more memory-efficient than a best-first strategy. The second strategy is a new approach to duplicate detection in external-memory graph search that partitions the search graph based on an abstraction of the state space. For graphs with sufficient local structure, it allows graph-search algorithms to use external memory, such as disk storage, almost as efficiently as internal memory. The third strategy is a technique for reducing the memory requirements of sub-alignment search heuristics that are stored in lookup tables. It uses the start and goal states of a problem instance to restrict the region of the state space for which a table-based heuristic is needed, making it possible to store more accurate heuristic estimates in the same amount of memory. Together, these strategies dramatically improve the scalability of graph search not only for multiple sequence alignment but for many other graph-search problems, and generalizations of the strategies to other problems are discussed throughout the dissertation.
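The second strategy, duplicate detection over a partition of the search graph induced by a state-space abstraction, can be pictured with a short sketch. The following Python fragment is an illustrative in-memory reduction of the idea, not the dissertation's algorithm: each breadth-first layer is bucketed by an abstraction function, so duplicate checking touches only one bucket of the closed list at a time; in an external-memory setting, each bucket could reside on disk and be loaded individually. The abstraction function shown is a placeholder assumption.

    from collections import defaultdict

    def abstraction(state):
        # Illustrative projection: map a state to an abstract state. A real
        # abstraction would exploit the problem's local structure; hashing
        # into a fixed number of buckets is only a placeholder.
        return hash(state) % 64

    def layered_search(start, goal, successors):
        # Layer-by-layer (breadth-first) search in which each layer is
        # partitioned by the abstraction, so duplicate detection runs one
        # bucket at a time.
        closed = defaultdict(set)            # abstract state -> states seen
        closed[abstraction(start)].add(start)
        frontier, depth = [start], 0
        while frontier:
            if goal in frontier:
                return depth
            next_layer = defaultdict(list)   # bucket successors by abstraction
            for state in frontier:
                for succ in successors(state):
                    next_layer[abstraction(succ)].append(succ)
            frontier = []
            for bucket, states in next_layer.items():
                seen = closed[bucket]        # only this bucket is "in memory"
                for s in states:
                    if s not in seen:
                        seen.add(s)
                        frontier.append(s)
            depth += 1
        return None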
2. Big Vector: An External Memory Algorithm and Data Structure. Upadhyay, Abhyudaya. 16 October 2015.
No description available.
3. Increasing Wayfinding for Long-Term Care Residents with Dementia using Spaced Retrieval Training with External Aids. Rose, Veronica Joy. 24 August 2012.
No description available.
4. Exploring the Role of Prospective Memory in Location-Based Reminders. Wang, Yao. 03 May 2017.
Location-based reminder systems (LBRs) are typically used to remind people to complete a to-do task at a particular location. People rely on prospective memory to remember future to-do tasks, yet the current design of LBRs fails to take advantage of prospective memory theory. In this dissertation, I propose a framework connecting human prospective memory theory with LBRs and apply that theory to their technical design. The goal of my work is to make reminders behave more consistently with how human memory works.
Prospective memory research suggests that encoding of the location and familiarity with the location affect prospective remembering. I conducted two empirical studies to test how this theoretical knowledge applies to LBRs. In the first experiment, I hypothesized that if the encoding stage in an LBR more closely matches the retrieval stage, then location recognition and task recall should improve at retrieval time. The results indicate that providing a first-person view (a street view of the desired location) at the encoding stage benefits prospective remembering the most.
Prospective memory theory also suggests that familiarity with the external cue strongly influences prospective remembering. In the second experiment, I hypothesized that familiarity with a location affects location recognition at retrieval. The results show that the encoding interface is used differently for familiar and unfamiliar cities and businesses to support recognizing a target location.
The findings have implications for the design of future LBRs. I designed an LBR prototype by applying these empirical findings and conducted a usability evaluation. Future designers of LBRs should consider 1) providing more support for matching the encoding stage to the eventual cue at the retrieval stage, and 2) incorporating the user's familiarity with the places involved at the encoding stage, to provide a better user experience. My work shows the importance of using prospective memory theory in the design of LBR systems.

General audience abstract: People use location-based mobile reminder systems to complete to-do tasks at particular locations. My work studied how people memorize locations when setting up a reminder, as well as how they recognize locations and remember the associated to-do tasks when a reminder is retrieved. People see a location from a first-person view while completing a to-do task in real life, but current interface designs of location-based reminders do not support that view. My results contribute to the future interface design of location-based mobile reminder systems: the encoding interface of a location-based reminder could apply a street view of the location to make the reminder more usable and more user-friendly.
5. Um algoritmo para a construção de vetores de sufixo generalizados em memória externa / External memory generalized suffix array construction algorithm. Louza, Felipe Alves da. 17 December 2013.
The suffix array is an important data structure used in several string-processing problems. In the literature, several approaches have been proposed for external-memory suffix array construction. However, these approaches are not aimed at indexing sets of strings; that is, they do not consider generalized suffix arrays. This limitation motivates this master's thesis, which presents eGSA, the first external-memory algorithm developed to construct generalized suffix arrays enhanced with the longest common prefix (LCP) array and the Burrows-Wheeler transform (BWT). The work focuses on bioinformatics, as recent technological advances have increased the volume of available biological data, which are stored as strings. The eGSA algorithm was validated through performance tests with real DNA and protein sequences. For sets of large DNA strings, the proposed algorithm was compared with the most efficient related suffix array construction algorithm in the literature, adapted to construct generalized arrays; eGSA reduced the time spent by a factor of 3.2 to 8.3 and consumed 50% less memory. For sets of small protein strings, tests were performed only with eGSA since, to the best of our knowledge, there is no related work that can be adapted. Compared with the average time spent indexing sets of large strings, eGSA obtained competitive times for sets of small strings. The performance tests therefore demonstrate that the proposed algorithm can be applied efficiently to index both sets of large strings and sets of small strings.
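To make the three structures concrete, the following deliberately naive in-memory Python sketch builds a generalized suffix array, its LCP array, and the BWT for a small set of strings. It illustrates only what eGSA computes, not how it computes it: eGSA's contribution is constructing these structures in external memory, which this sketch does not attempt. The terminator symbol and the tie-breaking by string index are assumptions of the sketch.

    def naive_gsa(strings):
        # Append a terminator assumed smaller than every other symbol.
        texts = [s + "$" for s in strings]
        # A GSA entry is a pair (string index, suffix offset).
        suffixes = [(i, j) for i, t in enumerate(texts) for j in range(len(t))]
        suffixes.sort(key=lambda p: (texts[p[0]][p[1]:], p[0]))

        def lcp(a, b):
            # Length of the longest common prefix of two strings.
            n = 0
            while n < len(a) and n < len(b) and a[n] == b[n]:
                n += 1
            return n

        lcps, bwt = [0], []
        for k, (i, j) in enumerate(suffixes):
            if k > 0:
                pi, pj = suffixes[k - 1]
                lcps.append(lcp(texts[i][j:], texts[pi][pj:]))
            # BWT symbol: the character preceding the suffix (wrapping around).
            bwt.append(texts[i][j - 1] if j > 0 else texts[i][-1])
        return suffixes, lcps, "".join(bwt)

    # Example: naive_gsa(["GATA", "GAT"]) lists the suffixes of both strings
    # in lexicographic order together with LCP values and BWT symbols.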
6. Memory, aging and external memory aids: Two traditions of cognitive research and their implications for a successful development of memory augmentation. Kristiansson, Mattias. January 2011.
The topic of this thesis is how external memory aids can assist the decline of cognitive abilities and memory functioning in older people. This issue was approached through a combination of methods. The starting point was a literature review of two approaches to the study of memory, the traditional approach, in which memory functions are located in the brain, and the situated approach, in which remembering extends across external resources, together with a literature review on declining memory abilities in older people. An ethnographic study of everyday remembering in an older population, aged 72 to 91, found many instances of spontaneous use of the environment to support a declining memory ability, which in turn suggests that the traditional approach to memory research is of limited value when studying everyday memory abilities in older people. A study of existing memory aids, as well as memory aids currently under development in research laboratories, showed that these technologies are primarily based on an explicit or implicit traditional view of memory that disregards several aspects of remembering in the natural world. It is therefore suggested that future development of memory aids could fruitfully benefit from a distributed and situated approach, in which individuals' current use of external memory aids is the starting point, with the goal of extending and amplifying methods and artefacts already spontaneously in use.
7. Skeleton-based visualization of massive voxel objects with network-like architecture. Prohaska, Steffen. January 2007.
This work introduces novel internal- and external-memory algorithms for computing voxel skeletons of massive voxel objects with complex network-like architecture and for converting these voxel skeletons to piecewise-linear geometry, that is, triangle meshes and piecewise straight lines. The presented techniques help tackle the challenge of visualizing and analyzing 3D images of increasing size and complexity, which are becoming more and more important in, for example, biological and medical research.
Section 2.3.1 contributes to the theoretical foundations of thinning algorithms with a discussion of homotopic thinning in the grid cell model. The grid cell model explicitly represents a cell complex built of faces, edges, and vertices shared between voxels. Characterizing the pairs of cells that may be deleted is much simpler than earlier characterizations of simple voxels. The grid cell model resolves topologically unclear voxel configurations at junctions and locked voxel configurations that cause, for example, interior voxels in sets of non-simple voxels. A general conclusion is that the grid cell model is superior to indecomposable voxels for algorithms that need detailed control of topology.
Section 2.3.2 introduces a noise-insensitive measure based on the geodesic distance along the boundary for computing two-dimensional skeletons. The measure retains thin object structures if they are geometrically important while ignoring noise on the object's boundary; no other known measure combines these properties. The measure is also used to guide erosion in a thinning process from the boundary towards lines centered within plate-like structures. Quantities based on geodesic distance seem well suited to robustly identifying one- and two-dimensional skeletons, although a theoretical justification for this observation remains open. Chapter 6 applies the method to the visualization of bone micro-architecture.
Chapter 3 describes a novel geometry-generation scheme for representing voxel skeletons, which retracts voxel skeletons to piecewise-linear geometry per dual cube. The generated triangle meshes and graphs provide a link to geometry processing and efficient rendering of voxel skeletons. The scheme creates non-closed surfaces with boundaries, which contain fewer triangles than representations of voxel skeletons using closed surfaces such as small cubes or iso-surfaces. A conclusion is that reasoning specifically about voxel-skeleton configurations, instead of generic voxel configurations, helps deal with the topological implications. The geometry generation is one foundation of the applications presented in Chapter 6.
Chapter 5 presents a novel external-memory algorithm for distance-ordered homotopic thinning. The presented method extends known algorithms for computing chamfer distance transformations and thinning to execute I/O-efficiently when the input is larger than the available main memory. The applied block-wise decomposition schemes are quite simple, yet it was necessary to carefully analyze the effects of block boundaries to devise globally correct external-memory variants of the known algorithms; in general, doing so is superior to naive block-wise processing that ignores boundary effects. Chapter 6 applies the algorithms in a novel method, based on confocal microscopy, for the quantitative study of micro-vascular networks in the field of microcirculation.
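As background, the following Python sketch shows a plain in-memory two-pass chamfer distance transform on a 2D mask with the common 3-4 weights. It is illustrative only: the thesis's contribution lies in executing such sweeps block-wise and I/O-efficiently on 3D data larger than main memory, which this sketch deliberately ignores.

    import numpy as np

    def chamfer_distance_2d(mask, w_ortho=3, w_diag=4):
        # mask: 2D boolean array, True = object. Background gets distance 0;
        # object pixels get the weighted chamfer distance to the background.
        INF = np.iinfo(np.int32).max // 2
        d = np.where(mask, INF, 0).astype(np.int32)
        h, w = d.shape
        # Forward sweep: propagate from top-left to bottom-right.
        for y in range(h):
            for x in range(w):
                for dy, dx, c in ((0, -1, w_ortho), (-1, 0, w_ortho),
                                  (-1, -1, w_diag), (-1, 1, w_diag)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        d[y, x] = min(d[y, x], d[ny, nx] + c)
        # Backward sweep: the mirrored masks, bottom-right to top-left.
        for y in range(h - 1, -1, -1):
            for x in range(w - 1, -1, -1):
                for dy, dx, c in ((0, 1, w_ortho), (1, 0, w_ortho),
                                  (1, 1, w_diag), (1, -1, w_diag)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        d[y, x] = min(d[y, x], d[ny, nx] + c)
        return d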
8. Range Searching Data Structures with Cache Locality. Hamilton, Christopher. 17 March 2011.
This thesis focuses on range searching data structures, an elementary problem in computational geometry with research spanning decades. These problems often involve very large data sets. Processor speeds increase faster than memory speeds, so the gap between the rate at which CPUs can process data and the rate at which it can be retrieved is widening. To bridge this gap, various levels of cache are used. Since cache misses are costly, algorithms should be cache-friendly.

The input-output (I/O) model was the first model for constructing cache-efficient algorithms, focusing on a two-level memory hierarchy. Algorithms for this model require manual tuning to determine optimal values for hardware-dependent parameters, and are only optimal at a single level of a memory hierarchy. Cache-oblivious (CO) algorithms are built without knowledge of the hierarchy, allowing them to be optimal across all levels at once.

There exist strong theoretical and practical results for I/O-efficient range searching. Recently, the CO model has received attention, but range searching in it remains poorly understood. This thesis explores data structures for CO range counting and reporting. It presents the first space- and worst-case query-time optimal approximate range counting structure for a family of related problems, and associated O(N log N)-space query-optimal reporting structures. The approximate counting structure is the first of its kind in the internal-memory, I/O, and CO models. Researchers have been trying to create linear-space query-optimal CO reporting structures; this thesis shows that for a variety of problems, linear space is in fact impossible.

Heuristics are also used for building cache-friendly algorithms. Space-filling curves are continuous functions mapping multi-dimensional sets into one-dimensional ones. They are used to build search structures in the hope that objects that were close in the original space remain close in the resulting ordering, so that queries incur fewer page swaps when traversing the structure. The Hilbert curve is notably good at this, but often imposes a space or time penalty. This thesis introduces compact Hilbert indices, which remove the inefficiency inherent for input point sets with bounding boxes smaller than their bounding hypercubes.
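To illustrate the locality heuristic, the classic rotate-and-accumulate routine below maps 2D coordinates on a 2^order x 2^order grid to positions along the Hilbert curve (shown in Python). This is the standard textbook construction, not the compact Hilbert indices introduced in the thesis, which generalize it to point sets whose bounding box is smaller than the bounding hypercube.

    def hilbert_index(order, x, y):
        # Position of (x, y) along the Hilbert curve filling a
        # 2^order x 2^order grid.
        n = 1 << order
        d = 0
        s = n // 2
        while s > 0:
            rx = 1 if (x & s) else 0
            ry = 1 if (y & s) else 0
            d += s * s * ((3 * rx) ^ ry)
            # Rotate/flip the quadrant so the base pattern recurses correctly.
            if ry == 0:
                if rx == 1:
                    x = n - 1 - x
                    y = n - 1 - y
                x, y = y, x
            s //= 2
        return d

    # Sorting points by their index keeps spatially close points close in
    # the ordering, which is the property a cache-friendly layout exploits:
    #   points.sort(key=lambda p: hilbert_index(16, p[0], p[1]))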
9. Efficient External-Memory Graph Search for Model Checking. Lamborn, Peter C. 17 May 2014.
Model checking problems suffer from state space explosion: the number of states in the graph increases exponentially with the number of variables in the state description. Searching the large graphs required in model checking demands an efficient algorithm. This dissertation explores several methods for improving an external-memory search algorithm for model checking problems; a tool implementing these methods is built on top of the Murphi model checker. One improvement is a state cache for immediate duplicate detection that leverages the properties of state locality. A novel type of locality, intralayer locality, is explained and shown to exist in a variety of search spaces. Another improvement, partial delayed duplicate detection, exploits interlayer locality to reduce search times. An automatic partitioning function is described that allows hash-based delayed duplicate detection to be used without domain knowledge of the state space. Finally, a phased delayed duplicate detection algorithm combining features of hash-based and sorting-based delayed duplicate detection is explained and compared with the other methods.
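Hash-based delayed duplicate detection can be pictured with a small sketch. The Python fragment below is an illustrative reduction, not the dissertation's Murphi-based tool: during expansion, successors are streamed to one disk file per hash partition with no duplicate checking, and each partition is then deduplicated on its own, so only one partition's states must fit in memory at a time. The file layout, the use of pickle, and the built-in hash as the partitioning function are all assumptions of the sketch.

    import os
    import pickle

    def external_bfs_layer(frontier, successors, work_dir, num_parts=16):
        # Expansion phase: stream successors to per-partition files, no dedup.
        parts = [open(os.path.join(work_dir, "part%d.new" % p), "ab")
                 for p in range(num_parts)]
        for state in frontier:
            for succ in successors(state):
                pickle.dump(succ, parts[hash(succ) % num_parts])
        for f in parts:
            f.close()
        # Delayed duplicate detection: one partition in memory at a time.
        next_frontier = []
        for p in range(num_parts):
            seen_path = os.path.join(work_dir, "part%d.seen" % p)
            seen = set()
            if os.path.exists(seen_path):
                with open(seen_path, "rb") as f:
                    seen = pickle.load(f)
            new_path = os.path.join(work_dir, "part%d.new" % p)
            with open(new_path, "rb") as f:
                while True:
                    try:
                        s = pickle.load(f)
                    except EOFError:
                        break
                    if s not in seen:
                        seen.add(s)
                        next_frontier.append(s)
            with open(seen_path, "wb") as f:
                pickle.dump(seen, f)
            os.remove(new_path)
        return next_frontier

    # Usage sketch (the caller seeds the closed list with the initial state,
    # omitted here): repeatedly call external_bfs_layer on an existing
    # directory, e.g. `while frontier: frontier = external_bfs_layer(...)`.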