51

Generation of an Indoor Navigation Network for the University of Saskatchewan

July 2014 (has links)
Finding one's way in unknown and unfamiliar environments is a common task. A number of tools, ranging from paper maps to location-based services, have been introduced to assist human navigation. Undoubtedly, car navigation systems can be considered the most successful example of location-based services to gain wide user acceptance. However, the concept of car navigation is not always (perhaps rarely) suitable for pedestrian navigation. Moreover, precise localization of moving objects indoors is not possible due to the absence of an absolute positioning method such as GPS. Together, these factors make accurate indoor tracking and navigation an interesting problem to explore. Many of the methods of spatial analysis popular in outdoor applications can also be used indoors. In particular, generating an indoor navigation network can be an effective solution for a) improving the navigation experience inside complex indoor structures and b) enhancing the analysis of indoor tracking data collected with existing positioning solutions. Such building models should be based on a graph representation consisting of 'nodes' and 'edges', where a 'node' corresponds to the central position of a room and an 'edge' represents the medial axis of the hallway polygon that physically connects these rooms. Similar node-link structures should be applied to stairs and elevators to connect building floors. To generate this model, I selected the campus of the University of Saskatchewan as the study area and present a method that creates an indoor navigation network using ESRI ArcGIS products. First, the proposed method automatically extracts the geometry and topology of campus buildings and computes the distances among all entities to calculate the shortest paths between them. The system navigates through the university campus and helps locate classrooms, offices, and facilities. Route calculation is based on Dijkstra's algorithm, but could employ any network navigation algorithm.
To show the advantage of the generated network, I present the results of a study conducted in conjunction with the Department of Computer Science. An experiment with 37 participants was designed to collect tracking data on the university campus and demonstrate how incorporating the indoor navigation model can improve the analysis of indoor movement data. Based on the results of the study, it can be concluded that the generated indoor network can be applied to raw positioning data to improve accuracy, and can also be employed as a stand-alone tool for enhancing route guidance on a university campus and, by extension, on any large indoor space consisting of one or more buildings.
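Route calculation over a node/edge building graph of the kind described above can be sketched with Dijkstra's algorithm. The room and hallway names and the distances below are hypothetical, not taken from the thesis:

```python
import heapq

def dijkstra(graph, start, goal):
    # graph: {node: [(neighbor, distance_in_metres), ...]}
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path by walking predecessors back from the goal
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# hypothetical fragment of a floor network: room nodes joined through
# hallway medial-axis nodes, with a stairwell node linking floors
campus = {
    "room_101": [("hall_a", 5.0)],
    "hall_a":   [("room_101", 5.0), ("hall_b", 12.0), ("stairs_1", 8.0)],
    "hall_b":   [("hall_a", 12.0), ("room_120", 4.0)],
    "stairs_1": [("hall_a", 8.0)],
    "room_120": [("hall_b", 4.0)],
}
path, length = dijkstra(campus, "room_101", "room_120")
print(path, length)  # ['room_101', 'hall_a', 'hall_b', 'room_120'] 21.0
```

Any other shortest-path routine (A*, for instance) could be swapped in, as the abstract notes.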
52

Enhancements to Hidden Markov Models for Gene Finding and Other Biological Applications

Vinar, Tomas January 2005 (has links)
In this thesis, we present enhancements of hidden Markov models for the problem of finding genes in DNA sequences. Genes are the parts of DNA that serve as a template for the synthesis of proteins; gene finding is thus a crucial step in the analysis of DNA sequencing data.

Hidden Markov models are a key tool used in gene finding. This thesis presents three methods for extending the capabilities of hidden Markov models to better capture the statistical properties of DNA sequences. In each, we encounter a limiting factor that forces a trade-off against model accuracy.

First, we build better models for recognizing biological signals in DNA sequences. Our new models capture non-adjacent dependencies within these signals; here the main limiting factor is the amount of training data, since more training data allows more complex models. Second, we design methods for better representation of length distributions in hidden Markov models, where we balance the accuracy of the representation against the running time needed to find genes in novel sequences. Finally, we show that creating hidden Markov models with complex topologies may be detrimental to prediction accuracy unless we use more complex prediction algorithms. Such algorithms require longer running times, however, and in many cases the prediction problem is NP-hard. For gene finding, this means that incorporating some prior biological knowledge into the model would require impractical running times; we demonstrate, though, that our methods can be used for other biological problems where the input sequences are short.

As a model example to evaluate our methods, we built a gene finder, ExonHunter, that outperforms the programs commonly used in genome projects.
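Prediction with a gene-finding hidden Markov model is typically done with the Viterbi algorithm, which recovers the most probable state path. A minimal log-space sketch on a toy two-state (coding/intergenic) model follows; the states, transition probabilities, and GC-rich emission profile are illustrative, not the thesis's actual model:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    # log-space Viterbi: most probable hidden state path for observation string obs
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), None)
          for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            # best predecessor state for s at position t
            prev = max(states,
                       key=lambda p: V[t - 1][p][0] + math.log(trans_p[p][s]))
            score = (V[t - 1][prev][0] + math.log(trans_p[prev][s])
                     + math.log(emit_p[s][obs[t]]))
            V[t][s] = (score, prev)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# toy parameters (illustrative only): coding regions modeled as GC-rich
states = ["coding", "intergenic"]
start_p = {"coding": 0.5, "intergenic": 0.5}
trans_p = {"coding":     {"coding": 0.9, "intergenic": 0.1},
           "intergenic": {"coding": 0.1, "intergenic": 0.9}}
emit_p = {"coding":     {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15},
          "intergenic": {"A": 0.30, "C": 0.20, "G": 0.20, "T": 0.30}}
print(viterbi("ATGCGCGCAT", states, start_p, trans_p, emit_p))
```

The thesis's point about complex topologies is that once states are duplicated to encode extra knowledge, this simple decoding no longer suffices and richer (often NP-hard) decoding problems arise.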
54

Evidence Combination in Hidden Markov Models for Gene Prediction

Brejova, Bronislava January 2005 (has links)
This thesis introduces new techniques for finding genes in genomic sequences. Genes are regions of a genome encoding the proteins of an organism, and their identification is an important step in the annotation process after a new genome is sequenced. The prediction accuracy of gene finding can be greatly improved by using experimental evidence, including homologies between the genome and databases of known proteins, or evolutionary conservation of genomic sequence across different species.

We propose a flexible framework to incorporate several different sources of such evidence into a gene finder based on a hidden Markov model. Each source of evidence is expressed as a partial probabilistic statement about the annotation of positions in the sequence, and these statements are combined with the hidden Markov model to obtain the final gene prediction. The ability to use partial statements allows us to handle missing information transparently and to cope with the heterogeneous character of individual sources of evidence; on the other hand, it makes the combination step more difficult. We present a new method for combining partial probabilistic statements and prove that it extends existing methods for combining complete probability statements. We evaluate the performance of our system and its individual components on data from the human and fruit fly genomes.

The use of evolutionary sequence conservation as a source of evidence in gene finding requires efficient and sensitive tools for finding similar regions in very long sequences. We present a method for improving the sensitivity of existing tools for this task by careful modeling of sequence properties. In particular, we build a hidden Markov model representing a typical homology between two protein-coding regions and then use this model to optimize a component of a heuristic algorithm called a spaced seed. The seeds we discover significantly improve the accuracy and running time of similarity search in protein-coding regions, and are directly applicable to our gene finder.
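A spaced seed is a pattern of "must match" ('1') and "don't care" ('0') positions: a hit requires identity only at the '1' positions, which makes the search more tolerant of the scattered mismatches typical of coding-region homologies. A brute-force sketch follows; the seeds and sequences are illustrative, not the optimized seeds from the thesis:

```python
def seed_hits(seq1, seq2, seed):
    # seed: string over {'1','0'}; '1' = position must match, '0' = don't care
    care = [i for i, c in enumerate(seed) if c == "1"]
    span = len(seed)
    hits = []
    # slide the seed over every pair of alignments of the two sequences
    for i in range(len(seq1) - span + 1):
        for j in range(len(seq2) - span + 1):
            if all(seq1[i + p] == seq2[j + p] for p in care):
                hits.append((i, j))
    return hits

# a mismatch at the seed's '0' position is tolerated...
print(seed_hits("ACGT", "ACTT", "1101"))  # [(0, 0)]
# ...while a contiguous seed of the same span misses this homology
print(seed_hits("ACGT", "ACTT", "1111"))  # []
```

Real seed design, as in the thesis, chooses where to put the '0's so that sensitivity on a model of true homologies is maximized at a fixed specificity.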
55

A Boolean Function Based Approach to Nearest Neighbor Finding

Hsiao, Yuan-shu 29 June 2005 (has links)
With rapid advances in technology, strategies for efficiently operating on spatial data are needed. Spatial data consist of points, lines, rectangles, regions, surfaces, and volumes; in this thesis, we focus on region data. There are many important operations on region data, such as neighbor finding, rotation, and mirroring. Nearest neighbor (NN) finding is frequently used in geographic information systems (GIS), for example to find the specific point (a park, a department store, etc.) closest to our position. In any representation of region data, nearest neighbor finding is neither intuitive nor easy, since the coordinate information has been lost. Voros, Chen, and Chang have proposed strategies for nearest neighbor finding in eight directions based on the quadtree, and Chen and Chang have proposed nearest neighbor finding based on Peano curves. These strategies based on the quadtree and the Peano curve use a looping process, which is time-consuming. In recent years, many researchers have also focused on efficient strategies for the rotation and mirroring operations, which are useful in computer animation. The boolean function-based encoding offers considerable space savings with respect to other binary image representations, and the constant bit-length linear quadtree (CBLQ) representation saves memory space compared with the other binary image representations for which set-operation strategies have been proposed. However, the processes for obtaining the rotated or mirrored code based on these two representations are time-consuming, since the coordinate information of all pixels has been lost.
Therefore, in this thesis, we first propose a strategy for nearest neighbor finding based on the quadtree and the Peano curve that uses bitwise and arithmetic operations and is more efficient than the strategies based on looping processes. Next, we propose efficient strategies for rotating and mirroring images based on the boolean function-based encoding and the CBLQ representation. Our simulation study shows, first, that among the strategies for nearest neighbor finding based on the quadtree and the three space-filling curves, our strategies based on the quadtree and the Peano curve require the least CPU time, and our strategy based on the Hilbert curve requires the least total time (CPU time + I/O time). Second, in most cases, when the black density is no larger than 50%, the CPU time based on the boolean function-based encoding is less than that based on CBLQ.
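The bitwise flavor of such strategies can be illustrated with the Z-order (Morton) curve, a Peano-like space-filling curve whose index is obtained by interleaving the bits of the two coordinates, so coordinate information can be recovered from the code with shifts and masks rather than loops over pixels. This is a generic sketch, not the thesis's exact encoding:

```python
def morton_encode(x, y, bits=16):
    # interleave the bits of x and y into a single Z-order index:
    # bit i of x goes to bit 2i, bit i of y goes to bit 2i+1
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def morton_decode(code, bits=16):
    # de-interleave: extract the even bits as x and the odd bits as y
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y

print(morton_encode(3, 5))  # 39: bits of 3 (011) and 5 (101) interleaved
print(morton_decode(39))    # (3, 5)
```

Because nearby cells tend to have nearby codes, arithmetic on these indices is the kind of constant-time building block that replaces the looping neighbor searches criticized above.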
56

Motif Finding in Biological Sequences

Liao, Ying-Jer 21 August 2003 (has links)
A huge amount of genomic information, including protein and DNA sequences, has been generated by the Human Genome Project. One way to decipher these biological sequences is to detect local residue patterns shared by multiple sequences; however, detecting unknown patterns is very difficult. In this thesis, we propose an algorithm, based on the Gibbs sampler method, for identifying local consensus patterns (motifs) in biomolecular sequences. We first designed an ACO (ant colony optimization) algorithm to find a good initial solution and a set of better candidate positions for revising the motif. The Gibbs sampler method is then applied with these candidate positions as its input. The time required to find motifs with our algorithm is reduced drastically: it takes only about 20% of the time of the Gibbs sampler method while maintaining comparable quality.
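For reference, the core Gibbs-sampling step that the thesis accelerates can be sketched as follows. This minimal version uses random initial positions rather than ACO seeding, and the pseudocounts, sequences, and iteration count are illustrative:

```python
import random

def gibbs_motif(seqs, w, iters=2000, seed=0):
    # minimal Gibbs sampler for one ungapped motif of width w
    rng = random.Random(seed)
    pos = [rng.randrange(len(s) - w + 1) for s in seqs]  # current motif starts
    alphabet = "ACGT"
    for _ in range(iters):
        i = rng.randrange(len(seqs))  # hold one sequence out
        others = [seqs[j][pos[j]:pos[j] + w]
                  for j in range(len(seqs)) if j != i]
        # position-specific counts with +1 pseudocounts from the other motifs
        profile = [{a: 1 for a in alphabet} for _ in range(w)]
        for m in others:
            for k, a in enumerate(m):
                profile[k][a] += 1
        # sample a new start in the held-out sequence, weighted by profile score
        scores = []
        for start in range(len(seqs[i]) - w + 1):
            p = 1.0
            for k in range(w):
                p *= profile[k][seqs[i][start + k]]
            scores.append(p)
        r, acc = rng.random() * sum(scores), 0.0
        for start, sc in enumerate(scores):
            acc += sc
            if r <= acc:
                pos[i] = start
                break
    return [s[p:p + w] for s, p in zip(seqs, pos)]

seqs = ["TTTTACGTGG", "CCACGTCCCC", "GGGGGACGTA"]
print(gibbs_motif(seqs, 4))
```

The thesis's contribution is essentially to replace the random `pos` initialization with ACO-derived candidate positions, so far fewer sampling iterations are needed.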
57

Combination of results from gene-finding programs

Hammar, Cecilia January 1999 (has links)
Gene-finding programs available over the Internet today are shown to be nothing more than guides to possible coding regions in DNA: the programs often make incorrect predictions. The idea of combining a number of different gene-finding programs arose a couple of years ago, and Murakami and Takagi (1998) published one of the first attempts to combine results from gene-finding programs built on different techniques (e.g. artificial neural networks and hidden Markov models). The simple combination methods used by Murakami and Takagi (1998) indicated that prediction accuracy could be improved by a combination of programs.

In this project, artificial neural networks are used to combine the results of the three well-known gene-finding programs GRAILII, FEXH, and GENSCAN. The results show a considerable increase in prediction accuracy compared to the best-performing single program, GENSCAN.
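As a stand-in for the neural-network combiner, a minimal logistic combiner over per-region scores from three programs illustrates the idea of learning to weight the predictors; the scores and coding/non-coding labels below are hypothetical, and the real project uses a multi-layer network rather than this single unit:

```python
import math

def train_combiner(X, y, lr=0.5, epochs=500):
    # single logistic unit trained by stochastic gradient descent;
    # X rows are hypothetical [grailii, fexh, genscan] scores in [0, 1]
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(coding)
            g = p - yi                        # gradient of log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

# hypothetical training regions: three program scores -> coding (1) or not (0)
X = [[0.9, 0.8, 0.95], [0.2, 0.1, 0.05], [0.7, 0.4, 0.9], [0.3, 0.5, 0.1]]
y = [1, 0, 1, 0]
w, b = train_combiner(X, y)
print([predict(w, b, x) for x in X])
```

The learned weights play the role the hidden layer plays in the project's networks: programs that are more reliable on the training data end up contributing more to the combined call.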
58

The archival web: contextual authority files and the representation of institutional textual documents in online description

McLuhan-Myers, Madeleine 23 August 2012 (has links)
This thesis considers the problem of the representation of individual institutional textual records in archival research tools. While document studies in academic journals point to the value of focussed consideration of various types of records, archives do not have the resources to apply such focus to every item in their holdings, even though these items convey the information sought by many researchers. Over the last century, archivists have emphasized the description of groups of records, because this provides insight into the context in which documents exist, and the immense quantities involved left little choice. Recent developments, however, suggest the individual document should be revisited. This thesis focuses on how formal descriptive systems might be enhanced to allow closer consideration of individual institutional textual records. It reviews the history of description, explores the benefits to researchers seeking information from particular documents (e.g. the will), and examines tools created in response, such as contextual authority files.
60

Failure Finding Interval Optimization for Periodically Inspected Repairable Systems

Tang, Tian Qiao 31 August 2012 (has links)
The maintenance of equipment has been an important issue for companies for many years. For systems with hidden or unrevealed failures (i.e., failures that are not self-announcing), a common practice is to inspect the system regularly, looking for such failures. Examples of such systems include protective devices, emergency devices, standby units, and underwater devices. If no periodic inspection is scheduled and a hidden failure has already occurred, severe consequences may result. Research on periodic inspection seeks to establish the optimal inspection interval, or failure finding interval (FFI), that maximizes a system's availability and/or minimizes its expected cost; it also focuses on important system parameters such as unavailability. Most research in this area considers non-negligible downtime due to repair/replacement but ignores the downtime caused by inspections. In many situations, however, inspection time is non-negligible. We address this gap by proposing an optimal failure finding interval that accounts for both non-negligible inspection time and repair/replacement time. A novel feature of this work is the development of models for both age-based and calendar-based inspection policies with random or constant inspection time and random or constant repair/replacement time. More specifically, we first study instantaneous availability for constant inspection and repair/replacement times, starting with the assumption that the system is renewed at each inspection and then considering models in which renewal occurs only after failure. We also develop limiting average availability models for random inspection and repair/replacement times, considering both age-based and calendar-based inspection policies, and optimize these models to obtain the FFI that maximizes the system's availability.
Finally, we develop several cost models for both age-based and calendar-based inspection policies with random inspection and repair/replacement times, formulating the model with constant inspection and repair/replacement times as a special case, and investigate the optimization of the cost models to obtain the FFI that minimizes the expected cost. The numerical examples and case study presented in the dissertation demonstrate the importance of considering non-negligible downtime due to inspection.
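The availability trade-off can be sketched with a simplified limiting-average-availability model that, like the dissertation, keeps inspection downtime non-negligible. The exponential failure assumption, the renewal-at-each-inspection assumption, and all rates and times below are illustrative:

```python
import math

def availability(T, lam, t_insp, t_rep):
    # limiting average availability for a hidden-failure system inspected
    # every T hours: exponential failures at rate lam, renewal at each
    # inspection, constant inspection time t_insp and repair time t_rep
    expected_uptime = (1.0 - math.exp(-lam * T)) / lam   # E[uptime per cycle]
    p_fail = 1.0 - math.exp(-lam * T)                    # P(failure in cycle)
    cycle = T + t_insp + p_fail * t_rep                  # inspection NOT ignored
    return expected_uptime / cycle

def best_ffi(lam, t_insp, t_rep):
    # coarse grid search for the FFI maximizing availability (0.1 h steps)
    candidates = [t / 10.0 for t in range(1, 20000)]
    return max(candidates, key=lambda T: availability(T, lam, t_insp, t_rep))

lam, t_insp, t_rep = 1e-3, 2.0, 24.0   # hypothetical values, in hours
T_opt = best_ffi(lam, t_insp, t_rep)
print(T_opt, availability(T_opt, lam, t_insp, t_rep))
```

The shape of the objective captures the dissertation's tension: inspecting too often wastes availability on inspection downtime, inspecting too rarely leaves hidden failures undetected for long stretches, and the optimum lies between.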
