  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
811

Optimal Path Queries in Very Large Spatial Databases

Zhang, Jie January 2005
Researchers have investigated the optimal route query problem for a long time. Optimal route queries are categorized as either unconstrained or constrained. Many main-memory algorithms have been developed for the optimal route query problem; among these, Dijkstra's shortest path algorithm is one of the most popular for unconstrained route queries. The constrained route query problem is more complicated, and some instances, such as the Traveling Salesman Problem and the Hamiltonian Path Problem, are NP-hard. Many algorithms address the constrained route query problem, but most solve only a specific case, and all require the entire graph to reside in main memory. Recently, driven by applications involving very large graphs, such as the digital maps managed by Geographic Information Systems (GIS), several disk-based algorithms have been derived using divide-and-conquer techniques to solve the shortest path problem in a very large graph. Until now, however, little research has been conducted on the disk-based constrained problem.

This thesis presents two algorithms: 1) a new disk-based shortest path algorithm (DiskSPNN), and 2) a new disk-based optimal path algorithm (DiskOP) that answers an optimal route query avoiding a set of forbidden edges in a very large graph. Both algorithms fit within the same divide-and-conquer framework as the existing disk-based shortest path algorithms proposed by Ning Zhang and Heechul Lim. Several techniques, including the query super graph, successor fragments, and open boundary node pruning, are proposed to improve the performance of the previous disk-based shortest path algorithms, and these techniques carry over to the DiskOP algorithm with minor changes.
The proposed DiskOP algorithm rests on collecting a set of boundary vertices and simultaneously relaxing their adjacent super edges. Even if the forbidden edges are distributed across all fragments of the graph, the DiskOP algorithm requires little memory. Our experimental results indicate that DiskSPNN outperforms the original algorithms in both I/O cost and running time, and that DiskOP successfully solves a specific constrained route query problem in a very large graph.
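The unconstrained baseline named in this abstract is Dijkstra's algorithm, and DiskOP's constraint is a set of forbidden edges. As an illustrative in-memory sketch only (the thesis's contribution is the disk-based, fragment-partitioned version, which this does not capture), a Dijkstra variant that skips forbidden edges might look like:

```python
import heapq

def dijkstra(graph, source, forbidden=frozenset()):
    """Single-source shortest paths over an adjacency-list graph.

    graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    forbidden: set of (u, v) edges the path must not use, the
    in-memory analogue of DiskOP's constrained query.
    Returns a dict of shortest distances from source.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if (u, v) in forbidden:
                continue  # constrained query: skip forbidden edges
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Forbidding an edge can only lengthen (or disconnect) a route, which is why the constrained answer never beats the unconstrained one.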
812

An Ordered Bag Semantics for SQL

Chinaei, Hamid R. January 2007
Semantic query optimization is an important issue in many database contexts, including information integration, view maintenance, and data warehousing, and it can substantially improve performance, especially in today's database systems that contain gigabytes of data. A crucial issue in semantic query optimization is query containment. Several papers have dealt with the problem of conjunctive query containment; in particular, some of the literature admits SQL-like query languages with aggregate operations such as sum and count. Moreover, since real SQL requires a richer semantics than set semantics, there has been work on bag semantics for SQL, essentially by introducing an interpreted column. One important technique for reasoning about query containment under bag semantics is to translate the queries into alternatives that use aggregate functions and assume set semantics. Furthermore, in SQL, ORDER BY is the operator by which results are sorted on certain attributes, and ordering is clearly an important issue in query optimization. There has been work supporting ordering based on the application domain, but a final step is required to provide a sufficiently rich semantics. In this work, we integrate set and bag semantics so that we can reason about real SQL queries. We demonstrate an ordered bag semantics for SQL using a relational algebra with aggregates. We define a set algebra with various expressions of interest, then define syntax and semantics for a bag algebra, and finally extend these definitions to ordered bags. This is done by adding a pair of additional interpreted columns to computed relations: the first is used in the standard fashion to capture duplicate tuples in query results, and the second adds an ordering priority to the output.
We show that the relational algebra with aggregates can compute these interpreted columns with sufficient flexibility to serve as a semantics for standard SQL queries, including those with ORDER BY and duplicate-preserving SELECT clauses. The reduction of a workable ordered bag semantics for SQL to the relational algebra with aggregates, as we have developed it, can enable existing query containment theory to be applied to practical query containment.
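The two interpreted columns described above, a duplicate count and an ordering priority, can be illustrated outside relational algebra. The sketch below is a hypothetical Python analogue, not the thesis's formalism: it encodes an ordered bag of tuples as a set of (tuple, count, priority) triples and decodes it back.

```python
from collections import Counter

def to_ordered_bag(rows):
    """Encode an ordered, duplicate-preserving list of tuples as a
    plain set with two interpreted columns: a duplicate count and an
    ordering priority (first appearance fixes the priority)."""
    counts = Counter(rows)
    seen = set()
    encoded = []
    priority = 0
    for row in rows:
        if row not in seen:
            seen.add(row)
            encoded.append((row, counts[row], priority))
            priority += 1
    return set(encoded)

def from_ordered_bag(encoded):
    """Decode back to an ordered sequence with duplicates restored,
    sorting on the priority column."""
    out = []
    for row, count, _ in sorted(encoded, key=lambda e: e[2]):
        out.extend([row] * count)
    return out
```

Note that a count-based encoding groups duplicates at the position of their first occurrence; that is a property of this sketch, chosen for brevity.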
813

Automatic Physical Design for XML Databases

Elghandour, Iman January 2010
Database systems employ physical structures such as indexes and materialized views to improve query performance, potentially by orders of magnitude. It is therefore important for a database administrator to choose the appropriate configuration of these physical structures (i.e., the appropriate physical design) for a given database. Deciding on the physical design of a database is not an easy task, and a considerable body of research exists on automatic physical design tools for relational databases. XML database systems are increasingly being used to manage highly structured XML data, and support for XML data is being added to commercial relational database systems. This raises the important question of how to choose the appropriate physical design (i.e., the appropriate set of physical structures) for an XML database. Relational automatic physical design tools are not adequate, so new research is needed in this area. In this thesis, we address the problem of automatic physical design for XML databases: the process of automatically selecting the best set of physical structures for a given database and a given query workload representing the client application's usage patterns of this data. We focus on recommending two types of physical structures: XML indexes and relational materialized views of XML data. For each of these structures, we study the recommendation process and present a design advisor that automatically recommends a configuration of physical structures given an XML database and a workload of XML queries.
The recommendation process is divided into four main phases: (1) enumerating candidate physical structures, (2) generalizing candidate structures in order to generate more candidates that are useful to queries that are not seen in the given workload but similar to the workload queries, (3) estimating the benefit of various candidate structures, and (4) selecting the best set of candidate structures for the given database and workload. We present a design advisor for recommending XML indexes, one for recommending materialized views, and an integrated design advisor that recommends both indexes and materialized views. A key characteristic of our advisors is that they are tightly coupled with the query optimizer of the database system, and rely on the optimizer for enumerating and evaluating physical designs whenever possible. This characteristic makes our techniques suitable for any database system that complies with a set of minimum requirements listed within the thesis. We have implemented the index, materialized view, and integrated advisors in a prototype version of IBM DB2 V9, which supports both relational and XML data, and we experimentally demonstrate the effectiveness of their recommendations using this implementation.
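Phase (4) above is a selection problem: pick the candidate structures with the greatest estimated benefit subject to a resource budget. The abstract does not specify the selection algorithm, so the sketch below uses a common greedy benefit-per-cost heuristic with hypothetical names; the thesis's actual selection phase may differ.

```python
def select_structures(candidates, budget):
    """Greedily select physical structures under a storage budget.

    candidates: list of (name, benefit, cost) triples, where benefit
    is the optimizer-estimated workload cost reduction and cost is the
    structure's disk footprint (assumed positive). Candidates are
    ranked by benefit per unit cost and taken while they fit.
    """
    chosen, used = [], 0
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    for name, benefit, cost in ranked:
        if used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen
```

Greedy selection ignores interactions between structures (e.g., two indexes that are redundant together), which is one reason real advisors re-cost chosen sets through the optimizer.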
814

New Ab Initio Methods of Small Genome Sequence Interpretation

Mills, Ryan Edward 07 April 2006
This thesis presents novel methods for analyzing short viral sequences and identifying biologically significant regions based on their statistical properties. The first section describes an ab initio method for identifying genes in viral genomes of varying type, shape, and size. The method uses statistical models of viral protein-coding and non-coding regions. We have created an interactive database, called VIOLIN, summarizing the results of applying this method to the viral genomes currently available in GenBank. VIOLIN provides access to the genes identified for each viral genome, allows further analysis of these gene sequences and the translated proteins, and graphically displays the distribution of protein-coding potential in a viral genome. The next two sections describe individual projects on two specific viral genomes analyzed with the new method. The first was devoted to the recently sequenced Herpes B virus from Rhesus macaque. This genome was initially thought to lack an ortholog of the gamma-34.5 gene, which encodes a neurovirulence factor necessary for the viability of its two close relatives, human herpes simplex viruses 1 and 2. The genome of Rhesus macaque Herpes B virus was annotated using the new gene-finding procedure, and an in-depth analysis was conducted to find a gamma-34.5 ortholog using a variety of similarity-search tools. A profound similarity in codon usage between B virus and its host was also identified, despite the large difference in their GC contents (74% and 51%, respectively). The last section describes the analysis of the Mouse Cytomegalovirus (MCMV) genome by a combination of methods: sequence segmentation, gene finding, and protein identification by mass spectrometry. The MCMV genome is a challenging subject for statistical sequence analysis due to the heterogeneity of its protein-coding regions.
Therefore, the MCMV genome was segmented based on its nucleotide composition, and each segment was analyzed independently. A thorough analysis identified previously unnoticed genes, incorrectly annotated genes, and potential sequence errors causing frameshifts. All findings were then corroborated by mass spectrometry.
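Two of the statistics mentioned above, GC content and codon usage, are straightforward to compute from raw sequence. The following is a minimal illustrative sketch, not the thesis's gene-finding models:

```python
from collections import Counter

def gc_content(seq):
    """Fraction of G and C nucleotides in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def codon_usage(coding_seq):
    """Relative frequency of each codon in a protein-coding sequence,
    read in frame (length assumed to be a multiple of three)."""
    coding_seq = coding_seq.upper()
    codons = [coding_seq[i:i + 3] for i in range(0, len(coding_seq) - 2, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}
```

Comparing `codon_usage` profiles between a virus and its host is one simple way to quantify the similarity the abstract reports, despite differing GC contents.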
815

A Hash Trie Filter Approach to Approximate String Match for Genomic Databases

Hsu, Min-tze 28 June 2005
Genomic sequence databases such as GenBank and EMBL are widely used by molecular biologists for homology searching. Because each genomic sequence is long and the databases keep growing, efficient search methods that support fast queries are increasingly important. DNA sequences are composed of four kinds of nucleotides and can be regarded as text strings; however, a genomic sequence has no concept of words, which makes searching genomic databases much more difficult. We consider Approximate String Matching (ASM) with k errors for genomic sequences, where errors may arise from insertion, deletion, and replacement operations. Filtration is a widely adopted technique to reduce the number of text areas (i.e., candidates) that require further verification. Most filter methods first split the database sequence into q-grams; a sequence of grams (subpatterns) that matches some part of the text is passed along as a candidate. Matching grams against parts of the text can be sped up by using an index structure for exact matching. Candidates are then examined by dynamic programming to obtain the final result. However, most previous ASM methods consider only the local order within each gram; only the (k + s) h-samples filter considers the global order of the sequence of matched grams. Although the (k + s) h-samples filter preserves this global order, it has two disadvantages. First, the number of ordered matched grams, s, required to form a candidate is always fixed at 2, which results in low precision. Second, the (k + s) h-samples filter builds the index for query patterns at query time. In this thesis, we propose a new approximate string matching method, the hash trie filter, for efficient searching in genomic databases.
We build a hash trie, during pre-computation, for the genomic sequence stored in the database. Although the gram size q is decided by the same formula used in the (k + s) h-samples filter, we propose a different way to find the ordered subpatterns in the text T. Moreover, we reduce the number of candidates by pruning unreasonable matched positions. Furthermore, unlike the (k + s) h-samples filter, which always uses s = 2 to decide whether s matched subpatterns constitute a candidate, our method decides s dynamically, increasing precision. Simulation results show that our hash trie filter outperforms the (k + s) h-samples filter in response time, number of verified candidates, and precision, across different query pattern lengths and error levels.
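The final verification step described above checks each surviving candidate window by dynamic programming against the error bound k. A minimal sketch of that check (standard Levenshtein DP with a row-by-row table; this is the generic verification, not the thesis's hash trie construction):

```python
def within_k_errors(pattern, text_window, k):
    """Dynamic-programming verification: does text_window match
    pattern with at most k edit errors (insertion, deletion,
    replacement)? Uses two rows of the edit-distance table."""
    m, n = len(pattern), len(text_window)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pattern[i - 1] == text_window[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # replacement (or match)
        prev = cur
    return prev[n] <= k
```

Because verification costs O(mn) per candidate, the filter's job is precisely to keep the number of windows reaching this step small.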
816

Agri-tourism: As a New Element of Rural Development

Demirbas Topcu, Elif 01 November 2003
MS, City and Regional Planning Department, Urban Design. Supervisor: Assoc. Prof. Dr. Baykan Günay. October 2007, 187 pages. This thesis was developed in light of new developments in the rural tourism sector worldwide. With the emergence of the term 'sustainability' in the 1980s, the concept of sustainable tourism found new areas of implementation. Increasing demand for tourism activities in rural areas has led governments to seek ways of benefiting from this tendency sustainably. Since the early 1990s, a new type of rural tourism called agri-tourism, a concept that integrates agriculture and tourism activities, has developed in the western world. Whether it is evaluated as a tourism element or an agricultural development element, it is a new element of country planning. There is now also a tendency toward agri-tourism at the local level in Turkey, through local initiatives. Although there is still no governmental regulation of agri-tourism activities, political and practical developments demonstrate that the sector should be treated as a planning element for Turkey. The main purpose of this study is to examine agri-tourism as a rural development element for enhancing rural tourism activities in Turkey. To this end, two examples from the EU, the Lublin and Tuscany regions, were examined to understand the dynamics of agri-tourism as a planning element, using an interpretative-comparative-textual method. The present condition in Turkey is then evaluated against the obtained data, with a SWOT analysis employed to analyze it. Finally, suggestions are presented for developing the agri-tourism sector in Turkey.
817

Analysis Of Corner Effects On In-situ Walls Supporting Deep Excavations: Comparison Of Plane Strain And Three Dimensional Analyses

Unlu, Guliz 01 December 2008
In this thesis, hypothetical cases of in-situ walls supported at one, two, and four levels, as well as cantilever walls, are analyzed using plane strain and 3D finite element programs. A parametric study is performed by varying the soil stiffness. Deflections, moments, anchor loads, and effective lateral earth pressures acting on the walls are examined to understand the corner effect. Plane strain results are compared against 3D analyses without corners to confirm that the two programs yield similar results. Moreover, two deep excavation case histories, i) the Ankara Çankaya trade center and residence and ii) the Ekol construction, are analyzed using calibrated models; calibration is based on inclinometer data. In the hypothetical models, corner effects on deflections are found to diminish beyond 20 m from the corner for excavations 8 m and 12 m deep. Corner effects on deflection decrease as the elastic modulus of the soil, or the stiffness of the system, increases. The moment diagram pattern changes along the excavation side in the cantilever case. The moment diagram obtained around a corner in the 3D analysis and the diagrams obtained from plane strain analyses that model the corner as a strut are quite similar. Anchor loads increase up to 10-15 m from the corner and become nearly constant beyond that distance. In the case-history analyses, a trial-and-error approach is adopted to fit the deformed shape of the piled wall from the 3D analysis to the deformations recorded by inclinometers, and these results are compared with plane strain analyses. The Ankara Çankaya project is solved in plane strain by modeling the corner as a strut; the results agree with the field monitoring data, indicating that corner effects can be simulated by modeling the perpendicular pile wall as a strut in plane strain analysis.
818

Geological Mapping Using Remote Sensing Technologies

Akkok, Inci 01 May 2009
In the area of interest, the Sivas Basin, Turkey, most units are sedimentary and show similar spectral characteristics, so the spectral bands of the ASTER sensor may not be sufficient on their own. It is therefore reasonable to consider other aspects, such as morphological variables, in addition to spectral classifiers. The main objective of this study is to test the usefulness of integrating spectral analysis with morphological information for geological mapping. Remotely sensed ASTER imagery is used to classify the lithological units, while a DEM is used to characterize the landforms associated with them. Maximum Likelihood Classification (MLC) is used to integrate data from the different sources; the methodology integrates the surface properties of the classified geological units in addition to their spectral reflectances. Seven classification trials were conducted: 1. MLC using only the nine ASTER bands; 2. MLC using ASTER bands and DEM; 3. MLC using ASTER bands and slope; 4. MLC using ASTER bands and plan curvature; 5. MLC using ASTER bands and profile curvature; 6. MLC using ASTER bands and drainage density; and 7. MLC using ASTER bands and all ancillary data. The results revealed that integrating topographic parameters improves classification where spectral information is not sufficient to discriminate between the classes of interest. Overall accuracy increased by more than 5% when all ancillary data were integrated, and accuracy improved by more than 10% for most classes. However, the results also indicate that the areal extent of the classified units constrains the applicability of the methodology.
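Maximum Likelihood Classification assigns each pixel to the class whose fitted distribution gives it the highest likelihood, and stacking ancillary layers (DEM, slope, curvature) simply adds feature bands. The sketch below assumes class-conditional Gaussians with diagonal covariance, a simplification of full MLC, which uses the complete covariance matrix; all names and training values are illustrative.

```python
import math

def train(classes):
    """Fit per-band Gaussian parameters (mean, variance) per class.
    classes: dict class_name -> list of feature vectors, e.g. ASTER
    band values optionally stacked with DEM or slope as extra bands."""
    params = {}
    for name, samples in classes.items():
        n, dims = len(samples), len(samples[0])
        means = [sum(s[d] for s in samples) / n for d in range(dims)]
        varis = [max(sum((s[d] - means[d]) ** 2 for s in samples) / n, 1e-9)
                 for d in range(dims)]  # floor avoids zero variance
        params[name] = (means, varis)
    return params

def classify(params, pixel):
    """Assign the pixel to the class with maximum log-likelihood."""
    def loglik(means, varis):
        return sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
                   for x, m, v in zip(pixel, means, varis))
    return max(params, key=lambda name: loglik(*params[name]))
```

Adding a topographic band changes the decision only where it separates classes the spectral bands confuse, which is the effect the study measures.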
819

Solution Processable Benzotriazole And Fluorene Containing Copolymers For Photovoltaic Applications

Kaya, Emine 01 September 2011
2-Dodecyl benzotriazole and 9,9-dioctylfluorene containing alternating copolymers poly((9,9-dioctylfluorene)-2,7-diyl-(2-dodecyl-benzo[1,2,3]triazole)) (P1), poly((9,9-dioctylfluorene)-2,7-diyl-(4,7-bis(thien-2-yl)-2-dodecyl-benzo[1,2,3]triazole)) (P2), and poly((9,9-dioctylfluorene)-2,7-diyl-(4,7-bis(3-hexylthien-5-yl)-2-dodecyl-benzo[1,2,3]triazole)) (P3) were synthesized via Suzuki polycondensation. The monomers and copolymers were characterized by Nuclear Magnetic Resonance spectroscopy (1H-NMR, 13C-NMR). Optical and electronic properties of the resulting alternating copolymers were investigated by Cyclic Voltammetry (CV), Ultraviolet-Visible Spectroscopy, and spectroelectrochemistry. All three polymers showed both p- and n-doping behavior and multicolored electrochromic states. Kinetic studies were performed to determine switching times and percent transmittance changes. Thermal properties of the polymers were investigated via Thermogravimetric Analysis (TGA) and Differential Scanning Calorimetry (DSC). Owing to their convenient HOMO and LUMO levels, band gaps, strong absorption in the visible region, and thermal stability, the polymers were tested in Organic Solar Cell (OSC) device applications. Preliminary investigation indicated that the polymers had promising power conversion efficiencies.
820

Electrochemical And Optical Properties Of Solution Processable Benzotriazole And Benzothiadiazole Containing Copolymers

Karakus, Melike 01 September 2011
2-Dodecyl benzotriazole (BTz) and benzothiadiazole (BTd) containing copolymers poly(4-(2-dodecyl-2H-benzo[d][1,2,3]triazol-4-yl)benzo[c][1,2,5]thiadiazole) (P1), poly(4-(5-(2-dodecyl-7-(thiophen-2-yl)-2H-benzo[d][1,2,3]triazol-4-yl)thiophen-2-yl)benzo[c][1,2,5]thiadiazole) (P2), and poly(4-(5-(2-dodecyl-7-(4-hexylthiophen-2-yl)-2H-benzo[d][1,2,3]triazol-4-yl)-3-hexylthiophen-2-yl)benzo[c][1,2,5]thiadiazole) (P3) were synthesized via Suzuki polymerization. The electrochemical and optical properties of the polymers were analyzed. Solar cells were fabricated, and current density-voltage (J-V) and incident-photon-to-charge-carrier efficiency (IPCE) measurements were performed to characterize them.
