
Expectations and Realities of Online Information Databases: A Rhetorical Analysis of WebMD

Lurie, Christine A 06 June 2013 (has links)
The internet is fundamentally a vast repository of data; consequently, most users go online in order to find information. Innovations in technology continue to make both the production and consumption of this information easy, creating high expectations of instantaneous answers via immediate search results. While a plethora of information is not difficult to find, knowing what to do with that information is often problematic: turning information into knowledge requires the ability to contextualize it and engage with it critically. WebMD is a highly recognizable health information database whose users often face information overload. This thesis examines the information that the WebMD website provides, as well as its usability. The goal is to investigate, first, the importance of context for knowledge-forming when users perform online information research and, second, the critical literacy required to use such information.

Relations entre bases de données et ontologies dans le cadre du web des données [Relations between databases and ontologies in the context of the web of data]

Curé, Olivier 11 October 2010 (has links) (PDF)
This manuscript presents my interest in the design of the methods and algorithms needed to build advanced applications for the Semantic Web. This extension of the current web aims to enable the integration and sharing of data across organizations and applications. A direct consequence of the success of this approach would be to regard the web as a global database containing the data stored on every connected machine. This view is well expressed on the W3C's Semantic Web activity website, which states that the Semantic Web is a web of data. This web of data will make it possible to pose structured queries over all connected data sets and to retrieve relevant results from diverse and heterogeneous sources. An essential question raised by this heterogeneity concerns the notion of semantics, which in the Semantic Web context is generally handled with ontologies and the associated mediation operations. My research is anchored in these themes, and this manuscript presents some of my results as well as some of the applications I have designed and implemented.
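To make the "web of data" idea concrete, the following is a minimal sketch (our illustration, not part of the manuscript) of a structured query spanning two connected RDF data sets. It assumes the Python rdflib library; the URLs and the worksOn property are hypothetical.

```python
from rdflib import Graph

# Load two independently published RDF data sets into one graph,
# treating the connected sources as a single queryable "web of data".
g = Graph()
g.parse("http://example.org/people.ttl", format="turtle")    # hypothetical source
g.parse("http://example.org/projects.ttl", format="turtle")  # hypothetical source

# One structured (SPARQL) query retrieving answers that span both sources.
results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name ?project WHERE {
        ?person foaf:name ?name .
        ?person <http://example.org/worksOn> ?project .
    }
""")
for name, project in results:
    print(name, project)
```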

CGU: A common graph utility for DL Reasoning and Conjunctive Query Optimization

Palacios Villa, Jesus Alejandro January 2005 (has links)
We consider the overlap between reasoning involved in conjunctive query optimization (CQO) and in tableaux-based approaches to reasoning about subsumption in description logics (DLs). In both cases, an underlying graph is created, searched and modified. This process is determined by a given query and database schema in the first case and by a given description and terminology in the second. The opportunities for overlap derive from an abundance of reductions of various schema languages to terminologies for common DL dialects, and from the fact that descriptions can in turn be viewed as queries that compute a single column.

Our main contributions are as follows. We present the design and implementation of a common graph utility that integrates the requirements for both CQO and DL reasoning. We then verify this model by also presenting the design and implementation for two drivers, one that implements a query optimizer for a conjunctive query language extended with descriptions, and one that implements a complete DL reasoner for a feature-based DL dialect.
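The shared-graph idea can be illustrated with a minimal sketch (names and structure are our own, not the thesis's actual CGU interface): one node type can serve both as a variable node in a conjunctive query graph and as an individual node in a DL tableau.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    concepts: set = field(default_factory=set)  # DL concept constraints on this node
    edges: dict = field(default_factory=dict)   # role/predicate name -> successor Node

def add_edge(src: Node, role: str, dst: Node) -> None:
    # In a feature-based DL dialect, roles are functional: one successor per role.
    src.edges[role] = dst

# Read as a query graph: variable x, constrained to Paper, with an author edge to y.
# Read as a description: "a Paper with an author" -- a query computing one column (x).
x, y = Node("x"), Node("y")
x.concepts.add("Paper")
add_edge(x, "author", y)
```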

Static Conflict Analysis of Transaction Programs

Zhang, Connie January 2000 (has links)
Transaction programs consist of read and write operations issued against the database. In a shared database system, one transaction program conflicts with another if it reads or writes data that the other has written. This thesis presents a semi-automatic technique for pairwise static conflict analysis of embedded transaction programs. The analysis predicts whether a given pair of programs will conflict when executed against the database. There are several potential applications of this technique, the most obvious being transaction concurrency control in systems where it is not necessary to support arbitrary, dynamic queries and updates. By analyzing transactions in such systems before the transactions are run, it is possible to reduce or eliminate the need for locking or other dynamic concurrency control schemes.
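The conflict test itself is simple to state; the following is a minimal sketch of the read/write conflict rule (our illustration only — the thesis's static analysis works on embedded program text, not on already-known read/write sets).

```python
def conflicts(t1: dict, t2: dict) -> bool:
    """Two transactions conflict if one writes a data item the other
    reads or writes. Each argument is a dict with 'reads' and 'writes'
    sets of data items."""
    return bool(
        t1["writes"] & (t2["reads"] | t2["writes"])
        or t2["writes"] & t1["reads"]
    )

transfer = {"reads": {"acct_a"}, "writes": {"acct_a", "acct_b"}}
audit    = {"reads": {"acct_b"}, "writes": set()}
print(conflicts(transfer, audit))  # True: transfer writes acct_b, which audit reads
```

Pairs that the analysis can prove conflict-free may then run concurrently without locking.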

A new formal and analytical process to product modeling (PPM) method and its application to the precast concrete industry

Lee, Ghang 08 November 2004 (has links)
The current standard product (data) modeling process relies on the experience and subjectivity of data modelers, who use their experience to eliminate redundancies and identify omissions. As a result, product modeling becomes a social activity that involves iterative committee review. This study aims to develop a new, formal method for deriving product models from data collected in process models of companies within an industry sector. The theoretical goals of this study are to provide a scientific foundation bridging the requirements collection phase and the logical modeling phase of product modeling, and to formalize the derivation and normalization of a product model from the processes it supports. To achieve these goals, a new, formal method, Georgia Tech Process to Product Modeling (GTPPM), has been proposed. GTPPM consists of two modules. The first, the Requirements Collection and Modeling (RCM) module, provides semantics and a mechanism to define a process model, the information items used by each activity, and the information flow between activities; logic to dynamically check the consistency of information flow within a process has also been developed (a simplified sketch follows below). The second, the Logical Product Modeling (LPM) module, integrates, decomposes, and normalizes information constructs collected from a process model into a preliminary product model. Nine design patterns are defined to resolve conflicts between information constructs (ICs) and to normalize the resultant model. These two modules have been implemented as a Microsoft Visio™ add-on; the tool has been registered and is also called GTPPM™. The method has been tested and evaluated in the precast concrete sector of the construction industry through several GTPPM modeling efforts. By using GTPPM, a complete set of information items required for product modeling for a medium-sized or large industry can be collected without generalizing each company's unique process into one unified high-level model. The use of GTPPM is not limited to product modeling, however; it can also be deployed in several other areas, including workflow management system or MIS (Management Information System) development, software specification development, and business process re-engineering.
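As a rough illustration of that information-flow consistency check (a minimal sketch with assumed semantics — the actual RCM logic is richer), every item an activity consumes must be produced by some earlier activity in the process model:

```python
def check_flow(activities):
    """activities: ordered list of (name, consumes, produces) triples,
    where consumes and produces are sets of information items."""
    available, errors = set(), []
    for name, consumes, produces in activities:
        missing = consumes - available
        if missing:
            errors.append((name, missing))  # consumed but never produced upstream
        available |= produces
    return errors

# Hypothetical precast-concrete process fragment.
process = [
    ("design",    set(),                            {"piece drawings"}),
    ("detailing", {"piece drawings"},               {"rebar schedule"}),
    ("casting",   {"rebar schedule", "mix design"}, set()),
]
print(check_flow(process))  # [('casting', {'mix design'})]
```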

The Multiple Retailer Inventory Routing Problem With Backorders

Alisan, Onur 01 July 2008 (has links) (PDF)
In this study we consider an inventory routing problem in which a supplier distributes a single product to multiple retailers over a finite planning horizon. The retailers must satisfy the deterministic, dynamic demands of end customers over the horizon, but may backorder demand when doing so reduces supply chain costs. In each period the supplier decides which retailers to visit and how much product to deliver to each, using a fleet of vehicles. The supplier's decision problems are thus when, to whom, and how much to deliver, and in which order to visit the retailers, while minimizing system-wide costs. We propose a mixed integer programming model and a Lagrangian relaxation based solution approach in which both upper and lower bounds are computed. We test our solution approach on instances from the literature and report computational results.
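For orientation, the inventory-balance core of such a model can be sketched as follows (a simplified fragment in our own notation, not the thesis's full formulation; the routing variables and vehicle constraints are omitted). Backorders are handled by splitting net inventory at retailer i in period t into an on-hand part and a backordered part:

```latex
\begin{align*}
\min\;& \sum_{i,t} \bigl( h_i I^{+}_{it} + b_i I^{-}_{it} \bigr) \;+\; \text{(routing costs)} \\
\text{s.t.}\;& I^{+}_{it} - I^{-}_{it} \;=\; I^{+}_{i,t-1} - I^{-}_{i,t-1} + q_{it} - d_{it} && \forall i,t \\
& q_{it} \ge 0, \qquad I^{+}_{it},\, I^{-}_{it} \ge 0 && \forall i,t
\end{align*}
```

Here h_i and b_i are unit holding and backorder costs, q_it is the quantity delivered, and d_it the end-customer demand. A Lagrangian approach typically dualizes the constraints linking deliveries to vehicle routes, yielding a lower bound, while feasible solutions recovered from the relaxation give the upper bound.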

An ACGT-Words Tree for Efficient Data Access in Genomic Databases

Hu, Jen-Wei 25 July 2003 (has links)
Genomic sequence databases, such as GenBank and EMBL, are widely used by molecular biologists for homology searching. As these databases grow, indexing the sequences for fast queries becomes increasingly important. DNA sequences are composed of four bases and can be regarded as text strings. As in conventional databases, several approaches use indexes to provide efficient access to the data. The inverted-list indexing approach uses hashing to store the database sequences; however, a perfect hashing function is difficult to construct, and collisions in a hash table may occur frequently. Other data structures, such as the suffix tree, the suffix array, and the suffix binary search tree, index the genomic sequences differently: they store all suffixes of the sequences and do not break the sequences into words. The advantage of the suffix tree is its simplicity, but its storage requirement is very large. The suffix array and the suffix binary search tree require less storage than the suffix tree, but because they rely on binary search to find the query sequence, searching is slow. Another data structure, the word suffix tree, uses the concept of words and stores partial suffixes to index the DNA sequence; although it reduces storage space, it can lose information during the search process. In this thesis, we propose a new index structure, the ACGT-Words tree, to efficiently support query processing in genomic databases. We define a concept of words that differs from the word definition in the word suffix tree, and we separate the DNA sequences stored in the database and the query sequence into distinct words. Our approach does not store all suffixes of the database sequences, so it needs less space than the suffix tree. We also propose an efficient search algorithm for sequence matching based on the ACGT-Words tree, which takes less time than the suffix array approach, and our approach avoids the missing cases of the word suffix tree. Then, based on the ACGT-Words tree, we propose one improved operation for data insertion and two improved operations for searching. In the improved insertion operation, we sort and preprocess the generated ACGT-Words before constructing the tree structure. The two improved search operations provide better performance when the query sequence satisfies certain conditions. Simulation results show that the ACGT-Words tree outperforms the suffix tree and the suffix array in terms of storage and processing time, respectively. Moreover, the improved operations require less time to construct or search than the original processes or the suffix array.
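The word-based indexing idea can be illustrated in its simplest form with a plain fixed-length word dictionary (our own toy index — far simpler than the ACGT-Words tree, which is not reproduced here):

```python
from collections import defaultdict

def build_word_index(seq, k=4):
    """Map every length-k word of the DNA string to its start positions."""
    index = defaultdict(list)
    for i in range(len(seq) - k + 1):
        index[seq[i:i + k]].append(i)
    return index

def find(seq, query, index, k=4):
    """Candidate positions come from the query's first word; each is verified."""
    hits = []
    for start in index.get(query[:k], []):
        if seq[start:start + len(query)] == query:
            hits.append(start)
    return hits

seq = "ACGTACGGACGT"
idx = build_word_index(seq)
print(find(seq, "ACGGA", idx))  # [4]
```

The thesis's structure goes further, organizing distinct words in a tree so that less space is needed than storing every suffix, while avoiding the missed matches of the word suffix tree.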

A Unique-Bit-Pattern-Based Indexing Strategy for Image Rotation and Reflection in Image Databases

Yeh, Wei-horng 16 June 2008 (has links)
A symbolic image database system is a system in which a large amount of image data and related information are represented by both symbolic images and physical images. Spatial relationships are an important issue for similarity-based retrieval in many image database applications. How spatial relationships among the components of a symbolic image are perceived is an important criterion for matching the symbolic image of a scene object against the one stored as a model in the symbolic image database. With the popularity of digital cameras and related image processing software, images are often rotated or flipped; that is, they are transformed in rotation orientation or reflection direction. A robust spatial similarity framework should be able to recognize image variants such as translation, scaling, rotation, and arbitrary variants. Current retrieval-by-spatial-similarity algorithms can be classified into symbolic projection methods, geometric methods, and graph-matching methods. Symbolic projection preserves useful spatial information about objects, such as width, height, and location; however, many iconic indexing strategies based on symbolic projection are sensitive to rotation or reflection, and may therefore miss qualified images when the query is issued in an orientation different from that of the database images. To solve this problem, researchers have derived the rules governing how spatial relationships change under image transformation and proposed a function mapping each spatial relationship to its transformed counterpart. However, this mapping consists of several conditional statements and is time-consuming. Thus, in this dissertation, we first classify the mapping into three cases and carefully assign a 16-bit unique bit pattern to each spatial relationship. Based on this assignment, we can perform the mapping through our proposed bit operation, intra-exchange, a CPU operation requiring only O(1) time. Moreover, we propose an efficient iconic indexing strategy, the Unique Bit Pattern matrix strategy (UBP matrix strategy), to record the spatial information. In this way, when performing similarity retrieval, we do not need to reconstruct the original image from the UBP matrix to obtain the indexes of the rotated and flipped images; instead, we can derive the index of a rotated or flipped image directly from the index of the original through bit operations and matrix manipulation. Our proposed strategy can thus perform similarity retrieval without missing qualified database images. In our performance study, we first analyze the time complexity of the similarity retrieval process of the proposed strategy and then present its efficiency according to simulation results. We show that our strategy outperforms the mapping-based strategies for different numbers of objects in an image, with improvements between 13.64% and 53.23%.
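The flavor of deriving a transformed relation with a single bit operation can be shown with a deliberately simplified encoding (an 8-bit compass scheme of our own — not the thesis's 16-bit unique bit patterns or its intra-exchange operation):

```python
# One bit per compass direction, ordered clockwise.
DIRS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def encode(direction):
    return 1 << DIRS.index(direction)

def rotate_cw_90(pattern):
    """Rotating the image 90 degrees clockwise shifts every direction two
    places along the clockwise order: a 2-bit circular rotation, O(1)."""
    return ((pattern << 2) | (pattern >> 6)) & 0xFF

p = encode("N")  # object A lies north of object B
q = rotate_cw_90(p)
print(DIRS[q.bit_length() - 1])  # 'E': after rotation, A lies east of B
```

The index of a rotated image can thus be derived directly from the index of the original, with no need to reconstruct and re-index the image itself.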

An Efficient JMC Algorithm for the Rhythm Query in Music Databases

Chou, Han-ping 03 July 2009 (has links)
In recent years, music has become ever more available as technology has evolved, and music collections have grown large and diverse. This explosive growth has generated an urgent need for techniques and tools that can intelligently and automatically transform music into useful information and classify it into the correct genre precisely. The rhythm query is a fundamental technique in music genre classification and content-based retrieval, which are crucial to multimedia applications. Recently, Christodoulakis et al. proposed the CIRS algorithm, which classifies music duration sequences according to rhythms. In the CIRS algorithm, a rhythm is represented by a sequence of "Quick" (Q) and "Slow" (S) symbols corresponding to the (relative) duration of notes, such that S = 2Q. To classify music by rhythm, the CIRS algorithm locates the MaxCover, the maximum-length substring of the music duration sequence that can be covered (overlapping or consecutively) by the rhythm query. During the matching step, one S symbol in the rhythm query can be matched by two consecutive Q symbols in the duration sequence, but two consecutive Q symbols in the rhythm query cannot be combined to match one S symbol in the duration sequence; this asymmetry makes the algorithm difficult to design. The CIRS algorithm consists of four steps and repeats Steps 2, 3, and 4 to obtain a local MaxCover for each distinct duration value of the music duration sequence; finally, the global MaxCover is computed. We observe that this generates unnecessary results repeatedly across Steps 2, 3, and 4. Therefore, in this thesis, to avoid reprocessing Steps 2, 3, and 4 for each distinct duration value, we propose the JMC (Jumping-by-MaxCover) algorithm, which provides a pruning strategy to find the MaxCover incrementally, reducing the processing cost. In particular, we can exploit the relationship between the MaxCover MX found for a distinct duration value X and use the duration sequences cut by X to avoid unnecessary work for another distinct duration value Y, where Y < X. To exploit this property, we propose a cut-sequence structure that is updated incrementally to compute the final global MaxCover. In this way, we can skip many steps and still find the same answer as the CIRS algorithm. Our simulation results show that the running time of the JMC algorithm can be shorter than that of the CIRS algorithm; when the largest distinct duration value is uniformly distributed in the duration sequence, the running time is reduced dramatically, which is the best case of our proposed JMC algorithm.
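The Q/S matching rule can be made concrete with a minimal sketch (our illustration of the cover definition, restricted to consecutive covering; neither CIRS nor JMC is reproduced here). An S in the rhythm query may match one S or two consecutive Qs in the duration sequence, but not vice versa:

```python
def match_once(seq, i, pattern):
    """End positions reachable by matching one copy of `pattern` at seq[i].
    A 'Q' in the pattern matches one 'Q'; an 'S' matches one 'S' or two
    consecutive 'Q's (since S = 2Q). Assumes a non-empty pattern."""
    positions = {i}
    for sym in pattern:
        nxt = set()
        for p in positions:
            if p < len(seq) and seq[p] == sym:
                nxt.add(p + 1)                      # exact symbol match
            if sym == "S" and seq[p:p + 2] == "QQ":
                nxt.add(p + 2)                      # S matched as QQ
        positions = nxt
    return positions

def max_cover(seq, pattern):
    """Length of the longest substring of `seq` coverable by consecutive
    repetitions of `pattern` (a simplified MaxCover)."""
    best = 0
    for start in range(len(seq)):
        frontier, reach = {start}, start
        while frontier:
            nxt = set()
            for p in frontier:
                nxt |= match_once(seq, p, pattern)
            frontier = nxt
            if frontier:
                reach = max(reach, max(frontier))
        best = max(best, reach - start)
    return best

print(max_cover("QSQQQS", "QS"))  # 5: covers 'QSQQQ' as Q S, then Q (QQ-as-S)
```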
