  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

An empirical study of the use of conceptual models for mutation testing of database application programs

Wu, Yongjian, January 2006 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2007. / Title proper from title frame. Also available in printed format.
242

Methods for Failure Analysis Data within Databases and Aids

Chadda, Tommy, Berg, Johannes January 2009 (has links)
In an advanced avionics system, the demand for high reliability and availability is of great importance, and Testability Analysis is a method of examining this. In the RWE Tornado GE project at Saab Avitronics, Built-In Test (BIT) is used to detect and isolate possible failures in the equipment in question. BIT functionality itself, however, needs to be verified, and some of the verification tests are requested by the customer EADS to be simulated and demonstrated. The objective of this thesis is to understand the Testability Analysis process and to develop a tool that assists the preparation of the Testability Demonstration (T-Demo) and the recording of its results.
243

Digitalt arkiv / Digital Archive

Abrahamsson, Emil January 2008 (has links)
The aim of this work has been to simplify the handling of, and searching among, drawings and lists in a drawing archive. The solution is a web-based system built on a database in which all information is stored and searchable. The result is a system that gives fast and efficient access to all drawings and lists from anywhere on the company's network.
244

Optimering av SQL-frågor för analys i QlikView / Optimizing SQL-statements for analysis in QlikView

Forsberg, Mikael, Morell, Mikael January 2006 (has links)
The main purpose of this thesis, carried out on behalf of our client ÅF, is to analyse whether stored information can be retrieved online from one of the company's customers' databases within acceptable time and presented graphically as statistical information. Server and network load must be taken into account. During the analysis we optimized SQL queries, which reduced the total server-side execution time by 24 seconds. The database-independent application QlikView, with its built-in functions, proved to be a very useful reporting tool. By developing a graphical interface we clarified various query and answer alternatives, which serve as the basis for compiling production data.
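The abstract above does not say which rewrites produced the 24-second saving; a common optimization of this kind is pushing aggregation into the database server instead of fetching raw rows for client-side summarization. The sketch below, using an in-memory SQLite table with invented production data, illustrates the idea; it is not the thesis's actual workload.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE production (machine TEXT, produced INTEGER, scrapped INTEGER);
INSERT INTO production VALUES
  ('M1', 100, 4), ('M1', 120, 6), ('M2', 90, 2), ('M2', 110, 8);
""")

# Naive approach: fetch every row and aggregate client-side
# (high transfer volume and client CPU cost).
naive = {}
for machine, produced, scrapped in con.execute(
        "SELECT machine, produced, scrapped FROM production"):
    tot = naive.setdefault(machine, [0, 0])
    tot[0] += produced
    tot[1] += scrapped

# Optimized: one aggregate query; the server returns only summary rows.
optimized = {
    machine: [produced, scrapped]
    for machine, produced, scrapped in con.execute(
        "SELECT machine, SUM(produced), SUM(scrapped) "
        "FROM production GROUP BY machine"
    )
}

assert naive == optimized  # same statistics, far fewer rows transferred
```

The same statistics arrive either way; the aggregate query simply moves the work to where the data lives.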
245

Querying Mediated Web Services

Sabesan, Manivasakan January 2007 (has links)
Web services provide a framework for data interchange between applications by incorporating standards such as XML Schema, WSDL, SOAP, and HTTP. They define operations that can be invoked over a network, described publicly in a WSDL document together with the data types of their arguments and results. Searching data accessible via web services is essential in many applications, but web services provide no general query language or view capabilities; currently, applications that access such data must be developed in a regular programming language such as Java or C#. This thesis provides an approach that simplifies querying web service data and proposes efficient processing of database queries over views of wrapped web services. To show the effectiveness of the approach, a prototype, the web Service MEDiator system (WSMED), was developed. WSMED provides general view and query capabilities over data accessible through web services by automatically extracting basic meta-data from WSDL descriptions. Based on the imported meta-data, the user can define views that extract data from the results of calls to web service operations, and query them using SQL. A given view can access many different web service operations in different ways depending on which view attributes are known, and can be specified in terms of several declarative queries to be applied by the query processor. In addition, the user can enrich the meta-data with key constraints, enabling efficient query execution over the views through automatic query transformations. We evaluated the effectiveness of the approach over multilevel views of existing web services and show that the key constraint enrichments substantially improve query performance. / SIDA
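As a rough illustration of the idea of SQL views over web service results (not WSMED's actual implementation), the sketch below stubs out a hypothetical WSDL-described operation, materializes its results into a SQLite table acting as the view, and queries it with plain SQL. All operation names and data are invented.

```python
import sqlite3

# Hypothetical stand-in for a WSDL-described operation; a real mediator
# would invoke it over SOAP/HTTP based on extracted meta-data.
def get_places_by_state(state):
    data = {
        "NY": [("Albany", 97856), ("Buffalo", 278349)],
        "CA": [("Sacramento", 524943)],
    }
    return data.get(state, [])

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE place_view (state TEXT, city TEXT, population INTEGER)")

# Populate the "view" from web service calls for the bound attribute values.
for state in ("NY", "CA"):
    for city, population in get_places_by_state(state):
        con.execute("INSERT INTO place_view VALUES (?, ?, ?)",
                    (state, city, population))

# The user queries the view with ordinary SQL, unaware of the service calls.
big = [r[0] for r in con.execute(
    "SELECT city FROM place_view WHERE population > 200000 ORDER BY city")]
```

The point is the separation: which operations get called, and with which bindings, is decided behind the view, while the user only writes SQL.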
246

A Visual Approach to Automated Text Mining and Knowledge Discovery

Puretskiy, Andrey A. 01 December 2010 (has links)
The focus of this dissertation has been on improving the non-negative tensor factorization technique for text mining. The improvements have been made in both the pre-processing and post-processing stages, with the goal of making the non-negative tensor factorization algorithm accessible to the casual user. The improved implementation allows the user to construct and modify the contents of the tensor, experiment with relative term weights and trust measures, and experiment with the total number of algorithm output features. Output feature production is closely integrated with a visual post-processing tool, FutureLens, which allows the user to perform in-depth analysis and has great potential for discovering interesting and novel patterns within large collections of textual data. This work necessitated a number of significant modifications and additions to FutureLens in order to facilitate its integration into the analysis environment.
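As a sketch of the underlying technique, the following pure-Python code implements multiplicative-update non-negative factorization for the matrix (two-way) special case of the tensor factorization described above. The tiny term-document matrix and the fixed initial factors are invented so the run is deterministic.

```python
# Non-negative matrix factorization V ≈ W·H via multiplicative updates,
# the two-way special case of non-negative tensor factorization.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, W, H, steps=200, eps=1e-9):
    for _ in range(steps):
        # H <- H * (W^T V) / (W^T W H)
        WtV = matmul(transpose(W), V)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps)
              for j in range(len(H[0]))] for i in range(len(H))]
        # W <- W * (V H^T) / (W H H^T)
        VHt = matmul(V, transpose(H))
        WHHt = matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps)
              for j in range(len(W[0]))] for i in range(len(W))]
    return W, H

# Invented 3x3 term-document count matrix, factored into rank 2.
V = [[5.0, 3.0, 0.0], [4.0, 0.0, 0.0], [1.0, 1.0, 5.0]]
W0 = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]
H0 = [[0.6, 0.4, 0.2], [0.2, 0.4, 0.6]]
W, H = nmf(V, W0, H0)

approx = matmul(W, H)
error = sum((V[i][j] - approx[i][j]) ** 2 for i in range(3) for j in range(3))
```

The multiplicative updates keep both factors non-negative throughout, which is what makes the extracted features interpretable as additive topic parts.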
247

A toolkit for managing XML data with a relational database management system

Ramani, Ramasubramanian, January 2001 (has links) (PDF)
Thesis (M.S.)--University of Florida, 2001. / Title from first page of PDF file. Document formatted into pages; contains x, 54 p.; also contains graphics. Vita. Includes bibliographical references (p. 50-53).
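A minimal sketch of the core idea behind such toolkits, "shredding" XML into relational tables so that ordinary SQL replaces XML-specific query languages, using Python's standard library. The element names and schema are invented for illustration, not taken from the thesis.

```python
import sqlite3
import xml.etree.ElementTree as ET

xml_doc = """
<library>
  <book isbn="111"><title>Databases</title><year>2001</year></book>
  <book isbn="222"><title>XML in Practice</title><year>1999</year></book>
</library>
"""

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE book (isbn TEXT PRIMARY KEY, title TEXT, year INTEGER)")

# Shred each <book> element into one relational row.
for book in ET.fromstring(xml_doc).iter("book"):
    con.execute(
        "INSERT INTO book VALUES (?, ?, ?)",
        (book.get("isbn"), book.findtext("title"), int(book.findtext("year"))),
    )

# Once shredded, the XML data is queryable with plain SQL.
titles = [t for (t,) in con.execute("SELECT title FROM book WHERE year >= 2000")]
```

Real toolkits also handle nested and repeated elements (typically with extra tables and foreign keys), but the mapping of elements and attributes to rows and columns is the same in spirit.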
248

Bridging Decision Applications and Multidimensional Databases

Nargesian, Fatemeh 04 May 2011 (has links)
Data warehouses were envisioned to facilitate analytical reporting and data visualization by providing a model for the flow of data from operational databases to decision support environments. Decision support environments present a multidimensional conceptual view of the underlying data warehouse, which is usually stored in a relational DBMS. Typically, there is an impedance mismatch between this conceptual view, which is also shared by all decision support applications accessing the data warehouse, and the physical model of the data stored in the relational DBMS. This thesis presents a mapping compilation algorithm in the context of the Conceptual Integration Model (CIM) [67] framework. In the CIM framework, the relationships between the conceptual model and the physical model are specified by a set of attribute-to-attribute correspondences. The algorithm compiles these correspondences into a set of mappings that associate each construct in the conceptual model with a query on the physical model. Moreover, the homogeneity and summarizability of data in conceptual models are key to accurate query answering, a necessity in decision-making environments; a data-driven approach to refactoring relational models into summarizable schemas and instances is proposed as a solution to this issue. We outline the algorithms and challenges in bridging multidimensional conceptual models and the physical model of data warehouses, and discuss experimental results.
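As a hypothetical sketch of the mapping compilation step (all table, column, and construct names are invented, and this is far simpler than the CIM algorithm itself), attribute-to-attribute correspondences can be compiled into one SQL query per conceptual construct:

```python
# Correspondences: conceptual attribute -> (physical table, physical column).
correspondences = {
    "Store.city":   ("dim_store", "city"),
    "Store.region": ("dim_store", "region"),
    "Sales.amount": ("fact_sales", "amount"),
}

def compile_construct(construct, attrs, corr):
    """Associate a conceptual construct with a query on the physical model."""
    cols, tables = [], []
    for attr in attrs:
        table, column = corr[f"{construct}.{attr}"]
        cols.append(f"{table}.{column} AS {attr}")
        if table not in tables:
            tables.append(table)
    return f"SELECT {', '.join(cols)} FROM {', '.join(tables)}"

query = compile_construct("Store", ["city", "region"], correspondences)
```

A real compiler must additionally derive join conditions when a construct's attributes span several tables; here the point is only that each conceptual construct ends up associated with a concrete query over the physical schema.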
250

A Framework for Records Management in Relational Database Systems

Ataullah, Ahmed Ayaz 02 May 2008 (has links)
The problem of records retention is often viewed as simply deleting records when they have outlived their purpose. In the world of relational databases, however, there is no standardized notion of a business record and its retention obligations. Unlike physical documents such as forms and reports, information in databases is organized so that one item of data may be part of several legal records and consequently subject to several (and possibly conflicting) retention policies. This thesis proposes a framework for records retention in relational database systems. It presents a mechanism through which users can specify a broad range of protective and destructive data retention policies for relational records. Compared to naïve solutions for enforcing records management policies, our framework is not only significantly more efficient but also addresses several unanswered questions about how policies can be mapped from legal requirements to actions on relational data. The novelty of our approach is that we define a record in a relational database as an arbitrary logical view, effectively allowing us to reduce several challenges in enforcing data retention policies to well-studied problems in database theory. We argue that our expression-based approach to tracking records management obligations is not only easier for records managers to use but also far more space- and time-efficient than the traditional metadata approaches discussed in the literature. The thesis concludes with a thorough examination of the limitations of the proposed framework and suggestions for future research in the area of records management for relational database management systems.
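The idea of a record as an arbitrary logical view, with a protective policy guarding it, can be sketched in SQLite; the thesis's framework is not SQLite-specific, and the schema and policy below are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE invoices (id INTEGER PRIMARY KEY, customer TEXT, year INTEGER);
INSERT INTO invoices VALUES (1, 'acme', 2001), (2, 'acme', 2007);

-- A "record" defined as a logical view: invoices still under retention.
CREATE VIEW retained_invoices AS
  SELECT * FROM invoices WHERE year >= 2005;

-- Protective policy: block deletion of any row that is part of the record.
CREATE TRIGGER protect_retained BEFORE DELETE ON invoices
WHEN OLD.id IN (SELECT id FROM retained_invoices)
BEGIN
  SELECT RAISE(ABORT, 'row is covered by a retention policy');
END;
""")

con.execute("DELETE FROM invoices WHERE id = 1")      # expired: allowed
try:
    con.execute("DELETE FROM invoices WHERE id = 2")  # retained: blocked
    blocked = False
except sqlite3.IntegrityError:
    blocked = True

remaining = con.execute("SELECT COUNT(*) FROM invoices").fetchone()[0]
```

Because the record is just a view expression, checking whether a row is protected reduces to a membership test against that view, which is exactly the kind of well-studied query problem the abstract alludes to.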
