  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
881

Graph query autocompletion

Yi, Peipei 31 August 2018 (has links)
The prevalence of graph-structured data in modern real-world applications has led to a rejuvenation of research on graph data management and analytics. Several database query languages have been proposed for textually querying graph databases. Unfortunately, formulating a graph query in any of these languages often demands considerable cognitive effort and requires "programming" skill at least comparable to programming in SQL. Yet, across a wide spectrum of graph applications, users need to query graph data but are not proficient query writers. Hence, it is important to devise intuitive techniques that alleviate the burden of query formulation and thus increase the usability of graph databases. In this dissertation, we take the first step in studying the graph query autocompletion problem. We provide techniques that take a user's graph query as input and generate top-k query suggestions as output, to help alleviate the verbose and error-prone graph query formulation process in a visual environment. Firstly, we study visual query autocompletion for graph databases. Techniques for query autocompletion have been proposed for web search and XML search; however, a corresponding capability for graph query engines is in its infancy. We propose a novel framework for graph query autocompletion, called AutoG. The novelties of AutoG are as follows. First, we formalize query composition, which specifies how query suggestions are formed. Second, we propose to increment a query with logical units called c-prime features, which are (i) frequent subgraphs and (ii) constructed from smaller c-prime features in no more than c ways. Third, we propose algorithms to rank candidate suggestions. Fourth, we propose a novel index called the feature DAG (FDAG) to further optimize the ranking. Secondly, we propose user focus-based graph query autocompletion. AutoG provides suggestions formed by adding subgraph increments at arbitrary places in an existing user query. However, users typically attend to only a small number of recently manipulated parts of the query at a time, so many such suggestions can be irrelevant. We present the GFocus framework, which exploits a novel notion of the user focus of graph query formulation. Intuitively, the focus is the subgraph that a user is currently working on. We formulate locality principles to automatically identify and maintain the focus. We propose novel monotone submodular ranking functions for generating popular and comprehensive query suggestions only at the focus, together with efficient algorithms and an index for ranking the suggestions. Thirdly, we propose graph query autocompletion for large graphs. The graph features exploited in AutoG are either absent or rare in large graphs. To address this, we present FLAG (Flexible graph query autocompletion for LArge Graphs). We propose wildcard labels for query graphs and query suggestions. In particular, FLAG allows augmenting users' queries with subgraph increments carrying wildcard labels, which summarize query suggestions that have similar increment structures but different labels. We propose an efficient ranking algorithm and a novel index, called the Suggestion Summarization DAG (SSDAG), to optimize online suggestion ranking. Detailed problem analysis and extensive experimental studies consistently demonstrate the effectiveness and robustness of our proposed techniques in a broad range of settings.
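The monotone submodular ranking described in the abstract can be illustrated with a minimal sketch (all names here are hypothetical, not the dissertation's actual code): greedy selection of k suggestions that maximize marginal coverage of distinct results, the standard approach for monotone submodular objectives, which carries the classic (1 - 1/e) approximation guarantee.

```python
def greedy_topk(candidates, k):
    """Greedily pick k suggestions maximizing marginal coverage.

    candidates: dict mapping a suggestion name to the set of
    result ids it covers (a stand-in for whatever utility a real
    autocompletion system would measure). Coverage of a union of
    sets is monotone and submodular, so greedy selection is a
    (1 - 1/e)-approximation to the optimal k-subset.
    """
    chosen, covered = [], set()
    for _ in range(min(k, len(candidates))):
        # pick the candidate adding the most not-yet-covered results
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: len(candidates[c] - covered),
        )
        chosen.append(best)
        covered |= candidates[best]
    return chosen
```

The marginal-gain criterion naturally diversifies the top-k list: a suggestion that mostly duplicates an already chosen one scores near zero on its second evaluation.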
882

Secondary school administration: Data processing's untapped market?

Seder, Alan J. January 1963 (has links)
Thesis (M.B.A.)--Boston University
883

Pulse position modulation for optical fiber local area networks

Hausien, H. H. January 1991 (has links)
No description available.
884

An extensible system for the automatic translation of a class of programming languages

Perwaiz, Najam January 1975 (has links)
This thesis deals with the topic of programming linguistics. A survey of current techniques in the fields of syntax analysis and semantic synthesis is given. An extensible automatic translator is described which can be used for the automatic translation of a class of programming languages. The automatic translator consists of two major parts: the syntax analyser and the semantic synthesizer. The syntax analyser is a generalised version of LL(k) parsers, the theoretical study of which has already been published by Lewis and Stearns and also by Rosenkrantz and Stearns. It accepts the grammar of a given language in a modified version of Backus Normal Form (MBNF) and parses the source language statements in a top-down, left-to-right process without ever backing up. The semantic synthesizer is a table-driven system which is called by the parser and performs semantic synthesis as the parsing proceeds. The semantics of a programming language is specified in the form of semantic productions, which the translator uses to construct semantic tables. The system is implemented in SNOBOL4 (SPITBOL version 2.0) on an IBM 360/44 and its description is supported by various examples. The automatic translator is an extensible system, and SNOBOL4, the implementation language, appears as its subset. The extensibility can be used to introduce lookahead in the parser, so that backup can be avoided, and to introduce new facilities in the semantic synthesizer.
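The top-down, left-to-right, no-backup parsing style the abstract describes is the defining property of LL(k) parsing. A minimal illustration (not the thesis system, and a toy grammar of my choosing): a predictive parser for E -> T ('+' T)*, T -> NUM, which decides every step from one token of lookahead and therefore never backs up.

```python
def parse_expr(tokens):
    """Parse and evaluate tokens for the grammar
    E -> T ('+' T)* ; T -> NUM, using a single token of
    lookahead (LL(1)): each decision is made left to right
    without ever retracting a consumed token."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expect(kind):
        nonlocal pos
        tok = peek()
        if tok is None or tok[0] != kind:
            raise SyntaxError(f"expected {kind}, got {tok}")
        pos += 1
        return tok

    def term():
        # T -> NUM
        return int(expect("NUM")[1])

    # E -> T ('+' T)* : the lookahead token alone selects the branch
    value = term()
    while peek() == ("OP", "+"):
        expect("OP")
        value += term()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
    return value
```

Because the grammar is LL(1), the single `peek()` fully determines whether to continue the loop, mirroring the "without ever backing up" property of the generalised LL(k) analyser.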
885

Translation of APL to other high-level languages

Jacobs, Margaret M. January 1975 (has links)
The thesis describes a method of translating the computer language APL to other high-level languages. Particular reference is made to FORTRAN, a language widely available to computer users. Although gaining in popularity, APL is not at present so readily available, and the main aim of the translation process is to enable the more desirable features of APL to be at the disposal of a far greater number of users. The translation process should also speed up the running of routines, since compilation in general leads to greater efficiency than interpretive techniques. Some inefficiencies of the APL language have been removed by the translation process. The above reasons for translating APL to other high-level languages are discussed in the introduction to the thesis. A description of the method of translation forms the main part of the thesis. The APL input code is first lexically scanned, a process whereby the subsequent phases are greatly simplified. An intermediate code form is produced in which bracketing is used to group operators and operands together, and to assign priorities to operators such that sub-expressions will be handled in the correct order. By scanning the intermediate code form, information is stacked until required later. The information is used to make possible a process of macro expansion. Each of the above processes is discussed in the main text of the thesis. The format of all information which can or must be supplied at translation time is clearly outlined in the text.
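The bracketing stage the abstract describes, grouping operators and operands so sub-expressions are handled in the correct order, can be sketched for the simplest case. APL evaluates dyadic operators right to left with uniform precedence, so fully parenthesizing an expression under that rule yields an intermediate form that a FORTRAN-oriented macro expansion could consume. This is an illustrative toy (flat scalar expressions only, hypothetical function name), not the thesis's actual intermediate code.

```python
def bracket_apl(tokens):
    """Fully parenthesize a flat APL-style expression.

    tokens: alternating operands and operators, e.g.
    ['a', '+', 'b', '*', 'c']. APL's right-to-left rule means
    the leftmost operand applies to the bracketed remainder of
    the expression, so 'a + b * c' groups as '(a + (b * c))'.
    """
    if len(tokens) == 1:
        return tokens[0]
    # operand, operator, then recursively bracket everything to the right
    return f"({tokens[0]} {tokens[1]} {bracket_apl(tokens[2:])})"
```

Once every sub-expression carries explicit brackets, emitting FORTRAN (which has conventional precedence) is a matter of textual expansion rather than re-analysis, which is presumably why the thesis brackets before macro expansion.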
886

Metody sběru a zpracování dat v prostředí www / Methods of collecting and processing data in the WWW environment

Masner, Jan January 2016 (has links)
The thesis presents the theoretical basis for the dissertation. Firstly, the terms data, information, and knowledge are characterized. Then, the current state and development of web technologies on the client and server sides are explored. The analysis of the current state of the art also covers Content Management Systems and their approach to information content management, and contemporary research papers in the field were studied. On the whole, a forthcoming methodical procedure and dissertation hypotheses are proposed.
887

Data mining and intervention in Calculus I

Manly, Ian January 1900 (has links)
Doctor of Philosophy / Department of Mathematics / Andrew Bennett / Many students have difficulty performing well in Calculus I. Since Calculus I is often the first mathematics course that students take in college, these difficulties can set a precedent of failure for these students. Using tools from data mining and interviews with Precalculus and Calculus I students, this work seeks to identify the different types of students in Calculus I, determine which students are at risk of failure, and study how intervention can help them succeed both in mathematics and in their college careers.
888

Fundamental parameters of the Milky Way galaxy

Camarillo, Tia January 1900 (has links)
Master of Science / Department of Physics / Bharat Ratra / Over three-quarters of observed galaxies are spiral galaxies, and of those spirals roughly two-thirds are barred. The Milky Way, a barred spiral galaxy, is naturally a great foundation for studying the structure of other barred spiral galaxies. Two important fundamental constants are used to describe the Milky Way: R₀ (the radial distance from the Sun to the Galactic center) and θ₀ (the Galactic rotational velocity at R₀). These two constants are also crucial for developing the rotation curve of the Galaxy, which helps in understanding the mass distribution of the Galaxy and may lend insight into the dark matter mass contribution. This work presents new, independently calculated values for R₀ and θ₀. The error distribution of a compilation of 28 recent (since 2011) independent measurements of R₀ is wider than a standard Gaussian and best fit by an n = 4 Student's t probability density function. Given this non-Gaussianity, the result of our median statistics analysis, summarized as R₀ = 8.0 ± 0.3 kpc (2σ error), probably provides the most reliable estimate of R₀. The unsymmetrized value is R₀ = 7.96 +0.24/-0.30 kpc (2σ error). A complete collection of 18 recent (since 2000) measurements of θ₀ indicates a median statistics estimate of θ₀ = 220 ± 10 km s⁻¹ (2σ error) as the most reliable summary for most practical purposes, at R₀ = 8.0 ± 0.3 kpc (2σ error). The resulting error distribution of this data set is only mildly non-Gaussian, much less so than that of R₀. These measurements use tracers that are believed to more accurately reflect the systematic rotation of the Milky Way. Unlike other recent compilations of R₀ and θ₀, our collection includes only independent measurements. This work concludes with a new set of Galactic constants (with 1σ error bars): θ₀ = 222 ± 6 km s⁻¹, R₀ = 7.96 ± 0.17 kpc, and ω₀ = θ₀/R₀ = 27.9 ± 1.0 km s⁻¹ kpc⁻¹, probably the most reliable to date.
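The median statistics method the abstract relies on can be sketched briefly. For N independent measurements, the probability that the true median lies between consecutive order statistics is binomial, 2⁻ᴺ C(N, i), which yields a central estimate and confidence range without assuming Gaussian errors. The sketch below is a minimal illustration of that idea (the function name and interface are my own, not the thesis code).

```python
import math


def median_statistics(values, cl=0.6827):
    """Median-statistics central estimate and confidence range.

    For N independent measurements, the probability that the true
    median lies below the k-th order statistic is the binomial sum
    sum_{i<=k} C(N, i) / 2^N, with no assumption about the shape
    of the individual error distributions.
    """
    xs = sorted(values)
    n = len(xs)
    # p_below[k] = P(true median < xs[k])
    p_below, total = [], 0.0
    for i in range(n):
        total += math.comb(n, i) / 2.0**n
        p_below.append(total)
    med = 0.5 * (xs[(n - 1) // 2] + xs[n // 2])
    tail = (1.0 - cl) / 2.0
    # widest order statistics still inside the two tails
    lo = max((xs[k] for k in range(n) if p_below[k] <= tail), default=xs[0])
    hi = min((xs[k] for k in range(n) if p_below[k] >= 1.0 - tail), default=xs[-1])
    return med, lo, hi
```

With only a handful of measurements the range is wide, reflecting how few order statistics bracket the median; with 28 measurements of R₀ the interval tightens to the sub-kiloparsec level quoted above.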
889

Vizualizace a analýza vícerozměrných dat / Visualization and analysis of multidimensional data

Jambor, Marek January 2012 (has links)
No description available.
890

Metodika archivace a zálohování digitálních dat v orgánech veřejné moci / Methodology of archiving and backing up digital data in public authorities

Kyncl, Libor January 2013 (has links)
No description available.
