About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
871

Statistical analysis of multivariate interval-censored failure time data

Chen, Man-Hua, January 2007 (has links)
Thesis (Ph.D.)--University of Missouri-Columbia, 2007. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on March 6, 2009). Includes bibliographical references.
872

Seeing the forest for the trees: tree-based uncertain frequent pattern mining

MacKinnon, Richard Kyle 12 1900 (has links)
Many frequent pattern mining algorithms operate on precise data, where each data point is an exact accounting of a phenomenon (e.g., I have exactly two sisters). Alas, reasoning this way is a simplification for many real-world observations: measurements, predictions, environmental factors, human error, etc. all introduce a degree of uncertainty into the mix. Tree-based frequent pattern mining algorithms such as FP-growth are particularly efficient due to their compact in-memory representations of the input database, but their uncertain extensions can require many more tree nodes. I propose new algorithms with tightened upper bounds on expected support, Tube-S and Tube-P, which mine frequent patterns from uncertain data. Extensive experimentation and analysis on datasets with different probability distributions show the tightness of my bounds in different situations. / February 2016
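The quantity that Tube-S and Tube-P bound is expected support, which under the independence assumption standard in uncertain frequent pattern mining is the sum over transactions of the product of item probabilities. A minimal Python sketch of that computation (the itemset and probability values are illustrative, not from the thesis):

```python
# Expected support of an itemset in an uncertain database, under the
# usual assumption that item existential probabilities are independent.
from math import prod

def expected_support(itemset, database):
    """Sum over transactions of the product of the items' probabilities."""
    return sum(
        prod(t[i] for i in itemset) if all(i in t for i in itemset) else 0.0
        for t in database
    )

# Each transaction maps an item to its existential probability.
db = [
    {"a": 0.9, "b": 0.7},            # transaction 1
    {"a": 0.5, "b": 0.8, "c": 0.4},  # transaction 2
    {"b": 1.0},                      # transaction 3 (lacks "a": contributes 0)
]
print(expected_support({"a", "b"}, db))  # 0.9*0.7 + 0.5*0.8 ≈ 1.03
```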
873

An experiment in high-level microprogramming

Sommerville, John F. January 1977 (has links)
This thesis describes an experiment in developing a true high-level microprogramming language for the Burroughs B1700 series of computers. Available languages for machine description, both at a behavioural level and at a microprogramming level, are compared, and the conclusion is drawn that none was suitable for our purpose and that it was necessary to develop a new language, which we call SUILVEN. SUILVEN is a true high-level language with no machine-dependent features. It permits the exact specification of the size of abstract machine data areas (via the BITS declaration) and allows the user to associate structure with these data areas (via the TEMPLATE declaration). SUILVEN only permits the use of structured control statements (if-then-else, while-do, etc.); the go to statement is not a feature of the language. SUILVEN is compiled into microcode for the B1700 range of machines. The compiler is written in SNOBOL4 and uses a top-down recursive descent analysis technique. Using abstract machines for PASCAL and the locally developed SASL, SUILVEN was compared with other high- and low-level languages. The conclusions drawn from this comparison were as follows: (i) SUILVEN was perfectly adequate for describing simple S-machines; (ii) SUILVEN lacked certain features for describing higher-level machines; (iii) the needs of a machine description language and a microprogram implementation language are different, and it is unrealistic to attempt to combine these in a single language.
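The abstract names the BITS and TEMPLATE declarations but gives no SUILVEN syntax, so the following Python sketch only models the underlying idea: a data area of exact bit size with a named structure overlaid on it. All field names and widths here are hypothetical.

```python
# Illustrative model of the BITS/TEMPLATE idea: a fixed-width data area
# (BITS) with named bit fields laid over it (TEMPLATE). Not SUILVEN syntax.

class DataArea:
    def __init__(self, bits):
        self.bits = bits        # exact size, as a BITS declaration would fix it
        self.value = 0

class Template:
    """Associates named fields (widths in bits) with a data area."""
    def __init__(self, area, **fields):
        assert sum(fields.values()) == area.bits
        self.area, self.fields = area, fields

    def _offset(self, name):
        # Fields are laid out left to right, most significant first.
        off = self.area.bits
        for f, width in self.fields.items():
            off -= width
            if f == name:
                return off, width
        raise KeyError(name)

    def get(self, name):
        off, width = self._offset(name)
        return (self.area.value >> off) & ((1 << width) - 1)

    def set(self, name, v):
        off, width = self._offset(name)
        mask = ((1 << width) - 1) << off
        self.area.value = (self.area.value & ~mask) | ((v << off) & mask)

word = DataArea(bits=16)                           # hypothetical 16-bit area
instr = Template(word, opcode=4, reg=4, operand=8)  # hypothetical structure
instr.set("opcode", 0b1010)
instr.set("operand", 0x3F)
print(bin(word.value), instr.get("opcode"))  # 0b1010000000111111 10
```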
874

The effective application of syntactic macros to language extensibility

Campbell, William R. January 1978 (has links)
Starting from B M Leavenworth's proposal for syntactic macros, we describe an extension language LE with which one may extend a base language LB to define a new programming language LP. The syntactic macro processor is designed to minimise the overheads required for implementing the extensions and for carrying the syntax and data type error diagnostics of LB through to the extended language LP. Wherever possible, programming errors are flagged where they are introduced in the source text, whether in a macro definition or in a macro call. LE provides a notation, similar to popular extended forms of BNF, for specifying alternative syntaxes for new linguistic forms in the macro template; a separate assertion clause for imposing context-sensitive restrictions on macro calls which cannot be imposed by the template; and a non-procedural language, reflecting the nested structure of the template, for prescribing conditional text replacement in the macro body. A super user may use LE to introduce new linguistic forms to LB and to redefine, replace or delete existing forms. The end user is given the syntactic macro in terms of an LP macro declaration with which he may define new forms that are local to the lexical environments in which they are declared in his LP program. Because the macro process is embedded in and directed by a deterministic top-down parse, the user can be sure that his extensions are unambiguous. Examples of macro definitions are given using a base language LB which has been designed to be rich enough in syntax and data types to illustrate the problems encountered in extending high-level languages. An implementation of a compiler/processor for LB and LE is also described. A survey of previous work in this area, summaries of LE and LB, and a description of the abstract target machine are contained in appendices.
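As a rough illustration of the template-to-body rewriting a syntactic macro performs (in made-up notation, not LE's), the sketch below expands a hypothetical `unless ... do ...` form into a base-language conditional:

```python
# Illustrative only: the template/body idea behind syntactic macros. A macro
# pairs a syntax template with a replacement body; calls in the source text
# are rewritten into base-language forms. Not LE's actual notation.
import re

macros = {
    # Hypothetical extension form and its base-language expansion.
    "unless": (re.compile(r"unless (.+?) do (.+)"),
               r"if not (\1) then \2"),
}

def expand(line):
    for pattern, body in macros.values():
        if pattern.fullmatch(line.strip()):
            return expand(pattern.sub(body, line.strip()))  # macros may nest
    return line

print(expand("unless x = 0 do y := y / x"))
# -> if not (x = 0) then y := y / x
```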
875

Effective termination techniques

Cropper, Nick I. January 1997 (has links)
An important property of term rewriting systems is termination: the guarantee that every rewrite sequence is finite. This thesis is concerned with orderings used for proving termination, in particular the Knuth-Bendix and polynomial orderings. First, two methods for generating termination orderings are enhanced. The Knuth-Bendix ordering algorithm incrementally generates numeric and symbolic constraints that are sufficient for the termination of the rewrite system being constructed. The KB ordering algorithm requires an efficient linear constraint solver that detects the nature of degeneracy in the solution space, and for this a revised method of complete description is presented that eliminates the space redundancy that crippled previous implementations. Polynomial orderings are more powerful than Knuth-Bendix orderings, but are usually much harder to generate. Rewrite systems consisting of only a handful of rules can overwhelm existing search techniques due to the combinatorial complexity; a genetic algorithm is applied with some success. Second, a subset of the family of polynomial orderings is analysed. The polynomial orderings on terms in two unary function symbols are fully resolved into simpler orderings, showing that most of the complexity of polynomial orderings is redundant. The order type (logical invariant), either r or A (numeric invariant), and the precedence are calculated for each polynomial ordering. The invariants correspond in a natural way to the parameters of the orderings, and so the tabulated results can be used to convert easily between polynomial orderings and more tangible orderings. The orderings of order type are two of the recursive path orderings. All of the other polynomial orderings are of order type ω or ω², and each can be expressed as a lexicographic combination of r (weight), A (matrix), and lexicographic (dictionary) orderings. The thesis concludes by showing how the analysis extends to arbitrary monadic terms, and by discussing possible developments for the future.
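For readers unfamiliar with polynomial orderings: a rule is oriented by interpreting each function symbol as a polynomial over the naturals and showing that the left-hand side's interpretation strictly exceeds the right-hand side's. The sketch below shows the idea the thesis's generation techniques automate; the rule and interpretations are illustrative, and the sampled assert is a sanity check rather than a proof.

```python
# A minimal sketch of orienting one rewrite rule with a polynomial ordering.
# Interpretations: [double](n) = 3n, [s](n) = n + 1 (illustrative choices).
interp = {
    "double": lambda n: 3 * n,
    "s":      lambda n: n + 1,
}

def lhs(n):   # interpretation of double(s(x))
    return interp["double"](interp["s"](n))

def rhs(n):   # interpretation of s(s(double(x)))
    return interp["s"](interp["s"](interp["double"](n)))

# Rule double(s(x)) -> s(s(double(x))): [lhs](n) = 3n+3 > 3n+2 = [rhs](n)
# for all naturals n, so the rule strictly decreases under this ordering.
assert all(lhs(n) > rhs(n) for n in range(1000))  # numeric spot check only
print("rule orientable: 3n+3 > 3n+2")
```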
876

On the development of Algol

Morrison, Ronald January 1979 (has links)
The thesis outlines the major problems in the design of high-level programming languages. The complexity of these languages has caused users problems of intellectual manageability. Part of this complexity is caused by a lack of generality, which also causes loss of power. The maxim of power through simplicity, simplicity through generality is established. To achieve this simplicity a number of ground rules, the principle of abstraction, the principle of correspondence and the principle of data type completeness, are discussed and used to form a methodology for programming language design. The methodology is then put into practice and the language S-algol is designed as the first member of a family of languages. The second part of the thesis describes the implementation of the S-algol language. In particular, a simple and effective method of compiler construction based on the technique of recursive descent is developed. The method uses a hierarchy of abstractions which are implemented as layers to define the compiler; the simplicity and success of the technique depend on the structuring of the layers and the choice of abstractions. The compiler is itself written in S-algol. An abstract machine to support the S-algol language is then proposed and implemented. This machine, the S-code machine, has two stacks and a heap with a garbage collector, and a unique method of procedure entry and exit. A detailed description of the S-code machine for the PDP11 computer is given in the appendices. The thesis then describes the measurement tools used to aid the implementer and the user, and discusses the improvements in efficiency obtained when these tools are used on the compiler itself. Finally, the research is evaluated and a discussion of how it may be extended is given.
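Recursive descent, the compiler-construction technique the thesis develops, maps each nonterminal of the grammar to one procedure. A minimal sketch for a toy expression grammar (not S-algol's) in Python:

```python
# A minimal recursive-descent parser/evaluator: each grammar nonterminal
# (expr, term, factor) becomes one procedure that consumes tokens.
import re

def tokenize(src):
    return re.findall(r"\d+|[()+*]", src)

class Parser:
    def __init__(self, tokens):
        self.toks, self.pos = tokens, 0

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def eat(self, tok):
        assert self.peek() == tok, f"expected {tok}"
        self.pos += 1

    def expr(self):                 # expr ::= term ('+' term)*
        v = self.term()
        while self.peek() == "+":
            self.eat("+"); v += self.term()
        return v

    def term(self):                 # term ::= factor ('*' factor)*
        v = self.factor()
        while self.peek() == "*":
            self.eat("*"); v *= self.factor()
        return v

    def factor(self):               # factor ::= number | '(' expr ')'
        if self.peek() == "(":
            self.eat("("); v = self.expr(); self.eat(")")
            return v
        v = int(self.peek()); self.pos += 1
        return v

print(Parser(tokenize("2*(3+4)")).expr())  # prints 14
```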
877

The imperative implementation of algebraic data types

Thomas, Muffy January 1988 (has links)
The synthesis of imperative programs for hierarchical, algebraically specified abstract data types is investigated. Two aspects of the synthesis are considered: the choice of data structures for efficient implementation, and the synthesis of linked implementations for the class of ADTs which insert and access data without an explicit key. The methodology is based on an analysis of the algebraic semantics of the ADT. Operators are partitioned according to the behaviour of their corresponding operations in the initial algebra. A family of relations, the storage relations of an ADT, is defined. They depend only on the operator partition and reflect an observational view of the ADT. The storage relations are extended to storage graphs: directed graphs with a subset of nodes designated for efficient access. The data structures in our imperative language are chosen according to properties of the storage relations and storage graphs. Linked implementations are synthesised in a stepwise manner by implementing the given ADT first by its storage graphs, and then by linked data structures in the imperative language. Some circumstances under which the resulting programs have constant time complexity are discussed.
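To make the setting concrete: an algebraically specified ADT is given by equations over its operators, and the thesis synthesises linked imperative implementations of such types. The sketch below pairs a standard FIFO queue spec with a linked implementation; both are illustrative and not the thesis's notation, a queue being a classic example of inserting and accessing data without an explicit key.

```python
# Illustrative sketch: a queue specified algebraically, implemented
# imperatively as a linked structure, with the laws used as checks.

class Node:
    def __init__(self, value):
        self.value, self.next = value, None

class Queue:
    def __init__(self):
        self.head = self.tail = None   # linked implementation, O(1) operations

    def enqueue(self, x):
        n = Node(x)
        if self.tail is None:
            self.head = self.tail = n
        else:
            self.tail.next = n
            self.tail = n

    def is_empty(self):
        return self.head is None

    def front(self):
        return self.head.value

    def dequeue(self):
        self.head = self.head.next
        if self.head is None:
            self.tail = None

# Algebraic laws, checked against the imperative implementation:
q = Queue(); q.enqueue(1)
assert q.front() == 1        # front(enqueue(empty, x)) = x
q.enqueue(2)
assert q.front() == 1        # not is_empty(q): front(enqueue(q, y)) = front(q)
print("laws hold on this trace")
```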
878

Image transmission over the Cambridge Ring

Lee, Bu-Sung January 1986 (has links)
Local Area Networks (LANs) are destined to play a rapidly increasing part in the transmission and distribution of a wide range of information, and this thesis describes a study of the problems concerning the transmission of coloured images over a particular network, the Cambridge Ring. A colour image station has been developed for use on the Cambridge Ring. It provides two main services: high-resolution freeze-frame transmission and medium-resolution slow-scan image transmission.
879

Security of Big Data: Focus on Data Leakage Prevention (DLP)

Nyarko, Richard January 2018 (has links)
Data has become an indispensable part of our daily lives in this information age, and the amount of data generated is growing exponentially due to technological advances. The volume of data generated daily has given rise to the term big data, and securing big data processes is therefore of great concern. The survival of many organizations depends on preventing these data from falling into the wrong hands, which could have serious consequences. For instance, the credibility of several businesses or organizations would be compromised if sensitive data such as trade secrets, project documents, and customer profiles were leaked to their competitors (Alneyadi et al., 2016). In addition, traditional security mechanisms such as firewalls, virtual private networks (VPNs), and intrusion detection systems/intrusion prevention systems (IDSs/IPSs) are not enough to prevent the leakage of such sensitive data. To overcome this deficiency in protecting sensitive data, a new class of systems called data leakage prevention systems (DLPSs) has been introduced. Over the past years, many research contributions have been made to address data leakage; however, most past research focused on detecting leakage rather than preventing it. This thesis contributes to research by using the preventive approach of DLPSs to propose a hybrid symmetric-asymmetric encryption scheme to prevent data leakage. The thesis followed the Design Science Research Methodology (DSRM), with CRISP-DM (CRoss Industry Standard Process for Data Mining) as the kernel theory or framework for designing the IT artifact (method). The proposed encryption method ensures that all confidential or sensitive documents of an organization are encrypted, so that only users with access to the decryption keys can read them. This is achieved after the documents have been classified into confidential and non-confidential ones with a Naïve Bayes Classifier (NBC). Any organization that needs to prevent data leakage before it occurs can make use of this proposed hybrid encryption method.
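A minimal sketch of the general hybrid (envelope) encryption pattern that such a scheme builds on, using the Python `cryptography` package: the document is encrypted with a symmetric key, and that key is wrapped with an asymmetric public key. The NBC classification step is omitted, and this is a sketch of the pattern, not the thesis's actual artifact.

```python
# Hybrid (envelope) encryption: symmetric encryption for the document body,
# asymmetric encryption to wrap the symmetric key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"trade secret: confidential"        # illustrative content

# Symmetric layer: fast bulk encryption of the document itself.
doc_key = Fernet.generate_key()
ciphertext = Fernet(doc_key).encrypt(document)

# Asymmetric layer: only holders of the private key can unwrap doc_key.
wrapped_key = public_key.encrypt(doc_key, oaep)

# An authorized user unwraps the key and decrypts the document.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == document
```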
880

Graph query autocompletion

Yi, Peipei 31 August 2018 (has links)
The prevalence of graph-structured data in modern real-world applications has led to a rejuvenation of research on graph data management and analytics. Several database query languages have been proposed for textually querying graph databases. Unfortunately, formulating a graph query using any of these query languages often demands considerable cognitive effort and requires "programming" skill at least similar to programming in SQL. Yet, in a wide spectrum of graph applications, consumers need to query graph data but are not proficient query writers. Hence, it is important to devise intuitive techniques that can alleviate the burden of query formulation and thus increase the usability of graph databases. In this dissertation, we take the first step to study the graph query autocompletion problem. We provide techniques that take a user's graph query as input and generate top-k query suggestions as output, to help alleviate the verbose and error-prone graph query formulation process in a visual environment.

Firstly, we study visual query autocompletion for graph databases. Techniques for query autocompletion have been proposed for web search and XML search, but a corresponding capability for graph query engines is in its infancy. We propose a novel framework for graph query autocompletion (called AutoG). The novelties of AutoG are as follows. First, we formalize query composition, which specifies how query suggestions are formed. Second, we propose to increment a query with logical units called c-prime features, which are (i) frequent subgraphs and (ii) constructed from smaller c-prime features in no more than c ways. Third, we propose algorithms to rank candidate suggestions. Fourth, we propose a novel index called feature DAG (FDAG) to further optimize the ranking.

Secondly, we propose user focus-based graph query autocompletion. AutoG provides suggestions that are formed by adding subgraph increments at arbitrary places in an existing user query. However, humans can only interact with a small number of recent software artifacts in hand, so many such suggestions could be irrelevant. We present the GFocus framework, which exploits a novel notion of the user focus of graph query formulation. Intuitively, the focus is the subgraph that a user is working on. We formulate locality principles to automatically identify and maintain the focus. We propose novel monotone submodular ranking functions for generating popular and comprehensive query suggestions only at the focus, together with efficient algorithms and an index for ranking the suggestions.

Thirdly, we propose graph query autocompletion for large graphs. Graph features that have been exploited in AutoG are either absent or rare in large graphs. To address this, we present Flexible graph query autocompletion for LArge Graphs, called FLAG. We propose wildcard labels for the query graph and query suggestions. In particular, FLAG allows augmenting users' queries with subgraph increments carrying wildcard labels, which summarize query suggestions that have similar increment structures but different labels. We propose an efficient ranking algorithm and a novel index, called Suggestion Summarization DAG (SSDAG), to optimize the online suggestion ranking.

Detailed problem analysis and extensive experimental studies consistently demonstrate the effectiveness and robustness of our proposed techniques in a broad range of settings.
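GFocus ranks suggestions with monotone submodular functions, for which greedy selection is the standard approach (it guarantees a (1 − 1/e)-approximation for top-k maximization). The sketch below illustrates that general idea only; the coverage-style objective and candidate data are hypothetical, not the dissertation's actual ranking functions.

```python
# Greedy top-k selection for a monotone submodular objective, the standard
# technique behind ranking functions of this kind. Illustrative data only.

def coverage(selected):
    """Monotone submodular: number of distinct target queries covered."""
    return len(set().union(*selected)) if selected else 0

def greedy_topk(candidates, k):
    chosen = []
    while len(chosen) < k:
        # Pick the candidate with the largest marginal gain.
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: coverage(chosen + [c]) - coverage(chosen))
        chosen.append(best)
    return chosen

# Each candidate suggestion covers a set of (hypothetical) target queries.
candidates = [frozenset({1, 2, 3}), frozenset({3, 4}),
              frozenset({1, 2}), frozenset({5})]
print(greedy_topk(candidates, 2))  # picks {1,2,3} first, then {3,4}
```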
