  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
321

A formal model for fuzzy ontologies.

January 2006 (has links)
Au Yeung Ching Man.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2006.
Includes bibliographical references (leaves 97-110).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.iv
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- The Semantic Web and Ontologies --- p.3
Chapter 1.2 --- Motivations --- p.5
Chapter 1.2.1 --- Fuzziness of Concepts --- p.6
Chapter 1.2.2 --- Typicality of Objects --- p.6
Chapter 1.2.3 --- Context and Its Effect on Reasoning --- p.8
Chapter 1.3 --- Objectives --- p.9
Chapter 1.4 --- Contributions --- p.10
Chapter 1.5 --- Structure of the Thesis --- p.11
Chapter 2 --- Background Study --- p.13
Chapter 2.1 --- The Semantic Web --- p.14
Chapter 2.2 --- Ontologies --- p.16
Chapter 2.3 --- Description Logics --- p.20
Chapter 2.4 --- Fuzzy Set Theory --- p.23
Chapter 2.5 --- Concepts and Categorization in Cognitive Psychology --- p.25
Chapter 2.5.1 --- Theory of Concepts --- p.26
Chapter 2.5.2 --- Goodness of Example versus Degree of Typicality --- p.28
Chapter 2.5.3 --- Similarity between Concepts --- p.29
Chapter 2.5.4 --- Context and Context Effects --- p.31
Chapter 2.6 --- Handling of Uncertainty in Ontologies and Description Logics --- p.33
Chapter 2.7 --- Typicality in Models for Knowledge Representation --- p.35
Chapter 2.8 --- Semantic Similarity in Ontologies and the Semantic Web --- p.39
Chapter 2.9 --- Contextual Reasoning --- p.41
Chapter 3 --- A Formal Model of Ontology --- p.44
Chapter 3.1 --- Rationale --- p.45
Chapter 3.2 --- Concepts --- p.47
Chapter 3.3 --- Characteristic Vector and Property Vector --- p.47
Chapter 3.4 --- Subsumption of Concepts --- p.49
Chapter 3.5 --- Likeliness of an Individual in a Concept --- p.51
Chapter 3.6 --- Prototype Vector and Typicality --- p.54
Chapter 3.7 --- An Example --- p.59
Chapter 3.8 --- Similarity between Concepts --- p.61
Chapter 3.9 --- Context and Contextualization of Ontology --- p.65
Chapter 3.9.1 --- Formal Definitions --- p.67
Chapter 3.9.2 --- Contextualization of an Ontology --- p.69
Chapter 3.9.3 --- Contextualized Subsumption Relations, Likeliness, Typicality and Similarity --- p.71
Chapter 4 --- Discussions and Analysis --- p.73
Chapter 4.1 --- Properties of the Formal Model for Fuzzy Ontologies --- p.73
Chapter 4.2 --- Likeliness and Typicality --- p.78
Chapter 4.3 --- Comparison between the Proposed Model and Related Works --- p.81
Chapter 4.3.1 --- Comparison with Traditional Ontology Models --- p.81
Chapter 4.3.2 --- Comparison with Fuzzy Ontologies and DLs --- p.82
Chapter 4.3.3 --- Comparison with Ontologies modeling Typicality of Objects --- p.83
Chapter 4.3.4 --- Comparison with Ontologies modeling Context --- p.84
Chapter 4.3.5 --- Limitations of the Proposed Model --- p.85
Chapter 4.4 --- Significance of Modeling Likeliness, Typicality and Context in Ontologies --- p.86
Chapter 4.5 --- Potential Application of the Model --- p.88
Chapter 4.5.1 --- Searching in the Semantic Web --- p.88
Chapter 4.5.2 --- Benefits of the Formal Model of Ontology --- p.90
Chapter 5 --- Conclusions and Future Work --- p.91
Chapter 5.1 --- Conclusions --- p.91
Chapter 5.2 --- Future Research Directions --- p.93
Publications --- p.96
Bibliography --- p.97
322

Factors governing the quality of time encoded speech

Seneviratne, Aruna January 1982 (has links)
In time encoded speech (TES), information is transmitted relating to the distances between zero crossings and the shape of the waveform between successive zero crossings. The quality of the reconstructed TES signal therefore depends on the accuracy with which these original signal parameters are represented in the reconstructed signal. When transmitting the waveform parameter descriptors (symbols), the variable TES symbol generation rate has to be matched to constant-rate transmission channels using first-in first-out storage buffers. Since there are large variations in generation rates, at modest transmission rates these buffers overflow, destroying some of the symbols. Therefore, in practical TES systems the description of the original signal parameters also depends on the amount of buffer distortion introduced in the transmission process. In this thesis, two techniques for describing the waveshape more accurately than existing TES methods, four methods of controlling buffer overflow, and the auditory effects of these waveshape description and buffer overflow control techniques are presented. Using the two new waveshape description techniques and a parabolic reconstruction technique, it is shown that to obtain a significant improvement in quality in high-quality TES systems, a substantial increase in precise original signal information is required. Ways of achieving this kind of increase in original signal information without significantly increasing the data rate have been suggested and demonstrated. Using the four buffer control strategies it is shown that, for the control strategies to operate satisfactorily, buffer overflow in the voiced regions should be avoided. It is then shown that this can be achieved, without significantly increasing the transmission rate, by exploiting properties of speech perception. Further, various methods of quantising TES parameters and the tradeoffs between quantisation and buffer overflow distortion are also investigated.
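To make the epoch-based description concrete, the sketch below splits a waveform at its zero crossings and records a duration, peak magnitude and crude shape descriptor for each epoch. It is a reader's illustration, not code from the thesis; the descriptor choices, function names and parameters are assumptions.

```python
import numpy as np

def tes_epochs(signal, fs):
    """Split a waveform into epochs (segments between successive zero
    crossings) and describe each by duration, peak magnitude and a crude
    shape measure, in the spirit of TES symbol generation."""
    signs = np.signbit(signal)
    crossings = np.flatnonzero(signs[1:] != signs[:-1]) + 1
    bounds = np.concatenate(([0], crossings, [len(signal)]))

    epochs = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        seg = signal[start:end]
        if len(seg) == 0:
            continue
        duration = len(seg) / fs                      # seconds
        peak = float(np.max(np.abs(seg)))             # peak magnitude
        d = np.diff(seg)                              # shape: count of interior extrema
        extrema = int(np.sum(np.signbit(d[1:]) != np.signbit(d[:-1])))
        epochs.append((duration, peak, extrema))
    return epochs

# toy usage: a 100 Hz tone sampled at 8 kHz gives ~5 ms epochs
fs = 8000
t = np.arange(fs) / fs
print(tes_epochs(np.sin(2 * np.pi * 100 * t), fs)[:3])
```

In a TES coder these per-epoch descriptors would then be quantised into symbols and queued into the transmission buffer discussed above.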
323

The design of synchronisation and tracking loops for spread-spectrum communication systems

Al-Rawas, Layth January 1985 (has links)
The work reported in this thesis deals with aspects of synchronisation and tracking in direct sequence spread spectrum systems used in ranging and communications applications. This is regarded as a major design problem in such systems and several novel solutions are presented. Three main problem areas have been defined: i) reduction of the acquisition time of code synchronisation in the spread spectrum receiver; ii) reduction of the receiver complexity; iii) improvement of the signal to noise ratio performance of the system by better utilisation of the power spectrum in the main lobe of the transmitted signal. Greater tolerance to Doppler shift effects is also important. A general review of the spread spectrum concept and past work is first given in Chapter One, and common methods of synchronisation and tracking are reviewed in Chapter Two, together with current performance limitations. In Chapter Three a novel method is given for increasing the speed of synchronisation between locally generated and received codes, using a technique of controlling the loop's error curve during acquisition. This method is applied to delay lock loops of different widths, and a significant increase in maximum search rate is obtained. The effects of the width of the discriminator characteristic and of the damping ratio on the maximum search rate are also examined. The technique is applied to data-modulated spread spectrum systems which use either synchronous or asynchronous data communication. All methods have been tested experimentally and found to perform as predicted theoretically. Several novel spread spectrum configurations which employ multi-level sequences are given in Chapter Four. Some configurations reduce the complexity and cost of the spread spectrum receiver; others show some improvement in the maximum search rate as well as the signal to noise ratio performance. Some of these configurations have been implemented experimentally. In Chapter Five, the generation and properties of composite (Kronecker) sequences are explained, several types of component sequences are examined, and the reception of these composite sequences is discussed. In particular, a technique is introduced for achieving rapid acquisition of phase synchronisation using these codes. The effect of white Gaussian noise on the acquisition performance of the delay lock loop is given in Chapter Six, with experimental results obtained for both digital and analogue correlators. Chapter Seven gives a final summary of the conclusions and suggestions for further work.
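As an illustration of the acquisition problem the thesis addresses, the sketch below performs a plain serial-search acquisition of a PN code offset by sliding a local replica against the received chips. The LFSR taps, code length, noise level and threshold are illustrative assumptions, not the configurations studied in the thesis.

```python
import numpy as np

def generate_m_sequence(taps, length):
    """Maximal-length (m-)sequence from a simple Fibonacci LFSR,
    mapped to bipolar chips {+1, -1}."""
    state = np.ones(max(taps), dtype=int)
    seq = []
    for _ in range(length):
        seq.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = np.concatenate(([feedback], state[:-1]))
    return 1 - 2 * np.array(seq)

def serial_search_acquire(received, local_code, threshold):
    """Slide the local code one chip at a time and return the first offset
    whose normalised correlation exceeds the threshold (serial search)."""
    n = len(local_code)
    for offset in range(n):
        shifted = np.roll(local_code, offset)
        corr = np.dot(received[:n], shifted) / n
        if corr > threshold:
            return offset, corr
    return None, 0.0

# toy usage: 127-chip m-sequence, received code circularly shifted by 40 chips plus noise
np.random.seed(1)
code = generate_m_sequence(taps=(7, 6), length=127)
rx = np.roll(code, -40) + 0.5 * np.random.randn(127)
print(serial_search_acquire(rx, code, threshold=0.5))   # finds offset 87 (= 127 - 40)
```

The thesis's contribution sits on top of this basic picture: shaping the loop's error curve during acquisition so that the search rate can be raised without losing lock.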
324

Data reduction for the transmission of time encoded speech

Longshaw, Stephen January 1985 (has links)
Time Encoded Speech (TES) transmits information concerning the durations between zero-crossings, and the shape and amplitude of the signal between successive zero-crossings. This thesis examines a number of aspects of TES with a view to achieving data reductions that enable the transmission of speech, with acceptable quality and intelligibility, at low bit rates and a practical system delay. This thesis presents: (i) A study of techniques for signalling amplitude information in a TES coder. It was indicated that a minimum of the order of 1 bit per epoch is required. Diagnostic Rhyme Tests (DRT) yielded intelligibility scores of the order of 88% for algorithms employing 1 and 2 bits of amplitude information per epoch. (ii) Investigations into Median and Moving Average filtering for preprocessing the epoch duration sequences. It has been shown that such applications, which involve simple numerical smoothing, are of little value, for they degrade the quality of the synthesised speech. (iii) Studies of Extremal Coding and Orthogonal Transformations for achieving data reductions in the signalling of epoch duration and, in some instances, the peak magnitude sequences. Each technique yielded a useful data reduction. The technique using Hadamard Transformations yielded the greatest data reduction, a ratio of 2:1 for the representation of the epoch duration sequences. The Hadamard Transformation also proved to be of low complexity in its implementation. (iv) A real-time simplex digital voice channel, developed during the course of this thesis, and a study of the implementation of TES and TES-related coders. It is reported that speech of acceptable quality and intelligibility is achieved for a transmission rate of 10 or 15 kb/s with a transmission delay of 300 ms.
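The Hadamard-transform coding of epoch-duration sequences mentioned above can be sketched as follows. The block length, number of retained coefficients and reconstruction step are illustrative assumptions chosen to show a 2:1 reduction, not the parameters used in the thesis.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n a power of two)."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def compress_epoch_durations(durations, block=8, keep=4):
    """Transform blocks of epoch durations with a Hadamard matrix, keep only
    the first `keep` coefficients (here a 2:1 reduction), then reconstruct
    for comparison with the original sequence."""
    h = hadamard(block)
    d = np.asarray(durations, dtype=float)
    d = d[: len(d) - len(d) % block].reshape(-1, block)
    coeffs = d @ h / block            # forward transform (scaled)
    coeffs[:, keep:] = 0.0            # discard high-order coefficients
    recon = coeffs @ h                # inverse transform (H is symmetric, H @ H = n I)
    return recon.ravel()

# toy usage: a slowly varying epoch-duration sequence (in samples)
durs = 20 + np.round(3 * np.sin(np.linspace(0, 4, 32)))
print(np.round(compress_epoch_durations(durs), 1))
```

Because the transform is its own inverse up to scaling, the decoder only needs the retained coefficients to rebuild an approximation of the duration sequence.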
325

Algorithms and lower bounds for testing properties of structured distributions

Nikishkin, Vladimir January 2016 (has links)
In this doctoral thesis we consider various property testing problems for structured distributions. A distribution is said to be structured if it belongs to a certain class that can be simply described in approximation terms. Such distributions often arise in practice; for example, log-concave distributions, which are easily approximated by polynomials (see [Bir87a]), often appear in econometric research. For structured distributions, testing a property often requires far fewer samples than for general unrestricted distributions. In this thesis we prove that this is indeed the case for several distance-related properties. Namely, we give explicit sub-linear time algorithms for L1 and L2 distance testing between two structured distributions for the cases when either one or both of them are available only as a "black box". We also prove that the given algorithms have the best possible asymptotic complexity by proving matching lower bounds in the form of explicit problem instances (albeit constructed using randomised techniques) demanding at least a specified amount of data to be tested successfully. As the main numerical result, we prove that testing whether the total variation distance to an explicitly given distribution is at least ε requires O(√k/ε^2) samples, where k is an approximation parameter that depends on the class of distributions being tested and is independent of the support size. Testing whether the total variation distance between two "black box" distributions is at least ε requires O(k^(4/5)/ε^(6/5)) samples. In some cases, when k ~ n, this result may be worse than using an unrestricted testing algorithm (which requires O(n^(2/3)/ε^2) samples, where n is the domain size). To address this issue, we develop a third algorithm, which requires O((k^(2/3)/ε^(4/3)) log^(4/3)(n/k) log log(n/k)) samples and serves as a bridge between the cases of small and large domain sizes.
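A rough illustration of why structure helps: the domain is coarsened into k intervals and masses are compared there, so the cost scales with k rather than the full support size n. The plug-in estimate below is only a reader's illustration of that flattening step; it is not the thesis's tester and carries no optimality guarantees.

```python
import numpy as np

def flattened_tv_estimate(samples_p, q_pdf, breakpoints):
    """Plug-in estimate of total variation distance between an unknown p
    (given by samples) and an explicit density q, after coarsening the
    domain into k intervals.  Illustrative of the 'flattening' idea only."""
    counts, _ = np.histogram(samples_p, bins=breakpoints)
    p_hat = counts / counts.sum()
    q_mass = []
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        xs = np.linspace(a, b, 201)
        q_mass.append(np.mean(q_pdf(xs)) * (b - a))   # crude numerical integration
    q_mass = np.array(q_mass)
    q_mass /= q_mass.sum()                            # renormalise over the covered range
    return 0.5 * np.abs(p_hat - q_mass).sum()

# toy usage: p is a shifted Gaussian, q the standard normal, k = 16 intervals
rng = np.random.default_rng(0)
samples = rng.normal(0.2, 1.0, size=2000)
q = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
print(round(flattened_tv_estimate(samples, q, np.linspace(-4, 4, 17)), 3))
```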
326

Self-motivated composition of strategic action policies

Anthony, Tom January 2018 (has links)
In the last 50 years computers have made dramatic progress in their capabilities, but at the same time their failings have demonstrated that we, as designers, do not yet understand the nature of intelligence. Chess playing, for example, was long offered up as an example of the unassailability of the human mind to Artificial Intelligence, but now a chess engine on a smartphone can beat a grandmaster. Yet, at the same time, computers struggle to beat amateur players in simpler games, such as Stratego, where sheer processing power cannot substitute for a lack of deeper understanding. The task of developing that deeper understanding is overwhelming, and has previously been underestimated. There are many threads and all must be investigated. This dissertation explores one of those threads, namely asking the question "How might an artificial agent decide on a sensible course of action, without being told what to do?". To this end, this research builds upon empowerment, a universal utility which provides an entirely general method for allowing an agent to measure the preferability of one state over another. Empowerment requires no explicit goals, and instead favours states that maximise an agent's control over its environment. Several extensions to the empowerment framework are proposed, which drastically increase the array of scenarios to which it can be applied, and allow it to evaluate actions in addition to states. These extensions are motivated by concepts such as bounded rationality, sub-goals, and anticipated future utility. In addition, the novel concept of strategic affinity is proposed as a general method for measuring the strategic similarity between two (or more) potential sequences of actions. It does this in a general fashion, by examining how similar the distribution of future possible states would be in the case of enacting either sequence. This allows an agent to group action sequences, even in an unknown task space, into 'strategies'. Strategic affinity is combined with the empowerment extensions to form soft-horizon empowerment, which is capable of composing action policies in a variety of unknown scenarios. A Pac-Man-inspired prey game and the Gambler's Problem are used to demonstrate this self-motivated action selection, and a Sokoban-inspired box-pushing scenario is used to highlight the capability to pick strategically diverse actions. The culmination of this is that soft-horizon empowerment demonstrates a variety of 'intuitive' behaviours, which are not dissimilar to what we might expect a human to try. This line of thinking demonstrates compelling results, and it is suggested there are a couple of avenues for immediate further research. One of the most promising of these would be applying the self-motivated methodology and strategic affinity method to a wider range of scenarios, with a view to developing improved heuristic approximations that generate similar results. A goal of replicating similar results, whilst reducing the computational overhead, could help drive an improved understanding of how we may get closer to replicating a human-like approach.
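Empowerment is commonly formalised as the channel capacity from an agent's n-step action sequences to the states they lead to. The sketch below computes it for a toy transition table with the Blahut-Arimoto iteration; it is a reader's illustration under that standard formulation, and the toy world, iteration count and variable names are assumptions rather than anything taken from the dissertation.

```python
import numpy as np

def empowerment(p_next, iters=200):
    """Empowerment of a state in bits: channel capacity from action sequences
    to successor states, computed with the Blahut-Arimoto iteration.
    p_next[a, s] = probability of reaching state s after action sequence a."""
    n_actions = p_next.shape[0]
    r = np.full(n_actions, 1.0 / n_actions)          # distribution over action sequences
    for _ in range(iters):
        q = r @ p_next                               # induced state distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.nansum(p_next * np.log2(p_next / q), axis=1)   # per-action KL to q
        r = r * np.exp2(d)                           # Blahut-Arimoto reweighting
        r /= r.sum()
    q = r @ p_next
    with np.errstate(divide="ignore", invalid="ignore"):
        mi = np.nansum(r[:, None] * p_next * np.log2(p_next / q))
    return mi

# toy usage: 4 action sequences over 3 states; two sequences are redundant,
# so fewer than log2(4) = 2 bits of control are available
channel = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
print(round(empowerment(channel), 3))   # ≈ 1.585 bits (log2 of 3 distinguishable outcomes)
```

States from which more distinct futures can be reliably reached score higher, which is the sense in which empowerment rewards control over the environment.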
327

A new computer-based speech therapy tutor for vowel production

Turnbull, James January 1991 (has links)
Our primary mode of communication is via speech. Therefore, any person who has difficulty in producing understandable speech, for whatever reason, is at a great disadvantage. It is the role of the speech therapist to help such people to improve their speech production wherever possible. The aim of this work was to develop a computer-based speech therapy tutor for vowel production. The Tutor would be able to analyse monosyllabic utterances in real-time, extract the vowel portion and match this to a pre-determined template, and display the result with no appreciable delay. A fully-working prototype has been developed which employs general principles of aircraft tracking in a novel way, to track the coefficients of the quadratic factors of the all-pole linear-prediction model for speech production. It is shown how tracking these parameters can be used to extract extended vowels from a monosyllabic utterance. It is also shown how the algorithm which is used to determine the optimum frame-to-frame tracks can be used to perform template matching. The real plane on which the parameters are tracked, the rs-plane, suffers from non-linear scaling of frequency measures. This leads to poor spectral resolution of the perceptually-important low frequency parameters. To overcome this problem, the rs-plane can be warped in order that distance measures taken between points on the plane are more meaningful perceptually. The Tutor is based on a personal computer (PC). In order that real-time operation can be achieved, the processing power of the PC is enhanced by the addition of a digital signal processor (TMS32020) board and a transputer (T800) board. The prototype Tutor was developed with help and advice from Dundee Speech Therapy Service, Tayside Health Board, who also conducted a short pilot study of the Tutor.
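The all-pole model and its quadratic factors, which the Tutor tracks, can be sketched as follows: estimate LPC coefficients for a frame and read off the frequencies of the complex-conjugate pole pairs. This is a generic textbook construction added for illustration; the rs-plane tracking, model order, frame handling and template matching of the actual Tutor are not reproduced here.

```python
import numpy as np

def lpc(frame, order=10):
    """All-pole (LPC) coefficients for one frame via the autocorrelation
    method, i.e. solving the Yule-Walker normal equations."""
    frame = frame * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))            # A(z) = 1 - sum_k a_k z^{-k}

def quadratic_factor_frequencies(a, fs):
    """Frequencies (Hz) of the complex-conjugate root pairs of A(z), i.e. the
    quadratic factors whose movement a formant-style tracker would follow."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]             # keep one root per conjugate pair
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

# toy usage: a crude two-resonance, vowel-like frame at 8 kHz
rng = np.random.default_rng(0)
fs = 8000
t = np.arange(240) / fs
frame = (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
         + 0.01 * rng.standard_normal(t.size))
print(np.round(quadratic_factor_frequencies(lpc(frame, order=6), fs)))
```

Frame-to-frame continuity of these pole-pair parameters is what makes the aircraft-tracking analogy in the thesis natural: each quadratic factor behaves like a slowly moving target to be followed.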
328

Essays in Information Economics and Monotone Comparative Statics

Rappoport, Daniel January 2018 (has links)
This dissertation studies communication in a variety of contexts and attempts to derive general comparative statics results and equilibrium characterizations. The main goal is to understand how usual comparative statics predictions extend to realistic but previously intractable frameworks. These range from examining communication when outcomes are lotteries, to disclosure games when the evidence structure can be arbitrarily complex. Chapter 1 studies verifiable disclosure games, that is, a sender communicating with a receiver using hard evidence in order to influence his action choice. The main goal is to understand how prior beliefs about the evidence environment affect which actions are chosen in equilibrium. More specifically, the goal is to understand which beliefs will be less preferred by the sender: I say that a prior belief is more skeptical than another if it induces less preferred equilibrium actions for the sender regardless of his type or the receiver's preferences. The main contribution is to show that this equilibrium order, which is difficult to check, is equivalent to when the sender is expected to have more evidence, a more straightforward order over the primitives. This equivalence has application to any disclosure game in which the sender can affect or choose the receiver that he faces. Examples include jury selection and dynamic disclosure. In addition, the methodology of the paper provides an explicit expression for equilibrium actions, and a novel comparative statics result. Chapter 2 studies when choice over lotteries is monotonic given any choice set. A central prediction of the signaling literature is monotone comparative statics (MCS), or that higher types choose higher outcomes. The driving behavioral assumption behind MCS is the single crossing property on preferences. However, this property is only sufficient when the outcome is non-random. More realistically, choices correspond to lotteries over outcomes: a student choosing her education level is not certain about her lifetime salary. Motivated by this observation we characterize preferences that admit an analogous single crossing property over lotteries. We show that this property is necessary and sufficient to maintain MCS in many signaling applications when noise is introduced after the choice has been made. Chapter 3 studies how a principal incentivizes costly information acquisition from a disinterested agent through monetary transfers. The main focus is the moral hazard that arises when the principal can observe the results of the investigation but not the entire research process. More specifically, we assume that the principal can contract on the realized posterior belief but not on the posterior beliefs that could have been realized or on their probability. We find that, unlike in standard moral hazard problems, under either limited liability or risk aversion the principal implements his first best experiment at first best cost. However, under risk aversion and limited liability the principal suffers efficiency loss. More specifically, if the principal plans to implement an asymmetric experiment, one which seeks certainty with low probability and is uninformative otherwise, the second best experiment will be distorted toward less asymmetric experiments and provide the agent with a positive rent.
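For reference, the single crossing property that Chapter 2 generalises to lotteries can be stated in its standard (Milgrom-Shannon) textbook form as below; this restatement is added for the reader and is not quoted from the dissertation.

```latex
% Single crossing property of a payoff u(x, t) in (x; t):
% for all choices x' > x and types t' > t,
\[
  u(x', t) \ge u(x, t) \;\Longrightarrow\; u(x', t') \ge u(x, t'),
  \qquad
  u(x', t) > u(x, t) \;\Longrightarrow\; u(x', t') > u(x, t').
\]
```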
329

Beyond Discourse: Computational Text Analysis and Material Historical Processes

Atria, Jose Tomas January 2018 (has links)
This dissertation proposes a general methodological framework for the application of computational text analysis to the study of long duration material processes of transformation, beyond their traditional application to the study of discourse and rhetorical action. Over a thin theory of the linguistic nature of social facts, the proposed methodology revolves around the compilation of term co-occurrence matrices and their projection into different representations of an hypothetical semantic space. These representations offer solutions to two problems inherent to social scientific research: that of "mapping" features in a given representation to theoretical entities and that of "alignment" of the features seen in models built from different sources in order to enable their comparison. The data requirements of the exercise are discussed through the introduction of the notion of a "narrative horizon", the extent to which a given source incorporates a narrative account in its rendering of the context that produces it. Useful primary data will consist of text with short narrative horizons, such that the ideal source will correspond to a continuous archive of institutional, ideally bureaucratic text produced as mere documentation of a definite population of more or less stable and comparable social facts across a couple of centuries. Such a primary source is available in the Proceedings of the Old Bailey (POB), a collection of transcriptions of 197,752 criminal trials seen by the Old Bailey and the Central Criminal Court of London and Middlesex between 1674 and 1913 that includes verbatim transcriptions of witness testimony. The POB is used to demonstrate the proposed framework, starting with the analysis of the evolution of an historical corpus to illustrate the procedure by which provenance data is used to construct longitudinal and cross-sectional comparisons of different corpus segments. The co-occurrence matrices obtained from the POB corpus are used to demonstrate two different projections: semantic networks that model different notions of similarity between the terms in a corpus' lexicon as an adjacency matrix describing a graph, and semantic vector spaces that approximate a lower-dimensional representation of an hypothetical semantic space from its empirical effects on the co-occurrence matrix. Semantic networks are presented as discrete mathematical objects that offer a solution to the mapping problem through operations that allow for the construction of sets of terms over which an order can be induced using any measure of significance of the strength of association between a term set and its elements. Alignment can then be solved through different similarity measures computed over the intersection and union of the sets under comparison. Semantic vector spaces are presented as continuous mathematical objects that offer a solution to the mapping problem in the linear structures contained in them. These include, in all cases, a meaningful metric that makes it possible to define neighbourhoods and regions in the semantic space and, in some cases, a meaningful orientation that makes it possible to trace dimensions across them. Alignment can then proceed endogenously in the case of oriented vector spaces for relative comparisons, or through the construction of common basis sets for non-oriented semantic spaces for absolute comparisons.
The dissertation concludes with the proposition of a general research program for the systematic compilation of text distributional patterns in order to facilitate a much needed process of calibration required by the techniques discussed in the previous chapters. Two specific avenues for further research are identified. First, the development of incremental methods of projection that allow a semantic model to be updated as new observations come along, an area that has received considerable attention from the field of electronic finance, where Gentleman's algorithm for matrix factorisation is pervasively used. Second, the development of additively decomposable models that may be combined or disaggregated to obtain a similar result to the one that would have been obtained had the model been computed from the union or difference of their inputs. This is established to be dependent on whether the functions that actualise a given model are associative under addition or not.
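A compact sketch of the pipeline described above, covering a term co-occurrence matrix, a thresholded cosine-similarity semantic network, and a truncated-SVD vector space, is given below. The window size, threshold and toy documents are the reader's assumptions; the dissertation's own weighting, significance measures and projection choices are not reproduced.

```python
import numpy as np
from collections import Counter

def cooccurrence_matrix(docs, window=5):
    """Symmetric term co-occurrence counts within a sliding window."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    counts = Counter()
    for d in docs:
        words = d.split()
        for i, w in enumerate(words):
            for v in words[i + 1:i + window]:
                counts[tuple(sorted((index[w], index[v])))] += 1
    m = np.zeros((len(vocab), len(vocab)))
    for (a, b), c in counts.items():
        m[a, b] = m[b, a] = c
    return vocab, m

def cosine_network(m, threshold=0.3):
    """Semantic-network adjacency matrix: thresholded cosine similarity of
    the terms' co-occurrence profiles."""
    norms = np.linalg.norm(m, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    sim = (m / norms) @ (m / norms).T
    adj = (sim > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

def svd_space(m, dims=2):
    """Low-dimensional semantic vector space via truncated SVD."""
    u, s, _ = np.linalg.svd(m)
    return u[:, :dims] * s[:dims]

# toy usage with three miniature "testimonies"
docs = ["the prisoner stole a silver watch",
        "the watch was found on the prisoner",
        "a constable found the silver watch"]
vocab, m = cooccurrence_matrix(docs)
print(vocab)
print(cosine_network(m))
print(np.round(svd_space(m), 2))
```

In the network view, term sets and their associations support the mapping and alignment operations described above; in the vector-space view, the same roles are played by neighbourhoods and directions in the reduced space.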
330

Some results on linear network coding.

January 2004 (has links)
Ngai Chi Kin.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 57-59).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Linear Network Coding --- p.12
Chapter 3 --- Combination Networks --- p.16
Chapter 4 --- Multi-Source Multicast Networks --- p.31
Chapter 5 --- Multi-source Network Coding with two sinks --- p.42
Chapter 6 --- Conclusion --- p.55
Bibliography --- p.59
