1.
Changing complex documents / Carter, Simon Matthew James. January 2001 (PDF)
Thesis (M. Sc.)--University of Queensland, 2002. / Includes bibliographical references.
2.
Empirical comparison of three signal processing methods: adaptive periodogram technique, Morlet wavelet transform, and adaptive windowed Fourier transform, and their application on gravity waves / Zhang, Junbo. January 2006
Thesis (M.C.S.)--Miami University, Dept. of Computer Science and Systems Analysis, 2006. / Title from first page of PDF document. Includes bibliographical references (p. 50-51).
3.
Solutions to non-stationary problems in wavelet space / Tassignon, Hugo. January 1997
No description available.
4.
The design of a protocol for collaboration in a distributed repository - Nomad / Rama, Jiten. January 2007
Thesis (M.Sc.) (Computer Science)--University of Pretoria, 2006. / Includes summary. Includes bibliographical references. Available on the Internet via the World Wide Web.
5.
The use of digital signal processing in adaptive HF frequency management / Gallagher, Mark. January 1995
No description available.
6.
The design of a unified data model / Edgar, John Alexander. January 1986
A unified data model is presented which offers a superset of the data modelling constructs and semantic integrity constraints of major existing data models. These semantic integrity constraints are both temporal and non-temporal, and are classified by constraint type (attribute, membership, set, temporal) and semantic integrity category (type, attribute value, intra-tuple, intra-class, inter-class). The unified data model has an onion-skin architecture comprising a DB state, DB state transition and temporal models, the realization of all three providing the facilities of a temporal DB. The DB state model is concerned with object-entities and the DB state transition model deals with event-entities and the non-destructive updating of data. A third species of entity is the rule. The temporal model conveys the times of object existence, event occurrence, retro-/post-active update, data error correction, the historical states of objects, and Conceptual Schema versions. Times are either instantaneous/durational time-points or time-intervals. Object and event classes are organized along the taxonomic axes of aggregation, association, categorization and generalization. Semantic integrity constraints and attribute inheritance are defined for each kind of data abstraction. A predicate logic oriented Conceptual Schema language is outlined for specifying class definitions, abstraction and transformation rules, and semantic integrity constraints. Higher-order abstraction classes are primarily defined in terms of the constraints for their lower-order, definitive classes. Transformation rules specify update dependencies between classes. Support is shown for the major features of the main semantic data models, and a token implementation is presented.
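As a rough illustration of the constructs described above, the following Python sketch (illustrative only, not taken from the thesis; all names are hypothetical) shows an object-entity whose history is kept through non-destructive updates over valid-time intervals, with a simple attribute-value constraint and a temporal constraint checked on update.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class EmployeeVersion:
    """One historical state of an Employee object-entity (kept by non-destructive update)."""
    emp_id: int
    salary: float
    valid_from: date                  # start of the valid-time interval
    valid_to: Optional[date] = None   # None means the version is still current

@dataclass
class EmployeeHistory:
    versions: List[EmployeeVersion] = field(default_factory=list)

    def update_salary(self, emp_id: int, new_salary: float, when: date) -> None:
        """Append a new version instead of overwriting the old one."""
        if new_salary <= 0:
            raise ValueError("attribute-value constraint violated: salary must be positive")
        current = next((v for v in self.versions
                        if v.emp_id == emp_id and v.valid_to is None), None)
        if current is not None:
            if when < current.valid_from:
                raise ValueError("temporal constraint violated: update precedes current interval")
            current.valid_to = when   # close the old interval at the update time
        self.versions.append(EmployeeVersion(emp_id, new_salary, valid_from=when))

    def as_of(self, emp_id: int, when: date) -> Optional[EmployeeVersion]:
        """Historical query: which version of the object was valid at time `when`?"""
        for v in self.versions:
            if (v.emp_id == emp_id and v.valid_from <= when
                    and (v.valid_to is None or when < v.valid_to)):
                return v
        return None
```

The `as_of` query corresponds to retrieving a historical state of an object at a given time-point, one of the facilities a temporal DB of the kind outlined above would provide.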
7.
Hierarchical video semantic annotation: the vision and techniques / Li, Honglin. January 2003
Thesis (Ph. D.)--Ohio State University, 2003. / Title from first page of PDF file. Document formatted into pages; contains xv, 146 p.; also includes graphics. Includes bibliographical references (p. 136-146).
8.
Cluster-based relevance models for automatic image annotation / Petkova, Desislava I. January 2005 (PDF)
Undergraduate honors paper--Mount Holyoke College, 2005. Dept. of Computer Science. / Includes bibliographical references (leaves 113-116).
9.
The spline approach to the numerical solution of parabolic partial differential equations / Kadhum, Nashat Ibrahim. January 1988
This thesis is concerned with the numerical solution of partial differential equations. Initially some definitions and mathematical background are given, accompanied by the basic theory of solving linear systems and other related topics. An introduction to splines, particularly cubic splines and their identities, is also presented. The methods used to solve parabolic partial differential equations are surveyed and classified into explicit or implicit (direct and iterative) methods. We concentrate on the Alternating Direction Implicit (ADI), the Group Explicit (GE) and the Crank-Nicolson (C-N) methods. A new method, the Splines Group Explicit Iterative Method, is derived and a theoretical analysis is given. An optimum single parameter is found for a special case. Two criteria for the acceleration parameters are considered: the Peaceman-Rachford and the Wachspress criteria. The method is tested for different numbers of both parameters, and also with a single parameter, i.e. when used as a direct method. The numerical results and the computational complexity analysis are compared with other methods and shown to be competitive. The method is shown to have good stability properties and achieves high accuracy in the numerical results. Another direct explicit method is developed from cubic splines: the Splines Group Explicit Method, which includes a parameter that can be chosen to give optimum results. Some analysis and the computational complexity of the method are given, with numerical results confirming its efficiency and competitiveness. Extensions to two-dimensional parabolic problems are given in a further chapter. In this thesis the Dirichlet, the Neumann and the periodic boundary conditions for linear parabolic partial differential equations are considered. The thesis ends with conclusions and suggestions for further work.
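For reference, the Crank-Nicolson scheme surveyed above can be written down in a few lines. The sketch below (a minimal illustration, not code from the thesis; the spline-based Group Explicit methods are not reproduced here) advances the one-dimensional heat equation u_t = alpha * u_xx on a uniform grid with constant Dirichlet boundary values.

```python
import numpy as np

def crank_nicolson_heat(u0, alpha, dx, dt, steps, left=0.0, right=0.0):
    """Crank-Nicolson time-stepping for u_t = alpha * u_xx with Dirichlet boundaries.

    u0          : initial values at the interior grid points
    alpha       : diffusion coefficient
    left, right : fixed boundary values (Dirichlet conditions)
    """
    n = len(u0)
    r = alpha * dt / (2.0 * dx**2)

    # Tridiagonal matrices of the scheme: A u^{k+1} = B u^k + boundary terms
    A = (np.diag((1 + 2*r) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    B = (np.diag((1 - 2*r) * np.ones(n))
         + np.diag(r * np.ones(n - 1), 1)
         + np.diag(r * np.ones(n - 1), -1))

    u = np.array(u0, dtype=float)
    for _ in range(steps):
        rhs = B @ u
        rhs[0]  += 2 * r * left    # boundary contributions from both time levels
        rhs[-1] += 2 * r * right
        u = np.linalg.solve(A, rhs)
    return u

# Example: a rod with ends held at 0 and initial temperature sin(pi * x)
x = np.linspace(0.0, 1.0, 52)[1:-1]          # 50 interior points, dx = 1/51
u_final = crank_nicolson_heat(np.sin(np.pi * x), alpha=1.0,
                              dx=x[1] - x[0], dt=1e-3, steps=100)
```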
10.
Leveraging Structure for Effective Question Answering / Bonadiman, Daniele. 25 September 2020
In this thesis, we focus on Answer Sentence Selection (A2S), the core task of retrieval-based question answering. A2S consists of selecting the sentences that answer user queries from a collection of documents retrieved by a search engine. Over more than two decades, several machine learning solutions have been proposed for this task, ranging from simple approaches based on manual feature engineering to more complex Structural Tree Kernel models and, more recently, Neural Network architectures.
In particular, the latter require little human effort, as they can automatically extract relevant features from plain text. The development of neural architectures brought improvements in many areas of A2S, reaching unprecedented results and substantially increasing accuracy on almost all benchmark datasets for A2S. However, this has come at the cost of a huge increase in the number of parameters and in the computational cost of the models. The large number of parameters leads to two drawbacks: the model requires a massive amount of data to train effectively, and huge computational power to sustain an acceptable number of transactions per second in a production environment. Current state-of-the-art techniques for A2S use huge Transformer architectures with up to 340 million parameters, pre-trained on massive amounts of data, e.g., BERT. The latter and related models in the same family, such as RoBERTa, are general architectures, i.e., they can be applied to many NLP tasks without any architectural change.
In contrast to the trend above, we focus on specialized architectures for A2S that can effectively encode the local structure of the question and of the answer candidate, as well as global information, i.e., the structure of the task and the context in which the answer candidate appears.
In particular, we propose solutions to effectively encode both the local and the global structure of A2S in efficient neural network models. (i) We encode syntactic information in a fast CNN architecture, exploiting the capabilities of Structural Tree Kernels to capture syntactic structure. (ii) We propose an efficient model that can use semantic relational information between question and answer candidates by pretraining word representations on a relational knowledge base. (iii) This efficient approach is further extended to encode each answer candidate's contextual information, encoding all answer candidates in their original context. Lastly, (iv) we propose a solution for encoding task-specific structure where it is available, for example, in the community Question Answering task.
The final model, which encodes different aspects of the task, achieves state-of-the-art performance on A2S compared with other efficient architectures. The proposed model is more efficient than attention-based architectures and outperforms BERT by two orders of magnitude in terms of transactions per second during training and testing, i.e., it processes 700 questions per second compared to 6 questions per second for BERT when training on a single GPU.
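To make the answer-sentence-selection setting concrete, the following PyTorch sketch (illustrative only; it is not one of the architectures proposed in the thesis, and all names are hypothetical) encodes a question and its candidate sentences with a small shared CNN and ranks the candidates by embedding similarity, in the spirit of the efficient, specialized models discussed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNSentenceEncoder(nn.Module):
    """Lightweight convolutional sentence encoder shared by questions and candidates."""
    def __init__(self, vocab_size: int, emb_dim: int = 100, hidden: int = 128, kernel: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=kernel, padding=kernel // 2)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> sentence embedding: (batch, hidden)
        x = self.embed(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        x = F.relu(self.conv(x))                    # (batch, hidden, seq_len)
        return x.max(dim=2).values                  # max-pooling over time

class AnswerSentenceSelector(nn.Module):
    """Scores each candidate sentence against the question by cosine similarity."""
    def __init__(self, vocab_size: int):
        super().__init__()
        self.encoder = CNNSentenceEncoder(vocab_size)

    def forward(self, question: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        # question: (1, q_len), candidates: (num_candidates, c_len)
        q = self.encoder(question)                  # (1, hidden)
        c = self.encoder(candidates)                # (num_candidates, hidden)
        return F.cosine_similarity(q.expand_as(c), c, dim=1)  # one score per candidate

# Toy usage with made-up token ids (a real system would tokenize retrieved text first)
model = AnswerSentenceSelector(vocab_size=10_000)
question = torch.randint(1, 10_000, (1, 12))
candidates = torch.randint(1, 10_000, (5, 30))
scores = model(question, candidates)
best = scores.argmax().item()                       # index of the top-ranked sentence
```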