21.
DSP-Based Non-Language-Specific Keyword Retrieval and Recognition System. Lin, Bing-Hau. 11 July 2005.
In this thesis, PC-based and DSP-based speech keyword retrieval and recognition systems are developed. The keywords and their describing sentences are not limited in word length and may be in any language. Moreover, no additional training of speech models is required, so the keyword database can be expanded without retraining.
The system is built on the PC, and its computation is carried out on a fixed-point DSP board. Speech signal processing requires a large number of mathematical operations, and these must run in real time for the system to be practical. In addition, a fixed-point DSP costs far less than a floating-point one, which brings the system closer to end users.
Experimental results show that the speech keyword retrieval and recognition system achieves good recognition accuracy and efficiency.
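As context for the fixed-point DSP platform described above, the sketch below illustrates Q15 arithmetic, a common 16-bit fixed-point format on such processors, using only integer operations. The scale constant and helper names are illustrative assumptions, not code from the thesis.

```python
# Illustrative sketch of Q15 fixed-point arithmetic (not taken from the thesis).
# Q15 stores a real value x in [-1, 1) as the 16-bit integer round(x * 2**15).

Q15_ONE = 1 << 15  # scale factor: 2**15 = 32768

def to_q15(x: float) -> int:
    """Convert a float in [-1, 1) to a saturated Q15 integer."""
    return max(-Q15_ONE, min(Q15_ONE - 1, int(round(x * Q15_ONE))))

def q15_mul(a: int, b: int) -> int:
    """Multiply two Q15 numbers using only integer operations,
    which is what makes fixed-point DSPs cheap and fast."""
    return (a * b) >> 15

def q15_to_float(a: int) -> float:
    return a / Q15_ONE

# Example: 0.5 * 0.25 should give roughly 0.125.
print(q15_to_float(q15_mul(to_q15(0.5), to_q15(0.25))))  # ~0.125
```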
22.
A functional approach to positive solutions of boundary value problems. Ehrke, John E.; Henderson, Johnny. January 2007.
Thesis (Ph.D.)--Baylor University, 2007. / In the abstract, "n, ri1, and sj-1" are superscript and "1, k, n-k, k-1, and nk-1" are subscript. Includes bibliographical references (p. 82-84).
23.
Some fundamentals for Nielsen theory on torus configuration spaces. La Fleur, Stephen J. January 2008.
Thesis (M.S.)--University of Nevada, Reno, 2008. / "May, 2008." Includes bibliographical references (leaf 58). Online version available on the World Wide Web.
24.
The converse of the Lefschetz fixed point theorem for surfaces and higher dimensional manifolds. McCord, Daniel Lee. January 1970.
Thesis (Ph. D.)--University of Wisconsin--Madison, 1970. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references.
25.
Fixed point theorems for single and multi-valued mappings. Veitch, Mary Veronica. January 1973.
Thesis (M.Sc.)--Memorial University of Newfoundland, 1973. / Typescript. Bibliography: leaves 69-76. Also available online.
26.
Fixed point theory of finite polyhedra. Singh, Gauri Shanker. January 1982.
Thesis (M.Sc.)--Memorial University of Newfoundland. / Bibliography: leaves 62-63. Also available online.
27.
Some general convergence theorems on fixed points. Panicker, Rekha Manoj. January 2014.
In this thesis, we first obtain coincidence and common fixed point theorems for a pair of generalized non-expansive type mappings in a normed space. We then discuss two types of convergence theorems, namely the convergence of Mann iteration procedures and the convergence and stability of fixed points. In addition, we discuss the viscosity approximations generated by (ψ, ϕ)-weakly contractive mappings and a sequence of non-expansive mappings, and establish Browder and Halpern type convergence theorems on Banach spaces. With regard to iteration procedures, we obtain a result on the convergence of Mann iteration for generalized non-expansive type mappings in a Banach space satisfying Opial's condition. In the case of stability of fixed points, we obtain a number of stability results for a sequence of (ψ, ϕ)-weakly contractive mappings and the sequence of their corresponding fixed points in metric and 2-metric spaces. We also present a generalization of Fraser and Nadler type stability theorems in 2-metric spaces involving a sequence of metrics.
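The Mann iteration mentioned in this abstract replaces the plain Picard step with a convex combination of the current iterate and its image under the mapping. As a rough numerical illustration only (the mapping, the constant step size and the parameter names below are assumptions, not taken from the thesis), a minimal sketch:

```python
# Minimal numerical sketch of a Mann iteration for a non-expansive mapping T:
#     x_{n+1} = (1 - a_n) * x_n + a_n * T(x_n)
# The mapping T and the constant step size below are illustrative choices only.
import math

def mann_iteration(T, x0, alpha=0.5, steps=200):
    """Run the Mann scheme with the constant step sequence a_n = alpha."""
    x = x0
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# cos is non-expansive on the reals (|cos'(x)| = |sin(x)| <= 1); its unique
# fixed point is the Dottie number, roughly 0.739085.
print(mann_iteration(math.cos, 0.0))  # ~0.739085
```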
28.
Generalizations of some fixed point theorems in Banach and metric spaces. Niyitegeka, Jean Marie Vianney. January 2015.
A fixed point of a mapping is an element in the domain of the mapping that is mapped to itself by the mapping. The study of fixed points has been a field of interest to mathematicians since the discovery of the Banach contraction theorem: if (X, d) is a complete metric space and T: X → X is a contraction mapping (i.e. there exists k ∈ [0, 1) such that d(Tx, Ty) ≤ k d(x, y) for all x, y ∈ X), then T has a unique fixed point. The Banach contraction theorem has found many applications in pure and applied mathematics. Because fixed point theory is a mixture of analysis, geometry, algebra and topology, its applications to other fields such as physics, economics, game theory, chemistry and engineering have become vital. The theory is nowadays a very active field of research in which many new theorems are published, some of them applied and many others generalized. Motivated by all of this, we give an exposition of some generalizations of fixed point theorems in metric fixed point theory, the branch of fixed point theory concerned with fixed points of mappings between metric spaces, where certain properties of the mappings involved need not be preserved under equivalent metrics. For instance, the contractive property of a mapping between metric spaces need not be preserved under equivalent metrics. Since metric fixed point theory is wide, we limit ourselves to fixed point theorems for self- and non-self-mappings on Banach and metric spaces. We also take a look at some open problems on this topic. At the end of the dissertation, we suggest our own problems for future research.
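The Banach contraction theorem recalled above is constructive: the Picard iterates x_{n+1} = T(x_n) converge to the unique fixed point from any starting point, at a geometric rate governed by the contraction constant k. Below is a minimal sketch under an assumed example contraction; the mapping, tolerance and names are illustrative only and not part of the dissertation.

```python
# Sketch of Picard iteration under the Banach contraction theorem: for a
# contraction T on a complete metric space, x_{n+1} = T(x_n) converges to the
# unique fixed point from any starting point. The example mapping and the
# tolerance below are illustrative assumptions.

def picard_iteration(T, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:  # successive iterates close => near the fixed point
            return x_next
        x = x_next
    return x

# Example: T(x) = 0.5*x + 1 is a contraction on the reals with constant k = 0.5;
# its unique fixed point is x = 2.
print(picard_iteration(lambda x: 0.5 * x + 1, x0=10.0))  # ~2.0
```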
29.
Newton's Method as a Mean Value Method. Tran, Vanthu Thy. 08 August 2007.
No description available.
30.
Exploring Accumulated Gradient-Based Quantization and Compression for Deep Neural Networks. Gaopande, Meghana Laxmidhar. 29 May 2020.
The growing complexity of neural networks makes their deployment on resource-constrained embedded or mobile devices challenging. With millions of weights and biases, modern deep neural networks can be computationally intensive, with large memory, power and computational requirements. In this thesis, we devise and explore three quantization methods (post-training, in-training and combined quantization) that quantize 32-bit floating-point weights and biases to lower bit width fixed-point parameters while also achieving significant pruning, leading to model compression. We use the total accumulated absolute gradient over the training process as the indicator of importance of a parameter to the network. The most important parameters are quantized by the smallest amount. The post-training quantization method sorts and clusters the accumulated gradients of the full parameter set and subsequently assigns a bit width to each cluster. The in-training quantization method sorts and divides the accumulated gradients into two groups after each training epoch. The larger group consisting of the lowest accumulated gradients is quantized. The combined quantization method performs in-training quantization followed by post-training quantization. We assume storage of the quantized parameters using compressed sparse row format for sparse matrix storage. On LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), post-training quantization achieves 7.62x, 10.87x, 6.39x and 12.43x compression, in-training quantization achieves 22.08x, 21.05x, 7.95x and 12.71x compression and combined quantization achieves 57.22x, 50.19x, 13.15x and 13.53x compression, respectively. Our methods quantize at the cost of accuracy, and we present our work in the light of the accuracy-compression trade-off. / Master of Science / Neural networks are being employed in many different real-world applications. By learning the complex relationship between the input data and ground-truth output data during the training process, neural networks can predict outputs on new input data obtained in real time. To do so, a typical deep neural network often needs millions of numerical parameters, stored in memory. In this research, we explore techniques for reducing the storage requirements for neural network parameters. We propose software methods that convert 32-bit neural network parameters to values that can be stored using fewer bits. Our methods also convert a majority of numerical parameters to zero. Using special storage methods that only require storage of non-zero parameters, we gain significant compression benefits. On typical benchmarks like LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), our methods can achieve up to 57.22x, 50.19x, 13.15x and 13.53x compression respectively. Storage benefits are achieved at the cost of classification accuracy, and we present our work in the light of the accuracy-compression trade-off.
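As a rough illustration of the accumulated-gradient idea summarized in this abstract (parameters with the smallest total accumulated absolute gradient are treated as least important and quantized), the sketch below quantizes the lower half of the parameters to a low-bit grid after an epoch. The helper names, the 50/50 split and the scaling scheme are illustrative assumptions, not the thesis's actual implementation.

```python
# Rough sketch of gradient-accumulation-guided quantization: parameters whose
# total accumulated absolute gradient is smallest are treated as least
# important and quantized. Names, split and scaling are illustrative only.
import numpy as np

def accumulate_gradients(grad_history):
    """Sum of absolute gradients over training for each parameter."""
    return np.sum(np.abs(np.stack(grad_history)), axis=0)

def quantize_low_gradient_half(weights, accumulated_grads, bits=8):
    """Quantize the half of the parameters with the lowest accumulated
    gradient to a low-bit fixed-point grid; leave the rest untouched."""
    flat_w = weights.ravel().copy()
    order = np.argsort(accumulated_grads.ravel())        # lowest gradients first
    low = order[: flat_w.size // 2]                      # least important half
    scale = (2 ** (bits - 1) - 1) / (np.max(np.abs(flat_w[low])) + 1e-12)
    flat_w[low] = np.round(flat_w[low] * scale) / scale  # snap to fixed-point grid
    return flat_w.reshape(weights.shape)

# Toy usage with random data standing in for real weights and gradients.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
grads = [rng.normal(size=(4, 4)) for _ in range(10)]
w_q = quantize_low_gradient_half(w, accumulate_gradients(grads))
```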