21

A functional approach to positive solutions of boundary value problems

Ehrke, John E., Henderson, Johnny. January 2007 (has links)
Thesis (Ph.D.)--Baylor University, 2007. / In abstract "n, ri1, and sj-1" are superscript. In abstract "1, k, n-k, k-1, and nk-1" are subscript. Includes bibliographical references (p. 82-84).
22

Some fundamentals for Nielsen theory on torus configuration spaces

La Fleur, Stephen J. January 2008 (has links)
Thesis (M.S.)--University of Nevada, Reno, 2008. / "May, 2008." Includes bibliographical references (leaf 58). Online version available on the World Wide Web.
23

The converse of the Lefschetz fixed point theorem for surfaces and higher dimensional manifolds

McCord, Daniel Lee, January 1970 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1970. / Typescript. Vita. Includes bibliographical references.
24

Fixed point theorems for single and multi-valued mappings

Veitch, Mary Veronica. January 1973 (has links)
Thesis (M.Sc.)--Memorial University of Newfoundland, 1973. / Typescript. Bibliography: leaves 69-76. Also available online.
25

Fixed point theory of finite polyhedra

Singh, Gauri Shanker, January 1982 (has links)
Thesis (M.Sc.)--Memorial University of Newfoundland. / Bibliography: leaves 62-63. Also available online.
26

Some general convergence theorems on fixed points

Panicker, Rekha Manoj January 2014 (has links)
In this thesis, we first obtain coincidence and common fixed point theorems for a pair of generalized non-expansive type mappings in a normed space. We then discuss two types of convergence theorems, namely the convergence of Mann iteration procedures and the convergence and stability of fixed points. In addition, we discuss the viscosity approximations generated by (ψ,ϕ)-weakly contractive mappings and a sequence of non-expansive mappings, and then establish Browder and Halpern type convergence theorems on Banach spaces. With regard to iteration procedures, we obtain a result on the convergence of Mann iteration for generalized non-expansive type mappings in a Banach space that satisfies Opial's condition. In the case of stability of fixed points, we obtain a number of stability results for sequences of (ψ,ϕ)-weakly contractive mappings and the sequences of their corresponding fixed points in metric and 2-metric spaces. We also present a generalization of Fraser and Nadler type stability theorems in 2-metric spaces involving a sequence of metrics.
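To make the Mann iteration concrete, here is a minimal Python sketch; the averaged update x_{n+1} = (1 − α_n)x_n + α_n·T(x_n) is the classical scheme, while the map T and the step sizes α_n below are illustrative choices of ours, not examples from the thesis.

```python
import math

def mann_iteration(T, x0, steps=500):
    """Mann iteration: x_{n+1} = (1 - a_n) * x_n + a_n * T(x_n).

    The step sizes a_n = 1/(n + 2) are a common illustrative choice
    satisfying the divergence condition sum(a_n) = infinity that
    convergence results for non-expansive mappings typically require.
    """
    x = x0
    for n in range(steps):
        a = 1.0 / (n + 2)
        x = (1 - a) * x + a * T(x)
    return x

# T(x) = cos(x) is non-expansive on the reals (|T'(x)| = |sin(x)| <= 1);
# its unique real fixed point is the Dottie number, about 0.739085.
print(mann_iteration(math.cos, x0=1.0))
```

The averaging is what distinguishes Mann iteration from plain Picard iteration: for merely non-expansive mappings, where the Picard sequence may fail to converge, the averaged scheme can still converge to a fixed point.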
27

Generalizations of some fixed point theorems in Banach and metric spaces

Niyitegeka, Jean Marie Vianney January 2015 (has links)
A fixed point of a mapping is an element in the domain of the mapping that is mapped to itself by the mapping. The study of fixed points has been a field of interest to mathematicians since the discovery of the Banach contraction theorem: if (X, d) is a complete metric space and T : X → X is a contraction mapping (i.e. there exists k ∈ [0, 1) such that d(Tx, Ty) ≤ k·d(x, y) for all x, y ∈ X), then T has a unique fixed point. The Banach contraction theorem has found many applications in pure and applied mathematics. Because fixed point theory blends analysis, geometry, algebra and topology, its applications to other fields such as physics, economics, game theory, chemistry and engineering have become vital. The theory is nowadays a very active field of research in which many new theorems are published, some of them applied and many others generalized. Motivated by all of this, we give an exposition of some generalizations of fixed point theorems in metric fixed point theory, the branch of fixed point theory concerning fixed points of mappings between metric spaces, where certain properties of the mappings involved need not be preserved under equivalent metrics. For instance, the contractive property of a mapping between metric spaces need not be preserved under equivalent metrics. Since metric fixed point theory is wide, we limit ourselves to fixed point theorems for self- and non-self-mappings on Banach and metric spaces. We also take a look at some open problems on this topic of study, and at the end of the dissertation we suggest our own problems for future research.
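For concreteness, here is a tiny Python illustration of the Banach contraction theorem; the example map is our own choice, not one from the dissertation. T(x) = cos(x)/2 is a contraction on the reals with constant k = 1/2, so Picard iteration x_{n+1} = T(x_n) converges to the unique fixed point from any starting point.

```python
import math

def picard_iteration(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# |T'(x)| = |sin(x)| / 2 <= 1/2, so T is a contraction with k = 1/2
# and the theorem guarantees exactly one fixed point.
T = lambda x: math.cos(x) / 2
print(picard_iteration(T, x0=0.0))  # about 0.450184
```

The theorem also gives the a priori error bound d(x_n, x*) ≤ kⁿ/(1 − k) · d(x_1, x_0), so with k = 1/2 the number of correct digits grows linearly in n.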
28

Newton's Method as a Mean Value Method

Tran, Vanthu Thy 08 August 2007 (has links)
No description available.
29

Exploring Accumulated Gradient-Based Quantization and Compression for Deep Neural Networks

Gaopande, Meghana Laxmidhar 29 May 2020 (has links)
The growing complexity of neural networks makes their deployment on resource-constrained embedded or mobile devices challenging. With millions of weights and biases, modern deep neural networks can be computationally intensive, with large memory, power and computational requirements. In this thesis, we devise and explore three quantization methods (post-training, in-training and combined quantization) that quantize 32-bit floating-point weights and biases to lower bit width fixed-point parameters while also achieving significant pruning, leading to model compression. We use the total accumulated absolute gradient over the training process as the indicator of importance of a parameter to the network. The most important parameters are quantized by the smallest amount. The post-training quantization method sorts and clusters the accumulated gradients of the full parameter set and subsequently assigns a bit width to each cluster. The in-training quantization method sorts and divides the accumulated gradients into two groups after each training epoch. The larger group consisting of the lowest accumulated gradients is quantized. The combined quantization method performs in-training quantization followed by post-training quantization. We assume storage of the quantized parameters using compressed sparse row format for sparse matrix storage. On LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), post-training quantization achieves 7.62x, 10.87x, 6.39x and 12.43x compression, in-training quantization achieves 22.08x, 21.05x, 7.95x and 12.71x compression and combined quantization achieves 57.22x, 50.19x, 13.15x and 13.53x compression, respectively. Our methods quantize at the cost of accuracy, and we present our work in the light of the accuracy-compression trade-off. / Master of Science / Neural networks are being employed in many different real-world applications. By learning the complex relationship between the input data and ground-truth output data during the training process, neural networks can predict outputs on new input data obtained in real time. To do so, a typical deep neural network often needs millions of numerical parameters, stored in memory. In this research, we explore techniques for reducing the storage requirements for neural network parameters. We propose software methods that convert 32-bit neural network parameters to values that can be stored using fewer bits. Our methods also convert a majority of numerical parameters to zero. Using special storage methods that only require storage of non-zero parameters, we gain significant compression benefits. On typical benchmarks like LeNet-300-100 (MNIST dataset), LeNet-5 (MNIST dataset), AlexNet (CIFAR-10 dataset) and VGG-16 (CIFAR-10 dataset), our methods can achieve up to 57.22x, 50.19x, 13.15x and 13.53x compression respectively. Storage benefits are achieved at the cost of classification accuracy, and we present our work in the light of the accuracy-compression trade-off.
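To illustrate the post-training variant described above, here is a schematic Python sketch; the tier fractions, bit widths and function names are our illustrative assumptions, not the thesis implementation. Weights with the largest accumulated absolute gradients keep the most fractional bits, less important weights are quantized more coarsely, the remainder are pruned to zero, and the sparse result is stored in compressed sparse row (CSR) form.

```python
import numpy as np
from scipy.sparse import csr_matrix

def post_training_quantize(weights, acc_grads, frac_bits=(8, 4), keep=(0.1, 0.3)):
    """Schematic accumulated-gradient-based post-training quantization.

    The top keep[0] fraction of weights (ranked by accumulated
    |gradient|) is rounded to frac_bits[0] fractional bits, the next
    keep[1] fraction to frac_bits[1], and all remaining weights are
    pruned to zero. Fixed quantile tiers stand in for the thesis's
    sort-and-cluster step.
    """
    order = np.argsort(acc_grads.ravel())[::-1]  # most important first
    w = weights.ravel()
    out = np.zeros_like(w)
    start = 0
    for fraction, bits in zip(keep, frac_bits):
        idx = order[start:start + int(fraction * w.size)]
        scale = 2.0 ** bits
        out[idx] = np.round(w[idx] * scale) / scale  # fixed-point rounding
        start += int(fraction * w.size)
    return csr_matrix(out.reshape(weights.shape))  # store only the survivors

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(100, 100)).astype(np.float32)
G = np.abs(rng.normal(size=(100, 100)))  # stand-in accumulated gradients
Wq = post_training_quantize(W, G)
print(Wq.nnz, "of", W.size, "weights kept")
```

CSR storage pays off exactly because the pruning step drives most parameters to zero: only the surviving values and their indices need to be kept.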
30

Analysis of Fix-point Aspects for Wireless Infrastructure Systems

Grill, Andreas, Englund, Robin January 2009 (has links)
A large amount of today's telecommunication consists of mobile and short-distance wireless applications, where the effect of the channel is unknown and changes over time, and thus needs to be described statistically. The received signal therefore cannot be accurately predicted and has to be estimated. Since telecom systems operate in real time, the receiver hardware that estimates the transmitted signal can, for example, be based on a DSP where the statistical calculations are performed. A fixed-point DSP with a limited number of bits and a fixed binary point causes larger quantization errors than floating-point operations, which have higher accuracy. The focus of this thesis has been to build a library of functions for handling fixed-point data. A class that can handle the most common arithmetic operations and a least-squares solver for fixed-point data have been implemented in MATLAB code. The MATLAB Fixed-Point Toolbox could have been used to solve this task, but in order to have full control of the algorithms and the fixed-point handling, an independent library was created. The conclusion of the simulations made in this thesis is that the least-squares result depends more on the number of integer bits than on the number of fractional bits. / fixed-point, telecommunications, DSP, MATLAB, Fixed-Point Toolbox, least-squares solution, floating point, Householder QR factorization, saturation, quantization noise
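As a rough Python analogue of the experiment (the thesis library itself is written in MATLAB, and the quantizer and problem dimensions below are our own assumptions): quantize the data of a small least-squares problem to signed fixed point with saturation, then compare the solution against a floating-point reference as the integer and fractional bit budgets vary.

```python
import numpy as np

def to_fixed(x, int_bits, frac_bits):
    """Quantize to signed fixed point with saturation.

    Values are rounded to frac_bits fractional bits; anything outside
    the range representable with int_bits integer bits saturates
    (clips) instead of wrapping around.
    """
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** int_bits)
    hi = 2.0 ** int_bits - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

rng = np.random.default_rng(1)
A = rng.normal(scale=2.0, size=(50, 3))
x_true = np.array([1.5, -2.0, 0.75])
b = A @ x_true + 0.01 * rng.normal(size=50)

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)  # floating-point reference
for int_bits, frac_bits in [(2, 12), (6, 12), (6, 4)]:
    Aq, bq = to_fixed(A, int_bits, frac_bits), to_fixed(b, int_bits, frac_bits)
    xq, *_ = np.linalg.lstsq(Aq, bq, rcond=None)
    print(f"int={int_bits} frac={frac_bits} error={np.linalg.norm(xq - x_ref):.4f}")
```

Starving the integer bits saturates the data and degrades the solution sharply, while trimming fractional bits only adds mild rounding noise, which matches the thesis's conclusion.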
