371.
Univalence and Nehari's criterion / Schauer, Rita T. / January 1975
Thesis (M.A.)--Kutztown State College, 1975. / Source: Masters Abstracts International, Volume: 45-06, page: 3173. Typescript. Abstract precedes thesis as [2] preliminary leaves. Includes bibliographical references (leaves 56-58).
372.
Scalable kernel methods for machine learning / Kulis, Brian Joseph / 09 October 2012
Machine learning techniques are now essential for a diverse set of applications in computer vision, natural language processing, software analysis, and many other domains. As more applications emerge and the amount of data continues to grow, there is a need for increasingly powerful and scalable techniques. Kernel methods, which generalize linear learning methods to non-linear ones, have become a cornerstone of much recent work in machine learning and have been used successfully for core tasks such as clustering, classification, and regression. Despite the recent popularity of kernel methods, a number of issues must be tackled for them to succeed on large-scale data. First, kernel methods typically require memory that grows quadratically in the number of data objects, making it difficult to scale to large data sets. Second, kernel methods depend on an appropriate kernel function (an implicit mapping to a high-dimensional space), and it is not clear how to choose one, since the right choice depends on the data. Third, in the context of data clustering, kernel methods have not been demonstrated to be practical for real-world clustering problems. This thesis explores these questions, offers novel solutions to them, and applies the results to a number of challenging applications in computer vision and other domains. We explore two broad fundamental problems in kernel methods. First, we introduce a scalable framework for learning kernel functions based on incorporating prior knowledge from the data. This framework scales to very large data sets of millions of objects, can be used for a variety of complex data, and outperforms several existing techniques. In the transductive setting, the method can be used to learn low-rank kernels, whose memory requirements are linear in the number of data points.
We also explore extensions of this framework and applications to image search problems, such as object recognition, human body pose estimation, and 3-D reconstruction. As a second problem, we explore the use of kernel methods for clustering. We show a mathematical equivalence between several graph cut objective functions and the weighted kernel k-means objective. This equivalence leads to the first eigenvector-free algorithm for weighted graph cuts, which is thousands of times faster than existing state-of-the-art techniques while using significantly less memory. We benchmark this algorithm against existing methods, apply it to image segmentation, and explore extensions to semi-supervised clustering.
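To make the kernel k-means objective behind the graph-cut equivalence concrete, here is a minimal, generic sketch of (unweighted) kernel k-means on a precomputed kernel matrix. This is the textbook formulation, not the thesis's eigenvector-free multilevel algorithm, and the function name and interface are illustrative only:

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=50, seed=0):
    """Kernel k-means on a precomputed n x n kernel (Gram) matrix K."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    labels = rng.integers(0, k, size=n)
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            mask = labels == c
            m = mask.sum()
            if m == 0:                     # empty cluster: never chosen
                dist[:, c] = np.inf
                continue
            # squared distance in feature space:
            # ||phi(x) - mu_c||^2 = K_xx - (2/m) sum_{j in c} K_xj
            #                       + (1/m^2) sum_{i,j in c} K_ij
            dist[:, c] = (diag
                          - 2.0 * K[:, mask].sum(axis=1) / m
                          + K[np.ix_(mask, mask)].sum() / m ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```

Note that only kernel evaluations are used: the feature-space means are never formed explicitly, which is exactly the property that lets the graph-cut equivalence avoid eigenvector computation.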
373.
Teaching functions through modeling / Barlow, Brittany Kristine / 08 April 2013
This report discusses topics relating to modeling functions. It examines the pedagogical content knowledge of student teachers and expert teachers and its effect on their ability to teach through modeling, and presents an observed modeling lesson. The report concludes with a discussion of the pitfalls of using calculators in modeling and exploration lessons.
374.
Some mean value theorems for certain error terms in analytic number theory / Kong, Kar-lun (江嘉倫) / January 2014
Published or final version / Mathematics / Master of Philosophy
375.
Green's function methods in 1D nanoscale electron waveguides / Corse, William Zachary / 03 February 2015
R-matrix theory has been used to analyze a variety of scattering potentials in ballistic electron waveguides; the S-matrix is the principal result of this method. Here we analyze ballistic electron scattering in a 1D waveguide with a step potential at its terminus using Green's function theory. We calculate the S-matrix for this system, the scattering particle's quasibound states, and the survival probability of a particle initially localized in the step region. We then apply R-matrix theory to the same problem. In doing so, we demonstrate the versatility of the Green's function approach, but also its relative complexity.
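For reference, the scattering coefficients for the simplest case mentioned above, a plane wave incident on a 1D potential step, can be written down in closed form. This is the standard textbook result in natural units (hbar = m = 1), not the Green's-function or R-matrix machinery the thesis develops; the function name is illustrative:

```python
import numpy as np

def step_scattering(E, V0):
    """Reflection (R) and transmission (T) coefficients for a particle of
    energy E hitting a 1D potential step of height V0 (hbar = m = 1)."""
    k1 = np.sqrt(2.0 * E)                  # wavenumber before the step
    if E > V0:
        k2 = np.sqrt(2.0 * (E - V0))       # wavenumber above the step
        r = (k1 - k2) / (k1 + k2)          # reflection amplitude
        R = r ** 2
        T = 4.0 * k1 * k2 / (k1 + k2) ** 2  # includes the flux factor k2/k1
    else:
        R, T = 1.0, 0.0                    # total reflection below the step
    return R, T
```

The pair (r, t) here is the content of the S-matrix for this geometry; R + T = 1 expresses flux conservation (unitarity of S).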
376.
A Nash-Moser implicit function theorem with Whitney regularity and applications / Vano, John Andrew / 28 August 2008
Abstract not available.
377.
Mahler measure evaluations in terms of polylogarithms / Condon, John Donald / 28 August 2008
Abstract not available.
378.
New constructions of cryptographic pseudorandom functions / Banerjee, Abhishek / 21 September 2015
Pseudorandom functions (PRFs) are the building blocks of symmetric-key cryptography. Almost all central goals of symmetric cryptography (e.g., encryption, authentication, identification) have simple solutions that make efficient use of a PRF. Most existing constructions of these objects are either (a) extremely fast in practice but without provable security guarantees based on hard mathematical problems (e.g., AES, Blowfish), or (b) provably secure under assumptions like the hardness of factoring, but extremely inefficient in practice.
Lattice-based constructions enjoy strong security guarantees based on natural mathematical problems, are asymptotically and practically efficient, and have thus far withstood even attacks by quantum algorithms. However, most recent lattice-based constructions are of public-key objects, and it is natural to ask whether these advantages can be brought to the world of symmetric-key constructions.
In this thesis, we construct asymptotically fast and parallel pseudorandom functions, basing their security on a well-known hard lattice problem called learning with errors. We provide several types of constructions, each with its own efficiency and security advantages. In addition, we provide improved constructions of key-homomorphic PRFs that achieve almost optimal, quasi-linear sizes of public parameters and keys, and quasi-linear incremental run times. We also propose a new cryptographic primitive, constrained key-homomorphic PRFs, and provide secure candidate constructions and applications. Lastly, we detail a software implementation of a candidate PRF and analyze its efficiency and security.
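A toy sketch can convey the rounding idea behind such lattice-based PRFs: deterministic rounding from Z_q down to a smaller modulus Z_p plays the role that random errors play in learning with errors. The following is illustrative only; the parameters are far too small for any security, the function name is invented, and this is not one of the thesis's actual constructions:

```python
import numpy as np

def toy_lwr_prf(seed, n=16, q=2 ** 16, p=2 ** 8):
    """Toy matrix-product PRF in the spirit of rounding-based lattice PRFs.

    WARNING: illustrative only -- tiny parameters, no security claim.
    """
    rng = np.random.default_rng(seed)
    # public matrices A_0, A_1 over Z_q and a secret vector s
    A = rng.integers(0, q, size=(2, n, n), dtype=np.int64)
    s = rng.integers(0, q, size=n, dtype=np.int64)

    def F(x_bits):
        v = s
        for b in x_bits:                  # chain one matrix per input bit
            v = (A[b] @ v) % q
        # deterministic "rounding" from Z_q to Z_p stands in for LWE errors
        return tuple(int(t) for t in v * p // q)

    return F
```

Because the output depends only on the key and the input bits (no fresh randomness), the function is deterministic, which is exactly what a PRF requires; the hardness of distinguishing rounded products from uniform is what real constructions reduce to lattice problems.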
379.
A comparison of conventional acceleration schemes to the method of residual expansion functions / Rustaey, Abid (1961-) / January 1989
The algebraic equations resulting from a finite difference approximation may be solved numerically. A new scheme that appears quite promising is the method of residual expansion functions. In addition to speedy convergence, its convergence is also independent of the number of algebraic equations under consideration, enabling us to analyze larger systems with higher accuracy. A factor that plays an important role in the convergence of some numerical schemes is diagonal dominance: the systems that converge at high rates are indeed the ones whose matrices possess a high degree of diagonal dominance. Another attractive feature of the method of residual expansion functions is that it converges accurately even with a minimal degree of diagonal dominance. Methods such as simultaneous and successive displacements, Chebyshev acceleration, and projection are also discussed, but unlike the method of residual expansion functions, their convergence rates depend strongly on the degree of diagonal dominance.
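The "simultaneous displacements" (Jacobi) method mentioned above, and its reliance on diagonal dominance, can be sketched as follows; this is the classical iteration only, not the method of residual expansion functions:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi (simultaneous displacements) iteration for A x = b.
    Converges for strictly diagonally dominant A."""
    D = np.diag(A)                 # diagonal entries
    R = A - np.diag(D)             # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D    # update every component simultaneously
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi did not converge "
                       "(matrix may lack diagonal dominance)")
```

The iteration matrix is D^{-1}R, and its spectral radius (hence the convergence rate) shrinks as the diagonal grows relative to the off-diagonal entries, which is the dependence on diagonal dominance the abstract describes.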
380.
The Kodaira vanishing theorem and generalizations / Poon, Wai-hoi, Bobby (潘維凱) / January 2002
Published or final version / Mathematics / Master of Philosophy