  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Deletion-Induced Triangulations

Taylor, Clifford T 01 January 2015 (has links)
Let d > 0 be a fixed integer and let A ⊆ ℝd be a collection of n ≥ d + 2 points which we lift into ℝd+1. Further let k be an integer satisfying 0 ≤ k ≤ n-(d+2) and assign to each k-subset of the points of A a (regular) triangulation obtained by deleting the specified k-subset and projecting down the lower hull of the convex hull of the resulting lifting. Next, for each triangulation we form the characteristic vector defined by Gelfand, Kapranov, and Zelevinsky by assigning to each vertex the sum of the volumes of all adjacent simplices. We then form a vector for the lifting, which we call the k-compound GKZ-vector, by summing all the characteristic vectors. Lastly, we construct a polytope Σk(A) ⊆ ℝ|A| by taking the convex hull of all k-compound GKZ-vectors obtainable from various liftings of A, and note that Σ0(A) is the well-studied secondary polytope corresponding to A. We will see that by varying k, we obtain a family of polytopes with interesting properties relating to Minkowski sums, Gale transforms, and Lawrence constructions, with the member of the family with maximal k corresponding to a zonotope studied by Billera, Filliman, and Sturmfels. We will also discuss the case k = d = 1, in which we can provide a combinatorial description of the vertices, allowing us to better understand the graph of the polytope and to obtain formulas for the numbers of vertices and edges present.
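The construction above is easiest to see for d = 1 and k = 0: a lifting of points on a line induces a subdivision into intervals via the lower hull, and each vertex's characteristic-vector entry is the total length of its adjacent intervals. The following is a minimal sketch (ours, not from the thesis; function names are our own) under the assumption of generic heights:

```python
def lower_hull(pts):
    # Andrew's monotone chain, lower hull only; pts sorted by x-coordinate
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # pop while the last turn is clockwise or collinear
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def gkz_vector(xs, heights):
    # characteristic (GKZ) vector of the regular triangulation of the
    # one-dimensional point set xs induced by lifting point i to heights[i]:
    # each vertex receives the total length of its adjacent intervals,
    # and points not on the lower hull receive 0
    pts = sorted(zip(xs, heights))
    hull = lower_hull(pts)
    gkz = {x: 0.0 for x in xs}
    for (x0, _), (x1, _) in zip(hull, hull[1:]):
        gkz[x0] += x1 - x0
        gkz[x1] += x1 - x0
    return [gkz[x] for x in xs]
```

A lifting that places the middle of three points below the chord keeps it in the triangulation; one that places it above removes it, and its entry drops to 0.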

Chebyshev Approximation of Discrete Polynomials and Splines

Park, Jae H. 31 December 1999 (has links)
The recent development of the impulse/summation approach for efficient B-spline computation in the discrete domain should increase the use of B-splines in many applications. We show here how the impulse/summation approach can also be used for constructing polynomials; combined with a search-table approach for the inverse square root operation, it yields an efficient shading algorithm for rendering an image in a computer graphics system. The approach reduces the number of multiplies and makes it possible for the entire rendering process to be implemented using an integer processor. In many applications, Chebyshev approximation with polynomials and splines is useful in representing a stream of data or a function. Because the impulse/summation approach is developed for discrete systems, some aspects of traditional continuous approximation are not applicable. For example, the lack of the continuity concept in the discrete domain affects the definition of the local extrema of a function. Thus, the method of finding the extrema must be changed: both forward differences and backward differences must be checked, instead of using the first derivative as in the continuous-domain approximation. Polynomial Chebyshev approximation in the discrete domain, just as in the continuous domain, forms a Chebyshev system. Therefore, the Chebyshev approximation process always produces a unique best approximation. Because of the non-linearity of free-knot polynomial spline systems, there may be more than one best solution, and the convexity of the solution space cannot be guaranteed. Thus, a Remez Exchange Algorithm may not produce an optimal approximation. However, we show that discrete polynomial splines approximate a function using a smaller number of parameters (for a similar minimax error) than discrete polynomials do.
Also, the discrete polynomial spline requires much less computation and hardware than the discrete polynomial for curve generation when we use the impulse/summation approach. This is demonstrated using two approximated FIR filter implementations. / Ph. D.
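A toy illustration of the two ideas above (our sketch of the general technique, not the thesis's exact algorithm): repeated running sums of a single impulse generate samples of a discrete polynomial using additions only, and extrema of a discrete sequence are located by a sign change between backward and forward differences rather than by a derivative.

```python
import numpy as np

# a degree-2 discrete polynomial from three summation stages of one impulse
x = np.zeros(8)
x[0] = 1                       # single impulse
y = x
for _ in range(3):
    y = np.cumsum(y)           # each stage raises the polynomial degree by one
# y[k] = (k + 1)(k + 2) / 2 -- generated with additions only, no multiplies

def discrete_extrema(f):
    # discrete analog of locating extrema: a strict sign change between the
    # backward and forward differences (flat plateaus need separate handling)
    return [i for i in range(1, len(f) - 1)
            if (f[i] - f[i - 1]) * (f[i + 1] - f[i]) < 0]
```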

Multicolor Ramsey and List Ramsey Numbers for Double Stars

Ruotolo, Jake 01 January 2022 (has links)
The core idea of Ramsey theory is that complete disorder is impossible. Given a large structure, no matter how complex it is, we can always find a smaller substructure that has some sort of order. For a graph H, the k-color Ramsey number r(H; k) of H is the smallest integer n such that every k-edge-coloring of Kn contains a monochromatic copy of H. Despite active research for decades, very little is known about Ramsey numbers of graphs. This is especially true for r(H; k) when k is at least 3, also known as the multicolor Ramsey number of H. Let Sn denote the star on n+1 vertices, the graph with one vertex of degree n (the center of Sn) and n vertices of degree 1. The double star S(n,m) is the graph consisting of the disjoint union of Sn and Sm together with an edge joining their centers. In this thesis, we study the multicolor Ramsey number of double stars. We obtain upper and lower bounds for r(S(n,m); k) when k is at least 3 and prove that r(S(n,m); k) = nk + m + 2 for k odd and n sufficiently large. We also investigate a new variant of the Ramsey number known as the list Ramsey number. Let L be an assignment of k-element subsets of the positive integers to the edges of Kn. A k-edge-coloring c of Kn is an L-coloring if c(e) belongs to L(e) for each edge e of Kn. The list Ramsey number rl(H; k) of H is the smallest integer n such that there is some L for which every L-coloring of Kn contains a monochromatic copy of H. In this thesis, we study rl(S(1,1); p) and rl(Sn; p), where p is an odd prime number.
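For the smallest double star, a brute-force sanity check is feasible (our sketch, not the thesis's method): S(1,1) is the path on four vertices, whose 2-color Ramsey number is 5 by the Gerencsér-Gyárfás path formula r(P_n, P_m) = n + ⌊m/2⌋ - 1, and exhaustive search over edge colorings recovers that value.

```python
from itertools import combinations, product

def has_mono_double_star(n, coloring):
    # S(1,1) is the path a-u-v-b on four distinct vertices; look for a
    # monochromatic copy in a 2-edge-colored K_n
    for c in (0, 1):
        for (u, v), col in coloring.items():
            if col != c:
                continue
            nu = [a for a in range(n) if a not in (u, v)
                  and coloring[tuple(sorted((u, a)))] == c]
            nv = [b for b in range(n) if b not in (u, v)
                  and coloring[tuple(sorted((v, b)))] == c]
            if any(a != b for a in nu for b in nv):
                return True
    return False

def ramsey_double_star():
    # smallest n such that every 2-edge-coloring of K_n contains a
    # monochromatic S(1,1)
    n = 3
    while True:
        edges = list(combinations(range(n), 2))
        if all(has_mono_double_star(n, dict(zip(edges, cols)))
               for cols in product((0, 1), repeat=len(edges))):
            return n
        n += 1
```

For n = 4, coloring a triangle in one color and the remaining star in the other avoids any monochromatic copy, so the search does not stop before 5.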

The discrete cosine transform

Flickner, Myron Dale January 2011 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
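The entry above carries no abstract; for reference, the transform it concerns is conventionally defined (in unnormalized DCT-II form, our sketch rather than material from the thesis) as X_k = Σ_{n=0}^{N-1} x_n cos(π(n + 1/2)k / N).

```python
import math

def dct2(x):
    # unnormalized DCT-II: X[k] = sum_n x[n] * cos(pi * (n + 0.5) * k / N)
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                for n in range(N))
            for k in range(N)]
```

A constant input compacts all its energy into the k = 0 coefficient, the property that makes the DCT central to block transform coding.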

Polyhedral Lyapunov functions and stabilization under polyhedral constraints

Marikar, Mohamed Tariq January 1997 (has links)
No description available.

Chip Firing Games and Riemann-Roch Properties for Directed Graphs

Gaslowitz, Joshua Z 01 May 2013 (has links)
The following presents a brief introduction to tropical geometry, especially tropical curves, and explains a connection to graph theory. We also give a brief summary of the Riemann-Roch property for graphs, established by Baker and Norine (2007), as well as the tools used in their proof. Various generalizations are described, including a more thorough description of the extension to strongly connected directed graphs by Asadi and Backman (2011). Building from their constructions, an algorithm to determine whether a directed graph has the Row Riemann-Roch Property is given and thoroughly explained.
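The basic move of the chip-firing game on a directed graph is simple to state (a generic sketch of the standard definition, not code from the thesis): a fired vertex loses one chip per outgoing edge and each out-neighbor gains one.

```python
def fire(chips, out_edges, v):
    # firing v: v loses outdeg(v) chips, each out-neighbor gains one;
    # chips maps vertex -> chip count, out_edges maps vertex -> out-neighbors
    new = dict(chips)
    new[v] -= len(out_edges[v])
    for w in out_edges[v]:
        new[w] += 1
    return new
```

Note that the total number of chips is conserved by every move, which is what makes divisor-style equivalence classes (the setting of the Riemann-Roch property) well defined.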

Postprocessing of images coded using block DCT at low bit rates.

January 2007 (has links)
Sun, Deqing. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (leaves 86-91). / Abstracts in English and Chinese.
Contents: 1. Introduction (image compression and postprocessing; a brief review of postprocessing; objective and methodology of the research; thesis organization). 2. Background study (image models: the minimum edge difference (MED) criterion for block boundaries, van Beek's edge model, and the Fields of Experts (FoE) prior; degradation models: the quantization constraint set (QCS), the narrow QCS, Gaussian noise, and edge-width enlargement after quantization). 3. Postprocessing using the MED and edge models (AC coefficient restoration; edge identification, region classification, and reconstruction; experimental results and comparison with a wavelet-based method; the global minimum of the edge difference). 4. Postprocessing by the MAP criterion using the FoE prior (the MAP criterion and its optimization problem; experimental results; investigation of the quantization noise model). 5. Conclusion and future work (degradation models; efficient implementation of the MAP method; postprocessing of compressed video). Appendices: detailed derivation of coefficient restoration; implementation details of the FoE prior.
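The blocking artifacts this thesis targets can be made concrete with a simple blockiness measure (our illustration of the general idea, not the thesis's algorithm): the squared pixel discontinuity across 8×8 block boundaries, the quantity that MED-style deblocking aims to reduce.

```python
import numpy as np

def block_boundary_energy(img, B=8):
    # sum of squared pixel differences across the vertical and horizontal
    # BxB block boundaries of a grayscale image
    img = np.asarray(img, dtype=float)
    e = 0.0
    for c in range(B, img.shape[1], B):          # vertical boundaries
        e += np.sum((img[:, c] - img[:, c - 1]) ** 2)
    for r in range(B, img.shape[0], B):          # horizontal boundaries
        e += np.sum((img[r, :] - img[r - 1, :]) ** 2)
    return e
```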

Nature Abhors an Empty Vacuum

Minsky, Marvin 01 August 1981 (has links)
Imagine a crystalline world of tiny, discrete "cells", each knowing only what its nearest neighbors do. Each volume of space contains only a finite amount of information, because space and time come in discrete units. In such a universe, we'll construct analogs of particles and fields and ask what it would mean for these to satisfy constraints like conservation of momentum. In each case classical mechanics will break down at scales both small and large, and strange phenomena emerge: a maximal velocity, a slowing of internal clocks, a bound on simultaneous measurement, and quantum-like effects in very weak or intense fields. This fantasy about conservation in cellular arrays was inspired by this first conference on computation and physics, a subject destined to produce profound and powerful theories. I wish this essay could include one such; alas, it only portrays images of what such theories might be like. The "cellular array" idea is popular already in such forms as Ising models, renormalization theories, the "Game of Life" and von Neumann's work on self-reproducing machines. This essay exploits many unpublished ideas I got from Edward Fredkin. The ideas about field and particle are original; Richard Feynman persuaded me to consider fields instead of forces, but is not responsible for my compromise on potential surfaces. I also thank Danny Hillis and Richard Stallman for other ideas.
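The "Game of Life" the essay cites is itself a minimal example of such a cellular array: each cell's next state depends only on its nearest neighbors. A compact sketch of one synchronous update (our illustration, using the standard birth-on-3 / survive-on-2-or-3 rule):

```python
from itertools import product

def life_step(live):
    # one synchronous Game of Life update on a set of live cell coordinates:
    # a cell is live next step if it has exactly 3 live neighbors, or is
    # currently live with exactly 2
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                c = (x + dx, y + dy)
                counts[c] = counts.get(c, 0) + 1
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}
```

Even this tiny rule set yields persistent, particle-like patterns, the kind of emergent structure the essay's thought experiment builds on.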

Assuring production-derived quality in Canadian food markets

Innes, Brian Grant 26 January 2009
Food quality attributes arising from farming methods are important to many Canadians. The credence nature of these quality attributes necessitates some form of quality assurance for accurate signalling to consumers. This thesis examines the appropriate role for private, third party, and government actors in credible quality assurance systems for production-derived attributes. Concurrently, it explores the nature of trust that Canadians put in various organizations for quality assurance. In a nationwide survey using a discrete choice experiment, Canadian consumers obtained significant benefits from government verification of pesticide-free and environmentally sustainable grains contained in pre-packaged sliced bread. Farmers, third party, and government organizations were similarly trusted for accurate information about farming methods, though the dimensions of this trust varied across organizations. Government standards relating to environmental sustainability were perceived as most effective. Results obtained using a latent class multinomial logit model showed that respondents who most valued production-derived food quality also received the greatest benefit from government verification and significant negative utility from supermarket or third party verification. In relative terms, the difference in utility between third party and government verification represents 141% of the value of the environmentally sustainable attribute and 87% of the pesticide-free attribute. The results suggest that significant consumer benefit can be achieved if government were to take a leading role in quality assurance for production-derived quality.
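The latent class multinomial logit model mentioned above builds on the standard multinomial logit choice probability P(i) = exp(V_i) / Σ_j exp(V_j), where V_i is the systematic utility of alternative i. A minimal sketch of that core formula (our illustration, not the thesis's estimation code):

```python
import math

def mnl_probs(utilities):
    # multinomial logit choice probabilities: P(i) = exp(V_i) / sum_j exp(V_j)
    m = max(utilities)                       # subtract the max for stability
    expu = [math.exp(u - m) for u in utilities]
    s = sum(expu)
    return [e / s for e in expu]
```

A latent class model mixes several such probability vectors, one per class of respondents, weighted by class membership probabilities.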
