About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
111

Distribution system planning: a set of new formulations and hybrid algorithms

Wang, Zhuding. January 2000 (PDF)
Thesis (Ph.D.)--University of Wisconsin--Milwaukee, 2000. Major Professor: David C. Yu. Includes bibliographical references.
112

Information structures for single echelon organizations

January 1982
Debra A. Stabile, Alexander H. Levis, Susan A. Hall. Bibliography: p. 44. Dated January 1982. Contract numbers: AFOSR-80-0229, ONR-N00014-77-C-0532.
113

Theory of 3-4 heap: a thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in the University of Canterbury

Bethlehem, Tobias. January 2008
Thesis (M.Sc.)--University of Canterbury, 2008. Typescript (photocopy). Includes bibliographical references (p. 118-119). Also available via the World Wide Web.
114

Data structure behavior with variable cache size

Král, Karel. January 2017
Cache-oblivious algorithms are well understood when the cache size remains constant; recently, variable cache sizes have been considered. We are motivated by programs running in pseudo-parallel and competing for a single cache. This thesis studies the underlying cache model and gives a generalization of two models considered in the literature. We give a new cache model called the "depth model," where pages are accessed by their page depths in an LRU cache instead of their addresses. This model allows us to construct cache-oblivious algorithms that cause a certain number of cache misses prescribed by an arbitrary function computable without causing a cache miss. Finally, we prove that two algorithms satisfying the regularity property and running in pseudo-parallel cause asymptotically the same number of cache misses as their serial computations, provided that the cache satisfies the tall-cache assumption.
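As a rough illustration of the depth model (a toy reading of the abstract, not the thesis's formal definition; in particular, the rule that a too-deep access is a miss is an assumption here), the sketch below treats each access as naming a depth in an LRU recency list:

```python
class DepthModelLRU:
    """Toy LRU cache in the 'depth model': an access names a position in
    the recency list (0 = most recently used) rather than a page address.
    Assumed rule: a depth beyond the current contents is a miss that
    loads a fresh page."""

    def __init__(self, size):
        self.size = size
        self.stack = []    # pages in recency order, most recent first
        self.misses = 0
        self.fresh = 0     # counter used to mint fresh page ids

    def access(self, depth):
        if depth < len(self.stack):
            page = self.stack.pop(depth)   # hit: promote to the front
        else:
            self.misses += 1               # miss: bring in a new page
            page = self.fresh
            self.fresh += 1
        self.stack.insert(0, page)
        del self.stack[self.size:]         # evict anything past capacity
        return page
```

Under this toy rule, always accessing depth 0 never misses, while always accessing a depth at least the cache size misses on every access, hinting at how an algorithm could arrange a prescribed number of misses.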
115

Preserving large cuts in fully dynamic graphs

Wasim, Omer. 21 May 2020
This thesis initiates the study of the MAX-CUT problem in fully dynamic graphs. Given a graph $G=(V,E)$, we present the first fully dynamic algorithms to maintain a $\frac{1}{2}$-approximate cut in sublinear update time under edge insertions and deletions to $G$. Our results include the following deterministic algorithms: (i) an $O(\Delta)$ worst-case update time algorithm, where $\Delta$ denotes the maximum degree of $G$, and (ii) an $O(m^{1/2})$ amortized update time algorithm, where $m$ denotes the maximum number of edges in $G$ during any sequence of updates. We also give the following randomized algorithms when edge updates come from an oblivious adversary: (i) an $\tilde{O}(n^{2/3})$ update time algorithm to maintain a $\frac{1}{2}$-approximate cut (throughout this thesis, $\tilde{O}$ hides an $O(\text{polylog}(n))$ factor), and (ii) a $\min\{\tilde{O}(n^{2/3}), \tilde{O}(\frac{n^{3/2+2c_0}}{m^{1/2}})\}$ worst-case update time algorithm which maintains a $(\frac{1}{2}-o(1))$-approximate cut for any constant $c_0>0$ with high probability. The latter algorithm is obtained by designing a fully dynamic algorithm to maintain a sparse subgraph with sublinear (in $n$) maximum degree which approximates all large cuts in $G$ with high probability.
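For background (the static warm-up, not the thesis's dynamic algorithms), the greedy local search below realizes the invariant such algorithms maintain: once no vertex has more same-side than cross neighbors, at least half of all edges cross the cut, so the cut is $\frac{1}{2}$-approximate:

```python
from collections import defaultdict

def half_approximate_cut(vertices, edges):
    """Greedy local search for MAX-CUT. Flipping a vertex with more
    same-side than cross neighbors strictly increases the cut, so the
    loop terminates; at termination every vertex has at least half of
    its edges crossing, hence the cut contains >= |E|/2 edges."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    side = {u: 0 for u in vertices}
    improved = True
    while improved:
        improved = False
        for u in vertices:
            same = sum(1 for w in adj[u] if side[w] == side[u])
            if same > len(adj[u]) - same:  # flipping u gains cut edges
                side[u] ^= 1
                improved = True
    return side
```

The dynamic challenge the thesis addresses is re-establishing this invariant after each edge insertion or deletion without paying for a full recomputation.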
116

Desktop Search for Linux

Prívozník, Michal. January 2010
This work deals with indexing and with different types of indexing structures, their advantages and disadvantages. It provides the basis for a search engine with support for morphology and different file formats, and gives insight into the basic ideas that this master's thesis aims to answer.
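As a minimal sketch of the indexing idea behind such a search engine (illustrative names, not code from the thesis), an inverted index maps each term to the set of documents containing it:

```python
from collections import defaultdict

def build_inverted_index(documents):
    """documents: dict mapping doc_id -> text.
    Returns a dict mapping term -> set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Conjunctive query: ids of documents containing every query term."""
    term_sets = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*term_sets) if term_sets else set()
```

A full desktop search engine would add morphological normalization (stemming or lemmatization) before indexing and per-format text extractors in front of the tokenizer, which is where the morphology and file-format support mentioned above comes in.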
117

Separating representation from translation of shared data in a heterogeneous computing environment

Mullins, Robert W. 27 April 2010
Master of Science
118

Analyzing Sensitive Data with Local Differential Privacy

Tianhao Wang. 30 April 2021
Vast amounts of sensitive personal information are collected by companies, institutions and governments. A key technological challenge is how to effectively extract knowledge from data while preserving the privacy of the individuals involved. In this dissertation, we address this challenge from the perspective of privacy-preserving data collection and analysis. We focus on investigation of a technique called local differential privacy (LDP) and study several aspects of it.

In particular, the thesis serves as a comprehensive study of multiple aspects of the LDP field. We investigated the following seven problems: (1) We studied LDP primitives, i.e., the basic mechanisms that are used to build LDP protocols. (2) We then studied the problem when the domain size is very big (e.g., larger than $2^{32}$), where finding the values with high frequency is a challenge because one needs to enumerate through all values. (3) Another interesting setting is when each user possesses a set of values, instead of a single private value. (4) With the basic problems visited, we then aim to make LDP protocols practical for real-world scenarios. We investigated the case where each user's data is high-dimensional (e.g., in a census survey, each user has multiple questions to answer), and the goal is to recover the joint distribution among the attributes. (5) We also built a system for companies to issue SQL queries over data protected under LDP, where each user is associated with some public weights and holds some private values; an LDP version of the values is sent to the server from each user. (6) To further increase the accuracy of LDP, we study how to add post-processing steps to protocols to make them consistent while achieving high accuracy for a wide range of tasks, including frequencies of individual values, frequencies of the most frequent values, and frequencies of subsets of values. (7) Finally, we investigate a different model of LDP called the shuffler model. While users still use LDP algorithms to report their sensitive data, there now exists a semi-trusted shuffler that shuffles the users' reports and then sends them to the server. This model provides better utility, at the cost of requiring the additional trust that the shuffler does not collude with the server.
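As a concrete example of an LDP primitive in the sense of item (1) above (generalized randomized response, a standard mechanism from the LDP literature rather than necessarily the dissertation's own protocol), each user perturbs a value locally and the server corrects the aggregate counts:

```python
import math
import random

def grr_perturb(value, domain, epsilon):
    """Generalized randomized response: keep the true value with
    probability p = e^eps / (e^eps + k - 1), otherwise report a
    uniformly random different value; this satisfies epsilon-LDP."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, epsilon):
    """Unbiased frequency estimates recovered from perturbed reports."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)  # prob. of any fixed other value
    counts = {v: 0 for v in domain}
    for r in reports:
        counts[r] += 1
    return {v: (counts[v] - n * q) / (p - q) for v in domain}
```

Since each report equals the true value with probability $p$ and any fixed other value with probability $q$, the expected count of $v$ is $n_v p + (n - n_v) q$; inverting this linear relation yields the unbiased estimator above.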
119

Approximate Partially Dynamic Directed Densest Subgraph

Richard Zou Li. 29 April 2023
The densest subgraph problem is an important problem with both theoretical and practical significance. We consider a variant of the problem, the directed densest subgraph problem, under the partially dynamic setting of edge insertions only. We give an algorithm maintaining a $(1-\varepsilon)$-approximate directed densest subgraph in $O(\log^3 n/\varepsilon^6)$ amortized time per edge insertion, based on earlier work by Chekuri and Quanrud. This result partially improves on an earlier result by Sawlani and Wang, which guarantees $O(\log^5 n/\varepsilon^7)$ worst-case time for edge insertions and deletions.
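For contrast with the dynamic directed setting (background only; this is the classic static algorithm for the undirected problem, not the algorithm of the thesis), Charikar's greedy peeling achieves a $\frac{1}{2}$-approximation by repeatedly deleting a minimum-degree vertex and keeping the densest intermediate subgraph:

```python
import heapq
from collections import defaultdict

def peel_densest(edges):
    """Charikar's peeling for undirected densest subgraph. Assumes a
    simple graph given as (u, v) pairs; returns the best density
    |E|/|V| found and the corresponding vertex set."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive, m = set(adj), len(edges)
    best_density = m / len(alive) if alive else 0.0
    best_set = set(alive)
    heap = [(len(adj[u]), u) for u in alive]
    heapq.heapify(heap)
    while alive:
        d, u = heapq.heappop(heap)
        if u not in alive or d != len(adj[u]):
            continue                       # stale heap entry (lazy deletion)
        alive.remove(u)
        m -= len(adj[u])
        for w in adj[u]:
            adj[w].remove(u)
            heapq.heappush(heap, (len(adj[w]), w))
        if alive and m / len(alive) > best_density:
            best_density, best_set = m / len(alive), set(alive)
    return best_density, best_set
```

The directed variant instead optimizes $|E(S,T)|/\sqrt{|S||T|}$ over pairs of vertex sets, which is part of what makes its dynamic maintenance harder.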
120

Ray Collection Bounding Volume Hierarchy

Rivera, Kris Krishna. 01 January 2011
This thesis presents Ray Collection BVH, an improvement over a current ray tracing acceleration structure for both building the structure and performing the steps necessary to efficiently render dynamic scenes. The Bounding Volume Hierarchy (BVH) is a commonly used acceleration structure which aids in rendering complex scenes in 3D space using ray tracing by breaking the scene of triangles into a simple hierarchical structure. The algorithm this thesis explores was developed in an attempt to accelerate both the construction of this structure and its use in rendering complex scenes. The idea of using a "ray collection" as a data structure was stumbled upon by the author while testing a theory for a class project. The overall scheme of the algorithm collects a set of localized rays together and intersects them with subsequent levels of the BVH at each build step. In addition, only part of the acceleration structure is built, on a per-ray, as-needed basis; during this partial build, the rays responsible for creating the scene are partially processed, also saving time in the overall procedure. Ray tracing is a widely used rendering technique, from realistic still images to movie production; in the movie industry in particular, the level of realism that ray tracing brings to animated films is remarkable, so any improvement that increases rendering speed is useful and welcome. This thesis contributes toward improving the overall speed of scene rendering, and hence may be considered an important and useful contribution.
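To ground the terminology (a standard building block of any BVH traversal, not the author's ray-collection scheme), the slab test below is the ray-versus-bounding-box check performed at each node of the hierarchy:

```python
def ray_aabb_intersect(origin, direction, box_min, box_max):
    """Slab test: clip the ray against each axis-aligned slab and check
    that the entry/exit parameter intervals overlap in front of the
    ray origin. All arguments are 3-tuples of floats."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:
                return False       # ray parallel to slab and outside it
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0        # order the slab intersections
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far and t_far >= 0.0
```

Amortizing this test over a whole collection of localized rays at each BVH level, rather than one ray at a time, is the intuition behind the ray-collection approach described above.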
