  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Point-based mathematics and graphics for CADCAM

Cook, Peter Robert January 2000 (has links)
No description available.
272

Aggregation, dissemination and filtering : controlling complex information flows in networks

Banerjee, Siddhartha 25 October 2013 (has links)
Modern-day networks, both physical and virtual, are designed to support increasingly sophisticated applications based on complex manipulation of information flows. On the flip side, the ever-growing scale of the underlying networks necessitates the use of low-complexity algorithms. Exploring this tension requires an understanding of the relation between these flows and the network structure. In this thesis, we undertake a study of three such processes: aggregation, dissemination and filtering. In each case, we characterize how the network topology imposes limits on these processes, and how one can use knowledge of the topology to design simple yet efficient control algorithms.

Aggregation: We study data aggregation in sensor networks via in-network computation, i.e., via combining packets at intermediate nodes. In particular, we are interested in maximizing the refresh rate of repeated/streaming aggregation. For a particular class of functions, we characterize the maximum achievable refresh rate in terms of the underlying graph structure; furthermore, we develop optimal algorithms for general networks, and also a simple distributed algorithm for acyclic wired networks.

Dissemination: We consider dissemination processes on networks via intrinsic peer-to-peer transmissions aided by external agents: sources with bounded spreading power, but unconstrained by the network. Such a model captures many static (e.g. long-range links) and dynamic/controlled (e.g. mobile nodes, broadcasting) models for long-range dissemination. We explore the effect of external sources for two dissemination models: spreading processes, wherein nodes once infected remain so forever, and epidemic processes, in which nodes can recover from the infection. The main takeaways from our results are: (i) the role of graph structure, and (ii) the power of random strategies. In spreading processes, we show that external agents dramatically reduce the spreading time in networks that are spatially constrained; furthermore, random policies are order-wise optimal. In epidemic processes, we show that for causing long-lasting epidemics, external sources must scale with the number of nodes; however, the strategies can be random.

Filtering: A common phenomenon in modern recommendation systems is the use of user feedback to infer the 'value' of an item to other users, resulting in an exploration vs. exploitation trade-off. We study this in a simple natural model, where an 'access graph' constrains which user is allowed to see which item, and the number of items and the number of item-views are of the same order. We want algorithms that recommend relevant content in an online manner (i.e., instantaneously on user arrival). To this end, we consider both finite-population (i.e., with a fixed set of users and items) and infinite-horizon (i.e., with user/item arrivals and departures) settings; in each case, we design algorithms with guarantees on the competitive ratio for any arbitrary user. Conversely, we also present upper bounds on the competitive ratio, which show that in many settings our algorithms are order-wise optimal. / text
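The contrast between purely intrinsic spreading and spreading aided by an external source can be illustrated with a toy simulation. This is a minimal sketch, not the thesis's model: it assumes a ring graph, a discrete-time SI process, and a single external agent that seeds one uniformly random node per step; all names and parameters here are illustrative.

```python
import random

def spread_time(n, external=False, seed=0):
    """Discrete-time SI spreading on a ring of n nodes.

    Each step, every infected node infects its two ring neighbours.
    If `external` is True, an agent unconstrained by the network also
    infects one uniformly random node per step (a toy stand-in for the
    long-range external sources discussed above).
    """
    rng = random.Random(seed)
    infected = {0}
    steps = 0
    while len(infected) < n:
        newly = set()
        for v in infected:
            newly.add((v - 1) % n)
            newly.add((v + 1) % n)
        if external:
            newly.add(rng.randrange(n))
        infected |= newly
        steps += 1
    return steps

n = 200
print("intrinsic only :", spread_time(n), "steps")       # about n/2 steps
print("with external  :", spread_time(n, external=True), "steps")
```

On a spatially constrained graph like the ring, intrinsic spreading needs about n/2 steps, while the random external seeds spawn new infection intervals that grow in parallel, a miniature version of the "power of random strategies" takeaway.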
273

Sequence mining algorithms

Zhang, Minghua, 張明華 January 2004 (has links)
published_or_final_version / Computer Science and Information Systems / Doctoral / Doctor of Philosophy
274

Cross-domain subspace learning

Si, Si, 斯思 January 2010 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
275

A bio-inspired object tracking algorithm for minimising power consumption

Lai, Wai-chung., 賴偉聰. January 2009 (has links)
published_or_final_version / Industrial and Manufacturing Systems Engineering / Master / Master of Philosophy
276

Design and analysis of efficient algorithms for finding frequent items in a data stream

Zhang, Wen, 张问 January 2011 (has links)
published_or_final_version / Computer Science / Master / Master of Philosophy
277

Analysis and synthesis of positive systems and related gene network models

Li, Ping, 李平 January 2011 (has links)
The Best PhD Thesis in the Faculties of Dentistry, Engineering, Medicine and Science (University of Hong Kong), Li Ka Shing Prize, 2010-11 / published_or_final_version / Mechanical Engineering / Doctoral / Doctor of Philosophy
278

Algorithms for some combinatorial optimization problems

Chen, Qin, 陈琴 January 2011 (has links)
published_or_final_version / Mathematics / Doctoral / Doctor of Philosophy
279

Medial axis simplification based on global geodesic slope and accumulated hyperbolic distance

Wang, Rui, 王睿 January 2012 (has links)
The medial axis is an important shape representation, and computing it is a fundamental research problem in computer graphics. Practically, the medial axis is widely used in many areas of computer graphics, such as shape analysis, image segmentation, skeleton extraction and mesh generation. However, its applications have been limited by its sensitivity to boundary perturbations: small perturbations may produce many spurious branches and increase the complexity of the medial axis. To address this sensitivity, it is critical to simplify the medial axis.

This thesis first investigates algorithms for computing medial axes of different input shapes. Several algorithms for filtering medial axes are then reviewed, including local importance measures, boundary smoothing algorithms, and global algorithms. Two novel simplification algorithms are then proposed to generate a stable, simplified medial axis together with its reconstructed boundary. The Global Geodesic Slope (GGS) algorithm is based on the global geodesic slope defined in this thesis, which combines the advantages of the global and the local algorithms. The GGS algorithm prunes the medial axis according to local features as well as the relative size of the shape; it is less sensitive to boundary noise than the local algorithms, and can maintain the features of the shape in highly concave regions where the global algorithms may not. The other proposed algorithm is the Accumulated Hyperbolic Distance (AHD) algorithm. It directly uses the error criterion itself, the accumulated hyperbolic distance defined in this thesis, as the pruning measure in the filtering process. It guarantees that the error between the reconstructed shape and the original one stays within the defined threshold, and it avoids sudden changes of the reconstructed shape as that threshold changes. / published_or_final_version / Computer Science / Master / Master of Philosophy
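As a rough illustration of medial-axis extraction and radius-based pruning (this is not the GGS or AHD algorithm described above), the following sketch approximates the medial axis of a binary raster shape as the local maxima of a boundary distance transform, pruned by a naive local radius threshold — exactly the kind of purely local measure whose noise-sensitivity motivates the global criteria developed in the thesis. The chessboard metric and all names are illustrative assumptions.

```python
from collections import deque

def distance_transform(shape):
    """Multi-source BFS (chessboard metric) from all background pixels."""
    h, w = len(shape), len(shape[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if not shape[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                    dist[ny][nx] = dist[y][x] + 1
                    q.append((ny, nx))
    return dist

def medial_axis(shape, min_radius=1):
    """Approximate medial axis: interior pixels whose clearance is a
    local maximum, pruned by a naive radius threshold."""
    dist = distance_transform(shape)
    h, w = len(shape), len(shape[0])
    axis = set()
    for y in range(h):
        for x in range(w):
            d = dist[y][x]
            if d and d >= min_radius and all(
                dist[ny][nx] <= d
                for ny in (y - 1, y, y + 1)
                for nx in (x - 1, x, x + 1)
                if 0 <= ny < h and 0 <= nx < w
            ):
                axis.add((y, x))
    return axis

# A 7x13 solid rectangle, padded with background so the BFS has sources;
# the medial axis should run along the middle row of the rectangle.
shape = [[0] * 15] + [[0] + [1] * 13 + [0] for _ in range(7)] + [[0] * 15]
print(sorted(medial_axis(shape, min_radius=2)))
# -> [(4, 4), (4, 5), (4, 6), (4, 7), (4, 8), (4, 9), (4, 10)]
```

Raising `min_radius` discards low-clearance axis points wholesale — the crude behavior that threshold-sensitive reconstruction criteria like AHD are designed to avoid.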
280

New results on online job scheduling

Zhu, Jianqiao., 朱剑桥. January 2013 (has links)
This thesis presents several new results on online job scheduling. Job scheduling is a basic requirement of many practical computer systems, and the scheduling behavior directly affects a system's performance. On the theoretical side, scheduling scenarios are abstracted into models that are studied mathematically. In this thesis, we look into a variety of scheduling models under active research, relate them, and organize them into a unified picture.

We first study non-clairvoyant scheduling to minimize weighted flow time on two different multi-processor models. In the first model, processors are all identical and jobs can possibly be sped up by running on several processors in parallel. Under the non-clairvoyant model, the online scheduler has no information about the actual job size or the degree of parallel speed-up during the execution of a job, yet it has to determine dynamically when, and on how many processors, to run the jobs. The literature contains several O(1)-competitive algorithms for this problem under the unit-weight multi-processor setting [13, 14] as well as the weighted single-processor setting [5]. This thesis shows the first O(1)-competitive algorithm for weighted flow time in the multi-processor setting. In the second model, we consider processors with different functionalities, where only processors of the same functionality can work on the same job in parallel to achieve some degree of speed-up. Here a job is modeled as a sequence of non-clairvoyant demands of different functionalities. This model derives naturally from classical job shop scheduling, but as far as we know, there is no previous work on scheduling to minimize flow time under this multi-processor model. In this thesis we take a first step towards studying non-clairvoyant scheduling on this model. Motivated by the literature on 2-machine job shop scheduling, we focus on the special case where processors are divided into two types of functionalities, and we show a non-clairvoyant algorithm that is O(1)-competitive for weighted flow time.

This thesis also initiates the study of online scheduling with rejection penalty in the non-clairvoyant setting. In the rejection penalty model, jobs can be rejected at a penalty, and the user cost of a job is defined as its weighted flow time plus the penalty if it is rejected before completion. Previous work on minimizing the total user cost focused on the clairvoyant single-processor setting [3, 10] and produced an O(1)-competitive online algorithm for jobs with arbitrary weights and penalties. This thesis gives the first non-clairvoyant algorithms that are O(1)-competitive for minimizing the total user cost on a single processor and on multi-processors, when using slightly faster (i.e., (1 + ε)-speed for any ε > 0) processors. Note that if no extra speed is allowed, no online algorithm can be O(1)-competitive even for minimizing (unweighted) flow time alone.

The above results assume a processor running at a fixed speed. This thesis extends the study to the dynamic speed scaling model, where the processor can vary its speed dynamically and the rate of energy consumption is an arbitrary increasing function of speed. A scheduling algorithm then has to decide job rejection and determine the order and speed of job execution, and it is interesting to study the trade-off between the above-mentioned user cost and energy. This thesis gives two O(1)-competitive non-clairvoyant algorithms for minimizing user cost plus energy, on a single processor and on multi-processors, respectively. / published_or_final_version / Computer Science / Master / Master of Philosophy
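The non-clairvoyant setting can be illustrated with a classic toy comparison (not an algorithm from this thesis): Round Robin, which never consults job sizes, against the clairvoyant SRPT benchmark, for total flow time on a single processor with all jobs released at time 0. Both computations below are closed-form evaluations of the schedules, under those simplifying assumptions.

```python
def srpt_total_flow(sizes):
    """Clairvoyant SRPT with all jobs released at time 0 reduces to
    shortest-job-first; total flow time is the sum of completion times."""
    t = total = 0
    for s in sorted(sizes):
        t += s
        total += t
    return total

def round_robin_total_flow(sizes):
    """Non-clairvoyant ideal Round Robin (processor sharing): the
    processor is split evenly among alive jobs, and no job size is
    consulted -- a job simply departs when its work runs out."""
    t = total = 0
    prev = 0
    alive = len(sizes)
    for s in sorted(sizes):
        # Time for this job's remaining (s - prev) units of work while
        # the processor is shared among the `alive` unfinished jobs.
        t += (s - prev) * alive
        prev = s
        alive -= 1
        total += t
    return total

sizes = [1, 2, 4, 8]
print("SRPT total flow :", srpt_total_flow(sizes))        # -> 26
print("RR   total flow :", round_robin_total_flow(sizes)) # -> 37
```

The gap between the two totals is the price of non-clairvoyance; resource-augmentation results of the kind proved in this thesis show that a slightly faster processor lets simple non-clairvoyant policies remain O(1)-competitive.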
