391

Approximation algorithms for multidimensional bin packing

Khan, Arindam 07 January 2016 (has links)
The bin packing problem has been a cornerstone of approximation algorithms and has been studied extensively since the early seventies. In the classical bin packing problem, we are given a list of real numbers in the range (0, 1]; the goal is to place them in a minimum number of bins so that no bin holds numbers summing to more than 1. In this thesis we study approximation algorithms for three generalizations of bin packing: geometric bin packing, vector bin packing and weighted bipartite edge coloring. In two-dimensional (2-D) geometric bin packing, we are given a collection of rectangular items to be packed into a minimum number of unit-size square bins. Geometric packing has vast applications in cutting stock, vehicle loading, pallet packing, memory allocation and several other logistics and robotics related problems. We consider the widely studied orthogonal packing case, where the items must be placed in the bin such that their sides are parallel to the sides of the bin. Here two variants are usually studied: (i) where the items cannot be rotated, and (ii) where they can be rotated by 90 degrees. We give a polynomial time algorithm with an asymptotic approximation ratio of $\ln(1.5) + 1 \approx 1.405$ for both the version with rotations and the version without. We also show the limitations of rounding-based algorithms, which are ubiquitous in bin packing: any algorithm that rounds at least one side of each large item to some number in a constant-size collection of values chosen independently of the problem instance cannot achieve an asymptotic approximation ratio better than 3/2. In d-dimensional vector bin packing (VBP), each item is a d-dimensional vector that needs to be packed into unit vector bins. The problem is of great significance in resource-constrained scheduling and also appears in recent virtual machine placement problems in cloud computing.
Even in two dimensions, it has novel applications in layout design, logistics, loading and scheduling problems. We obtain a polynomial time algorithm with an asymptotic approximation ratio of $\ln(1.5) + 1 \approx 1.405$ for 2-D VBP. We also obtain a polynomial time algorithm with an almost tight (absolute) approximation ratio of $1+\ln(1.5)$ for 2-D VBP. For $d$ dimensions, we give a polynomial time algorithm with an asymptotic approximation ratio of $\ln(d/2) + 1.5 \approx \ln d + 0.81$. We also consider vector bin packing under resource augmentation: we give a polynomial time algorithm that packs vectors into $(1+\epsilon)Opt$ bins when we allow augmentation in $(d-1)$ dimensions, where $Opt$ is the minimum number of bins needed to pack the vectors into (1,1) bins. In the weighted bipartite edge coloring problem, we are given an edge-weighted bipartite graph $G=(V,E)$ with weights $w: E \rightarrow [0,1]$. The task is to find a proper weighted coloring of the edges with as few colors as possible; an edge coloring of the weighted graph is a proper weighted coloring if, for every vertex and every color, the weights of the edges of that color incident to the vertex sum to at most one. This problem is motivated by the rearrangeability of 3-stage Clos networks, which is very useful in various applications in interconnection networks and routing. We give a polynomial time approximation algorithm that returns a proper weighted coloring with at most $\lceil 2.2223m \rceil$ colors, where $m$ is the minimum number of unit-sized bins needed to pack the weights of all edges incident at any vertex. We also show that if all edge weights are greater than $1/4$, then $\lceil 2.2m \rceil$ colors suffice.
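For reference, the classical one-dimensional problem described above is usually introduced alongside the textbook First-Fit Decreasing heuristic. The sketch below is that standard baseline, not one of the algorithms contributed by this thesis:

```python
def first_fit_decreasing(items):
    """Pack numbers from (0, 1] into unit-capacity bins.

    Classical First-Fit Decreasing heuristic: consider items in
    non-increasing order and place each into the first bin that
    still has room, opening a new bin only when none fits.
    """
    bins = []  # each bin is a list of items whose sum is <= 1
    for x in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + x <= 1:
                b.append(x)
                break
        else:  # no existing bin fits: open a new one
            bins.append([x])
    return bins
```

First-Fit Decreasing is known to use at most 11/9 · Opt + 6/9 bins, which gives a feel for the kind of asymptotic ratios the thesis improves on in the geometric and vector settings.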
392

Modeling and experimental investigation on ultrasonic-vibration-assisted grinding

Qin, Na January 1900 (has links)
Doctor of Philosophy / Department of Industrial & Manufacturing Systems Engineering / Zhijian Pei / Poor machinability of hard-to-machine materials (such as advanced ceramics and titanium) limits their applications in industry. Ultrasonic-vibration-assisted grinding (UVAG), a hybrid machining process combining the material-removal mechanisms of diamond grinding and ultrasonic machining, is a cost-effective machining method for these materials. Compared to ultrasonic machining, UVAG has a much higher material removal rate while maintaining lower cutting pressure and torque, reduced edge chipping and surface damage, improved accuracy, and a lower tool wear rate. However, physics-based models to predict cutting force in UVAG have not been reported to date. Furthermore, edge chipping is one of the technical challenges in UVAG of brittle materials, and there are no reports on the effects of cutting tool design on edge chipping in UVAG of brittle materials. The goal of this research is to provide new knowledge on machining these hard-to-machine materials with UVAG for further improvements in machining cost and surface quality. First, a thorough literature review is given to show what has been done in this field. Then, a physics-based predictive cutting force model and a mechanistic cutting force model are developed for UVAG of ductile and brittle materials, respectively. Effects of input variables (diamond grain number, diamond grain diameter, vibration amplitude, vibration frequency, spindle speed, and feed rate) on cutting force are studied based on the developed models, as are interaction effects between input variables. In addition, an FEA model is developed to study the effects of cutting tool design and input variables on edge chipping. Furthermore, some trends predicted by the developed models are verified through experiments.
The results in this dissertation could provide guidance for choosing reasonable process variables and designing diamond tools for UVAG.
393

CELLULAR BROADBAND TELEMETRY OPTIONS FOR THE 21st CENTURY: Looking at broadband cellular from a telemetry perspective

Smith, Brian J. 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / With the recent broadband upgrades to various cellular infrastructures and the myriad new emerging wireless broadband standards and services offered by carriers, it is often difficult to navigate this sea of technology. In deciding the best choice for broadband telemetry applications, one must look not only at the technology, but also at the economics, market timing, bandwidths, legacy issues, future expandability and coverage, security, protocols, and the requirements of the specific application. This paper reviews the technology roadmap of cellular providers keeping these issues in perspective as they apply to TCP/IP data for images, audio, video, and other broadband telemetry data using CDMA 1xRTT, EV-DO, and EV-DO Rev A systems as well as GSM GPRS/EDGE, UMTS/W-CDMA, HSDPA, and HSUPA networks. Lastly, issues seen by system integrators when using cellular channels for telemetry applications are examined, and a case is presented for overcoming many of these issues through the use of cellular routers.
394

Calculation of the radial electric field in the DIII-D tokamak edge plasma

Wilks, Theresa M. 27 May 2016 (has links)
The application of a theoretical framework for calculating the radial electric field in the DIII-D tokamak edge plasma is discussed. Changes in the radial electric field are correlated with changes in many important edge plasma phenomena, including rotation, the L-H transition, and ELM suppression. A self-consistent model for the radial electric field may therefore suggest a means of controlling other important parameters in the edge plasma. Implementing a methodology for calculating the radial electric field can be difficult due to its complex interrelationships with ion losses, rotation, radial ion fluxes, and momentum transport. The radial electric field enters the calculations for ion orbit loss. This ion orbit loss, in turn, affects the radial ion flux both directly and indirectly through return currents, which have been shown theoretically to torque the edge plasma causing rotation. The edge rotation generates a motional radial electric field, which can influence both the edge pedestal structure and additional ion orbit losses. In conjunction with validating the analytical modified Ohm’s Law model for calculating the radial electric field, modeling efforts presented in this dissertation focus on improving calculations of ion orbit losses and x-loss into the divertor region, as well as the formulation of models for fast beam ion orbit losses and the fraction of lost particles that return to the confined plasma. After rigorous implementation of the ion orbit loss model and related mechanisms into fluid equations, efforts are shifted to calculate effects from rotation on the radial electric field calculation and compared to DIII-D experimental measurements and computationally simulated plasmas. This calculation of the radial electric field will provide a basis for future modeling of a fast, predictive calculation to characterize future tokamaks like ITER.
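For context, calculations of this kind commonly start from the radial force balance for a plasma ion species $j$; the relation below is the standard textbook form, stated in assumed notation, and is not necessarily the dissertation's modified Ohm's law model:

```latex
E_r = \frac{1}{n_j Z_j e}\frac{\partial p_j}{\partial r}
      - v_{\theta j} B_\phi + v_{\phi j} B_\theta
```

where $n_j$, $Z_j$ and $p_j$ are the species density, charge number and pressure, $v_{\theta j}$ and $v_{\phi j}$ are the poloidal and toroidal rotation velocities, and $B_\theta$, $B_\phi$ are the poloidal and toroidal magnetic field components. The rotation terms are the "motional" contribution to the radial electric field mentioned above.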
395

Fog Computing with Go: A Comparative Study

Butterfield, Ellis H 01 January 2016 (has links)
The Internet of Things is a recent computing paradigm, defined by networks of highly connected things – sensors, actuators and smart objects – communicating across networks of homes, buildings, vehicles, and even people. The Internet of Things brings with it a host of new problems, from managing security on constrained devices to processing never before seen amounts of data. While cloud computing might be able to keep up with current data processing and computational demands, it is unclear whether it can be extended to the requirements brought forth by the Internet of Things. Fog computing provides an architectural solution to some of these problems by providing a layer of intermediary nodes within what is called an edge network, separating the local object networks from the Cloud. These edge nodes provide interoperability, real-time interaction, routing, and, if necessary, computational delegation to the Cloud. This paper evaluates Go, a distributed systems language developed by Google, against the requirements set forth by Fog computing. Methodologies from previous literature are replicated and benchmarked against in order to assess the viability of Go in the edge nodes of a Fog computing architecture.
396

Fast and accurate lithography simulation and optical proximity correction for nanometer design for manufacturing

Yu, Peng 23 October 2009 (has links)
As semiconductor manufacturing feature sizes scale into the nanometer dimension, circuit layout printability is significantly reduced due to the fundamental limits of lithography systems. This dissertation studies related research topics in lithography simulation and optical proximity correction. A recursive integration method is used to reduce the errors in the transmission cross coefficient (TCC), an important factor in the Hopkins equation for aerial image simulation. The runtime is further reduced, without increasing the errors, by exploiting the fact that the TCC is usually computed on uniform grids. A flexible software framework, ELIAS, is also provided, which can compute the TCC for various lithography settings, such as different illuminations. Optimal coherent approximations (OCAs), which are used for full-chip image simulation, can be sped up by considering the symmetry properties of lithography systems. The runtime improvement can be doubled without loss of accuracy, and it applies to vectorial imaging models as well. Even when the symmetry properties do not hold strictly, the new method can be generalized so that it is still faster than the old one. Besides new numerical image simulation algorithms, variations in lithography systems are also modeled. A Variational LIthography Model (VLIM) and its calibration method are provided. The Variational Edge Placement Error (V-EPE) metric, an improvement on the original Edge Placement Error (EPE) metric, is introduced based on the model. A true process-variation-aware OPC (PV-OPC) framework is proposed using the V-EPE metric. Due to the analytical nature of VLIM, our PV-OPC is only about 2-3× slower than conventional OPC, but it explicitly considers the two main sources of process variation (exposure dose and focus variations) during OPC.
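The role of the TCC can be seen in the standard Hopkins formulation of partially coherent imaging, which (in assumed notation, not copied from the dissertation) expresses the aerial image intensity as

```latex
I(\mathbf{x}) = \iint \mathrm{TCC}(\mathbf{f}_1, \mathbf{f}_2)\,
  \hat{M}(\mathbf{f}_1)\, \hat{M}^{*}(\mathbf{f}_2)\,
  e^{-i 2\pi (\mathbf{f}_1 - \mathbf{f}_2)\cdot\mathbf{x}}\,
  \mathrm{d}\mathbf{f}_1\, \mathrm{d}\mathbf{f}_2
```

where $\hat{M}$ is the Fourier transform of the mask transmission function. The TCC folds in the illumination and projection optics, which is why its accuracy and its tabulation on uniform grids dominate simulation cost.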
The EPE metric has been used in conventional OPC algorithms, but it requires many intensity simulations, which take the majority of the OPC runtime. By making the OPC algorithm intensity based (IB-OPC) rather than EPE based, we can reduce the number of intensity simulations and hence the OPC runtime. An efficient intensity derivative computation method is also provided, which makes the new algorithm converge faster than the EPE-based algorithm. Our experimental results show a runtime speedup of more than 10× with comparable result quality relative to EPE-based OPC. The OPC algorithms mentioned above are vector based; another category of OPC algorithms is pixel based. Vector-based algorithms in general generate less complex masks than pixel-based ones, but pixel-based algorithms produce much better results in terms of contour fidelity. Observing that vector-based algorithms preserve mask shape topologies, which leads to lower mask complexity, we combine the strengths of both categories: the topology-invariant property and the pixel-based mask representation. A topological invariant pixel based OPC (TIP-OPC) algorithm is proposed, with lithography-friendly mask topological invariant operations and an efficient Fast Fourier Transform (FFT) based cost function sensitivity computation. The experimental results show that TIP-OPC can achieve much better post-OPC contours than vector-based OPC while maintaining the mask shape topologies.
397

ALGEBRAIC PROPERTIES OF EDGE IDEALS

Bouchat, Rachelle R. 01 January 2008 (has links)
Given a simple graph G, the corresponding edge ideal IG is the ideal generated by the edges of G. In 2007, Ha and Van Tuyl demonstrated an inductive procedure to construct the minimal free resolution of certain classes of edge ideals. We will provide a simplified proof of this inductive method for the class of trees. Furthermore, we will provide a comprehensive description of the finely graded Betti numbers occurring in the minimal free resolution of the edge ideal of a tree. For specific subclasses of trees, we will generate more precise information including explicit formulas for the projective dimensions of the quotient rings of the edge ideals. In the second half of this thesis, we will consider the class of simple bipartite graphs known as Ferrers graphs. In particular, we will study a class of monomial ideals that arise as initial ideals of the defining ideals of the toric rings associated to Ferrers graphs. The toric rings were studied by Corso and Nagel in 2007, and by studying the initial ideals of the defining ideals of the toric rings we are able to show that in certain cases the toric rings of Ferrers graphs are level.
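As a small concrete illustration of the object under study (a hypothetical helper, not code from the thesis), the generators of the edge ideal can be read straight off the edge list of G:

```python
def edge_ideal_generators(edges):
    """Return the monomial generators x_i * x_j of the edge ideal I_G,
    one squarefree quadratic monomial per edge {i, j} of the graph G."""
    gens = {tuple(sorted(e)) for e in edges}  # dedupe and orient each edge
    return ["x%d*x%d" % (i, j) for i, j in sorted(gens)]
```

For the path graph 1-2-3 this yields the generators x1*x2 and x2*x3; resolving the quotient ring by such an ideal is what the Betti number computations described above concern.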
398

Cathodoluminescence studies of defects and piezoelectric fields in GaN

Henley, S. J. January 2002 (has links)
No description available.
399

Optimal Path Queries in Very Large Spatial Databases

Zhang, Jie January 2005 (has links)
Researchers have been investigating the optimal route query problem for a long time. Optimal route queries are categorized as either unconstrained or constrained queries. Many main-memory-based algorithms have been developed to deal with the optimal route query problem. Among these, Dijkstra's shortest path algorithm is one of the most popular for the unconstrained route query problem. The constrained route query problem is more complicated, and some constrained route query problems, such as the Traveling Salesman Problem and the Hamiltonian Path Problem, are NP-hard. There are many algorithms dealing with the constrained route query problem, but most of them solve only a specific case, and all of them require that the entire graph reside in main memory. Recently, due to the needs of applications on very large graphs, such as the digital maps managed by Geographic Information Systems (GIS), several disk-based algorithms have been derived using divide-and-conquer techniques to solve the shortest path problem in a very large graph. However, until now little research has been conducted on the disk-based constrained problem.

This thesis presents two algorithms: 1) a new disk-based shortest path algorithm (DiskSPNN), and 2) a new disk-based optimal path algorithm (DiskOP) that answers an optimal route query without passing through a set of forbidden edges in a very large graph. Both algorithms fit within the same divide-and-conquer framework as the existing disk-based shortest path algorithms proposed by Ning Zhang and Heechul Lim. Several techniques, including query super graph, successor fragment and open boundary node pruning, are proposed to improve the performance of the previous disk-based shortest path algorithms. Furthermore, these techniques are applied to the DiskOP algorithm with minor changes.
The proposed DiskOP algorithm depends on the concept of collecting a set of boundary vertices and simultaneously relaxing their adjacent super edges. Even if the forbidden edges are distributed in all the fragments of a graph, the DiskOP algorithm requires little memory. Our experimental results indicate that the DiskSPNN algorithm performs better than the original ones with respect to the I/O cost as well as the running time, and the DiskOP algorithm successfully solves a specific constrained route query problem in a very large graph.
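For intuition, the kind of query DiskOP answers can be sketched in memory with a Dijkstra variant that simply skips forbidden edges. This is a minimal illustration under assumed data structures, not the disk-based algorithm itself, which must additionally manage fragments and boundary vertices:

```python
import heapq

def shortest_path_avoiding(graph, src, dst, forbidden=frozenset()):
    """Dijkstra's algorithm over an adjacency map {u: [(v, weight), ...]},
    ignoring any directed edge (u, v) listed in `forbidden`."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d  # first settlement of dst is optimal
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            if (u, v) in forbidden:
                continue  # constrained query: this edge may not be used
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")  # dst unreachable without the forbidden edges
```

Passing an empty forbidden set reduces this to the unconstrained shortest path query discussed above.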
400

Face pose estimation in monocular images

Shafi, Muhammad January 2010 (has links)
People use the orientation of their faces to convey rich, interpersonal information. For example, a person will direct his face to indicate who the intended target of the conversation is. Similarly, in a conversation, face orientation is a non-verbal cue to the listener about when to switch roles and start speaking, and a nod indicates that a person understands, or agrees with, what is being said. Furthermore, face pose estimation plays an important role in human-computer interaction, virtual reality applications, human behaviour analysis, pose-independent face recognition, driver's vigilance assessment, gaze estimation, etc. Robust face recognition has been a focus of research in the computer vision community for more than two decades. Although substantial research has been done and numerous methods have been proposed for face recognition, challenges remain in this field. One of these is face recognition under varying poses, which is why face pose estimation is still an important research area. In computer vision, face pose estimation is the process of inferring the face orientation from digital imagery. It requires a series of image processing steps to transform a pixel-based representation of a human face into a high-level concept of direction. An ideal face pose estimator should be invariant to a variety of image-changing factors such as camera distortion, lighting conditions, skin colour, projective geometry, facial hair, facial expressions, the presence of accessories like glasses and hats, etc. Face pose estimation has been a focus of research for about two decades, and numerous research contributions have been presented in this field.
Face pose estimation techniques in the literature still have shortcomings and limitations in terms of accuracy, applicability to monocular images, autonomy, identity and lighting variations, image resolution variations, range of face motion, computational expense, presence of facial hair, presence of accessories like glasses and hats, etc. These shortcomings of existing face pose estimation techniques motivated the research work presented in this thesis. The main focus of this research is to design and develop novel face pose estimation algorithms that improve automatic face pose estimation in terms of processing time, computational expense, and invariance to different conditions.
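Pose estimators typically report orientation as yaw, pitch and roll. As a small illustration of that final step (a generic hypothetical helper, not a method from this thesis), a rotation matrix can be converted to those angles under the common Z-Y-X convention:

```python
import math

def rotation_to_yaw_pitch_roll(R):
    """Extract (yaw, pitch, roll) in radians from a 3x3 rotation matrix R,
    assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll) (Z-Y-X convention) and
    pitch away from the gimbal-lock points at +/- pi/2."""
    pitch = math.asin(-R[2][0])            # R[2][0] = -sin(pitch)
    yaw = math.atan2(R[1][0], R[0][0])     # both entries share a cos(pitch) factor
    roll = math.atan2(R[2][1], R[2][2])
    return yaw, pitch, roll
```

The asin/atan2 pairing keeps each angle in its proper quadrant, which is why this decomposition is preferred over naive inverse trigonometry on individual matrix entries.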