421

Communication of 3D Graphic Models

Park, Insu 10 1900 (has links)
A new area-based mesh simplification algorithm is described. The proposed algorithm removes the center vertex of a polygon consisting of n ≥ 3 faces and represents that polygon with n − 2 faces. A global search method is adopted that iteratively determines which vertex to remove using the proposed area-based distortion measurement. Although the global search requires more computation than a local search, it guarantees a better quality of approximation. Various re-triangulations are also considered to improve the perceptual quality of the final approximation. From the multiple re-triangulations, the one with minimum distortion is selected to represent the original mesh. Experimental results demonstrate the performance of the proposed algorithm for data reduction while maintaining the quality of the rendered objects.

The performance of a multiple description decoder when not all descriptions are available depends on the decoding strategy; by approximating a lost description, the distortion can be reduced. When the decoder reconstructs the input source without all descriptions, different methods exist to approximate the lost description. We propose two side decoding algorithms, based on the diagonal elements and on the probability distribution of the input source, respectively. When a low bit rate and a complicated index assignment matrix are used, the side decoder based on the input source probability is preferable. To evaluate the approximation of a lost description, we compare the performance of the standard decoding method with that of the proposed methods; a trade-off exists between decoder performance and computational complexity.

An error concealment algorithm based on the flow of facial expression is proposed to improve communication of animated facial data over a band-limited, error-prone channel. Facial expression flow is tracked using the dominant muscles, those with the maximum change between two successive frames. By comparing the dominant muscle data with a predetermined expression information table, the facial expression flow is determined. The receiver uses linear interpolation together with the facial expression flow information to interpolate erroneous facial animation data. For objective comparison, a distortion measurement tool that compares two 3D objects based on point-to-point differences is introduced. Experimental results show that the proposed error concealment method improves the quality of animated face communication. / Thesis / Master of Applied Science (MASc)
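As an illustration of the area-based idea, the following is a minimal sketch of one plausible distortion measure: the change in total surface area when the center vertex of an n-face fan is removed and the surrounding polygon is re-triangulated into n − 2 faces. The specific measure and re-triangulation strategy in the thesis may differ; the geometry here is a toy example.

```python
import numpy as np

def triangle_area(a, b, c):
    """Area of the 3D triangle (a, b, c) via the cross product."""
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def area_distortion(center, ring):
    """Area change caused by removing `center` from a fan of n triangles.

    `ring` lists the boundary vertices of the surrounding polygon in order.
    The fan has n faces; a simple fan re-triangulation from ring[0] yields
    n - 2 faces, matching the counts in the abstract.
    """
    n = len(ring)
    before = sum(triangle_area(center, ring[i], ring[(i + 1) % n])
                 for i in range(n))
    after = sum(triangle_area(ring[0], ring[i], ring[i + 1])
                for i in range(1, n - 1))
    return abs(before - after)

# Toy example: a nearly flat 5-vertex fan around a slightly raised center.
ring = [np.array(p, dtype=float) for p in
        [(1, 0, 0), (0.3, 1, 0), (-0.8, 0.6, 0), (-0.8, -0.6, 0), (0.3, -1, 0)]]
center = np.array([0.0, 0.0, 0.05])
print(area_distortion(center, ring))  # small value -> good removal candidate
```

A global search would evaluate this distortion for every removable vertex and remove the one with the minimum value at each iteration.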
422

IMPLEMENTATION OF AN INTERIOR POINT CUTTING PLANE ALGORITHM

Ghaffari, Hamid 09 1900 (has links)
In this thesis, we first propose a new approach to solving Semi-Infinite Linear Optimization Problems (SILP). The new algorithm adds a violated cut, or several cuts, at each iteration. Our proposed algorithm distinguishes itself from Luo, Roos, and Terlaky's logarithmic barrier decomposition method in three aspects. First, the violated cuts are added at their original locations. Second, we extend the analysis to the case where multiple violated cuts are added simultaneously, instead of adding only one constraint at a time. Finally, at each iteration we update the barrier parameter and the feasible set in the same step. In terms of complexity, we also show that a good approximation of an optimal solution is guaranteed after a finite number of iterations. Our focus in this thesis is mainly on implementing our algorithm to approximate an optimal solution of the SILP. Our numerical experience shows that, unlike other SILP solvers, which suffer from a lack of accuracy, our algorithm can reach high accuracy in competitive time. We discuss the linear algebra involved in an efficient implementation and describe the software that was developed. Our test problem set includes large-scale second-order conic optimization problems. / Thesis / Master of Applied Science (MASc)
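The cutting-plane idea can be sketched as follows: keep a finite working set of constraints, solve the relaxation, and add the most violated constraint(s) at their original locations until no significant violation remains. This toy sketch solves the relaxations with plain LPs via scipy rather than the thesis's interior-point barrier machinery, on a made-up SILP.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative SILP:  min x1 + x2  subject to
#   x1*cos(t) + x2*sin(t) >= 1   for all t in [0, pi/2],
# whose optimum is (1, 1) with objective 2.  The infinite constraint family
# is handled by repeatedly adding the most violated cut at its location.
c = np.array([1.0, 1.0])
ts = [np.pi / 4]                     # initial finite cut set
for _ in range(30):
    A_ub = np.array([[-np.cos(t), -np.sin(t)] for t in ts])  # linprog uses <=
    b_ub = -np.ones(len(ts))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    x = res.x
    # Separation oracle: scan a t-grid for the most violated constraint.
    grid = np.linspace(0, np.pi / 2, 1000)
    viol = 1 - (x[0] * np.cos(grid) + x[1] * np.sin(grid))
    k = int(np.argmax(viol))
    if viol[k] < 1e-6:
        break                        # all constraints approximately satisfied
    ts.append(grid[k])               # add the violated cut at its location
print(x)                             # approaches (1, 1), objective 2
```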
423

Application of Evolutionary Computation - Genetic Algorithm in the Unified Model Design Considerations for ACSR

Liu, Hongyan 01 1900 (has links)
Aluminum Conductor Steel Reinforced (ACSR) conductors have been applied in electric power transmission and distribution for over 80 years. Research on ACSR covers its electrical, mechanical, and thermal properties. We postulate that these properties predict certain behaviours of power transmission and distribution lines. Four models have been established by various authors for determining conductor behaviour: the electromagnetic, mechanical, radial conduction, and steady-state thermal models. These models were developed independently; although each can be used in its own field, no experimental studies verify a combined model, and using them separately does not yield the information required to determine conductor performance. The unified model connects these models probabilistically by considering power system loads and meteorological factors. Based on the unified model and its modules, mathematical tools can be used to optimize the ACSR design and to analyze conductor characteristics as conductor parameters change.

Evolutionary Computation is an optimization process that simulates natural evolution on a computer. Instances based on evolutionary principles are Evolutionary Algorithms, which historically include Genetic Algorithms, Evolution Strategies, and Evolutionary Programming. Genetic Algorithms are used in this work to optimize multi-dimensional problems. Evolutionary Algorithms are empirically robust in finding near-optimal solutions to complex problems through parallel searches of the solution space. Evolutionary Computation imitates natural evolution and genetic variation, and lays a mathematical foundation for problems with many variable inputs. In particular, Genetic Algorithms are widely applied in engineering to problems that are not well served by gradient descent, deterministic hill climbing, or purely random search. This project introduces Evolutionary Algorithms and applies Genetic Algorithms to the unified model. The problem solved by applying Genetic Algorithms to optimize the unified model is to select optimal multi-dimensional input parameters for the model, which provides an effective way to choose conductor sizes for optimizing conductor design. The results give the variation of electrical, thermal, and mechanical characteristics as conductor losses change, and predict the variation ranges of the electric and magnetic fields of three-layer conductors within ASTM standards. The procedure for applying Genetic Algorithms to optimize ACSR design is specific to the problem: objective functions are derived from the characteristics of each model, the results are fed into the unified model, and comparing results yields rules for changing the geometrical parameters of ACSR to reach minimal Joule loss. / Thesis / Master of Engineering (ME)
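A minimal real-coded genetic algorithm of the kind applied here can be sketched as follows. The objective is a placeholder standing in for the unified ACSR model, whose actual modules are not reproduced here; the selection, crossover, and mutation operators are generic choices, not necessarily those used in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def joule_loss(x):
    """Placeholder objective standing in for the unified ACSR model.

    The real objective would couple the electromagnetic, mechanical,
    radial-conduction, and thermal modules; here a smooth toy function
    over two hypothetical geometric parameters is used instead.
    """
    return (x[0] - 2.5) ** 2 + 0.5 * (x[1] - 1.2) ** 2

def ga(obj, bounds, pop_size=40, gens=100, mut=0.1):
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.apply_along_axis(obj, 1, pop)
        parents = pop[np.argsort(fit)[: pop_size // 2]]    # truncation selection
        a = parents[rng.integers(len(parents), size=pop_size)]
        b = parents[rng.integers(len(parents), size=pop_size)]
        w = rng.random((pop_size, 1))
        children = w * a + (1 - w) * b                     # arithmetic crossover
        children += rng.normal(0, mut, children.shape)     # Gaussian mutation
        pop = np.clip(children, lo, hi)
    fit = np.apply_along_axis(obj, 1, pop)
    return pop[np.argmin(fit)]

print(ga(joule_loss, bounds=[(1.0, 4.0), (0.5, 2.0)]))    # approx. [2.5, 1.2]
```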
424

Truss topology optimization with species conserving genetic algorithm

Li, J., Campean, Felician January 2014 (has links)
Abstract: This paper applies the species conserving genetic algorithm (SCGA) to search for multiple solutions of truss topology optimization problems in a single run. A real-valued vector is used to represent the cross-sectional areas, and a member is considered to exist if its area exceeds a critical area. A finite element analysis model has been developed to handle practical modeling considerations such as the existence of members, kinematic stability analysis, and the computation of stresses and displacements. Cross-sectional areas and node connections are taken as decision variables and optimized simultaneously to minimize the total weight of the truss. Numerical results demonstrate that some truss topology optimization examples have many global and local solutions, that different topologies can be found by the proposed algorithm in a single run, and that some trusses have smaller weight than solutions reported in the literature.
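The real-vector encoding with a critical area can be sketched directly: a chromosome of cross-sectional areas is decoded into a topology by dropping members below the threshold, and the weight objective is computed over the surviving members only. The threshold value, density, and ground structure below are illustrative, not taken from the paper.

```python
import numpy as np

CRITICAL_AREA = 1e-4   # members below this cross-section are treated as absent

def decode_topology(areas):
    """Map a real-valued chromosome of cross-sectional areas to a topology.

    Returns the indices of existing members and their areas; absent members
    are simply dropped before any finite-element analysis.
    """
    areas = np.asarray(areas)
    existing = np.flatnonzero(areas > CRITICAL_AREA)
    return existing, areas[existing]

def weight(areas, lengths, density=7850.0):
    """Total weight of the existing members (the minimization objective)."""
    idx, a = decode_topology(areas)
    return density * np.sum(a * np.asarray(lengths)[idx])

# Toy 4-member ground structure: two members fall below the critical area.
chromosome = [2e-3, 5e-5, 1.2e-3, 0.0]
lengths = [1.0, 1.4, 1.0, 1.4]
print(decode_topology(chromosome))   # members 0 and 2 survive
print(weight(chromosome, lengths))
```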
425

Semi-Parametric Techniques for Multi-Response Optimization

Wan, Wen 05 November 2007 (has links)
The multi-response optimization (MRO) problem in response surface methodology (RSM) is common in industry and in many other areas of science. During the optimization stage of MRO, the desirability function method, one of the most flexible and popular MRO approaches and the one used in this research, yields a highly nonlinear objective. We therefore propose using a genetic algorithm (GA), a global optimization tool, to solve the MRO problem. Although a GA is a powerful optimization tool, it suffers from computational inefficiency; to address this, we develop an improved GA that incorporates a local directional search into the GA process. In practice, practitioners usually prefer to identify all near-optimal solutions, or all feasible regions, of the desirability function rather than a single solution or several optimal solutions, because some feasible regions may be more desirable than others on practical grounds. We present a procedure that uses our improved GA to approximately construct all feasible regions for the desirability function; this method is not limited by the number of factors in the design space. Before the optimization stage of MRO, appropriate fitted models for each response are required. The parametric approach, the traditional RSM regression technique, is inflexible and relies heavily on well-specified models for the responses of interest; it can produce highly biased estimates and miscalculated optimal solutions when the user's model is misspecified. Nonparametric methods have been suggested as an alternative, yet they often produce highly variable estimates, especially for sparse, small-sample data, which are typical of traditional RSM experiments. In this research we therefore propose using model robust regression 2 (MRR2), a semi-parametric method that combines parametric and nonparametric methods, retaining advantages of each while reducing some of the disadvantages inherent in both. / Ph. D.
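To make the nonlinearity of the desirability objective concrete, here is a minimal sketch of a one-sided Derringer-Suich desirability and the composite (geometric-mean) desirability a GA would maximize over the factor space. The response values and specification limits are invented for illustration.

```python
import numpy as np

def desirability_larger_is_better(y, lo, hi, s=1.0):
    """Derringer-Suich one-sided desirability: 0 below `lo`, 1 above `hi`."""
    d = np.clip((y - lo) / (hi - lo), 0.0, 1.0)
    return d ** s

def overall_desirability(ys, specs):
    """Geometric mean of individual desirabilities: the highly nonlinear
    objective the GA maximizes over the factor space."""
    ds = [desirability_larger_is_better(y, *spec) for y, spec in zip(ys, specs)]
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Two responses with hypothetical specification windows.
print(overall_desirability([78.0, 0.92], [(60.0, 80.0), (0.8, 1.0)]))  # ~0.735
```

Because the clipping makes the composite function flat outside the specification windows and nonsmooth at their edges, gradient-based optimizers struggle here, which is the motivation for the GA.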
426

Algorithms and Optimization for Wireless Networks

Shi, Yi 09 November 2007 (has links)
Recently, many new types of wireless networks have emerged for both civil and military applications, such as wireless sensor networks and ad hoc networks. To improve the performance of these wireless networks, many advanced communication techniques have been developed at the physical layer. For both theoretical and practical purposes, it is important for a network researcher to understand the performance limits of these new wireless networks. Such performance limits are important not only for theoretical understanding, but also because they can serve as benchmarks for the design of distributed algorithms and protocols. However, due to some unique characteristics of these networks, existing analytical techniques may not apply directly; new theoretical results, along with new mathematical techniques, need to be developed. In this dissertation, we focus on the design of new algorithms and optimization techniques to study the theoretical performance limits of these new wireless networks, mainly sensor networks and ad hoc networks.

Wireless sensor networks consist of battery-powered nodes endowed with a multitude of sensing modalities; a wireless sensor network can provide in-situ, unattended, high-precision, real-time observation over a vast area. Wireless ad hoc networks are characterized by the absence of infrastructure support: nodes organize themselves into a multi-hop network that can operate stand-alone or be connected to a larger network such as the Internet (as in mesh networks). For these new wireless networks, a number of advanced physical layer techniques, e.g., ultra wideband (UWB), multiple-input multiple-output (MIMO), and cognitive radio (CR), have been employed. These new physical layer technologies have the potential to improve network performance, but they also introduce unique design challenges. For example, CR can reconfigure its RF front end on the fly and switch to newly selected frequency bands, making it much more advanced than current multi-channel multi-radio (MC-MR) technology. MC-MR remains a hardware-based radio technology: each radio can operate on only a single channel at a time, and the number of concurrent channels usable at a node is limited by the number of radio interfaces, whereas a CR can use multiple bands at the same time. In addition, an MC-MR-based wireless network typically assumes a set of "common channels" available to all nodes, whereas in a CR network each node may have a different set of available frequency bands depending on its location. These differences make algorithm design for a CR network substantially more complex than under MC-MR. Due to the unique characteristics of these networks, models and constraints at multiple layers (e.g., physical, link, and network) must be considered when exploring network performance limits. The resulting cross-layer formulations are usually very complex and mathematically challenging. We aim to develop novel algorithmic design and optimization techniques that provide optimal or near-optimal solutions. The main contributions of this dissertation are summarized as follows.

1. Node lifetime and rate allocation. We study the sensor node lifetime problem by considering not only maximizing the time until the first node fails, but also maximizing the lifetimes of all nodes in the network. For fairness, we maximize node lifetimes under the lexicographic max-min (LMM) criterion. Our contributions are two-fold. First, we develop a polynomial-time algorithm based on a parametric analysis (PA) technique, which has much lower computational complexity than an existing state-of-the-art approach; we also present a polynomial-time algorithm to calculate a flow routing schedule that achieves the LMM-optimal node lifetime vector. Second, we show that the same approach can address a different but related problem, the LMM rate allocation problem. More importantly, we discover an elegant duality relationship between the LMM node lifetime problem and the LMM rate allocation problem: it suffices to solve only one of the two, and important insights can be obtained from the duality results.

2. Base station placement. Base station location has a significant impact on sensor network lifetime. We aim to determine the base station location that maximizes network lifetime. For a multi-hop sensor network, this problem is particularly challenging because data routing strategies also affect lifetime. We present an approximation algorithm that guarantees (1 − ε)-optimal network lifetime for any desired error bound ε > 0. The key step is to divide the continuous search space into a finite number of subareas and represent each subarea with a "fictitious cost point" (FCP); we prove that the largest network lifetime achieved by one of these FCPs is (1 − ε)-optimal. This approximation algorithm offers a significant reduction in complexity compared to a state-of-the-art algorithm and represents the best known result for this problem.

3. Mobile base station. The benefits of using a mobile base station to prolong sensor network lifetime are well recognized. However, due to the complexity of the problem (time-dependent network topology and traffic routing), theoretical performance limits and provably optimal algorithms have remained difficult to develop. Our main result hinges on a novel transformation of the joint base station movement and flow routing problem from the time domain to the space domain. Based on this transformation, we first show that if the base station may reside only on a set of pre-defined points, we can find the optimal sojourn time at each point so that overall network lifetime is maximized. Building on this, we show that when the base station location is unconstrained (i.e., it can move to any point in the two-dimensional plane), we can develop an approximation algorithm for the joint mobile base station and flow routing problem that guarantees at least (1 − ε) of the maximum network lifetime, where ε can be made arbitrarily small. This is the first theoretical result with a performance guarantee for this problem.

4. Spectrum sharing in CR networks. Cognitive radio is a revolution in radio technology that promises unprecedented flexibility in radio communications and is viewed as an enabling technology for dynamic spectrum access. We consider a cross-layer design of scheduling and routing with the objective of minimizing the network-wide radio spectrum usage required to support a set of user sessions. Here, scheduling determines how a pool of unequal-size frequency bands is used for concurrent transmissions, and routing determines how data is transmitted for each user session. We develop a near-optimal algorithm based on a sequential fixing (SF) technique, where the scheduling variables are determined iteratively through a sequence of linear programs (LPs). Once these scheduling variables are fixed, the values of the remaining variables in the optimization problem are obtained by solving an LP.

5. Power control in CR networks. We further consider variable transmission power in CR networks, with the objective of minimizing the total required bandwidth-footprint product (BFP) to support a set of user sessions. As a basis, we first develop an interference model for scheduling when power control is performed at each node. This model extends existing so-called protocol models for wireless networks, in which transmission power is fixed, and can therefore be used for a broad range of problems where power control is part of the optimization space. An efficient solution procedure based on the branch-and-bound framework and convex hull relaxations provides (1 − ε)-optimal solutions. This is the first theoretical result on this important problem. / Ph. D.
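To illustrate contribution 4, here is a generic sequential-fixing skeleton: solve an LP relaxation, fix the relaxed scheduling variable closest to an integer, and repeat until all scheduling variables are fixed, then read off the remaining variables from the final LP. The actual formulation in the dissertation involves spectrum-scheduling and routing constraints; the tiny LP below is a stand-in.

```python
import numpy as np
from scipy.optimize import linprog

def sequential_fixing(c, A_ub, b_ub, n_binary):
    """Generic sequential-fixing skeleton in the spirit of the SF technique.

    Solves a sequence of LP relaxations; after each solve, the relaxed
    binary (scheduling) variable closest to 0 or 1 is fixed, until the
    first n_binary variables are all integral.
    """
    fixed = {}
    while len(fixed) < n_binary:
        bounds = [(fixed.get(i, 0), fixed.get(i, 1)) if i < n_binary
                  else (0, None) for i in range(len(c))]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        x = res.x
        free = [i for i in range(n_binary) if i not in fixed]
        # Fix the free variable whose relaxed value is nearest an integer.
        i = min(free, key=lambda j: min(x[j], 1 - x[j]))
        fixed[i] = int(round(x[i]))
    return fixed

# Tiny example: 3 binary scheduling variables plus 1 continuous variable,
# maximizing a linear utility (expressed as a minimization).
c = np.array([-1.0, -2.0, -1.5, -0.5])
A_ub = np.array([[1, 1, 1, 0], [0, 1, 0, 1]])
b_ub = np.array([2.0, 1.5])
print(sequential_fixing(c, A_ub, b_ub, n_binary=3))  # {1: 1, 2: 1, 0: 0}
```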
427

Development of a Threat Assessment Algorithm for Intersection Collision Avoidance Systems

Doerzaph, Zachary R. 11 February 2008 (has links)
Relative to other roadway segments, intersections occupy a small portion of the overall infrastructure; however, they are the location of nearly 41% of annual automotive crashes in the United States. Intersections are thus an inherently dangerous roadway element and a prime location for vehicle conflicts. Traditional safety treatments are effective at addressing certain intersection safety deficiencies, but cumulative traffic data suggest these treatments do not address a large portion of the crashes that occur each year. Intersection Collision Avoidance Systems (ICAS) represent a new breed of countermeasures focused on the crash types that traditional methods have not reduced. Incursion systems, a subset of ICAS, specifically target crashes that result from the violation of a traffic control device. Intersection Collision Avoidance Systems to address Violations (ICAS-V) monitor traffic approaching an intersection through a network of in-vehicle sensors, infrastructure-mounted sensors, and communication equipment. A threat-assessment algorithm uses these sensor inputs to predict the driver's intended intersection maneuver. If the system predicts a violation, it delivers a timely warning aimed at compelling the driver to stop, helping the driver avoid a potential crash with adjacent traffic. This dissertation describes an investigation of intersection approach behavior aimed at developing a threat assessment algorithm for stop-sign intersections. Infrastructure-based naturalistic vehicle approach trajectories were collected at live intersections, then compiled and analyzed to understand how drivers approach intersections under various speeds and environmental conditions. Six stop-controlled intersection approaches across five intersections in the New River Valley, Virginia area were selected as test sites; data were collected from each site for at least two months, yielding over sixteen total months of data. A series of statistical analysis techniques was applied to construct a set of threat assessment algorithms for stop-controlled intersections. These analyses identified characteristics of intersection approaches that indicate driver intent at the stop sign, and models were constructed to predict driver stopping intent from measured vehicle kinematics. The models were thoroughly tested in simulation and evaluated with signal detection theory. The overall output of this work is a set of algorithms that may be integrated into an ICAS-V for on-road testing. / Ph. D.
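A minimal sketch of the kind of kinematics-based stopping-intent model described above might look as follows. The features, the logistic form, and all data here are synthetic stand-ins, not the models or trajectories from the dissertation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for infrastructure-measured approach trajectories:
# features = [speed (m/s), distance to stop bar (m), longitudinal accel
# (m/s^2)] sampled at a fixed upstream trigger point; label 1 = violation.
n = 500
speed = rng.uniform(5, 20, n)
dist = rng.uniform(10, 60, n)
accel = rng.normal(-1.5, 1.0, n)
# Toy ground truth: fast, close, and not braking implies violation intent.
logit = 0.4 * speed - 0.08 * dist + 1.2 * accel - 1.0
label = (logit + rng.normal(0, 1, n) > 0).astype(int)

X = np.column_stack([speed, dist, accel])
model = LogisticRegression().fit(X, label)

# Warn when the predicted violation probability crosses a threshold chosen
# from signal-detection trade-offs (hit rate versus false alarms).
p = model.predict_proba([[18.0, 25.0, -0.2]])[0, 1]
print(f"violation probability: {p:.2f}", "-> warn" if p > 0.5 else "-> no warn")
```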
428

Estimating Reachability Set Sizes in Dynamic Graphs

Aji, Sudarshan Mandayam 01 July 2014 (has links)
Graphs are a commonly used abstraction for diverse kinds of interactions, e.g., on Twitter and Facebook. Different topological properties of such graphs are computed to gain insight into their structure, but computing properties of large real networks is computationally very challenging. Further, most real-world networks are dynamic, i.e., they change over time; there is therefore a need for efficient dynamic algorithms that offer good space-time trade-offs. In this thesis we study the problem of computing the reachability set size of a vertex, a fundamental problem with applications in databases and social networks. We develop the first Giraph-based algorithms for different dynamic versions of these problems, which scale to graphs with millions of edges. / Master of Science
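A static baseline for the quantity studied here is a breadth-first search from the vertex; a dynamic algorithm maintains the answer under edge updates rather than recomputing it, and the thesis's Giraph versions distribute the computation across vertex-centric supersteps. The sketch below shows only the baseline and a naive recomputation after an edge insertion.

```python
from collections import deque, defaultdict

def reachability_size(adj, v):
    """Size of the set of vertices reachable from v (BFS baseline)."""
    seen = {v}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return len(seen)

adj = defaultdict(list)
for u, w in [(1, 2), (2, 3), (3, 1), (3, 4), (5, 4)]:
    adj[u].append(w)
print(reachability_size(adj, 1))   # 4: vertices {1, 2, 3, 4}
adj[4].append(5)                   # dynamic update: new edge 4 -> 5
print(reachability_size(adj, 1))   # 5 after the insertion
```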
429

A Gillespie-Type Algorithm for Particle Based Stochastic Model on Lattice

Liu, Weigang January 2019 (has links)
In this thesis, I propose a general stochastic simulation algorithm for particle-based lattice models using the concepts of Gillespie's stochastic simulation algorithm, which was originally designed for well-stirred systems. I describe the details of this method and analyze its complexity compared with the StochSim algorithm, another simulation algorithm originally proposed for stochastic lattice models. I compare the performance of both algorithms on two examples: the May-Leonard model and the Ziff-Gulari-Barshad model. Comparison of the simulation results from both algorithms validates our claim that the newly proposed algorithm matches StochSim in simulation accuracy. I also compare the efficiency of both algorithms using the CPU cost of each code and conclude that the new algorithm is as efficient as StochSim in most test cases, while performing even better in certain specific cases. / Computer simulation has been developed for almost a century. A stochastic lattice model, which follows the physics concept of a lattice, is a system in which individual entities live on a grid and exhibit random behaviors according to specific rules; such models are studied mainly through computer simulation. The most widely used simulation method for stochastic lattice systems is the StochSim algorithm, which randomly picks an entity and then determines its behavior from a set of random rules. Our goal is to develop new simulation methods that make it more convenient to simulate and analyze stochastic lattice systems. In this thesis I propose another type of simulation method for stochastic lattice models using entirely different concepts and procedures. I developed a simulation package, applied it to two examples using both methods, and conducted a series of numerical experiments to compare their performance. I conclude that the two methods are roughly equivalent and that the new method performs better than the old one in certain special cases.
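The core Gillespie mechanics on a lattice can be sketched as follows: sum the per-site event propensities, draw an exponential waiting time from the total rate, and select one event in proportion to its rate. The toy model below (a one-dimensional contact-like process with periodic boundaries) and its rates are illustrative; they are not the May-Leonard or Ziff-Gulari-Barshad models from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gillespie-type simulation of a toy contact process on a 1-D lattice:
# an occupied site infects an empty neighbor at rate `beta` per occupied
# neighbor and recovers at rate `gamma`.  Propensities are recomputed each
# step for clarity; an efficient implementation updates them locally.
L, beta, gamma, t_end = 50, 1.5, 1.0, 10.0
state = np.zeros(L, dtype=int)
state[L // 2] = 1
t = 0.0
while t < t_end and state.any():
    occ_nb = np.roll(state, 1) + np.roll(state, -1)   # occupied neighbors
    infect = beta * occ_nb * (state == 0)             # per-site event rates
    recover = gamma * state
    rates = np.concatenate([infect, recover])
    total = rates.sum()
    t += rng.exponential(1.0 / total)                 # Gillespie time step
    k = rng.choice(rates.size, p=rates / total)       # pick one event
    if k < L:
        state[k] = 1                                  # infection event
    else:
        state[k - L] = 0                              # recovery event
print(t, state.sum())
```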
430

Synthesizing a Hybrid Benchmark Suite with BenchPrime

Wu, Xiaolong 09 October 2018 (has links)
This paper presents BenchPrime, an automated benchmark analysis toolset that is systematic and extensible for analyzing the similarity and diversity of benchmark suites. BenchPrime takes multiple benchmark suites and their evaluation metrics as inputs and generates a hybrid benchmark suite comprising only essential applications. Unlike prior work, BenchPrime uses linear discriminant analysis rather than principal component analysis, and it selects the best clustering algorithm and the optimized number of clusters in an automated, metric-tailored way, thereby achieving high accuracy. In addition, BenchPrime ranks the benchmark suites in terms of their application-set diversity and estimates how unique each benchmark suite is compared to the other suites. As a case study, this work compares, for the first time, DenBench with MediaBench and MiBench using four different metrics to provide a multi-dimensional understanding of the benchmark suites. For each metric, BenchPrime measures the degree to which DenBench applications are irreplaceable by those in MediaBench and MiBench. This provides a means of identifying an essential subset of the three benchmark suites without compromising the application balance of the full set. The experimental results show that the necessity of including DenBench applications varies across the target metrics and that significant redundancy exists among the three benchmark suites. / Master of Science / Representative benchmarks are widely used in research to achieve accurate and fair evaluation of hardware and software techniques. However, redundant applications in a benchmark set can skew the average toward redundant characteristics, overestimating the benefit of any proposed technique. This work proposes a machine-learning-based framework, BenchPrime, that generates a hybrid benchmark suite comprising only essential applications. In addition, BenchPrime ranks the benchmark suites in terms of their application-set diversity and estimates how unique each benchmark suite is compared to the other suites.
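A rough sketch of the LDA-plus-clustering pipeline described above, on synthetic data: suite labels drive an LDA projection (instead of PCA), a cluster count is chosen automatically by a score, and one representative application per cluster forms the hybrid suite. The metric vectors, the silhouette criterion, and KMeans are stand-ins for whatever BenchPrime actually selects.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)

# Synthetic stand-in: 30 applications from 3 suites, each described by a
# vector of characteristics measured under one evaluation metric.
X = np.vstack([rng.normal(m, 1.0, size=(10, 6)) for m in (0.0, 2.0, 4.0)])
suite = np.repeat([0, 1, 2], 10)

# LDA (rather than PCA, as in BenchPrime) projects onto axes that best
# separate the suites, using suite membership as the class label.
Z = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, suite)

# Pick the cluster count automatically, then keep one representative
# application per cluster to form the hybrid suite.
best_k, best_s = 2, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z)
    s = silhouette_score(Z, labels)
    if s > best_s:
        best_k, best_s = k, s
km = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(Z)
reps = [int(np.argmin(np.linalg.norm(Z - c, axis=1))) for c in km.cluster_centers_]
print(best_k, sorted(reps))   # hybrid suite = these representative apps
```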
