1091

A New Distributed QoS Routing Algorithm Based on Fano's Method

Deb, S.S., Woodward, Mike E. January 2005 (has links)
No / Providing a guaranteed quality of service (QoS) is essential to many real-time applications. The existing distributed QoS routing algorithms are based on either shortest path or flooding, and both tend to have high message overhead. A new distributed unicast QoS routing algorithm based on Fano's decoding method is studied. Fano's decoding method is a technique from error control coding that attempts to trace an optimal path probabilistically. The similarity of various aspects of Fano's decoding method to a QoS routing algorithm, and the benefits it can provide, encourage us to investigate the possibility of using it in QoS routing. This is the first known attempt to adapt an error control technique based on Fano's decoding method for the purpose of QoS routing in fixed wired networks. Simulation results demonstrate the effectiveness of the proposed algorithm in terms of message overhead and success ratio (the percentage of paths obtained that satisfy the given QoS constraints). The message overhead of the proposed algorithm is shown to be lower than that of flooding-based algorithms while maintaining a similar success ratio, and lower than that of distance-vector-based algorithms for all but very sparsely connected networks, without compromising the success ratio. Nodal storage is also considerably lower than for most other contemporary QoS routing algorithms.
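To make the analogy concrete, the following is a minimal, hypothetical sketch (in Python) of a Fano-style threshold search adapted to delay-constrained unicast path finding. The metric, the initial threshold, the step delta, and the graph representation are all illustrative assumptions, not the formulation in the thesis; loosening the threshold on failure plays the role of the dynamic threshold adjustment in Fano's decoder.

    # Hypothetical sketch: Fano-style threshold search for a delay-bounded path.
    # graph: {node: [neighbors]}; delays: {(u, v): link delay} for each direction.
    def fano_route(graph, delays, src, dst, max_delay, delta=1.0):
        def metric(u, v, used):
            # Delay budget remaining after crossing link (u, v); higher is better.
            return max_delay - used - delays[(u, v)]

        threshold = max_delay / 2.0              # start strict, loosen toward 0
        while threshold >= 0.0:
            path, used, tried = [src], 0.0, {src: set()}
            while path:
                node = path[-1]
                if node == dst:
                    return path                  # metric >= 0 kept used <= max_delay
                nxt = [v for v in graph[node]
                       if v not in path and v not in tried[node]
                       and metric(node, v, used) >= threshold]
                if nxt:
                    v = max(nxt, key=lambda v: metric(node, v, used))  # best branch first
                    tried[node].add(v)
                    tried.setdefault(v, set())
                    used += delays[(node, v)]
                    path.append(v)
                else:
                    path.pop()                   # dead end: back up one hop
                    if path:
                        used -= delays[(path[-1], node)]
            threshold -= delta                   # search failed: loosen and retry
        return None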
1092

Investigating the impact of discomfort in load scheduling using genetic algorithm

Anuebunwa, U.R., Rajamani, Haile S., Pillai, Prashant, Okpako, O. 24 November 2016 (has links)
Yes / Energy consumers often experience some discomfort when participating in demand response programs, as they are asked to follow a suggested energy consumption profile generated by scheduling algorithms for the purpose of optimizing grid performance. This is because people naturally do not like to be told what to do or when to use their appliances. Advances in renewable energy have enabled consumers to become energy suppliers as well, able to sell excess generated energy at times of the day when energy cost is high, or to consume it internally if required, thereby offsetting the adverse effect of this discomfort; the majority of consumers, however, still rely wholly on supply from the grid. This paper investigates the impact on the comfort of users who are active participants in demand response programs and seeks ways to minimize load scheduling discomfort in order to encourage user participation.
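A toy sketch (in Python) of the trade-off such a scheduler must encode: a genetic algorithm evolves a 24-hour consumption profile whose fitness penalizes both energy cost and squared deviation from the user's preferred profile, the deviation serving as a simple discomfort measure. The tariff, weights, and preferred profile below are invented for illustration and are not the paper's model.

    import random

    HOURS = 24
    price = [0.10] * 7 + [0.25] * 12 + [0.15] * 5   # illustrative tariff ($/kWh)
    preferred = [1.0] * HOURS                       # user's preferred hourly load (kWh)
    DAILY_ENERGY = sum(preferred)                   # total energy that must be scheduled

    def fitness(profile, w_cost=1.0, w_discomfort=0.5):
        cost = sum(p * q for p, q in zip(price, profile))
        discomfort = sum((q - r) ** 2 for q, r in zip(profile, preferred))
        return w_cost * cost + w_discomfort * discomfort   # lower is better

    def normalise(profile):
        s = sum(profile)
        return [q * DAILY_ENERGY / s for q in profile]     # hold total energy fixed

    def evolve(pop_size=50, generations=200):
        pop = [normalise([random.random() for _ in range(HOURS)])
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 2]                 # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, HOURS)
                child = a[:cut] + b[cut:]                  # one-point crossover
                child[random.randrange(HOURS)] *= random.uniform(0.8, 1.2)  # mutation
                children.append(normalise(child))
            pop = parents + children
        return min(pop, key=fitness)

Raising w_discomfort biases the schedule toward the preferred profile at the expense of cost savings, which is exactly the tension the paper investigates.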
1093

Humans vs. Machines: Artificial Intelligence in Recruiting: Beyond the Hype, Unveiling the Real-World Impact

Picardo, Bianca, 0009-0004-0458-9972 05 1900 (has links)
Artificial Intelligence (AI) has reimagined Google search and how information is accessed daily. With the ability to process and digest large amounts of information at lightning speed, AI can also recognize patterns and commands, making information more relevant and useful for the end user. AI has a promising future when strategically placed within functions that can benefit tremendously from its capabilities, ultimately prompting business professionals to rethink how they approach complex business problems and to ask where there may be strategic opportunities to replace human capital with machines. Given the excitement around newly released AI capabilities like ChatGPT, questions have been raised about the validity of AI and whether its use is ethical, especially in a sensitive environment like talent recruiting. The hype associated with AI has led to the assumption that it will do what it is told, and to its being marketed as an asset that can be used anywhere. This hype not only concerns the recruiting world but also raises a much larger long-term concern about liability. This paper focuses on the use of AI in recruiting. On the surface, AI offers many benefits to organizations, but many organizations lack the knowledge and understanding of how it works. I therefore surveyed recruiters and managers to learn more about their dependency on AI for hiring decisions, especially when leveraging applicant tracking systems (ATS). In a follow-up technical white paper, I evaluate the hype around AI in recruitment and identify best practices for companies to follow when considering implementing AI in this function, including how, and indeed whether, AI should be used in recruiting. / Business Administration/Human Resource Management
1094

Design and Evaluation of a Data-distributed Massively Parallel Implementation of a Global Optimization Algorithm---DIRECT

He, Jian 12 January 2008 (has links)
The present work aims at an efficient, portable, and robust design of a data-distributed massively parallel DIRECT, the deterministic global optimization algorithm widely used in multidisciplinary engineering design, biological science, and physical science applications. The original algorithm is modified to adapt to different problem scales and optimization (exploration vs. exploitation) goals. Enhanced with a memory reduction technique, dynamic data structures are used to organize local data, handle unpredictable memory requirements, reduce memory usage, and share data across multiple processors. The parallel scheme employs multilevel functional and data parallelism to boost concurrency and mitigate data dependency, thus improving load balancing and scalability. In addition, checkpointing features are integrated to provide fault tolerance and hot restarts. Important algorithm modifications and design considerations are discussed regarding data structures, parallel schemes, error handling, and portability. Using several benchmark functions and real-world applications, the present work is evaluated in terms of optimization effectiveness, data structure efficiency, memory usage, parallel performance, and checkpointing overhead. Modeling and analysis techniques are used to investigate the design effectiveness and performance sensitivity under various problem structures, parallel schemes, and system settings. Theoretical and experimental results are compared for two parallel clusters with different system scales and network connectivity. An analytical bounding model is constructed to measure load balancing performance under different schemes. Additionally, linear regression models are used to characterize the two major overhead sources, interprocessor communication and processor idleness, and are also applied to the isoefficiency functions in the scalability analysis. For a variety of high-dimensional problems and large-scale systems, the data-distributed massively parallel design has achieved reasonable performance. The results of the performance study provide guidance for efficient problem and scheme configuration. More importantly, the generalized design considerations and analysis techniques are beneficial for transforming many global search algorithms into effective large-scale parallel optimization tools. / Ph. D.
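For readers unfamiliar with DIRECT, here is a much-simplified, serial, one-dimensional sketch (in Python) of its sample-and-divide loop. The thesis's actual subject, the data-distributed parallel design, is not represented, and the selection rule below simply divides the best interval in each size class rather than computing the true potentially optimal set via a convex hull.

    # Simplified serial sketch of DIRECT in one dimension (not the parallel design).
    def direct_1d(f, lo, hi, iterations=30):
        intervals = [((lo + hi) / 2.0, (hi - lo) / 2.0)]   # (center, half-width)
        fvals = {intervals[0][0]: f(intervals[0][0])}
        for _ in range(iterations):
            # Pick the best-valued interval in each size class.
            by_size = {}
            for c, h in intervals:
                if h not in by_size or fvals[c] < fvals[by_size[h][0]]:
                    by_size[h] = (c, h)
            # Trisect each selected interval, sampling the two new centers.
            for c, h in by_size.values():
                intervals.remove((c, h))
                step = 2.0 * h / 3.0
                for nc in (c - step, c, c + step):
                    if nc not in fvals:
                        fvals[nc] = f(nc)
                    intervals.append((nc, h / 3.0))
        best_c = min(intervals, key=lambda iv: fvals[iv[0]])[0]
        return best_c, fvals[best_c]

For example, direct_1d(lambda x: (x - 0.3) ** 2, 0.0, 1.0) converges toward x = 0.3 without any derivative information.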
1095

Agricultural BMP Placement for Cost-effective Pollution Control at the Watershed Level

Veith, Tamie L. 26 April 2002 (has links)
The overall goal of this research was to increase, relative to targeting recommendations, the cost-effectiveness of pollution reduction measures within a watershed. The goal was met through development of an optimization procedure for best management practice (BMP) placement at the watershed level. The procedure combines an optimization component, written in C++, with spatially variable nonpoint source (NPS) prediction and economic analysis components, written in the ArcView geographic information system scripting language. The procedure is modular in design, allowing modifications or enhancements to the components while maintaining the overall theory. The optimization component uses a genetic algorithm to optimize a lexicographic multi-objective function of pollution reduction and cost increase: the procedure first maximizes pollution reduction to meet a specified goal, or maximum allowable load, and then minimizes cost increase. For the NPS component, a sediment delivery technique was developed and combined with the Universal Soil Loss Equation to predict average annual sediment yield at the watershed outlet. Although this evaluation considered only erosion, the NPS pollutant fitness score allows for evaluation of multiple pollutants, based on a prioritization of each pollutant. The economic component considers farm-level public and private costs, accounting for crop productivity levels by soil and for enterprise budgets by field. The economic fitness score rewards scenarios in which costs decrease or are distributed more evenly across farms; it also considers the amounts of cropland, hay, and pasture needed to meet feed and manure/poultry litter spreading requirements. Application to two watersheds demonstrated that the procedure optimized BMP placement, locating scenarios more cost-effective than the targeting strategy solution: it identified solutions with lower costs than the targeting solution for the same level of pollution reduction, and the benefit-to-cost ratio, including use of the procedure and implementation of the resulting solutions, was greater for the optimization procedure than for the targeting strategy. The optimization procedure identifies multiple near-optimal solutions, and because it creates and evaluates scenarios repeatedly without requiring human interaction, many more scenarios can be evaluated than is feasible manually. / Ph. D.
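The lexicographic ordering described above fits in a few lines; in this hypothetical sketch (in Python) the field names, units, and numbers are invented. A scenario that meets the pollution goal always outranks one that does not; among scenarios meeting the goal, lower cost wins, and among scenarios missing it, greater pollution reduction wins.

    def lexicographic_key(scenario, goal_load):
        load = scenario["sediment_load"]   # predicted average annual sediment yield
        cost = scenario["net_cost"]        # farm-level public plus private cost
        # Ascending sort: goal-meeting scenarios first (class 0), ranked by cost;
        # the rest (class 1) ranked by how close they come to the goal.
        return (0, cost) if load <= goal_load else (1, load)

    scenarios = [
        {"sediment_load": 90.0, "net_cost": 1200.0},
        {"sediment_load": 80.0, "net_cost": 1500.0},
        {"sediment_load": 120.0, "net_cost": 900.0},
    ]
    best = min(scenarios, key=lambda s: lexicographic_key(s, goal_load=100.0))
    # best is the cheapest scenario whose load meets the 100.0 goal.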
1096

Bayesian hierarchical modelling of dual response surfaces

Chen, Younan 08 December 2005 (has links)
Dual response surface methodology (Vining and Myers, 1990) has been successfully used as a cost-effective approach to improving the quality of products and processes since Taguchi (1985) introduced the idea of robust parameter design for quality improvement in the United States in the mid-1980s. The original procedure uses the mean and the standard deviation of the characteristic to form a dual response system in a linear model structure and estimates the model coefficients using least squares methods. In this dissertation, a Bayesian hierarchical approach is proposed to model the dual response system so that the inherent hierarchical variance structure of the response can be modeled naturally. The Bayesian model is developed for both univariate and multivariate dual response surfaces, and for both fully replicated and partially replicated dual response surface designs. To evaluate its performance, the Bayesian method has been compared with the original method under a wide range of scenarios, and it shows higher efficiency and more robustness. In applications, the Bayesian approach retains all the advantages of the original dual response surface modeling method; moreover, the Bayesian analysis allows inference on the uncertainty of the model parameters and thus gives practitioners complete information on the distribution of the characteristic of interest. / Ph. D.
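For context, the original formulation fits separate second-order models for the process mean and standard deviation and then optimizes one subject to a constraint on the other; in outline (target-is-best case, with T the target for the mean and x the vector of control factors):

    \hat{\mu}(\mathbf{x})    = b_0 + \mathbf{x}'\mathbf{b} + \mathbf{x}'\mathbf{B}\mathbf{x}
    \hat{\sigma}(\mathbf{x}) = c_0 + \mathbf{x}'\mathbf{c} + \mathbf{x}'\mathbf{C}\mathbf{x}
    \min_{\mathbf{x}} \hat{\sigma}(\mathbf{x}) \quad \text{subject to} \quad \hat{\mu}(\mathbf{x}) = T

The Bayesian hierarchical approach replaces the two separate least squares fits with a joint model whose variance structure is estimated along with the surfaces.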
1097

Semiparametric Techniques for Response Surface Methodology

Pickle, Stephanie M. 14 September 2006 (has links)
Many industrial statisticians employ the techniques of Response Surface Methodology (RSM) to study and optimize products and processes. A second-order Taylor series approximation is commonly used to model the data; however, parametric models are not always adequate, and in these situations any degree of model misspecification may result in serious bias of the estimated response. Nonparametric methods have been suggested as an alternative, as they can capture structure in the data that a misspecified parametric model cannot. Yet nonparametric fits may be highly variable, especially in the small-sample settings common in RSM. Therefore, semiparametric regression techniques are proposed for use in the RSM setting. These methods are applied to an elementary RSM problem as well as to the robust parameter design problem. / Ph. D.
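One standard semiparametric device in this literature, shown here only to illustrate the idea rather than as the dissertation's exact estimator, convexly combines a parametric fit with a nonparametric one:

    \hat{y}_{\mathrm{SP}}(\mathbf{x}) = \lambda\,\hat{y}_{\mathrm{NP}}(\mathbf{x}) + (1 - \lambda)\,\hat{y}_{\mathrm{P}}(\mathbf{x}), \qquad 0 \le \lambda \le 1

Estimating the mixing parameter \lambda from the data lets the fit slide between the pure parametric model (\lambda = 0), which is efficient when correctly specified, and the pure nonparametric smooth (\lambda = 1), which protects against misspecification.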
1098

On the Effects of Noise on Parameter Identification Optimization Problems

Vugrin, Kay Ellen White 06 May 2005 (has links)
The calibration of model parameters is an important step in model development. Commonly, system output is measured, and model parameters are iteratively varied until the model output is a good match to the measured system output; optimization algorithms are often used to identify the model parameter values. The presence of noise is difficult to avoid when physical processes are used to calibrate models, due to measurement error, model structure error, and errors arising from numerical techniques and approximate solutions. Our study focuses on the effects of noise in parameter identification optimization problems. We generate six test problems, including five perturbations of a smooth problem; a previously studied groundwater parameter identification problem serves as our seventh test problem. We test the Nelder-Mead Algorithm, a combination of the Nelder-Mead Algorithm and Simulated Annealing, and the Shuffled Complex Evolution Method on these problems. Comparison of the optimization results reveals the effects of noise on optimization performance, including an increase in fitness values and a decrease in the number of fit evaluations. We also vary the internal algorithmic parameters and present numerical results indicating that different parameter values can cause profound differences in optimization results for all three algorithms. A variation of the generally accepted parameter values for the Nelder-Mead Algorithm is recommended, while the Nelder-Mead/Simulated Annealing hybrid and the Shuffled Complex Evolution Method prove too problem-dependent for general parameter recommendations. Finally, we prove new convergence results for the Nelder-Mead/Simulated Annealing hybrid in both the smooth and noisy cases. / Ph. D.
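A minimal sketch (in Python, using SciPy) of this kind of experiment: a smooth test problem and a noise-perturbed version of it are both handed to the Nelder-Mead implementation. The test function, dimension, noise level, and seed are illustrative choices, not the dissertation's.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def smooth(x):
        return np.sum((x - 1.0) ** 2)               # smooth quadratic test problem

    def noisy(x, sigma=0.05):
        return smooth(x) + rng.normal(0.0, sigma)   # additive evaluation noise

    x0 = np.zeros(4)
    res_smooth = minimize(smooth, x0, method="Nelder-Mead")
    res_noisy = minimize(noisy, x0, method="Nelder-Mead")
    # Noise typically inflates the final objective value and can trigger early
    # termination, mirroring the effects reported above.
    print(res_smooth.fun, res_smooth.nfev)
    print(res_noisy.fun, res_noisy.nfev)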
1099

Reduction of Printed Circuit Card Placement Time Through the Implementation of Panelization

Tester, John T. 09 October 1999 (has links)
Decreasing the cycle time of panels in the printed circuit card manufacturing process has been a significant research topic over the past decade. The research objective in this literature has been to reduce placement machine cycle times by finding the optimal placement sequences and component-feeder allocation for a given, fixed panel component layout on a given machine type. Until now, no research has been found that allows alteration of the panel configuration itself when panelization is part of the electronic panel design; this research is the first effort to incorporate panelization into the cycle time reduction field. The PCB circuit design is not altered; rather, the panel design (i.e., the arrangement of the PCBs in the panel) is altered to reduce the panel assembly time. Component placement problem models are developed for three types of machines: the automated insertion machine (AIM), the pick-and-place machine (PAPM), and the rotary turret head machine (RTHM). Two solution procedures are developed, both based on a genetic algorithm (GA) approach. One procedure simultaneously produces solutions for the best panel design and component placement sequence. The other first selects a best panel design based on an estimate of its worth to the minimization problem, and then uses a more traditional GA to solve the component placement and component type allocation problem for that panel design. Experiments were conducted to discover situations where the consideration of panelization can make a significant difference in panel assembly times. It was shown that the PAPM scenario benefits most from panelization and the RTHM the least, though all three machine types show improvements under certain conditions established in the experiments. NOTE: An updated copy of this ETD was added on 09/17/2010. / Ph. D.
1100

Interfacing VHDL performance models to algorithm partitioning tools

Balasubramanian, Priya 13 February 2009 (has links)
Performance modeling is widely used to efficiently and rapidly assess the ability of multiprocessor architectures to effectively execute a given algorithm. In a typical design environment, VHDL performance models of hardware components are interconnected to form structural models of the target multiprocessor architectures. Algorithm features are described in application-specific tools, other automated tools partition the software among the various processors, and performance models evaluate the system performance. Since several iterations may be needed before a suitable configuration is obtained, a set of tools that directly interfaces the VHDL performance models to the algorithm partitioning tools will significantly reduce the time and effort needed to prepare the various models. In order to develop the interface tools, it is essential to determine the information that needs to be interchanged between the two systems. The primary goals of this thesis are to study the various models, determine the information that needs to be exchanged, and develop tools to automatically extract the desired information from each model. / Master of Science
