1111

Limited Memory Space Dilation and Reduction Algorithms

Ansari, Zafar A. 11 August 1998
In this thesis, we present variants of Shor and Zhurbenko's r-algorithm, motivated by the memoryless and limited memory updates for differentiable quasi-Newton methods. This well-known r-algorithm, which employs a space dilation strategy in the direction of the difference between two successive subgradients, is recognized as being one of the most effective procedures for solving nondifferentiable optimization problems. However, the method needs to store the space dilation matrix and update it at every iteration, resulting in a substantial computational burden for large-sized problems. To circumvent this difficulty, we first develop a memoryless update scheme. In the space transformation sense, the new update scheme can be viewed as a combination of space dilation and reduction operations. We prove convergence of this new algorithm, and demonstrate how it can be used in conjunction with a variable target value method that allows a practical, convergent implementation of the method. For performance comparisons, we examine other memoryless and limited memory variants, and also propose a modification of a related algorithm due to Polyak that employs a projection on a pair of Kelley's cutting planes. These variants are tested along with Shor's r-algorithm on a set of standard test problems from the literature as well as on randomly generated dual transportation and assignment problems. Our computational experiments reveal that the proposed memoryless space dilation and reduction algorithm (VT-MSDR) and the proposed modification of the Polyak-Kelley cutting plane method (VT-PKC) provide an overall competitive performance relative to the other methods tested with respect to solution quality and computational effort. The r-algorithm becomes increasingly expensive with an increase in problem size, while not providing any gain in solution quality. The fixed dilation (with no reduction) strategy (VT-MSD) provides a comparable, though second-choice, alternative to VT-MSDR. Employing a two-step limited memory extension over VT-MSD sometimes helps in improving the solution quality, although it adds to computational effort, and is not as robust a procedure. / Master of Science
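As a concrete illustration of the memoryless idea, the sketch below applies a single rank-one space dilation built from the difference of two successive subgradients and then discards it, so no dilation matrix is ever stored. This is a minimal, hypothetical simplification for exposition; the dilation coefficient, the diminishing step rule, and the example function are assumptions, not the thesis's VT-MSDR implementation.

```python
import numpy as np

def memoryless_space_dilation(f, subgrad, x0, alpha=2.0, step0=1.0, iters=200):
    """Illustrative memoryless space-dilation subgradient method.

    Each step builds one rank-one dilation from the difference of the two
    most recent subgradients, applies it to the current subgradient, and
    then discards it -- no dilation matrix is accumulated across iterations.
    """
    x = x0.copy()
    g_prev = subgrad(x)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, iters + 1):
        g = subgrad(x)
        r = g - g_prev                         # dilation direction
        nr = np.linalg.norm(r)
        if nr > 1e-12:
            xi = r / nr
            # shrink the component of g along xi by the factor 1/alpha
            g_dil = g + (1.0 / alpha - 1.0) * np.dot(xi, g) * xi
        else:
            g_dil = g
        ng = np.linalg.norm(g_dil)
        if ng > 1e-12:
            x = x - (step0 / k) * g_dil / ng   # diminishing step size
        g_prev = g
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    return best_x, best_f

# usage: minimize the nondifferentiable f(x) = |x1| + 2|x2|
f = lambda x: abs(x[0]) + 2 * abs(x[1])
sg = lambda x: np.array([np.sign(x[0]), 2 * np.sign(x[1])])
x_best, f_best = memoryless_space_dilation(f, sg, np.array([3.0, -4.0]))
print(x_best, f_best)
```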
1112

Three Essays on HRM Algorithms: Where Do We Go from Here?

Cheng, Minghui January 2024
The field of Human Resource Management (HRM) has experienced a significant transformation with the emergence of big data and algorithms. Major technology companies have introduced software and platforms for analyzing various HRM practices, such as hiring, compensation, employee engagement, and turnover management, utilizing algorithmic approaches. However, scholarly research has taken a cautious stance, questioning the strategic value and causal inference basis of these tools, while also raising concerns about bias, discrimination, and ethical issues in the applications of algorithms. Despite these concerns, algorithmic management has gained prominence in large organizations, shaping workforce management practices. This thesis aims to address the gap between the rapidly changing market of HRM algorithms and the lack of theoretical understanding. The thesis begins by conducting a comprehensive review of HRM algorithms in HRM practice and scholarship, clarifying their definition, exploring their unique features, and identifying specific topics and research questions in the field. It aims to bridge the gap between academia and practice to enhance the understanding and utilization of algorithms in HRM. I then explore the legal, causal, and moral issues associated with HR algorithms, comparing fairness criteria and advocating for the use of causal modeling to evaluate algorithmic fairness. The multifaceted nature of fairness is illustrated and practical strategies for enhancing justice perceptions and incorporating fairness into HR algorithms are proposed. Finally, the thesis adopts an artifact-centric approach to examine the ethical implications of HRM algorithms. It explores competing views on moral responsibility, introduces the concept of "ethical affordances," and analyzes the distribution of moral responsibility based on different types of ethical affordances. The paper provides a framework for analyzing and assigning moral responsibility to stakeholders involved in the design, use, and regulation of HRM algorithms. Together, these papers contribute to the understanding of algorithms in HRM by addressing the research-practice gap, exploring fairness and accountability issues, and investigating the ethical implications. They offer theoretical insights, practical recommendations, and future research directions for both researchers and practitioners. / Thesis / Doctor of Philosophy (PhD) / This thesis explores the use of advanced algorithms in Human Resource Management (HRM) and how they affect decision-making in organizations. With the rise of big data and powerful algorithms, companies can analyze various HR practices like hiring, compensation, and employee engagement. However, there are concerns about biases and ethical issues in algorithmic decision-making. This research examines the benefits and challenges of HRM algorithms and suggests ways to ensure fairness and ethical considerations in their design and application. By bridging the gap between theory and practice, this thesis provides insights into the responsible use of algorithms in HRM. The findings of this research can help organizations make better decisions while maintaining fairness and upholding ethical standards in HR practices.
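Since the second essay turns on comparing fairness criteria, a small sketch may help fix ideas: two common group-fairness checks, demographic parity (equal selection rates) and equal opportunity (equal true-positive rates), computed per group. This is a generic illustration of criteria from the fairness literature, not code or definitions from the thesis.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Per-group selection rate (demographic parity) and true-positive
    rate (equal opportunity) for a binary predictor."""
    report = {}
    for g in np.unique(group):
        m = group == g
        positives = m & (y_true == 1)
        report[g] = {
            "selection_rate": float(y_pred[m].mean()),
            "tpr": float(y_pred[positives].mean()) if positives.any() else None,
        }
    return report

# toy screening data: does the algorithm select the two groups at similar rates?
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(group_fairness_report(y_true, y_pred, group))
```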
1113

Bi-objective multi-assignment capacitated location-allocation problem

Maach, Fouad 01 June 2007
Location-assignment optimization problems correspond to a wide range of real situations, such as factory network design. However, most previous work seeks to minimize a cost function. Traffic incidents routinely impact the performance and the safety of the supply; these incidents cannot be totally avoided and must be taken into account. One way to account for them is to design a network on which multiple assignments are performed. Precisely, the problem we focus on deals with power supply, which has become an increasingly complex and crucial question. Many international companies have customers located all around the world, usually one customer per country. At the other end of the scale, power extraction or production is done at several sites spread over several continents and seas. A strong desire to become less energy-dependent has led many governments to increase the diversity of supply locations. For each kind of energy, many countries ideally expect to deal with two or three supply sites. Because a decrease in power supply can have serious consequences for the economic performance of a whole country, companies prefer to balance production equally among all sites, since the reliability of all sites is considered very similar. Sharing the demand equally between the two or three sites assigned to a given area is the most common approach. Although the cost of the network matters, it is also crucial to balance the loading between sites, to guarantee that no site takes on more importance than the others for a given area. If an accident occurs at a site, or if technical problems prevent it from satisfying its assigned demand, the area's power supply can still be ensured by the one or two remaining sites. It is common to assign a cost per open power plant and another cost that depends on the distance between the production or extraction point and the customer. On the whole, companies concerned with the quality of their power supply service have to find a good trade-off between this factor and their overall operating cost. The same situation exists for companies that supply power at the national scale. The expected number of areas, as well as that of potential sites, can reach 100, although the targeted problem size to be solved here is 50. This thesis focuses on devising an efficient methodology to provide all the solutions of this bi-objective problem. We investigate closely related problems to identify the most relevant approaches to this atypical problem. This work allows us to present one exact method and an evolutionary algorithm that can provide a good answer to this problem. / Master of Science
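To make the bi-objective structure concrete, the following sketch enumerates, for a toy instance, every way of assigning each area to exactly two sites with its demand split equally, and keeps the solutions that are Pareto-optimal in (cost, load imbalance). The instance data are invented for illustration; at the thesis's target size of 50 areas, enumeration is infeasible and the exact method or the evolutionary algorithm is needed.

```python
from itertools import combinations, product

# Toy instance (hypothetical data): 3 areas, 3 candidate sites,
# each area served by exactly 2 sites with its demand split equally.
demand = [10.0, 6.0, 8.0]
open_cost = [5.0, 4.0, 6.0]
dist = [[1, 3, 2],            # dist[area][site]
        [2, 1, 3],
        [3, 2, 1]]

def evaluate(assign):
    """Return (total cost, load imbalance) for one assignment."""
    load = [0.0] * 3
    cost = 0.0
    for area, sites in enumerate(assign):
        for s in sites:
            load[s] += demand[area] / 2.0
            cost += (demand[area] / 2.0) * dist[area][s]
    opened = [s for s in range(3) if load[s] > 0]
    cost += sum(open_cost[s] for s in opened)
    imbalance = max(load[s] for s in opened) - min(load[s] for s in opened)
    return cost, imbalance

def dominates(a, b):
    """True if objective vector a is at least as good as b and better somewhere."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

pairs = list(combinations(range(3), 2))        # the 2-site choices per area
solutions = [(evaluate(assign), assign) for assign in product(pairs, repeat=3)]
pareto = [s for s in solutions
          if not any(dominates(o[0], s[0]) for o in solutions)]
print(pareto)
```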
1114

Comparative Study of the Effect of Tread Rubber Compound on Tire Performance on Ice

Shenvi, Mohit Nitin 20 August 2020
The tire-terrain interaction is complex and tremendously important; it impacts the performance and safety of the vehicle and its occupants. Icy roads further increase these complexities and adversely affect the handling of the vehicle. Analysis of the tire-ice contact focusing on individual aspects of tire construction and operation is imperative for the tire industry's future. This study investigates the effects of the tread rubber compound on the drawbar pull performance of tires in contact with an ice layer near its melting point. A set of sixteen tires comprising eight different rubber compounds was considered. The tires were identical in design and tread pattern but differed in tread rubber compound. To isolate the effect of the tread rubber compound, all operational parameters were kept constant during the testing conducted on the Terramechanics Rig at the Terramechanics, Multibody, and Vehicle Systems laboratory. The tests led to conclusive evidence of the effect of the tread rubber compound on the drawbar performance (found to be most prominent in the linear region of the drawbar-slip curve) and on the resistive forces of free-rolling tires. Modeling of the tire-ice contact for estimation of temperature rise and water film height was performed using ATIIM 2.0. The performance of this in-house model was compared against three classical tire-ice friction models. A parametrization of the Magic Formula tire model was performed using experimental data and a Genetic Algorithm. The dependence of individual factors of the Magic Formula on the ambient temperature, tire age, and tread rubber compound was investigated. / Master of Science / The interaction between the tire and icy road conditions, in the context of occupant safety, is a demanding test of the skills of the driver. The expected maneuvers of a vehicle in response to the actions of the driver become heavily unpredictable depending on a variety of factors such as the thickness of the ice, its temperature, the ambient temperature, and the condition of the vehicle and the tire. To address these issues, the development of winter tires received a boost, especially in siping and rubber compounding technology. This research focuses on the effects of variation in tread rubber compound on tire performance on ice. The experiments were performed using the Terramechanics Rig at the Terramechanics, Multibody, and Vehicle Systems (TMVS) laboratory. It was found that the effect of the rubber compound is most pronounced in the region where most vehicles operate under normal circumstances. An attempt was made to simulate the temperature rise in the contact patch and the water film that forms due to localized melting of ice caused by frictional heating. Three classical friction models were used to compare predictions against ATIIM 2.0, an in-house developed model. Using an optimization technique, namely a Genetic Algorithm, efforts were made to understand the effects of the tread rubber compound, the ambient temperature, and the aging of the tire on the parameters of the Magic Formula model, an empirical model describing the performance of the tire.
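For readers unfamiliar with the Magic Formula mentioned above, the sketch below evaluates Pacejka's standard form and the sum-of-squares objective a genetic algorithm would minimize when fitting measured drawbar-pull data. The coefficient values and the synthetic data are assumptions for illustration; they are not the thesis's fitted parameters.

```python
import numpy as np

def magic_formula(slip, B, C, D, E):
    """Pacejka's Magic Formula: force as a function of slip, with
    stiffness B, shape C, peak D, and curvature E."""
    bx = B * slip
    return D * np.sin(C * np.arctan(bx - E * (bx - np.arctan(bx))))

def fitness(params, slip, force_measured):
    """Least-squares objective a genetic algorithm would minimize
    when parametrizing the model from drawbar-pull measurements."""
    return np.sum((magic_formula(slip, *params) - force_measured) ** 2)

# illustrative use with synthetic "measured" data (hypothetical coefficients)
slip = np.linspace(0.0, 0.4, 50)
measured = magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97)
print(fitness([10.0, 1.9, 1.0, 0.97], slip, measured))  # ~0 at the true params
```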
1115

New Differential Zone Protection Scheme Using Graph Partitioning for an Islanded Microgrid

Alsaeidi, Fahad S. 19 May 2022
Microgrid deployment in electric grids improves reliability, efficiency, and power quality, as well as the overall sustainability and resiliency of the grid. Specifically, microgrids alleviate the effects of power outages. However, microgrid implementations impose additional challenges on power systems. Microgrid protection is one of the technical challenges implicit in the deployment of microgrids; these challenges arise from the unique properties of microgrid networks in comparison to traditional electrical networks. Differential protection is a fast, selective, and sensitive technique, and it offers a viable solution to microgrid protection concerns. The differential zone protection scheme is a cost-effective variant of differential protection. To implement a differential zone protection scheme, the network must be split into different protection zones, and the reliability of the scheme depends on the number of protective zones developed. This thesis proposes a new differential zone protection scheme using a graph partitioning algorithm, which partitions the microgrid into multiple protective zones. The IEEE 13-node microgrid is used to demonstrate the proposed protection scheme. The scheme is validated with MATLAB Simulink, and its impact is simulated with DIgSILENT PowerFactory software. Additionally, a comprehensive comparison was made with a comparable differential zone protection scheme. / Master of Science / A microgrid is a group of interconnected distributed energy resources (DERs) and the loads they serve that acts as a local electrical network. In electric grids, microgrid implementation enhances grid reliability, efficiency, and quality, as well as the system's overall sustainability and resiliency, and microgrids mitigate the consequences of power disruptions. Microgrid solutions, on the other hand, bring extra obstacles to power systems: one of the technological issues inherent in the implementation of microgrids is microgrid protection. These difficulties arise from the distinct characteristics of microgrid networks compared to standard electrical networks. Differential protection is a technique that is fast, selective, and sensitive, and it provides a feasible solution to microgrid protection problems; it is, however, more expensive than other schemes. The differential zone protection scheme is a cost-effective variation of differential protection that lowers protection expenses while improving system reliability. The network must be divided into different protection zones in order to deploy a differential zone protection scheme, and the number of protective zones generated determines the reliability of the protection method. Using a graph partitioning technique, this thesis presents a new differential zone protection scheme in which the microgrid is divided into various protection zones. The proposed scheme is demonstrated using the IEEE 13-node microgrid; MATLAB Simulink is used to validate it, while DIgSILENT PowerFactory is used to simulate its impact. A comparison with a similar differential zone protection scheme was also performed.
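As an illustration of the zoning step, the sketch below builds a graph with the IEEE 13-node feeder's bus names and splits it into two candidate protection zones using Kernighan-Lin bisection from networkx. The choice of partitioner, the unweighted edges, and the two-zone target are assumptions for exposition; the thesis's own partitioning algorithm and zone count may differ.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Feeder topology with the standard IEEE 13-node bus names; edge weights
# (e.g., line lengths or fault-current limits) are omitted for brevity.
G = nx.Graph()
G.add_edges_from([
    (650, 632), (632, 633), (633, 634), (632, 645), (645, 646),
    (632, 671), (671, 684), (684, 611), (684, 652), (671, 680),
    (671, 692), (692, 675),
])

# Split the network into two balanced, well-connected protection zones;
# each zone would then get its own pair of differential relays.
zone_a, zone_b = kernighan_lin_bisection(G, seed=1)
print(sorted(zone_a), sorted(zone_b))
```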
1116

Modern Econometric Methods for the Analysis of Housing Markets

Kesiz Abnousi, Vartan 26 May 2021
The increasing availability of richer, high-dimensional home sales data sets, as well as spatially geocoded data, allows for the use of new econometric and computational methods to explore novel research questions. This dissertation consists of three separate research papers which aim to leverage this trend to answer empirical inferential questions, propose new computational approaches in environmental valuation, and address future challenges. The first research chapter estimates the effect of 10 large-scale urban stream restoration projects on the values of homes situated near the project sites. The study area is the Johnson Creek Watershed in Portland, Oregon. The research design incorporates four matching model approaches that vary in the width of the temporal bands (a narrow and a wider band) and in the spatial zoning buffers (a smaller and a larger one) that account for the affected homes' distances. Estimated effects tend to be positive for six projects when the distance to the restoration project is smaller and the temporal bands are narrow, while two restoration projects have positive effects on home values across all four modeling approaches. The second research chapter focuses on the underlying statistical and computational properties of matching methods for causal treatment effects. The prevailing notion in the literature is that there is a tradeoff between bias and variance linked to the number of matched control observations for each treatment unit. In addition, in the era of Big Data, there is a paucity of research addressing the tradeoffs between inferential accuracy and computational time across different matching methods. Is it worth employing computationally costly matching methods if the gains in bias reduction and efficiency are negligible? We revisit the notion of a bias-variance tradeoff and address computational time considerations. We conduct a simulation study and evaluate 160 models and 320 estimands. The results suggest that the conventional notion of a bias-variance tradeoff, with bias increasing and variance decreasing with the number of matched controls, does not hold under the bias-corrected matching estimator (BCME) developed by Abadie and Imbens (2011). Specifically, for the BCME, bias decreases as the number of matches per treated unit increases. Moreover, when the quality of the pre-matching balance is already good, choosing only one match results in a significantly larger bias under all methods and estimators. In addition, the genetic search matching algorithm, GenMatch, is superior to the baseline greedy method in achieving a better balance between the observed covariate distributions of the treated and matched control groups. On the downside, GenMatch is 408 times slower than a greedy matching method. However, when we employ the BCME on matched data, there is a negligible difference in bias reduction between the two matching methods. Traditionally, environmental valuation methods using residential property transactions follow two approaches: hedonic price functions and Random Utility sorting models. An alternative approach is the Iterated Bidding Algorithm (IBA), introduced by Kuminoff and Jarrah (2010). The third chapter aims to improve the IBA approach to property and environmental valuation relative to its early applications. We implement this approach in an artificially simulated residential housing market, maintaining full control over the data generating mechanism. We implement the Mesh Adaptive Direct Search algorithm (MADS) and introduce a convergence criterion that leverages knowledge of individuals' actual pairing to homes. We proceed to estimate the preference parameters of the underlying, artificially simulated housing market, and we do so with significantly higher precision than the original baseline Nelder-Mead optimization, which relied only on a price-discrepancy convergence criterion, as implemented in the IBA's earlier applications. / Doctor of Philosophy / The increasing availability of richer, high-dimensional home sales data sets enables us to employ new methods to explore novel research questions involving housing markets. This dissertation consists of three separate research papers which leverage this trend. The first research paper estimates the effects of 10 large-scale urban stream restoration projects in Portland, Oregon, on the values of homes located near the project sites. The results show that the distance of the homes from the project sites and the duration of construction lead to different effects on home values; however, two restorations have positive effects regardless of distance and duration. The second research study focuses on the issue of causality. The study demonstrates that a traditional notion concerning causality, known as the "bias-variance tradeoff," is not always valid. In addition, the research shows that sophisticated but time-consuming algorithms have negligible effects on improving the accuracy of estimated causal effects once we account for the required computational time. The third research study improves an environmental valuation method that relies on residential property transactions. The methodology leverages the features of more informative residential data sets in conjunction with a more efficient optimization method, leading to significant improvements. The study concludes that, owing to these improvements, this alternative method can be employed to elicit the true preferences of homeowners over housing and locational characteristics while avoiding the shortcomings of existing techniques.
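To ground the second chapter's discussion, here is a minimal sketch of the bias-corrected matching estimator of Abadie and Imbens (2011) for the average treatment effect on the treated: each treated unit is matched to its M nearest controls, and each matched outcome is shifted by a regression-based correction. The OLS adjustment, the Euclidean metric, and the synthetic data are simplifying assumptions; the study's simulation design is far richer.

```python
import numpy as np

def bcme_att(X, y, treated, m=4):
    """Sketch of the bias-corrected matching estimator (BCME) for the ATT
    with M nearest-neighbor matches on covariates."""
    Xt, yt = X[treated], y[treated]
    Xc, yc = X[~treated], y[~treated]
    # regression adjustment mu0(x): OLS of y on X among controls
    Zc = np.column_stack([np.ones(len(Xc)), Xc])
    beta = np.linalg.lstsq(Zc, yc, rcond=None)[0]
    mu0 = lambda Xq: np.column_stack([np.ones(len(Xq)), Xq]) @ beta
    att_terms = []
    for xi, yi in zip(Xt, yt):
        d = np.linalg.norm(Xc - xi, axis=1)
        idx = np.argsort(d)[:m]                 # M closest controls
        # bias correction: shift each matched outcome by mu0(x_i) - mu0(x_j)
        y0_hat = np.mean(yc[idx] + mu0(xi[None, :]) - mu0(Xc[idx]))
        att_terms.append(yi - y0_hat)
    return np.mean(att_terms)

# synthetic demo with a true treatment effect of 2.0
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
treated = rng.random(500) < 0.3
y = X @ np.array([1.0, -0.5, 0.2]) + 2.0 * treated + rng.normal(size=500)
print(bcme_att(X, y, treated))  # should land near 2.0
```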
1117

Preliminary Design of an Autonomous Underwater Vehicle Using a Multiple-Objective Genetic Optimizer

Martz, Matthew 26 June 2008
The process developed herein uses a Multiple Objective Genetic Optimization (MOGO) algorithm. The optimization is implemented in ModelCenter (MC) from Phoenix Integration. It uses a genetic algorithm that searches the design space for optimal, feasible designs by considering three Measures of Performance (MOPs): Cost, Effectiveness, and Risk. The complete synthesis model comprises an input module, the three primary AUV synthesis modules, a constraint module, three objective modules, and a genetic algorithm. The effectiveness rating determined by the synthesis model is based on nine attributes identified in the US Navy's UUV Master Plan and four performance-based attributes calculated by the synthesis model. To solve multi-attribute decision problems, the Analytical Hierarchy Process (AHP) is used. Once the MOGO has generated a final generation of optimal, feasible designs, the decision-maker(s) can choose candidate designs for further analysis. A sample AUV synthesis was performed and five candidate AUVs were analyzed. / Master of Science
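The heart of a MOGO is retaining non-dominated designs across the measures of performance. The sketch below shows that filter for (cost, effectiveness, risk) triples, minimizing cost and risk while maximizing effectiveness; the candidate designs are invented numbers, not outputs of the synthesis model.

```python
def dominates(a, b):
    """a, b are (cost, effectiveness, risk) triples; cost and risk are
    minimized, effectiveness is maximized."""
    no_worse = a[0] <= b[0] and a[1] >= b[1] and a[2] <= b[2]
    strictly_better = a[0] < b[0] or a[1] > b[1] or a[2] < b[2]
    return no_worse and strictly_better

def pareto_front(designs):
    """Keep the feasible designs no other design dominates -- the set the
    optimizer's final generation approximates."""
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

# illustrative (hypothetical) candidate AUV designs
designs = [(4.2e6, 0.71, 0.30), (3.8e6, 0.64, 0.35),
           (4.9e6, 0.80, 0.28), (4.3e6, 0.70, 0.45)]
print(pareto_front(designs))  # the last design is dominated and drops out
```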
1118

Design of an Analog Adaptive Piezoelectric Sensoriactuator

Fannin, Christopher A. 09 September 1997
In order for a piezoelectric transducer to be used as a sensor and actuator simultaneously, a direct charge due to the applied voltage must be removed from the total response in order to allow observation of the mechanical response alone. Earlier researchers proposed electronic compensators to remove this term by creating a reference signal which destructively interferes with the direct piezoelectric charge output, leaving only the charge related to the mechanical response signal. This research presents alternative analog LMS adaptive filtering methods which accomplish the same result. The main advantage of the proposed analog compensation scheme is its ability to more closely match the order of the adaptive filter to the assumed dynamics of the piezostructure using an adaptive first-order high-pass filter. Theoretical and experimental results are provided along with a discussion of the difficulties encountered in trying to achieve perfect compensation of the feedthrough capacitive charge on a piezoelectric wafer. / Master of Science
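A discrete-time, single-weight sketch of the LMS idea may clarify the compensation scheme: adapt a weight w so that w times the drive voltage cancels the direct feedthrough charge, leaving the mechanical response as the error signal. The thesis realizes this in analog hardware with an adaptive first-order high-pass filter; the digital single-tap version below, with invented signal frequencies and gains, only illustrates the adaptation law.

```python
import numpy as np

def lms_feedthrough_cancel(v_drive, q_total, mu=0.05):
    """Single-tap LMS compensator: adapt w so that w * v_drive cancels
    the direct (capacitive) feedthrough charge; the residual error
    approximates the mechanical response alone."""
    w = 0.0
    residual = np.zeros_like(q_total)
    for n in range(len(q_total)):
        e = q_total[n] - w * v_drive[n]   # estimated mechanical response
        w += mu * e * v_drive[n]          # LMS weight update
        residual[n] = e
    return residual, w

# synthetic demo: feedthrough gain 2.0 plus a small mechanical signal at 180 Hz
t = np.linspace(0, 1, 2000)
v = np.sin(2 * np.pi * 50 * t)
mech = 0.05 * np.sin(2 * np.pi * 180 * t)
q = 2.0 * v + mech
resid, w = lms_feedthrough_cancel(v, q)
print(round(w, 3))  # converges toward the feedthrough gain of 2.0
```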
1119

A Genetic Algorithm-Based Place-and-Route Compiler For A Run-time Reconfigurable Computing System

Kahne, Brian C. 14 May 1997
Configurable Computing is a technology which attempts to increase computational power by customizing the computational platform to the specific problem at hand. An experimental computing model known as wormhole run-time reconfiguration allows for partial reconfiguration and is highly scalable. In this approach, configuration information and data are grouped together in a computing unit called a stream, which can tunnel through the chip creating a series of interconnected pipelines. The Colt/Stallion project at Virginia Tech implements this computing model in integrated circuits. In order to create applications for this platform, a compiler is needed which can convert a human-readable description of an algorithm into the sequences of configuration information understood by the chip itself. This thesis covers two compilers which perform this task. The first compiler, Tier1, requires a programmer to explicitly describe placement and routing inside the chip; this could be considered equivalent to an assembler for a traditional microprocessor. The second compiler, Tier2, allows the user to express a problem as a dataflow graph, with the actual placement and routing of this graph onto the physical hardware handled by a genetic algorithm. A description of the two languages is presented, followed by example applications. In addition, experimental results are included which examine the behavior of the genetic algorithm and how alterations to various genetic operator probabilities affect performance. / Master of Science
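A toy genetic placer may make the Tier2 approach concrete: encode a placement as a permutation of dataflow nodes over grid cells, score it by total Manhattan wirelength, and evolve the population with truncation selection and swap mutation. Everything here, including the mutation-only operator set and the small example graph, is an illustrative assumption; the real compiler's encoding, routing model, and genetic operators are more involved.

```python
import random

def wirelength(placement, edges, width):
    """Fitness: total Manhattan distance of dataflow edges, with node i
    placed at grid cell placement[i] on a `width`-column array."""
    def xy(cell):
        return cell % width, cell // width
    total = 0
    for a, b in edges:
        (xa, ya), (xb, yb) = xy(placement[a]), xy(placement[b])
        total += abs(xa - xb) + abs(ya - yb)
    return total

def ga_place(n_nodes, edges, width, pop=40, gens=200, p_mut=0.3, seed=1):
    """Toy genetic placer: truncation selection plus swap mutation
    (crossover omitted for brevity)."""
    rng = random.Random(seed)
    population = [rng.sample(range(n_nodes), n_nodes) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: wirelength(p, edges, width))
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            if rng.random() < p_mut:   # swap mutation keeps it a permutation
                i, j = rng.sample(range(n_nodes), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=lambda p: wirelength(p, edges, width))

# 6 operators on a 3x2 grid: a small dataflow chain plus a branch
edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]
best = ga_place(6, edges, width=3)
print(best, wirelength(best, edges, 3))
```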
1120

Examination of Driver Lane Change Behavior and the Potential Effectiveness of Warning Onset Rules for Lane Change or "Side" Crash Avoidance Systems

Hetrick, Shannon 27 March 1997
Lane change or "Side" Crash Avoidance Systems (SCAS) technologies are becoming available to help alleviate the lane change crash problem. They detect lane change crash hazards and warn the driver of their presence. This thesis examines driver lane change behavior and evaluates the potential effectiveness of five warning onset rules for SCAS technologies. The ideal SCAS should warn the driver only when two conditions are met: (1) positive indication of lane change intent, and (2) positive detection of a proximal vehicle in the adjacent lane of concern. Together, these two conditions create a crash hazard. The development of SCAS technologies depends largely on an understanding of driver behavior and performance during lane change maneuvers. By quantifying lane change behavior, real-world crash hazard scenarios can be simulated, providing an opportunity to evaluate potential warning onset rules, or algorithms, of driver intent to change lanes. Five warning onset rules were evaluated: turn-signal onset (TSO), minimum separation (MS), line crossing (LC), time-to-line crossing (TLC), and tolerance limit (TL). The effectiveness of each rule was measured by the maximum response time available (t_available) to avoid a crash for a particular lane change crash scenario, and by the crash outcome (crashed or crash avoided). / Master of Science
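Of the five rules, time-to-line crossing is the easiest to state compactly: the time remaining before the vehicle's lateral drift carries it across the lane line, assuming constant lateral velocity. The sketch below is a simplified form with invented numbers; the thesis's exact formulation may also account for vehicle width and path curvature.

```python
def time_to_line_crossing(lateral_offset, lane_half_width, lateral_velocity):
    """Simplified TLC: seconds until the vehicle crosses the lane line,
    assuming constant lateral velocity toward that line (in meters and m/s)."""
    if lateral_velocity == 0:
        return float("inf")                     # no drift, no crossing
    distance_to_line = lane_half_width - lateral_offset
    tlc = distance_to_line / lateral_velocity
    return tlc if tlc >= 0 else float("inf")    # drifting away from this line

# warn when TLC drops below a threshold, e.g. 1.0 s
tlc = time_to_line_crossing(lateral_offset=0.4, lane_half_width=0.9,
                            lateral_velocity=0.7)
print(tlc, "-> warn" if tlc < 1.0 else "-> no warning")  # ~0.71 s -> warn
```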
