  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
541

Efficient Modeling for DNN Hardware Resiliency Assessment

Mahmoud, Karim January 2025 (has links)
Deep neural network (DNN) hardware accelerators are critical enablers of the current resurgence in machine learning technologies. Adopting machine learning in safety-critical systems imposes additional reliability requirements on hardware design. Addressing these requirements mandates an accurate assessment of the impact caused by permanent faults in the processing engines (PE). Carrying out this reliability assessment early in the design process allows for addressing potential reliability concerns when it is less costly to perform design revisions. However, the large size of modern DNN hardware and the complexity of the DNN applications running on it present barriers to efficient reliability evaluation before proceeding with the design implementation. Considering these barriers, this dissertation proposes two methodologies to assess fault resiliency in integer arithmetic units in DNN hardware. Using the information from the data streaming patterns of the DNN accelerators, which are known before the register-transfer level (RTL) implementation, the first methodology enables fault injection experiments to be carried out in PE units at the pre-RTL stage during architectural design space exploration. This is achieved in a DNN simulation framework that captures the mapping between a model's operations and the hardware's arithmetic units. This facilitates a fault resiliency comparison of state-of-the-art DNN accelerators comprising thousands of PE units. The second methodology introduces accurate and efficient modelling of the impact of permanent faults in integer multipliers. It avoids the need for computationally intensive circuit models, e.g., netlists, to inject faults in integer arithmetic units, thus scaling the fault resiliency assessment to accelerators with thousands of PE units with negligible simulation time overhead. As a first step, we formally analyze the impact of permanent faults affecting the internal nodes of two integer multiplier architectures. 
This analysis indicates that, for most internal faults, the impact on the output is independent of the operands involved in the arithmetic operation. As the second step, we develop a statistical fault injection approach based on the likelihood of a fault being triggered in the applications that run on the target DNN hardware. By modelling the impact of faults in internal nodes of arithmetic units using fault-free operations, fault injection campaigns run three orders of magnitude faster than using arithmetic circuit models in the same simulation environment. The experiments also show that the proposed method's accuracy is on par with that of using netlists to model the arithmetic circuitry in which faults are injected. Using the proposed methods, one can conduct fault assessment experiments for various DNN models and hardware architectures, examining the sensitivity of the DNN accelerator's reliability to DNN model-related and hardware architecture-related features. In addition to understanding the impact of permanent hardware faults on the accuracy of DNN models running on defective hardware, the outcomes of these experiments can yield valuable insights for designers seeking to balance fault criticality and performance, thereby facilitating the development of more reliable DNN hardware in the future. / Thesis / Doctor of Philosophy (PhD) / The reliability of Deep Neural Network (DNN) hardware has become critical in recent years, especially for the adoption of machine learning in safety-critical applications. Evaluating the reliability of DNN hardware early in the design process enables addressing potential reliability concerns before committing to full implementation. However, the large size and complexity of DNN hardware impose challenges in evaluating its reliability in an efficient manner. In this dissertation, two novel methodologies are proposed to address these challenges.
The first methodology introduces an efficient method to describe the mapping of operations of DNN applications to the processing engines of a target DNN hardware architecture in a high-performance computing DNN simulation environment. This approach allows for assessing the fault resiliency of large hardware architectures, incorporating thousands of processing engines while using fewer simulation resources compared to existing methods. The second methodology introduces an accurate and efficient approach to modelling the impact of permanent faults in integer arithmetic units of DNN hardware during inference. By leveraging the special characteristics of integer arithmetic units, this method achieves fault assessment at negligible computational overhead relative to running DNN inference in the fault-free mode in state-of-the-art DNN frameworks.
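The operand-independent fault model this abstract describes can be conveyed in a few lines. The sketch below is illustrative only (the bit position, trigger probability, and operand values are hypothetical, not taken from the dissertation): because the fault's output impact does not depend on the operands, injection can be applied to the fault-free product rather than to a gate-level netlist.

```python
import random

def faulty_multiply(a, b, fault_bit, trigger_prob):
    """Sketch: model a permanent internal multiplier fault whose output
    impact is operand-independent, so injection operates on the fault-free
    product instead of simulating a circuit netlist."""
    product = a * b                      # fast fault-free operation
    if random.random() < trigger_prob:   # statistical trigger likelihood
        product ^= 1 << fault_bit        # operand-independent error term
    return product

# With trigger_prob = 1.0 the fault always fires: the error is XOR with 2**4.
assert faulty_multiply(3, 5, fault_bit=4, trigger_prob=1.0) == (3 * 5) ^ 16
assert faulty_multiply(3, 5, fault_bit=4, trigger_prob=0.0) == 15
```

This is why the approach scales to thousands of PE units: the per-operation cost is one multiply plus an occasional XOR, regardless of multiplier architecture.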
542

Generator maintenance scheduling of electric power systems using genetic algorithms with integer representations

Dahal, Keshav P., McDonald, J.R. January 1997 (has links)
The effective maintenance scheduling of power system generators is very important to a power utility for the economical and reliable operation of a power system. Many mathematical methods have been implemented for generator maintenance scheduling (GMS). However, these methods have many limitations and require many approximations. Here a genetic algorithm (GA) is proposed for GMS problems in order to overcome some of the limitations of the conventional methods. This paper formulates a general GMS problem using a reliability criterion as an integer programming problem, and demonstrates the use of GAs with three different problem encodings: binary, binary-for-integer, and integer. The GA performance for each of these representations is analysed and compared for a test problem based on a practical power system scenario. The effects of different GA parameters are also studied. The results show that the integer GA is a very effective method for GMS problems.
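The integer representation the paper advocates can be sketched directly: gene u holds the start week of unit u's maintenance outage, with no binary decoding step. The unit capacities, durations, and load profile below are hypothetical, and the reserve-based fitness is a simplified stand-in for the paper's reliability criterion:

```python
import random

CAPACITY = [200, 150, 100]   # MW per generating unit (hypothetical)
DURATION = [2, 3, 1]         # maintenance weeks required per unit
LOAD = [250] * 12            # weekly peak load over a 12-week horizon

def reserve_deficit(schedule):
    """Reliability criterion: total MW-weeks by which load exceeds the
    capacity remaining in service under this maintenance schedule."""
    total = sum(CAPACITY)
    deficit = 0
    for week in range(len(LOAD)):
        out = sum(CAPACITY[u] for u, start in enumerate(schedule)
                  if start <= week < start + DURATION[u])
        deficit += max(0, LOAD[week] - (total - out))
    return deficit

def mutate(schedule):
    """Integer-GA mutation: re-draw one unit's start week directly."""
    child = schedule[:]
    u = random.randrange(len(child))
    child[u] = random.randrange(len(LOAD) - DURATION[u] + 1)
    return child

# Non-overlapping outages keep the reserve adequate; stacked outages do not.
assert reserve_deficit([0, 2, 5]) == 0
assert reserve_deficit([0, 0, 0]) > 0
```

A GA over this encoding needs no repair of infeasible bit patterns, which is one of the advantages the paper attributes to the integer representation.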
543

Comparing linear programming and mixed integer programming formulations for forest planning on the Naval Surface Weapons Center, Dahlgren, Virginia

Cox, Eric Selde 10 November 2009 (has links)
This research project examined the ability to combine spatial data analysis and mathematical programming techniques in developing a multiple-use land management plan for a public forest in northeastern Virginia. Linear programming-based timber management scheduling models were constructed utilizing the Model I formulation of Johnson and Scheurman (1977). The models were formulated as mixed strata-based, area-based models (Johnson and Stuart 1987) that maximized present net worth subject to a fixed timberland base, an ending inventory requirement, workload control restrictions, and harvest volume control restrictions. The linear programming-based models which incorporated spatial data analysis capabilities were solved using mixed-integer programming. The model was used to assess the costs of implementing spatial restrictions designed to address forest resource management concerns, in particular, timber production and reserve status acreage for wildlife habitat purposes. The impact of imposing alternative spatial stand allocation requirements and different levels of reserve status acreage was evaluated by measuring the cost in terms of reductions in the present net value (PNV) of timber benefits and timber harvest volumes. The results indicate that the optimal solution value is more sensitive to the level of reserve status acreage imposed on the model than to the spatial restrictions for stand allocations placed on the model. / Master of Science
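The flavor of such a Model I timber scheduling formulation can be sketched with a toy two-stand, two-period instance, maximizing present net worth subject to full area allocation and an even-flow harvest-volume constraint. All coefficients are made-up illustration values, not the study's data:

```python
from scipy.optimize import linprog

# x[s][p] = acres of stand s harvested in period p (flattened to 4 vars).
pnv = [[300, 260], [220, 250]]   # discounted net revenue per acre
vol = [[40, 48], [30, 36]]       # harvest volume per acre, by period
area = [100, 80]                 # acres available in each stand

c = [-pnv[0][0], -pnv[0][1], -pnv[1][0], -pnv[1][1]]  # maximize PNV
A_eq = [
    [1, 1, 0, 0],                                      # stand 0 fully allocated
    [0, 0, 1, 1],                                      # stand 1 fully allocated
    [vol[0][0], -vol[0][1], vol[1][0], -vol[1][1]],    # even flow: V1 = V2
]
b_eq = [area[0], area[1], 0]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
assert res.status == 0           # feasible and solved to optimality
```

Requiring each x[s][p] to be binary times the stand area (rather than a continuous acreage split) is what turns this LP into the mixed-integer version used for the spatial restrictions.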
544

Characterization of FPGA-based High Performance Computers

Pimenta Pereira, Karl Savio 02 September 2011 (has links)
As CPU clock frequencies plateau and the doubling of CPU cores per processor exacerbates the memory wall, hybrid-core computing, which augments CPUs with FPGAs and/or GPUs, holds the promise of addressing high-performance computing demands with respect to performance, power, and productivity. While traditional approaches to benchmarking high-performance computers, such as SPEC, take an architecture-based approach, they do not fully express the parallelism that exists in FPGA and GPU accelerators. This thesis follows an application-centric approach, comparing the sustained performance of two key computational idioms with respect to performance, power, and productivity. Specifically, a complex, single-precision, floating-point, 1D Fast Fourier Transform (FFT) and a molecular dynamics modeling application are implemented on state-of-the-art FPGA and GPU accelerators. As the results show, FPGA floating-point FFT performance is highly sensitive to a mix of dedicated FPGA resources: DSP48E slices, block RAMs, and FPGA I/O banks in particular. Estimated results show that, for the floating-point FFT benchmark on FPGAs, these resources are the performance-limiting factor. Fixed-point FFTs are important in many high-performance embedded applications. For a fixed-point FFT, FPGAs can exploit a flexible data-path width to trade off circuit cost against speed of computation, improving performance and resource utilization; GPUs, with their fixed data-width architecture, cannot fully take advantage of this. For the molecular dynamics application, FPGAs benefit from the flexibility of creating a custom, tightly pipelined datapath and a highly optimized memory subsystem within the accelerator. This can provide a 250-fold improvement over an optimized CPU implementation and a 2-fold improvement over an optimized GPU implementation, along with massive power savings.
Finally, to extract the maximum performance from the FPGA, each implementation requires a balance between the formulation of the algorithm on the platform, the optimal use of available external memory bandwidth, and the availability of computational resources, at the expense of a greater programming effort. / Master of Science
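The fixed-point data-width tradeoff the abstract highlights can be illustrated with a toy quantizer: wider fractional fields cost more circuit resources on an FPGA but bound the rounding error more tightly, a knob a fixed data-width GPU architecture does not expose. The values below are illustrative, not from the thesis:

```python
def quantize(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits --
    the data-path width an FPGA design is free to choose per application."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

# Each extra fractional bit halves the worst-case rounding error (<= 2**-(n+1)).
assert abs(quantize(0.7, 8) - 0.7) <= 1 / (1 << 9)   # error <= 2**-9
assert abs(quantize(0.7, 4) - 0.7) <= 1 / (1 << 5)   # error <= 2**-5
```

In an FFT datapath this choice is made per stage, trading DSP and routing cost against output signal-to-noise ratio.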
545

Vehicle Routing for Emergency Evacuations

Pereira, Victor Caon 22 November 2013 (has links)
This dissertation introduces and analyzes the Bus Evacuation Problem (BEP), a unique Vehicle Routing Problem motivated both by its humanitarian significance and by the routing and scheduling challenges of planning transit-based, regional evacuations. First, a variant where evacuees arrive at constant, location-specific rates is introduced. In this problem, a fleet of capacitated buses must transport all evacuees to a depot/shelter such that the last scheduled pick-up and the end of the evacuee arrival process occur at a location-specific time. The problem seeks to minimize their accumulated waiting time, restricts the number of pick-ups at each location, and exploits efficiencies from service choice and from allowing buses to unload evacuees at the depot multiple times. It is shown that, depending on the problem instance, increasing the maximum number of pick-ups allowed may reduce both the fleet size requirement and the evacuee waiting time, and that, past a certain threshold, there exists a range of values that guarantees an efficient usage of the available fleet and equitable reductions in waiting time across pick-up locations. Second, an extension of the Ritter (1967) Relaxation Algorithm, which exploits the inherent structure of problems with complicating variables and constraints, such as the aforementioned BEP variant, is presented. The modified algorithm allows problems with linear, integer, or mixed-integer subproblems and with linear or quadratic objective functions to be solved to optimality. Empirical studies demonstrate the algorithm's viability for solving large optimization problems. Finally, a two-stage stochastic formulation for the BEP is presented. This variant assumes that all evacuees are at the pick-up locations at the onset of the evacuation, that the set of possible demands is provided, and, more importantly, that the actual demands become known once buses visit the pick-up locations for the first time.
The effect of exploratory visits (sampling) and of symmetry is explored, and the resulting insights are used to develop an improved formulation for the problem. An iterative (dynamic) solution algorithm is proposed. / Ph. D.
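With constant, location-specific arrival rates, the waiting time accumulated at a single stop has a simple closed form, which hints at why the spacing of pick-ups matters in the BEP. This toy calculation is an illustration under that arrival assumption, not the dissertation's model:

```python
def waiting_time(rate, pickup_times):
    """Accumulated waiting at one stop: evacuees arrive at a constant rate,
    each pickup clears the queue, and the last pickup ends the arrival
    process, so total waiting is the area under a sequence of queue ramps."""
    total, prev = 0.0, 0.0
    for t in pickup_times:
        total += rate * (t - prev) ** 2 / 2.0   # area of one triangular ramp
        prev = t
    return total

# Evenly spaced pickups accumulate less waiting than skewed ones.
assert waiting_time(2.0, [5.0, 10.0]) == 50.0
assert waiting_time(2.0, [5.0, 10.0]) < waiting_time(2.0, [2.0, 10.0])
```

The quadratic dependence on inter-pickup gaps is what makes additional allowed pick-ups valuable up to a threshold, echoing the abstract's finding.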
546

Optimization-based Logistics Planning and Performance Measurement for Hospital Evacuation and Emergency Management

Agca, Esra 02 September 2013 (has links)
This dissertation addresses the development of optimization models for hospital evacuation logistics, as well as the analyses of various resource management strategies in terms of the equity of the evacuation plans generated. We first formulate the evacuation transportation problem of a hospital as an integer programming model that minimizes the total evacuation risk, consisting of the threat risk necessitating evacuation and the transportation risk experienced en route. Patients, categorized based on medical conditions and care requirements, are allocated to a limited fleet of vehicles with various medical capabilities and capacities to be transported to receiving beds, categorized much like patients, at the alternative facilities. We demonstrate structural properties of the underlying transportation network that enable the model to be used for both strategic planning and operational decision making. Next, we examine the resource management and equity issues that arise when multiple hospitals in a region are evacuated. The efficiency and equity of the allocation of resources, including a fleet of vehicles, receiving beds, and each hospital's loading capacity, determine the performance of the optimal evacuation plan. We develop an equity modeling framework, where we consider equity among evacuating hospitals and among patients. The range of equity of optimal solutions is investigated and properties of optimal and equitable solutions based on risk-based utility functions are analyzed. Finally, we study the integration of the transportation problem with the preceding hospital building evacuation. Since, in practice, the transportation plan depends on the pace of building evacuation, we develop a model that generates the transportation plan subject to the output of the hospital building evacuation. The optimal evacuation plans are analyzed with respect to resource utilization and patient prioritization schemes.
Parametric analysis of the resource constraints is provided along with managerial insights into the assessment of evacuation requirements and resource allocation. In order to demonstrate the performance of the proposed models, computational results are provided using case studies with real data obtained from the second largest hospital in Virginia. / Ph. D.
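At its core, the transportation stage allocates patient categories to vehicles and receiving beds so as to minimize total risk. A brute-force miniature with hypothetical risk coefficients conveys the idea; the thesis solves a full integer program over capacities and bed categories, not this enumeration:

```python
from itertools import permutations

# Hypothetical risk of transporting patient category i by vehicle type j.
RISK = [[4, 9, 7],
        [6, 5, 8],
        [3, 6, 2]]

def best_assignment(risk):
    """One-to-one allocation of patient categories to vehicle types
    minimizing total evacuation risk, by exhaustive enumeration."""
    n = len(risk)
    return min(permutations(range(n)),
               key=lambda p: sum(risk[i][p[i]] for i in range(n)))

assert best_assignment(RISK) == (0, 1, 2)   # total risk 4 + 5 + 2 = 11
```

The integer-programming formulation generalizes this to many-to-many allocations with vehicle capacities, bed availability, and the threat-risk term that grows the longer patients wait.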
547

Stochastically Constrained Simulation Optimization On Mixed-Integer Spaces

Nagaraj, Kalyani Shankar 27 October 2014 (has links)
We consider the problem of identifying solutions to a stochastic system under multiple constraints. The objective function and the constraints are expressed in terms of performance measures of the system that are observable only via a simulation model parameterized by a finite number of decision variables. In solving such a system, one faces the much harder challenge of verifying the feasibility of a potential solution. Toward this, we present cgR-SPLINE, a multistart simulation optimization (SO) algorithm on integer spaces. cgR-SPLINE sequentially solves random restarts of a gradient-based local search routine with increasing precision. The local search routine in turn solves progressively stricter outer approximations of the underlying problem. The local solution estimator from a recently ended restart is probabilistically compared against an incumbent solution, thus generating a sequence of global solution estimators. The optimal convergence rate of the solution iterates is observed to be sub-exponential, slower than the exponential rate observed for SO problems on unconstrained discrete spaces. Additionally, efficiency for cgR-SPLINE dictates that the number of multistarts and the total simulation budget be sublinearly related, implying an increased emphasis on exploration compared to what is prescribed in the continuous context. Heuristics for choosing constraint relaxations and solution reporting demonstrate good finite-time performance on three SO problems, of which two are nontrivial. The extension of cgR-SPLINE's framework to mixed spaces seems a natural next step. The presence of infeasible points arbitrarily close to the stochastic boundary, however, poses challenges for consistency. We present a general framework for mixed spaces that is very much along the lines of cgR-SPLINE and propose ideas for specific algorithmic refinements and solution reporting.
Strategically locating the restarts of a multistart SO algorithm appears to be a largely unexplored research topic. Toward achieving efficiency during the exploration phase, we present ideas for "antithetically" generating the restarts from probability measures constructed from the SO algorithm's performance trajectory. Asymptotic behavior of the proposed sampling strategy and policies for optimal parameter selection are presently conjectural, but appear promising based on the outcomes of preliminary experiments. / Ph. D.
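The multistart structure described above, with random restarts of a local routine run at increasing simulation precision and compared against an incumbent, can be sketched on a toy noisy problem. The quadratic objective, noise level, and restart schedule here are illustrative stand-ins, not cgR-SPLINE itself:

```python
import random

def noisy_obj(x, n_reps):
    """Simulation oracle: sample mean of a noisy quadratic over n_reps runs."""
    true_val = (x[0] - 3) ** 2 + (x[1] + 1) ** 2
    return sum(true_val + random.gauss(0, 1) for _ in range(n_reps)) / n_reps

def local_search(x, n_reps):
    """Integer neighborhood descent, standing in for the local routine."""
    while True:
        nbrs = [(x[0] + dx, x[1] + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        best = min(nbrs, key=lambda p: noisy_obj(p, n_reps))
        if best == x:          # current point wins: a noisy local minimum
            return x
        x = best

random.seed(2)
incumbent = None
for restart in range(1, 6):    # simulation precision grows with each restart
    start = (random.randint(-10, 10), random.randint(-10, 10))
    cand = local_search(start, n_reps=50 * restart)
    if incumbent is None or noisy_obj(cand, 200) < noisy_obj(incumbent, 200):
        incumbent = cand       # probabilistic incumbent comparison
```

The true minimizer is (3, -1); with the noise averaged over enough replications, the incumbent settles near it, mirroring the sequence of global solution estimators in the abstract.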
548

Effects of Farm and Household Decisions on Labor Allocation and Profitability of Beginning Vegetable Farms in Virginia: a Linear Programming Model

Mark, Allyssa 17 May 2016 (has links)
The United States is facing a rising average age of principal farm operators and a decline in the number of beginning farmers. With numerous barriers and challenges resulting in many farm failures, a majority of beginning farmers rely on off-farm income to support their households. Decision-making and farm business planning are difficult skills to develop and improve, and the ability to develop a plan that balances on- and off-farm labor could allow farmers to make more profitable decisions. In this study, the General Algebraic Modeling System (GAMS) is used to develop a labor management planning framework for use by Virginia's beginning vegetable farmers or service providers, such as extension agents, with the goal of improving total (on- and off-farm) profitability and farm viability. Study findings suggest that a willingness to work 12 hours per day, 365 days per year, together with hired labor costs of $9.30 per hour (the national average for agricultural workers), encourages a farmer to maintain an off-farm job, while a relatively lower off-farm wage or salary may encourage a farmer to work on the farm only. Lastly, higher hired labor costs may encourage a farmer to pursue his or her most profitable work opportunity, be it on- or off-farm, without hiring labor to maintain the farm. The model developed in this study may be used to plan multiple years of farm management, including anticipated changes in off-farm employment opportunities, land availability, product mix, and access to farm labor. The author suggests that beginning farmers who use this planning tool will be able to make more informed decisions about the allocation of labor time and resources, resulting in lower failure rates for beginning farmers in Virginia. A user-friendly interface may be developed based on the study framework to strengthen the results and increase the practicality of the tool. / Master of Science
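The on-farm versus off-farm comparison at the heart of such a model reduces to comparing hourly returns. This toy daily rule (all wages and returns are hypothetical, not the study's estimates) mirrors the pattern in the findings quoted above:

```python
def allocate_farmer_day(hours, farm_return, off_wage, hire_cost):
    """Toy daily labor rule: the farmer works wherever the hourly return is
    higher; labor is hired for the farm only if its cost is below the
    farm's hourly return."""
    if off_wage > farm_return:
        farmer = ("off-farm", off_wage * hours)
        hire = hire_cost < farm_return   # hired labor keeps the farm running
    else:
        farmer = ("on-farm", farm_return * hours)
        hire = False
    return farmer, hire

# A high off-farm wage keeps the farmer off-farm; hired labor at $9.30/hour
# is engaged only when the farm's hourly return exceeds that cost.
assert allocate_farmer_day(12, 8.0, 15.0, 9.30) == (("off-farm", 180.0), False)
assert allocate_farmer_day(12, 12.0, 15.0, 9.30) == (("off-farm", 180.0), True)
assert allocate_farmer_day(12, 20.0, 15.0, 9.30) == (("on-farm", 240.0), False)
```

The full LP extends this single-day comparison across crops, seasons, and land constraints, which is why it needs a solver rather than a rule of thumb.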
549

Optimal Operation of Water and Power Distribution Networks

Singh, Manish K. 12 1900 (has links)
Under the envisioned smart city paradigm, there is an increasing demand for the coordinated operation of our infrastructure networks. In this context, this thesis puts forth a comprehensive toolbox for the optimization of electric power and water distribution networks. On the analytical front, the toolbox consists of novel mixed-integer (non)linear program (MINLP) formulations; convex relaxations with optimality guarantees; and the powerful technique of McCormick linearization. On the application side, the developed tools support the operation of each of the infrastructure networks independently, but also work towards their joint operation. Starting with water distribution networks, the main difficulty in solving any (optimal) water flow problem stems from a piecewise quadratic pressure drop law. To handle these constraints efficiently, we first formulate a novel MINLP, and then propose a relaxation of the pressure drop constraints to yield a mixed-integer second-order cone program. Further, a novel penalty term is appended to the cost that guarantees optimality and exactness under pre-defined network conditions. This contribution can be used to solve the water flow (WF) problem; the optimal water flow (OWF) task of minimizing the pumping cost while satisfying operational constraints; and the task of scheduling the operation of tanks to maximize the water service time in an area experiencing an electric power outage. Regarding electric power systems, a novel MILP formulation for distribution restoration is proposed, using binary indicator vectors on graph properties alongside exact McCormick linearization. This can be used to minimize the restoration time of an electric system under critical operational constraints, and to enable a coordinated response with the water utilities during outages. / Master of Science / The advent of smart cities has promoted research towards the interdependent operation of utilities such as water and power systems.
While power system analysis is significantly more developed thanks to decades of focused research, water networks have been relying on relatively less sophisticated tools. In this context, this thesis develops advanced, efficient computational tools for the analysis and optimization of water distribution networks. Given the consumer demands, an optimal water flow (OWF) problem for minimizing the pump operation cost is formulated. Developing a rigorous analytical framework, the proposed formulation provides significant computational improvements without compromising accuracy. Explicit network conditions are provided that guarantee the optimality and feasibility of the obtained OWF solution. The developed formulation is next used to solve two practical problems: the water flow problem, which solves the complex physical equations yielding nodal pressures and pipeline flows given the demands/injections; and an OWF problem that finds the best operational strategy for water utilities during power outages. The latter helps the water utility maximize its service time during power outages, and helps power utilities better plan their restoration strategy. While increased instrumentation and automation have enabled power utilities to better manage restoration during outages, finding an optimal strategy remains a difficult problem. The operational and coordination requirements of upcoming distributed resources and microgrids further complicate the problem. This thesis develops a computationally fast and reasonably accurate power distribution restoration scheme enabling optimal coordination of different generators with optimal islanding. Numerical tests are conducted on benchmark water and power networks to corroborate the claims of the developed formulations.
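The McCormick linearization named in this abstract replaces a bilinear term w = x*y with its tightest linear envelope over a box of variable bounds, which is what allows such terms to enter a MILP. A minimal sketch of the envelope (standard textbook construction, not the thesis's specific formulation):

```python
def mccormick_envelope(x, y, xl, xu, yl, yu):
    """McCormick envelope for the bilinear term w = x*y on the box
    [xl, xu] x [yl, yu]: two linear under-estimators and two
    over-estimators, evaluated at the point (x, y)."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

# The true product always lies inside the envelope over the box.
lo, hi = mccormick_envelope(2.0, 3.0, 0.0, 4.0, 1.0, 5.0)
assert lo <= 2.0 * 3.0 <= hi   # here lo == 2.0 and hi == 10.0
```

In a solver, the four inequalities become linear constraints on a new variable w; the envelope is exact whenever either variable sits at one of its bounds, which is why tightening the bounds tightens the relaxation.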
550

Control Design for a Microgrid in Normal and Resiliency Modes of a Distribution System

Alvarez, Genesis Barbie 17 October 2019 (has links)
As inverter-based distributed energy resources (DERs) such as photovoltaic (PV) systems and battery energy storage systems (BESS) penetrate the distribution system, new challenges arise regarding how to utilize these devices to improve power quality. Previously, PV systems were required to disconnect from the grid during a large disturbance, but smart inverters are now required to have dynamically controlled functions that allow them to remain connected to the grid. Monitoring power flow at the point of common coupling (PCC) is one of the many functions the controller should perform. Smart inverters can inject active power to pick up critical load or inject reactive power to regulate voltage within the electric grid. In this context, this thesis focuses on high-level and local control designs that incorporate DERs. Different controllers are implemented to stabilize the microgrid in islanding and resiliency modes. The microgrid can be used as a resiliency source when the distribution system is unavailable. An average model in the D-Q frame is calculated to analyze the inherent dynamics of the current controller at the PCC. The space-vector approach is applied to design the voltage and frequency controllers. Secondly, using inverters for Volt/VAR control (VVC) can provide a faster response for voltage regulation than traditional voltage-regulation devices. Another objective of this research is to demonstrate how smart inverters and capacitor banks in the system can be used to eliminate voltage deviation. A mixed-integer quadratic program (MIQP) is formulated to determine the amount of reactive power that should be injected or absorbed at the appropriate nodes by each inverter. The Big M method is used to address the nonconvex problem. This contribution can be used by distribution operators to minimize the voltage deviation in the system. / Master of Science / Reliable power supply from the electric grid is an essential part of modern life.
This critical infrastructure can be vulnerable to cascading failures or natural disasters. One solution for improving power system resilience is the microgrid: a small network of interconnected loads and distributed energy resources (DERs) such as microturbines, wind power, solar power, or traditional internal combustion engines. A microgrid can operate either connected to or disconnected from the grid. This research emphasizes the potential use of a microgrid as a resiliency source during grid restoration to pick up critical load. In this research, controllers are designed to pick up critical loads (i.e., hospitals, street lights, and military bases) from the distribution system in case the electric grid is unavailable. The case study includes the design of a microgrid, which is being tested for feasibility in an actual integration with the electric grid. Once the grid is restored, synchronization between the microgrid and the electric grid must be conducted. Synchronization is a crucial task: abnormal synchronization can cause a disturbance in the system, damage equipment, and lead to additional system outages. This thesis develops various controllers to conduct proper synchronization. Interconnecting inverter-based DERs such as photovoltaic and battery storage within the distribution system makes it possible to use these electronic devices to improve power quality. This research focuses on using them to improve the voltage profile within the distribution system and the frequency within the microgrid.
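The Big M device mentioned above ties a binary on/off decision to each inverter's reactive injection: the constraint -M*z <= q <= M*z forces q = 0 whenever z = 0. This brute-force miniature shows the mechanics on a two-node example; the sensitivity matrix, voltages, and grid of injection levels are made-up numbers, and the thesis solves the MIQP with a proper solver rather than enumeration:

```python
from itertools import product

S = [[0.05, 0.02],   # hypothetical voltage sensitivity to VAR injections
     [0.02, 0.06]]
V = [0.96, 0.95]     # per-unit node voltages before control
M = 2.0              # Big M: upper bound on each inverter's |q|

def deviation(q):
    """Squared deviation of post-control voltages from 1.0 p.u."""
    dv = [sum(S[i][j] * q[j] for j in range(2)) for i in range(2)]
    return sum((V[i] + dv[i] - 1.0) ** 2 for i in range(2))

best = (deviation((0.0, 0.0)), (0, 0), (0.0, 0.0))
grid = [k / 10 for k in range(-20, 21)]
for z in product([0, 1], repeat=2):          # binary inverter on/off choices
    for q0 in grid:
        for q1 in grid:
            q = (q0, q1)
            # Big M link: -M*z_i <= q_i <= M*z_i, so z_i = 0 forces q_i = 0.
            if all(-M * zi <= qi <= M * zi for zi, qi in zip(z, q)):
                d = deviation(q)
                if d < best[0]:
                    best = (d, z, q)

assert best[0] < deviation((0.0, 0.0))   # control reduces voltage deviation
```

Enumerating the binaries and gridding the continuous injections is only viable at toy scale; the MIQP formulation lets a branch-and-bound solver handle the same linking constraints for realistic feeders.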
