71

Two-Stage Stochastic Model to Invest in Distributed Generation Considering the Long-Term Uncertainties

Angarita-Márquez, Jorge L., Mokryani, Geev, Martínez-Crespo, J. 13 October 2021 (has links)
yes / This paper applies different risk management indicators to the investment optimization performed by consumers in Distributed Generation (DG). The objective function is the total cost incurred by the consumer, including the energy and capacity payments, the savings and revenues from the installation of DG, and the operation and maintenance (O&M) and investment costs. A probability density function (PDF) is used to model long-term price volatility. The mathematical model uses a two-stage stochastic approach with investment and operational stages. Investment decisions are made in the first stage and do not change across the uncertainty scenarios; the operational variables belong to the second stage and therefore take different values in every realization. Three risk indicators are used to assess the uncertainty risk: Value-at-Risk (VaR), Conditional Value-at-Risk (CVaR), and Expected Value (EV). The results show the importance of migrating from deterministic models to stochastic ones and, most importantly, of understanding the ramifications of each risk indicator.
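The three risk indicators named above (EV, VaR, CVaR) can be computed directly from a discrete scenario set. The sketch below is illustrative only: the scenario costs, probabilities, and confidence level in the example are invented, not taken from the paper.

```python
def risk_indicators(costs, probs, alpha=0.95):
    """Expected Value, Value-at-Risk, and Conditional Value-at-Risk of
    per-scenario total costs (higher cost is worse).

    VaR_alpha is the smallest cost c with P(cost <= c) >= alpha;
    CVaR_alpha is the probability-weighted mean of costs at or beyond VaR.
    """
    ev = sum(p * c for p, c in zip(probs, costs))
    # Walk scenarios in increasing cost order, accumulating probability.
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    cum, var = 0.0, costs[order[-1]]
    for i in order:
        cum += probs[i]
        if cum >= alpha:
            var = costs[i]
            break
    # CVaR: expected cost over the tail at or beyond VaR.
    tail = [(probs[i], costs[i]) for i in order if costs[i] >= var]
    tail_p = sum(p for p, _ in tail)
    cvar = sum(p * c for p, c in tail) / tail_p
    return ev, var, cvar
```

A deterministic model only sees the EV; the gap between EV and CVaR is precisely what the stochastic formulation exposes.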
72

Unveiling Hidden Problems: A Two-Stage Machine Learning Approach to Predict Financial Misstatement Using the Existence of Internal Control Material Weaknesses

Sun, Jing 07 1900 (has links)
Prior research has provided evidence that the disclosure of internal control material weaknesses (ICMWs) is a powerful input attribute in misstatement prediction. However, disclosure is an imperfect proxy for internal control quality because many firms with control problems fail to disclose ICMWs on a timely basis. The purpose of this study is to examine whether the existence of ICMWs, including both disclosed and undisclosed ICMWs, improves misstatement prediction. I develop a two-stage machine learning model for misstatement prediction with the predicted existence of ICMWs as the intermediate concept; this model outperforms the model based on ICMW disclosures alone. I also find that the model incorporating both the predicted existence and the disclosure of ICMWs outperforms those with only the disclosure or only the predicted existence of ICMWs. These results hold across different input attributes, machine learning methods, prediction periods, and training-test sample splitting methods. Finally, this study shows that the two-stage models outperform the one-stage models in predictions related to financial reporting quality.
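The two-stage idea, a first-stage classifier whose predicted ICMW probability becomes an input attribute for the misstatement classifier, can be sketched with a toy logistic learner. Everything below (class names, toy data, learning rate) is a hypothetical illustration, not the study's actual feature set or ML method.

```python
import math

class TinyLogit:
    """Minimal logistic regression trained by stochastic gradient descent."""
    def __init__(self, n_features, lr=0.5, epochs=500):
        self.w = [0.0] * (n_features + 1)  # last entry is the intercept
        self.lr, self.epochs = lr, epochs

    def _p(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.w[-1]
        z = max(-30.0, min(30.0, z))  # guard against overflow in exp
        return 1.0 / (1.0 + math.exp(-z))

    def fit(self, X, y):
        for _ in range(self.epochs):
            for x, t in zip(X, y):
                g = self._p(x) - t  # gradient of the log-loss
                for j in range(len(x)):
                    self.w[j] -= self.lr * g * x[j]
                self.w[-1] -= self.lr * g
        return self

    def predict_proba(self, X):
        return [self._p(x) for x in X]

def two_stage_predict(X, y_icmw, y_misstate, X_new):
    """Stage 1 predicts ICMW existence; its predicted probability is then
    appended as an input attribute for the stage-2 misstatement model."""
    stage1 = TinyLogit(len(X[0])).fit(X, y_icmw)
    aug = [x + [p] for x, p in zip(X, stage1.predict_proba(X))]
    stage2 = TinyLogit(len(aug[0])).fit(aug, y_misstate)
    aug_new = [x + [p] for x, p in zip(X_new, stage1.predict_proba(X_new))]
    return stage2.predict_proba(aug_new)
```

The intermediate concept is carried as a soft probability rather than a hard label, so stage 2 can still weigh how confident stage 1 was.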
73

Bayesian D-Optimal Design for Generalized Linear Models

Zhang, Ying 12 January 2007 (has links)
Bayesian optimal designs have received increasing attention in recent years, especially in biomedical and clinical trials. Bayesian design procedures can utilize the available prior information about the unknown parameters so that a better design can be achieved. However, a difficulty in dealing with Bayesian design is the lack of efficient computational methods. In this research, a hybrid computational method, consisting of a rough global optimum search combined with a more precise local optimum search, is proposed to efficiently search for Bayesian D-optimal designs for multi-variable generalized linear models. In particular, Poisson regression models and logistic regression models are investigated. Designs are examined for a range of prior distributions, and the equivalence theorem is used to verify design optimality. Design efficiency for various models is examined and compared with non-Bayesian designs. Bayesian D-optimal designs are found to be more efficient and robust than non-Bayesian D-optimal designs. Furthermore, the idea of Bayesian sequential design is introduced and a Bayesian two-stage D-optimal design approach is developed for generalized linear models. By incorporating the first-stage data information into the second stage, the two-stage design procedure can improve design efficiency and produce more accurate and robust designs. The Bayesian two-stage D-optimal designs for Poisson and logistic regression models are evaluated in simulation studies. The Bayesian two-stage optimal design approach is superior to the one-stage approach in terms of a design efficiency criterion. / Ph. D.
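For intuition, the rough global search can be reduced to its inner loop: score candidate designs by the prior-averaged log-determinant of the Fisher information. The sketch below does this for a one-variable logistic model over symmetric two-point designs; the candidate grid and the point prior are illustrative assumptions, not the dissertation's models.

```python
import math

def info_matrix(design, theta):
    """2x2 Fisher information for a one-variable logistic model
    eta = b0 + b1*x, evaluated at design points (x, weight)."""
    b0, b1 = theta
    m00 = m01 = m11 = 0.0
    for x, w in design:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        v = w * p * (1.0 - p)  # GLM weight at this design point
        m00 += v
        m01 += v * x
        m11 += v * x * x
    return m00, m01, m11

def bayes_log_det(design, prior_draws):
    """Bayesian D-criterion: prior-averaged log-determinant of the information."""
    total = 0.0
    for theta in prior_draws:
        m00, m01, m11 = info_matrix(design, theta)
        total += math.log(m00 * m11 - m01 * m01)
    return total / len(prior_draws)

def best_two_point_design(candidates, prior_draws):
    """Rough global search over symmetric equal-weight two-point designs."""
    best, best_val = None, -float("inf")
    for a in candidates:
        design = [(-a, 0.5), (a, 0.5)]
        val = bayes_log_det(design, prior_draws)
        if val > best_val:
            best, best_val = design, val
    return best, best_val
```

With a degenerate point prior at (b0, b1) = (0, 1), this grid search recovers the well-known locally D-optimal support near x = ±1.543; a spread-out prior would average the criterion over draws instead.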
74

Design and Implementation of a Multiphase Buck Converter for Front End 48V-12V Intermediate Bus Converters

Salvo, Christopher 25 July 2019 (has links)
The trend in isolated DC/DC bus converters is to increase the output power within the same brick form factors that have been used in the past. Traditional intermediate bus converters (IBCs) use silicon power metal oxide semiconductor field effect transistors (MOSFETs), which have reached their limits in terms of on-resistance (RDSON) and switching frequency. To make IBCs smaller, the switching frequency needs to be pushed higher, which shrinks the magnetics and thus the converter size, but increases the switching-related losses, lowering the overall efficiency of the converter. Wide-bandgap semiconductor devices are becoming more popular in commercial products, and gallium nitride (GaN) devices can push the switching frequency higher without sacrificing efficiency. GaN devices can shrink the size of the converter and provide better efficiency than their silicon counterparts. A survey of current IBCs was conducted to find a design point for efficiency and power density. A two-stage converter topology was explored, with a multiphase buck converter as the front end followed by an LLC resonant converter. The multiphase buck converter provides regulation, while the LLC provides isolation. With the buck converter providing regulation, the switching frequency of the entire converter is constant, which allows for better electromagnetic interference (EMI) mitigation. This work details the design and implementation of a hard-switched multiphase buck converter with planar magnetics using GaN devices. The reported efficiency includes both the buck efficiency and the overall efficiency of the two-stage converter including the LLC. The buck converter operates with a 40 V - 60 V input (nominally 48 V) and outputs 36 V at 1 kW, which is the input to the LLC stage converting 36 V to 12 V. Both open-loop and closed-loop operation were measured for the buck converter and for the full converter.
EMI performance was not measured or addressed in this work. / Master of Science / Traditional silicon devices are widely used in all power electronics applications today; however, they have reached their limits in terms of size and performance. With the introduction of gallium nitride (GaN) field effect transistors (FETs), the limits of silicon can now be surpassed, with GaN providing better performance. GaN devices can be switched at higher frequencies than silicon, which allows the magnetics of power converters to be smaller. GaN devices can also achieve higher efficiency than silicon, so increasing the switching frequency does not hurt the overall efficiency of the power converter. GaN devices can handle higher switching frequencies and larger currents while maintaining the same or better efficiency than their silicon counterparts. This work illustrates the design and implementation of GaN devices in a multiphase buck converter. This converter is the front end of a two-stage converter, where the buck stage provides regulation and the second stage provides isolation. With higher switching frequencies, the magnetics can be decreased in size, meaning planar magnetics can be used in the power converter. Planar magnetics can be placed directly inside the printed circuit board (PCB), which allows for higher power densities and easy manufacturing of the magnetics and the overall converter. Finally, open-loop and closed-loop operation were verified and compared to converters currently on the market in the 48 V - 12 V intermediate bus converter (IBC) space.
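First-pass numbers for a buck front end like the one described above follow from the standard buck relations. The 48 V to 36 V operating point comes from the text; the switching frequency, phase count, and ripple target below are illustrative assumptions, not the thesis's actual design values.

```python
def buck_design(v_in, v_out, i_out, f_sw, ripple_ratio=0.3, n_phases=4):
    """First-pass design numbers for a multiphase synchronous buck stage.

    Returns the ideal duty cycle, the per-phase inductance needed for the
    target ripple, and the per-phase peak-to-peak inductor ripple current.
    """
    d = v_out / v_in                      # ideal (lossless) duty cycle
    i_phase = i_out / n_phases            # average current per phase
    di = ripple_ratio * i_phase           # allowed peak-to-peak ripple
    # Buck inductor ripple: dI = Vout * (1 - D) / (L * f_sw)
    l_phase = v_out * (1.0 - d) / (di * f_sw)
    return d, l_phase, di
```

At 48 V in, 36 V out, 28 A total load, 500 kHz, and four phases, this gives D = 0.75 and roughly 8.6 uH per phase, before any adjustment for multiphase ripple cancellation or ZVS considerations.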
75

Two-Stage Multi-Channel LED Driver with CLL Resonant Converter

Chen, Xuebing 05 September 2014 (has links)
LEDs are widely used in many applications, such as indoor lighting, backlighting, and street lighting. For these applications, a multiple-LED-string structure is adopted for reasons of cost-effectiveness, reliability, and safety. Several methods and topologies have been proposed to drive multiple LED strings. However, output current balance and efficiency are always the two major concerns for an LED driver. A simple two-stage multi-channel LED driver is proposed. It is composed of a buck converter as the first stage and a multi-channel constant current (MC3) CLL resonant converter as the second stage. For the CLL resonant converter, the magnetizing inductance of the transformer can be made as large as possible, so the magnetizing current of the transformer has little influence on the output currents. In addition, the currents of the two LED strings driven by the same transformer are balanced by a DC blocking capacitor. As a result, the current balance among LED strings is very good even if the load is severely unbalanced. Meanwhile, the current flowing through the external inductance Lr1, instead of the magnetizing current, is used to help the primary-side switches achieve ZVS. Therefore, a large magnetizing inductance is good for current balance, and a properly designed Lr1 is helpful for achieving ZVS. These properties make the MC3 CLL well suited to driving multi-channel LED strings. In the design procedure of the MC3 CLL resonant converter, the parasitic junction capacitance of the secondary-side rectifier is taken into account. It significantly influences the operation during the dead time when a voltage step-up transformer is applied. The junction capacitors of the secondary-side rectifiers and the output capacitors of the primary-side switches resonate with the inductor Le2 during the dead time. This resonance ultimately impacts the ZVS achievement of the primary-side switches.
Therefore, the inductors Lr1 and Le2 should be designed according to the charge needed to achieve ZVS, taking this resonance into account. Additionally, the control strategy for this two-stage structure is simple. Only the current of one specific LED string is sensed for feedback control to regulate the bus voltage, and the currents of the other LED strings are cross-regulated. Furthermore, the MC3 CLL is unregulated and always operates around the resonant frequency point to achieve the best efficiency. The compensator is designed based on the derived small-signal model of this two-stage LED driver. Due to the special electrical characteristics of LEDs, a soft start-up process with a delayed dimming signal is adopted and investigated. With the soft start-up, there is no overshoot in the output current. Finally, a prototype of the two-stage LED driver is built. The current balance capability of the LED driver is verified experimentally: good current balance is achieved under both balanced and severely unbalanced load conditions. In addition, the efficiency of the LED driver is also presented; high efficiency is maintained over a wide load range. Therefore, this two-stage structure is a very promising candidate for multi-channel LED driving applications. / Master of Science
76

Approaches to Joint Base Station Selection and Adaptive Slicing in Virtualized Wireless Networks

Teague, Kory Alan 19 November 2018 (has links)
Wireless network virtualization is a promising avenue of research for next-generation 5G cellular networks. This work investigates the problem of selecting base stations to construct virtual networks for a set of service providers, and of adaptively slicing the resources between the service providers to satisfy their demands. A two-stage stochastic optimization framework is introduced to solve this problem, and two methods are presented for approximating the stochastic model. The first method uses a sampling approach applied to the deterministic equivalent program of the stochastic model. The second method uses a genetic algorithm for base station selection and performs adaptive slicing via a single-stage linear optimization problem. A number of scenarios are simulated using a log-normal model designed to emulate demand from real-world cellular networks. Simulations indicate that the first approach can provide a reasonably tight solution but is constrained because its time expense grows exponentially with the number of parameters. The second approach provides a significant improvement in run time at the cost of marginal error. / Master of Science / 5G, the next-generation cellular network standard, promises significant improvements over current-generation standards. For 5G to be successful, these must be accompanied by similarly significant efficiency improvements. Wireless network virtualization is a promising technology that has been shown to improve the cost efficiency of current-generation cellular networks. By abstracting a physical resource, such as a cell tower base station, from the use of that resource, virtual resources are formed. This work investigates the problem of selecting virtual resources (e.g., base stations) to construct virtual wireless networks with minimal cost, and of slicing the selected resources among individual networks to optimally satisfy their demands.
This problem is framed in a stochastic optimization framework, and two approaches are presented for approximation. The first approach converts the framework into a deterministic equivalent and reduces it to a tractable form. The second approach uses a genetic algorithm to approximate resource selection. The approaches are simulated and evaluated using a demand model constructed to emulate the statistics of an observed real-world urban network. Simulations indicate that the first approach can provide a reasonably tight solution at significant time expense, and that the second approach provides a solution in significantly less time with the introduction of marginal error.
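The genetic-algorithm half of the second approach can be sketched as a bitstring search over which base stations to activate. The cost and coverage numbers, penalty weight, and GA hyperparameters below are invented for illustration; the actual work optimizes against stochastic demand, which this deterministic toy ignores.

```python
import random

def ga_select(costs, coverage, demand, pop=30, gens=60, pmut=0.1, seed=1):
    """Toy genetic algorithm for base station selection.

    costs[j]: cost of activating station j; coverage[j]: capacity it adds;
    demand: total capacity the virtual network must reach. Fitness (lower
    is better) penalizes any shortfall, so cheap-but-insufficient
    selections are dominated by feasible ones.
    """
    rng = random.Random(seed)
    n = len(costs)

    def fitness(bits):
        cost = sum(c for c, b in zip(costs, bits) if b)
        cap = sum(c for c, b in zip(coverage, bits) if b)
        return cost + 1000.0 * max(0.0, demand - cap)

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)          # elitism: keep the best half
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pmut:           # occasional bit-flip mutation
                j = rng.randrange(n)
                child[j] ^= 1
            children.append(child)
        population = parents + children
    best = min(population, key=fitness)
    return best, fitness(best)
```

Because selection keeps the best half each generation, the incumbent solution never regresses, mirroring how the full method fixes base station selection before slicing resources in the linear program.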
77

Hybrid flow shop scheduling with prescription constraints on jobs

Simonneau, Nicolas 08 January 2004 (has links)
The sponsor of this thesis is the Composite Unit of the AIRBUS Nantes plant, which manufactures aircraft composite parts. The basic process for manufacturing composite parts is to lay up raw composite material on a tool, and it involves very costly means and raw material. This process can be modeled as a two-stage hybrid flow shop problem with specific constraints, particularly prescription constraints on the jobs. This thesis restates the practical problem as a scheduling problem by introducing hypotheses and restrictions. It then develops a mathematical model based on time-indexed variables. This model has been implemented in an IP solver to solve scenarios based on real data. A heuristic algorithm is developed for obtaining good solutions quickly. Finally, the heuristic is used to increase the execution speed of the IP solver. The thesis concludes with a discussion of the advantages and disadvantages of each option (IP solver vs. heuristic software) for the sponsor. / Master of Science
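A quick heuristic of the kind paired with an IP solver can be sketched as list scheduling: order jobs by Johnson's rule, then assign each operation to the earliest-available machine. This is a common baseline for two-stage hybrid flow shops; the thesis's actual heuristic, and its prescription constraints, may differ.

```python
def schedule_two_stage(jobs, m1, m2):
    """Greedy list-scheduling heuristic for a two-stage hybrid flow shop.

    jobs: list of (p1, p2) processing times for stages 1 and 2.
    m1, m2: number of identical parallel machines at each stage.
    Returns the makespan of the constructed schedule.
    """
    # Johnson's rule ordering (optimal for one machine per stage,
    # a heuristic when stages have parallel machines):
    front = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    back = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    order = front + back

    stage1 = [0.0] * m1   # next free time of each stage-1 machine
    stage2 = [0.0] * m2
    makespan = 0.0
    for p1, p2 in order:
        # Stage 1: earliest-available machine.
        k = min(range(m1), key=lambda i: stage1[i])
        done1 = stage1[k] + p1
        stage1[k] = done1
        # Stage 2: job may wait for both its stage-1 finish and a machine.
        k = min(range(m2), key=lambda i: stage2[i])
        stage2[k] = max(done1, stage2[k]) + p2
        makespan = max(makespan, stage2[k])
    return makespan
```

Such a heuristic can seed the IP solver with a good incumbent, which is one standard way to speed up branch-and-bound.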
78

A comparative analysis of two-stage distress prediction models

Mousavi, Mohammad M., Quenniche, J., Tone, K. 11 February 2018 (has links)
Yes / On feature selection, one of the critical steps in developing a distress prediction model (DPM), a variety of expert systems and machine learning approaches have analytically supported developers. Data envelopment analysis (DEA) has provided this support by estimating the novel feature of managerial efficiency, which has frequently been used in recent two-stage DPMs. As key contributions, this study extends the application of expert systems in credit scoring and distress prediction by applying diverse DEA models to compute corporate market efficiency in addition to the prevailing managerial efficiency, and by estimating the decomposed measure of mix efficiency and investigating its contribution, compared to pure technical efficiency and scale efficiency, to the performance of DPMs. Further, this paper provides a comprehensive comparison between two-stage DPMs, estimating a variety of DEA efficiency measures in the first stage and employing static and dynamic classifiers in the second stage. Based on experimental results, guidelines are provided to help practitioners develop two-stage DPMs; more specifically, guidelines assist with the choice of the proper DEA models to use in the first stage, and with the choice of the best corporate efficiency measures and classifiers to use in the second stage.
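In the single-input, single-output special case, a first-stage CCR efficiency score needs no linear programming at all: it collapses to a productivity ratio normalized by the best-practice unit. The sketch below shows only that degenerate case; the multi-input, multi-output DEA models the paper actually uses require solving a linear program per decision-making unit.

```python
def ccr_efficiency(inputs, outputs):
    """CCR efficiency scores for single-input, single-output DMUs.

    With one input and one output, the CCR linear program reduces to the
    productivity ratio (output/input) divided by the best such ratio,
    so the best-practice DMU scores 1.0 and all others score below it.
    """
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

In a two-stage DPM, scores like these become one input attribute among many for the second-stage classifier.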
79

Measuring the efficiency of two stage network processes: a satisficing DEA approach

Mehdizadeh, S., Amirteimoori, A., Vincent, Charles, Behzadi, M.H., Kordrostami, S. 2020 March 1924 (has links)
No / Regular Network Data Envelopment Analysis (NDEA) models deal with evaluating the performance of a set of decision-making units (DMUs) with a two-stage construction in the context of a deterministic data set. In the real world, however, observations may display a stochastic behavior. To the best of our knowledge, despite the existing research done with different data types, studies on two-stage processes with stochastic data are still very limited. This paper proposes a two-stage network DEA model with stochastic data. The stochastic two-stage network DEA model is formulated based on the satisficing DEA models of chance-constrained programming and the leader-follower concepts. According to the probability distribution properties and under the assumption of the single random factor of the data, the probabilistic form of the model is transformed into its equivalent deterministic linear programming model. In addition, the relationship between the two stages as the leader and the follower, respectively, at different confidence levels and under different aspiration levels, is discussed. The proposed model is further applied to a real case concerning 16 commercial banks in China in order to confirm the applicability of the proposed approach at different confidence levels and under different aspiration levels.
80

Failure of polymeric materials at ultra-high strain rates

Callahan, Kyle Richard 10 May 2024 (has links) (PDF)
Understanding the failure behavior of polymers subjected to ultrahigh strain rate (UHSR) impact is crucial for their application in protective shielding. Yet little is known about how polymers respond to UHSR events at the macroscale, or about the contributions of their chemical makeup and morphology. This dissertation aims to answer these questions by characterizing the responses of polymers subjected to UHSRs, investigating how polymer molecular architecture and morphology alter the macroscopic response during UHSR events via hypervelocity impact (HVI), linking the behaviors of UHSR events between the macro- and nano-length scales, and determining the consequences of UHSR impacts on polymer chains. Macroscale UHSR impacts are conducted using a two-stage light gas gun (2SLGG) to induce an HVI. Different molecular weights and thicknesses of polycarbonate are considered. The HVI behavior of polycarbonate is characterized using both real-time and postmortem techniques. The response depends on target thickness and impact velocity (vi); however, negligible difference is observed between the HVI results for the two differing entanglement densities. This contrasts with previous conclusions drawn at the nanoscale during UHSR impacts, which capture an increase in the energy arrested from the projectile with increasing entanglement density. To link the UHSR phenomena from the nano- to the macroscale, laser-induced projectile impact testing (LIPIT) is conducted on polymethyl methacrylate (PMMA) thin films at the nanoscale, in addition to ballistic and 2SLGG impacts at the macroscale. Applying the Buckingham-Π theorem, scaling relationships for the minimum perforation velocity and the residual velocity across these length scales are developed.
It is shown that the ratio of target thickness to projectile radius, the ratio of projectile to target density, and the velocity of the compressive stress wave traveling through the target are the governing parameters for the UHSR responses of polymers across these length scales. The effect of UHSRs on the polymer is investigated via ex-situ analysis by capturing polymer debris with a custom-built debris catcher. Different material-vi combinations are examined. X-ray diffraction and differential scanning calorimetry are used to characterize the HVI debris, in which evidence of char is found. This dissertation advances the knowledge of the failure behavior of polymeric materials subjected to UHSRs.
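The three governing ratios identified by the scaling analysis can be written down directly. In the sketch below, the elastic constants are generic polycarbonate-like assumptions, not the dissertation's measured properties, and the wave-speed expression assumes linear elasticity.

```python
import math

def uhsr_groups(h, r, rho_p, rho_t, v_i, E_t, nu_t=0.35):
    """Dimensionless groups suggested by the scaling analysis.

    h/r: target thickness over projectile radius; rho_p/rho_t: projectile
    over target density; v_i/c_t: impact velocity over the compressive
    (longitudinal) stress-wave speed in the target.
    """
    # Longitudinal wave speed in a linear-elastic solid:
    c_t = math.sqrt(E_t * (1 - nu_t) / (rho_t * (1 + nu_t) * (1 - 2 * nu_t)))
    return h / r, rho_p / rho_t, v_i / c_t
```

For a 3 mm polycarbonate-like target (E about 2.3 GPa, density 1200 kg/m^3) struck at 5 km/s by a 1.5 mm-radius aluminum-like sphere, the impact velocity is several times the target's longitudinal wave speed, the regime where hydrodynamic rather than elastic response dominates.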
