81

Two-Stage Multi-Channel LED Driver with CLL Resonant Converter

Chen, Xuebing 05 September 2014 (has links)
LEDs are widely used in many applications, such as indoor lighting, backlighting, and street lighting. For these applications, a multiple-LED-string structure is adopted for reasons of cost-effectiveness, reliability, and safety. Several methods and topologies have been proposed to drive multiple LED strings; however, output current balance and efficiency remain the two major concerns for LED drivers. A simple two-stage multi-channel LED driver is proposed. It is composed of a buck converter as the first stage and a multi-channel constant current (MC3) CLL resonant converter as the second stage. For the CLL resonant converter, the magnetizing inductance of the transformer can be made as large as possible, so the magnetizing current of the transformer has little influence on the output currents. In addition, the currents of the two LED strings driven by the same transformer are balanced by a DC blocking capacitor. As a result, the current balance among LED strings is very good even if the load is severely unbalanced. Meanwhile, the current flowing through the external inductor Lr1, instead of the magnetizing current, is used to help the primary-side switches achieve zero-voltage switching (ZVS). Therefore, a large magnetizing inductance is good for current balance, and a properly designed Lr1 is helpful for achieving ZVS. These properties make the MC3 CLL converter well suited to driving multi-channel LED strings. In the design procedure of the MC3 CLL resonant converter, the parasitic junction capacitance of the secondary-side rectifiers is taken into account; it significantly influences the operation during the dead time when a voltage step-up transformer is applied. The junction capacitances of the secondary-side rectifiers and the output capacitances of the primary-side switches resonate with the inductor Le2 during the dead time, and this resonance affects the ZVS achievement of the primary-side switches. Therefore, the inductors Lr1 and Le2 should be designed according to the charge needed to achieve ZVS while considering this resonance. Additionally, the control strategy for this two-stage structure is simple: only the current of one specific LED string is sensed for feedback control to regulate the bus voltage, and the currents of the other LED strings are cross-regulated. Furthermore, the MC3 CLL converter is unregulated and always operates around the resonant frequency to achieve the best efficiency. The compensator is designed based on the derived small-signal model of the two-stage LED driver. Due to the special electrical characteristics of LEDs, a soft start-up process with a delayed dimming signal is adopted and investigated; with the soft start-up, there is no overshoot in the output current. Finally, a prototype of the two-stage LED driver is built. The current balance capability of the LED driver is verified experimentally, and good current balance is achieved under balanced and severely unbalanced load conditions. In addition, the efficiency of the LED driver is presented; high efficiency is maintained over a wide load range. Therefore, this two-stage structure is a very promising candidate for multi-channel LED driving applications. / Master of Science
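As a rough illustration of the ZVS design constraint described above, a back-of-envelope sketch (all component values are assumptions, not values from the thesis): during the dead time the current in Lr1 must displace the charge stored in the primary-switch output capacitances and in the rectifier junction capacitances reflected through the step-up transformer.

```python
# Hedged sketch: back-of-envelope check of the ZVS charge budget during dead time.
# All component values below are illustrative assumptions, not values from the thesis.
V_bus = 400.0      # bus voltage across the half-bridge (V), assumed
C_oss = 150e-12    # output capacitance of each primary switch (F), assumed
C_j   = 100e-12    # junction capacitance of each secondary rectifier (F), assumed
n     = 4.0        # step-up turns ratio (secondary/primary), assumed
t_dead = 200e-9    # dead time (s), assumed

# Charge needed to swing the switch node by V_bus: both primary Coss values plus the
# rectifier junction capacitances reflected to the primary through the turns ratio.
Q_zvs = (2 * C_oss + 2 * C_j * n**2) * V_bus

# Minimum Lr1 current (assumed roughly constant over the short dead time)
# that can deliver this charge within the dead time.
I_Lr1_min = Q_zvs / t_dead
print(f"charge to displace: {Q_zvs*1e9:.1f} nC, required Lr1 current: {I_Lr1_min:.2f} A")
```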
82

Approaches to Joint Base Station Selection and Adaptive Slicing in Virtualized Wireless Networks

Teague, Kory Alan 19 November 2018 (has links)
Wireless network virtualization is a promising avenue of research for next-generation 5G cellular networks. This work investigates the problem of selecting base stations to construct virtual networks for a set of service providers, and of adaptively slicing the resources among the service providers to satisfy their demands. A two-stage stochastic optimization framework is introduced to solve this problem, and two methods are presented for approximating the stochastic model. The first method uses a sampling approach applied to the deterministic equivalent program of the stochastic model. The second method uses a genetic algorithm for base station selection and slices the resources adaptively via a single-stage linear optimization problem. A number of scenarios are simulated using a log-normal model designed to emulate demand from real-world cellular networks. Simulations indicate that the first approach can provide a reasonably tight solution, but is constrained because its time expense grows exponentially with the number of parameters. The second approach provides a significant improvement in run time with the introduction of marginal error. / Master of Science / 5G, the next-generation cellular network standard, promises to provide significant improvements over current-generation standards. For 5G to be successful, this must be accompanied by similarly significant efficiency improvements. Wireless network virtualization is a promising technology that has been shown to improve the cost efficiency of current-generation cellular networks. By abstracting the physical resource (such as cell tower base stations) from the use of the resource, virtual resources are formed. This work investigates the problem of selecting virtual resources (e.g., base stations) to construct virtual wireless networks with minimal cost and slicing the selected resources among individual networks to optimally satisfy individual network demands. This problem is framed in a stochastic optimization framework and two approaches are presented for approximation. The first approach converts the framework into a deterministic equivalent and reduces it to a tractable form. The second approach uses a genetic algorithm to approximate resource selection. The approaches are simulated and evaluated using a demand model constructed to emulate the statistics of an observed real-world urban network. Simulations indicate that the first approach can provide a reasonably tight solution with significant time expense, and that the second approach provides a solution in significantly less time with the introduction of marginal error.
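A minimal sketch of the second (genetic-algorithm) approach on toy data; the base-station costs, capacities, demands, and fitness penalty are invented for illustration, and the adaptive-slicing step is reduced to a simple capacity-versus-demand penalty rather than the single-stage linear program used in the work.

```python
# Hedged sketch of GA-based base station selection (toy data; the real work pairs
# this with a single-stage LP for slicing, which is simplified to a penalty here).
import random

random.seed(0)
N_BS, N_SP = 8, 3
cost = [random.uniform(1.0, 3.0) for _ in range(N_BS)]       # assumed BS leasing costs
capacity = [random.uniform(5.0, 10.0) for _ in range(N_BS)]  # assumed BS capacities
demand = [random.uniform(8.0, 15.0) for _ in range(N_SP)]    # assumed SP demands

def fitness(bits):
    """Negative cost minus a penalty for unmet aggregate demand (illustrative objective)."""
    cap = sum(c for b, c in zip(bits, capacity) if b)
    unmet = max(0.0, sum(demand) - cap)
    return -(sum(c for b, c in zip(bits, cost) if b) + 10.0 * unmet)

def mutate(bits, p=0.1):
    return [b ^ (random.random() < p) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, N_BS)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N_BS)] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # keep the fittest selections
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(10)]     # refill with mutated offspring

best = max(pop, key=fitness)
print("selected base stations:", [i for i, b in enumerate(best) if b])
```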
83

Hybrid flow shop scheduling with prescription constraints on jobs

Simonneau, Nicolas 08 January 2004 (has links)
The sponsor of this thesis is the Composite Unit of the AIRBUS Nantes plant, which manufactures aircraft composite parts. The basic process for manufacturing composite parts is to lay up raw composite material on a tool, and it involves very costly equipment and raw material. This process can be modeled as a two-stage hybrid flow shop problem with specific constraints, in particular prescription constraints on the jobs. This thesis restates the practical problem as a scheduling problem through a set of hypotheses and restrictions. It then develops a mathematical model based on time-indexed variables. This model has been implemented in an IP solver to solve scenarios based on real data. A heuristic algorithm is developed for obtaining good solutions quickly. Finally, the heuristic is used to speed up the IP solver. This thesis concludes with a discussion of the advantages and disadvantages of each option (IP solver vs. heuristic software) for the sponsor. / Master of Science
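For illustration only, a generic time-indexed formulation of the kind described in this abstract might look as follows, where x_{jsmt} = 1 if job j starts at time t on machine m of stage s; the notation is assumed here, and the plant-specific prescription constraints are not modeled. The first constraint schedules each job exactly once per stage, the second prevents overlap on each machine, and the third forces the second stage of a job to start after its first stage finishes.

```latex
% Hedged sketch of a generic time-indexed model for a two-stage hybrid flow shop;
% the notation is assumed, and the prescription constraints are not modeled here.
\begin{align*}
\min\quad & C_{\max} \\
\text{s.t.}\quad
  & \sum_{m \in M_s}\sum_{t} x_{jsmt} = 1
      && \forall j \in J,\; s \in \{1,2\}, \\
  & \sum_{j \in J}\ \sum_{\tau = t - p_{js} + 1}^{t} x_{jsm\tau} \le 1
      && \forall s,\; m \in M_s,\; t, \\
  & \sum_{m \in M_1}\sum_{t} (t + p_{j1})\, x_{j1mt} \;\le\; \sum_{m \in M_2}\sum_{t} t\, x_{j2mt}
      && \forall j \in J, \\
  & \sum_{m \in M_2}\sum_{t} (t + p_{j2})\, x_{j2mt} \;\le\; C_{\max},
      \qquad x_{jsmt} \in \{0,1\}.
\end{align*}
```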
84

Unveiling Hidden Problems: A Two-Stage Machine Learning Approach to Predict Financial Misstatement Using the Existence of Internal Control Material Weaknesses

Sun, Jing 07 1900 (has links)
Prior research has provided evidence that the disclosure of internal control material weaknesses (ICMWs) is a powerful input attribute in misstatement prediction. However, the disclosure of ICMWs is an imperfect measure of internal control quality because many firms with control problems fail to disclose ICMWs on a timely basis. The purpose of this study is to examine whether the existence of ICMWs, including both disclosed and undisclosed ICMWs, improves misstatement prediction. I develop a two-stage machine learning model for misstatement prediction with the predicted existence of ICMWs as the intermediate concept; this model outperforms the model built on ICMW disclosures alone. I also find that the model incorporating both the predicted existence and the disclosure of ICMWs outperforms models using only the disclosure or only the predicted existence of ICMWs. These results hold across different input attributes, machine learning methods, prediction periods, and training-test sample splitting methods. Finally, this study shows that the two-stage models outperform one-stage models in predictions related to financial reporting quality.
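A minimal sketch of the two-stage idea on synthetic data (estimators, features, and data are illustrative assumptions, not the study's models): stage one predicts whether an ICMW exists, and its predicted probability is then fed to stage two as an additional input attribute for misstatement prediction.

```python
# Hedged sketch of a two-stage predictor on synthetic data; the estimators and
# features are illustrative assumptions, not the models used in the study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))                          # firm-level input attributes (synthetic)
icmw_exists = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 1).astype(int)
misstate = (1.5 * icmw_exists + X[:, 2] + rng.normal(0, 1, n) > 1.5).astype(int)

X_tr, X_te, icmw_tr, icmw_te, y_tr, y_te = train_test_split(
    X, icmw_exists, misstate, test_size=0.3, random_state=0)

# Stage 1: predict the existence of an ICMW from the input attributes.
stage1 = LogisticRegression(max_iter=1000).fit(X_tr, icmw_tr)
p_icmw_tr = stage1.predict_proba(X_tr)[:, 1]
p_icmw_te = stage1.predict_proba(X_te)[:, 1]

# Stage 2: predict misstatement, adding the predicted ICMW probability as a feature.
stage2 = GradientBoostingClassifier().fit(np.column_stack([X_tr, p_icmw_tr]), y_tr)
acc = stage2.score(np.column_stack([X_te, p_icmw_te]), y_te)
print(f"two-stage test accuracy on synthetic data: {acc:.3f}")
```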
85

Failure of polymeric materials at ultra-high strain rates

Callahan, Kyle Richard 10 May 2024 (has links) (PDF)
Understanding the failure behavior of polymers subjected to ultrahigh strain rate (UHSR) impact is crucial for their application in protective shielding. However, little is known about how polymers respond to UHSR events at the macroscale, or about the contributions of their chemical makeup and morphology. This dissertation aims to answer these questions by characterizing the responses of polymers subjected to UHSRs, investigating how polymer molecular architecture and morphology alter the macroscopic response to UHSRs via hypervelocity impact (HVI), linking UHSR behavior between the macro- and nano-length scales, and determining the consequences of UHSR impacts on polymer chains. Macroscale UHSR impacts are conducted using a two-stage light gas gun (2SLGG) to induce an HVI. Different molecular weights and thicknesses of polycarbonate were considered. The HVI behavior of polycarbonate is characterized using both real-time and postmortem techniques. The response depends on target thickness and impact velocity (vi); however, a negligible difference is observed between the HVI results for the two entanglement densities. This contrasts with previous conclusions drawn at the nanoscale during UHSR impacts, which capture an increase in the energy arrested from the projectile with increasing entanglement density. To link UHSR phenomena from the nanoscale to the macroscale, laser-induced projectile impact testing (LIPIT) is conducted on polymethyl methacrylate (PMMA) thin films at the nanoscale, in addition to ballistic and 2SLGG impacts at the macroscale. Applying the Buckingham-Π theorem, scaling relationships for the minimum perforation velocity and the residual velocity across these length scales were developed. It is shown that the ratio of target thickness to projectile radius, the ratio of projectile to target density, and the velocity of the compressive stress wave traveling through the target are the governing parameters for the UHSR responses of polymers across these length scales. The effect of UHSRs on the polymer is investigated via ex-situ analysis by capturing polymer debris using a custom-built debris catcher. Different material-vi combinations are examined. X-ray diffraction and differential scanning calorimetry are used to characterize the HVI debris, and evidence of char was found within the debris. This dissertation advances the knowledge of the failure behavior of polymer materials subjected to UHSRs.
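A small sketch of the governing dimensionless groups named above, evaluated for assumed (not measured) projectile and target values at the macro and nano scales; normalizing the impact velocity by the compressive wave speed is also an assumption made here for illustration.

```python
# Hedged sketch: the dimensionless groups named in the abstract, evaluated for
# assumed (not measured) projectile/target values at macro and nano scales.
def scaling_groups(h_target, r_proj, rho_proj, rho_target, v_impact, c_wave):
    """Return (thickness/radius, density ratio, impact velocity / wave speed)."""
    return h_target / r_proj, rho_proj / rho_target, v_impact / c_wave

# Macroscale two-stage light gas gun shot on polycarbonate (values assumed).
print(scaling_groups(h_target=3e-3, r_proj=2.5e-3, rho_proj=2700.0,
                     rho_target=1200.0, v_impact=3000.0, c_wave=2200.0))

# Nanoscale LIPIT shot on a PMMA thin film (values assumed).
print(scaling_groups(h_target=100e-9, r_proj=10e-9, rho_proj=8900.0,
                     rho_target=1180.0, v_impact=800.0, c_wave=2700.0))
```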
86

Measuring the efficiency of two stage network processes: a satisficing DEA approach

Mehdizadeh, S., Amirteimoori, A., Vincent, Charles, Behzadi, M.H., Kordrostami, S. 2020 March 1924 (has links)
No / Regular network data envelopment analysis (NDEA) models deal with evaluating the performance of a set of decision-making units (DMUs) with a two-stage structure in the context of a deterministic data set. In the real world, however, observations may display stochastic behavior. To the best of our knowledge, despite the existing research on different data types, studies on two-stage processes with stochastic data are still very limited. This paper proposes a two-stage network DEA model with stochastic data. The stochastic two-stage network DEA model is formulated based on the satisficing DEA models of chance-constrained programming and leader-follower concepts. Using the properties of the probability distribution and under the assumption of a single random factor in the data, the probabilistic form of the model is transformed into its equivalent deterministic linear programming model. In addition, the relationship between the two stages, as leader and follower respectively, at different confidence levels and under different aspiration levels, is discussed. The proposed model is applied to a real case concerning 16 commercial banks in China in order to confirm the applicability of the proposed approach at different confidence and aspiration levels.
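As a generic illustration of the satisficing, chance-constrained step described above (notation assumed here rather than taken from the paper): when the random data follow a normal distribution with a single random factor, a constraint required to hold with probability at least α admits a deterministic equivalent via the standard normal quantile.

```latex
% Hedged sketch: generic deterministic equivalent of a chance constraint under a
% normally distributed single random factor; the notation is assumed, not the paper's.
\Pr\{\tilde{a}^{\top} x \le b\} \ge \alpha
\quad\Longleftrightarrow\quad
\bar{a}^{\top} x + \Phi^{-1}(\alpha)\sqrt{x^{\top}\Sigma\,x} \le b,
\qquad \tilde{a}\sim\mathcal{N}(\bar{a},\Sigma).
```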
87

Two-Stage Stochastic Model to Invest in Distributed Generation Considering the Long-Term Uncertainties

Angarita-Márquez, Jorge L., Mokryani, Geev, Martínez-Crespo, J. 13 October 2021 (has links)
Yes / This paper applied different risk management indicators to the investment optimization performed by consumers in Distributed Generation (DG). The objective function is the total cost incurred by the consumer, including the energy and capacity payments, the savings and revenues from the installation of DG, and the operation and maintenance (O&M) and investment costs. A probability density function (PDF) was used to model long-term price volatility. The mathematical model uses a two-stage stochastic approach with investment and operational stages. The investment decisions are made in the first stage and do not change across uncertainty scenarios. The operation variables belong to the second stage and therefore take different values in every realization. Three risk indicators were used to assess the uncertainty risk: Value-at-Risk (VaR), Conditional Value-at-Risk (CVaR), and Expected Value (EV). The results showed the importance of migrating from deterministic models to stochastic ones and, most importantly, of understanding the ramifications of each risk indicator.
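As a small illustration of the three risk indicators named above (not the paper's model), the sketch below computes EV, VaR, and CVaR from a set of simulated scenario costs; the cost distribution and confidence level are assumed.

```python
# Hedged sketch: computing the three risk indicators named above (EV, VaR, CVaR)
# from a set of scenario costs; the cost distribution is assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
scenario_costs = rng.lognormal(mean=8.0, sigma=0.4, size=10_000)  # assumed long-term costs
beta = 0.95                                                       # confidence level, assumed

ev = scenario_costs.mean()                           # Expected Value of the cost
var = np.quantile(scenario_costs, beta)              # Value-at-Risk: beta-quantile of the cost
cvar = scenario_costs[scenario_costs >= var].mean()  # CVaR: mean cost in the worst tail

print(f"EV = {ev:,.0f}, VaR({beta:.0%}) = {var:,.0f}, CVaR({beta:.0%}) = {cvar:,.0f}")
```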
88

Simulation and optimisation of a two-stage/two-pass reverse osmosis system for improved removal of chlorophenol from wastewater

Al-Obaidi, Mudhar A.A.R., Kara-Zaitri, Chakib, Mujtaba, Iqbal 03 February 2018 (has links)
Yes / Reverse osmosis (RO) has become a common method for treating wastewater and removing several harmful organic compounds because of its relative ease of use and reduced costs. Chlorophenol is a compound that is toxic to humans and is readily found in the wastewater of a wide range of industries. Previous research in this area has already provided promising results with respect to the performance of an individual spiral-wound RO process for removing chlorophenol from wastewater, but the associated removal rates have stayed stubbornly low. The literature has so far confirmed that the efficiency of eliminating chlorophenol from wastewater using a pilot-scale individual spiral-wound RO process is around 83 %, compared to 97 % for dimethylphenol. This paper explores, via simulation and optimisation, the potential of an alternative two-stage/two-pass RO configuration for improving these low chlorophenol rejection rates. The operational optimisation is enhanced by constraining the total recovery rate to a realistic value while varying the system operating parameters within the allowable limits of the process. The results indicate that the proposed configuration has the potential to increase the rejection of chlorophenol by 12.4 % while achieving 40 % total water recovery at an energy consumption of 1.949 kWh/m³.
89

A comparative analysis of two-stage distress prediction models

Mousavi, Mohammad M., Quenniche, J., Tone, K. 11 February 2018 (has links)
Yes / Feature selection is one of the critical steps in developing a distress prediction model (DPM), and a variety of expert systems and machine learning approaches have analytically supported developers in this step. Data envelopment analysis (DEA) has provided this support by estimating the novel feature of managerial efficiency, which has frequently been used in recent two-stage DPMs. As key contributions, this study extends the application of expert systems in credit scoring and distress prediction by applying diverse DEA models to compute corporate market efficiency in addition to the prevailing managerial efficiency, and by estimating the decomposed measure of mix efficiency and investigating its contribution, compared to Pure Technical Efficiency and Scale Efficiency, to the performance of DPMs. Further, this paper provides a comprehensive comparison between two-stage DPMs by estimating a variety of DEA efficiency measures in the first stage and employing static and dynamic classifiers in the second stage. Based on experimental results, guidelines are provided to help practitioners develop two-stage DPMs; more specifically, guidelines are provided to assist with the choice of the proper DEA models to use in the first stage, and the choice of the best corporate efficiency measures and classifiers to use in the second stage.
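For illustration, the efficiency feature computed in the first stage of such models is typically obtained from a DEA linear program. The sketch below solves an input-oriented CCR envelopment model with scipy for a few invented DMUs; the data, input/output choices, and the CCR variant are assumptions, not the specific DEA models compared in the paper.

```python
# Hedged sketch: an input-oriented CCR DEA efficiency score via an LP, the kind of
# first-stage efficiency feature described above; the DMU data below are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 30.0, 25.0, 40.0],       # input 1 per DMU (e.g., staff), assumed
              [50.0, 80.0, 60.0, 90.0]])      # input 2 per DMU (e.g., assets), assumed
Y = np.array([[100.0, 120.0, 110.0, 130.0]])  # output per DMU (e.g., loans), assumed

def ccr_efficiency(o):
    """Efficiency of DMU o: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                          # minimize theta
    A_in = np.hstack([-X[:, [o]], X])                    # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # -Y@lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

scores = [ccr_efficiency(o) for o in range(X.shape[1])]
print("CCR efficiency scores:", np.round(scores, 3))
# These scores would then enter a second-stage classifier as an input feature.
```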
90

Approches duales dans la résolution de problèmes stochastiques / Dual approaches in stochastic programming

Letournel, Marc 27 September 2013 (has links)
The general aim of this thesis is to extend the analytical and algebraic tools usually employed in solving deterministic combinatorial problems to a stochastic combinatorial setting. Two distinct settings are studied: discrete stochastic combinatorial problems and continuous stochastic problems. The discrete setting is addressed through the maximum weight covering forest problem in a two-stage, multi-scenario formulation. The well-known deterministic version of this problem establishes links between the rank function of a matroid and the dual formulation, via the greedy algorithm. The discrete stochastic formulation of the maximum weight covering forest is transformed into an equivalent deterministic problem but, because of the multiplicity of scenarios, the associated dual is in some sense incomplete. The work carried out here consists in understanding under which circumstances the dual formulation nevertheless attains a minimum equal to the integral primal problem. Ordinarily, a classical combinatorial approach to weighted graph problems looks for particular configurations within the graphs, such as circuits, and explores possible recombinations. As a simple illustration, if the edge weights of a graph are changed infinitesimally, the maximum weight covering forest may reorganize completely, which is an obstacle to a purely combinatorial approach. Nevertheless, certain analytical quantities, such as the total weight of the selected edges, vary continuously with these infinitesimal changes. We introduce functions that capture these continuous variations and determine in which cases the dual formulations attain the same value as the integral primal formulations. When they do not, we propose an approximation method and establish the NP-completeness of this type of problem. Continuous stochastic problems are addressed through the knapsack problem with a stochastic constraint. The formulation is of the chance-constrained type, and the Lagrangian dualization is adapted to a situation where the probability of satisfying the constraint must remain close to 1. The model studied is a knapsack in which the items have values and weights given by normal distributions. In our approach, we apply gradient methods directly to the expectation formulation of the objective function and the constraint, rather than using a classical geometric reformulation of the problem, and we detail the convergence conditions of the stochastic gradient method. This part is illustrated by numerical tests comparing against the SOCP method on combinatorial instances solved by Branch and Bound, and on relaxed instances.
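As a minimal illustration of the chance-constrained knapsack model described in this abstract (not the thesis's algorithm): under independent normal weights the chance constraint has a closed-form deterministic equivalent, and a simple projected-gradient penalty scheme can be run on the continuous relaxation. The item data, target probability, step size, and penalty weight below are all assumptions; the thesis itself works with a stochastic gradient method on the expectation formulation and compares against SOCP.

```python
# Hedged sketch: continuous relaxation of the chance-constrained knapsack with
# independent normal weights; item data and the target probability are assumed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, C, p = 12, 30.0, 0.95                 # items, capacity, required probability (assumed)
mu_r = rng.uniform(1.0, 5.0, n)          # mean rewards, assumed
mu_w = rng.uniform(1.0, 4.0, n)          # mean weights, assumed
sd_w = rng.uniform(0.1, 0.8, n)          # weight standard deviations, assumed
z = norm.ppf(p)

def g(x):
    """Deterministic equivalent of Pr{sum w_i x_i <= C} >= p under independent normals."""
    return mu_w @ x + z * np.sqrt((sd_w**2) @ (x**2)) - C

def grad_g(x):
    s = np.sqrt((sd_w**2) @ (x**2)) + 1e-12
    return mu_w + z * (sd_w**2) * x / s

# Projected gradient ascent on a penalized objective: reward minus lam * violation.
x, lam, step = np.full(n, 0.5), 5.0, 0.02
for _ in range(2000):
    grad = mu_r - lam * (grad_g(x) if g(x) > 0 else 0.0)
    x = np.clip(x + step * grad, 0.0, 1.0)   # project back onto [0, 1]^n
print("relaxed selection:", np.round(x, 2), " constraint value:", round(g(x), 3))
```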
