491

PGNME: A Domain Decomposition Algorithm for Distributed Power System Dynamic Simulation on High Performance Computing Platforms

Sullivan, Brian Shane 12 August 2016 (has links)
Dynamic simulation of a large-scale electric power system involves solving a large number of differential algebraic equations (DAEs) at every simulation time-step. With the ever-growing size and complexity of power grids, dynamic simulation becomes increasingly time-consuming and computationally difficult using conventional sequential simulation techniques. This thesis presents a fully distributed approach intended for implementation on High Performance Computing (HPC) clusters. A novel, relaxation-based domain decomposition algorithm known as Parallel-General-Norton with Multiple-port Equivalent (PGNME) is proposed as the core technique of a two-stage decomposition approach that divides the overall dynamic simulation problem into a set of subproblems that can be solved concurrently. While convergence has traditionally been a concern for relaxation-based decomposition, an estimation mechanism based on a multiple-port network equivalent is adopted as the preconditioner to enhance the convergence of the proposed algorithm. The algorithm is presented in detail and validated in terms of both accuracy and capability.
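The decomposition idea above can be illustrated with a deliberately tiny sketch. This is not PGNME itself (the multiple-port Norton equivalent and its preconditioner are beyond a toy), but a generic relaxation-based domain decomposition of a coupled linear system: each "subdomain" solves its own equations while treating the other subdomain's variables from the previous iterate as fixed boundary inputs, iterating until the exchange converges. All matrices and values here are invented for illustration.

```python
# Toy relaxation-based domain decomposition (illustrative sketch only):
# a coupled linear system A x = b is split into two subdomains that are
# solved independently, exchanging boundary values each iteration.

def solve2(a, b_, c, d, r1, r2):
    """Solve the 2x2 system [[a, b_], [c, d]] x = [r1, r2] by Cramer's rule."""
    det = a * d - b_ * c
    return ((r1 * d - r2 * b_) / det, (a * r2 - c * r1) / det)

# Two 2x2 diagonal blocks, weakly tied through off-diagonal "boundary" terms.
A11 = (4.0, 1.0, 1.0, 3.0)
A22 = (5.0, 1.0, 1.0, 4.0)
coupling = 0.5                 # strength of the inter-subdomain ties
b = (1.0, 2.0, 3.0, 4.0)

x = [0.0, 0.0, 0.0, 0.0]       # initial guess for all four unknowns
for _ in range(100):
    # Each subdomain solves its own equations, treating the other
    # subdomain's previous iterate as a fixed boundary input.
    n1 = solve2(*A11, b[0] - coupling * x[2], b[1] - coupling * x[3])
    n2 = solve2(*A22, b[2] - coupling * x[0], b[3] - coupling * x[1])
    new = [n1[0], n1[1], n2[0], n2[1]]
    if max(abs(u - v) for u, v in zip(new, x)) < 1e-10:
        x = new
        break                   # boundary exchange has converged
    x = new
```

The iteration converges here because the toy system is diagonally dominant and the coupling is weak; the abstract's point is precisely that for realistic power-grid Jacobians such convergence is not automatic, which is why PGNME adds a network-equivalent preconditioner.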
492

Optimization Models and Algorithms for Vulnerability Analysis and Mitigation Planning of Pyro-Terrorism

Rashidi, Eghbal 12 August 2016 (has links)
In this dissertation, an important homeland security problem is studied, with a focus on wildfire and pyro-terrorism management. We begin by studying the vulnerability of landscapes to pyro-terrorism. We develop a maximal-covering-based optimization model to investigate the impact of a pyro-terror attack on landscapes as a function of the ignition locations of fires. We use three test-case landscapes for experimentation and compare the impact of a pyro-terror wildfire with the impacts of naturally caused wildfires with randomly located ignition points. Our results indicate that a pyro-terror attack, on average, has more than twice the impact on landscapes of wildfires with randomly located ignition points. In the next chapter, we develop a Stackelberg game model, a min-max network interdiction framework that identifies a fuel management schedule that, with a limited budget, maximally mitigates the impact of a pyro-terror attack. We develop a decomposition algorithm called MinMaxDA to solve the model for three test-case landscapes located in the Western U.S. Our results indicate that fuel management, even when conducted on a small scale (when 2% of a landscape is treated), can mitigate a pyro-terror attack by 14% on average, compared to doing nothing; with a 5% or 10% budget, it can reduce the damage by 27% and 43% on average, respectively. Finally, we extend our study to the problem of suppression response after a pyro-terror attack. We develop a max-min model to identify the vulnerability of initial-attack resources when used to fight a pyro-terror attack. We use a test-case landscape for experimentation and develop a decomposition algorithm called the Bounded Decomposition Algorithm (BDA) to solve the problem, since the model has a bilevel max-min structure with binary variables in the lower level and is therefore not solvable by conventional methods.
Our results indicate that although pyro-terror attacks with one ignition point can be controlled by an initial attack, attacks with two or more ignition points may not be. A faster response is also more promising in controlling pyro-terror fires.
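The leader-follower structure described above can be sketched in miniature. This brute-force toy is not MinMaxDA (which handles real landscapes with a decomposition scheme); it enumerates a tiny one-dimensional "landscape" in which fire spreads from an ignition cell until it hits a treated cell, the attacker picks the worst ignition point, and the defender picks fuel treatments to minimize that worst case. All sizes and the spread model are invented.

```python
# Brute-force sketch of a min-max (Stackelberg) fuel-management game on a
# toy 1-D landscape; illustrative only, not the dissertation's MinMaxDA.
from itertools import combinations

N = 8          # cells in the toy landscape
BUDGET = 2     # defender may treat this many cells (fuel breaks)

def burned(ignite, treated):
    """Fire spreads left and right from the ignition cell, stopping at
    treated cells; returns the number of cells burned."""
    if ignite in treated:
        return 0
    count = 1
    j = ignite + 1
    while j < N and j not in treated:
        count += 1
        j += 1
    j = ignite - 1
    while j >= 0 and j not in treated:
        count += 1
        j -= 1
    return count

def worst_case(treated):
    """Attacker's best response: the most damaging ignition point."""
    return max(burned(i, treated) for i in range(N))

# Defender (leader) minimizes the attacker's (follower's) maximum damage.
best = min(combinations(range(N), BUDGET), key=worst_case)
```

With two treatments on eight cells, the optimal fuel breaks split the landscape into three short segments, capping the worst-case burn at two cells versus eight with no treatment, echoing the abstract's point that even small treatment budgets mitigate the worst attack.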
493

Biomass-To-Biofuels' Supply Chain Design And Management

Acharya, Ambarish Madhukar 10 December 2010 (has links)
The goal of this dissertation is to study optimization models that integrate location, production, inventory, and transportation decisions for industrial products, and to apply the knowledge gained to develop supply chains for agricultural products (biomass). We estimate the unit cost for the whole biomass-to-biofuels supply chain, i.e., the per-gallon cost of biofuels until they reach the markets. The estimated unit cost is the sum of location, production, inventory holding, and transportation costs. In this dissertation, we focus on building mathematical models for designing and managing biomass-to-biofuels supply chains. The computational complexity of the developed models makes it advisable to use heuristic solution procedures. We develop a Lagrangean decomposition heuristic in which we divide the problem into two subproblems: subproblem 1 is a transportation problem, and subproblem 2 is a combination of a capacitated facility location and production planning problem. Subproblem 2 is further divided by commodity. The algorithm is tested on a number of different scenarios. We also develop a decision support system (DSS) for the biomass-to-biofuels supply chain. In our DSS, the main problem is divided into four easy-to-solve supply chain problems, determined based on our knowledge of supply chains and discussions with experts from the biomass and biofuels sector. The DSS is coded using Visual Basic for Applications (VBA) for Excel and has a simple user interface that assists the user in running different types of supply chain problems and provides results in the form of easy-to-understand reports.
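The core mechanics of a Lagrangean decomposition (variable splitting) heuristic can be shown on a toy instance. This is not the dissertation's model; as a stand-in, two invented cost functions are linked by a constraint x = y, the link is dualized with a multiplier, and the two resulting subproblems are solved independently inside a subgradient loop, mirroring how the transportation and location/production subproblems decouple.

```python
# Toy Lagrangean decomposition: minimize f(x) + g(y) subject to x = y by
# relaxing the linking constraint with a multiplier `lam`. Illustrative
# only; the real subproblems are a transportation problem and a
# capacitated facility location / production planning problem.

def f(x):   # invented stand-in for one subproblem's cost
    return (x - 3) ** 2

def g(y):   # invented stand-in for the other subproblem's cost
    return (y - 1) ** 2

domain = range(6)   # small discrete decision space
lam = 0.0
for _ in range(50):
    # Once x = y is dualized, the two subproblems decouple completely.
    x = min(domain, key=lambda v: f(v) + lam * v)
    y = min(domain, key=lambda v: g(v) - lam * v)
    if x == y:              # linking constraint satisfied: feasible point
        break
    lam += 1.0 * (x - y)    # subgradient step on the dual variable
```

Here the loop reaches x = y = 2, the true optimum of the coupled problem; in general, of course, the Lagrangean heuristic yields bounds and good feasible points rather than guaranteed optima.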
494

Bacterial community dynamics during lignocellulose decomposition as affected by soil and residue types

Michel, Himaya Mula 30 April 2011 (has links)
This study was conducted to determine the dynamics of bacterial communities during decomposition and to find out whether the occurrence of bacterial communities was affected by soil and residue types. It was hypothesized that there would be a shift in bacterial community structure during decomposition, and that the distinct microbial communities of two different soils associated with two residues would result in colonization by different microbial taxa. The first hypothesis was based on expected changes in the composition of decomposing residues. The second was based on the fact that soil microbial diversity is soil-specific and immense, with numerous functionally redundant but phylogenetically different microbial types. Residues with different chemical properties were also expected to affect bacterial community composition; however, their impact was expected to be smaller than that of soil. A 2 x 2 x 4 factorial experiment was conducted consisting of 2 residues, switchgrass (Panicum virgatum) and rice (Oryza sativa) straw; 2 soil types (Sharkey and Marietta series); and 4 incubation periods (3, 23, 48, and 110 days). Clone libraries of the bacterial communities were constructed from the detritusphere (residues and adhering soil). Non-metric multidimensional scaling of the detritusphere communities showed distinct separation of the communities at day 3, which coincided with high levels of cellulase enzyme activity and a reduction of soluble carbon. Availability of labile carbon appeared to be important in driving bacterial community succession at the early stage of colonization. During the later stages of decomposition (days 23-110), bacterial communities segregated into two groups according to soil type. Although important, this segregation was relatively small compared to the community-level similarities observed across soils and residues.
For example, 16 of the 22 most abundant OTUs, dominated by α-, β-, and γ-Proteobacteria, Bacilli, and Sphingobacteria, were shared among all soil and residue treatments, indicating that residue decomposition is carried out by a few key-player taxa. These results run counter to our hypothesis and suggest that the decomposition process may be mediated by certain dominant bacterial taxa that occur at the later stage of decomposition. Further research is needed to determine whether key functional ecosystem processes are dominated by only a few taxa despite taxonomically hyper-diverse soils.
495

Stochastic Multiperiod Optimization of an Industrial Refinery Model

Boucheikhchoukh, Ariel January 2021 (has links)
The focus of this work is an industrial refinery model developed by TotalEnergies SE. The model is a sparse, large-scale, nonconvex, mixed-integer nonlinear program (MINLP). The nonconvexity of the problem arises from the many bilinear, trilinear, fractional, logarithmic, exponential, and sigmoidal terms. In order to account for various sources of uncertainty in refinery planning, the industrial refinery model is extended into a two-stage stochastic program, where binary scheduling decisions must be made prior to the realization of the uncertainty, and mixed-integer recourse decisions are made afterwards. Two case studies involving uncertainty are formulated and solved in order to demonstrate the economic and logistical benefits of robust solutions over their deterministic counterparts. A full-space solution strategy is proposed wherein the integrality constraints are relaxed and a multi-step initialization strategy is employed in order to gradually approach the feasible region of the multi-scenario problem. The full-space solution strategy was significantly hampered by difficulties with finding a feasible point and numerical problems. In order to facilitate the identification of a feasible point and to reduce the incidence of numerical difficulties, a hybrid surrogate refinery model was developed using the ALAMO modelling tool. An evaluation procedure was employed to assess the surrogate model, which was shown to be reasonably accurate for most output variables and to be more reliable than the high-fidelity model. Feasible solutions are obtained for the continuous relaxations of both case studies using the full-space solution strategy in conjunction with the surrogate model. In order to solve the original MINLP problems, a decomposition strategy based on the generalized Benders decomposition (GBD) algorithm is proposed. 
The binary decisions are designated as complicating variables that, when fixed, reduce the full-space problem to a series of independent scenario subproblems. Through the application of the GBD algorithm, feasible mixed-integer solutions are obtained for both case studies; however, optimality could not be guaranteed. Solutions obtained via the stochastic programming framework are shown to be more robust than solutions obtained via a deterministic problem formulation. / Thesis / Master of Applied Science (MASc)
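The key structural point above — fixing the complicating binary variables makes the per-scenario subproblems independent — can be sketched in a few lines. This toy is neither the refinery MINLP nor the GBD algorithm itself (Benders cuts are replaced by enumerating a one-bit first stage); all costs, probabilities, and demands are invented.

```python
# Sketch of a two-stage stochastic program that separates by scenario once
# the complicating binary decision z is fixed. Illustrative only.

scenarios = [(0.5, 10.0), (0.3, 18.0), (0.2, 25.0)]  # (probability, demand)
OPEN_COST, UNIT_COST, OUTSOURCE_COST = 5.0, 1.0, 3.0

def recourse(z, demand):
    """Second-stage subproblem for one scenario, with the binary z fixed:
    serve demand in-house if the facility was opened, else outsource."""
    return UNIT_COST * demand if z == 1 else OUTSOURCE_COST * demand

def expected_cost(z):
    # With z fixed, the scenario subproblems are independent and could be
    # solved concurrently -- the property GBD exploits at scale.
    return OPEN_COST * z + sum(p * recourse(z, d) for p, d in scenarios)

# First stage: here the binary space is tiny, so plain enumeration stands
# in for the master problem and its Benders cuts.
best_z = min((0, 1), key=expected_cost)
```

The robust (stochastic) choice weighs all scenarios at once; a deterministic formulation built on a single nominal demand could pick differently and fare worse in expectation, which is the economic argument the case studies make.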
496

Large Dimensional Data Analysis using Orthogonally Decomposable Tensors: Statistical Optimality and Computational Tractability

Auddy, Arnab January 2023 (has links)
Modern data analysis requires the study of tensors, or multi-way arrays. We consider the case where the dimension d is large and the order p is fixed. For dimension reduction and for interpretability, one considers tensor decompositions, where a tensor T is decomposed into a sum of rank-one tensors. In this thesis, I describe recent work that illustrates why and how to use decompositions for orthogonally decomposable tensors. Our developments are motivated by statistical applications where the data dimension is large. The estimation procedures therefore aim to be computationally tractable while providing error rates that depend optimally on the dimension. A tensor is said to be orthogonally decomposable if it can be decomposed into rank-one tensors whose component vectors are orthogonal. A number of data analysis tasks can be recast as the problem of estimating the component vectors from a noisy observation of an orthogonally decomposable tensor. In our first set of results, we study this decomposition problem and derive perturbation bounds. For any two orthogonally decomposable tensors that are ε-perturbations of one another, we derive sharp upper bounds on the distances between their component vectors. While this is motivated by the extensive literature on perturbation bounds for the singular value decomposition, our work shows fundamental differences and requires new techniques. We show that tensor perturbation bounds have no dependence on the eigengap, a quantity that is unavoidable for matrices. Moreover, our perturbation bounds depend on the tensor spectral norm of the noise, and we provide examples to show that this leads to optimal error rates in several high-dimensional statistical learning problems. Our results imply that matricizing a tensor is sub-optimal in terms of dimension dependence. The tensor perturbation bounds derived so far are universal, in that they depend only on the spectral norm of the perturbation.
In subsequent chapters, we show that one can extract further information from how the noise is generated, and thus improve over the tensor perturbation bounds both statistically and computationally. We demonstrate this approach for two different problems: first, in estimating a rank-one spiked tensor perturbed by independent heavy-tailed noise entries; and second, in performing inference from moment tensors in independent component analysis. We find that an estimator benefits immensely, in terms of both statistical accuracy and computational feasibility, from additional information about the structure of the noise. In one chapter, we consider independent noise elements; in the next, the noise arises as the difference of sample and population fourth moments. In both cases, our estimation procedures are designed to avoid accumulating the errors from different sources. In a departure from the tensor perturbation bounds, we also find that the spectral norm of the error tensor does not lead to the sharpest estimation error rates in these cases. The error rates of estimating the component vectors are affected only by the noise projected in certain directions, and due to the orthogonality of the signal tensor, the projected errors do not accumulate and can be controlled more easily.
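To make the object of study concrete, here is a small sketch of an orthogonally decomposable third-order tensor and the standard tensor power iteration that recovers one component vector. The components, weights, and iteration counts are invented for illustration (the orthonormal components are simply the standard basis), and this generic routine is not the thesis's estimation procedure.

```python
import numpy as np

# Build an orthogonally decomposable third-order tensor
#   T = sum_i lam_i * a_i (x) a_i (x) a_i
# with orthonormal components a_i (here: the standard basis) and
# invented weights lam_i.
d = 4
A = np.eye(d)                           # orthonormal component vectors a_i
lams = np.array([3.0, 2.0, 1.0, 0.5])   # weights lambda_i
T = np.einsum('i,ia,ib,ic->abc', lams, A, A, A)

def tensor_power_iteration(T, iters=50, seed=0):
    """Recover one component: iterate v <- T(I, v, v) / ||T(I, v, v)||."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.einsum('abc,b,c->a', T, v, v)   # contract two modes with v
        v /= np.linalg.norm(v)
    lam = np.einsum('abc,a,b,c->', T, v, v, v)  # weight of recovered component
    return lam, v

lam, v = tensor_power_iteration(T)
# From a generic start, v converges (up to sign) to one of the a_i,
# and lam to the matching lambda_i -- with no eigengap condition needed.
```

Which component the iteration lands on depends on the random start; the robust fixed-point property (every component is recoverable, regardless of gaps between the weights) is the tensor-specific phenomenon the perturbation bounds above exploit.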
497

BISER: fast characterization of segmental duplication structure in multiple genome assemblies

Iseric, Hamza January 2021 (has links)
The increasing availability of high-quality genome assemblies has raised interest in the characterization of genomic architecture. Major architectural elements, such as common repeats and segmental duplications (SDs), increase genome plasticity, which stimulates further evolution by changing the genomic structure and inventing new genes. Optimal computation of SDs within a genome requires quadratic-time local alignment algorithms that are impractical due to the size of most genomes. Additionally, to perform evolutionary analysis, one needs to characterize SDs in multiple genomes and find relations between those SDs and unique (non-duplicated) segments in other genomes. A naïve approach consisting of multiple sequence alignment would make the optimal solution to this problem even more impractical. Thus, there is a need for fast and accurate algorithms to characterize SD structure in multiple genome assemblies in order to better understand the evolutionary forces that shaped the genomes of today. Here we introduce a new approach, BISER, to quickly detect SDs in multiple genomes and identify elementary SDs and core duplicons that drive the formation of such SDs. BISER improves on earlier tools by (i) scaling the detection of SDs with low homology (75%) to multiple genomes while introducing further 10-34× speed-ups over existing tools, and (ii) characterizing elementary SDs and detecting core duplicons to help trace the evolutionary history of duplications as far back as 300 million years. / Graduate
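A flavor of why SD detection can avoid quadratic local alignment: cheap k-mer sketches flag candidate duplicated segments even at modest sequence identity, leaving expensive alignment for the few candidates. The sketch below is a generic k-mer Jaccard filter, not BISER's actual seeding algorithm, and the sequences are invented.

```python
# Generic k-mer similarity filter (illustrative; not BISER's algorithm):
# segments sharing many k-mers are candidate segmental duplications,
# even in the presence of mismatches.

def kmers(seq, k=5):
    """Set of all length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(a, b, k=5):
    """Jaccard similarity of k-mer sets -- a cheap proxy for homology."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

ancestral = "ACGTTGACCGTAGGCTAACGTTGCAT"
diverged  = "ACGTTGACCGTAGGATAACGTTGCAT"   # one substitution vs. ancestral
unrelated = "TTTTAAAACCCCGGGGTTTTAAAACC"

# A single mismatch destroys only the k k-mers covering it, so the
# diverged copy still scores far above an unrelated sequence.
assert kmer_similarity(ancestral, diverged) > kmer_similarity(ancestral, unrelated)
```

Real SD tools work at genome scale with sketching, winnowing, and gapped seeds to tolerate the ~75% homology mentioned above, but the filtering principle is the same: cheap set overlap first, alignment later.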
498

Studies on catalyst materials and operating conditions for ammonia decomposition / アンモニア分解における触媒材料及び動作条件の研究

Younghwan, Im 24 November 2021 (has links)
Kyoto University / Doctoral program (new system) / Doctor of Engineering / Degree No. Kō 23578 / Engineering Doctorate No. 4933 / 新制||工||1770 (University Library) / Department of Material and Energy Chemistry, Graduate School of Engineering, Kyoto University / (Examination committee) Prof. Koichi Eguchi, Prof. Hiroshi Kageyama, Prof. Ryu Abe / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DGAM
499

Collaborative Products: A Design Methodology with Application to Engineering-Based Poverty Alleviation

Morrise, Jacob S. 08 August 2011 (has links) (PDF)
Collaborative products are created when physical components from two or more products are temporarily recombined to form another product capable of performing entirely new tasks. The method for designing collaborative products is useful in developing products with reduced cost, weight, and size. These reductions are valued in the developing world because collaborative products have a favorable task-per-cost ratio. In this paper, a method for designing collaborative products is introduced. The method identifies a set of products capable of being recombined into a collaborative product. These products are then designed to allow for this recombination. Three examples are provided to illustrate the method. These examples show that a collaborative block plane, apple peeler, and brick press, each created from a set of products, can increase the task-per-cost ratio of these products by 42%, 20%, and 30%, respectively. The author concludes that the method introduced herein provides a new and useful tool to design collaborative products and to engineer products that are valued in the developing world.
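The task-per-cost reasoning above can be made concrete with a back-of-the-envelope sketch. All numbers here are invented; the paper's reported 42%, 20%, and 30% gains come from its own block plane, apple peeler, and brick press designs, not from this toy.

```python
# Invented illustration of the task-per-cost ratio for a collaborative
# product: recombining two products unlocks new tasks at no added cost.

def task_per_cost(tasks, cost):
    return tasks / cost

# Two hypothetical stand-alone products.
product_a = {"tasks": 3, "cost": 20.0}
product_b = {"tasks": 2, "cost": 30.0}

# Recombined, the pair performs 2 entirely new tasks; the user already
# owns both products, so the combined cost does not increase.
combined_tasks = product_a["tasks"] + product_b["tasks"] + 2
combined_cost = product_a["cost"] + product_b["cost"]

baseline = task_per_cost(product_a["tasks"] + product_b["tasks"], combined_cost)
collab = task_per_cost(combined_tasks, combined_cost)
improvement = (collab - baseline) / baseline   # fractional gain in tasks/cost
```

With these invented numbers the collaborative design raises the task-per-cost ratio by 40%, illustrating why such products are attractive where purchasing power is limited.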
500

On the mechanism of H2O2 decomposition on UO2-surfaces / Mekanismen för sönderdelning av H2O2 på UO2-ytor

Pakarinen, Darius January 2018 (has links)
A deep geological repository has been investigated as a solution for long-term storage of spent nuclear fuel in Sweden for more than 40 years. The Swedish Nuclear Fuel and Waste Management Company (SKB) is commissioning the deep repository and must ensure that nuclear waste remains isolated from the environment for thousands of years. During this time the containment must withstand physical stress and corrosion, and it is important for a safety analysis to determine the different reactions that could occur. If the physical barriers break down, radiolysis of water will occur. Hydrogen peroxide formed during radiolysis can oxidize the exposed surface of the fuel, which increases the dissolution of radiotoxic products. Hydrogen peroxide can also decompose catalytically on the surfaces of the fuel. This project set out to determine the selectivity of the catalytic decomposition of hydrogen peroxide. This was done analytically, using coumarin as a scavenger to detect the hydroxyl radicals formed when hydrogen peroxide decomposes. The reaction produces fluorescent 7-hydroxycoumarin, which can be measured with high precision using spectrofluorometry. The results gave a ratio of approximately 0.16% between •OH production and hydrogen peroxide consumption. Similar experiments were done with ZrO2 for comparison, but the results were largely inconclusive. The effect of bicarbonate (a groundwater constituent) was also investigated. Adding bicarbonate increased the reproducibility of the experiments and increased the dissolution of uranium. Both the uranium and the bicarbonate increased screening effects, which reduced the fluorescence signal output by the 7-hydroxycoumarin.
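The selectivity calculation implied above — converting a measured 7-hydroxycoumarin yield into a ratio of •OH production to H2O2 consumption — can be sketched as simple arithmetic. The trapping yield and the measured amounts below are hypothetical stand-ins (coumarin converts only a fraction of scavenged •OH into the fluorescent product, and the real value must be calibrated), not the thesis's actual data.

```python
# Hypothetical sketch of the .OH selectivity estimate: back out total
# radical production from the fluorescent 7-hydroxycoumarin yield, then
# divide by the H2O2 consumed. All numbers are invented placeholders.

TRAPPING_YIELD = 0.05   # assumed fraction of .OH detected as 7-OH-coumarin

def oh_selectivity(hc_produced_umol, h2o2_consumed_umol,
                   trapping_yield=TRAPPING_YIELD):
    """Estimated ratio of .OH production to H2O2 consumption."""
    oh_total = hc_produced_umol / trapping_yield   # correct for partial trapping
    return oh_total / h2o2_consumed_umol

# Example: 0.008 umol 7-hydroxycoumarin detected per 100 umol H2O2 consumed
ratio = oh_selectivity(0.008, 100.0)
```

With these placeholder inputs the estimate comes out at 0.16%, matching the order of magnitude reported in the abstract; in practice the screening effects of dissolved uranium and bicarbonate noted above would have to be corrected for before applying such a formula.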
