151

Integration of New Technologies into Existing Mature Process to Improve Efficiency and Reduce Energy Consumption

Ahmed, Sajjad 17 June 2009
Optimal operation of plants is becoming more important due to increasing competition and small, changing profit margins for many products. One major reason has been the realization by industry that potentially large savings can be achieved by improving processes. Growth rates and profitability are much lower now, and international competition has increased greatly. Industry is faced with a need to manufacture quality products while minimizing production costs and complying with a variety of safety and environmental regulations. As industry confronts the challenge of moving toward a cleaner and more sustainable path of production, new technologies are needed to meet industrial requirements. In this research, a new methodology is proposed for integrating new technologies into existing processes. The research shows that new technologies must be carefully selected and adapted to match the complex requirements of an existing process. The proposed methodology is based on four major steps. If improving the existing process is not sufficient to meet business needs, new technologies can be considered. Application of a new technology is always perceived as a potential threat; therefore, financial risk assessment and reliability risk analysis help mitigate the investment risk. An industrial case study from the literature was selected to implement and validate the new methodology. The case study is a planning problem: designing a fleet of generating stations owned and operated by the electric utility company Ontario Power Generation (OPG). The impact of new technology integration on the performance of a power grid consisting of a variety of power generation plants was evaluated. The reduction in carbon emissions is projected to be accomplished through a combination of fuel switching, fuel balancing and switching to new technologies, namely carbon capture and sequestration. Fuel balancing decreases carbon emissions by adjusting the operation of the fleet of existing electricity-generating stations; fuel switching involves moving from carbon-intensive fuels to less carbon-intensive ones, for instance from coal to natural gas; carbon capture and sequestration are applied to meet carbon emission reduction requirements. The existing power plants comprise fossil fuel stations, nuclear stations, hydroelectric stations, wind power stations, pulverized coal stations and a natural gas combined cycle, while the hypothesized plants with new technologies include solar stations, wind power stations, pulverized coal stations, a natural gas combined cycle and an integrated gasification combined cycle, with and without capture and sequestration. The proposed methodology embeds financial risk management in the framework of a two-stage stochastic programme for energy planning under uncertainty in demand and fuel prices. A deterministic mixed-integer linear programming formulation is extended to a two-stage stochastic programming model to account for random parameters with discrete, finite probability distributions. Thus the expected total cost of power generation is minimized while the carbon emission reduction objective is achieved. Furthermore, conditional value at risk (CVaR), one of the most widely preferred risk measures in financial risk management, is incorporated within the two-stage mixed-integer programming framework.
The mathematical formulation, called the mean-risk model, is applied to minimize this expected value. The problem is formulated as a mixed-integer linear programming model, implemented in GAMS (General Algebraic Modeling System) and solved with CPLEX, a commercial solver embedded in GAMS. The computational results demonstrate the effectiveness of the proposed methodology. The optimization model is applied to the existing Ontario Power Generation (OPG) fleet. Five planning scenarios are considered: base load demand and demand growth rates of 1.0%, 5.0%, 10% and 20%. A sensitivity analysis is performed to investigate the effect of parameter uncertainties, such as uncertainty in coal and natural gas prices. The optimization results demonstrate how to achieve the carbon emission mitigation goal with and without new technologies, and how minimizing costs affects the configuration of the OPG fleet in terms of generation mix, capacity mix and optimal configuration. The selected new technologies are assessed to determine the investment risks. Electricity costs with the new technologies are lower than with the existing ones: a 60% CO2 reduction can be achieved at 20% growth in base load demand with new technologies. The total cost of electricity increases as the CO2 reduction requirement or electricity demand increases; however, there is no significant change in CO2 reduction cost as CO2 reduction increases with new technologies. The total cost of electricity also increases when fuel prices increase, and it increases under financial risk management in order to lower risk, since more electricity is produced to keep the industry on the safe side.
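For orientation, the mean-risk objective sketched above usually takes the following form in the two-stage stochastic programming literature; this is the standard Rockafellar-Uryasev linearization of CVaR, not an equation quoted from the thesis, and the weight \lambda and confidence level \alpha are illustrative symbols:

    \min_{x,\,y_s,\,\eta}\; c^{\top}x \;+\; \sum_{s} p_s\, q_s^{\top} y_s \;+\; \lambda \Big( \eta + \frac{1}{1-\alpha} \sum_{s} p_s\, [\,L_s - \eta\,]_+ \Big)

where x collects the first-stage decisions (capacity and technology choices), y_s the second-stage operating decisions in scenario s, p_s the scenario probability, L_s the scenario cost, and \eta an auxiliary variable whose optimal value is the value at risk; the parenthesized term is the CVaR of the cost distribution at level \alpha.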
152

Java Code Transformation for Parallelization

Iftikhar, Muhammad Usman January 2011
This thesis describes techniques for defining independent tasks in Java programs for parallelization. Existing Java parallelization APIs such as JOMP, Parallel Java, Deterministic Parallel Java, JConqurr and JaMP are discussed. JaMP is an implementation of OpenMP for Java, providing a set of OpenMP directives and runtime library functions. However, JaMP has a source-to-bytecode compiler, which does not help in debugging parallel source code: there is no design-time syntax checking of JaMP directives, and mistakes surface only when the source code is compiled with the JaMP compiler. We therefore contribute to JaMP by adding a compiler option that emits parallel source code. We have also created an Eclipse plug-in that supports design-time syntax checking of JaMP directives and lets programmers obtain parallel source code with a single click instead of invoking the JaMP compiler through shell commands.
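As a reading aid, the sketch below illustrates the kind of loop-level task independence the thesis targets; it is our own minimal example, not code from the thesis. The comment directive mirrors the OpenMP-like style used by JaMP (the exact directive spelling should be checked against the JaMP documentation), and the executable fallback uses only the standard Java library:

    import java.util.stream.IntStream;

    public class ParallelSketch {
        // Stand-in for real per-element work; iterations are independent.
        static double kernel(double x) {
            return Math.sin(x) * Math.cos(x);
        }

        public static void main(String[] args) {
            int n = 1_000_000;
            double[] input = new double[n];
            double[] result = new double[n];
            for (int i = 0; i < n; i++) input[i] = i * 1e-6;

            // In JaMP-style code the loop would carry a comment directive
            // such as "//#omp parallel for" (illustrative spelling), which
            // the JaMP compiler rewrites into threaded code.
            // The same task decomposition with only the standard library:
            IntStream.range(0, n)
                     .parallel()
                     .forEach(i -> result[i] = kernel(input[i]));

            System.out.println("result[42] = " + result[42]);
        }
    }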
154

Truck transport emissions model

Couraud, Amelie 17 September 2007
In the past, transportation-related economic analysis considered agency costs only. However, transportation managers are moving toward more holistic economic analysis that includes road user and environmental costs and benefits. In particular, air pollution from transportation is causing increasing harm to health and the environment. Transport managers now consider the related emissions in transport economic analyses and have established strategies to help meet Kyoto Protocol targets, which specified a fifteen percent reduction in Canada's emissions relative to 1990 levels over 2008-2012.

The objectives of this research are to model heavy vehicle emissions using an emissions computer model able to assess various transport applications, and to help improve holistic economic transport modeling. Two case studies were evaluated with the model developed. First, the environmental benefits of deploying weigh-in-motion (WIM) systems at weigh stations to pre-sort heavy vehicles and reduce delays were assessed. The second case study evaluated alternative truck sizes and road upgrades for short heavy oilfield hauls in Western Canada.

The model developed herein employs a deterministic framework; a sensitivity analysis across the independent variables identified those most sensitive to primary field state conditions. The significant variables were idling time for the weigh-in-motion case study, and road stiffness and road grades for the short-haul oilfield case study.

According to this research, employing WIM at weigh stations would reduce annual Canadian transportation CO2 emissions by nearly 228 kilotonnes, or 1.04 percent of Canada's Kyoto Protocol target. In terms of direct fuel savings, WIM would save 90 to 190 million litres of fuel annually, or between $59 and $190 million in direct operating costs. For the short heavy oil haul case study, increasing allowable heavy vehicle sizes while upgrading roads could decrease annual emissions, fuel consumption and their associated costs by an average of 68 percent. This could reduce each rural Saskatchewan municipality's annual CO2 emissions by 13 to 26.7 kilotonnes, which translates to between 0.06 and 0.12 percent of Canada's Kyoto Protocol target, or between $544,000 and $1.1 million annually.

Based on these results, the model demonstrated its functionality and was successfully applied to two typical transportation field state applications. The emissions savings it generated appear realistic in terms of potential Kyoto targets, user cost reductions and fuel savings.
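As a rough plausibility check of the figures above (our own back-of-the-envelope arithmetic, not a calculation from the thesis), combusting one litre of diesel releases roughly 2.7 kg of CO2, so the lower end of the reported fuel saving corresponds to

    90 \times 10^{6}\,\mathrm{L/yr} \times 2.7\,\mathrm{kg\,CO_2/L} \approx 240\,\mathrm{kt\,CO_2/yr},

which is consistent in magnitude with the reported saving of nearly 228 kilotonnes.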
155

Nondeterministic Linear Static Finite Element Analysis: An Interval Approach

Zhang, Hao 26 August 2005
This thesis presents a nontraditional treatment for uncertainties in the material, geometry, and load parameters in linear static finite element analysis (FEA) for mechanics problems. Uncertainties are introduced as bounded possible values (intervals). FEA with interval parameters (interval FEA, IFEA) calculates the bounds on the system response based on the ranges of the system parameters. The obtained results should be accurate and efficiently computed. Toward this end, a rigorous interval FEA is developed and implemented. In this study, interval arithmetic is used in the formulation to guarantee an enclosure for the response range. The main difficulty associated with interval computation is the dependence problem, which results in severe overestimation of the system response ranges. Particular attention in the development of the present method is given to control the dependence problem for sharp results. The developed method is based on an Element-By-Element (EBE) technique. By using the EBE technique, the interval parameters can be handled more efficiently to control the dependence problem. The penalty method and Lagrange multiplier method are used to impose the necessary constraints for compatibility and equilibrium. The resulting structure equations are a system of parametric linear interval equations. The standard fixed point iteration is modified, enhanced, and used to solve the interval equations accurately and efficiently. The newly developed dependence control algorithm ensures the convergence of the fixed point iteration even for problems with relatively large uncertainties. Further, special algorithms have been developed to calculate sharp results for stress and element nodal force. The present method is generally applicable to linear static interval FEA, regardless of element type. Numerical examples are presented to demonstrate the capabilities of the developed method. It is illustrated that the present method yields rigorous and accurate results which are guaranteed to enclose the true response ranges in all the problems considered, including those with a large number of interval variables (e.g., more than 250). The scalability of the present method is also illustrated. In addition to its accuracy, rigorousness and scalability, the efficiency of the present method is also significantly superior to conventional methods such as the combinatorial, the sensitivity analysis, and the Monte Carlo sampling method.
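The dependence problem mentioned above can be demonstrated with a few lines of naive interval arithmetic; the tiny interval type below is our own illustrative sketch, not code from the thesis:

    public class IntervalDemo {
        // Minimal closed interval [lo, hi] with the operations we need.
        record Interval(double lo, double hi) {
            Interval sub(Interval o) { return new Interval(lo - o.hi, hi - o.lo); }
            Interval scale(double k) {
                return k >= 0 ? new Interval(k * lo, k * hi)
                              : new Interval(k * hi, k * lo);
            }
            @Override public String toString() { return "[" + lo + ", " + hi + "]"; }
        }

        public static void main(String[] args) {
            Interval x = new Interval(1.0, 2.0);
            // Mathematically x - x = 0, but naive interval evaluation treats
            // the two occurrences of x as independent and overestimates:
            System.out.println(x.sub(x));           // [-1.0, 1.0] rather than [0, 0]
            // Rewriting so each uncertain parameter occurs only once -- the goal
            // of the element-by-element formulation above -- removes the blow-up:
            System.out.println(x.scale(1.0 - 1.0)); // [0.0, 0.0]
        }
    }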
156

Stochastic Switching in Evolution Equations

Lawley, Sean David January 2014
We consider stochastic hybrid systems that stem from evolution equations with right-hand sides that stochastically switch between a given set of right-hand sides. To begin our study, we consider a linear ordinary differential equation whose right-hand side stochastically switches between a collection of different matrices. Despite its apparent simplicity, we prove that this system can exhibit surprising behavior.

Next, we construct mathematical machinery for analyzing general stochastic hybrid systems. This machinery combines techniques from various fields of mathematics to prove convergence to a steady state distribution and to analyze its structure.

Finally, we apply the tools from our general framework to partial differential equations with randomly switching boundary conditions. There, we see that these tools yield explicit formulae for statistics of the process and make seemingly intractable problems amenable to analysis.
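A minimal numerical illustration of such a system (our own sketch under assumed parameter values, not the author's code): a planar linear ODE dx/dt = A_m x whose matrix jumps between two individually stable matrices at exponentially distributed switching times. Even though each mode decays on its own, the switched trajectory can grow — the kind of surprising behavior noted above:

    import java.util.Random;

    public class SwitchingOde {
        public static void main(String[] args) {
            // Two right-hand sides, each with both eigenvalues equal to -1.
            double[][][] A = {
                {{-1.0, 3.0}, { 0.0, -1.0}},   // mode 0
                {{-1.0, 0.0}, { 3.0, -1.0}}    // mode 1
            };
            double rate = 1.0;                 // switching rate (assumed)
            double dt = 1e-3, T = 50.0;
            double[] x = {1.0, 1.0};
            int mode = 0;
            Random rng = new Random(42);
            double nextSwitch = -Math.log(rng.nextDouble()) / rate;

            for (double t = 0; t < T; t += dt) {
                if (t >= nextSwitch) {         // Markovian switch of the RHS
                    mode = 1 - mode;
                    nextSwitch = t - Math.log(rng.nextDouble()) / rate;
                }
                double[][] a = A[mode];
                double dx0 = a[0][0] * x[0] + a[0][1] * x[1];
                double dx1 = a[1][0] * x[0] + a[1][1] * x[1];
                x[0] += dt * dx0;              // forward Euler step
                x[1] += dt * dx1;
            }
            System.out.println("x(T) = [" + x[0] + ", " + x[1] + "]");
        }
    }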
157

The Inventory Routing Problem With Deterministic Order-up-to Level Inventory Policies

Ozlem, Pinar 01 September 2005
This study is concerned with the inventory routing problem with deterministic, dynamic demand and an order-up-to level inventory policy. The problem arises mainly in the supply chain management context. It incorporates simultaneous decision making on inventory management and vehicle routing, with the purpose of gaining advantage from coordinated decisions. An integrated mathematical model representing the features of the problem is presented. Due to the magnitude of the model, Lagrangean relaxation solution procedures that identify upper and lower bounds for the problem are developed. Satisfactory computational results are obtained with the suggested solution procedures on test instances taken from the literature.
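For orientation, the bounding logic behind such Lagrangean procedures is standard textbook material, not a formulation quoted from the thesis: relaxing the complicating constraints Ax = b of a minimization problem with multipliers \lambda gives

    Z_{LR}(\lambda) \;=\; \min_{x \in X} \big\{\, c^{\top}x + \lambda^{\top}(b - Ax) \,\big\} \;\le\; Z^{*} \;\le\; Z_{\mathrm{feas}},

so every multiplier vector yields a lower bound, every feasible inventory-routing plan yields an upper bound, and the multipliers are typically improved by subgradient steps until the gap between the two bounds is acceptably small.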
158

Probabilistic Transmission Expansion Planning in a Competitive Electricity Market

Miao Lu Unknown Date
Changes in the electric power industry have brought great challenges and uncertainties to transmission planning. More effective planning of transmission grids, supported by advanced planning technologies, is badly needed. The aim of this research is to develop an advanced probabilistic transmission expansion planning (TEP) methodology for a continually changing market environment. The methodology should strengthen and increase the robustness of the existing transmission network; using it, planners can reduce the risk of major outages and identify weak buses in the system. The significance of this research lies in its comprehensiveness and practical applicability: its results improve planning efficiency and reliability while accounting for financial risks in an electricity market. To achieve this target, the research focused on two main issues: (1) probability-based technical assessment and (2) financial investment evaluation. In the first stage, probabilistic congestion management, probabilistic reliability evaluation and probabilistic load flow for TEP under uncertainties were investigated and improved. The developed methodologies and indices, which jointly represent the impact of both critical states and their probabilities, are linked with financial terms. In the financial investment evaluation, Monte Carlo market simulation is performed to support the economic analysis. The overall planning process is treated as a constrained multi-objective optimisation task. Comprehensive investigations were conducted on several test systems and verified on real power systems using available reliability data and economic information from the Australian National Electricity Market (NEM). Overall, this research developed probabilistic transmission planning methodologies that reflect modern market structures more accurately and enable greater utilization of existing generation and transmission resources to increase potential operating efficiency.
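As one concrete example of the probabilistic indices such a methodology builds on (standard definitions from reliability evaluation, not quoted from the thesis): with system states s of probability p_s, available capacity C_s and demand D_s over a period of length T,

    \mathrm{LOLE} = T \sum_{s} p_s \,\mathbf{1}[C_s < D_s], \qquad \mathrm{EENS} = T \sum_{s} p_s \,(D_s - C_s)_+,

the loss-of-load expectation and the expected energy not served; monetizing such indices is what links the probabilistic assessment to the financial evaluation stage.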
159

Development of a microfluidic device based on Deterministic Lateral Displacement (DLD) for biological sample preparation, towards the extraction of extracellular vesicles

Pariset, Eloïse 01 October 2018
Over the past decade, extracellular vesicles (EVs) have emerged as new biomarkers with strong potential for liquid biopsy applications. Since EVs carry the signature of their parent cells, transporting cellular genetic and protein material, they can be exploited as early diagnostic tools. However, owing to their small size and high heterogeneity, EVs are challenging to extract from complex biofluids, and sample preparation protocols need to be standardized: new technologies are required for fast, efficient and cost-effective isolation of undamaged EV subpopulations from limited sample volumes. Deterministic Lateral Displacement (DLD) appears to be a promising microfluidic technology for this preparation by means of passive and label-free separation.
DLD performs size-based separation of particles around a critical diameter that can be fine-tuned through the geometric parameters of an array of micropillars. Among the numerous biotechnological applications of DLD, none has yet achieved the complete extraction of EVs directly from unprocessed biofluids, without intermediate purification steps such as centrifugation. This is the underlying motivation of this thesis, which presents technological enhancements that make DLD separation more predictable, efficient and easy to integrate. Based on both numerical and experimental developments, predictive models are proposed to anticipate particle behavior and to guide the design of efficient DLD arrays. In addition to the optimization of single DLD devices, the thesis addresses system integration: an approach for fluidic packaging and serial connection of DLD modules is proposed to enable sequential sorting of particles from a complex biofluid while ensuring no loss of function of individual devices operated alone or in series. Two biological applications illustrate the versatility of DLD-based sample preparation: the isolation of E. coli bacteria from human blood samples for sepsis diagnostics, and the extraction of EVs from cell culture media with liquid biopsy applications in prospect. Since sample preparation cannot be dissociated from detection and characterization, EV isolation should ultimately be coupled with analysis in an all-in-one, portable and autonomous microfluidic device, which could open new perspectives toward the clinical application of current EV research.
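For context, DLD arrays are commonly sized with Davis's empirical fit for the critical diameter, a widely used result from the DLD literature rather than a formula stated in this abstract:

    D_c \approx 1.4\, G\, \varepsilon^{0.48},

where G is the gap between pillars and \varepsilon is the row-shift fraction (the inverse of the array periodicity). Particles larger than D_c are laterally displaced along the pillar axis while smaller particles zigzag with the flow, which is what allows the separation size to be tuned purely through geometry.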
160

Studies on the convergence of a Monte Carlo criticality calculation: coupling with a deterministic code and automated transient detection

Jinaphanh, Alexis 03 December 2012
Monte Carlo criticality calculations estimate the effective multiplication factor k_eff as well as local quantities such as fluxes and reaction rates. Some configurations presenting weak neutronic coupling (full-core models, high burn-up profiles, ...) may lead to biased estimates of k_eff or of local fluxes. The aim of this thesis is to make the iterative Monte Carlo algorithm more robust and to improve the detection of convergence. The proposed improvement relies on using, during the Monte Carlo calculation, an adjoint flux obtained from a preliminary deterministic calculation. This adjoint flux is then used to position the first generation, to modify the selection of birth sites, and to modify the random walk through splitting and Russian roulette strategies. An automated transient detection method has also been developed. It relies on modeling the output series as an order-1 autoregressive process and on a statistical test whose decision variable is the mean of a Student bridge. The method has been applied to k_eff and to the Shannon entropy, and it is general enough to be used on any series produced by an iterative Monte Carlo calculation. The methods developed in this thesis were tested on several simplified cases presenting neutronic convergence difficulties.
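A minimal sketch of the transient-detection idea (our own illustration: the thesis's decision variable is a Student-bridge statistic, which is replaced here by a crude AR(1)-corrected comparison of half-series means):

    import java.util.Arrays;
    import java.util.Random;

    public class TransientDetect {
        static double mean(double[] y) { return Arrays.stream(y).average().orElse(0); }

        // Lag-1 autocorrelation, used as the AR(1) coefficient estimate.
        static double phiHat(double[] y) {
            double m = mean(y), num = 0, den = 0;
            for (int i = 0; i < y.length; i++) {
                double d = y[i] - m;
                den += d * d;
                if (i > 0) num += d * (y[i - 1] - m);
            }
            return num / den;
        }

        // Crude stationarity check: the two half-means must agree within a
        // tolerance widened by the AR(1) effective-sample-size factor.
        static boolean looksStationary(double[] y) {
            int h = y.length / 2;
            double ma = mean(Arrays.copyOfRange(y, 0, h));
            double mb = mean(Arrays.copyOfRange(y, h, y.length));
            double m = mean(y), var = 0;
            for (double v : y) var += (v - m) * (v - m);
            var /= (y.length - 1);
            double phi = phiHat(y);
            double neff = Math.max(4, y.length * (1 - phi) / (1 + phi));
            double se = Math.sqrt(2 * var / (neff / 2));
            return Math.abs(ma - mb) < 2 * se;   // ~95% band under normality
        }

        // Earliest cycle from which the remaining series passes the check.
        static int transientLength(double[] s) {
            for (int start = 0; start + 40 < s.length; start += 5)
                if (looksStationary(Arrays.copyOfRange(s, start, s.length)))
                    return start;
            return s.length;                     // no stationary tail found
        }

        public static void main(String[] args) {
            // Synthetic k_eff-like series: decaying transient + AR(1) noise.
            Random rng = new Random(1);
            double[] y = new double[400];
            double noise = 0;
            for (int i = 0; i < y.length; i++) {
                noise = 0.8 * noise + 0.001 * rng.nextGaussian();
                y[i] = 1.0 + 0.05 * Math.exp(-i / 40.0) + noise;
            }
            System.out.println("estimated transient length: " + transientLength(y));
        }
    }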
