161 |
ASSESSMENT OF AGREEMENT AND SELECTION OF THE BEST INSTRUMENT IN METHOD COMPARISON STUDIES. Choudhary, Pankaj K. 11 September 2002 (has links)
No description available.
|
162 |
[pt] MODELAGEM DE GOTAS DISPERSAS EM ESCOAMENTO ANULAR VERTICAL / [en] MODELLING OF DISPERSED DROPLETS IN VERTICAL ANNULAR TWO-PHASE FLOW. JOAO GABRIEL CARVALHO DE SIQUEIRA. 30 April 2020 (has links)
[en] Annular flow is characterized by a high-velocity gas core with a thin liquid film around it, wetting the pipe wall. The presence of liquid droplets in the gas core has a relevant impact on annular flow characteristics, such as pressure drop and liquid film wave properties. Droplets are usually created by shear at disturbance wave crests, along the gas-liquid interface. In the present work, vertical annular flow with droplet entrainment is studied using the one-dimensional Two-Fluid model. A droplet mass transfer model is developed and coupled to the Two-Fluid model. The resulting model captures the automatic evolution of the gas-liquid interface, the formation of liquid film waves, and the waves' influence on droplet entrainment and deposition. A performance analysis is carried out for three droplet entrainment models available in the literature, as well as one deposition model. Taking into account that droplets are created by shearing at disturbance wave crests, modifications of the models are proposed to better capture the influence of liquid film waves on the droplet entrainment and deposition mechanisms. Flow parameters such as pressure drop, film thickness, and wave features are evaluated, showing good agreement with experimental data found in the literature.
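The entrainment/deposition coupling described above can be illustrated with a minimal sketch: in steady state, the entrained droplet concentration settles where the entrainment rate from the wave crests balances the deposition rate back onto the film. The rate values below are hypothetical placeholders, not the literature correlations assessed in the thesis.

```python
# Minimal sketch of the droplet entrainment/deposition balance in annular flow.
# R_E (entrainment rate) and K_D (deposition coefficient) are assumed
# placeholder values, not the correlations evaluated in the thesis.

def droplet_concentration(r_e, k_d, c0=0.0, dt=1e-3, t_end=5.0):
    """Integrate dC/dt = R_e - k_d * C with explicit Euler."""
    c = c0
    for _ in range(int(t_end / dt)):
        c += dt * (r_e - k_d * c)
    return c

R_E = 0.2   # droplet entrainment rate, kg/(m^3 s)  (assumed)
K_D = 4.0   # deposition rate coefficient, 1/s      (assumed)

c_final = droplet_concentration(R_E, K_D)
c_equilibrium = R_E / K_D  # analytic steady state: entrainment = deposition
print(c_final, c_equilibrium)
```

The integration converges to the analytic equilibrium R_e / k_d, which is the qualitative behaviour a coupled mass transfer model must reproduce once the film waves stop evolving.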
|
163 |
GAN-based Automatic Segmentation of Thoracic Aorta from Non-contrast-Enhanced CT Images / GAN-baserad automatisk segmentering av thoraxaorta från icke-kontrastförstärkta CT-bilder. Xu, Libo. January 2021 (has links)
Deep learning-based automatic segmentation methods have developed rapidly in recent years, delivering promising performance on medical image segmentation tasks and providing clinical medicine with accurate and fast computer-aided diagnosis. Generative adversarial networks and their extended frameworks have achieved encouraging results on image-to-image translation problems. In this report, the proposed hybrid network combines a cycle-consistent adversarial network, which translates contrast-enhanced images from computed tomography angiography into conventional low-contrast CT scans, with a segmentation network, and trains them simultaneously in an end-to-end manner. The trained segmentation network was then tested on non-contrast-enhanced CT images. The synthesis and segmentation processes were also implemented in a two-stage manner. The two-stage process achieved a higher Dice similarity coefficient than the baseline U-Net on test data, but the proposed hybrid network did not outperform the baseline, due to the difference in field of view between the two training data sets.
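The Dice similarity coefficient used to compare the two-stage pipeline against the baseline U-Net reduces to a simple overlap ratio of binary masks. A minimal NumPy sketch (the masks here are toy data, not CT segmentations):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks standing in for aorta segmentations.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
score = dice_coefficient(pred, target)
print(round(score, 4))  # 2*3 / (4+3) ≈ 0.8571
```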
|
164 |
Building Resilience through Supply Chain Agility: Cross-sectional and Longitudinal Studies. Wen, Zhezhu. 15 September 2022 (has links)
No description available.
|
165 |
Methodology for Designing Bespoke Air Handling Units. Malysheva, Alexandra. January 2023 (has links)
This master's thesis explores the role of bespoke air handling units in enhancing energy efficiency in existing buildings. The context for the study is set against the backdrop of global initiatives, including the United Nations' Sustainable Development Goals, specifically Goal 7, which emphasizes the need to improve energy efficiency to combat climate change. The significance of enhancing energy efficiency is well established, evident both at the EU level and in national policies and regulations. Buildings represent a significant portion of the energy utilization puzzle, with substantial potential for enhancing energy efficiency, although it is often underutilized. One of the contributing factors to inefficiency is outdated ventilation systems, which lead to high thermal losses. This challenge can be addressed by retrofitting these systems with modern, efficient air handling units, thus contributing to energy conservation and cost savings. This study focuses on the adoption of bespoke air handling units adjusted to the site and capable of accommodating constraints such as space limitations in machine rooms, existing ductwork layouts, and the location of shafts. The primary goal is to empower engineers to move beyond conventional approaches, enabling them to optimize technology choices based on local conditions, specific system performance requirements, and the economic viability of each project. The aim of this study is twofold: first, to develop a methodology for designing bespoke air handling units; and second, to demonstrate the practical application of this methodology in the context of two distinct renovation projects. In line with the aim of the thesis, a design methodology was developed for site-tailored units equipped with a two-stage flat cross-flow heat exchanger and an indirect evaporative cooling system.
The methodology covers data analysis, 3D modeling, and the conduct of performance calculations. The established methodology was applied in two reconstruction projects in central Stockholm, where bespoke air handling units were designed in compliance with the provided technical specifications. In both scenarios, a viable option emerged for accommodating a tailored unit within the technical room situated on the first floor. For both units, the energy performance metrics signify a notable achievement in heat recovery efficiency, coupled with relatively modest requirements for heating and cooling power capacity from the combined heating and cooling air coil. However, the calculated maximum specific fan power for a single unit with heat recovery exceeded the value stipulated in the technical specifications, which was accepted by the client. The results of the study included air handling unit product drawings, ventilation blueprints of the technical room with the integrated air handling unit, component specifications, unit flowcharts, performance calculations, and control operating pictures. The results of this work indicate that improving the building's energy efficiency is feasible through the installation of bespoke air handling units in the studied reconstruction projects.
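Two of the performance metrics discussed above, specific fan power and heat recovery temperature efficiency, reduce to simple ratios. A sketch with assumed example figures (not the designed units' data from the Stockholm projects):

```python
def specific_fan_power(total_fan_power_kw, airflow_m3_s):
    """SFP in kW/(m^3/s): total fan power divided by design airflow."""
    return total_fan_power_kw / airflow_m3_s

def temperature_efficiency(t_supply, t_outdoor, t_extract):
    """Supply-side heat recovery temperature efficiency of the exchanger."""
    return (t_supply - t_outdoor) / (t_extract - t_outdoor)

# Assumed example figures, not values from the thesis projects.
sfp = specific_fan_power(total_fan_power_kw=4.5, airflow_m3_s=3.0)
eta = temperature_efficiency(t_supply=18.0, t_outdoor=-5.0, t_extract=22.0)
print(sfp, round(eta, 3))  # 1.5 kW/(m^3/s), ~0.852
```

In practice these ratios are what a technical specification stipulates limits on, which is why an exceeded maximum SFP had to be accepted explicitly by the client.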
|
166 |
Two-Stage Stochastic Mixed Integer Nonlinear Programming: Theory, Algorithms, and Applications. Zhang, Yingqiu. 30 September 2021 (has links)
With the rapidly growing need for long-term decision making in the presence of stochastic future events, it is important to devise novel mathematical optimization tools and to develop computationally efficient solution approaches for them. Two-stage stochastic programming is a powerful modeling tool that allows probabilistic data parameters in mixed integer programming, a well-known tool for optimization modeling with deterministic input data. However, akin to mixed integer programs, these stochastic models are theoretically intractable and computationally challenging to solve because of the presence of integer variables. This dissertation focuses on the theory, algorithms, and applications of two-stage stochastic mixed integer (non)linear programs and follows a three-pronged plan. In the first direction, we study two-stage stochastic p-order conic mixed integer programs (TSS-CMIPs) with p-order conic terms in the second-stage objectives. We develop so-called scenario-based (non)linear cuts which are added to the deterministic equivalent of TSS-CMIPs (a large-scale deterministic conic mixed integer program). We provide conditions under which these cuts are sufficient to relax the integrality restrictions on the second-stage integer variables without impacting the integrality of the optimal solution of the TSS-CMIP. We also introduce a multi-module capacitated stochastic facility location problem and TSS-CMIPs with structured CMIPs in the second stage to demonstrate the significance of the foregoing results for solving these problems. In the second direction, we propose risk-neutral and risk-averse two-stage stochastic mixed integer linear programs for load shed recovery with uncertain renewable generation and demand. The models are implemented using a scenario-based approach where the objective is to maximize load shed recovery in the bulk transmission network by switching transmission lines and performing other corrective actions (e.g.
generator re-dispatch) after the topology is modified. Experiments highlight how the proposed approach can serve as an offline contingency analysis tool, and how this method aids self-healing by recovering more load shedding. In the third direction, we develop a dual decomposition approach for solving two-stage stochastic quadratically constrained quadratic mixed integer programs. We also create a new module for an open-source package DSP (Decomposition for Structured Programming) to solve this problem. We evaluate the effectiveness of this module and our approach by solving a stochastic quadratic facility location problem. / Doctor of Philosophy / With the rapidly growing need for long-term decision making in the presence of stochastic future events, it is important to devise novel mathematical optimization tools and develop computationally efficient solution approaches for solving them. Two-stage stochastic programming is one of the powerful modeling tools that allows two-stages of decision making where the first-stage strategic decisions (such as deciding the locations of facilities or topology of a power transmission network) are taken before the realization of uncertainty, and the second-stage operational decisions (such as transportation decisions between customers and facilities or power flow in the transmission network) are taken in response to the first-stage decision and a realization of the uncertain (demand) data. This modeling tool is gaining wide acceptance because of its applications in healthcare, power systems, wildfire planning, logistics, and chemical industries, among others. Though intriguing, two-stage stochastic programs are computationally challenging. Therefore, it is crucial to develop theoretical results and computationally efficient algorithms, so that these models for real-world applied problems can be solved in a realistic time frame. 
In this dissertation, we consider two-stage stochastic mixed integer (non)linear programs, provide theoretical and algorithmic results for them, and introduce their applications in logistics and power systems.
First, we consider a two-stage stochastic mixed integer program with p-order conic terms in the objective that has applications in facility location problems, power systems, portfolio optimization, and more. We provide a so-called second-stage convexification technique which greatly reduces the computational time to solve a facility location problem, in comparison to solving it directly with a state-of-the-art solver, CPLEX, with its default settings. Second, we introduce risk-averse and risk-neutral two-stage stochastic models to deal with uncertainties in power systems, as well as the risk preferences of decision makers. We leverage the inherent flexibility of the bulk transmission network through the systematic switching of transmission lines in and out of service while accounting for uncertainty in generation and demand during an emergency. We provide abundant computational experiments to evaluate our proposed models and justify how the proposed approach can serve as an offline contingency analysis tool. Third, we develop a new solution approach for two-stage stochastic mixed integer programs with quadratic terms in the objective function and constraints, and implement it as a new module for the open-source package DSP. We perform computational experiments on a stochastic quadratic facility location problem to evaluate the performance of this module.
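As a small illustration of the two-stage structure described above (a toy sketch, not the dissertation's models or cut-based algorithms), the deterministic equivalent of a tiny capacitated facility problem can be solved by brute force: enumerate first-stage open/close decisions, then compute the expected second-stage (recourse) cost over demand scenarios.

```python
from itertools import product

# Toy data (assumed): facility open costs, capacities, unit serving costs.
OPEN_COST = [10.0, 14.0, 8.0]
CAPACITY  = [6.0, 9.0, 4.0]
UNIT_COST = [2.0, 1.0, 3.0]
PENALTY   = 20.0  # cost per unit of unmet demand

# Equiprobable demand scenarios (second-stage uncertainty).
SCENARIOS = [5.0, 10.0, 14.0]

def recourse_cost(open_set, demand):
    """Optimal second stage: fill demand from cheapest open facilities."""
    cost, remaining = 0.0, demand
    for j in sorted(open_set, key=lambda j: UNIT_COST[j]):
        served = min(remaining, CAPACITY[j])
        cost += served * UNIT_COST[j]
        remaining -= served
    return cost + remaining * PENALTY

# First stage: enumerate all binary open/close vectors (extensive form).
best = None
for opens in product([0, 1], repeat=len(OPEN_COST)):
    open_set = [j for j, o in enumerate(opens) if o]
    first = sum(OPEN_COST[j] for j in open_set)
    expected = sum(recourse_cost(open_set, d) for d in SCENARIOS) / len(SCENARIOS)
    total = first + expected
    if best is None or total < best[0]:
        best = (total, opens)

print(best)  # opens facilities 0 and 1
```

Real instances make this enumeration hopeless, which is exactly why the scenario-based cuts and decomposition methods studied in the dissertation matter.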
|
167 |
On topological measures and network vulnerability patterns: a review and comparative analysis. Saei, Saviz. 13 December 2024 (has links) (PDF)
Despite much hope for climate change to slow down or even reverse, younger generations face a future overshadowed by extreme events. The indisputable reality is that unless the United Nations establishes comprehensive and sustained climate justice policies, children today will experience five times more extreme events than those that took place a century ago. On Monday, July 3rd of 2023, an unprecedented peak in global temperatures was documented, marking the highest global temperature ever recorded, as the U.S. National Centers for Environmental Prediction reported. These increasing temperatures indicate the ongoing and intensifying phenomenon of climate change, which amplifies the frequency and severity of certain natural disasters. Given that vulnerability reflects the extent of damage following a disruptive event, reducing vulnerability is a critical initial step toward enhancing resilience—the capacity to withstand and recover from such disruptions. Reflecting on the words of H. James Harrington, the seminal figure in organizational performance improvement, “Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.” The findings of this study highlight a macroscopic approach to understanding and predicting network vulnerability in the face of uncertain disruptive events by focusing on the statistical analysis of global measures (GMs) related to network topological characteristics. The distribution of GM values across 15 pure network topologies reveals specific patterns. This discovery offers a novel metric for assessing the performance of networks with unknown topologies by comparing their GM patterns to those of the studied topologies. 
Furthermore, by intertwining local vulnerability assessments with our scenario-based strategy, we aim to conduct a thorough examination of each node's significance in maintaining network integrity during disruptions. This analysis is intended to uncover the underlying structural intricacies of these networks, enabling a comparison with established topological standards to identify opportunities for optimization. Additionally, we expand the scope of our model by incorporating traffic flow considerations using the Bureau of Public Roads (BPR) function to optimize network resilience. Keywords: Global Measures, Vulnerability, Uncertainty in Vulnerability, Connectivity, Accessibility, Criticality, Network Topology, Local Measures, Bureau of Public Roads (BPR), Scenario-based Two-stage Stochastic Programming, Risk
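Global topological measures of the kind surveyed above can be computed directly from an adjacency list. A sketch computing edge density, average degree, and a connectivity check (via breadth-first search) for a small example graph; the ring graph is illustrative, not one of the 15 topologies studied:

```python
from collections import deque

def global_measures(n, edges):
    """Edge density, average degree, and connectivity for an undirected graph."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    m = len(edges)
    density = 2.0 * m / (n * (n - 1))   # fraction of possible edges present
    avg_degree = 2.0 * m / n
    # Connectivity check: BFS from node 0 must reach every node.
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return density, avg_degree, len(seen) == n

# A 5-node ring topology: connected, density 0.5, average degree 2.
measures = global_measures(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(measures)  # (0.5, 2.0, True)
```

Comparing the distribution of such global measure values across known topologies is what allows an unknown network to be matched against reference patterns.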
|
168 |
Impacto de la educación de la madre sobre la desnutrición crónica infantil para los años 2002 al 2016 en Perú / Impact of maternal education and household wealth on stunted children for 2002-2016 in Peru. Rengifo Calmet, Jessica Alexandra. 19 November 2020 (has links)
This study examines the impact of maternal education on child stunting from 2002 to 2016 in Peru. The relationship is analyzed with two instrumental variables methods, Two-Stage Least Squares and the Generalized Method of Moments. The study also uses a Two-Stage Residual Inclusion model and a dynamic Probit model for panel data. The investigation draws on the Young Lives database. The main result is that the mother's education has a negative impact on child stunting in every econometric model for the years 2002 to 2016 in Peru.
Keywords: Two-Stage Least Squares; Generalized Method of Moments; Two-Stage Residual Inclusion; Panel Probit; Health; Wealth; Stunting; Educational Attainment; Young Lives; Peru / Tesis
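Two-Stage Least Squares, the first of the instrumental variables estimators named above, can be sketched in NumPy on synthetic data: the first stage regresses the endogenous regressor on the instrument, and the second stage regresses the outcome on the fitted values. The variable names and data-generating process are illustrative, not the Young Lives specification.

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """2SLS with one endogenous regressor x and instrument matrix z (incl. constant)."""
    # Stage 1: project x onto the instruments.
    gamma, *_ = np.linalg.lstsq(z, x, rcond=None)
    x_hat = z @ gamma
    # Stage 2: regress y on the fitted values (plus a constant).
    X2 = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta  # [intercept, slope]

rng = np.random.default_rng(0)
n = 20_000
u = rng.normal(size=n)                            # confounder: makes x endogenous
z = np.column_stack([np.ones(n), rng.normal(size=n)])
x = 0.8 * z[:, 1] + u + rng.normal(size=n)
y = 1.0 + 2.0 * x - 1.5 * u + rng.normal(size=n)  # true slope = 2.0

beta = two_stage_least_squares(y, x, z)
print(beta)  # slope estimate near 2.0; naive OLS would be biased downward here
```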
|
169 |
Transient engine model for calibration using two-stage regression approach. Khan, Muhammad Alam Z. January 2011 (has links)
Engine mapping is the process of empirically modelling engine behaviour as a function of adjustable engine parameters, predicting the output of the engine. The aim is to calibrate the electronic engine controller to meet decreasing emission requirements and increasing fuel economy demands. Modern engines have an increasing number of control parameters that have a dramatic impact on the time and effort required to obtain optimal engine calibrations. These are further complicated by the transient engine operating mode. A new model-based transient calibration method has been built on the application of hierarchical statistical modelling methods and the analysis of repeated experiments for the application of engine mapping. The methodology is based on a two-stage regression approach, which organises the engine data for the mapping process in sweeps. The introduction of time-dependent covariates into the hierarchy of the modelling led to the development of a new approach to the problem of transient engine calibration. This new approach to transient engine modelling is analysed using a small designed data set for a throttle-body inferred airflow phenomenon. The data collection for the model was performed on a transient engine test bed as part of this work, with sophisticated software and hardware installed on it. Models and their associated experimental design protocols have been identified that allow the models to accurately predict the desired response features over the whole region of operability. Further, during the course of the work, the utility of a multi-layer perceptron (MLP) neural network based model for the multi-covariate case has been demonstrated. The MLP neural network performs slightly better than the radial basis function (RBF) model. The basis of this comparison is the assessment of relevant model selection criteria, as well as internal and external validation fits.
Finally, the general ability of the model was demonstrated through the implementation of this methodology for use in the calibration process, for populating the electronic engine control module lookup tables.
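The two-stage regression idea described above, fitting a curve to each sweep first and then modelling how the sweep-level coefficients vary with operating conditions, can be sketched on synthetic data. The quadratic sweep shape and the linear dependence of its peak on engine speed are assumptions for illustration, not the thesis's engine models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: each "sweep" varies spark over a grid at a fixed engine
# speed; torque follows a quadratic in spark whose peak location depends
# linearly on speed (the assumed true stage-2 relationship).
speeds = np.array([1000.0, 2000.0, 3000.0, 4000.0])
spark = np.linspace(-10.0, 10.0, 21)
sweeps = []
for s in speeds:
    peak = 0.002 * s
    torque = 50.0 - 0.3 * (spark - peak) ** 2 + rng.normal(0, 0.1, spark.size)
    sweeps.append(torque)

# Stage 1: fit a quadratic to every sweep; keep its peak (vertex) location.
peaks = []
for torque in sweeps:
    a, b, c = np.polyfit(spark, torque, 2)
    peaks.append(-b / (2 * a))  # vertex of the fitted parabola
peaks = np.array(peaks)

# Stage 2: regress the sweep-level feature (peak spark) on engine speed.
slope, intercept = np.polyfit(speeds, peaks, 1)
print(slope, intercept)  # slope should recover roughly 0.002
```

Organising the data in sweeps like this is what lets the hierarchy separate within-sweep curve shape from the between-sweep dependence on operating conditions.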
|
170 |
Firm performance, sources and drivers of innovation and sectoral technological trajectories: an empirical study using recent French CIS / Performance économique, sources et leviers de l'innovation et filières technologiques : une étude économétrique à partir de données CIS françaises. Haned, Naciba. 10 June 2011 (has links)
This thesis is structured in three essays grounded in evolutionary theory and provides empirical evidence from the French Community Innovation Surveys (CIS). It aims to show that the sources of innovation and the appropriation of innovation rents vary with firms' activities and innovation strategies. In essay 1, we describe four waves of CIS covering the period 1994-2006 and study persistent innovation behaviour with a discrete choice model on a data set of 431 firms. We find that innovation persistence is more important for product innovators because they need novel products to remain competitive and therefore enrich their knowledge base continuously. By contrast, process innovators are less persistent because their innovation strategy is less market-oriented and aims at quality or production adjustments. The last two essays explore, with the two-stage least squares method, how firms benefit economically from their innovations on a sample of 7,742 firms over the period 2002-2005. We show that science-based firms rely more on R&D investments to develop their products and maintain their leads by acquiring complementary assets, i.e. they use mixed methods to appropriate the rents of innovation (the combined use of IPRs and strategic methods, for instance secrecy). By contrast, firms in other categories (for instance firms using cost-cutting strategies) draw more on external sources of knowledge coming either from suppliers or from advanced users. Additionally, these firms make more extensive use of trademarks and non-technological methods of appropriation (such as marketing devices), because they are less exposed to potential imitation and because they are price sensitive.
|