171

The use of direct up-flow filtration as a pretreatment for rapid filtration in drinking water production

Gusmão, Paulo Tadeu Ribeiro de, 19 April 2001
In a pilot plant, from June 1998 to October 1999, using natural surface raw water with low turbidity and color, two two-stage filtration systems were evaluated. As a pretreatment, the first system (S01) used direct up-flow sand filtration (filtration rates of 200 to 360 m3/m2.day) and the second system (S02) used direct up-flow roughing filtration (filtration rates of 80 to 190 m3/m2.day), both with final treatment by rapid down-flow filtration (filtration rates of 115 to 480 m3/m2.day). The coagulant doses (a commercial aluminum sulfate product) were established using a laboratory-scale sand filter, with indications of coagulation by the adsorption mechanism, with partial charge neutralization. Intermediate bottom discharges in the direct up-flow filtration units resulted in a significant reduction in head loss in the gravel medium. The turbidity, apparent color, and total iron and manganese concentrations of the final effluents of both systems met the Brazilian standards for drinking water production. In system S02 the growth of head loss in the granular medium was smaller in the direct up-flow filtration unit than in the rapid down-flow filter; the opposite occurred in system S01, in which some tests showed the rapid down-flow filter to be unnecessary. Filter runs lasted 27 to 88 hours in system S01 and 14 to 35 hours in system S02. In certain cases, flocculation predominated in the direct up-flow roughing filtration unit, impairing its effluent quality. System S02 proved more advantageous than system S01, except when, in S01, rapid down-flow filtration was unnecessary.
172

Essays on energy efficiency and fuel subsidy reforms

Tajudeen, Ibrahim, January 2018
This thesis uses innovative approaches to analyse energy policy interventions aimed at enhancing the environmental sustainability of energy use, as well as their consequential welfare implications. First, we examine the relationship between energy efficiency improvement and CO2 emissions at the macro level. We use Index Decomposition Analysis to derive energy efficiency by separating out the impact of shifts in economic activity on energy intensity. We then employ econometric models to relate energy efficiency and CO2 emissions, accounting for non-economic factors such as consumers' lifestyles and attitudes. Applications for 13 OPEC and 30 OECD countries show that, at both the country-group and individual-country level, the increase in energy intensity for OPEC is associated with both deteriorations in energy efficiency and shifts towards energy-intensive activities. The model results suggest that reductions in energy efficiency generally go in tandem with substantial increases in CO2 emissions. The decline in energy intensity for OECD can be attributed mainly to improvements in energy efficiency, which are found to compensate for the impact of income changes on CO2 emissions. The results confirm the empirical relevance of energy efficiency improvements for the mitigation of CO2 emissions. The method developed in this chapter further enables the separate assessment of non-economic behavioural factors, which according to the results exert a non-trivial influence on CO2 emissions. Secondly, having empirically confirmed the relationship between energy efficiency improvements and CO2 emissions at the macro level in Chapter 2, we investigate potential underlying drivers of energy efficiency improvements in Chapter 3, taking into account potential asymmetric effects of energy price changes. This is crucial for designing effective and efficient policy measures that can promote energy efficiency. In addition to the Index Decomposition Analysis used to estimate economy-wide energy efficiency in Chapter 2, we also use Stochastic Frontier Analysis and Data Envelopment Analysis as alternative methods. The driving factors are examined using static and dynamic panel model methods that account for both observed and unobserved country heterogeneity. The application for 32 OECD countries shows that none of the three methods leads to correspondence, in terms of country-level ranking, between energy efficiency estimates and energy intensity, corroborating the criticism that energy intensity is a poor proxy for energy efficiency. The panel-data regression results based on the three methods show similar impacts of the determinants on energy efficiency levels. Also, we find no significant evidence of asymmetric effects of the total energy price, but there is evidence of asymmetry using energy-specific prices. Thirdly, in Chapter 4 we offer an improved understanding of the impacts to be expected from abolishing fuel price subsidies on fuel consumption, and of the welfare and distributional impacts at the household level. We develop a two-step approach for this purpose. A key aspect of the first step is a two-stage budgeting model to estimate elasticities for various fuel types using micro-data. Relying on these estimates and on information on households' expenditure shares for different commodities, the second step estimates the welfare (direct and indirect) and distributional impacts. The application for Nigeria emphasises the relevance of this approach.
We find heterogeneous elasticities of fuel demand among household groups. The distributional impact of abolishing the kerosene subsidy shows a regressive welfare loss. Although we find a progressive loss for petrol, the loss gap between the low- and high-income groups is small relative to the loss gap from stopping the kerosene subsidy, so the low-income groups suffer a higher total welfare loss. Finally, from the highlighted results, we draw the following concluding remarks in Chapter 5. Energy efficiency appears a key option for mitigating CO2 emissions, but there is also a need for additional policies aiming at behavioural change. Using energy-specific prices and allowing for asymmetry when analysing changes in energy efficiency is more appropriate and informative for formulating reliable energy policies. The hypothesis that only the rich would be worse off from fuel subsidy removal is rejected, and the results further suggest that the timing of fuel subsidy removal is crucial, as a higher international oil price will lead to a higher deregulated fuel price and, consequently, a larger welfare loss.
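To make the decomposition step concrete, the following sketch applies a standard additive LMDI-I scheme, one common form of Index Decomposition Analysis (the abstract does not say which IDA variant the thesis uses, and the two sectors and all numbers are purely illustrative): the change in total energy use splits exactly into activity, structure, and intensity (efficiency) effects.

```python
import math

def logmean(a, b):
    # logarithmic mean, the LMDI weighting function
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

# Illustrative two-sector economy: sector -> (output Y_i, energy E_i)
year0 = {"industry": (100.0, 50.0), "services": (100.0, 20.0)}
yearT = {"industry": (110.0, 52.0), "services": (160.0, 26.0)}

Y0 = sum(v[0] for v in year0.values())
YT = sum(v[0] for v in yearT.values())
act = struct = intens = 0.0
for s in year0:
    Yi0, Ei0 = year0[s]
    YiT, EiT = yearT[s]
    w = logmean(EiT, Ei0)
    act += w * math.log(YT / Y0)                        # overall activity growth
    struct += w * math.log((YiT / YT) / (Yi0 / Y0))     # shift between sectors
    intens += w * math.log((EiT / YiT) / (Ei0 / Yi0))   # within-sector efficiency

dE = sum(v[1] for v in yearT.values()) - sum(v[1] for v in year0.values())
print(f"dE = {dE:.2f} = activity {act:.2f} + structure {struct:.2f} + intensity {intens:.2f}")
```

The identity is exact: the three effects sum to the observed change in energy use, and the intensity term is the "energy efficiency" component that the thesis relates to CO2 emissions.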
173

Short-term planning of electric power distribution networks considering generation and demand uncertainties

Melgar Dominguez, Ozy Daniel, January 2018
Advisor: José Roberto Sanches Mantovani / Abstract: Short-term planning is a decision-making strategy that aims to ensure proper performance of an electric distribution network and to provide high-quality service to consumers. This process considers traditional planning actions to effectively control the reactive power flow, power factor, and voltage profile of the network. In recent years, this type of distribution network planning has faced important challenges due to the integration of modern technologies and the operating philosophy of medium-voltage distribution networks. In this regard, the development of sophisticated algorithms and computational tools is necessary to cope with these complexities. From this perspective, a strategy to solve the short-term planning problem for distribution networks is presented in this work, in which the integration of distributed generation units and electric energy storage systems is considered simultaneously with traditional planning actions to improve network performance. Several investment alternatives, such as the siting and sizing of capacitor banks, energy storage systems, and photovoltaic- and wind-based generation units, conductor replacement in overloaded circuits, and the allocation of voltage regulators, are considered as decision variables in the optimization problem. Additionally, environmental aspects at the distribution level are duly addressed via a cap-and-trade mechanism. Inherently, this optimization problem is represented by a non-convex mixed-integer nonlinear programming problem. ... (Complete abstract: click electronic access below) / Doctorate
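As a toy illustration of the discrete investment-selection structure described in the abstract (this is not the thesis's MINLP model: the buses, costs, budget, and the crude loss proxy below are all hypothetical), one can brute-force capacitor-bank siting and sizing under a budget:

```python
from itertools import product

# Candidate capacitor sizes (kvar) at three hypothetical buses; 0 = no investment.
sizes = [0, 300, 600, 900]
cost_per_kvar = 12.0      # $/kvar, illustrative
budget = 15000.0          # $, illustrative
base_loss_cost = 80000.0  # cost of energy losses over the horizon, illustrative

def loss_cost(q1, q2, q3):
    # Toy proxy: reactive compensation lowers losses with diminishing returns;
    # the bus weights stand in for network sensitivities.
    q = q1 + 0.8 * q2 + 1.2 * q3
    return base_loss_cost / (1.0 + q / 1000.0)

best = min(
    (c for c in product(sizes, repeat=3)
     if cost_per_kvar * sum(c) <= budget),            # budget constraint
    key=lambda c: cost_per_kvar * sum(c) + loss_cost(*c),
)
print("best plan (kvar per bus):", best,
      "total cost:", round(cost_per_kvar * sum(best) + loss_cost(*best), 1))
```

A realistic formulation would replace the loss proxy with a power-flow model, add storage, generation, conductor, and regulator decisions, and hand the resulting mixed-integer nonlinear problem to a dedicated solver.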
174

Study of uncertainty quantification for the contamination problem in heterogeneous porous media

Thiago Jordem Pereira, 10 October 2012
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Tracer injection techniques have been widely used to investigate flows in porous media, especially in problems related to the numerical simulation of miscible flows in oil reservoirs and to contaminant transport in aquifers. Underground reservoirs are generally heterogeneous and may exhibit significant spatial variations in their properties on several length scales. These spatial variations are incorporated into the equations governing flow in porous media by means of random fields. Random fields provide a natural description of the heterogeneities of the underground formation in the typical case in which geological knowledge is far less detailed than would be necessary to predict flow through the porous medium deterministically. In this thesis a log-normal permeability field k(x) is adopted to reproduce the statistical distribution of the permeability values of a real medium, and the numerical generation of these random fields is performed by the Successive Sum of Independent Gaussian Fields method, defined on multiple length scales. The main goal of this work is to study uncertainty quantification in the inverse problem of tracer transport in a heterogeneous porous medium, using a Bayesian framework to update the permeability fields based on observed measurements of spatially sparse tracer concentrations at specific times.
A two-stage Markov chain Monte Carlo (MCMC) method is used to sample the posterior probability distribution, and the Markov chain is constructed from random reconstructions of the permeability fields. To solve the pressure-velocity problem governed by Darcy's law, a mixed finite element method is used, which is suitable for accurately computing fluxes in heterogeneous permeability fields, and a Lagrangian strategy, the Forward Integral Tracking (FIT) method, is used for the numerical simulation of the tracer transport problem. Numerical results are presented for a set of sampled realizations of the permeability fields.
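The two-stage MCMC idea can be sketched compactly: each proposal is first screened with a cheap coarse model, and the expensive fine-scale simulation is run only on proposals that pass. In this minimal stand-in, two toy Gaussian log-likelihoods replace the coarse and fine tracer simulations, the prior is taken as flat, and the random-walk proposal is symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_like_coarse(theta):
    # stand-in for a cheap coarse-grid flow/transport solve plus data misfit
    return -0.5 * np.sum((theta - 1.0) ** 2) / 0.5 ** 2

def log_like_fine(theta):
    # stand-in for the expensive fine-scale simulation plus data misfit
    return -0.5 * np.sum((theta - 1.2) ** 2) / 0.3 ** 2

def two_stage_mcmc(n_iter=5000, step=0.3, dim=2):
    theta = np.zeros(dim)
    lc, lf = log_like_coarse(theta), log_like_fine(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(dim)
        lc_prop = log_like_coarse(prop)
        # Stage 1: cheap screening test (symmetric proposal, flat prior).
        if np.log(rng.random()) < lc_prop - lc:
            lf_prop = log_like_fine(prop)
            # Stage 2: correction so the chain targets the fine-scale posterior.
            if np.log(rng.random()) < (lf_prop - lf) - (lc_prop - lc):
                theta, lc, lf = prop, lc_prop, lf_prop
        chain.append(theta.copy())
    return np.array(chain)

samples = two_stage_mcmc()
print("posterior mean:", samples[len(samples) // 2:].mean(axis=0))
```

The stage-2 test corrects for the screening so the chain still targets the fine-scale posterior, while clearly poor proposals are rejected at the cost of only a coarse solve.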
175

Innovative techniques in liver surgery: use and impact on the management of patients with colorectal liver metastases

Dupré, Aurélien, 07 December 2015
Colorectal cancer is a major public health problem. Almost half of patients with colorectal cancer will develop liver metastases. Surgery is the only potentially curative treatment, so everything possible must be done to give these patients access to it. The volume of liver remaining after hepatectomy is one of the main limiting factors in liver surgery. Innovative techniques have been developed to spare liver parenchyma: the combined use of focal destruction techniques and two-stage hepatectomy. Two-stage hepatectomy is oncologically effective but remains a technically challenging procedure with substantial morbidity, in part because of postoperative peri-hepatic adhesions.
These adhesions occur systematically after liver surgery but can be prevented by the use of anti-adhesion membranes at the end of the first hepatectomy. In this indication, the multicentre phase II study presented herein showed a decrease in the incidence and severity of adhesions with the use of Seprafilm®, which facilitated the dissection and thus the second hepatectomy. Focal destruction techniques can replace surgery in selected cases of resectable metastases. When combined with surgery, they also increase the number of patients who are candidates for curative liver-directed therapy in the case of unresectable metastases. These techniques nevertheless have several disadvantages that limit their use. High-intensity focused ultrasound (HIFU) is a recent, non-ionizing focal destruction technology whose theoretical advantages make it particularly well suited to the treatment of liver tumours. Current HIFU technology is based on extracorporeal treatment devices, whose main limitation is that the elementary ablations are small and must be juxtaposed to treat tumours of a few centimetres, resulting in treatment times of several tens of minutes. The development of an intraoperative HIFU probe with toroidal geometry has made it possible to obtain ablations of about 7 cm3 in 40 seconds in a porcine model. In the phase I-IIa study presented herein, we showed that these preclinical results could be reproduced in humans on healthy liver scheduled for resection. The positive results of these two prospective studies have allowed us to design a phase III trial on the prevention of peri-hepatic adhesions, to continue the evaluation of HIFU by targeting liver metastases, and to plan a phase II study of HIFU-assisted liver resection as an aid to haemostasis.
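A back-of-envelope computation shows why the elementary-ablation volume drives treatment time (the tumour diameter and margin below are hypothetical, and overlap between juxtaposed ablations is ignored, so the count is a lower bound):

```python
import math

ablation_cm3 = 7.0       # toroidal-probe ablation volume reported above
ablation_time_s = 40.0
tumour_diam_cm = 2.0     # hypothetical metastasis
margin_cm = 0.5          # hypothetical ablation margin

target_diam = tumour_diam_cm + 2 * margin_cm
target_vol = math.pi * target_diam ** 3 / 6           # volume of a sphere
n_ablations = math.ceil(target_vol / ablation_cm3)    # ignores overlap
total_min = n_ablations * ablation_time_s / 60
print(f"target ≈ {target_vol:.1f} cm3 -> ≥{n_ablations} ablations, ≈{total_min:.1f} min")
```

With elementary ablations well under 1 cm3, as for extracorporeal devices, the same target would require dozens of juxtaposed sonications, consistent with the tens-of-minutes treatment times mentioned above.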
177

Modelling and simulation of large-scale complex networks

Luo, Hongwei, January 2007
Real-world large-scale complex networks such as the Internet, social networks and biological networks have increasingly attracted the interest of researchers from many areas. Accurate modelling of the statistical regularities of these large-scale networks is critical to understanding their global evolving structures and local dynamical patterns. Traditionally, the Erdős–Rényi random graph model has supported the investigation of various homogeneous networks. During the past decade, a special computational methodology has emerged for studying complex networks, exemplified by two models: the Watts–Strogatz small-world model and the Barabási–Albert scale-free model. At the core of the complex network modelling process is the extraction of characteristics of real-world networks. I have developed computer simulation algorithms to study the properties of current theoretical models as well as to measure two real-world complex networks, leading to the isolation of three complex network modelling essentials. The main contribution of the thesis is the introduction and study of a new General Two-Stage growth model (GTS model), which aims to describe and analyse many real-world complex networks that share common features. The tools used to create the model and later perform many measurements on it consist of computer simulations, numerical analysis and mathematical derivations. In particular, two major cases of this GTS model have been studied. One is the U-P model, which employs a new functional form of the network growth rule: a linear combination of preferential attachment and uniform attachment. The degree distribution of the model is first studied by computer simulation, and the exact solution is also obtained analytically. Two other important properties of complex networks, the characteristic path length and the clustering coefficient, are also extensively investigated, with either analytically derived solutions or numerical results from computer simulations. Furthermore, I demonstrate that the hub-hub interaction behaves in effect as the link between a network's topology and its resilience. The other case is the Hybrid model, which incorporates two stages of growth and studies the transition behaviour between the Erdős–Rényi random graph model and the Barabási–Albert scale-free model. The Hybrid model is measured by extensive numerical simulations focusing on its degree distribution, characteristic path length and clustering coefficient. Although either of the two cases serves as a new approach to modelling real-world large-scale complex networks, perhaps more importantly, the general two-stage model provides a new theoretical framework for complex network modelling, which can be extended in many ways besides the two studied in this thesis.
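The U-P growth rule can be sketched with a short simulation. The mixing parameter p and the details below are one illustrative parameterization of "a linear combination of preferential attachment and uniform attachment", not necessarily the thesis's exact formulation:

```python
import random
from collections import Counter

def up_model(n=20000, m=2, p=0.5, seed=1):
    # Each new node makes m links; with probability p a link target is chosen
    # preferentially (proportional to degree), otherwise uniformly at random.
    random.seed(seed)
    endpoints = [0, 1]   # one entry per edge endpoint -> sampling is degree-biased
    nodes = [0, 1]       # start from a single edge between nodes 0 and 1
    for new in range(2, n):
        chosen = set()
        while len(chosen) < m:
            if random.random() < p:
                chosen.add(random.choice(endpoints))   # preferential attachment
            else:
                chosen.add(random.choice(nodes))       # uniform attachment
        for t in chosen:
            endpoints.extend([new, t])
        nodes.append(new)
    degree = Counter(endpoints)        # node -> degree
    return Counter(degree.values())    # degree -> number of nodes

dist = up_model()
for k in sorted(dist)[:8]:
    print(k, dist[k])
```

Setting p = 1 recovers pure preferential attachment and a power-law tail; p = 0 gives uniform attachment and an exponential degree distribution; intermediate p interpolates between the two regimes.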
178

Design and development of material-based resolution enhancement techniques for optical lithography

Gu, Xinyu, 18 November 2013
The relentless commercial drive for smaller, faster, and cheaper semiconductor devices has pushed the existing patterning technologies to their limits. Photolithography, one of the crucial processes that determine the feature size in a microchip, is currently facing this challenge. The immaturity of next-generation lithography (NGL) technology, particularly EUV, forces the semiconductor industry to explore new processing technologies that can extend the use of the existing lithographic method (i.e. ArF lithography) to enable production beyond the 32 nm node. Two new resolution enhancement techniques, double exposure lithography (DEL) and pitch division lithography (PDL), were proposed that could extend the resolution capability of current lithography tools. This thesis describes the material and process development for these two techniques. DEL requires two exposure passes in a single lithographic cycle. The first exposure is performed with a mask that has a relaxed pitch, and the mask is then shifted by half a pitch and re-used for the second exposure. The resolution of the resulting pattern on the wafer is doubled with respect to the features on the mask. This technique can be enabled by a material that functions as an optical threshold layer (OTL). The key requirements for OTL materials are a photoinduced isothermal phase transition and reversible permeance modulation. A number of materials were designed and tested based on long-alkyl-side-chain crystalline polymers bearing azobenzene pendant groups on the main chain. The target copolymers were synthesized and fully characterized, and a proof-of-concept for the OTL design was successfully demonstrated with a series of customized analytical techniques. PDL doubles the line density of a grating mask with only a single exposure and is fully compatible with current lithography tools. This technique is thus capable of extending the resolution limit of current ArF lithography without increasing the cost of ownership. Pitch division with a single exposure is accomplished by a dual-tone photoresist. This thesis presents a novel method to enable dual-tone behaviour by adding a photobase generator (PBG) to a conventional resist formulation. The PBG was optimized to function as an exposure-dependent base quencher, which mainly neutralizes the acid generated in high-dose regions but has only a minor influence in low-dose regions. The resulting acid concentration profile is a parabola-like function of exposure dose, and only intermediate exposure doses produce a sufficient amount of acid to switch the resist solubility. This acid response is exploited to produce pitch-division patterns by creating a set of negative-tone lines in the overexposed regions in addition to the conventional positive-tone lines. A number of PBGs were synthesized and characterized, and their decomposition rate constants were studied using various techniques. Simulations were carried out to assess the feasibility of pitch division lithography, concluding that it is advantageous when the process aggressiveness factor k₁ is below 0.27. Finally, lithography evaluations of these dual-tone resists demonstrated a proof-of-concept for pitch division lithography with 45 nm pitch-divided line and space patterns at a k₁ of 0.13.
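The parabola-like acid response can be illustrated with a toy dose model (the first-order photokinetics and all constants below are illustrative assumptions, not fitted resist parameters): acid from the photoacid generator saturates quickly while base from the PBG keeps accumulating, so the net acid exceeds the solubility-switch threshold only in a band of intermediate doses.

```python
import numpy as np

def net_acid(dose, a_max=1.0, b_max=1.2, c_acid=0.10, c_base=0.02):
    # Acid and base both follow first-order photokinetics (toy constants).
    acid = a_max * (1.0 - np.exp(-c_acid * dose))   # saturates quickly
    base = b_max * (1.0 - np.exp(-c_base * dose))   # keeps accumulating
    return np.clip(acid - base, 0.0, None)          # quenched where base wins

dose = np.linspace(0.0, 200.0, 801)
acid = net_acid(dose)
threshold = 0.25                 # assumed solubility-switch level
window = dose[acid > threshold]  # doses where the resist switches
print(f"switching window: {window[0]:.1f} to {window[-1]:.1f} (arb. dose units)")
```

Because the curve crosses the threshold twice, each mask period yields two printed features: the conventional positive-tone line plus a negative-tone line in the overexposed region, which is the pitch-division effect.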
179

Energy valorization of agro-industrial wastes and sweet sorghum for the production of gaseous biofuels through anaerobic digestion

Δαρειώτη, Μαργαρίτα, 09 February 2015
It is clear that renewable resources have received great interest from the international community during recent decades and play a crucial role in current CO2-mitigation policy. In this regard, energy from biomass and waste is seen as one of the most dominant future renewable energy sources. Organic wastes, i.e. animal wastes, wastewaters, energy crops, and agricultural and agro-industrial residues, are of specific importance since these sources do not compete with food crops for agricultural land. The various technologies available for power generation from biomass and waste can be subdivided into thermochemical, biochemical and physicochemical conversion processes. Anaerobic digestion (AD), classified among the biochemical conversion processes, is a robust and widely applied process. Various types of biomass and waste can be anaerobically co-digested to generate a homogeneous mixture, increasing both process and equipment performance. This study focused on the valorization of agro-industrial wastes (olive mill wastewater (OMW), cheese whey (CW) and liquid cow manure (LCM)) and sweet sorghum stalks. Olive mills, cheese factories and cow farms are agro-industries that represent a considerable share of the worldwide economy, with particular interest in the Mediterranean region. These industries generate millions of tons of wastewaters and large amounts of by-products, which are in many cases totally unexploited and thus dangerous for the environment; their high organic load makes them unsuitable for direct disposal. Sweet sorghum, as a lignocellulosic material, represents an interesting substrate for biofuel production due to its structure and composition. Anaerobic co-digestion experiments using different mixtures of agro-industrial wastes were performed in a two-stage system consisting of two continuously stirred tank reactors (CSTRs) under mesophilic conditions (37°C). Co-digestion of these wastes led to high methane yields, attributed to synergistic effects such as the contribution of additional alkalinity, trace elements and nutrients. Subsequently, further mixtures were studied in which sweet sorghum was added, in order to simulate the operation of a centralized AD plant fed with regional agro-wastes that lacks OMW and/or CW due to seasonal unavailability. Two operational parameters, pH and hydraulic retention time (HRT), were examined in the two-stage system. Batch experiments were performed to investigate the impact of controlled pH on the production of bio-hydrogen and volatile fatty acids, whereas continuous (CSTR) experiments were conducted to evaluate the effect of HRT on hydrogen and methane production. Moreover, further exploitation of the digestate from the anaerobic methanogenic reactor was studied using a combined ultrafiltration/nanofiltration system, obtaining further COD reduction in the permeate. In addition, vermicomposting was conducted to evaluate the transformation of the digested sludge into compost, with good results in terms of increased N-P-K concentrations. Furthermore, the mesophilic anaerobic (co-)digestion of the different substrates was simulated using a modified ADM1 model, and the results indicated that the modified ADM1 was able to predict the steady-state experimental data reasonably well.
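A drastically simplified stand-in for the two-stage digester (this is not ADM1: biomass growth, inhibition, pH and gas-phase dynamics are omitted, and every parameter is hypothetical) models the two CSTRs in series with Monod-type conversions, substrate to volatile fatty acids in the acidogenic stage and VFAs to methane in the methanogenic stage:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, K1 = 6.0, 2.0   # acidogenesis: max rate [1/d] and half-saturation [g/L]
k2, K2 = 3.0, 0.5   # methanogenesis
Y_ch4 = 0.35        # CH4 yield per unit VFA consumed [L/g], illustrative
D1, D2 = 0.5, 0.2   # dilution rates of the two CSTRs [1/d]; HRT = 1/D
S_in = 20.0         # feed substrate concentration [g/L]

def rhs(t, y):
    S1, A1, A2, V = y  # stage-1 substrate, stage-1 VFA, stage-2 VFA, cumulative CH4
    r1 = k1 * S1 / (K1 + S1)   # Monod-type acidogenic rate (biomass folded in)
    r2 = k2 * A2 / (K2 + A2)   # Monod-type methanogenic rate
    dS1 = D1 * (S_in - S1) - r1
    dA1 = -D1 * A1 + r1
    dA2 = D2 * (A1 - A2) - r2  # stage 2 is fed by the stage-1 effluent
    dV = Y_ch4 * r2
    return [dS1, dA1, dA2, dV]

sol = solve_ivp(rhs, (0, 60), [S_in, 0, 0, 0], t_eval=np.linspace(0, 60, 301))
print(f"steady-state VFA in stage 2: {sol.y[2, -1]:.2f} g/L, "
      f"cumulative CH4 (arb.): {sol.y[3, -1]:.1f}")
```

Varying D1 and D2 (the inverse HRTs) in such a model mimics the HRT experiments described above, although quantitative prediction requires the full ADM1 structure.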
180

Optimization of the drug development process using PK modelling and clinical trial simulations

Colucci, Philippe, 12 1900
Drug development is complex, with results often differing from those anticipated or sought. Despite technological advances in the many fields that are part of drug development, many drugs still fail in the late stages of clinical development, and the success rate of drugs reaching commercialization is declining. Improvements to the conventional drug development process are therefore required in order to facilitate development and make new medications available more rapidly to the patients who need them. The aim of this Ph.D. project was to explore and propose ways to improve this inefficient process through advanced modeling and clinical trial simulations. For the first part of this research, new algorithms available in ADAPT 5® were tested against other available algorithms to determine their potential strengths and weaknesses. The two new algorithms tested were the iterative two-stage (ITS) and maximum likelihood expectation maximization (MLEM) methods. Our results demonstrated that the MLEM algorithm was consistently better than the iterative two-stage algorithm. It was also comparable with the first-order conditional estimation (FOCE) method available in NONMEM®, with significantly fewer shrinkage issues in the estimation of the variances. These new tools were therefore used for the clinical trial simulations performed during the course of this Ph.D. research. In order to calculate appropriate noncompartmental pharmacokinetic parameter estimates during the drug development process, it is essential that the terminal elimination half-life be well characterized.
Properly conducted and analyzed pharmacokinetic studies are essential to any drug development plan, and even more so for generic and supergeneric submissions (a supergeneric being a formulation containing the same active ingredient as the reference product but differing from it in its delivery process), where they are often the only pivotal studies needed to decide whether a drug product can be commercialized. Thus, the purpose of the second part of the research was to determine whether the pharmacokinetic (PK) parameters obtained from a subject whose half-life is calculated from a sampling scheme considered too short could bias the bioequivalence conclusions of a study, and whether these parameters should be removed from statistical analyses. Results demonstrated that subjects with a long half-life relative to the duration of the sampling scheme negatively influenced results when they were kept in the analysis of variance. The area-under-the-curve-to-infinity parameter for these subjects should therefore be removed from the statistical analysis, and guidelines to this effect are required a priori. Pivotal pharmacokinetic studies conducted within the drug development process should follow this recommendation to ensure that the right decision is made about a drug product formulation. This information was used in the clinical trial simulations subsequently performed in this research in order to ensure the most accurate conclusions. Finally, clinical trial simulations were used to improve the development process of a nonsteroidal anti-inflammatory drug. A supergeneric was being developed, and the results of a pilot study seemed promising. However, some results from the pilot study required closer attention to determine whether the test and reference compounds were indeed equivalent and whether the test compound would meet the equivalence criteria of the required pivotal studies, conducted fasting and fed, after single and repeated doses. Clinical trial simulations were therefore undertaken to address the questions left unanswered by the pilot study, and they suggested that the test compound would probably not meet the equivalence criteria. These results also helped determine what modifications to the test formulation would be required to improve the chances of meeting the equivalence criteria. This research brought forward solutions to improve different aspects of the drug development process. Notably, clinical trial simulations reduced the number of studies needed for the development of the supergeneric, decreased the number of subjects unnecessarily exposed to the drug, and lowered development costs, and the work established new criteria for the exclusion of subjects from bioequivalence statistical analyses. In sum, the research conducted during this Ph.D. provides concrete ways to improve the drug development process by evaluating newly available tools for compartmental analyses, setting standards stipulating which estimated PK parameters should be excluded from certain analyses, and illustrating how useful clinical trial simulations are throughout the process.
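The 80-125% bioequivalence decision that these analyses feed into can be sketched as follows (the AUC values are hypothetical, and a paired log-scale approximation stands in for the full crossover ANOVA with sequence and period effects):

```python
import numpy as np
from scipy import stats

# Hypothetical AUC(0-inf) values (ng*h/mL) for 10 subjects in a 2x2 crossover.
# Per the recommendation above, subjects whose sampling duration was too short
# to characterize the terminal half-life would be excluded before this step.
auc_test = np.array([812, 950, 1104, 733, 989, 876, 1022, 690, 1150, 905])
auc_ref = np.array([798, 1010, 1065, 760, 1015, 850, 998, 725, 1103, 940])

log_ratio = np.log(auc_test) - np.log(auc_ref)
n = log_ratio.size
mean, se = log_ratio.mean(), log_ratio.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, df=n - 1)     # two-sided 90% confidence interval
lo, hi = np.exp(mean - t90 * se), np.exp(mean + t90 * se)
print(f"GMR = {np.exp(mean):.3f}, 90% CI = [{lo:.3f}, {hi:.3f}]")
print("bioequivalent (80-125% rule):", lo >= 0.80 and hi <= 1.25)
```

Re-running such a computation over simulated trials, with subject-level PK parameters drawn from a population model, is the essence of the clinical trial simulations used in the final part of the thesis.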
