71

Analysis Of The Florida's Showcase Green Envirohome Water/wastewater Systems And Development Of A Cost-benefit Green Roof Optimization Model

Rivera, Brian, 01 January 2010
The Florida Showcase Green Envirohome (FSGE) incorporates many green technologies. FSGE was built to meet or exceed 12 green building guidelines and obtain 8 green building certificates. The two-story, 3292 ft2 home is a "Near Zero-Loss Home", "Near Zero-Energy Home", "Near Zero-Runoff Home", and "Near Zero-Maintenance Home". It grew out of the consumer-driven need for a home resistant to hurricanes, tornadoes, floods, fire, mold, termites, impacts, and even earthquakes, given insurance premium increases of up to 500% in natural disaster zones, the dwindling flexibility and coverage of insurance policies, and rising energy, water, and maintenance costs (FSGE 2008). The FSGE captures stormwater runoff from its green roof, metal roof, and wood decking area and routes it to a sustainable water cistern. Graywater from the home, after being disinfected with ozone, is also routed to the cistern. The stored water is used to irrigate the green roof and ground-level landscape and to flush toilets. This study was conducted in two phases. During phase one, only stormwater runoff from the green roof, metal roof, and wood decking area was routed to the cistern; during phase two, water from the graywater system was added. The cistern's water quality was analyzed during both phases to determine whether it is acceptable for irrigation and suitable for toilet flushing; it proved acceptable for irrigation. Because the home is intended not to pollute the environment, as many nutrients as possible should be removed from the wastewater before it is discharged to the groundwater. The FSGE design therefore evaluates a new on-site sewage treatment and disposal (OSTD) system built around a sorption medium marketed as Bold and Gold(TM) filtration media.
The Bold and Gold(TM) filtration media is a mixture of tire crumb and other materials. The new OSTD system has sampling ports throughout, so that wastewater quality can be monitored as the water passes through, and its effluent quality is compared with that of a conventional system on the campus of the University of Central Florida. The cost-benefit optimization model focuses on designing a residential home that incorporates green roof, cistern, and graywater systems. The model takes two forms: the base model and the grey linear model. The base model uses current average costs for construction materials and installation; the grey model uses intervals for construction material costs and green roof energy savings. Both models include a probabilistic term describing rainfall. The cost and energy operation of a typical Florida home was used as a case study, and selected model parameters were varied to determine their effect on the results. The modeling showed that the FSGE 4500-gallon cistern design was cost-effective in providing irrigation water, and that a smaller green roof area would have been needed for cost-effectiveness, because a green roof costs considerably more than a conventional roof.
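The abstract above describes a cost-benefit model with a probabilistic rainfall term. As a minimal sketch of that idea, the snippet below computes the expected annual irrigation savings from a cistern under a discrete rainfall distribution. All prices, the runoff coefficient, and the rainfall scenarios are invented for illustration and are not values from the FSGE study.

```python
# Hypothetical cost-benefit sketch of a cistern sized for irrigation reuse.
# Prices, demand ceiling, and the rainfall distribution are illustrative
# assumptions, not figures from the thesis.

def expected_cistern_savings(cistern_gal, roof_area_ft2, rain_scenarios,
                             water_price_per_gal=0.005, runoff_coeff=0.9):
    """Expected yearly irrigation savings, with annual rainfall modeled as
    a discrete distribution of (inches per year, probability) pairs."""
    GAL_PER_FT2_INCH = 0.623  # gallons captured per ft^2 per inch of rain
    savings = 0.0
    for inches, prob in rain_scenarios:
        captured = roof_area_ft2 * inches * GAL_PER_FT2_INCH * runoff_coeff
        usable = min(captured, cistern_gal * 52)  # weekly-refill ceiling
        savings += prob * usable * water_price_per_gal
    return savings

scenarios = [(40.0, 0.25), (50.0, 0.5), (60.0, 0.25)]  # dry / normal / wet year
savings = expected_cistern_savings(4500, 3292, scenarios)
```

Varying `cistern_gal` in such a model is what reveals the break-even size: beyond the point where the storage ceiling stops binding, extra capacity adds cost but no savings.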
72

Developing a Decision Making Approach for District Cooling Systems Design using Multi-objective Optimization

Kamali, Aslan, 18 August 2016
Energy consumption has increased dramatically on a global scale over the last few decades. A significant driver of this increase is the recent rise in temperature levels, especially in summer, which has caused a rapid increase in air-conditioning demand. This is particularly visible in developing countries in hot climate regions, where people depend mainly on conventional air-conditioning systems. These systems often perform poorly and thus burden the environment, which in turn contributes to global warming. In recent years, demand for urban or district cooling technologies and networks has grown significantly as an alternative to conventional systems, owing to their higher efficiency and lower ecological impact. Obtaining an efficient design for a district cooling system is, however, a complex task: it requires considering a wide range of cooling technologies, various network layout configurations, and several energy resources to be integrated, so critical decisions must be made among a variety of opportunities, options, and technologies. The main objective of this thesis is to develop a tool that produces preliminary design configurations and operation patterns for district cooling energy systems through sufficiently detailed optimizations and, further, to introduce a decision-making approach that helps decision makers evaluate the economic and environmental performance of urban cooling systems at an early design stage. Different aspects of the subject have been investigated in the literature; a brief survey of the state of the art revealed that mathematical programming models are the most common and successful technique for configuring and designing cooling systems for urban areas.
Based on this survey, multi-objective optimization was chosen to support the decision-making process, and a multi-objective optimization model was developed to address the complex decisions involved in designing a cooling system for an urban area or district. The model optimizes several elements of a cooling system, such as the cooling network, cooling technologies, and the capacity and location of system equipment. Various energy resources are taken into consideration, as well as different solar technologies: trough solar concentrators, vacuum solar collectors, and PV panels. The model is based on mixed-integer linear programming (MILP) and implemented in the GAMS language. Two case studies were investigated with the developed model: a residential district of seven buildings, and a university campus district dominated by non-residential buildings. The study covered several groups of scenarios investigating design parameters and operating conditions such as available area, production plant location, cold storage location constraints, piping prices, investment cost, constant and variable electricity tariffs, solar energy integration policy, waste heat availability, load-shifting strategies, and the effect of outdoor temperature in hot regions on district cooling system performance. The investigation consisted of three stages: single-objective optimization of total annual cost, single-objective optimization of CO2 emissions, and a multi-objective optimization combining the two. Non-dominated solutions, i.e. Pareto solutions, were then generated from several multi-objective optimization scenarios based on the decision makers' preferences.
Eventually, a decision-making approach was developed to help decision makers select the solution that best fits the designers' or decision makers' preferences, based on the difference between the Utopia and Nadir values, i.e. the total annual cost and CO2 emissions obtained in the single-objective optimization stages.
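The last two stages described above (Pareto generation, then selection relative to the Utopia and Nadir points) can be sketched in a few lines. The candidate (cost, CO2) values below are invented, and the selection rule shown (shortest normalized Euclidean distance to the Utopia point) is one common choice, not necessarily the exact rule used in the thesis.

```python
# Sketch of Pareto filtering and Utopia/Nadir-based selection over
# candidate designs scored by (total annual cost, CO2 emissions).

def pareto_front(points):
    """Keep the (cost, co2) pairs not dominated by any other pair."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

def rank_by_utopia(front):
    """Normalize each objective by its Utopia-Nadir span and rank
    solutions by Euclidean distance to the Utopia (best-best) point."""
    costs, co2s = zip(*front)
    u = (min(costs), min(co2s))  # Utopia values (best of each objective)
    n = (max(costs), max(co2s))  # Nadir values (worst within the front)
    def dist(p):
        dc = (p[0] - u[0]) / ((n[0] - u[0]) or 1)
        de = (p[1] - u[1]) / ((n[1] - u[1]) or 1)
        return (dc ** 2 + de ** 2) ** 0.5
    return sorted(front, key=dist)

designs = [(10.0, 9.0), (12.0, 6.0), (15.0, 4.0), (13.0, 8.0), (11.0, 10.0)]
front = pareto_front(designs)
best = rank_by_utopia(front)[0]
```

Dominated candidates such as (13.0, 8.0) drop out of the front, and the compromise design closest to the Utopia point is returned first.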
73

Analyse, Modellierung und Verfahren zur Kompensation von CDN-bedingten Verkehrslastverschiebungen in ISP-Netzen / Analysis, Modeling, and Methods for Compensating CDN-Induced Traffic Load Shifts in ISP Networks

Windisch, Gerd, 17 March 2017
A large share of the traffic in Internet Service Provider (ISP) networks today is caused by Content Delivery Networks (CDNs). CDN operators use load-balancing mechanisms to even out the utilization of their CDN infrastructure, without any coordination with the ISP operators. Large traffic load shifts can therefore occur both within an ISP network and on the interconnection links between the ISP network and the CDNs. This thesis investigates which non-cooperative options an ISP has to counteract or mitigate traffic load shifts caused by load-balancing mechanisms inside a CDN. The basis for this investigation is an analysis of the server-selection behavior of the YouTube CDN. To this end, an active measurement method was developed to determine the spatial and temporal behavior of YouTube's server selection, and two measurement studies examine server selection in German and European ISP networks. Building on these studies, a traffic model is developed that captures the traffic load shifts caused by changes in YouTube's server selection. The traffic model in turn forms the basis for computing optimal routes in the ISP network that are highly robust against CDN-induced traffic load shifts (alpha-robust routing optimization). To solve the robust routing optimization problem, an iterative procedure is developed and a compact reformulation is presented. The performance of alpha-robust routing is evaluated on three example network topologies, comparing the new approach with alternative robust routing methods and a non-robust method. 
In addition to robust routing optimization, the thesis presents three further ideas for non-cooperative methods to counteract CDN-induced traffic load shifts: a BGP-based, an IP-prefix-based, and a DNS-based method.
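The core robustness idea above, choosing routes that stay acceptable under any of the CDN's server-selection outcomes, can be illustrated with a toy min-max choice. The topology, candidate paths, and demand figures below are invented; the thesis solves this as a full routing optimization problem, not by enumeration.

```python
# Toy min-max route choice: among candidate routes, prefer the one
# whose worst-case carried load over a set of CDN server-selection
# scenarios is smallest. Figures in Gbit/s are illustrative only.

def worst_case_load(route, scenarios):
    """Maximum, over scenarios, of the total demand this route carries.
    Each scenario maps a link name to the CDN demand placed on it."""
    return max(sum(s.get(link, 0.0) for link in route) for s in scenarios)

def min_max_route(candidates, scenarios):
    """The candidate route with the smallest worst-case load."""
    return min(candidates, key=lambda r: worst_case_load(r, scenarios))

# Two candidate paths through a small ISP network, three load scenarios
# representing different CDN server-selection states.
routes = [("a-b", "b-d"), ("a-c", "c-d")]
scenarios = [
    {"a-b": 4.0, "b-d": 1.0, "a-c": 2.0, "c-d": 2.0},
    {"a-b": 1.0, "b-d": 1.0, "a-c": 5.0, "c-d": 1.0},
    {"a-b": 2.0, "b-d": 2.0, "a-c": 2.0, "c-d": 1.0},
]
chosen = min_max_route(routes, scenarios)
```

Note that the chosen route need not be best in any single scenario; it is merely the least bad across all of them, which is exactly what makes it robust to the CDN's unannounced shifts.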
74

Caractérisation thermique de matériaux anisotropes à hautes températures / Thermal characterization of anisotropic materials at high temperatures

Souhar, Youssef, 20 May 2011
The study concerns the thermal characterization at high temperatures of anisotropic materials whose thermal diffusivity varies with the direction considered. The diffusivity is measured by observing the transient temperature variations of a material subjected to a pulsed heat source. The excitation is produced by a laser, and the temperature is measured by infrared thermography on the face opposite the thermal excitation. The temperature field thus obtained makes it possible to determine the three diffusivities of the material along its directions of anisotropy. Thanks to integral transforms of the temperature field, a theoretical model describing the temperature variations within the material can be obtained, and the diffusivity estimates then follow from minimizing the sum of squared residuals between the theoretical models and their experimental counterparts. These are nonlinear optimization problems, and the estimations are carried out in the spatial-frequency domain and in time via numerical Laplace inversion. Based on optical devices, the method is non-intrusive, and thanks to the analytical models the estimations are fast and accurate even at high temperatures. The method and the new experimental facility make it possible to estimate the three thermal diffusivities in a single experiment, for excitations of arbitrary shape in space that need not be a Dirac delta in time.
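The estimation principle above, fitting a theoretical temperature model to measurements by minimizing the sum of squared residuals, can be shown in miniature. The snippet below uses a single hypothetical exponential mode T(t) = exp(-a t) for the rear-face response and a grid search instead of the thesis's full nonlinear optimization and Laplace inversion; the data are synthetic and noise-free.

```python
# Toy least-squares fit of a decay parameter to a synthetic rear-face
# temperature history. The one-mode model and grid search stand in for
# the thesis's multi-dimensional nonlinear estimation.
import math

def ssr(a, times, observed):
    """Sum of squared residuals between model exp(-a*t) and data."""
    return sum((math.exp(-a * t) - y) ** 2 for t, y in zip(times, observed))

def fit_decay(times, observed, grid):
    """Grid value of the decay parameter minimizing the SSR."""
    return min(grid, key=lambda a: ssr(a, times, observed))

true_a = 0.8
times = [0.1 * k for k in range(20)]
observed = [math.exp(-true_a * t) for t in times]  # noise-free synthetic data
grid = [0.01 * k for k in range(1, 200)]
a_hat = fit_decay(times, observed, grid)
```

With noise-free data the recovered parameter lands on the grid point nearest the true value; with real thermography data the same residual-minimization structure applies, just over three diffusivities and a far richer model.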
75

[en] TOWARD GPU-BASED GROUND STRUCTURES FOR LARGE SCALE TOPOLOGY OPTIMIZATION / [pt] OTIMIZAÇÃO TOPOLÓGICA DE ESTRUTURAS DE GRANDE PORTE UTILIZANDO O MÉTODO DE GROUND STRUCTURES EM GPU

ARTURO ELI CUBAS RODRIGUEZ, 14 May 2019
[en] Topology optimization aims to find the most efficient material distribution in a specified domain without violating user-defined design constraints.
When applied to continuum structures, topology optimization is usually performed by means of the well-known density methods. In this work we focus on its discrete formulation, in which a given domain is discretized into a ground structure, i.e., a finite spatial distribution of nodes connected by truss members. The ground structure method approximates optimal Michell-type structures, which are composed of an infinite number of members, using a reduced number of truss members. The optimal least-weight truss for a single load case, under linear elastic conditions and subject to stress constraints, can be posed as a linear programming problem. The aim of this work is to provide a scalable implementation for the optimization of least-weight trusses embedded in any domain geometry. The method removes unnecessary members from a truss with a user-defined degree of connectivity while keeping the nodal locations fixed. We discuss in detail a scalable implementation of the ground structure method using an efficient and robust interior-point algorithm within a parallel computing environment involving Graphics Processing Units (GPUs). The capabilities of the proposed implementation are illustrated by means of large-scale applications to practical problems with millions of members in both 2D and 3D structures.
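The ground-structure setup described above starts by generating candidate members between nodes up to a user-defined connectivity level. The sketch below does this for a small unit grid, with the connectivity level expressed as a maximum member length; it is purely illustrative, since the thesis generates and solves structures with millions of members on GPUs.

```python
# Generate a small ground structure: nodes on an nx-by-ny unit grid and
# candidate truss members between all node pairs whose Euclidean length
# is at most `level` grid units (a simple proxy for the user-defined
# degree of connectivity).
import itertools
import math

def ground_structure(nx, ny, level):
    """Return (nodes, members) for an nx-by-ny grid of unit spacing."""
    nodes = [(i, j) for i in range(nx) for j in range(ny)]
    members = [
        (p, q) for p, q in itertools.combinations(nodes, 2)
        if math.dist(p, q) <= level
    ]
    return nodes, members

# 3x3 grid, connectivity up to the cell diagonal (length sqrt(2)).
nodes, members = ground_structure(3, 3, 1.5)
```

The optimization stage then assigns a cross-sectional area to each candidate member and lets most of them go to zero, which is how unnecessary members are "removed" while nodal locations stay fixed.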
76

[en] ALLOCATION OF FIRM ENERGY RIGHTS AMONG HYDRO PLANTS: A GAME THEORETIC APPROACH / [pt] APLICAÇÃO DE TEORIA DOS JOGOS À REPARTIÇÃO DA ENERGIA FIRME DE UM SISTEMA HIDRELÉTRICO

EDUARDO THOMAZ FARIA, 16 November 2004
[en] The objective of this work is to investigate the application of different methodologies for allocating firm energy rights among hydro plants using a game-theoretic framework. It is shown that there is no single optimal way to make this allocation, but there are criteria to verify whether a given approach presents any inadequate aspect. One of these criteria is justice, or fairness, and it is shown that this criterion is equivalent to belonging to the core of a cooperative game.
The calculation of firm energy is formulated as a linear program, and the advantages and disadvantages of different allocation methods (marginal allocation, average production in the dry period, incremental allocation, and the nucleolus) are investigated. Next, an application of the Aumann-Shapley (AS) scheme to the allocation of firm energy rights is developed. It is shown that, besides being robust and computationally efficient, this scheme provides an allocation that belongs to the core of the game and therefore meets the fairness condition. The AS scheme is applied to the Brazilian system (composed of about 100 hydro plants), and the results are compared with the allocation schemes currently adopted in the Brazilian system.
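The core condition mentioned above is easy to state computationally: an allocation is in the core if it distributes exactly the grand-coalition value and no coalition could secure more on its own. The snippet below checks this for a tiny three-plant game with invented characteristic-function values; the thesis works with the actual firm-energy values of about 100 hydro plants.

```python
# Check whether an allocation of firm energy lies in the core of a
# cooperative game. Coalition values `v` below are invented.
from itertools import combinations

def in_core(allocation, v, tol=1e-9):
    """allocation: dict player -> share; v: dict frozenset -> value."""
    players = list(allocation)
    grand = frozenset(players)
    if abs(sum(allocation.values()) - v[grand]) > tol:
        return False  # must distribute exactly the grand-coalition value
    for r in range(1, len(players)):
        for coal in combinations(players, r):
            if sum(allocation[p] for p in coal) < v[frozenset(coal)] - tol:
                return False  # this coalition would object and leave
    return True

v = {frozenset("A"): 2.0, frozenset("B"): 2.0, frozenset("C"): 2.0,
     frozenset("AB"): 5.0, frozenset("AC"): 5.0, frozenset("BC"): 5.0,
     frozenset("ABC"): 9.0}

equal = in_core({"A": 3.0, "B": 3.0, "C": 3.0}, v)   # no coalition objects
skewed = in_core({"A": 6.0, "B": 2.0, "C": 1.0}, v)  # C alone is worth 2
```

The fairness criterion in the abstract is precisely this stability property: an allocation outside the core gives some group of plants less than the firm energy they could guarantee by themselves.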
77

Exploring flexibility and context dependency in the mycobacterial central carbon metabolism

Tummler, Katja, 11 May 2017
Tuberculosis remains one of the major global health threats, responsible for over 1.5 million deaths each year. This 'success story' of the causative agent Mycobacterium tuberculosis is closely linked to a flexible metabolism that allows growth despite the restrictive conditions within the human host. In this thesis, the flexibility of the mycobacterial central carbon metabolism is explored by modeling approaches that integrate high-quality experimental data. The analyses zoom in from a network-based view to the detailed functionality of individual, virulence-relevant pathways. The interconnection of the central carbon metabolism with the remaining metabolic network is charted as a prerequisite to characterizing its thermodynamic landscape, identifying glycolysis as the limiting pathway under different nutritional conditions. Based on steady-state metabolomics and proteomics data, regulatory sites for the metabolic transition between different carbon sources are predicted by a novel method. Finally, the flexible interplay between two seemingly redundant pathways for the catabolism of an in vivo-like carbon source is explained mechanistically by means of thermodynamic-kinetic modeling. By employing novel modeling methods in combination with high-resolution experimental data, this work adds to the mechanistic understanding of the context-dependent flexibility of mycobacterial metabolism, an important target for the development of novel drugs against tuberculosis.
78

[en] DISAGGREGATION OF ELECTRICAL ENERGY BY HOME APPLIANCES FOR RESIDENTIAL CONSUMERS / [pt] DESAGREGAÇÃO DA ENERGIA ELÉTRICA POR ELETRODOMÉSTICOS PARA CONSUMIDORES RESIDENCIAIS

ESTIVEN OROZCO ZULUAGA, 24 January 2019
[pt] Nos últimos anos, o custo com energia elétrica tem aumentado de forma significativa para os consumidores no Brasil. Grandes consumidores, como indústrias e comércios, atualmente dispõem de alternativas para mitigar estes custos, como a otimização do contrato de demanda, a correção do baixo fator de potência, a utilização de geração própria, renovável ou não renovável, além da possibilidade de migrar para o mercado livre de energia elétrica, com diversas modalidades de contratos, preços e prazos. Já os consumidores residenciais, em função dos custos menores com as faturas de energia e da limitação técnica dos medidores, até agora dispunham de poucos mecanismos para atenuar seus custos. Entretanto, nos últimos anos tem sido cada vez mais comum a utilização de geração distribuída, principalmente com o uso de painéis fotovoltaicos por parte destes consumidores. Além disto, com a redução dos custos dos medidores inteligentes de energia elétrica, estes consumidores também podem monitorar seu consumo em tempo real, promovendo ações de aumento de eficiência energética para reduzir custos. Mais recentemente, foram criadas as bandeiras tarifárias, que propõem identificar as condições sistêmicas por cores verde, amarela e vermelha. As cores amarela e vermelha sinalizam aumentos de custos na produção de energia elétrica e, consequentemente, são repassados para o consumidor na forma de aumento de tarifa, promovendo resposta da demanda. Assim, há uma razão adicional para os consumidores monitorarem seu consumo. Não obstante, em 2018 foi adotada uma nova modalidade tarifária voltada para esta classe de consumidor chamada tarifa branca. Nesta modalidade, o consumidor possui diferentes valores de tarifas para diferentes períodos do dia. Assim, o consumidor que optar por esta modalidade pode reduzir o custo da sua fatura deslocando o consumo de horários de maior valor de tarifa para horários de menor valor de tarifa. 
Esta dissertação busca analisar em detalhes a viabilidade de um consumidor residencial migrar seu contrato para a chamada tarifa branca. Para isto, é proposto um modelo de otimização linear inteiro misto que busca desagregar o consumo de energia elétrica, medido de forma não invasiva, do consumidor para os diferentes eletrodomésticos da casa. Logo, o consumidor poderá decidir pela mudança contratual avaliando a perda de conforto que terá em mudar seus hábitos de consumo. A aplicação do modelo proposto é interessante não só por apresentar um diagnóstico mais detalhado do consumo de energia elétrica, mas também por identificar o funcionamento de eletrodomésticos como geladeira, ar condicionado e frigobar, que possuem diferentes estados de operação que dificilmente seriam capturados por uma simples inspeção destes eletrodomésticos. Para ilustrar o modelo proposto, nesta dissertação, dados de um consumidor real foram utilizados e a acurácia do modelo pôde ser comprovada com medições diretas de alguns eletrodomésticos. Desta forma, o consumidor tem a sua disposição uma ferramenta de apoio à decisão importante para monitorar o funcionamento dos eletrodomésticos e definir se deve migrar para a nova modalidade tarifária. / [en] In recent years, the cost of electric energy has increased significantly for consumers in Brazil. Large consumers, such as industrial and commercial customers, currently have alternatives to mitigate these costs, such as optimizing the demand contract, correcting a low power factor, using their own renewable or non-renewable generation, and the possibility of migrating to the free electricity market, with its various contract types, prices and terms. Residential consumers, given their lower energy bills and the technical limitations of their meters, have until now had few mechanisms to reduce their costs. However, in recent years the use of distributed generation, mainly photovoltaic panels, has become increasingly common among these consumers. In addition, with the falling cost of smart electricity meters, these consumers can also monitor their consumption in real time and take energy-efficiency actions to reduce costs. More recently, tariff flags were created, which signal systemic conditions by the colors green, yellow and red. The yellow and red flags indicate increased electricity production costs, which are passed on to the consumer as higher tariffs, promoting demand response. This gives consumers an additional reason to monitor their consumption. Furthermore, in 2018 a new tariff modality aimed at this consumer class was adopted, called the white tariff. In this modality, the consumer faces different tariff values at different periods of the day, so a consumer who opts for it can reduce the bill by shifting consumption from higher-tariff hours to lower-tariff hours. This dissertation analyzes in detail the feasibility of a residential consumer migrating to the so-called white tariff. To this end, a mixed-integer linear optimization model is proposed that disaggregates the consumer's electricity consumption, measured non-invasively, into the household's different appliances. The consumer can then decide on the contractual change by assessing the loss of comfort involved in changing consumption habits. The proposed model is of interest not only because it provides a more detailed diagnosis of electricity consumption, but also because it identifies the operation of appliances such as refrigerators, air conditioners and minibars, which have different operating states that would hardly be captured by a simple inspection of these appliances. To illustrate the proposed model, data from a real consumer were used, and the model's accuracy was verified against direct measurements of some appliances. The consumer thus has an important decision-support tool to monitor appliance operation and decide whether to migrate to the new tariff modality.
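The bill arithmetic behind the migration decision can be sketched directly. The tariff values and the daily load profile below are illustrative assumptions, not the regulated rates or the measured consumer data used in the dissertation:

```python
# Flat tariff vs. "tarifa branca" (time-of-use) for one day of
# residential consumption. All rate values (R$/kWh) and the load
# profile (kWh per period) are hypothetical.

def flat_bill(profile, rate):
    """Cost of the daily profile under a single all-day rate."""
    return rate * sum(profile.values())

def white_bill(profile, rates):
    """Cost of the daily profile under period-dependent rates."""
    return sum(rates[period] * kwh for period, kwh in profile.items())

rates = {"off_peak": 0.50, "intermediate": 0.75, "peak": 1.10}

# Habits unchanged: 2.5 kWh still consumed in the peak window.
before = {"off_peak": 6.0, "intermediate": 1.5, "peak": 2.5}
# After shifting 2 kWh (e.g. laundry) from peak to off-peak hours.
after = {"off_peak": 8.0, "intermediate": 1.5, "peak": 0.5}

print(flat_bill(before, 0.60))   # flat-tariff baseline
print(white_bill(before, rates)) # white tariff, unchanged habits: costs more
print(white_bill(after, rates))  # white tariff, shifted load: cheapest
```

Even in this toy the point the dissertation quantifies is visible: migrating without changing habits can raise the bill, while shifting load to off-peak hours makes the white tariff worthwhile; the proposed disaggregation model estimates how much shifting is actually feasible per appliance.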
79

Applications and algorithms for two-stage robust linear optimization / Applications et algorithmes pour l'optimisation linéaire robuste en deux étapes

Costa da Silva, Marco Aurelio 13 November 2018 (has links)
Le domaine de recherche de cette thèse est l'optimisation linéaire robuste en deux étapes. Nous sommes intéressés par des algorithmes d'exploration de sa structure et aussi pour ajouter des alternatives afin d'atténuer le conservatisme inhérent à une solution robuste. Nous développons des algorithmes qui incorporent ces alternatives et sont personnalisés pour fonctionner avec des exemples de problèmes à moyenne ou grande échelle. En faisant cela, nous expérimentons une approche holistique du conservatisme en optimisation linéaire robuste et nous rassemblons les dernières avancées dans des domaines tels que l'optimisation robuste basée sur les données, optimisation robuste par distribution et optimisation robuste adaptative. Nous appliquons ces algorithmes dans des applications définies du problème de conception / chargement du réseau, problème de planification, problème combinatoire min-max-min et problème d'affectation de la flotte aérienne. Nous montrons comment les algorithmes développés améliorent les performances par rapport aux implémentations précédentes. / The research scope of this thesis is two-stage robust linear optimization. We are interested in investigating algorithms that explore its structure and in adding alternatives to mitigate the conservatism inherent in a robust solution. We develop algorithms that incorporate these alternatives and are customized to work with medium- or large-scale problem instances. By doing this we experiment with a holistic approach to conservatism in robust linear optimization and bring together the most recent advances in areas such as data-driven robust optimization, distributionally robust optimization and adaptive robust optimization. We apply these algorithms to selected applications: the network design/loading problem, the scheduling problem, a min-max-min combinatorial problem and the airline fleet assignment problem.
We show how the algorithms developed improve performance when compared to previous implementations.
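The min-max structure these algorithms exploit can be shown on a toy single-variable example; the cost figures, the finite demand set, and the brute-force enumeration (standing in for the decomposition algorithms the thesis develops) are all assumptions for illustration:

```python
# Toy two-stage robust capacity problem: choose capacity now, the
# adversary reveals a demand from an uncertainty set, and recourse
# covers any shortfall at a premium. All numbers are made up.

BUILD_COST = 2.0          # cost per unit of installed capacity
RECOURSE_COST = 5.0       # cost per unit bought after demand is revealed
DEMANDS = [80, 100, 120]  # finite uncertainty set for the demand

def worst_case_cost(capacity):
    """First-stage cost plus the worst recourse cost over all scenarios."""
    return BUILD_COST * capacity + max(
        RECOURSE_COST * max(0, d - capacity) for d in DEMANDS
    )

# Enumerate candidate capacities and keep the min-max (robust) one.
best = min(range(0, 201), key=worst_case_cost)
print(best, worst_case_cost(best))
```

The robust choice here hedges against the worst demand in the set; shrinking or reweighting that set, as data-driven and distributionally robust variants do, is precisely how the conservatism of such a solution is mitigated.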
80

Optimisation convexe non-différentiable et méthodes de décomposition en recherche opérationnelle / Convex nonsmooth optimization and decomposition methods in operations research

Zaourar, Sofia 04 November 2014 (has links)
Les méthodes de décomposition sont une application du concept de diviser pour régner en optimisation. L'idée est de décomposer un problème d'optimisation donné en une séquence de sous-problèmes plus faciles à résoudre. Bien que ces méthodes soient les meilleures pour un grand nombre de problèmes de recherche opérationnelle, leur application à des problèmes réels de grande taille présente encore de nombreux défis. Cette thèse propose des améliorations méthodologiques et algorithmiques de méthodes de décomposition. Notre approche est basée sur l'analyse convexe et l'optimisation non-différentiable. Dans la décomposition par les contraintes (ou relaxation lagrangienne) du problème de planification de production électrique, même les sous-problèmes sont trop difficiles pour être résolus exactement. Mais des solutions approchées résultent en des prix instables et chahutés. Nous présentons un moyen simple d'améliorer la structure des prix en pénalisant leurs oscillations, en utilisant en particulier une régularisation par variation totale. La consistance de notre approche est illustrée sur des problèmes d'EDF. Nous considérons ensuite la décomposition par les variables (ou de Benders) qui peut avoir une convergence excessivement lente. Avec un point de vue d'optimisation non-différentiable, nous nous concentrons sur l'instabilité de l'algorithme de plans sécants sous-jacent à la méthode. Nous proposons une stabilisation quadratique de l'algorithme de Benders, inspirée par les méthodes de faisceaux en optimisation convexe. L'accélération résultant de cette stabilisation est illustrée sur des problèmes de conception de réseau et de localisation de plates-formes de correspondance (hubs). Nous nous intéressons aussi plus généralement aux problèmes d'optimisation convexe non-différentiable dont l'objectif est coûteux à évaluer. C'est en particulier une situation courante dans les procédures de décomposition. 
Nous montrons qu'il existe souvent des informations supplémentaires sur le problème, faciles à obtenir mais avec une précision inconnue, qui ne sont pas utilisées dans les algorithmes. Nous proposons un moyen d'incorporer ces informations incontrôlées dans des méthodes classiques d'optimisation convexe non-différentiable. Cette approche est appliquée avec succès à des problèmes d'optimisation stochastique. Finalement, nous introduisons une stratégie de décomposition pour un problème de réaffectation de machines. Cette décomposition mène à une nouvelle variante de problèmes de conditionnement vectoriel (vector bin packing) où les boîtes sont de taille variable. Nous proposons des heuristiques efficaces pour ce problème, qui améliorent les résultats de l'état de l'art du conditionnement vectoriel. Une adaptation de ces heuristiques permet de construire des solutions réalisables au problème de réaffectation de machines de Google. / Decomposition methods are an application of the divide and conquer principle to large-scale optimization. Their idea is to decompose a given optimization problem into a sequence of easier subproblems. Although successful for many applications, these methods still present challenges. In this thesis, we propose methodological and algorithmic improvements of decomposition methods and illustrate them on several operations research problems. Our approach heavily relies on convex analysis and nonsmooth optimization. In constraint decomposition (or Lagrangian relaxation) applied to short-term electricity generation management, even the subproblems are too difficult to solve exactly. When solved approximately though, the obtained prices show an unstable noisy behaviour. We present a simple way to improve the structure of the prices by penalizing their noisy behaviour, in particular using a total variation regularization. We illustrate the consistency of our regularization on real-life problems from EDF.
We then consider variable decomposition (or Benders decomposition), which can have very slow convergence. Taking a nonsmooth optimization point of view on this method, we address the instability of the Benders cutting-plane algorithm. We present an algorithmic stabilization inspired by bundle methods for convex optimization. The acceleration provided by this stabilization is illustrated on network design and hub location problems. We also study more general convex nonsmooth problems whose objective function is expensive to evaluate. This situation typically arises in decomposition methods. We show that there often exists extra information about the problem, cheap but with unknown accuracy, that is not used by the algorithms. We propose a way to incorporate this coarse information into classical nonsmooth optimization algorithms and apply it successfully to two-stage stochastic problems. Finally, we introduce a decomposition strategy for the machine reassignment problem. This decomposition leads to a new variant of vector bin packing problems, where the bins have variable sizes. We propose fast and efficient heuristics for this problem that improve on state-of-the-art vector bin packing results. An adaptation of these heuristics is also able to generate feasible solutions for the Google instances of the machine reassignment problem.
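A first-fit-decreasing sketch of the vector bin packing variant with variable-size bins might look as follows; the data and the greedy ordering are illustrative assumptions, far simpler than the heuristics developed in the thesis:

```python
# Vector bin packing with heterogeneous bins: items are resource
# vectors (e.g. CPU, RAM of a process), bins are machines with
# possibly different capacities per resource.

def fits(item, load, capacity):
    """Check that adding the item keeps every resource within capacity."""
    return all(l + x <= c for x, l, c in zip(item, load, capacity))

def first_fit_decreasing(items, capacities):
    """Place each item in the first bin where it fits, trying the
    largest items (by total resource demand) first. Returns the bin
    index chosen for each item, or None where no bin could take it."""
    loads = [[0] * len(cap) for cap in capacities]
    result = [None] * len(items)
    order = sorted(range(len(items)), key=lambda i: -sum(items[i]))
    for i in order:
        for b, cap in enumerate(capacities):
            if fits(items[i], loads[b], cap):
                loads[b] = [l + x for l, x in zip(loads[b], items[i])]
                result[i] = b
                break
    return result

# Two resources (CPU, RAM): three machines with different capacities
# and four processes to place.
machines = [(8, 16), (4, 8), (4, 4)]
processes = [(3, 6), (4, 8), (2, 2), (3, 4)]
print(first_fit_decreasing(processes, machines))  # → [0, 0, 2, 1]
```

Sorting the largest items first is the classical bin-packing heuristic; with variable-size bins the bin ordering also matters, which is one of the degrees of freedom the thesis's heuristics exploit.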
