1431 |
Pathwise anticipating random periodic solutions of SDEs and SPDEs with linear multiplicative noise. Wu, Yue. January 2014 (has links)
In this thesis, we study the existence of pathwise random periodic solutions both to semilinear stochastic differential equations with linear multiplicative noise and to semilinear stochastic partial differential equations with linear multiplicative noise in a Hilbert space. We identify them as the solutions of coupled forward-backward infinite horizon stochastic integral equations in general cases, and then apply a relative compactness argument in Wiener-Sobolev spaces in C([0, T], L2(Ω, Rd)) or C([0, T], L2(Ω × O)), together with Schauder's fixed point theorem, to show the existence of a solution of the coupled stochastic forward-backward infinite horizon integral equations.
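As a brief sketch of the object studied (following the standard definition of random periodicity in the random dynamical systems literature; the notation below is assumed, not quoted from the thesis), a random periodic solution of period τ of a stochastic semi-flow u over a metric dynamical system (Ω, F, P, (θ_t)) is a measurable map Y satisfying:

```latex
% Pathwise random periodicity: the solution repeats after time \tau
% up to the noise shift \theta_\tau (standard definition, assumed here).
u(t, s, \omega)\, Y(s, \omega) = Y(t, \omega), \qquad
Y(t + \tau, \omega) = Y(t, \theta_\tau \omega)
\quad \text{for all } t \ge s,\ \text{a.e. } \omega \in \Omega .
```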
|
1432 |
Simulation numérique du procédé de refusion sous laitier électroconducteur / A comprehensive model of the electroslag remelting process. Weber, Valentine. 27 February 2008 (has links)
Le procédé de refusion sous laitier électroconducteur (Electro Slag Remelting ou ESR) est aujourd’hui largement utilisé pour la production d’alliages métalliques à haute valeur ajoutée, comme les aciers spéciaux ou les superalliages base nickel. La modélisation mathématique et la simulation numérique du procédé ESR présentent un grand intérêt puisque les études expérimentales sur installations industrielles sont coûteuses et souvent difficiles à mettre en oeuvre. Ainsi, afin d’améliorer la compréhension et la maîtrise de la conduite d’une refusion, un modèle prédictif a été développé dans le cadre de cette étude. Il décrit les transferts couplés de chaleur et de quantité de mouvement lors de la croissance et de la solidification d’un lingot, en géométrie axisymétrique. La résolution des équations est basée sur une approche de type volumes finis. Le modèle tient compte de l’effet Joule dans le laitier résistif, des forces électromagnétiques et de la turbulence éventuelle de l’écoulement des phases liquides. La zone pâteuse est traitée comme un milieu poreux. Le modèle permet notamment de prédire la formation de la peau de laitier solide qui entoure le laitier et le lingot. Par ailleurs, il offre l’avantage de simuler le comportement du lingot et du laitier après la coupure finale du courant.Le développement s’est accompagné d’une importante étape de validation. Quatre refusions à l’échelle industrielle ont ainsi été réalisées à l’aciérie des Ancizes (Aubert&Duval). Les observations expérimentales ont ensuite été confrontées aux résultats du calcul. La comparaison a montré que le modèle peut être utilisé afin de prédire le comportement du procédé, à condition d’accorder une attention particulière à l’estimation des propriétés thermophysiques du métal, et surtout du laitier. 
Enfin, afin d’illustrer l’utilisation du modèle comme support à la compréhension du procédé, nous avons étudié l’influence de la variation de paramètres opératoires tels que la profondeur d’immersion de l’électrode, le taux de remplissage ou la pression de l’eau de refroidissement. / Electro Slag Remelting (ESR) is widely used for the production of high-value-added alloys such as special steels or nickel-based superalloys. Because of high trial costs and the complexity of the process, trial-and-error approaches are not well suited to fundamental studies and optimization of the process. Consequently, a transient-state numerical model which accounts for electromagnetic phenomena and coupled heat and momentum transfers in an axisymmetrical geometry has been developed. The model simulates the continuous growth of the electroslag remelted ingot through a mesh-splitting method. In addition, solidification of the metal and slag is modelled by an enthalpy-based technique. A turbulence model is implemented to compute the motion of the liquid phases (slag and metal), while the mushy zone is described as a porous medium whose permeability varies with the liquid fraction, thus enabling an accurate calculation of solid/liquid interaction. The coupled partial differential equations are solved using a finite-volume technique. Computed results are compared to experimental observations of four industrial remelted ingots fully dedicated to the model validation step. Pool depth and shape are particularly investigated in order to validate the model. The comparison shows that the model can be used as a predictive tool to analyse the process behavior. Nevertheless, particular attention must be paid to the estimation of the thermophysical properties of the metal and especially the slag. These results provide valuable information about the process performance and the influence of operating parameters.
Finally, we present some examples of using the model to analyse the influence of operating parameters: we have studied variations in the electrode immersion depth, the fill ratio and the water pressure in the cooling circuit.
|
1433 |
Dynamic instruction set extension of microprocessors with embedded FPGAs. Bauer, Heiner. 13 April 2017 (has links) (PDF)
Increasingly complex applications and recent shifts in technology scaling have created a large demand for microprocessors which can perform tasks more quickly and more energy efficiently. Conventional microarchitectures exploit multiple levels of parallelism to increase instruction throughput and use application-specific instruction sets or hardware accelerators to increase energy efficiency. Reconfigurable microprocessors adopt the same principle of providing application-specific hardware, however, with the significant advantage of post-fabrication flexibility. Not only does this offer similar gains in performance but also the flexibility to configure each device individually.
This thesis explored the benefit of a tightly coupled and fine-grained reconfigurable microprocessor. In contrast to previous research, a detailed design space exploration of logical architectures for island-style field programmable gate arrays (FPGAs) has been performed in the context of a commercial 22nm process technology. Other research projects either reused general purpose architectures or spent little effort to design and characterize custom fabrics, which are critical to system performance and the practicality of frequently proposed high-level software techniques. Here, detailed circuit implementations and a custom area model were used to estimate the performance of over 200 different logical FPGA architectures with single-driver routing. Results of this exploration revealed tradeoffs and trends similar to those described by previous studies. The number of lookup table (LUT) inputs and the structure of the global routing network were shown to have a major impact on the area-delay product. However, the results suggested a much larger region of efficient architectures than previously reported. Finally, an architecture with 5-LUTs and 8 logic elements per cluster was selected. Modifications to the microprocessor, which was based on an industry-proven instruction set architecture, and to its software toolchain provided access to this embedded reconfigurable fabric via custom instructions. The baseline microprocessor was characterized with estimates from signoff data for a 28nm hardware implementation. A modified academic FPGA tool flow was used to transform Verilog implementations of custom instructions into a post-routing netlist with timing annotations. Simulation-based verification of the system was performed with a cycle-accurate processor model and diverse application benchmarks, ranging from signal processing through encryption to the computation of elementary functions.
For these benchmarks, a significant increase in performance with speedups from 3 to 15 relative to the baseline microprocessor was achieved with the extended instruction set. Except for one case, application speedup clearly outweighed the area overhead for the extended system, even though the modeled fabric architecture was primitive and contained no explicit arithmetic enhancements. Insights into fundamental tradeoffs of island-style FPGA architectures, the developed exploration flow, and a concrete cost model are relevant for the development of more advanced architectures. Hence, this work is a successful proof of concept and has laid the basis for further investigations into architectural extensions and physical implementations. Potential for further optimization was identified on multiple levels and numerous directions for future research were described. / Zunehmend komplexere Anwendungen und Besonderheiten moderner Halbleitertechnologien haben zu einer großen Nachfrage an leistungsfähigen und gleichzeitig sehr energieeffizienten Mikroprozessoren geführt. Konventionelle Architekturen versuchen den Befehlsdurchsatz durch Parallelisierung zu steigern und stellen anwendungsspezifische Befehlssätze oder Hardwarebeschleuniger zur Steigerung der Energieeffizienz bereit. Rekonfigurierbare Prozessoren ermöglichen ähnliche Performancesteigerungen und besitzen gleichzeitig den enormen Vorteil, dass die Spezialisierung auf eine bestimmte Anwendung nach der Herstellung erfolgen kann.
In dieser Diplomarbeit wurde ein rekonfigurierbarer Mikroprozessor mit einem eng gekoppelten FPGA untersucht. Im Gegensatz zu früheren Forschungsansätzen wurde eine umfangreiche Entwurfsraumexploration der FPGA-Architektur im Zusammenhang mit einem kommerziellen 22nm Herstellungsprozess durchgeführt. Bisher verwendeten die meisten Forschungsprojekte entweder kommerzielle Architekturen, die nicht unbedingt auf diesen Anwendungsfall zugeschnitten sind, oder die vorgeschlagenen FPGA-Komponenten wurden nur unzureichend untersucht und charakterisiert. Jedoch ist gerade dieser Baustein ausschlaggebend für die Leistungsfähigkeit des gesamten Systems. Deshalb wurden im Rahmen dieser Arbeit über 200 verschiedene logische FPGA-Architekturen untersucht. Zur Modellierung wurden konkrete Schaltungstopologien und ein auf den Herstellungsprozess zugeschnittenes Modell zur Abschätzung der Layoutfläche verwendet. Generell wurden die gleichen Trends wie bei vorhergehenden und ähnlich umfangreichen Untersuchungen beobachtet. Auch hier wurden die Ergebnisse maßgeblich von der Größe der LUTs (engl. "Lookup Tables") und der Struktur des Routingnetzwerks bestimmt. Gleichzeitig wurde ein viel breiterer Bereich von Architekturen mit nahezu gleicher Effizienz identifiziert. Zur weiteren Evaluation wurde eine FPGA-Architektur mit 5-LUTs und 8 Logikelementen ausgewählt. Die Performance des ausgewählten Mikroprozessors, der auf einer erprobten Befehlssatzarchitektur aufbaut, wurde mit Ergebnissen eines 28nm Testchips abgeschätzt. Eine modifizierte Sammlung von akademischen Softwarewerkzeugen wurde verwendet, um Spezialbefehle auf die modellierte FPGA-Architektur abzubilden und eine Netzliste für die anschließende Simulation und Verifikation zu erzeugen.
Für eine Reihe unterschiedlicher Anwendungs-Benchmarks wurde eine relative Leistungssteigerung zwischen 3 und 15 gegenüber dem ursprünglichen Prozessor ermittelt. Obwohl die vorgeschlagene FPGA-Architektur vergleichsweise primitiv ist und keinerlei arithmetische Erweiterungen besitzt, musste dabei, bis auf eine Ausnahme, kein überproportionaler Anstieg der Chipfläche in Kauf genommen werden. Die gewonnenen Erkenntnisse zu den Abhängigkeiten zwischen den Architekturparametern, der entwickelte Ablauf für die Exploration und das konkrete Kostenmodell sind essenziell für weitere Verbesserungen der FPGA-Architektur. Die vorliegende Arbeit hat somit erfolgreich den Vorteil der untersuchten Systemarchitektur gezeigt und den Weg für mögliche Erweiterungen und Hardwareimplementierungen geebnet. Zusätzlich wurden eine Reihe von Optimierungen der Architektur und weitere potenzielle Forschungsansätze aufgezeigt.
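The design space exploration described in the English abstract above can be illustrated with a toy search loop that ranks (LUT size, cluster size) pairs by an area-delay product. All cost constants below are invented placeholders, not the thesis' calibrated 22nm models:

```python
# Toy exploration of logical FPGA architectures: rank (LUT size k,
# cluster size n) pairs by an area-delay product. The cost models are
# hypothetical placeholders, not the thesis' circuit-level models.

def area_per_le(k, n):
    # per-logic-element area: LUT config bits, local input pins,
    # and a fixed cluster overhead amortized over n elements
    return 2**k + 10 * (k + 1) + 100 / n

def delay(k, n):
    # fewer logic levels with larger LUTs; cluster size trades
    # local routing speed against intra-cluster wire length
    return (12 / k) * (0.5 + 0.05 * k) + 2 / n + 0.1 * n

candidates = [(k, n) for k in range(3, 8) for n in (1, 2, 4, 8, 16)]
best = min(candidates, key=lambda c: area_per_le(*c) * delay(*c))
print("best (k, n):", best)
```

With these made-up constants the loop happens to favor small LUTs; the thesis' detailed circuit implementations led to a different optimum (5-LUTs, 8 elements per cluster), which is exactly why calibrated cost models matter.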
|
1434 |
The potential benefits of combined heat and power based district energy grids. Duquette, Jean. 28 February 2017 (has links)
In this dissertation, an assessment is conducted of the potential benefits of combined heat and power (CHP) based district energy (DE) grids in energy systems of different scale having significant fossil fuel fired electrical generation capacity. Three studies are included in the research.
In the first study, the potential benefits of expanding CHP-based DE grids in a large scale energy system are investigated. The impacts of expanding wind power systems are also investigated and a comparison between these technologies is made with respect to fossil fuel utilization and CO2 emissions. A model is constructed and five scenarios are evaluated with the EnergyPLAN software taking the province of Ontario, Canada as the case study. Results show that reductions in fuel utilization and CO2 emissions of up to 8.5% and 32%, respectively, are possible when switching to an energy system comprising widespread CHP-based DE grids.
In the second study, a high temporal resolution numerical model (i.e. the SS-VTD model) is developed that is capable of rapidly calculating distribution losses in small scale variable flow DE grids with low error and computational intensity. The SS-VTD model is validated by comparing simulated temperature data with measured temperature data from an existing network. The Saanich DE grid, located near Victoria, Canada, is used as the case study for validation.
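Validation of this kind usually reduces to error statistics between the simulated and measured series; a minimal sketch with invented temperatures (not the Saanich measurements):

```python
# Compare simulated vs. measured network supply temperatures (°C).
# The five values below are made up for illustration only.
import math

measured  = [71.2, 70.8, 69.9, 70.4, 71.0]
simulated = [71.0, 70.9, 70.1, 70.2, 71.3]

n = len(measured)
rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
bias = sum(s - m for m, s in zip(measured, simulated)) / n
print(f"RMSE = {rmse:.2f} °C, bias = {bias:+.2f} °C")
```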
In the third study, the potential benefits of integrating high penetrations of renewable energy via a power-to-heat plant in a small scale CHP-based DE grid are investigated. The impacts of switching to a CHP-based DE grid equipped with an electric boiler plant versus a conventional wave power system are compared with respect to fossil fuel utilization and CO2 emissions. The SS-VTD model is used to conduct the study. The energy system of the Hot Springs Cove community, located on the west coast of Vancouver Island, Canada, is used as the case study in the analysis. Results show that, relative to the conventional wave power system, reductions in fuel utilization and CO2 emissions of up to 47% are possible when switching to a CHP-based DE grid. / Graduate
|
1435 |
GIS-based coupled cellular automaton model to allocate irrigated agriculture land use in the High Plains Aquifer Region. Wang, Peiwen. January 1900 (has links)
Master of Landscape Architecture / Department of Landscape Architecture and Regional and Community Planning / Eric A. Bernard / The Kansas High Plains region is a key global agricultural production center (U.S.G.S., 2009). The High Plains physiography is an ideal agricultural production landscape except for the semi-arid climate. Consequently, farmers mine vast groundwater resources from the High Plains Ogallala Aquifer formations to augment precipitation for crop production. A growing global population, current policy and subsidy programs, and declining aquifer levels coupled with regional climatic changes call into question both the short-term and long-term resilience of this agrarian landscape and its food and water security.
This project proposes a means to simulate future irrigated agriculture land use and crop cover patterns in the Kansas High Plains Aquifer region based on coupled modeling results from ongoing research at Kansas State University. A Cellular Automata (CA) modeling framework is used to simulate potential land use distribution, based on coupled modeling results from groundwater, economic, and crop models. The CA approach considers existing infrastructure resources, industrial and commercial systems, existing land use patterns, and suitability modeling results for agricultural production. The results of the distribution of irrigated land produced from the CA model provide necessary variable inputs for the next temporal coupled modeling iteration. For example, the groundwater model estimates water availability in saturated thickness and depth to water. The economic model projects which crops will be grown based on water availability and commodity prices at a county scale. The crop model estimates potential yield of a crop under specific soil, climate and growing conditions which further informs the economic model providing an estimate of profit, which informs regional economic and population models.
Integrating the CA model into the coupled modeling system provides a key linkage to simulate spatial patterns of irrigated land use and crop type land cover based on coupled model results. Implementing the CA model in GIS offers visualization of the coupled model components and results as well as of the CA model's land use and land cover. The project aims to afford decision-makers, including farmers, the ability to use actual landscape data and the developed coupled modeling framework to strategically inform decisions supporting long-term resiliency.
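The neighborhood-driven allocation step described above can be sketched as a toy CA update on a small grid; the weights, threshold, and suitability scores are invented, and the real model additionally consumes groundwater, economic, and crop-model inputs:

```python
# Minimal CA-style allocation sketch (not the thesis model): a cell
# tends to stay/become irrigated when its suitability score is high
# and many neighbors are already irrigated. 1 = irrigated, 0 = dryland.
import random

random.seed(1)
W = H = 10
grid = [[random.random() < 0.4 for x in range(W)] for y in range(H)]
suitability = [[random.random() for x in range(W)] for y in range(H)]

def neighbors(g, x, y):
    # count irrigated cells in the Moore neighborhood of (x, y)
    return sum(g[j][i]
               for i in range(max(0, x - 1), min(W, x + 2))
               for j in range(max(0, y - 1), min(H, y + 2))
               if (i, j) != (x, y))

def step(g):
    new = [[False] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            score = 0.5 * suitability[y][x] + 0.5 * neighbors(g, x, y) / 8
            new[y][x] = score > 0.45          # threshold is arbitrary
    return new

grid = step(grid)
print("irrigated cells:", sum(map(sum, grid)))
```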
|
1436 |
Modélisation de la chlorophylle de surface du lagon de Nouvelle Calédonie comme indicateur de l'état de santé de zones récifales côtières. Fuchs, Rosalie. 29 March 2013 (has links)
Devant l'intérêt croissant pour l'environnement et la conservation de la biodiversité, comprendre les principaux mécanismes des cycles biogéochimiques ayant lieu dans les écosystèmes coralliens et lagonaires est une priorité. Un modèle 3D couplé physique-biogéochimique a été mis en place sur le lagon de Nouvelle-Calédonie (NC) : un 'hot spot' de biodiversité sous l'influence de divers forçages d'origines naturelles et anthropiques. Les interactions terre-lagon ont été abordées à travers l'étude d'un événement extrême La Niña (2008) qui cause de fortes précipitations, amenant d'importants apports dans le lagon. Les résultats du modèle fournissent une vue synoptique de la réponse biogéochimique-physique du lagon, mettant en évidence que la totalité du lagon fut impactée par les apports des rivières et un hydrodynamisme plus actif, où les concentrations en chlorophylle-a ont été doublées. L'interaction complexe océan-lagon a été abordée à travers la modélisation des processus d'upwelling du Sud Ouest (SO) de la NC. Quatre étés australs ont été simulés, mettant en évidence l'importance des processus d'upwelling qui représentent un important forçage de la production primaire au SO de la NC. Une analyse lagrangienne du transport a montré que les eaux issues de l'upwelling peuvent atteindre le lagon SO sous certaines conditions, un phénomène pouvant avoir des conséquences sur le recrutement larvaire et l'enrichissement du lagon. Le modèle 3D couplé est un outil robuste pour l'étude de cet environnement très variable et complexe. Il peut représenter une aide à la décision des managers ainsi qu'un support d'analyse et de planification d'échantillonnage aux scientifiques. / In view of increasing environmental awareness and biodiversity conservation, understanding the main forcing mechanisms driving biogeochemical cycles in coral reefs and lagoon coastal areas is a priority.
We used a 3D coupled 'on-line' physical-biogeochemical model of the New Caledonia lagoon: a hot spot of biodiversity under several forcings of climatic and human origin. Interactions between land and lagoon were investigated through the study of an extreme La Niña event (2008) that caused heavy rainfalls and large organic and inorganic inputs into the lagoon. Model results provided a synoptic view of the lagoon's biogeochemical-physical response, highlighting that the whole lagoon was impacted by river inputs and stronger hydrodynamics, where the chlorophyll-a concentration almost doubled. The complex interaction between the ocean and the lagoon was investigated through the modeling of the South Western (SW) wind-driven upwelling. Four austral summers (2005-2008) were simulated and the results were found to be in good agreement with measured data reported in previous publications, highlighting that upwelling processes represent strong drivers of the primary production in the SW of NC. A Lagrangian transport analysis showed that oceanic upwelled waters were able to reach the South West lagoon under certain conditions, representing an important issue for larvae recruitment and lagoon enrichment. The 3D coupled on-line biogeochemical-physical model proved to be a robust tool to study such a complex and highly variable environment. It could support decision makers in managing coastal areas as well as scientists in planning sampling strategies or analysing cruise data.
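A Lagrangian transport analysis of the kind mentioned above boils down to integrating particle positions through a velocity field; a toy sketch with an idealized, invented steady 2-D field (not the model's hydrodynamics):

```python
# Toy Lagrangian tracking sketch (illustrative only): advect a particle
# with forward-Euler steps through a hypothetical steady velocity field.

def velocity(x, y):
    # idealized cross-shore drift plus an alongshore component that
    # grows with offshore distance x (m/s); made-up field
    return 0.05, 0.02 * x

def advect(x, y, hours, dt=3600.0):
    # positions in km, one Euler step per hour
    for _ in range(int(hours)):
        u, v = velocity(x, y)
        x += u * dt / 1000.0
        y += v * dt / 1000.0
    return x, y

x, y = advect(0.0, 0.0, hours=48)
print(f"after 48 h: x = {x:.1f} km, y = {y:.1f} km")
```

A production analysis would use model velocity fields, higher-order time stepping, and many seeded particles, but the advection loop has this shape.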
|
1437 |
[en] DEVELOPMENT OF CAPILLARY ELECTROPHORESIS BASED METHODS WITH DIFFERENT DETECTION APPROACHES FOR DETERMINATION OF ORGANOTINS, STROBILURINS AND AMINOGLYCOSIDES / [pt] DESENVOLVIMENTO DE MÉTODOS BASEADOS NA ELETROFORESE CAPILAR COM DIFERENTES ABORDAGENS DE DETECÇÃO PARA DETERMINAÇÃO DE ORGANOESTANHOS, ESTROBILURINAS E AMINOGLICOSÍDEOS. CABRINI FERRAZ DE SOUZA. 02 July 2014 (has links)
[pt] Neste trabalho, métodos baseados em diferentes abordagens em eletroforese capilar (CE) foram propostos. No caso da determinação de compostos organoestanhos ou OTs (difenilestanho e monofenilestanho) em fluidos biológicos, foi usada a abordagem de eletroforese capilar por zona (CZE)
hifenada com a espectrometria de massas (do tipo quadrupolo) com fonte de plasma indutivamente acoplado (CE-ICP-MS). As condições de análise foram estudadas no modo univariado visando otimizar a composição da solução eletrolítica (tampão acetato 5,0 mmol L(-1), pH 2,8) e obter os parâmetros instrumentais (45 °C, +30 kV e 30 s de tempo de introdução hidrodinâmica de amostra). A solução de complementação foi uma solução aquosa 5,0 mmol L(-1) de NH4NO3 contendo 10 por cento de metanol em volume e 1,0 µg L(-1) de Cs+, com pH ajustado para 2,8 com tampão acetato. A vazão dessa solução foi mantida em 40 µL min(-1). Os OTs foram diluídos em solução de metanol:tampão acetato de sódio 50:50 por cento v/v ou apenas em tampão acetato de sódio pH 2,8. As condições de detecção do ICP-MS foram ajustadas em 1200 W, 15 L min(-1) de vazão de argônio para formação do plasma e 1 L min(-1) de vazão de argônio auxiliar. A vazão de argônio do nebulizador foi ajustada diariamente. Os isótopos de estanho 120Sn e 118Sn foram monitorados, assim como o 133Cs+ para controlar a eficiência e estabilidade do processo de nebulização. A resposta linear do método ficou entre 0,050 a 2,0 mg L(-1) de Sn (0,42 a 17 µmol L(-1)). Os limites de detecção (LOD) e de quantificação (LOQ) em termos de Sn foram de 15 µg L(-1) (0,13 µmol L(-1)) e 50 µg L(-1) (0,42 µmol L(-1)), calculados utilizando a menor concentração dos picos dos analitos que podem ser diferenciados do sinal de fundo. A repetibilidade para o tempo de migração e área dos picos ficou próxima a 5 por cento. O método foi aplicado na análise de urina, sangue total e plasma fortificados com os OTs. Recuperações entre 75 e 95 por cento foram obtidas.
No caso da determinação de sete pesticidas da classe das estrobilurinas (azoxistrobina, dimoxistrobina, fluoxastrobina, picoxistrobina, piraclostrobina, trifloxistrobina e kresoxim-metil) em sopas infantis, foi usada a cromatografia eletrocinética capilar micelar (MEKC) com detecção fotométrica (no UV) com capilar de caminho óptico estendido. Um estudo multivariado, usando um planejamento Box-Behnken 3³, indicou que a melhor separação para os pesticidas foi com solução aquosa de eletrólito composta por tampão tetraborato de sódio (5,1 mmol L(-1), pH 9,0) contendo 51 mmol L(-1) de dodecil sulfato de sódio (SDS) e 24 por cento de acetonitrila (ACN) em volume. As condições instrumentais foram 25 °C e +30 kV de diferença de potencial aplicada, 45 s de tempo de introdução hidrodinâmica de amostra e detecção em 210 nm. Para aumentar o poder de detecção, foi usada a concentração dos analitos no capilar. Para tal, as soluções de padrões e amostras foram dissolvidas em solução tampão tetraborato de sódio 45 mmol L(-1):acetonitrila 80:20 por cento v/v. As curvas analíticas apresentaram comportamento linear e os valores de LOD ficaram entre 7,0 µg L(-1) ou 18 nmol L(-1) (piraclostrobina) a 15 µg L(-1) ou 33 nmol L(-1) (fluoxastrobina). Os valores de LOQ ficaram entre 21 µg L(-1) ou 54 nmol L(-1) (piraclostrobina) a 45 µg L(-1) ou 98 nmol L(-1) (fluoxastrobina). A repetibilidade ficou entre 1,7 a 7,9 por cento para a área de pico e entre 0,25 a 0,71 por cento para o tempo de migração. A precisão intermediária, avaliada com análises realizadas em diferentes dias, apresentou valores entre 1,3 a 5,3 por cento para a área de pico e entre 0,06 a 0,90 por cento para o tempo de migração. O método foi aplicado na análise de sopas prontas infantis fortificadas com as estrobilurinas. Os pesticidas foram extraídos aplicando o método QuEChERS com ajuste de pH com tampão acetato e limpeza com extração em fase sólida dispersiva. Os resultados das análises obtidos com um método cromatográfico adaptado da literatura foram estatisticamente iguais aos alcançados com o método proposto. A CZE foi o modo de separação escolhido para mostrar o potencial da determinação indireta de aminoglicosídeos com medição de fluorescência de pontos quânticos (excitação com laser de diodo em 410 nm) amplificada na presença dos analitos. A fotoluminescência dos pontos quânticos (nanopartículas de CdTe modificadas com ácido tioglicólico monodispersas em solução) foi mais intensa em solução tampão (pH 8,0) contendo entre 5 e 10
mmol L(-1) de tetraborato de sódio. A interação no capilar entre aminoglicosídeos (neomicina e canamicina) e os pontos quânticos provocou aumento de fotoluminescência dependente do pH do meio (indício de interação de natureza eletrostática). Alguns parâmetros de mérito foram avaliados com uma faixa linear curta (0,1 a 1,0 µmol L(-1) para canamicina e 0,03 a 0,5 µmol L(-1) para neomicina). Os valores mínimos detectados de 0,1 µmol L(-1) ou 58 µg L(-1) (canamicina) e 0,03 µmol L(-1) ou 27 µg L(-1) (neomicina) mostram que essa é uma abordagem interessante para a determinação sensível de aminoglicosídeos. / [en] In this work, analytical methods based on different approaches using capillary electrophoresis (CE) have been proposed. For the determination of organotins or OTs (diphenyltin and monophenyltin) in biological fluids, capillary zone electrophoresis (CZE) hyphenated with inductively coupled plasma mass spectrometry (CE-ICP-MS) was applied. The conditions for the analysis were optimized in a univariate way, aiming to find the conditions for the electrolyte solution (acetate buffer, 5.0 mmol L(-1), pH 2.8) and the instrumental parameters (45 °C, +30 kV and 30 s for the hydrodynamic introduction of the sample). A complementary solution was composed of NH4NO3 5.0 mmol L(-1), 10 per cent v/v of methanol and 1.0 µg L(-1) of Cs+ in acetate buffer with the pH adjusted to 2.8. The flow of this solution was set to 40 µL min(-1). The OTs were diluted either in a methanol:acetate buffer 50:50 per cent v/v solution or only in sodium acetate buffer at pH 2.8. The conditions for detection by ICP-MS were set to 1200 W, 15 L min(-1) for the Ar plasma flow and 1.0 L min(-1) for the auxiliary Ar. The nebulizer Ar flow was adjusted daily. The monitored tin isotopes were 120Sn and 118Sn. The isotope 133Cs was also monitored in order to control the efficiency and stability of the nebulization.
The method presented a linear response between 0.05 and 2.0 mg L(-1) (0.42 to 17 µmol L(-1)) for Sn. The values for the limit of detection (LOD) and the limit of quantification (LOQ) for Sn were 15 µg L(-1) (0.13 µmol L(-1)) and 50 µg L(-1) (0.42 µmol L(-1)), calculated based on the lowest concentration of the analyte peaks that can be differentiated from the background signal. The repeatability for migration time and peak area was approximately 5 per cent. The method was applied in the analysis of organotin-fortified blood and urine samples with recoveries between 75 and 95 per cent. In the case of the determination of seven strobilurin class pesticides (azoxystrobin, dimoxystrobin, fluoxastrobin, picoxystrobin, pyraclostrobin, trifloxystrobin and kresoxim-methyl) in baby food (vegetable and fruit soups), micellar electrokinetic capillary chromatography (MEKC) was used with photometric detection (UV) in a capillary with an extended optical path. A multivariate study, with a 3³ Box-Behnken design, indicated the best composition for the electrolytic solution to separate the seven pesticides: a sodium tetraborate buffer (5.1 mmol L(-1), pH 9.0) solution containing 51 mmol L(-1) sodium dodecyl sulfate and acetonitrile (24 per cent in volume). The instrumental conditions were 25 °C, +30 kV of applied voltage, 45 s for the hydrodynamic introduction of the sample and detection at 210 nm. To increase the detection power, in-capillary concentration of the analytes was performed using the Normal Stacking Mode. For this purpose, the solutions of standards and samples were prepared in a 45 mmol L(-1) sodium tetraborate buffer solution:acetonitrile 80:20 per cent v/v. The analytical curves presented a linear behavior and the LOD values ranged from 7.0 µg L(-1) or 18 nmol L(-1) (pyraclostrobin) to 15 µg L(-1) or 33 nmol L(-1) (fluoxastrobin). The LOQ values ranged from 21 µg L(-1) or 54 nmol L(-1) (pyraclostrobin) to 45 µg L(-1) or 98 nmol L(-1) (fluoxastrobin).
The repeatability was between 1.7 and 7.9 per cent for the peak area and between 0.25 and 0.71 per cent for the migration time. The intermediate precision, evaluated from analyses performed on different days, was between 1.3 and 5.3 per cent for the peak area and between 0.06 and 0.90 per cent for the migration time. The method was applied in the analysis of baby food spiked with strobilurins. The pesticides were extracted using the QuEChERS method with pH adjustment with acetate buffer and clean-up using dispersive solid phase extraction. The analysis results were statistically identical to those obtained with a chromatographic method adapted from the literature. The CZE separation mode was chosen to evaluate the potential of the indirect determination of aminoglycosides through the amplified photoluminescence from quantum dots (excitation with a laser diode at 410 nm) in the presence of the analytes. The photoluminescence from the quantum dots (monodispersed CdTe nanoparticles modified with thioglycolic acid) was more intense in buffer solution (pH 8.0) containing between 5 and 10 mmol L(-1) sodium tetraborate. The interaction between aminoglycosides (kanamycin and neomycin) and quantum dots inside the capillary caused an increase of photoluminescence in a pH-dependent way (indicating the electrostatic nature of the interaction). A few figures of merit were evaluated with a short linear range (0.1 to 1.0 µmol L(-1) for kanamycin and 0.03 to 0.5 µmol L(-1) for neomycin). The minimum values detected were 0.1 µmol L(-1) or 58 µg L(-1) (kanamycin) and 0.03 µmol L(-1) or 27 µg L(-1) (neomycin), showing that the proposed approach can be used to detect aminoglycosides in a relatively sensitive way.
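For comparison with the peak-based LOD/LOQ estimates above, a common alternative is the 3.3σ/slope (LOD) and 10σ/slope (LOQ) criterion from a calibration curve; a sketch with an invented calibration (not the thesis data):

```python
# LOD/LOQ from a least-squares calibration line: 3.3*sigma/slope and
# 10*sigma/slope, with sigma taken as the residual standard deviation.
# The calibration points below are made up for illustration.

conc   = [0.0, 0.5, 1.0, 1.5, 2.0]        # mg/L
signal = [0.02, 0.51, 1.03, 1.49, 2.02]   # arbitrary detector units

n = len(conc)
mx, my = sum(conc) / n, sum(signal) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx
resid = [y - (slope * x + intercept) for x, y in zip(conc, signal)]
sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5

lod = 3.3 * sigma / slope
loq = 10 * sigma / slope
print(f"LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")
```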
|
1438 |
Modélisation multi-physique en génie électrique. Application au couplage magnéto-thermo-mécanique / Multiphysics modeling in electrical engineering. Application to a magneto-thermo-mechanical model. Journeaux, Antoine. 18 November 2013 (links)
Cette thèse aborde la problématique de la modélisation multiphysique en génie électrique, avec une application à l’étude des vibrations d’origine électromagnétique des cages de développantes. Cette étude comporte quatre parties : la construction de la densité de courant, le calcul des forces locales, le transfert de solutions entre maillages et la résolution des problèmes couplés. Un premier enjeu est de correctement représenter les courants, cette opération est effectuée en deux étapes : la construction de la densité de courant et l’annulation de la divergence. Si des structures complexes sont utilisées, l’imposition du courant ne peut pas toujours être réalisée à l’aide de méthodes analytiques. Une méthode basée sur une résolution électrocinétique ainsi qu’une méthode purement géométrique sont testées. Cette dernière donne des résultats plus proches de la densité de courant réelle. Parmi les nombreuses méthodes de calcul de forces, les méthodes des travaux virtuels et des forces de Laplace, considérées par la littérature comme les plus adaptées au calcul des forces locales, ont été étudiées. Nos travaux ont montré que bien que les forces de Laplace sont particulièrement précises, elles ne sont pas valables si la perméabilité n’est plus homogène. Ainsi, la méthode des travaux virtuels, applicable de manière universelle, est préférée. Afin de modéliser des problèmes multi-physiques complexes à l’aide de plusieurs codes de calculs dédiés, des méthodes de transferts entre maillages non conformes ont été développées. Les procédures d’interpolations, les méthodes localement conservatives et les projections orthogonales sont comparées. Les méthodes d’interpolations sont réputées rapides mais très diffusives tandis que les méthodes de projections sont considérées comme les plus précises. La méthode localement conservative peut être vue comme produisant des résultats comparables aux méthodes de projections, mais évite l’assemblage et la résolution de systèmes linéaires. 
La modélisation des problèmes multi-physiques est abordée à l’aide des méthodes de transfert de solutions. Pour une classe de problème donnée, l’assemblage d’un schéma de couplage n’est pas unique. Des tests sur des cas analytiques sont réalisés afin de déterminer, pour plusieurs types de couplages, les stratégies les plus appropriées. Ces travaux ont enfin permis une application à la modélisation magnéto-mécanique des cages de développantes. / The modeling of multi-physics problems in electrical engineering is presented, with an application to the numerical computation of vibrations within the end windings of large turbo-generators. This study is divided into four parts: the imposition of the current density, the computation of local forces, the transfer of data between disconnected meshes, and the computation of multi-physics problems using weak coupling. Firstly, the representation of the current density within numerical models is presented. The process is decomposed into two stages: the construction of an initial current density, and the derivation of a divergence-free field from it. For complex geometries, analytical methods cannot always be used, so a method based on an electrokinetic problem and a purely geometrical method are tested. The geometrical method produces results closer to the real current density. Methods to compute forces are numerous; following the recommendations of the literature, this study focuses on the virtual work principle and the Laplace force. The Laplace force is highly accurate but applicable only where the permeability is uniform. The virtual work principle is therefore preferred, as it is the most general way to compute local forces. Mesh-to-mesh data transfer methods are then developed to compute multi-physics models using multiple meshes adapted to the subproblems and multiple dedicated computational codes. 
The interpolation method, a locally conservative projection, and an orthogonal projection are compared. The interpolation method is fast but highly diffusive, while orthogonal projections are the most accurate; the locally conservative method produces results similar to the orthogonal projection but avoids the assembly and solution of linear systems. The numerical computation of multi-physics problems using multiple meshes and projections is then presented. For a given class of problems, however, the coupling scheme is not unique; analytical test cases are used to determine the most accurate scheme for each class. Finally, a numerical computation applied to the end-winding structure is presented.
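The contrast drawn above between interpolation and conservative transfer can be illustrated in one dimension. The sketch below uses piecewise-constant (cell-wise) data on non-matching meshes, a simplifying assumption chosen to make the conservation property explicit; it is not the finite-element projection machinery of the thesis.

```python
import numpy as np

def transfer_interpolation(x_src, f_src, x_tgt):
    # Nodal interpolation: simply sample the source field at the target nodes.
    # Fast, but nothing guarantees that integral quantities are preserved.
    return np.interp(x_tgt, x_src, f_src)

def transfer_conservative(edges_src, f_src, edges_tgt):
    # Overlap-weighted transfer of cell-wise (piecewise-constant) data:
    # each target cell averages the source cells it overlaps, weighted by
    # the overlap length, so the integral of f over the domain is preserved.
    f_tgt = np.zeros(len(edges_tgt) - 1)
    for i in range(len(edges_tgt) - 1):
        a, b = edges_tgt[i], edges_tgt[i + 1]
        acc = 0.0
        for j in range(len(edges_src) - 1):
            lo = max(a, edges_src[j])
            hi = min(b, edges_src[j + 1])
            if hi > lo:                     # cells overlap
                acc += f_src[j] * (hi - lo)
        f_tgt[i] = acc / (b - a)
    return f_tgt
```

Checking `sum(f * cell_width)` before and after the conservative transfer shows the integral is preserved exactly, whatever the target mesh, which is the defining property of the locally conservative approach.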
|
1439 |
Une méthode énergétique pour les systèmes vibro-acoustiques couplés / An energy-based method for coupled vibro-acoustic systems. Stelzer, Rainer, 28 September 2012 (has links)
Ce mémoire de thèse présente le développement de la méthode « statistical modal energy distribution analysis » (SmEdA) pour des systèmes vibro-acoustiques couplés. Cette méthode de calcul est basée sur le bilan énergétique dans des sous-systèmes fermés couplés, comme une structure ou une cavité. L’interaction entre de tels systèmes est décrite par des couplages entre les modes. La version initiale de SmEdA prend en compte seulement les modes qui ont une fréquence propre dans la bande d’excitation. Le travail présenté ici étudie l’effet des modes non résonants sur la réponse et identifie les cas dans lesquels un tel effet devient important. L’introduction des modes non résonants permet d’utiliser la méthode SmEdA dans des cas d’application plus larges. En outre, une nouvelle méthode de post-traitement a été développée pour calculer des distributions d'énergie dans les sous-systèmes. Finalement, une nouvelle méthode d'approximation pour la prise en compte des modes de systèmes de grandes dimensions ou mal définis a été formulée. Toutes ces méthodes ont été comparées avec d’autres méthodes de calcul via des exemples académiques et industriels. Ainsi, la nouvelle version de SmEdA, incluant le post-traitement pour obtenir des distributions d'énergie, a été validée, et ses avantages et possibilités d'application sont montrés. / This dissertation presents the further development of the statistical modal energy distribution analysis (SmEdA) for coupled vibro-acoustic problems. This prediction method is based on the energy balance in bounded coupled subsystems, such as a structure or a cavity. The interaction between such subsystems is described by mode-to-mode coupling. The original SmEdA formulation takes into account only the modes whose eigenfrequencies lie within the excitation band. The present work investigates the effect of non-resonant modes on the response and identifies the cases in which this effect becomes important. 
The inclusion of non-resonant modes has thus resulted in a new SmEdA formulation that can be used in a wider range of applications. Furthermore, a new post-processing method has been developed to predict the energy distribution within subsystems. Finally, a novel approximation method for handling the modes of large or ill-defined systems has been formulated. All these methods have been compared to other prediction methods on academic and industrial examples. In this way, the extended SmEdA approach, including the post-processing for energy distribution, has been validated, and its advantages and possible applications have been demonstrated.
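The mode-to-mode energy balance underlying SmEdA can be sketched as a small linear system: for each mode, injected power equals dissipated power plus the power exchanged with the modes it is coupled to. All coefficients below (damping factors, coupling factors, mode counts) are made-up illustrative values, not data from the thesis.

```python
import numpy as np

# Toy balance: 2 structure modes (indices 0-1) coupled to 3 cavity modes
# (indices 2-4).  All coefficients are hypothetical, for illustration only.
loss = np.array([1.0, 1.2, 0.8, 0.9, 1.1])   # modal dissipation factors
beta = np.zeros((5, 5))                       # mode-to-mode coupling factors
beta[0, 2] = beta[2, 0] = 0.30
beta[0, 3] = beta[3, 0] = 0.10
beta[1, 3] = beta[3, 1] = 0.20
beta[1, 4] = beta[4, 1] = 0.25

# Balance for each mode p:  P_p = loss_p * E_p + sum_q beta_pq * (E_p - E_q)
A = np.diag(loss + beta.sum(axis=1)) - beta
P = np.array([1.0, 1.0, 0.0, 0.0, 0.0])      # power injected into the structure
E = np.linalg.solve(A, P)                    # modal energies
```

Because the coupling terms are antisymmetric in the exchanged power, summing the balance over all modes shows that the total dissipated power equals the total injected power, a useful sanity check on any such model.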
|
1440 |
Etude des codes en graphes pour le stockage de données / Study of Sparse-Graph Codes for Distributed Storage Systems. Jule, Alan, 07 March 2014 (has links)
Depuis deux décennies, la révolution technologique est avant tout numérique, entraînant une forte croissance de la quantité de données à stocker. Le rythme de cette croissance est trop important pour les solutions de stockage matérielles, provoquant une augmentation du coût de l'octet. Il est donc nécessaire d'améliorer les solutions de stockage, ce qui passera par une augmentation de la taille des réseaux et par la diminution des copies de sauvegarde dans les centres de stockage de données. L'objet de cette thèse est d'étudier l'utilisation des codes en graphe dans les réseaux de stockage de données. Nous proposons un nouvel algorithme combinant construction de codes en graphe et allocation des noeuds de ce code sur le réseau. Cet algorithme permet d'atteindre les hautes performances des codes MDS en termes de rapport entre le nombre de disques de parité et le nombre de défaillances simultanées pouvant être corrigées sans pertes (noté R). Il bénéficie également des propriétés de faible complexité des codes en graphe pour l'encodage et la reconstruction des données. De plus, nous présentons une étude des codes LDPC spatialement couplés permettant d'anticiper le comportement de leur décodage pour les applications de stockage de données. Il est généralement nécessaire de faire des compromis entre différents paramètres lors du choix du code correcteur d'effacement. Afin que ce choix se fasse avec un maximum de connaissances, nous avons réalisé deux études théoriques comparatives pour compléter l'état de l'art. La première étude s'intéresse à la complexité de la mise à jour des données dans un réseau dynamique établi et détermine si les codes linéaires utilisés ont une complexité de mise à jour optimale. Dans notre seconde étude, nous nous sommes intéressés à l'impact sur la charge du réseau de la modification des paramètres du code correcteur utilisé. 
Cette opération peut être réalisée lors d'un changement du statut du fichier (passage d'un caractère hot à cold, par exemple) ou lors de la modification de la taille du réseau. L'ensemble de ces études, associé au nouvel algorithme de construction et d'allocation des codes en graphe, pourrait mener à la construction de réseaux de stockage dynamiques et flexibles, avec des algorithmes d'encodage et de décodage peu complexes. / For two decades, the digital revolution has been gathering pace. The spread of digital solutions, together with their improving quality, has driven strong growth in the amount of data to be stored. The cost per byte shows that hardware storage solutions cannot keep up with this expansion on their own, so data storage solutions need deep improvement. This can be achieved by increasing the storage network size and by reducing data duplication in the data center. In this thesis, we introduce a new algorithm that combines sparse-graph code construction and node allocation. This algorithm can achieve the high performance of MDS codes in terms of the ratio R between the number of parity disks and the number of simultaneous failures that can be recovered without loss, while benefiting from the low encoding and decoding complexity of sparse-graph codes. It thereby makes it possible to generalize coding in the data center and to reduce the number of copies of the original data. We also study Spatially-Coupled LDPC (SC-LDPC) codes, which are known to have optimal asymptotic performance over the binary erasure channel, in order to anticipate their decoding behavior in distributed storage applications. It is usually necessary to compromise between different parameters when choosing the erasure code for a distributed storage system. To complete the state of the art, we include two theoretical studies. The first deals with the computational complexity of data updates and determines whether the linear codes used for data storage are update-efficient. 
In the second study, we examine the impact on the network load when the code parameters are changed. This can happen when the file status changes (from hot to cold, for example) or when the size of the network is modified by adding disks. All these studies, combined with the new construction and allocation algorithm for sparse-graph codes, could lead to flexible and dynamic storage networks with low encoding and decoding complexities.
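The low-complexity erasure decoding of sparse-graph codes mentioned above can be sketched with a peeling decoder: any parity check that touches exactly one erased symbol determines that symbol, and recovered symbols may unlock further checks. The (7,4) Hamming parity-check matrix used in the check below is only a convenient small example, not a code from the thesis.

```python
import numpy as np

def peel_decode(H, received, erased):
    """Recover erased bits of a binary codeword by 'peeling': repeatedly
    apply any parity check (row of H) involving exactly one erasure.
    The values at erased positions in `received` are ignored."""
    c = received.copy()
    todo = set(erased)
    progress = True
    while todo and progress:
        progress = False
        for row in H:
            unknown = [j for j in np.flatnonzero(row) if j in todo]
            if len(unknown) == 1:            # solvable check: one unknown bit
                j = unknown[0]
                others = [k for k in np.flatnonzero(row) if k != j]
                c[j] = int(np.bitwise_xor.reduce(c[others])) if others else 0
                todo.discard(j)
                progress = True
    return c, not todo                        # (codeword, decoding succeeded?)
```

Each peeling step costs only the weight of one check row, which is why sparse-graph codes keep reconstruction cheap; decoding fails gracefully (returns `False`) when every remaining check involves two or more erasures, the stopping-set situation studied for SC-LDPC codes.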
|