21

ANALYSIS OF NITROGEN DYNAMICS IN SOIL COLUMNS TO EVALUATE NITRATE POLLUTION DUE TO RECLAIMED WASTEWATER IRRIGATION

DAYANTHI, WANNIARACHCHI KANKANAMGE CHANDRANI NEETHA 25 September 2007 (has links)
Degree-granting institution: Kyoto University; Degree: Doctor of Engineering; Date conferred: 2007-09-25; Degree type: course doctorate (new system); Diploma no. 甲第13380号 / 工博第2851号; Call no. 新制||工||1419 (University Library); Accession no. 25536 / UT51-2007-Q781 / Department of Urban and Environmental Engineering, Graduate School of Engineering, Kyoto University / Examining committee: Prof. Hiroaki Tanaka (chair), Prof. Shigeo Fujii, Prof. Yoshihisa Shimizu / Qualified under Article 4, Paragraph 1 of the Degree Regulations
22

Titanium dioxide films prepared by sol-gel/laser-induced technique for inactivation of bacteria

Joya, Yasir Faheem January 2011 (has links)
In the present research, a novel method, the sol-gel/laser-induced technique (SGLIT), has been developed to generate nano-structured TiO2-based films. Films based on unloaded (pure) TiO2, Ce-TiO2, W-TiO2 and Ag-TiO2 were investigated in an attempt to stabilise the formation of anatase and consequently enhance photo-catalytic and anti-bacterial activities. TiO2 precursors loaded with Ce2+, W6+ and Ag2+ ions (Ce-TiO2, W-TiO2 and Ag-TiO2) were separately prepared by the sol-gel method and spin-coated onto microscope glass slides. A pulsed KrF excimer laser with a wavelength of 248 nm and a pulse width of 13-20 ns was employed to irradiate the sol-gel-prepared films at various operating parameters, in terms of laser fluence, number of laser pulses and repetition rate. The work focused on the microstructural characterisation of films prepared by both SGLIT and furnace sintering, covering crystallographic structure, phase transformation, crystallite size, surface morphology, film thickness and optical properties, by means of Raman spectroscopy, XRD, FEG-SEM/EDX, TEM/HR-TEM/EDX, AFM and UV-Vis spectroscopy. The results showed that nano-crystallisation of the films was achieved after laser irradiation, with a controllable amount of anatase formation. These coatings presented a unique surface morphology with meso-porosity and much enlarged surface areas compared with films prepared by the furnace sintering technique. The addition of Ce and Ag stabilised the anatase structure during laser irradiation, whereas the addition of W destabilised it. The Ce-TiO2 films prepared by SGLIT exhibited an anatase structure that was stable up to 500 laser pulses at 35 mJ cm-2 fluence. In the W-TiO2 films, anatase formed after only 10 laser pulses at 65-75 mJ cm-2 fluence; when a higher number of laser pulses, a higher fluence or a higher W6+ loading was chosen, the rutile structure started to form. The Ag-TiO2 nano-composite films prepared by SGLIT, on the other hand, retained anatase up to 200 laser pulses at 85 mJ cm-2 fluence. On average, an anatase crystallite size of about 38 nm was achieved for both the W-TiO2 and Ag-TiO2 films prepared by SGLIT, whereas the furnace-sintered W-TiO2 and Ag-TiO2 films produced anatase crystallite sizes of 49.4 nm and 29.8 nm, respectively. Another achievement of the present research is the development of a single-step laser irradiation technique to generate an Ag-TiO2 nano-composite film on a glass substrate: a pulsed laser beam produced hexagonal Ag nanoparticles along with the crystallisation of an anatase-based nano-structured TiO2 film, accomplished in only 1 µs. The films prepared by SGLIT displayed a higher photo-absorption than their furnace-sintered counterparts owing to their unique surface features and higher surface roughness. Overall, an enhanced bactericidal activity against E. coli cells under UV light was demonstrated by each of the W-TiO2 films, except 1W-TiO2, compared to the furnace-sintered films. E. coli cells did not survive on the W-TiO2 films prepared by SGLIT after 80 minutes under UV (365 nm) light, whereas E. coli cells still survived on the surface of furnace-sintered W-TiO2 films under the same conditions. Ag-TiO2 nano-composite films prepared by SGLIT demonstrated an enhanced anti-bacterial activity against E. coli compared to the conventionally made Ag-TiO2 films.
No bacteria survived on the Ag-TiO2 films prepared with 50 laser pulses at 85 mJ cm-2 fluence, whereas E. coli colonies always survived on the furnace-sintered Ag-TiO2 films under UV, natural light and dark conditions.
23

Energy and Performance Models Enabling Design Space Exploration using Domain Specific Languages

Umar, Mariam 25 May 2018 (has links)
With the advent of exascale architectures, maximizing performance while maintaining energy consumption within reasonable limits has become one of the most critical design constraints. This constraint is particularly significant in light of the power budget of 20 MW set by the U.S. Department of Energy for exascale supercomputing facilities. Therefore, understanding an application's characteristics, execution pattern, energy footprint, and the interactions of such aspects is critical to improving the application's performance as well as its utilization of the underlying resources. With conventional methods of analyzing performance and energy consumption trends, scientists are forced to limit themselves to a manageable number of design parameters. While these modeling techniques have catered to the needs of current high-performance computing systems, the complexity and scale of exascale systems demand that large-scale design-space-exploration techniques be developed to enable comprehensive analysis and evaluation. In this dissertation we present research on performance and energy modeling of current high-performance computing and future exascale systems. Our thesis is focused on the design-space exploration of current and future architectures, in terms of their reconfigurability, an application's sensitivity to hardware characteristics (e.g., system clock, memory bandwidth), an application's execution patterns and communication behavior, and its utilization of resources. Our research is aimed at understanding the methods by which we may maximize the performance of exascale systems, minimize energy consumption, and understand the trade-offs between the two. We use analytical, statistical, and machine-learning approaches to develop accurate, portable, and scalable performance and energy models. We develop application and machine abstractions using Aspen (a domain-specific language) to implement and evaluate our modeling techniques. As part of our research we develop and evaluate system-level performance and energy-consumption models that form part of an automated modeling framework, which analyzes application signatures to evaluate the sensitivity of reconfigurable hardware components for candidate exascale proxy applications. We also develop statistical and machine-learning-based models of an application's execution patterns on heterogeneous platforms, and we propose a communication and computation modeling and mapping framework for exascale proxy architectures, which we evaluate for an exascale proxy application. These models serve as external and internal extensions to Aspen, enabling proxy exascale architecture implementations and thus facilitating design-space exploration of exascale systems. / Ph. D.
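The abstract does not give the models' equations; purely as a hedged illustration of the kind of analytical design-space exploration it describes (a Python sketch, not the dissertation's Aspen models), the snippet below sweeps hypothetical clock and memory-bandwidth settings with a roofline-style runtime estimate and a simple power model. All constants, names, and parameter values are assumptions.

```python
# Hedged sketch: a toy analytical model for design-space exploration.
# All constants are illustrative assumptions, not values from the dissertation.

def runtime_s(flops, bytes_moved, clock_ghz, mem_bw_gbs, flops_per_cycle=16):
    """Roofline-style runtime estimate: max of compute time and memory time."""
    compute_s = flops / (clock_ghz * 1e9 * flops_per_cycle)
    memory_s = bytes_moved / (mem_bw_gbs * 1e9)
    return max(compute_s, memory_s)

def energy_j(runtime, clock_ghz, p_static_w=50.0, p_dyn_w_per_ghz=30.0):
    """Simple power model: static power plus a dynamic term scaling with clock."""
    return runtime * (p_static_w + p_dyn_w_per_ghz * clock_ghz)

if __name__ == "__main__":
    flops, bytes_moved = 1e12, 4e11           # assumed proxy-app signature
    best = None
    for clock in (1.0, 1.5, 2.0, 2.5):        # candidate clocks (GHz)
        for bw in (100, 200, 400):            # candidate bandwidths (GB/s)
            t = runtime_s(flops, bytes_moved, clock, bw)
            e = energy_j(t, clock)
            if best is None or e < best[0]:
                best = (e, t, clock, bw)
            print(f"clock={clock} GHz, bw={bw} GB/s -> t={t:.3f} s, E={e:.1f} J")
    print("lowest-energy configuration:", best)
```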
24

Microvascular Heat Transfer Analysis in Carbon Fiber Composite Materials

Pierce, Matthew Ryan 12 August 2010 (has links)
No description available.
25

Fundamental studies for development of real-time model-based feedback control with model adaptation for small scale resistance spot welding

Chen, Jianzhong 02 March 2005 (has links)
No description available.
26

Deflection of concrete structures reinforced with FRP bars.

Kara, Ilker F., Ashour, Ashraf, Dundar, C. 01 1900 (has links)
This paper presents an analytical procedure based on the stiffness matrix method for deflection prediction of concrete structures reinforced with fiber reinforced polymer (FRP) bars. The variation of the flexural stiffness of cracked FRP reinforced concrete members has been evaluated using various available models for the effective moment of inertia. A reduced shear stiffness model was also employed to account for the variation of shear stiffness in cracked regions. Comparisons between results obtained from the proposed analytical procedure and experiments on simply and continuously supported FRP reinforced concrete beams show good agreement. Bottom FRP reinforcement at the midspan section has a significant effect on the reduction of FRP reinforced concrete beam deflections. The shear deformation effect was found to be more influential in continuous FRP reinforced concrete beams than in simply supported beams. The proposed analytical procedure forms the basis for the analysis of concrete frames with FRP reinforced concrete members.
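As a minimal illustration of one of the "available models for the effective moment of inertia" the abstract mentions, the sketch below evaluates a Branson-type interpolation and the flexural midspan deflection of a simply supported beam. It is not the paper's stiffness-matrix procedure, it ignores shear deformation, and all section properties and loads are assumed.

```python
# Hedged illustration: Branson-type effective moment of inertia and midspan
# deflection of a simply supported beam; all numbers are hypothetical, and the
# paper's own procedure (stiffness matrix with reduced shear stiffness) is richer.

def effective_inertia(M_a, M_cr, I_g, I_cr):
    """Branson-type interpolation between gross and cracked inertia."""
    if M_a <= M_cr:
        return I_g                      # section still uncracked
    r = (M_cr / M_a) ** 3
    return min(I_g, r * I_g + (1.0 - r) * I_cr)

def midspan_deflection(w, L, E, I_e):
    """Simply supported beam under a uniform load w, flexure only."""
    return 5.0 * w * L**4 / (384.0 * E * I_e)

# assumed example values (SI units)
w, L, E = 15e3, 4.0, 40e9               # N/m, m, Pa
I_g, I_cr, M_cr = 1.2e-3, 2.5e-4, 20e3  # m^4, m^4, N*m
M_a = w * L**2 / 8.0                     # midspan service moment
I_e = effective_inertia(M_a, M_cr, I_g, I_cr)
print(f"I_e = {I_e:.3e} m^4, deflection = {midspan_deflection(w, L, E, I_e)*1e3:.2f} mm")
```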
27

An Analytical Model for On-Chip Interconnects in Multimedia Embedded Systems

Wu, Y., Min, Geyong, Zhu, D., Yang, L.T. January 2013 (has links)
The traffic pattern has a significant impact on the performance of a network-on-chip. Many recent studies have shown that multimedia applications can be supported by on-chip interconnects. Driven by the motivation of evaluating on-chip interconnects in multimedia embedded systems, a new analytical model is proposed to investigate the performance of the fat-tree-based on-chip interconnection network under bursty multimedia traffic and nonuniform message destinations. Extensive simulation experiments are conducted to validate the accuracy of the model, which is then adopted as a cost-efficient tool to investigate the effects of bursty multimedia traffic with nonuniform destinations on network performance.
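The abstract does not reproduce the model's equations; purely as a much-simplified analogue (not the authors' model, which captures bursty traffic and nonuniform destinations), the sketch below estimates average message latency through a fat tree using an M/M/1 queue per traversed switch stage, with all parameters assumed.

```python
# Hedged sketch: a crude latency estimate for a k-level fat tree using an
# M/M/1 queue per switch stage; the paper's analytical model is far richer.

def mm1_sojourn(arrival_rate, service_rate):
    """Mean sojourn time (waiting + service) of an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

def fat_tree_latency(levels, msg_rate_per_port, service_rate, hop_delay=1e-9):
    """End-to-end latency assuming a message climbs to the top level and
    descends again (2 * levels switch traversals plus link delays)."""
    per_stage = mm1_sojourn(msg_rate_per_port, service_rate) + hop_delay
    return 2 * levels * per_stage

# assumed parameters: 3-level fat tree, 0.6e9 msgs/s offered per port,
# 1.0e9 msgs/s switch service rate, 1 ns link delay
print(f"average latency ~ {fat_tree_latency(3, 0.6e9, 1.0e9):.2e} s")
```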
28

Advancing air filtration analysis: a comprehensive approach to particle loading models

Berry, Gentry Nathaniel 08 December 2023 (has links) (PDF)
Fibrous air filters are commonly used to capture airborne particles due to their potential for a relatively high capture efficiency and low airflow resistance. Their performance characteristics make them ideal candidates in many instances, spanning a wide range from residential to sensitive industrial applications. However, as more particles are captured, the performance of the filter evolves, typically manifesting as a higher capture efficiency and a higher airflow resistance resulting from the additional particulate deposits. The prediction of fibrous filter performance has been the focus of research for many decades, resulting in numerous analytical, numerical, and empirical models. This work seeks to improve upon the state of aerosol filtration by investigating the process through which these models are developed and validated. To meet this objective, three major efforts are implemented: 1) a comprehensive literature review, 2) an aerosol and media measurement analysis focusing on instrumentation and Scanning Electron Microscope (SEM) imagery, and 3) the creation of a process to analyze and develop fibrous air filter models. A conceptual foundation is provided by the literature review, establishing the current state of fibrous filtration modeling of solid particles and identifying candidate models for implementation. The influence of the data collection and reduction methodology for particle mass loading experiments is explored, with an emphasis on the resulting effects on filtration model development. Furthermore, an automated methodology to measure the physical characteristics of high efficiency particulate air (HEPA) filtration media is investigated, completing the set of variables necessary to predict filtration performance. Finally, an algorithm is proposed to optimize and correlate model variables to collected empirical data, allowing for the improvement of model predictions by investigating model functionality and identifying limitations. Altogether, the three efforts provide a framework through which fibrous aerosol filtration models of solid particles may be developed, validated, and systematically analyzed.
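For context on the kind of relations such loading models extend, the sketch below evaluates the classical clean-filter penetration formula and a Davies-type pressure-drop estimate; the single-fiber efficiency is taken as an input rather than derived, and every value is an assumption for illustration, not a result from this work.

```python
import math

# Hedged illustration of classical clean-filter relations; particle-loading
# models describe how these evolve as deposits build up. All inputs are assumed.

def penetration(alpha, eta_fiber, thickness, d_fiber):
    """Clean-filter penetration given a single-fiber efficiency eta_fiber."""
    return math.exp(-4.0 * alpha * eta_fiber * thickness
                    / (math.pi * d_fiber * (1.0 - alpha)))

def pressure_drop(alpha, mu, face_velocity, thickness, d_fiber):
    """Davies-type clean-media pressure drop (Pa)."""
    f_alpha = 64.0 * alpha**1.5 * (1.0 + 56.0 * alpha**3)
    return f_alpha * mu * face_velocity * thickness / d_fiber**2

alpha, d_f, t = 0.05, 2e-6, 0.5e-3   # solidity, fiber diameter (m), media thickness (m)
mu, U, eta = 1.8e-5, 0.05, 0.3       # air viscosity (Pa*s), face velocity (m/s), assumed single-fiber efficiency
print(f"capture efficiency ~ {1 - penetration(alpha, eta, t, d_f):.3f}")
print(f"pressure drop ~ {pressure_drop(alpha, mu, U, t, d_f):.1f} Pa")
```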
29

Influence of laser parameters on the relativistic short electron bunches dynamics in linear accelerators based on RF-guns and development of associated diagnostics

Vinatier, Thomas 23 September 2015 (has links)
In several applications, quasi-relativistic sub-ps electron bunches are required: laser-plasma acceleration, free-electron lasers, generation of intense THz radiation, study of ultra-fast phenomena in matter, etc. The short nature of the bunches and the necessity of a high peak current for these applications imply strong space-charge forces, leading to a degradation of beam properties such as the transverse emittance and duration. The main difficulty is to characterise, model and take into account these effects. My thesis falls within this context through the study of the dynamics of, and the diagnostics related to, these short bunches, namely bunches whose rms duration is not directly measurable by an electronic method, which places the boundary at a few tens of picoseconds. Chapter 2 presents measurements of several properties of these bunches: charge, transverse emittance and duration. The originality of my work lies in the use of simple methods, from both the theoretical (at most analytical) and the technological (using only common elements of electron accelerators) points of view. These methods, better suited to less extreme bunches, nevertheless give very good results. In particular, I developed a charge-measurement method based on the light intensity emitted by a scintillating screen following its interaction with the electron beam. This method allows charges lower than 100 fC to be measured precisely, surpassing the capabilities of classical diagnostics (ICT and Faraday cup), which are limited to the picocoulomb level by electronic noise. This method is useful because short bunches are often low-charged to minimise the effects of space-charge forces; it will also be used for detector calibration, which requires low charges. I also adapted multiparametric methods to measure the transverse emittance and duration of electron bunches. These indirect methods determine these properties from measurements of other, more accessible properties: the transverse dimensions for the emittance and the energy spread for the duration. The duration measurement (3-phase method) gives very good results, determining rms durations below one picosecond with an accuracy better than 10%. The emittance measurement without taking space-charge forces into account in the modelling gives mixed results, with accuracies ranging from 20% (3-gradient method) to more than 100% (3-screen method). A significant improvement in accuracy, up to a factor of 5, can be obtained by taking the space-charge forces into account through a beam envelope equation, which constitutes the originality of my work. Chapter 3 compares the properties of short electron bunches, single or longitudinally modulated, generated by three different methods: injection of a short or longitudinally modulated laser pulse into an RF-gun; magnetic compression in a chicane; and RF compression in an accelerating structure (velocity bunching).
I showed in particular that, for equal charge, generating short bunches with a short laser pulse driving an RF-gun is disadvantageous, from the point of view of both bunch duration and transverse emittance, compared to magnetic or RF compression of an already accelerated bunch. This is explained by the stronger space-charge forces just after the beam is emitted by the photocathode. The chapter is also devoted to the development and testing of analytical models of longitudinal beam dynamics. I developed a longitudinal transfer matrix for an RF-gun, starting from the model of K. J. Kim. This model has been compared with several measurements performed at PITZ and PHIL and proved to be accurate on the energy and temporal aspects, but not on the energy spread. I also developed an analytical model of the velocity-bunching phenomenon in travelling-wave accelerating structures, starting from a simple model developed by P. Piot.
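As a minimal sketch of the multiparametric (3-gradient, i.e. quadrupole-scan) emittance measurement mentioned above, neglecting the space-charge correction that the thesis shows improves accuracy, the code below fits synthetic beam-size readings taken at several quadrupole strengths to recover the sigma matrix and the rms emittance; all beam parameters are invented for illustration.

```python
import numpy as np

# Hedged sketch of a thin-lens quadrupole-scan ("3-gradient") emittance fit,
# ignoring space charge. All numbers below are synthetic, not thesis data.

L = 1.5                      # quad-to-screen drift length (m), assumed

def transfer(inv_f):
    """Thin-lens quad of strength 1/f followed by a drift of length L."""
    return np.array([[1.0 - L * inv_f, L], [-inv_f, 1.0]])

def fit_emittance(inv_fs, sizes_sq):
    """Least-squares fit of (s11, s12, s22) from measured beam sizes squared."""
    rows = []
    for q in inv_fs:
        R = transfer(q)
        rows.append([R[0, 0]**2, 2.0 * R[0, 0] * R[0, 1], R[0, 1]**2])
    s11, s12, s22 = np.linalg.lstsq(np.array(rows), sizes_sq, rcond=None)[0]
    return np.sqrt(s11 * s22 - s12**2)       # rms emittance at the quad entrance

# synthetic "measurements" generated from an assumed sigma matrix
true_sigma = np.array([[4e-7, -2e-7], [-2e-7, 3e-7]])   # m^2, m*rad, rad^2
inv_fs = np.linspace(0.2, 1.2, 7)                        # scanned 1/f values (1/m)
meas = np.array([(transfer(q) @ true_sigma @ transfer(q).T)[0, 0] for q in inv_fs])
print(f"fitted rms emittance ~ {fit_emittance(inv_fs, meas):.3e} m*rad")
```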
30

Predictive performance analysis of workflows using mean field theory

Caro, Waldir Edison Farfán 17 April 2017 (has links)
Business processes play a very important role in industry, especially with the evolution of information technologies. Cloud computing platforms, for example, with their allocation of on-demand computing resources, enable the execution of highly requested processes. It is therefore necessary to define the execution environment of the processes in such a way that resources are used optimally and the correct functionality of the process is guaranteed. In this context, different methods have already been proposed to model business processes and analyze their quantitative and qualitative properties. There are, however, a number of challenges that may restrict the application of these methods, especially for processes in high demand (such as workflows with numerous instances) that rely on limited resources. The performance analysis of workflows with numerous instances through analytical modeling is the object of study of this work. Generally, this type of analysis uses mathematical models based on Markovian techniques (stochastic systems), which suffer from the state-space explosion problem. Mean Field Theory, however, indicates that the behavior of a stochastic system can, under certain conditions, be approximated by that of a deterministic system, avoiding the explosion of the state space. In this work we use this strategy: based on the formal definition of the deterministic approximation and its conditions of existence, we elaborate a method to represent workflows, and their resources, as ordinary differential equations that describe a deterministic system.
Once the deterministic approximation has been defined, we carry out the performance analysis on the deterministic model, verifying that the results obtained are a good approximation of the stochastic solution.
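As a minimal sketch of the deterministic (mean-field) approximation described above, and not the dissertation's own construction, the code below approximates a closed workflow of N instances alternating between a preparation phase and a processing station with C servers by ordinary differential equations, then integrates them; all rates and sizes are assumed.

```python
import numpy as np
from scipy.integrate import odeint

# Hedged sketch of a mean-field (fluid) approximation: N workflow instances
# alternate between a preparation phase (rate lam per instance) and a
# processing station with C identical servers (rate mu per server). The ODEs
# approximate the mean behaviour of the underlying Markov chain.

N, C = 200.0, 20.0          # number of instances and servers (assumed)
lam, mu = 1.0, 5.0          # preparation rate per instance, service rate per server

def mean_field(x, t):
    prep, proc = x                                  # instances preparing / at the station
    flow_in = lam * prep                            # preparation completions
    flow_out = mu * min(proc, C)                    # busy servers finishing work
    return [flow_out - flow_in, flow_in - flow_out]

t = np.linspace(0.0, 10.0, 200)
traj = odeint(mean_field, [N, 0.0], t)
print(f"steady state: preparing ~ {traj[-1, 0]:.1f}, at station ~ {traj[-1, 1]:.1f}")
```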
