  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Comparison of Standard Initial Dose and Reduced Initial Dose Regorafenib for Colorectal Cancer Patients: A Retrospective Cohort Study / 大腸がんに対するレゴラフェニブの標準開始用量と減量開始用量に関する比較:過去起点コホート研究

Nakashima, Masayuki 23 March 2021 (has links)
Kyoto University / New-system doctoral program / Doctor of Medical Science (博士(医学)) / 甲第23067号 / 医博第4694号 / 新制||医||1049 (University Library) / Department of Medicine, Graduate School of Medicine, Kyoto University / (Chief examiner) Prof. Yuichi Imanaka; Prof. Manabu Muto; Prof. Hiroshi Seno / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
272

Measurement of Thermal Diffusivities Using the Distributed Source, Finite Absorption Model

Hall, James B. 27 November 2012 (has links)
Thermal diffusivity is an important thermophysical property that quantifies the ratio of the rate at which heat is conducted through a material to the amount of energy stored in the material. The pulsed laser diffusion (PLD) method is a widely used technique for measuring the thermal diffusivities of materials. This technique is based on the fact that the diffusivity of a sample may be inferred from a measurement of the time-dependent temperature profile at a point on the surface of a sample that has been exposed to a pulse of radiant energy from a laser or flash lamp. An accepted standard approach for the PLD method is based on a simple model of a PLD measurement system. However, the standard approach rests on idealizations that are difficult to achieve in practice, so models that treat a PLD measurement system with greater fidelity are desired. The objective of this research is to develop and test a higher-fidelity model that more accurately represents the spatial and temporal variations in the input power. This higher-fidelity model is referred to as the Distributed Source, Finite Absorption (DSFA) model. The cost of the increased fidelity of the DSFA model is an increase in the complexity of inferring values of the thermal diffusivity. A new method of extracting these values from time-dependent temperature measurements, based on a genetic algorithm and on reduced order modeling, was therefore developed. The primary contribution of this thesis is a detailed discussion of the development and numerical verification of this proposed new method for measuring the thermal diffusivity of various materials. Verification of the proposed new method was conducted using numerical experiments. A detailed model of a PLD system was created using advanced engineering software, and detailed simulations, including conjugate heat transfer and solution of the full Navier-Stokes equations, were used to generate multiple numerical data sets.
These numerical data sets were then used to infer the thermal diffusivity and other properties of the sample using the proposed new method. They were also used as inputs to the standard approach. The results of this verification study show that the proposed new method is able to infer the thermal diffusivity of samples to within 4.93%, the absorption coefficient to within 10.57%, and the heat capacity to within 5.37%. Application of the standard approach to the same data sets gave much poorer estimates of the thermal diffusivity, particularly when the absorption coefficient of the material was relatively low.
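The genetic-algorithm inference step described in this abstract can be caricatured in a few lines of Python. The sketch below fits a thermal diffusivity to a synthetic rear-face temperature history, using the classical ideal-flash series solution as a stand-in forward model (the thesis's DSFA model and reduced order model are not reproduced here); the sample thickness, diffusivity range, and GA settings are all illustrative assumptions.

```python
import numpy as np

def rear_face_temp(alpha, t, L, n_terms=50):
    """Dimensionless rear-face temperature rise after an ideal instantaneous
    flash on the front face (classical series solution for a slab)."""
    n = np.arange(1, n_terms + 1)
    s = ((-1) ** n)[None, :] * np.exp(-(n[None, :] ** 2) * np.pi ** 2
                                      * alpha * t[:, None] / L ** 2)
    return 1.0 + 2.0 * s.sum(axis=1)

def ga_fit(t, data, L, pop=60, gens=80, lo=1e-6, hi=1e-4, seed=0):
    """Toy genetic algorithm: evolve candidate diffusivities [m^2/s] to
    minimize the sum-of-squares misfit against the measured history."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(lo, hi, pop)                       # initial population
    def sse(x):
        return np.sum((rear_face_temp(x, t, L) - data) ** 2)
    for _ in range(gens):
        order = np.argsort([sse(x) for x in a])
        elite = a[order[: pop // 4]]                   # selection: keep best quarter
        children = rng.choice(elite, pop - elite.size)
        children = children * rng.normal(1.0, 0.05, children.size)  # mutation
        a = np.clip(np.concatenate([elite, children]), lo, hi)
    return min(a, key=sse)
```

Fitting a synthetic curve generated with a known diffusivity recovers that value closely, which is the basic sanity check behind any such inference scheme.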
273

Analýza rozpoznání chodce řidičem vozidla / Analysis of Pedestrian Recognition by the Vehicle Driver

Šlapal, David January 2021 (has links)
This diploma thesis deals with the vehicle driver's recognition of pedestrians under different driving and light conditions. The introductory part is devoted to basic theoretical knowledge, with attention focused on safety measures to prevent accidents between pedestrians and vehicles. The main part is devoted to pedestrian visibility from the driver's point of view under reduced visibility on roads in darkness. The practical part summarizes the results of an experiment using a selected vehicle and chosen participants. During this experiment, the distances necessary for detection, recognition, and identification of the pedestrian were measured, so that the vehicle can be stopped safely in time or an obstacle on the road avoided.
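The comparison at the heart of such an experiment — is the pedestrian detected far enough away for the driver to stop? — can be sketched with the usual reaction-plus-braking model. The reaction time and deceleration below are illustrative textbook values, not the thesis's measurements.

```python
def stopping_distance(speed_kmh, reaction_s=1.0, decel_ms2=6.5):
    """Total stopping distance [m]: reaction distance plus braking distance.
    reaction_s and decel_ms2 are illustrative assumptions, not measured data."""
    v = speed_kmh / 3.6                       # convert km/h to m/s
    return v * reaction_s + v ** 2 / (2.0 * decel_ms2)
```

At 50 km/h this gives roughly 29 m, so a measured detection distance below that would not allow a safe stop under these assumptions.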
274

Analysis of Magnetic Gear End-Effects to Increase Torque and Reduce Computation Time

Losey, Bradley January 2020 (has links)
No description available.
275

Analýza vybraných biologicky aktivních látek v cereálních výrobcích / Analysis of selected biologically active substances in cereal products

Skutek, Miroslav January 2009 (has links)
This diploma thesis focused on the study of biologically active compounds, especially selected sugars, in cereal products. In the experimental part, a total of 29 different cereal materials, food-industry waste products, and natural complex matrices (microbial polysaccharide, honey, beer) were used. As part of this work, analytical methods suitable for the analysis of simple sugars, oligo-, and polysaccharides were optimized. In cereal samples, reducing and neutral sugars were analyzed spectrophotometrically, and individual sugars were detected by chromatography. For HPLC/RI analysis, an optimal mobile-phase composition and chromatography conditions were proposed. For mono- and oligosaccharides, a C18-NH2 sorbent, an acetonitrile:water 75:25 mixture as mobile phase, and a flow rate of 1 ml/min were verified as suitable separation parameters. Thin-layer chromatography of mono- and oligosaccharides was optimized as well. The introduced chromatographic and spectrophotometric methods were then applied to the analysis of cereal samples. The natural microbial polysaccharide pullulan was used as a model sugar for testing the analytical methods. In cereal products and food matrices, total neutral and reducing sugars, as well as the products of their acid and enzymatic hydrolysis, were measured. Detailed analysis of the composition of some glycosides was also tested. The HPLC/RI method was found to be the most usable for both qualitative and quantitative analysis of cereal sugars. For detailed identification of malto-oligosaccharides, a tandem LC/MS/MS technique using derivatization with 1-phenyl-3-methyl-5-pyrazolone was also tested.
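The spectrophotometric quantification mentioned in this abstract rests on a linear calibration curve relating absorbance to sugar concentration. A minimal sketch with invented calibration data (the thesis's actual standards and assay conditions are not reproduced here):

```python
import numpy as np

def calibrate(conc_mg_ml, absorbance):
    # Fit absorbance = k * concentration + b by least squares
    k, b = np.polyfit(conc_mg_ml, absorbance, 1)
    return k, b

def quantify(absorbance, k, b):
    # Back-calculate an unknown sample's concentration from its absorbance
    return (absorbance - b) / k
```

In practice one would verify linearity over the working range and re-run standards with each batch; the sketch only shows the arithmetic.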
276

Kinetika krystalizace v semikrystalických nanokompozitech / The Crystallization Kinetics in Semicrystalline Nanocomposites

Fiore, Kateřina January 2010 (has links)
Crystal growth fundamentally affects the morphology and thus the mechanical properties of semicrystalline polymers. This PhD thesis offers an alternative view of the description of crystallization kinetics in polyolefins filled with weakly interacting particles. In nanocomposite materials, the high specific surface area of the filler fundamentally affects chain dynamics even at low loadings. Near the filler surface, slowed reptation begins to play a significant role, caused both by filler-polymer interactions and by spatial confinement between nanoparticles. Crystal growth was studied using a polarized optical microscope equipped with a hot stage. The results were correlated with theoretical models and with extensive molecular-level computer simulations. The observed decrease in spherulite growth rate as a function of filler content and matrix molecular weight is interpreted on the basis of the immobilization theory, i.e., the slowing of reptation motion.
277

Redukovaný model vírového proudění / Reduced order model of swirling flow

Urban, Ondřej January 2017 (has links)
This thesis deals with the formulation and application of reduced order models based on the extraction of dominant structures from a system using the method of proper orthogonal decomposition. The time evolution of the computed modes is described by a system of ordinary differential equations obtained by Galerkin projection of these modes onto the Navier-Stokes equations. This methodology was applied to two test cases: the Kármán vortex street and the vortex rope. In both cases, a CFD simulation of one reference point was carried out, and using the obtained modes, the corresponding reduced order models were formulated. Their results were assessed by comparison with the reference simulation.
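The mode-extraction step described in this abstract reduces, in practice, to a thin SVD of a snapshot matrix; the Galerkin projection onto the Navier-Stokes equations is omitted in the sketch below, and the traveling-wave snapshot data in the test is synthetic.

```python
import numpy as np

def pod_modes(snapshots, r):
    """Proper orthogonal decomposition of a snapshot matrix
    (n_points x n_times) via thin SVD.  Returns the r dominant spatial
    modes and the fraction of fluctuation energy they capture."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)  # subtract mean field
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = (s[:r] ** 2).sum() / (s ** 2).sum()
    return U[:, :r], energy
```

A traveling wave sin(x - t) is a classic sanity check: it separates exactly into two standing modes, so two POD modes capture essentially all of the energy.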
278

An automated approach to derive and optimise reduced chemical mechanisms for turbulent combustion / Une approche automatisée pour la réduction et l'optimisation de schémas cinétiques appliqués à la combustion turbulente

Jaouen, Nicolas 21 March 2017 (has links)
La complexité de la chimie joue un rôle majeur dans la simulation numérique de la plupart des écoulements réactifs industriels. L'utilisation de schémas cinétiques chimiques détaillés avec les outils de simulation actuels reste toutefois trop coûteuse du fait des faibles pas de temps et d'espaces associés à la résolution d'une flamme, bien souvent inférieurs de plusieurs ordres de grandeur à ceux nécessaires pour capturer les effets de la turbulence. Une solution est proposée pour s'affranchir de cette limite. Un outil automatisé de réduction de schémas cinétiques est développé sur la base d'un ensemble de trajectoires de références construites dans l'espace des compositions pour être représentatives du système à simuler. Ces trajectoires sont calculées à partir de l'évolution de particules stochastiques soumises à différentes conditions de mélange, de réaction et d'évaporation dans le cas de combustible liquide. L'ensemble est couplé à un algorithme génétique pour l'optimisation des taux de réaction du schéma réduit, permettant ainsi une forte réduction du coût calcul. L'approche a été validée et utilisée pour la réduction de divers mécanismes réactionnels sur des applications académiques et industrielles, pour des hydrocarbures simples comme le méthane jusqu'à des hydrocarbures plus complexes, comme le kérosène en incluant une étape optimisée de regroupement des isomères. / Complex chemistry is an essential ingredient in the advanced numerical simulation of combustion systems. However, introducing detailed chemistry into Computational Fluid Dynamics (CFD) software is a non-trivial task, since the time and space resolutions necessary to capture and solve for a flame are very often smaller than the turbulent characteristic scales by several orders of magnitude. A solution based on the reduction of chemical mechanisms is proposed to tackle this issue.
An automated reduction and optimisation strategy is suggested, relying on the construction of reference trajectories computed from the evolution of stochastic particles that undergo mixing, evaporation, and chemical reactions. The methodology, which offers a strong reduction in CPU cost, is applied to the derivation of several mechanisms for canonical and industrial applications, for simple fuels such as methane up to more complex hydrocarbon fuels such as kerosene, including an optimised lumping procedure for isomers.
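The construction of reference trajectories from stochastic particles can be caricatured in a few lines of Python: particles undergo Curl-style pairwise mixing while a single linear reaction stands in for the detailed chemistry. Every number below (particle count, mixing frequency, rate constant) is an illustrative assumption, not taken from the thesis.

```python
import numpy as np

def reference_trajectory(n=500, steps=300, dt=1e-3, omega=20.0, k=5.0, seed=0):
    """Stochastic particles undergoing Curl-type pairwise mixing and a
    one-step linear reaction dY/dt = -k*Y (a toy surrogate for detailed
    chemistry).  Returns the mean composition at each time step."""
    rng = np.random.default_rng(seed)
    Y = rng.choice([0.0, 1.0], size=n)        # fully segregated initial mixture
    traj = []
    for _ in range(steps):
        n_pairs = rng.poisson(omega * n * dt / 2)
        for _ in range(n_pairs):
            i, j = rng.integers(0, n, size=2)
            Y[i] = Y[j] = 0.5 * (Y[i] + Y[j])  # Curl mixing: jump to pair mean
        Y *= np.exp(-k * dt)                   # exact sub-step for linear decay
        traj.append(Y.mean())
    return np.array(traj)
```

Pairwise mixing conserves the mean while shrinking the composition variance, so the trajectory traces reaction progress through a mixture that homogenizes over time — the qualitative behaviour such reference trajectories are meant to sample.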
279

Efficient Uncertainty Characterization Framework in Neutronics Core Simulation with Application to Thermal-Spectrum Reactor Systems

Dongli Huang (7473860) 16 April 2020 (has links)
This dissertation is devoted to developing a first-of-a-kind uncertainty characterization framework (UCF) providing comprehensive, efficient, and scientifically defendable methodologies for uncertainty characterization (UC) in best-estimate (BE) reactor physics simulations. The UCF is designed with primary application to CANDU neutronics calculations, but could also be applied to other thermal-spectrum reactor systems. The overarching goal of the UCF is to propagate and prioritize all sources of uncertainties, including those originating from nuclear data uncertainties, modeling assumptions, and other approximations, in order to reliably use the results of BE simulations in the various aspects of reactor design, operation, and safety. The scope of this UCF is to propagate nuclear data uncertainties from the multi-group format, representing the input to lattice physics calculations, to the few-group format, representing the input to nodal diffusion-based core simulators, and to quantify the uncertainties in reactor core attributes.

The main contribution of this dissertation addresses two major challenges in current uncertainty analysis approaches. The first is the feasibility of the UCF, given the complex nature of nuclear reactor simulation and the computational burden of conventional uncertainty quantification (UQ) methods. The second is to assess the impact of other sources of uncertainties that are typically ignored in the course of propagating nuclear data uncertainties, such as various modeling assumptions and approximations.

To deal with the first challenge, this thesis work proposes an integrated UC process employing a number of approaches and algorithms, including the physics-guided coverage mapping (PCM) method in support of model validation, and reduced order modeling (ROM) techniques as well as sensitivity analysis (SA) on uncertainty sources, to reduce the dimensionality of the uncertainty space at each interface of the neutronics calculations. In addition to these efficient techniques for reducing computational cost, the UCF aims to accomplish four primary functions in the uncertainty analysis of neutronics simulations. The first function is to identify all sources of uncertainties, including nuclear data uncertainties, modeling assumptions, numerical approximations, and technological parameter uncertainties. Second, the proposed UC process propagates the identified uncertainties to the responses of interest in core simulation and provides uncertainty quantification (UQ) analysis for these core attributes. Third, the propagated uncertainties are mapped to a wide range of reactor core operating conditions. Finally, the fourth function is to prioritize the identified uncertainty sources, i.e., to generate a priority identification and ranking table (PIRT) which sorts the major sources of uncertainties according to their impact on the core attributes' uncertainties. In the proposed implementation, the nuclear data uncertainties are first propagated from the multi-group level through lattice physics calculations to generate few-group parameter uncertainties, described using a vector of mean values and a covariance matrix.
Employing an ROM-based compression of the covariance matrix, the few-group uncertainties are then propagated through the downstream core simulation in a computationally efficient manner.

To explore the impact of uncertainty sources other than nuclear data uncertainties on the UC process, a number of approximations and assumptions are investigated in this thesis, e.g., modeling assumptions such as resonance treatment and energy group structure, and assumptions associated with the uncertainty analysis itself, e.g., the linearity assumption and the level of ROM reduction and associated number of degrees of freedom employed. These approximations and assumptions have been employed in the neutronics uncertainty analysis literature, yet without formal verification. The major argument here is that these assumptions may introduce another source of uncertainty whose magnitude needs to be quantified in tandem with nuclear data uncertainties. In order to assess whether modeling uncertainties have an impact on parameter uncertainties, this dissertation proposes a process to evaluate the influence of various modeling assumptions and approximations and to investigate the interactions between the two major uncertainty sources. To this end, the impact of a number of modeling assumptions on core attribute uncertainties is quantified.

The proposed UC process was first applied to a BWR application, in order to test the uncertainty propagation and prioritization process with the ROM implementation over a wide range of core conditions. Finally, a comprehensive uncertainty library for CANDU uncertainty analysis, with NESTLE-C as core simulator, was generated using compressed uncertainty sources from the proposed UCF. The modeling uncertainties, as well as their impact on the parameter uncertainty propagation process, are investigated in the CANDU application with the uncertainty library.
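The ROM-based covariance compression and first-order propagation described in this abstract can be sketched generically (none of this is the dissertation's actual implementation): factor the few-group covariance matrix into its dominant eigen-directions, then push the factor through a sensitivity matrix with the usual first-order "sandwich" rule.

```python
import numpy as np

def compress_covariance(C, tol=1e-8):
    """Low-rank (ROM-style) factorization C ≈ L @ L.T, keeping only the
    eigenpairs whose eigenvalue exceeds tol * max eigenvalue."""
    w, V = np.linalg.eigh(C)
    keep = w > tol * w.max()
    return V[:, keep] * np.sqrt(w[keep])

def propagate(S, L):
    """First-order ("sandwich") propagation of an input covariance through a
    sensitivity matrix S: C_out = S C S^T = (S L)(S L)^T."""
    SL = S @ L
    return SL @ SL.T
```

Because the compressed factor has only as many columns as there are significant eigen-directions, the downstream propagation cost scales with that rank rather than with the full parameter dimension — the essence of the efficiency argument.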
280

Approximate computing for high energy-efficiency in IoT applications / Calcul approximatif à haute efficacité énergétique pour des applications de l'internet des objets

Ndour, Geneviève 17 July 2019 (has links)
Les unités à taille réduite font partie des méthodes proposées pour la réduction de la consommation d'énergie. Cependant, la plupart de ces unités sont évaluées séparément, c'est-à-dire elles ne sont pas évaluées dans une application complète. Dans cette thèse, des unités à taille réduite pour le calcul et pour l'accès à la mémoire de données, configurables au moment de l'exécution, sont intégrées dans un processeur RISC-V. La réduction d'énergie et la qualité de sortie des applications exécutées sur le processeur RISC-V étendu avec ces unités, sont évaluées. Les résultats indiquent que la consommation d'énergie peut être réduite jusqu'à 14% pour une erreur ≤0.1%. De plus, nous avons proposé un modèle d'énergie générique qui inclut à la fois des paramètres logiciels et architecturaux. Le modèle permet aux concepteurs logiciels et matériels d'avoir un aperçu rapide sur l'impact des optimisations effectuées sur le code source et/ou sur les unités de calcul. / Reduced-width units are one of the proposed methods for reducing power consumption. However, such units have mostly been evaluated separately, i.e. not within complete applications. In this thesis, we extend the RISC-V processor with reduced-width computation and memory units, in which only a number of most significant bits (MSBs), configurable at runtime, is active. The energy reduction vs. quality-of-output trade-offs of applications executed on the extended RISC-V are studied. The results indicate that energy can be reduced by up to 14% for an error ≤ 0.1%. Moreover, we propose a generic energy model that includes both software parameters and hardware architecture ones. It allows software and hardware designers to gain early insight into the effects of optimizations on the software and/or the units.
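The accuracy side of such a trade-off study can be prototyped in software by masking off low-order bits, emulating a unit in which only the top MSBs are active. The bit widths and data below are illustrative assumptions, not the thesis's RISC-V implementation or its benchmark applications.

```python
def keep_msbs(x, total_bits=16, active_bits=8):
    """Zero the (total_bits - active_bits) least-significant bits of a
    non-negative fixed-point integer, emulating a reduced-width unit."""
    mask = ((1 << active_bits) - 1) << (total_bits - active_bits)
    return x & mask

def dot(xs, ws, active_bits=16):
    # Dot product computed with operands truncated to the active MSBs
    return sum(keep_msbs(a, 16, active_bits) * keep_msbs(b, 16, active_bits)
               for a, b in zip(xs, ws))
```

Comparing `dot(xs, ws, 16)` against `dot(xs, ws, 8)` gives the output-quality loss for a given active width; pairing that with a per-operation energy estimate yields the kind of energy-vs-error curve the abstract refers to.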
