371

Categorisation and formulation in risk management : Essential parts of a future Experience based Risk Management model within software engineering / Kategorisering och formulering inom riskhantering : Essentiella delar av en framtida Erfarenhetsbaserad riskmanagement model inom programvaruutveckling.

Nilsson, Peter, Ohlsson, Erik January 2003 (has links)
This software engineering thesis addresses three main issues. While creating the risk documents for the master's thesis project, we became even more aware of the problems of categorising and formulating risk statements, and the scope therefore focuses on categorisation and formulation as a necessity for Experience based Risk Management (EbRM). The EbRM process is the foundation of the thesis, and the categorisation and formulation parts had to be solved before the EbRM model could be implemented. To give the reader some background, the thesis includes a brief introduction to the Experience based Risk Management model. The work is based on literature studies, experience and experiments. The formulation system is taken from the Software Engineering Institute (SEI) and is called the CTC format (Condition, Transition, Consequence). This format separates the condition of a risk from its consequence and thereby makes risks easier to categorise and understand. The categorisation system used is the SEI Taxonomy Based Categorisation (TBC), a system built as a search tree in which each leaf represents a rather narrow risk domain. To evaluate the two systems we performed an experiment, which showed that their combination gave a much higher match when different groups sorted the same risks. The conclusion of this work is that the TBC combined with the CTC structure forms a very good basis for the categorisation and formulation parts of risk management. Together with properly formulated and tagged risk names and a thorough process for identifying and documenting risks, applying these conclusions will facilitate further risk management. Oral communication must also be at a sufficient level to gain full benefit from a risk management process.
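As a rough illustration of how a CTC-formatted, taxonomy-tagged risk statement might be represented in practice, the sketch below defines a small record type; the field names, taxonomy labels and example risk are illustrative assumptions, not the thesis's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class RiskStatement:
    """A risk written in the SEI CTC format and tagged with a taxonomy path.

    The taxonomy path stands in for a leaf of the SEI Taxonomy Based
    Categorisation search tree (class > element > attribute); the field
    names and example values are illustrative, not the thesis's tooling.
    """
    condition: str        # what is true in the project today
    transition: str       # the event that turns the condition into a problem
    consequence: str      # the impact if that event occurs
    taxonomy_path: tuple  # e.g. ("Product Engineering", "Requirements", "Stability")

    def as_sentence(self) -> str:
        return (f"Given that {self.condition}; "
                f"if {self.transition}, "
                f"then {self.consequence}.")

# Hypothetical example of a formulated, categorised risk.
risk = RiskStatement(
    condition="the database schema is still changing weekly",
    transition="a late schema change arrives after code freeze",
    consequence="integration testing slips by at least one iteration",
    taxonomy_path=("Product Engineering", "Requirements", "Stability"),
)
print(risk.as_sentence())
```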
372

Überlegungen zur Parameterwahl im Bramble-Pasciak-CG für gemischte FEM / Considerations on parameter choice in the Bramble-Pasciak CG for mixed FEM

Meyer, Arnd, Steinhorst, Peter 11 September 2006 (has links) (PDF)
Variants on the choice of necessary control parameters in the generalized Bramble-Pasciak-CG method are discussed.
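For orientation (this is standard background on the setting, not a summary of the paper's specific variants), a mixed FEM leads to a saddle-point system, and the Bramble-Pasciak transformation only yields a positive definite operator when the preconditioner scaling is chosen correctly; a common form of that condition is sketched below in assumed, generic notation.

```latex
% Generic saddle-point system from a mixed FEM (notation assumed, not the paper's):
\[
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\qquad A = A^{T} > 0 .
\]
% With an SPD preconditioner C_0 for A, scaled by a control parameter
% \gamma > 0, the Bramble-Pasciak transformation gives an operator that is
% self-adjoint and positive definite in a nonstandard inner product provided
\[
A - \gamma C_0 > 0
\quad\Longleftrightarrow\quad
\gamma < \lambda_{\min}\!\bigl(C_0^{-1}A\bigr),
\]
% so the practical question is how to choose \gamma (and estimate the
% extreme eigenvalues of C_0^{-1}A) without spoiling the convergence rate.
```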
373

Modélisation expérimentale et analytique des propriétés rhéologiques des bétons autoplaçants / Experimental and analytical modelling of the rheological properties of self-consolidating concrete

Kabagire, K. Daddy January 2018 (has links)
The mixture proportions and placement conditions of fresh concrete affect its mechanical properties and durability in the hardened state and, by extension, the durability of the structure. The workability of concrete must therefore be chosen judiciously according to the geometry of the element to be cast and the layout of the reinforcement. Self-consolidating concrete (SCC), for example, flows under its own weight and properly fills the formwork without mechanical vibration, and is therefore generally used to cast elements with complex or hard-to-reach geometries. The flow behaviour of this class of concrete during placement is difficult to predict with empirical tests such as the slump (Abrams) cone, and a slight variation in the constituent properties can lead to a concrete with poor flow performance. Rheology, the branch of physics that studies the flow of fluids, provides a better description of the flow properties of SCC and helps in understanding this material. SCC can be regarded as a suspension of coarse aggregates in a suspending fluid (mortar). Analytical rheological models can then be used to predict the rheological properties of these materials and to understand their flow behaviour. The applicability of such models, however, depends on several parameters, notably the nature of the suspending phase considered (paste, concrete-equivalent mortar, or mortar), the characteristics of the solid particles, and the shear history of the suspending phase and of the suspension. This research aims to evaluate the influence of different mixture and testing parameters on the rheological properties of SCC in order to understand the flow behaviour of this class of concrete. An approach is proposed for predicting the rheological properties of SCC modelled as a two-phase suspension, that is, a suspending phase plus solid inclusions. The two-phase approach generally runs into the difficulty of defining the suspending phase. The first phase of the experimental programme evaluates the applicability of the concrete-equivalent-mortar (CEM) method for predicting the fresh properties of SCC. The results show that this method does not provide a good correlation between the fresh properties of SCC and those of its CEM, partly because only the specific surface area of the coarse aggregates is used as the criterion for proportioning the CEM mixtures. The results show that the excess-paste concept, which accounts for the volume and packing density of the coarse aggregates in the concrete, must be considered to improve the relationships between SCC and its CEM. Although the use of excess paste improves the CEM approach, it leads to complex correlations that are difficult to exploit for a two-phase prediction of the fresh and rheological properties of SCC with the CEM as suspending phase. To overcome this difficulty, a more thorough investigation of the paste-mortar and mortar-SCC relationships is needed to better understand the interplay of the different mixture parameters and to identify the most suitable suspending phase for predicting SCC properties.
The second phase of the study evaluates the effect of the characteristics of the solid particles on the rheological properties of suspensions using analytical models, notably the Krieger-Dougherty (KD) and Chateau-Ovarlez-Trung (COT) models. The results highlight the effect of the shape and grading of the solid particles on the intrinsic parameters of these models. They also show that these intrinsic parameters are strongly affected by the mixture parameters, the shear protocol, and the concentration of the suspending phase (i.e. the water-to-binder ratio). Furthermore, the rheological model (Bingham or Herschel-Bulkley) chosen to describe the behaviour of the suspending phase (paste) and of the suspension (mortar) influences the predicted rheological properties of the suspensions. The third phase exploits analytical models to predict the rheological properties while identifying the suspending phase that most faithfully represents SCC. Different suspending phases (mortars) were evaluated, namely sieved mortar, type I mortar, and type II mortar, and correlations between the fresh and rheological properties of SCC and of the different mortars were established. The effect of the most influential mixture parameters on these correlations and on the intrinsic parameters of the prediction models is highlighted. Models are proposed to predict the rheological properties of SCC mixtures using the type II mortar as suspending phase.
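For reference, the constitutive laws and suspension models named in this abstract are commonly written as follows; the notation is generic, and the intrinsic viscosity [η] and maximum packing fraction φ_m are the intrinsic parameters the study fits to its measurements.

```latex
% Yield-stress models for the suspending phase and the suspension:
\[
\tau = \tau_0 + \mu_p \dot{\gamma}
\quad\text{(Bingham)},
\qquad
\tau = \tau_0 + K \dot{\gamma}^{\,n}
\quad\text{(Herschel-Bulkley)} .
\]
% Krieger-Dougherty (KD): relative viscosity of a suspension of solid
% volume fraction \phi in a fluid of viscosity \eta_0:
\[
\frac{\eta(\phi)}{\eta_0}
  = \left(1-\frac{\phi}{\phi_m}\right)^{-[\eta]\,\phi_m} .
\]
% Chateau-Ovarlez-Trung (COT): commonly quoted scaling of the suspension
% yield stress relative to that of the suspending phase:
\[
\frac{\tau_0(\phi)}{\tau_0(0)}
  = \sqrt{(1-\phi)\left(1-\frac{\phi}{\phi_m}\right)^{-2.5\,\phi_m}} .
\]
```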
374

A knowledge based approach of toxicity prediction for drug formulation : modelling drug vehicle relationships using soft computing techniques

Mistry, Pritesh January 2015 (has links)
This multidisciplinary thesis is concerned with the prediction of drug formulations for the reduction of drug toxicity. Both scientific and computational approaches are used to make original contributions to the field of predictive toxicology. The first part of the thesis provides a detailed scientific discussion of drug formulation and toxicity, focused on the principal mechanisms of drug toxicity and on how drug toxicity is studied and reported in the literature. It also reviews the technologies currently available for formulating drugs for toxicity reduction, together with examples of studies in the literature that have used these technologies to reduce drug toxicity. The thesis then gives an overview of the computational approaches currently employed in in silico predictive toxicology, focusing on the machine learning approaches used to build predictive QSAR classification models, with examples from the literature. Two methodologies were developed as the main work of this thesis. The first uses directed bipartite graphs and Venn diagrams to visualise and extract, from large un-curated datasets, drug-vehicle relationships that show changes in toxicity patterns; these relationships can be rapidly extracted and visualised using the methodology proposed in chapter 4. The second methodology mines large datasets to extract drug-vehicle toxicity data. It uses an area-under-the-curve principle to make pairwise comparisons of vehicles, which are classified according to the toxicity protection they offer, and from these comparisons predictive classification models based on random forests and decision trees are built. The results of this methodology are reported in chapter 6.
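The sketch below illustrates, with invented data and column choices (none of them taken from the thesis), the flavour of the second methodology: score each vehicle by the area under a dose-toxicity curve, compare vehicles pairwise to label which one offers better toxicity protection, and fit a random-forest classifier on simple descriptors.

```python
import numpy as np
from itertools import permutations
from sklearn.ensemble import RandomForestClassifier

# Hypothetical dose-response data per (drug, vehicle): doses (mg/kg) and
# observed toxicity scores in [0, 1]; none of this is from the thesis.
records = {
    ("drugA", "saline"):       (np.array([1, 3, 10, 30]), np.array([0.10, 0.30, 0.60, 0.90])),
    ("drugA", "cyclodextrin"): (np.array([1, 3, 10, 30]), np.array([0.05, 0.10, 0.30, 0.60])),
    ("drugA", "liposome"):     (np.array([1, 3, 10, 30]), np.array([0.02, 0.08, 0.20, 0.50])),
}

def dose_response_auc(doses, tox):
    """Area under the toxicity curve over log10(dose), trapezoidal rule."""
    x = np.log10(doses)
    return float(np.sum(np.diff(x) * (tox[:-1] + tox[1:]) / 2.0))

scores = {key: dose_response_auc(d, t) for key, (d, t) in records.items()}

# Pairwise comparison of vehicles for the same drug: label 1 if the first
# vehicle gives the smaller AUC, i.e. offers better toxicity protection.
pairs, labels = [], []
for (d1, v1), (d2, v2) in permutations(records, 2):
    if d1 != d2:
        continue
    pairs.append((v1, v2))
    labels.append(int(scores[(d1, v1)] < scores[(d1, v2)]))

# Toy vehicle descriptors (invented) and a random-forest classifier of the
# kind the thesis builds on real physicochemical descriptors.
descriptors = {"saline": [0.0, 1.0], "cyclodextrin": [1.2, 0.4], "liposome": [2.1, 0.2]}
X = np.array([descriptors[a] + descriptors[b] for a, b in pairs])
y = np.array(labels)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict(X))
```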
375

BEZAFIBRATO: VALIDAÇÃO DE METODOLOGIA E APLICAÇÃO EM ESTUDO FARMACOCINÉTICO DE FORMULAÇÕES FARMACÊUTICAS / BEZAFIBRATE: VALIDATION OF METHODOLOGY AND APPLICATION IN PHARMACOKINETIC STUDY OF PHARMACEUTICAL FORMULATIONS

Melo, Janine de 11 October 2007 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Fibrates are an important class of drugs used in the treatment of dyslipidemia, which is the main risk factor for the development of atherosclerosis and the incidence of cardiovascular disease. This dissertation reports the development and validation of procedures for the analysis of bezafibrate (BEZ) in pharmaceutical products and biological matrices. The proposed methods comprise reverse-phase high performance liquid chromatography (HPLC) with ultraviolet (UV) detection and UV spectrophotometry. For the chromatographic determination of bezafibrate in tablets, capsules and human plasma, a C-18 column (150 mm x 4.6 mm i.d., 5 μm) was used with a mobile phase of 0.01 M potassium phosphate buffer, pH 3.5: acetonitrile: methanol (50:40:10, v/v/v), a flow rate of 1 mL/min and detection at 230 nm. The drug was extracted from plasma by liquid-liquid extraction using acidified tert-butyl methyl ether. For the spectrophotometric determination of bezafibrate in tablets and capsules, methanol and 0.1 N sodium hydroxide were used as solvents, with detection at 230 nm. The methodology developed for the determination of bezafibrate was validated with respect to specificity, linearity, precision, accuracy, recovery, robustness and stability, and was considered suitable for the analysis of the drug in formulations and biological fluids. Comparison of the results showed no significant difference between the validated analytical methods (p<0.05). The bioanalytical method was successfully applied to the determination of plasma bezafibrate in six healthy volunteers, allowing pharmacokinetic parameters such as maximum plasma concentration (Cmax), extent of absorption (AUC), elimination rate constant (kel) and half-life (t1/2) to be estimated. / Os fibratos constituem uma importante classe de medicamentos utilizada no tratamento da dislipidemia, que é o principal fator de risco para o desenvolvimento de aterosclerose e incidência de doenças cardiovasculares. A presente dissertação aborda o desenvolvimento e a validação de procedimentos para análise de bezafibrato (BEZ) em produtos farmacêuticos e matrizes biológicas. Os métodos propostos incluem cromatografia líquida de alta eficiência (HPLC) em fase reversa com detecção ultravioleta (UV) e espectrofotometria com detecção UV. Para a determinação cromatográfica de bezafibrato em comprimidos, cápsulas e plasma humano foi empregada uma coluna C-18 (150 mm x 4,6 mm d.i, 5 μm), fase móvel composta por tampão fosfato de potássio monobásico 0,01 M, pH 3,5: acetonitrila: metanol (50: 40: 10, V/V/V) com fluxo 1 mL/min e detecção em 230 nm. A extração do fármaco a partir do plasma foi realizada por extração líquido-líquido utilizando terc-butil metil éter acidificado. Na avaliação espectrofotométrica de bezafibrato em comprimidos e cápsulas foram utilizados como solventes metanol e hidróxido de sódio 0,1 N, com detecção em 230 nm. A metodologia desenvolvida para avaliação de bezafibrato foi validada observando-se os parâmetros especificidade, linearidade, precisão, exatidão, recuperação, robustez, estabilidade e foi considerada apropriada para análise do fármaco em formulações e fluidos biológicos. A comparação de resultados revelou que não há diferença significativa entre os métodos analíticos validados (p<0,05).
O método bioanalítico desenvolvido foi aplicado com sucesso para determinação plasmática de bezafibrato em seis voluntários sadios, permitindo análise de parâmetros como concentração plasmática máxima (Cmáx), extensão da absorção (AUC), constante de eliminação (kel) e tempo de meia vida (t1/2).
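As a worked illustration of the pharmacokinetic parameters listed in this abstract (Cmax, AUC, kel, t1/2), the sketch below computes them from an invented concentration-time profile; the values, sampling times and terminal-phase choice are assumptions, not data from the study.

```python
import numpy as np

# Hypothetical plasma bezafibrate concentration-time profile (mg/L vs h);
# the values, sampling times and terminal-phase choice are illustrative only.
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0])
c = np.array([0.0, 2.1, 4.8, 6.3, 4.0, 2.3, 1.3, 0.4])

cmax = c.max()                # maximum plasma concentration, Cmax
tmax = t[c.argmax()]          # time of Cmax

# AUC(0-t) by the trapezoidal rule (a measure of the extent of absorption).
auc = float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0))

# Elimination rate constant kel from a log-linear fit over the points
# assumed to lie in the terminal elimination phase, then t1/2 = ln 2 / kel.
terminal = slice(4, None)
kel = -np.polyfit(t[terminal], np.log(c[terminal]), 1)[0]
t_half = np.log(2) / kel

print(f"Cmax = {cmax:.1f} mg/L at t = {tmax:.1f} h, AUC = {auc:.1f} mg*h/L, "
      f"kel = {kel:.3f} 1/h, t1/2 = {t_half:.1f} h")
```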
376

Financial monitoring policies of microfinance institutions in Accra : policy formulation and implementation challenges

Quao, Kwami Hope January 2017 (has links)
Submitted in fulfillment of the requirements for the Degree: Doctor of Philosophy (Business Administration), Durban University of Technology, Durban, South Africa, 2017. / Although numerous articles have been published globally on microfinance (MF), essentially highlighting the need to regulate microfinance institutions (MFIs), none of these, to the knowledge of the researcher, explores in depth the formulation of financial monitoring policies (FMPs), their implementation, and the challenges MFIs encounter in implementing them. The wave of distressed and failing MFIs in Ghana and the loss of the hard-earned deposits of the poor therefore demand this investigation. This study accordingly bridges the gap and contributes to the debate by reviewing the specific financial policies pertaining to MFIs, how those policies are formulated and implemented, and the challenges MFIs encounter in relation to them. It also introduces the concept of implementation theory into microfinance research to move the knowledge frontier forward. The outcome will be of particular relevance to emerging economies that view MFIs as a practical means of poverty alleviation, employment creation and addressing inequality. The study adopted a mixed research approach, with both qualitative and quantitative data gathered from a sample of 65 MFIs in Accra through a self-administered, Likert-scaled questionnaire. Data were analysed using SPSS version 24.0, with results presented in frequency tables, figures, correlation tables and cross-tabulations. The findings reveal that FMPs exist for MFIs in Ghana, and in Accra particularly, but that policy formulation is lopsided and that implementation of FMPs, and the monitoring and supervision thereof, are deficient. The results further indicate that using minimum capital as a tool for ensuring efficiency in the sector is a major obstacle and creates an impetus for regulatory non-compliance. Based on the findings, the research recommends that policymakers and MFI monitoring units consider creating a semi-autonomous institution, the National Microfinance Promotion Authority, to regulate and supervise MFIs in Ghana, and that research focus shift to policy implementation in MF operations.
377

Hamiltonian Formulations and Symmetry Constraints of Soliton Hierarchies of (1+1)-Dimensional Nonlinear Evolution Equations

Manukure, Solomon 20 June 2016 (has links)
We derive two hierarchies of 1+1 dimensional soliton-type integrable systems from two spectral problems associated with the Lie algebra of the special orthogonal Lie group SO(3,R). By using the trace identity, we formulate Hamiltonian structures for the resulting equations. Further, we show that each of these equations can be written in Hamiltonian form in two distinct ways, leading to the integrability of the equations in the sense of Liouville. We also present finite-dimensional Hamiltonian systems by means of symmetry constraints and discuss their integrability based on the existence of sufficiently many integrals of motion.
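For orientation only, and as the generic machinery rather than the two specific hierarchies derived in this work: the spectral problems take values in so(3,R), the hierarchy arises from a zero-curvature condition, and the Hamiltonian structures follow from the trace identity, roughly as sketched below.

```latex
% Zero-curvature formulation of a (1+1)-dimensional hierarchy:
\[
\phi_x = U(u,\lambda)\,\phi, \qquad
\phi_{t_n} = V^{(n)}(u,\lambda)\,\phi, \qquad
U_{t_n} - V^{(n)}_x + \bigl[U, V^{(n)}\bigr] = 0 .
\]
% Tu's trace identity, with W solving W_x = [U, W] and <.,.> the trace
% form, generates the Hamiltonian functionals:
\[
\frac{\delta}{\delta u}\int \langle W,\, U_\lambda\rangle\, dx
  = \lambda^{-\gamma}\,\frac{\partial}{\partial\lambda}
    \Bigl(\lambda^{\gamma}\,\langle W,\, U_u\rangle\Bigr) .
\]
% Each member can then be written in Hamiltonian form in two ways,
\[
u_{t_n} = J\,\frac{\delta H_n}{\delta u} = M\,\frac{\delta H_{n-1}}{\delta u},
\]
% and Liouville integrability follows from infinitely many conserved
% functionals H_n in involution with respect to both structures.
```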
378

Estimateurs d'erreur a posteriori pour les équations de Maxwell en formulation temporelle et potentielle / A posteriori error estimators for the temporal and potential Maxwell's equations

Tittarelli, Roberta 27 September 2016 (has links)
Cette thèse porte sur le développement d’estimateurs d'erreur a posteriori pour la résolution numérique par éléments finis de problèmes en électromagnétisme basse fréquence. On s’intéresse aux formulations en potentiels (A-φ et T-Ω) des équations de Maxwell en régime quasi-stationnaire, pour le cas harmonique ou temporel. L'enjeu consiste à développer des outils numériques mathématiquement robustes, exploitables dans un code de calcul industriel, notamment le Code_Carmel3D (EDF R&D), permettant d'estimer l'erreur de discrétisation spatio-temporelle et de pouvoir ainsi améliorer la précision des calculs. On prouve la fiabilité, assurant le contrôle de l’erreur. On prouve également dans certains cas l’efficacité locale, permettant de repérer les zones du maillage dans lesquelles l’erreur est la plus importante, et de mettre ainsi en œuvre des stratégies de raffinement adaptatif. L'équivalence globale entre l'erreur en norme énergétique et l'estimateur est en général assurée. Les estimateurs obtenus sont finalement utilisés pour des simulations physiques/industrielles par le Code_Carmel3D. / This thesis focuses on the development of a posteriori error estimators for the finite element numerical resolution of low-frequency electromagnetic problems. We are interested in two potential formulations of Maxwell's equations in the quasi-static approximation, known as the A-φ and T-Ω formulations, in both the harmonic and the time-dependent regimes. The challenge is to develop mathematically robust numerical tools, usable in an industrial code, that allow the spatio-temporal discretisation error to be estimated and the quality and cost of the computation to be improved. We prove the reliability of the proposed error estimators, which ensures an upper bound on the error in the energy norm. In some cases we also prove the local efficiency of the estimators, which allows the zones where the error is largest to be detected so that an adaptive remeshing process can be set up. In all cases, the global equivalence between the energy error norm and the estimator is derived. The developed error estimators are finally used for physical and industrial numerical simulations in Code_Carmel3D (EDF R&D).
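In generic notation (not the precise statements proved in the thesis), reliability and local efficiency of such estimators are usually expressed as the bounds below, where η_K is the local indicator on element K and ω_K a small patch of neighbouring elements.

```latex
% Reliability: the global estimator controls the energy-norm error,
\[
\| u - u_h \|_{E}
  \;\le\; C_{\mathrm{rel}}\,\eta,
\qquad
\eta := \Bigl(\sum_{K\in\mathcal{T}_h} \eta_K^{2}\Bigr)^{1/2} .
\]
% Local efficiency: each indicator is bounded by the error nearby,
% up to data oscillation,
\[
\eta_K \;\le\; C_{\mathrm{eff}}
  \Bigl(\| u - u_h \|_{E,\omega_K} + \mathrm{osc}_K\Bigr),
\]
% so \eta and the energy-norm error are globally equivalent and the
% \eta_K can drive adaptive mesh refinement.
```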
379

Multi-component epoxy resin formulation for high temperature applications

Poynton, Gary January 2014 (has links)
The high-functionality epoxy resins tetraglycidyl-4,4’-diaminodiphenyl-methane (TGDDM) and triglycidyl-p-aminophenol (TGPAP) are the main components in most aerospace-grade epoxy resin formulations. Owing to their high reactivity and high viscosity, TGDDM and TGPAP pose difficulties when used in wet layup composite manufacturing, so these resins are often modified to achieve the desired performance in both the liquid and cured states. The main objective of this thesis is to optimise a low-viscosity multi-component epoxy resin formulation suitable for use as an aerospace-grade composite matrix. The formulation allows for the addition of high levels of thermoplastic to improve the fracture toughness of the resin whilst maintaining resin processability. Using thermal analytical techniques, this thesis studies the effects of varying the TGDDM/TGPAP ratio, of incorporating a low-viscosity bi-functional epoxy resin, the diglycidyl ether of bisphenol F (DGEBF), and of changing the stoichiometric ratio (r) between the reactive groups of the epoxy resin and the amine hardener (4,4’-diaminodiphenylsulphone, DDS) in multi-component epoxy resin formulations. Resin formulations were optimised using factorial experimental design (FED). Results from two FEDs showed that curing multi-component resins at a low stoichiometric ratio significantly increased the processing window whilst also increasing the glass transition temperature (Tg) of the cured resin. No apparent benefit could be assigned to the inclusion of TGDDM owing to its poor processability and a Tg similar to that of TGPAP. Up to 60% DGEBF was incorporated in a multi-component resin formulation whilst still attaining a Tg greater than 220°C; its inclusion at 60% had the additional benefit of increasing the processing window by 48 minutes over TGPAP, an increase of 62%. Two optimised resin formulations, 100% TGPAP (100T) and a binary mix of 60% DGEBF and 40% TGPAP (60D), were taken forward to study the effects of adding a thermoplastic toughener (polyethersulphone, PES) in incremental amounts up to 50 wt%. SEM images showed that all toughened 100T resins had a phase-separated morphology whilst all 60D resins were homogeneous. The phase separation seen in 100T did not improve the matrix fracture toughness at 10 wt% and 30 wt% PES; only when 50 wt% PES was added did the fracture toughness increase in comparison to the homogeneous 60D resins. In summary, through factorial experimental design two epoxy resin formulations which excluded TGDDM were optimised with a low stoichiometric ratio. The optimum aerospace formulation depends on the desired processability and fracture toughness of the resin: high-DGEBF formulations give the longest processing windows, whilst the 100% TGPAP formulation toughened with 50% PES has the highest fracture toughness.
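As background to the stoichiometric ratio r that the abstract varies, standard epoxy-amine formulation arithmetic (generic symbols and relations, not values or formulas taken from the thesis) relates r to the equivalent weights of the resin blend and the hardener:

```latex
% r = ratio of amine hydrogen equivalents (hardener) to epoxide
% equivalents (resin); r = 1 is the stoichiometric point, and the thesis
% reports benefits of curing at low r:
\[
r = \frac{m_{\mathrm{DDS}}/\mathrm{AHEW}}{m_{\mathrm{resin}}/\mathrm{EEW}},
\qquad
\mathrm{phr} = r\,\frac{100\,\mathrm{AHEW}}{\mathrm{EEW}} ,
\]
% where phr is parts of hardener per hundred parts of resin, AHEW the
% amine hydrogen equivalent weight of DDS, and EEW the epoxy equivalent
% weight of the resin blend, obtained from the mass fractions w_i as
\[
\mathrm{EEW}_{\mathrm{blend}}
  = \Bigl(\sum_i \frac{w_i}{\mathrm{EEW}_i}\Bigr)^{-1},
\qquad i \in \{\mathrm{TGPAP},\,\mathrm{DGEBF},\,\mathrm{TGDDM}\} .
\]
```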
380

Development Of A New Finite-Volume Lattice Boltzmann Formulation And Studies On Benchmark Flows

Vilasrao, Patil Dhiraj 07 1900 (has links) (PDF)
This thesis is concerned with a new formulation of the finite-volume lattice Boltzmann equation method and its implementation on unstructured meshes. A finite-volume discretization with a cell-centered tessellation is employed. The new formulation effectively adopts a total variation diminishing concept. The formulation is analysed for the modified partial differential equation and the apparent viscosity of the model. Further, a high-order extension of the present formulation is laid out. Parallel simulations of a variety of two-dimensional benchmark flows are carried out to validate the formulation. In Chapter 1, the important notions of kinetic theory and its most celebrated equation, the Boltzmann equation, are given. The historical developments and the theory of a discrete form of the Boltzmann equation are briefly discussed, various off-lattice schemes are introduced, and the methodologies adopted in the past for the solution of the lattice Boltzmann equation on finite-volume discretizations are reviewed. The basic objectives of the thesis are stated. In Chapter 2, the basic formulations of the lattice Boltzmann equation method are discussed, together with the rationale behind different boundary condition implementations. Benchmark flows exhibiting various flow phenomena are studied with the parallel code developed in-house; in particular, a new benchmark solution is given for the flow induced inside a rectangular, deep cavity. In Chapter 3, the need for off-lattice schemes and a general introduction to the finite-volume approach and unstructured mesh technology are given. A new mathematical formulation of the off-lattice finite-volume lattice Boltzmann equation procedure on a cell-centered, arbitrary triangular tessellation is laid out. This formulation employs the total variation diminishing procedure to treat the advection terms. The implementation of the boundary conditions is given with an outline of the numerical implementation. The Chapman-Enskog (CE) expansion is performed in Chapter 4 to derive the conservation equations and an expression for the apparent viscosity from the finite-volume lattice Boltzmann equation formulation. Further, numerical investigations are performed to analyse the variation of the apparent viscosity with the grid resolution. In Chapter 5, an extensive validation of the newly formulated finite-volume scheme is presented. The benchmark flows considered are of increasing complexity, namely (1) Poiseuille flow, (2) unsteady Couette flow, (3) lid-driven cavity flow, (4) flow past a backward-facing step and (5) steady flow past a circular cylinder. Further, a sensitivity study of the various limiter functions has also been carried out. The main objective of Chapter 6 is to enhance the order of accuracy of the spatio-temporal calculations in the newly presented finite-volume lattice Boltzmann equation formulation. Further, an efficient implementation of the formulation for parallel processing is carried out, with an appropriate decomposition of the computational domain performed using a graph partitioning tool. The order of accuracy has been verified by simulating flow past a curved surface. The extended formulation is employed to study more complex unsteady flows past circular cylinders. In Chapter 7, the main conclusions of this thesis are summarized, possible issues to be examined for further improvements in the formulation are identified, and potential applications of the present formulation are discussed.
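For context, and in generic notation rather than the exact scheme developed in the thesis, the discrete-velocity BGK lattice Boltzmann equation and a TVD-limited, cell-centered finite-volume update of it can be written as:

```latex
% Discrete-velocity BGK lattice Boltzmann equation for populations f_i:
\[
\frac{\partial f_i}{\partial t} + \mathbf{c}_i\cdot\nabla f_i
  = -\frac{1}{\tau}\bigl(f_i - f_i^{\mathrm{eq}}(\rho,\mathbf{u})\bigr).
\]
% Cell-centered finite-volume update on a triangular cell K with faces e:
\[
\frac{d\bar f_{i,K}}{dt}
  = -\frac{1}{|K|}\sum_{e\in\partial K}
      (\mathbf{c}_i\cdot\mathbf{n}_e)\, f_{i,e}\,|e|
    \;-\;\frac{1}{\tau}\bigl(\bar f_{i,K} - \bar f_{i,K}^{\mathrm{eq}}\bigr),
\]
% with the face value reconstructed from the upwind cell in a TVD fashion,
% e.g. with the minmod limiter \psi(r) = \max(0,\min(1,r)):
\[
f_{i,e} = \bar f_{i,\mathrm{up}}
  + \tfrac{1}{2}\,\psi(r)\,
    \bigl(\bar f_{i,\mathrm{down}} - \bar f_{i,\mathrm{up}}\bigr),
\]
% where r is the ratio of consecutive variations of f_i along the
% upwind direction.
```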
