321 |
Formulace a testování nanočástic z větvených polyesterů s rifampicinem / Formulation and testing of rifampicin-loaded branched polyester nanoparticles. Balciarová, Andrea. January 2018.
Charles University, Faculty of Pharmacy in Hradec Králové, Department of Pharmaceutical Technology. Consultant: doc. RNDr. Milan Dittrich, CSc. Student: Andrea Balciarová. Title of Thesis: Formulation and testing of rifampicin-loaded branched polyester nanoparticles. In the presented thesis, the theoretical part focuses on nanoparticles for targeted drug delivery: their types, structure, and the carriers used for their preparation. This part also gives an overview of the physicochemical characteristics and preparation methods of polymeric nanoparticles applicable to the formulation of pharmaceutical products. The experimental part is concerned with the influence of the concentration of biodegradable polymers, the presence of cationic surfactants, and rifampicin as a model drug substance on the nanoparticles' size and zeta potential. Particular attention is given to the decoration of nanoparticles with the anionic biopolymers hyaluronic acid and xanthan gum. A simple preparation method, usable for formulating nanosystems that purposefully influence biological functions, was tried and tested in different contexts.
|
322 |
Biodegradation and ageing of bio-based thermosetting resins from lactic acid. Gomes Hastenreiter, Lara Lopes. January 2019.
The need to replace petroleum-based polymers has been increasing, and bio-based polymers are proving to be a suitable solution. The aim of this thesis was to synthesise bio-based resins with different chemical architectures in order to evaluate the effect of structure on properties and on the response to ageing and biodegradation. To this end, three different bio-based thermoset resins were synthesised by reacting one of three distinct core molecules with lactic acid. The core molecules chosen for this work were ethylene glycol, glycerol, and pentaerythritol. Lactic acid was first reacted with a core molecule by direct condensation; the resulting branched molecule was then end-functionalised with methacrylic anhydride. The number of moles of lactic acid varied according to the core molecule it was reacted with, but the chain length (n) was always maintained at three. Some of the samples were characterised by Fourier-transform infrared spectroscopy (FT-IR), differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), and tensile testing. DSC and TGA were used to determine the thermal behaviour. FT-IR was used to verify the first and second stages of the reaction and to ascertain that the crosslinking reaction had occurred. Tensile testing was used to investigate the mechanical properties. Ageing and biodegradation tests are useful for ascertaining a material's possible applications; therefore, the samples that went through ageing or biodegradation were also characterised at the end of those procedures to check the effect of those processes on the specimens. The test results indicated that the PENTA/LA cured resin was the most thermally stable. The cured resins' mechanical properties were similar to one another, so no meaningful comparison could be made in this area. The samples proved to be affected by both the biodegradation and the ageing processes, in visual as well as structural aspects.
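As a minimal sketch of the stoichiometry implied here, assuming every hydroxyl arm of the core molecule carries a lactic acid chain of length n = 3 (the figures below follow from that assumption and are not taken from the thesis):

# Hydroxyl functionality of each core molecule used in the thesis.
CORE_HYDROXYLS = {"ethylene glycol": 2, "glycerol": 3, "pentaerythritol": 4}

def lactic_acid_moles(core, n=3):
    """Moles of lactic acid per mole of core, assuming every hydroxyl arm
    carries a lactic acid chain of length n."""
    return CORE_HYDROXYLS[core] * n

for core in CORE_HYDROXYLS:
    print(core, lactic_acid_moles(core))  # 6, 9, 12

Under this assumption, the three cores would require 6, 9, and 12 moles of lactic acid per mole of core, respectively, which is consistent with the lactic acid amount varying by core while n stays fixed.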
|
323 |
IMPROVING THE PROTEIN PIPELINE THROUGH NONLINEAR OPTICAL METHODS. Hilary M Florian. 29 July 2020.
Understanding the function and structure of a protein is crucial for informing rational drug design and for developing successful drug candidates. However, this understanding is often limited by the protein pipeline, i.e., the steps necessary to go from developing protein constructs to generating high-resolution structures of macromolecules. Because each step of the pipeline requires successful completion of the prior step, bottlenecks often arise, and the process can take several years to complete. Addressing current limitations in the protein pipeline can help reduce the time required to solve the structure of a protein.

The field of nonlinear optical (NLO) microscopy provides a potential solution to many issues surrounding the detection and characterization of protein crystals. Techniques such as second harmonic generation (SHG) and two-photon excited UV fluorescence (TPE-UVF) have already been shown to be effective methods for detecting proteins with high selectivity and sensitivity. Efforts to improve the high-throughput capabilities of SHG microscopy for crystallization trials resulted in the development of a custom microretarder array (μRA) for depth-of-field (DoF) extension, eliminating the need for z-scanning and reducing the overall data acquisition time. Further work with a commercially available μRA enabled polarization-dependent TPE-UVF: by placing the μRA in the rear conjugate plane of the beam path, the patterned polarization was mapped onto the field of view, and polarization information was extracted from the images by Fourier analysis to aid in discriminating between crystalline and aggregate protein.

Additionally, improvements to X-ray diffraction (XRD), the current gold standard for macromolecular structure elucidation, can improve the resolution of structure determination. X-ray-induced damage to protein crystals is one of the greatest sources of resolution loss. Previous work implemented a multimodal NLO microscope into the beamline at Argonne National Laboratory. This instrument aids crystal positioning for XRD experiments by eliminating the need for X-ray rastering, reducing the overall X-ray dose to the sample. Modifications were made to further improve the instrument's capabilities, focusing on a redesign of the beam path to allow epi-detection of TPE-UVF and on building a custom objective with improved throughput of 1064 nm light. Furthermore, a computational method using non-negative matrix factorization (NMF) was employed to isolate unperturbed diffraction peaks and provided insight into the mechanism by which X-ray damage occurs. This work has the potential to improve the resolution of diffraction data and can be applied to other techniques where X-ray damage is a concern, such as electron microscopy.
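A rough sketch of an NMF decomposition step of the kind mentioned above, in Python with scikit-learn; the data shapes, component count, and synthetic input are illustrative assumptions, not the author's pipeline:

import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in: rows = diffraction frames, columns = flattened pixels.
rng = np.random.default_rng(0)
frames = rng.random((50, 1024))

# Factor the non-negative data into additive parts; in the thesis, the aim is
# to separate the unperturbed diffraction signal from damage contributions.
model = NMF(n_components=2, init="nndsvda", max_iter=500)
weights = model.fit_transform(frames)   # per-frame loading of each component
components = model.components_          # the component patterns themselves
print(weights.shape, components.shape)  # (50, 2) (2, 1024)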
|
324 |
Microstructure and property models of alloy 718 applicable for simulation of manufacturing processes. Moretti, Marie Anna. January 2022.
This thesis focuses on the experimental characterization, understanding, and modelling of nickel-based alloy 718 over a large range of loading conditions. Alloy 718 is the most widely used nickel-based superalloy, owing to its high strength, high corrosion resistance, and excellent mechanical properties at high temperatures. In this work, the mechanical behavior and microstructure evolution of this alloy during high-strain-rate deformation are investigated. Compression tests were performed using a split-Hopkinson pressure bar (SHPB) device, and the microstructure of the deformed samples was observed using an optical microscope (OM) and a scanning electron microscope (SEM) coupled with the electron backscatter diffraction (EBSD) technique. The microstructural evolution as a function of the deformation conditions was characterized. At high deformation temperatures (1000 °C and above), recrystallisation is identified as the main deformation mechanism. A physics-based model was employed to simulate the deformation behavior of alloy 718. This type of model accounts for the microstructural mechanisms taking place during deformation. Knowledge of the deformation mechanisms of alloy 718, acquired experimentally and from the literature, makes it possible to formulate mathematically the microstructural phenomena governing the deformation behavior of the alloy. The proposed model includes the effects of strain hardening, grain boundary strengthening (Hall-Petch), solid solution strengthening, phonon and electron drag, and recovery by dislocation glide and cross-slip. It is calibrated and validated using data obtained from the mechanical tests as well as values captured by the microstructural analysis. / H2020-MSCA-ITN-2017 grant agreement No. 764979 - ENABLE project
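A minimal sketch of the additive strengthening structure such a physics-based model can take; the functional forms and constants below are generic illustrations, not the thesis's calibrated model:

import math

def flow_stress_mpa(strain, grain_size_um, sigma0=300.0, k_hp=750.0,
                    K=1100.0, n=0.35):
    """Illustrative additive flow stress: lattice friction + Hall-Petch
    grain-boundary strengthening + power-law strain hardening."""
    hall_petch = k_hp / math.sqrt(grain_size_um)  # MPa, grain size in micrometres
    hardening = K * strain ** n                   # MPa
    return sigma0 + hall_petch + hardening

# Example: 10% plastic strain in a 20 micrometre grain-size microstructure.
print(f"{flow_stress_mpa(0.1, 20.0):.0f} MPa")

In a model of the kind described in the thesis, drag and recovery terms would add rate and temperature dependence on top of this quasi-static baseline.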
|
325 |
Is That Really You, Sherlock Holmes? A Corpus Stylistic and Comparative Literary Analysis Investigating the Survival of the Authentic Holmes in Contemporary Pastiches. Silfver, Amanda. January 2021.
This thesis conducts an extensive character analysis of Sherlock Holmes by comparing the original, authentic detective, as he appears in a corpus consisting of Conan Doyle's collected works about Holmes, to the characterisation in three selected period pastiches. The aim was to analyse to what extent the true characterisation of the famous sleuth has survived in contemporary adaptations, specifically in the three texts Sherlock Vs. Dracula (1976), Dr. Jekyll and Mr. Holmes (1979), and Sherlock Holmes and the Angel of the Opera (1994), in which the detective encounters equally well-known fictional characters. The novel approach of combining corpus stylistic quantitative methods of characterisation with a qualitative literary approach, identifying similar stylistic and narratological features of characterisation, efficiently illustrates how Conan Doyle's round and complex character has endured through adaptations and reimaginings. The corpus investigation of the Sherlock Conan Doyle Corpus supplied an encompassing image of the character and revealed characteristics absent from the prevailing cultural perception. The subsequent cross-comparison of the original against the contemporary characterisations revealed clear deviations from the character and demonstrated a tendency to exaggerate selected, generic features that complement the narrative and plot of the integrated novels. Overall, this study concludes that Sherlock Holmes remains a character who travels across time and genres, albeit with reduced complexity, as the respective characterisations in each of the pastiches have, to various degrees, modified core characteristics significant to the mind-modelling process. That is, through the process of adaptational alteration, the detective has become a flat character. Enough features persist for him to be recognisable and compelling, yet Sherlock Holmes in his entirety subsists merely as a caricature of his original self.
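For readers unfamiliar with the quantitative side of corpus stylistics, a minimal sketch of the kind of relative-frequency comparison such methods build on; the toy fragments below are hypothetical, not drawn from the thesis corpus:

from collections import Counter

def relative_frequencies(tokens):
    """Relative frequency of each token in a text sample."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# Hypothetical toy fragments, standing in for the Doyle corpus and a pastiche.
doyle = "you know my methods watson said holmes quietly".split()
pastiche = "holmes roared dramatically while watson gasped in awe".split()

for word in ("holmes", "watson", "quietly"):
    print(word,
          relative_frequencies(doyle).get(word, 0.0),
          relative_frequencies(pastiche).get(word, 0.0))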
|
326 |
Etudes numérique et expérimentale de la synthèse de biogaz : vers la transformation thermochimique solaire de copeaux de bois / Numerical and experimental studies of biogas synthesis: toward the solar thermochemical conversion of wood chips. Lorreyte, Clarisse. 15 December 2017.
Thermochemical conversion of lignocellulosic biomass is among the attractive technologies offering viable routes to reduce reliance on fossil energy and to valorise agricultural and forestry waste. Nevertheless, the classical gasification process via autothermal combustion of biomass has severe drawbacks: a poor energy yield and significant pollutant production (CO2, NOx, etc.). Solar concentrated energy enables high-temperature reactions with less contaminating gas and higher yield. In this context, this thesis develops experimental and numerical approaches to study the detailed mechanisms of the pyrolysis and gasification of wood-chip packed beds, a key step toward designing an efficient solar gasifier. First, the inner properties of the wood (initial composition and thermal decomposition) were studied via ultimate and proximate analyses. Structural and morphological properties of the wood chips were computed using image analysis, and the effective mass and heat transport properties of the packed bed were assessed via direct numerical simulation combined with X-ray tomographic images. A laboratory-scale device was then developed to characterise pyrolysis and gasification kinetics and gas production. The aim of this experimental work was to understand the impact of parameters such as the drying and pyrolysis temperatures and the steam flow rate during gasification. A multiphysical model of the pyrolysis of a wood-chip packed bed was also developed. It allowed a detailed study of the physics of pyrolysis and will ultimately allow the pyrolysis/gasification process to be optimised. Finally, a first design for a solar gasifier is reported and constitutes the basis of further studies.
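As a schematic illustration of the kinetics such a pyrolysis model rests on, a first-order Arrhenius devolatilisation law with made-up constants (not the thesis's calibrated parameters):

import math

R = 8.314  # universal gas constant, J/(mol*K)

def devolatilisation_rate(mass_fraction, temperature_k, A=1.0e7, Ea=1.2e5):
    """Illustrative first-order Arrhenius rate of solid mass loss (per second)."""
    return A * math.exp(-Ea / (R * temperature_k)) * mass_fraction

# Explicit-Euler integration of remaining solid mass at a constant 700 K.
m, dt = 1.0, 0.1
for _ in range(1000):  # 100 s of simulated time
    m -= devolatilisation_rate(m, 700.0) * dt
print(f"solid mass fraction remaining after 100 s: {m:.2f}")  # ~0.33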
|
327 |
EXPERIMENTAL AND NUMERICAL ANALYSIS OF ENVIRONMENTAL CONTROL SYSTEMS FOR RESILIENT EXTRA-TERRESTRIAL HABITATS. Hunter Anthony Sakiewicz. 22 April 2023.
As space exploration continues to advance, so does the drive to inhabit celestial bodies. Expanding our civilization to the Moon, or even to other planets, requires an enormous amount of research and development. The Resilient Extra-Terrestrial Habitat Institute (RETHi) is a NASA-funded project that aims to develop the technology needed to establish deep-space habitats. Deep-space habitation poses many challenges that are not present here on Earth. The Moon, for example, has temperatures that range from −233 °C to 123 °C. Beyond these extreme temperatures, a variety of thermal loads will need to be handled by the Environmental Control and Life Support System (ECLSS). Apart from the research on, and architecture of, the International Space Station's ECLSS, very little is known about disturbances related to the thermal management of extraterrestrial habitats.
RETHi is developing a Cyber-Physical Testbed (CPT) that represents a one-fifth-scale prototype of a deep-space habitat. To answer difficult research questions regarding the ECLSS and thermal management of a deep-space habitat, a heat pump was modeled and validated against the physical part of the CPT. Once validated, the heat pump model is able to accurately predict the steady-state behavior given the indoor and outdoor conditions of the testbed. When coupled with the interior environment (IE) model, it gives insight into the system's requirements and response. Experimental testing was conducted with the heat pump in order to validate the model. After validation, a series of parametric studies was conducted to investigate the effects of varying thermal loads and dehumidification. With the groundwork laid through model development and experimentation, future work consists of designing a more versatile heat pump to test a variety of disturbance scenarios. Although the heat pump model is designed specifically for the CPT, the analysis of dehumidification and pressure dependence shows that it is versatile enough for other closed, pressurized environments such as aircraft and clean rooms.
|
328 |
Federated DeepONet for Electricity Demand Forecasting: A Decentralized Privacy-Preserving Approach. Zilin Xu. 02 May 2023.
Electric load forecasting is a critical tool for power system planning and the creation of sustainable energy systems. Precise and reliable load forecasting enables power system operators to make informed decisions regarding power generation and transmission, optimize energy efficiency, and reduce operational costs and the cost of excess generation, which in turn reduces environmental impact. However, achieving the desired forecasting performance remains challenging due to the irregular, nonstationary, nonlinear, and noisy nature of the data observed under unprecedented events. In recent years, deep learning and other artificial intelligence techniques have emerged as promising approaches for load forecasting. These techniques can capture complex patterns and relationships in the data and adapt to changing conditions, thereby enhancing forecasting accuracy. As such, the use of deep learning and other artificial intelligence techniques in load forecasting has become an increasingly popular research topic in the field of power systems.
Although deep learning techniques have advanced load forecasting, the field still requires more accurate and efficient models. One promising approach is federated learning, which allows for distributed data analysis without exchanging data among multiple devices or centers. This method is particularly relevant for load forecasting, where each power station's data is sensitive and must be protected. This study introduces an approach utilizing federated DeepONets for seven different power stations: a Federated Deep Operator Network and a Langevin-dynamics-based variant that uses Stochastic Gradient Langevin Dynamics as its optimizer, trained on daily data to predict one day ahead, frequency by frequency. Evaluation metrics include the mean absolute percentage error and the percentage of coverage under the confidence interval. The findings demonstrate the potential of federated learning for secure and precise load forecasting, while also highlighting the challenges and opportunities of implementing this approach in real-world scenarios.
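A minimal sketch of the federated-averaging step that underlies schemes like this one, in plain NumPy; the per-station weights are toy values, and the thesis's actual models are DeepONets trained per station:

import numpy as np

def federated_average(station_weights, station_sizes):
    """FedAvg-style aggregation: a size-weighted average of per-station
    model parameters. Raw station data never leaves its owner; only the
    parameters are shared with the aggregator."""
    total = float(sum(station_sizes))
    n_layers = len(station_weights[0])
    return [
        sum(w[layer] * (size / total)
            for w, size in zip(station_weights, station_sizes))
        for layer in range(n_layers)
    ]

# Toy example: two stations, each holding a two-layer parameter list.
w_a = [np.ones((2, 2)), np.zeros(2)]
w_b = [np.full((2, 2), 3.0), np.ones(2)]
print(federated_average([w_a, w_b], station_sizes=[100, 300]))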
|
329 |
COMPUTATIONAL MODELING OF SKIN GROWTH TO IMPROVE TISSUE EXPANSION RECONSTRUCTION. Tianhong Han. 29 April 2023.
Breast cancer affects 12.5% of women over their lifetime, and tissue expansion (TE) is the most common technique for breast reconstruction after mastectomy. However, the rate of complications with TE can be as high as 15%. Even though the first documented case of TE dates to 1957, no standardized procedure has been established, owing to the variation among patients; TE protocols are currently designed based on the surgeon's experience. There are several computational and theoretical frameworks modeling skin growth in TE, but these tools are not used in the clinical setting. This dissertation focuses on bridging the gap between existing skin growth modeling efforts and their potential application in the clinical setting.
We started by calibrating a skin growth model on porcine skin expansion data. We built a predictive finite element model of tissue expansion and tested two types of model: isotropic and anisotropic. Calibration was done in a probabilistic framework, allowing us to capture the inherent biological uncertainty of living tissue. We hypothesized that the skin growth rate was proportional to stretch, and indeed the Bayesian calibration process confirmed that this conceptual model best explained the data.
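A minimal sketch of a stretch-driven growth law of the kind hypothesized here; the rate constant and threshold are illustrative, whereas the dissertation calibrates such parameters in a Bayesian framework:

def growth_rate(stretch, k=0.1, stretch_crit=1.1):
    """Illustrative growth law: the rate of area growth is proportional
    to stretch beyond a homeostatic threshold (zero below it)."""
    return k * max(stretch - stretch_crit, 0.0)

# Explicit-Euler integration of the grown-area fraction over 10 days of
# sustained 30% overstretch (time in days).
theta, dt = 1.0, 0.01
for _ in range(1000):
    theta += growth_rate(1.3) * theta * dt
print(f"grown area fraction after 10 days: {theta:.2f}")  # ~1.22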
Although the initial model described the macroscale response, it did not consider any activity at the cellular level. To account for the underlying cellular mechanisms at the microscopic scale, we established a new system of differential equations describing the dynamics of the key mechanosensing pathways that we observed to be activated in the porcine model. We calibrated the parameters of the new model on the porcine skin data. The refined model still reproduces the observed macroscale changes in tissue growth, but now based on mechanistic knowledge of cell mechanobiology.
Lastly, we demonstrated how our skin growth model can be used in a clinical setting. We created TE simulations matching the protocol used in human patients and compared the results with clinical data, with good agreement. We then established a personalized model built from 3D scans of a patient's unique geometry. We verified our model by comparing the predicted skin growth area with the area of the skin harvested in the procedure, again with good agreement.
Our work shows that skin growth modeling can be a powerful tool to help surgeons design TE procedures before they are actually performed. The simulations can help optimize the protocol to guarantee that the correct amount of skin is grown in the shortest time possible, without subjecting the skin to deformations that could compromise the procedure.
|
330 |
BAYESIAN OPTIMIZATION FOR DESIGN PARAMETERS OF AUTOINJECTORS. Heliben Naimeshkum Parikh. 24 April 2023.
This document describes a computational framework for optimizing spring-driven autoinjectors. It uses Bayesian optimization for the efficient and cost-effective design of autoinjectors.
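A minimal sketch of how Bayesian optimization might drive such a design loop, using scikit-optimize's gp_minimize; the objective, toy injection-time model, and parameter bounds are hypothetical, not taken from the document:

from skopt import gp_minimize

def injection_objective(params):
    """Hypothetical stand-in for an autoinjector simulation: penalise
    injection times far from a 5 s target."""
    spring_constant, plunger_diameter_mm = params
    simulated_time_s = 50.0 / (spring_constant * plunger_diameter_mm)  # toy model
    return (simulated_time_s - 5.0) ** 2

result = gp_minimize(
    injection_objective,
    dimensions=[(0.5, 5.0), (1.0, 10.0)],  # spring constant, plunger diameter
    n_calls=25,
    random_state=0,
)
print("best parameters:", result.x, "objective:", result.fun)

Bayesian optimization is well suited here because each real objective evaluation (a physical test or a detailed simulation) is expensive, so a surrogate-guided search keeps the number of evaluations small.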
|