121 |
Algoritmo de busca dispersa aplicado ao problema de fluxo de potência ótimo considerando o desligamento de linhas de transmissão / Scatter search algorithm applied to the optimal power flow problem considering the switching of transmission lines. Garcia, André Mendes, January 2019.
Advisor: Rubén Augusto Romero Lázaro / Abstract: The main objective of this work is the implementation of a methodology that, using the scatter search (SS) meta-heuristic, solves the optimal power flow (OPF) problem considering transmission switching (TS) to reduce operating costs. To evaluate the potential of the meta-heuristic, the SS algorithm was implemented to optimize constrained multimodal functions, a methodology called BD-FMR, and to solve the OPF problem, a methodology called BD-FPO. Eleven constrained multimodal problems available in the specialized literature were solved using the BD-FMR method, and the results obtained are comparable with the best results available in the literature. The OPF problem was solved by the BD-FPO methodology using three test systems with 6, 14, and 57 buses; the results were not satisfactory when compared to the solutions of the exact formulation of the problem obtained by the KNITRO solver. However, the BD-FPO algorithm served as the basis for the implementation of the main method of this work. Finally, the BD-OTS method was implemented in the C/C++ programming language, using parallel programming resources through the OpenMP library. In this work, the formulation used to represent the operation of the grid considers the alternating current (AC) model, which results in a mixed-integer nonlinear programming (MINLP) problem due to the presence of discrete variables related to the operating state of a line, the transformer tap position and the operating state of the s... (Complete abstract: click electronic access below)
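A rough sketch of the scatter search template the abstract describes (diversification, reference set, subset combination, local improvement) may help; the objective, operators and parameters below are illustrative placeholders, not the BD-OTS implementation.

```python
import random

def scatter_search(objective, sample, combine, improve,
                   pop_size=50, ref_size=10, iters=100):
    """Generic scatter search skeleton (illustrative sketch, not BD-OTS).

    objective: solution -> cost (lower is better)
    sample:    () -> random diverse solution
    combine:   (a, b) -> trial solution built from two reference solutions
    improve:   solution -> locally improved solution
    """
    # Diversification: build and improve an initial population
    pop = [improve(sample()) for _ in range(pop_size)]
    # Reference set: here just the best solutions; a real implementation
    # also reserves slots for diverse (distant) solutions
    ref = sorted(pop, key=objective)[:ref_size]
    for _ in range(iters):
        # Subset generation + combination: pair up reference solutions
        trials = [improve(combine(a, b))
                  for i, a in enumerate(ref) for b in ref[i + 1:]]
        # Reference set update: keep the best of old refs and new trials
        ref = sorted(ref + trials, key=objective)[:ref_size]
    return ref[0]

# Toy usage: minimise (x - 3)^2 over integers in [-10, 10]
best = scatter_search(
    objective=lambda x: (x - 3) ** 2,
    sample=lambda: random.randint(-10, 10),
    combine=lambda a, b: (a + b) // 2,
    improve=lambda x: min(x - 1, x, x + 1, key=lambda v: (v - 3) ** 2),
)
print(best)  # typically 3
```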
|
122 |
Deformation behaviour and twinning mechanisms of commercially pure titanium alloys. Battaini, Michael, January 2008.
The deformation behaviour and twinning mechanisms of commercially pure titanium alloys were investigated using complementary diffraction techniques and crystal plasticity modelling. The main motivation for conducting this investigation was to improve understanding of the deformation of titanium to help achieve the long term aim of reducing manufacturing and design costs. The deformation behaviour was characterised with tension, compression and channel die compression tests for three important variables: orientation; temperature from 25 °C to 600 °C; and composition for two contrasting alloys, CP-G1 and CP-G4. The experimental data used to characterise the behaviour and determine the mechanisms causing it were: textures determined by X-ray diffraction; twin area fractions for individual modes determined using electron back-scatter diffraction; and lattice strains measured by neutron diffraction.

A strong effect of the orientation–stress state conditions on the flow curves (flow stress anisotropy) was found. The propensity for prism ⟨a⟩ slip was the dominant cause of the behaviour – samples that were more favourably oriented for prism ⟨a⟩ slip had lower flow stresses. Twinning was the most significant secondary deformation mode in the CP-G1 alloy but only had a minor effect on flow stress anisotropy in most cases. In the CP-G4 alloy twinning generally did not play a significant role, indicating that ⟨c + a⟩ slip modes were significant in this alloy. Differences in the flow stress anisotropy between the two alloys were found to occur largely in the elasto-plastic transition and initial period of hardening. Modelling results indicated that larger relative resolved shear stress values for secondary deformation modes in the higher purity alloy increased the initial anisotropy. Decreasing flow stresses with increasing temperature were largely caused by a decrease in the critical resolved shear stress (CRSS) values for slip, but also by a decrease in the Hall-Petch parameter for slip.

The propagation of twinning was found to be orientation dependent through a Schmid law in a similar way to slip – it was activated at a CRSS and hardened so that an increasing resolved shear stress was required for it to continue operating. The CRSS values determined for the individual twin modes were 65 MPa, 180 MPa and 83 MPa for {10-12}, {11-22} and {10-11} twinning, respectively. Further, twinning was found to be temperature insensitive except when the ability to nucleate twins posed a significant barrier (for {10-11} twinning). Also, the CRSS for {10-12} twinning was clearly shown to increase with decreasing alloy purity.

A thorough method for determining crystal plasticity modelling parameters based on experimental data was formulated. Additionally, twinning was modelled in a physically realistic manner influenced by the present findings using the visco-plastic self-consistent (VPSC) model. In particular: the activity of twinning decreased in a natural way due to greater difficulty in its operation rather than through an enforced saturation; and hardening or softening due to changes in orientation and dynamic Hall-Petch hardening were important. The rigorous modelling procedure gave great confidence in the key experimental findings.
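The Schmid-law activation criterion referred to above has the standard form below. As a worked example using the quoted numbers, a grain with Schmid factor cos φ cos λ = 0.45 would need an applied uniaxial stress of roughly 65/0.45 ≈ 144 MPa to activate {10-12} twinning.

```latex
% Schmid-law activation criterion (standard form): a slip or twin mode
% activates when its resolved shear stress reaches the critical value.
% sigma = applied stress, phi = angle between loading axis and plane normal,
% lambda = angle between loading axis and shear (twinning) direction.
\tau = \sigma \cos\phi \cos\lambda \;\ge\; \tau_{\mathrm{CRSS}}
```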
|
123 |
Novel computational methods for image analysis and quantification using position sensitive radiation detectors. Sanchez Crespo, Alejandro, January 2005.
The major advantage of position sensitive radiation detector systems lies in their ability to non-invasively map the regional distribution of the emitted radiation in real time. Three such detector systems were studied in this thesis: gamma cameras, positron cameras and CMOS image sensors. A number of physical factors associated with these detectors degrade the qualitative and quantitative properties of the obtained images. These blurring factors can be divided into two groups. The first group consists of the general degrading factors inherent to the physical interaction processes of radiation with matter, such as scatter and attenuation, which are common to all three detectors. The second group consists of specific factors inherent to the particular radiation detection properties of the detector used, which have to be studied separately for each detector system. This thesis was therefore devoted to the development of computational methods to enable quantitative molecular imaging in PET and SPET and in vivo patient dosimetry with CMOS image sensors.

The first task was to develop a novel quantitative dual isotope method for simultaneous assessment of regional lung ventilation and perfusion using a SPET technique. This method included correction routines for photon scattering, non-uniform attenuation at two different photon energies (140 and 392 keV) and organ outline. The method was validated both with phantom experiments and with physiological studies on healthy subjects.

The second task was to develop and clinically apply a quantitative method for tumour-to-background activity uptake measurements using planar mammo-scintigraphy, with partial volume compensation.

The third stage was to produce several computational models to assess the spatial resolution limitations in PET arising from the positron range, annihilation photon non-collinearity and photon depth of interaction.

Finally, a quantitative image processing method for a CMOS image sensor for applications in ion beam therapy dosimetry was developed.

From the phantom and physiological results obtained, it was concluded that the methodologies developed for the simultaneous measurement of lung ventilation and perfusion and for the quantification of the tumour malignancy grade in breast carcinoma were both accurate. Further, the models obtained for the influence that the positron range in various human tissues, photon emission non-collinearity and depth of interaction have on PET image spatial resolution could be used both to optimise future PET camera designs and in spatial resolution recovery algorithms. Finally, it was shown that the proton fluence rate in a proton therapy beam could be monitored and visualised using a simple and inexpensive CMOS image sensor.
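For context, a widely quoted rule of thumb from the PET instrumentation literature (not a result of this thesis) combines the blurring sources listed above in quadrature to estimate reconstructed spatial resolution:

```latex
% Approximate PET spatial resolution (FWHM); d = crystal width,
% r = effective positron range, b = block-decoding error, and
% 0.0022 D = annihilation-photon non-collinearity for ring diameter D;
% the factor 1.25 accounts for reconstruction.
\mathrm{FWHM} \;\approx\; 1.25\,\sqrt{\left(\tfrac{d}{2}\right)^{2} + (0.0022\,D)^{2} + r^{2} + b^{2}}
```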
|
124 |
Corrections for improved quantitative accuracy in SPECT and planar scintigraphic imaging. Larsson, Anne, January 2005.
A quantitative evaluation of single photon emission computed tomography (SPECT) and planar scintigraphic imaging may be valuable for both diagnostic and therapeutic purposes. For accurate quantification it is usually necessary to correct for attenuation and scatter, and in some cases also for septal penetration. For planar imaging, a background correction for the contribution from over- and underlying tissues is needed. In this work a few correction methods have been evaluated and further developed; much of the work relies on the Monte Carlo method as a tool for evaluation and optimisation.

A method for quantifying the activity of I-125 labelled antibodies in a tumour inoculated in the flank of a mouse, based on planar scintigraphic imaging with a pin-hole collimator, has been developed, and two different methods for background subtraction have been compared. The activity estimates of the tumours were compared with measurements in vitro.

The major part of this work is devoted to SPECT. A method for attenuation and scatter correction of brain SPECT based on computed tomography (CT) images of the same patient has been developed, using an attenuation map calculated from the CT image volume. The attenuation map is utilised not only for attenuation correction but also for scatter correction with transmission dependent convolution subtraction (TDCS). A registration method based on fiducial markers, placed on three chosen points during the SPECT examination, was evaluated. The scatter correction method, TDCS, was then optimised for regional cerebral blood flow (rCBF) SPECT with Tc-99m, and was also compared with a related method, convolution scatter subtraction (CSS). TDCS has been claimed to be an iterative technique; this, however, requires some modifications of the method, which have been demonstrated and evaluated for a simulation with a point source.

When the Monte Carlo method is used to evaluate corrections for septal penetration, it is important that interactions in the collimator are taken into account. A new version of the Monte Carlo program SIMIND with this capability has been evaluated by comparing measured and simulated images and energy spectra. This code was later used to evaluate a few different methods for correction of scatter and septal penetration in I-123 brain SPECT: CSS, TDCS and a method in which corrections for scatter and septal penetration are included in the iterative reconstruction. This study shows that quantitative accuracy in I-123 brain SPECT benefits from separate modelling of scatter and septal penetration.
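The convolution-subtraction idea behind CSS can be sketched in a few lines; TDCS replaces the single global scatter fraction k below with a per-pixel fraction derived from the measured transmission map. The scatter fraction and kernel width here are illustrative placeholders, not the values calibrated in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def css_correct(photopeak, k=0.3, sigma_px=8.0, n_iter=1):
    """Convolution scatter subtraction (CSS), sketched: estimate scatter
    as a scaled, blurred copy of the image and subtract it.
    k (scatter fraction) and sigma_px (kernel width) are illustrative;
    in practice both are calibrated for the camera and isotope."""
    img = photopeak.astype(float)
    for _ in range(n_iter):
        scatter = k * gaussian_filter(img, sigma_px)  # scatter estimate
        img = photopeak - scatter                     # refined correction
    return np.clip(img, 0.0, None)                    # no negative counts

# Toy usage on a synthetic 64x64 point-source image with added "scatter"
image = np.zeros((64, 64))
image[32, 32] = 1000.0
measured = image + 0.3 * gaussian_filter(image, 8.0)
corrected = css_correct(measured)
```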
|
126 |
Implementation and evaluation of scatter estimation algorithms in positron emission tomography. Tsoumpas, Charalampos (Τσούμπας, Χαράλαμπος), 27 August 2009.
In positron emission tomography (PET), the current trend is to use the fully 3D capabilities of the scanner to increase sensitivity and hence improve the quality of the data or reduce the scanning time. However, some difficulties have to be resolved. In 3D PET, the largest contributor to image degradation is Compton scatter, since scattered photons may comprise more than 50% of all coincidences in whole-body studies. Much progress has been achieved in recent years through the use of scatter correction algorithms such as single scatter simulation (SSS). In this work, a model-based scatter simulation (MBSS) algorithm, initially based on SSS, has been implemented in a software library called STIR (Software for Tomographic Image Reconstruction).

The aim of the current work is to validate the MBSS implementation, investigate the influence of several parameters and, if possible, extend the existing algorithm. The results are compared with both the SimSET Monte Carlo simulation package and measured data. The comparison shows that SSS is in excellent agreement with the single scatter distribution produced by SimSET and in several cases can also accurately approximate the total scatter.
However, SSS is only an approximation to the total Compton scatter effect, since both photons may scatter, potentially more than once. As shown, the single scatter distribution may have a different shape from the total scatter distribution, and how accurate the approximation is depends on how many detected photons are scattered multiple times. Multiple scatter is more likely when the attenuating medium has a large volume, and it is therefore more severe in 3D studies of the torso than of the brain. In this work, the methodology used for the single scatter simulation algorithm is extended to handle twice-scattered events. A detailed description of how to implement the double scatter simulation (DSS) is included, together with a preliminary evaluation. The results are promising, even though the required computational time for DSS is much higher than for SSS, though not prohibitively so. Finally, an efficient recursive formula is proposed to estimate the remaining multiple scatter distribution.
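For intuition about why a recursive estimate of the remaining orders can be cheap, suppose, purely for illustration (this assumption is not taken from the thesis), that each scatter order contributes a fixed fraction k of the previous one; the tail then sums in closed form:

```latex
% Geometric-tail illustration: S_1 = single scatter (from SSS), 0 <= k < 1;
% each higher order is assumed to be a fraction k of the previous one.
S_{\mathrm{total}} \;=\; \sum_{n \ge 1} S_1\,k^{\,n-1} \;=\; \frac{S_1}{1-k}
```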
|
127 |
Étude des artefacts en tomodensitométrie par simulation Monte Carlo / A study of artifacts in computed tomography using Monte Carlo simulation. Bedwani, Stéphane, 08 1900.
Computed tomography (CT) is widely used in radiotherapy to acquire patient-specific data for accurate dose calculation in treatment planning. To account for the composition of heterogeneous tissues, calculation techniques such as the Monte Carlo method are needed to compute an exact dose distribution. To use CT images with dose calculation algorithms, all voxel values, expressed in Hounsfield units (HU), must be converted into relevant physical parameters such as electron density (ED). This conversion is typically accomplished by means of a HU-ED calibration curve. Any discrepancy (or artifact) that appears in the reconstructed CT image prior to calibration is liable to yield wrongly assigned tissues, and such tissue misassignment may crucially decrease the reliability of the dose calculation.

The aim of this work is to assign exact physical values to CT image voxels to ensure the reliability of dose calculation in radiotherapy treatment planning. To achieve this, the origins of CT artifacts are first studied using Monte Carlo simulations. Such simulations require considerable computational time and were parallelized to run efficiently on a supercomputer. A sensitivity study of HU uncertainties in the presence of artifacts is then performed using statistical analysis of the image histograms. Beam hardening appears to be the origin of several artifacts and is specifically addressed. Finally, a review of the state of the art in beam hardening correction is presented and an empirical correction is described in detail.
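In practice, the HU-ED conversion described above is a piecewise-linear lookup. A minimal sketch follows; the calibration points are invented placeholders, as real values come from a scanner-specific calibration phantom.

```python
import numpy as np

# Placeholder HU -> relative electron density calibration points
# (illustrative only; real curves are measured with a CT phantom).
hu_points = np.array([-1000.0, 0.0, 1000.0, 3000.0])  # air, water, bone-like
ed_points = np.array([0.001, 1.0, 1.6, 2.5])          # relative to water

def hu_to_ed(hu_image):
    """Piecewise-linear HU -> electron density conversion, clamped at the
    ends of the calibration range (np.interp clamps by default)."""
    return np.interp(hu_image, hu_points, ed_points)

voxels = np.array([-1000.0, -50.0, 40.0, 1200.0])
print(hu_to_ed(voxels))
```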
|
128 |
Simulations of a back scatter time of flight neutron spectrometer for the purpose of concept testing at the NESSA facility. Eriksson, Benjamin, January 2018.
A back-scatter time-of-flight neutron spectrometer consisting of two scintillation detectors is simulated in Geant4 to examine whether it is possible to perform a proof-of-concept test at the NESSA facility at Uppsala University. An efficiency of ε = 2.45 · 10^-6 is shown to be large enough, at a neutron generator intensity of 1.9 · 10^10 neutrons per second, to achieve the minimum required signal count rate of 10,000 counts per hour. A corresponding full width at half maximum energy resolution of 8.3% is found. The background in one of the detectors is simulated in MCNP and found to be a factor of 62 larger than the signal for a given set of pulse-height thresholds in the detectors. Measures to increase the signal-to-background ratio are discussed, and an outlook for future work on testing the spectrometer at NESSA is presented.
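The energy reconstruction in such a time-of-flight spectrometer rests on the classical relation E = ½m(d/t)². A minimal sketch follows; the flight path and timing values are invented for illustration, and relativistic corrections (a small effect at a few MeV) are ignored.

```python
# Non-relativistic time-of-flight energy reconstruction: E = (1/2) m (d/t)^2
NEUTRON_MASS_KG = 1.674927e-27   # CODATA neutron mass
JOULES_PER_MEV = 1.602177e-13

def neutron_energy_mev(flight_path_m, time_of_flight_s):
    """Neutron kinetic energy (MeV) from flight path and time of flight."""
    v = flight_path_m / time_of_flight_s
    return 0.5 * NEUTRON_MASS_KG * v**2 / JOULES_PER_MEV

# Illustrative numbers: a 2.45 MeV neutron covers 1.0 m in about 46 ns.
print(neutron_energy_mev(1.0, 46.2e-9))  # ~2.45 MeV
```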
|
129 |
Mécanismes d'activation du récepteur tyrosine kinase MET par son ligand l'HGF/SF : rôles des domaines N et K1 / MET receptor activation mechanisms by HGF/SF: new insights into the contributions of the N and K1 domains. Simonneau, Claire, 25 September 2015.
Hepatocyte Growth Factor/Scatter Factor (HGF/SF) and its receptor tyrosine kinase (RTK) MET play an essential role in embryogenesis, tissue regeneration and angiogenesis. As observed for many other RTKs, MET is also strongly involved in tumor progression and invasion. Although numerous biological and structural studies have focused on the molecular processes leading to MET activation by HGF/SF, the HGF/SF-MET interaction framework remains only partially understood because of the complexity of the multivalent ligand-receptor binding events.

NK1, a naturally occurring splice variant of HGF/SF comprising the N-terminal part and the first kringle domain (K1) of HGF/SF, exhibits partial agonistic activity toward MET. Indeed, in the presence of heparan sulfates, NK1 self-associates into a "head-to-tail" dimer and is considered the minimal structural module able to trigger MET dimerization and activation. Nevertheless, the individual roles of the N and K1 domains in the dimerization and activation of MET remain elusive.

Convinced that monomeric N and K1 domains are not suitable for studying the functioning of HGF/SF-MET, we produced, by total chemical synthesis, biotinylated analogs of the N and K1 domains (NB and K1B). By combining these with streptavidin (S), we engineered the semisynthetic constructs NB/S and K1B/S in order to determine the biological properties of these new multivalent architectures of the N and K1 domains.

In vitro, as observed with HGF/SF or NK1, we show that the K1B/S complex is able to fully activate MET signaling cascades, promoting scattering, morphogenesis and survival phenotypes in various cell types. Moreover, the K1B/S complex stimulates angiogenesis in vivo and, when injected systemically, triggers MET signaling in the liver. The use of this K1B/S complex allowed us to demonstrate that two K1 domains, correctly assembled and oriented, constitute the minimal interface sufficient to trigger full MET activation. In contrast, initial in vitro data demonstrated that the NB/S complex does not bind MET directly, as previously thought, but rather uses heparan sulfates as a molecular bridge.

We envision these new structural configurations serving as a template for the rational design of both potent MET agonists (e.g. using K1B/S for regenerative therapies) and antagonists (e.g. using NB/S for targeted cancer therapies).
|
130 |
Introdução à econometria no Ensino Médio: aplicações da regressão linear / An introduction to econometrics in high school: applications of linear regression. Will, Ricardo de Souza, January 2016.
Advisor: Prof. Dr. André Ricardo Oliveira da Fonseca / Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Mestrado Profissional em Matemática em Rede Nacional, 2016. / This dissertation proposes, and provides supporting material for, topics involving Statistics and Economics aimed at Mathematics teachers of the 3rd year of high school, with Econometrics as the suggested theme, specifically simple and multiple linear regression. These concepts are broad enough to let students understand a range of applications, recognize linear phenomena and use regression to make predictions.

Chapter 1 reviews statistical concepts, emphasizing the standard deviation, confidence intervals and hypothesis testing.

Chapter 2 presents the concepts of Econometrics: the origin of the word, the econometric analysis of a mathematical model, and the goals and methodology of econometrics, with particular attention to Keynes and his postulates on the marginal propensity to consume and to save. It also has students apply their knowledge of collecting and tabulating data on the observed variables, constructing the scatter plot, fitting a line through the points, and determining the parameters and the equation of the line.

Chapter 3 deals with the simple linear regression model. It first gives special attention to Galton, who originated the concept of correlation, and then moves on to the calculation of the parameters, the residuals, the variance, the standard deviation, and the coefficients of correlation and determination, using the statistical concepts recalled in Chapter 1.

Chapter 4 treats the multiple linear regression model and discusses the differences that arise with two or more explanatory variables; here the teacher may review the concepts of matrices.

Finally, Chapter 5 presents the lesson plan, the choice of target audience and school, the calendar and schedule of activities with the students, and the results obtained.
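Since chapter 3 centres on fitting a line through the scatter plot, a minimal sketch of the least-squares computation the abstract alludes to may help. The data are invented for illustration; in the Keynesian setting of chapter 2, b1 would play the role of the marginal propensity to consume.

```python
import numpy as np

# Invented sample data: x = income, y = consumption (illustrative only)
x = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
y = np.array([9.0, 12.5, 17.0, 20.0, 24.5])

# Least-squares estimates for the line y = b0 + b1*x
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

# Coefficient of determination R^2 from the residuals
resid = y - (b0 + b1 * x)
r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
print(b0, b1, r2)  # b1 = 0.77, b0 = 1.2 for these data
```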
|