131 |
Modelování metod číslicového zpracování obrazu u digitální radiografie / Digital radiography image processing simulation. Lamoš, Martin, January 2010 (has links)
This paper describes a MATLAB application whose main purpose is to simulate the noise components of digital radiography and the methods used to eliminate them. The main parts of the simulator are a model of a scene, procedures for adding the noise components to the image data, and image-processing methods. Different methods are employed depending on the type of noise: subtraction techniques eliminate structural noise, physical noise is suppressed by several methods of cumulation, and pixel shift reduces motion artifacts caused by moving noise. Superposition techniques highlight the areas of interest in an image. Auxiliary procedures for running the simulator and presenting the final data are also included. The model and the presented application are intended mainly for educational purposes as a powerful didactic tool.
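The two noise-elimination strategies the abstract names, cumulation for random physical noise and subtraction for structural noise, can be sketched in a few lines. Everything below (the toy 64x64 scene, the noise levels, the frame count) is illustrative and is not taken from the MATLAB application itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 64x64 "scene": a bright disk on a dark background.
y, x = np.mgrid[:64, :64]
scene = (((x - 32) ** 2 + (y - 32) ** 2) < 200).astype(float)

# Structural noise: a fixed background pattern superimposed on every frame.
structure = 0.5 * np.sin(x / 5.0)

def noisy_frame():
    """One acquired frame: scene + structural noise + random physical noise."""
    return scene + structure + rng.normal(0.0, 0.3, scene.shape)

# Cumulation: averaging N frames reduces the random-noise std by ~sqrt(N).
frames = np.stack([noisy_frame() for _ in range(64)])
averaged = frames.mean(axis=0)

# Subtraction technique: removing the structural component (here the known
# pattern; in practice a mask image acquired without the object of interest).
subtracted = averaged - structure

residual_single = np.std(noisy_frame() - scene - structure)
residual_avg = np.std(averaged - scene - structure)
```

With 64 averaged frames the random residual drops by roughly a factor of eight, while the subtraction step removes the fixed pattern regardless of averaging.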
|
132 |
Neural Evidence for the Influence of Communication on Cognitive Processing as Proposed by Quantum Cognition Theory. Borghetti, Lorraine, 09 October 2019 (has links)
No description available.
|
133 |
Untersuchungen zum laminar-turbulenten Transitionsprozess bei Anregung und Dämpfung schräglaufender Tollmien-Schlichting-Wellen / Investigations of the laminar-turbulent transition process with excitation and damping of oblique Tollmien-Schlichting waves. Knörnschild, Ulrich, 21 January 2002 (has links)
Als Teilprojekt des Themenkreises III "Transitionskontrolle" des Schwerpunkt-Forschungsprogramms "Transition" der Deutschen Forschungsgemeinschaft konzentriert sich diese Arbeit auf experimentelle Grundlagenuntersuchungen zum laminar-turbulenten Grenzschichtumschlag. Die Experimente wurden in der Grenzschicht einer ebenen, parallel angeblasenen, hydraulisch glatten Platte durchgeführt. Einen besonderen Schwerpunkt bildet die Abhängigkeit der Entwicklung der Instabilitäten, der sogenannten Tollmien-Schlichting-Wellen, von deren Schräglaufwinkel zur Plattenvorderkante. Weiterhin wird der Einfluss zahlreicher Parameter wie z.B. des Schalldruckpegels und der Anregungsfrequenz diskutiert. Die Anregung der Tollmien-Schlichting-Wellen erfolgte über periodisches Ausblasen/Ansaugen von Luft durch oberflächenbündige Schlitze quer zur Strömungsrichtung. Mit einem zeitlich hochauflösenden, restlichtverstärkenden Kamerasystem konnten Aufnahmen der Strömungsvisualisierung erzielt werden, die unter anderem die zeitliche Entwicklung von Wirbelstrukturen (Lambda-Wirbel) zeigen. Zur Analyse der experimentell gewonnenen Daten werden vergleichend Berechnungen nach der linearen Stabilitätstheorie diskutiert. Einen weiteren Schwerpunkt bilden Untersuchungen zur aktiven Transitionskontrolle. Dabei wird der künstlich angeregten Tollmien-Schlichting-Welle eine gegenphasige Störwelle stromab überlagert. Es konnte nachgewiesen werden, dass mit diesem Verfahren entsprechend dem Superpositionsprinzip die anfängliche Störamplitude der Tollmien-Schlichting-Welle deutlich reduziert wird; es kommt zu einer fast vollständigen Störauslöschung. Untersuchungen im Nahfeld der Störeinkopplung, sowohl im Bereich der Anregung als auch der gegenphasigen Dämpfungseinkopplung, zeigen deren Auswirkung auf die Entwicklung der Grenzschicht.
/ As a sub-project of working group III, "Transition Control", of the German Research Foundation's priority programme "Transition", this work focuses on fundamental experimental investigations of laminar-turbulent transition. The experiments were carried out in the boundary layer of a flat plate with tangential blowing. The main topic is the development of instabilities, the so-called Tollmien-Schlichting waves (TSWs), as a function of the oblique angle between the TSWs and the leading edge. In addition, the influence of other parameters, including the sound-pressure level and the frequency of the TSWs, is discussed. The instabilities are initialised by periodic suction and blowing through flush surface slots transverse to the flow direction. Pictures of the flow visualisation, recorded with a time-resolving, image-intensified camera system, show the temporal development of vortex structures (lambda vortices) within the boundary layer. To analyse the experimental data, a comparison is made with calculations based on linear stability theory. Another main topic is the investigation of active transition control: following the superposition principle, a second wave with opposite phase is superimposed on the TSW downstream. It is demonstrated that this technique works with oblique waves too, and the initialised instabilities can be almost completely cancelled out. Investigations with high spatial resolution very close to the slots that introduce the excitation and the antiphase damping wave show their influence on the development of the boundary layer.
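The active-control idea described above, superimposing an antiphase disturbance on the artificially excited wave, is plain linear superposition and can be illustrated numerically; the frequency and amplitudes below are arbitrary, not the experimental values:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
f = 5.0  # illustrative excitation frequency [Hz]

# Artificially excited instability wave (a stand-in for a TS-wave amplitude).
wave = 0.8 * np.sin(2 * np.pi * f * t)

# Control wave introduced downstream, in antiphase (180 degree phase shift).
control = 0.8 * np.sin(2 * np.pi * f * t + np.pi)

# Linear superposition: near-complete cancellation of the disturbance.
residual = wave + control

# With a small amplitude/phase mismatch, cancellation is only partial,
# which is why the experiment tunes the antiphase input carefully.
imperfect = wave + 0.7 * np.sin(2 * np.pi * f * t + 0.95 * np.pi)
```

The residual of the perfectly matched pair is zero to machine precision, while even a modest mismatch leaves a measurable disturbance.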
|
134 |
AN EXPERIMENTAL INVESTIGATION OF MULTIPLE MODE EXCITATION OF AN INTEGRALLY BLADED DISK. Garafolo, Nicholas Gordon, January 2006 (has links)
No description available.
|
135 |
Dynamic Mechanical Thermal Analysis of Polyoxymethylene. Asare, Richard, January 2024 (has links)
This thesis, conducted in collaboration with IKEA Components, explores the rate-temperature dependence of polyoxymethylene (POM) thermoplastic using dynamic mechanical thermal analysis (DMTA). A force-controlled DMTA study was carried out, and the experimental data were processed to derive the complex, storage, and loss moduli. Master curves were constructed using the time-temperature superposition (TTS) method, comparing the Arrhenius and Williams-Landel-Ferry (WLF) equations. Additionally, a master curve was created by manually shifting isothermal material properties; this manual curve was then compared to those generated with the standard equations. The study found that the storage modulus was the dominant response in POM, while the loss modulus showed distortion, likely due to measurement noise. The results indicated a slight softening of the storage modulus with increased cyclic loading. The manually constructed master curve was more coherent than those derived from the Arrhenius and WLF equations.
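The two shift-factor models compared in this thesis can be sketched as follows. The constants used (the "universal" WLF C1 and C2, an assumed activation energy, and the reference temperature) are placeholders for illustration, not the values fitted to POM in the work:

```python
import numpy as np

R = 8.314  # gas constant [J/(mol K)]

def log_aT_wlf(T, T_ref, C1=17.44, C2=51.6):
    """WLF horizontal shift factor, log10(a_T). C1, C2 are the 'universal'
    constants (nominally valid with T_ref = Tg), used purely for illustration."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def log_aT_arrhenius(T, T_ref, Ea=200e3):
    """Arrhenius shift factor, log10(a_T). Ea [J/mol] is an assumed
    activation energy, not a measured POM value."""
    return (Ea / (R * np.log(10))) * (1.0 / T - 1.0 / T_ref)

T_ref = 223.0  # assumed reference temperature [K]
for T in (213.0, 223.0, 233.0):
    print(T, log_aT_wlf(T, T_ref), log_aT_arrhenius(T, T_ref))

# Master-curve construction: each isotherm measured at T is moved along the
# log(frequency) axis by log(a_T) so all curves collapse onto one at T_ref.
```

Isotherms above the reference temperature shift to shorter times (negative log a_T), those below to longer times, which is what lets short tests cover a wide effective frequency range.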
|
136 |
Accelerated Testing Method to Estimate the Lifetime of Polyethylene Pipes. Kalhor, Roozbeh, 26 June 2017 (has links)
The ability to quickly predict the time-to-failure under different loading levels allows designers to choose the best polymeric material for a specific application. It also helps material producers design, manufacture, test, and modify a polymeric material more rapidly. For polymeric pipes, previous studies have shown that there are two possible time-dependent failure mechanisms, corresponding to ductile and brittle failure. The ductile mechanism appears at shorter times-to-failure and results from the stretching of the amorphous region under loading and the subsequent plastic deformation. Empirical results show that many high-performance polyethylene (PE) materials do not exhibit the brittle failure mechanism. Hence, it is critical to understand the ductile mechanism and find an approach for predicting the corresponding times-to-failure by accelerated means. The aim of this study is to develop a rupture-lifetime acceleration protocol for PE pipes that is sensitive to the changes in structure, orientation, and morphology introduced by varying processing conditions. To accomplish this, custom fixtures are developed to enable tensile and hoop-burst tests on PE pipes. A pressure-modified Eyring flow equation is used to predict the rupture lifetime of PE pipes from the mechanical properties measured under axial tensile and hydrostatic pressure loading at different temperatures and strain rates. In total, the experimental method takes approximately one week to complete and allows the prediction of pipe lifetimes for service in excess of 50 years. / Master of Science / Steel, cast and galvanized iron, and asbestos cement (AC) pipelines have historically been used in water management services. However, they often deteriorate because of corrosion and encrustation, resulting in 23 to 27 bursts per 100 miles of pipeline in the US per year.
Therefore, plastic pipes were developed to carry liquids (water and sewage), gases, etc. The Plastic Pipe Institute (PPI) requires a service life of at least 50 years for plastic pipes. Hence, pipe producers and material suppliers continuously attempt to improve the materials and manufacturing processes used for plastic pipes to increase their service lifetimes. However, no plastic pipe has yet been in service for 50 years. Therefore, a few techniques have been developed to accelerate the aging process and to predict whether a plastic pipe can endure a 50-year lifetime without failure.
In this work, a combined experimental and analytical framework is presented to develop accelerated lifetime estimates for plastic pipes. Custom axial tensile and internal pressurization fixtures are developed to measure the pipe response; the analytical method is used to extend the results to predict 50-year (and beyond) behavior.
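A minimal sketch of how a pressure-modified Eyring expression turns short-term measurements into a lifetime estimate is given below. The functional form is the generic Eyring one and every parameter value is a placeholder, not a fitted PE property from this work:

```python
import math

R = 8.314  # gas constant [J/(mol K)]

def time_to_failure(sigma, T, p=0.0, t0=1e-12, dU=250e3, V=3e-3, Omega=1e-3):
    """Generic pressure-modified Eyring estimate of time-to-failure [hours].
    t0, dU [J/mol], V and Omega [m^3/mol, activation/pressure volumes] are
    placeholder parameters, NOT fitted PE values; sigma and p are in Pa.
    Stress lowers the activation barrier, pressure raises it."""
    return t0 * math.exp((dU - V * sigma + Omega * p) / (R * T))

# Higher stress or higher temperature shortens the predicted lifetime, which
# is what lets short elevated-temperature/high-stress tests stand in for
# decades of service at the design condition.
t_low = time_to_failure(8e6, 293.15)
t_high_stress = time_to_failure(12e6, 293.15)
t_high_temp = time_to_failure(8e6, 353.15)
```

In practice the parameters would be fitted to the measured tensile and hoop-burst data, and the fitted expression extrapolated to service stress and temperature.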
|
137 |
Lineare und nichtlineare Analyse hochdynamischer Einschlagvorgänge mit Creo Simulate und Abaqus/Explicit / Linear and Nonlinear Analysis of High Dynamic Impact Events with Creo Simulate and Abaqus/Explicit. Jakel, Roland, 23 June 2015 (has links) (PDF)
Der Vortrag beschreibt, wie sich mittels der unterschiedlichen Berechnungsverfahren zur Lösung dynamischer Strukturprobleme der Einschlag eines idealisierten Bruchstücks in eine Schutzwand berechnen lässt. Dies wird anhand zweier kommerzieller FEM-Programme beschrieben:
a.) Creo Simulate nutzt zur Lösung die Methode der modalen Superposition, d.h., es können nur lineare dynamische Systeme mit rein modaler Dämpfung berechnet werden. Kontakt zwischen zwei Bauteilen lässt sich damit nicht erfassen. Die unbekannte Kraft-Zeit-Funktion des Einschlagvorganges muss also geeignet abgeschätzt und als äußere Last auf die Schutzwand aufgebracht werden. Je dynamischer der Einschlagvorgang, desto eher wird der Gültigkeitsbereich des zugrunde liegenden linearen Modells verlassen.
b.) Abaqus/Explicit nutzt ein direktes Zeitintegrationsverfahren zur schrittweisen Lösung der zugrunde liegenden Differentialgleichung, die keine tangentiale Steifigkeitsmatrix benötigt. Damit können sowohl Materialnichtlinearitäten als auch Kontakt geeignet erfasst und damit die Kraft-Zeit-Funktion des Einschlages ermittelt werden. Auch bei extrem hochdynamischen Vorgängen liefert diese Methode ein gutes Ergebnis. Es müssen dafür jedoch weit mehr Werkstoffdaten bekannt sein, um das nichtlineare elasto-plastische Materialverhalten mit Schädigungseffekten korrekt zu beschreiben. Die Schwierigkeiten der Werkstoffdatenbestimmung werden in den Grundlagen erläutert. / The presentation describes how the impact of an idealized fragment into a steel protective panel can be analyzed with different dynamic analysis methods. Two commercial finite element codes are used for this:
a.) Creo Simulate: This code uses the method of modal superposition to analyze the dynamic response of linear dynamic systems. Therefore, only modal damping and no contact can be used. The unknown force-vs.-time curve of the impact event cannot be computed; it must be assumed and applied as an external force to the steel protective panel. The more dynamic the impact, the sooner the range of validity of the underlying linear model is left.
b.) Abaqus/Explicit: This code uses a direct time-integration method for an incremental (step-by-step) solution of the underlying differential equation, which does not need a tangential stiffness matrix. In this way, material nonlinearities as well as contact can be captured, and the force-vs.-time curve of the impact is obtained as a result of the FEM analysis. Even for extremely high-dynamic impacts, good results can be obtained. However, the nonlinear elasto-plastic material behavior with damage initiation and damage evolution must be characterized, which requires considerable effort. The principal difficulties of the material characterization are described.
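The explicit scheme contrasted with modal superposition above can be illustrated on the simplest possible case: a single-degree-of-freedom oscillator advanced with the central-difference rule, the scheme family Abaqus/Explicit is based on. The model constants here are arbitrary:

```python
import numpy as np

# Explicit central-difference time integration for m*x'' + k*x = F(t).
# The update needs only force evaluations, no tangent stiffness matrix,
# but the time step must stay below the stability limit (~2/omega).
m, k = 1.0, (2 * np.pi) ** 2     # gives a 1 Hz natural frequency
dt = 1e-3                        # well below the stability limit
n = 2000                         # simulate 2 s

x = np.zeros(n)
x[0] = 1.0                                   # initial displacement, zero velocity
x[1] = x[0] - 0.5 * dt**2 * (k / m) * x[0]   # Taylor-series start-up step
for i in range(1, n - 1):
    # x_{i+1} = 2*x_i - x_{i-1} + dt^2 * (F_i - k*x_i)/m, with F = 0 here
    x[i + 1] = 2 * x[i] - x[i - 1] - dt**2 * (k / m) * x[i]

t = dt * np.arange(n)
exact = np.cos(2 * np.pi * t)    # analytic free-vibration solution
err = np.max(np.abs(x - exact))
```

For this linear test case the explicit result matches the analytic solution closely; the scheme's real payoff, as the abstract notes, is that the same conditionally stable update also handles contact and material nonlinearity.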
|
139 |
Figurations du réel : l'exemple musical : Appuis mentaux, visées, saisies et reprojections dans l'architecture cognitive / Representations of reality : the case of music : mental anchor points, designs, input and reprojections in cognitive architecture. Letailleur, Alain, 18 December 2017 (has links)
La façon dont les musiciens parviennent à reconnaître les notes, par l’écoute seule ou en pratiquant leur art, a toujours fait l’objet d’une certaine fascination. Eux-mêmes, du reste, ne savent que rarement les raisons particulières qui leur permettent de disposer ainsi d’une excellente oreille musicale : « on est doué ou on ne l’est pas » reste alors souvent le raccourci qui permet de ne pas s’aventurer plus loin dans la quête d’une véritable explication. Il faut bien admettre que cette propension à pouvoir identifier des hauteurs perçues paraît ne pas trouver de véritable fondement, et ce d’autant plus que le son musical se trouve être invisible, impalpable et relativement fugace. Pour tenter de mieux comprendre les raisons liées à cette capacité mystérieuse, nous avons pris le parti d’interroger des musiciens, professionnels ou en apprentissage, afin de les questionner sur les procédures mentales qu’ils mettent en oeuvre à l’instant de l’identification notale. La description détaillée des plus petits éléments mentaux (ou la plus petite cohabitation de microéléments mentaux) que les musiciens utilisent pour effectuer cette tâche nous fait alors entrer dans un monde fascinant, qui révèle progressivement l’organisation de nombreuses actions de bas niveau, aussi ajustées à leurs fonctions que particulièrement discrètes. Ces fragments de pensées, que nous avons nommés appuis mentaux (les musiciens se repèrent en fonction de points d’ancrages mentaux adaptés pour accéder à l’identification) peuvent être décrits, sont variés dans leurs formes d’émergence à l’esprit et adoptent différents types de missions. Il a été possible de classer l’ensemble des configurations décrites en plusieurs catégories d’approches stratégiques. Certains de ces infimes gestes internes se sont tellement automatisés au fil du temps qu’ils se trouvent enfouis dans le registre inconscient. Ils deviennent alors très difficiles, voire parfois impossibles à détecter. 
En y regardant de plus près, nous pouvons imaginer que ces mécanismes hautement spécialisés, décrits dans un secteur restreint du monde musical, relèvent de principes fonctionnels généraux qui semblent s'activer, en réalité, à tout instant de notre vie quotidienne, pour chaque opération que nous sommes appelés à effectuer : calculer, orthographier, créer, faire du sport, cuisiner, bricoler ou bien penser tout simplement. C'est ce que la seconde partie de recherche tente de montrer dans un premier temps, pour exposer ensuite une bien étrange problématique, concernant les rapports interactifs qui s'opèrent entre contenus perceptifs et représentationnels (de nombreux témoignages font en effet état de situations où les appuis mentaux s'invitent directement sur la scène perceptive). La confrontation de ces deux univers, à travers le maniement de ce que nous avons appelé les reprojections mentales, nous met en situation de questionner les rouages qui sont en jeu dans l'édification de la cognition humaine, et interroge sur les conséquences qu'ils impliquent vis-à-vis de notre compréhension du réel. / The way musicians identify notes has always been a fascinating subject. To understand this competence of theirs, we chose to interview professional and learner musicians and to analyse the mental methods they use to carry out this task. A detailed description of the faintest mental processes involved opens onto a bewildering world, revealing an organisation of many low-level actions as well adapted to their functions as they are subtle. These fragments of thought - which we have called mental anchor points - can be described, vary in the ways they surface, and can take on diverse types of mission.
When subjected to closer scrutiny, these highly specialised mechanisms appear to fall within the sphere of general functional principles that seem to be active at every moment of our lives, for whichever operation we try to perform: calculating, taking part in sports activities, cooking or simply thinking. This is what the second part of this study first tries to show, before disclosing a strange set of issues concerning the interactive relations between perceptions and representations: many testimonies mention situations in which mental anchor points play a prominent part in our perceptive behaviour. The confrontation of these two universes, through the use of what we have called mental reprojections, makes it possible to examine the machinery at stake in our cognitive constructions and to analyse its consequences for our comprehension of the real world.
|
140 |
Funcions densitat i semblança molecular quàntica: nous desenvolupaments i aplicacions / Density functions and quantum molecular similarity: new developments and applications. Gironés Torrent, Xavier, 10 May 2002 (has links)
La present tesi, tot i que emmarcada dins de la teoria de les Mesures de Semblança Molecular Quàntica (MQSM), es deriva en tres àmbits clarament definits:
- La creació de Contorns Moleculars de IsoDensitat Electrònica (MIDCOs, de l'anglès Molecular IsoDensity COntours) a partir de densitats electròniques ajustades.
- El desenvolupament d'un mètode de sobreposició molecular, alternatiu a la regla de la màxima semblança.
- Relacions Quantitatives Estructura-Activitat (QSAR, de l'anglès Quantitative Structure-Activity Relationships).
L'objectiu en el camp dels MIDCOs és l'aplicació de funcions densitat ajustades, ideades inicialment per a abaratir els càlculs de MQSM, per a l'obtenció de MIDCOs. Així, es realitza un estudi gràfic comparatiu entre diferents funcions densitat ajustades a diferents bases amb densitats obtingudes de càlculs duts a terme a nivell ab initio. D'aquesta manera, l'analogia visual entre les funcions ajustades i les ab initio obtinguda en el ventall de representacions de densitat, juntament amb els valors de les mesures de semblança obtinguts prèviament, totalment comparables, fonamenta l'ús d'aquestes funcions ajustades. Més enllà del propòsit inicial, es van realitzar dos estudis complementaris a la simple representació de densitats: l'anàlisi de curvatura i l'extensió a macromolècules. La primera observació correspon a comprovar no només la semblança dels MIDCOs, sinó la coherència del seu comportament a nivell de curvatura, podent-se així observar punts d'inflexió en la representació de densitats i veure gràficament aquelles zones on la densitat és còncava o convexa. Aquest primer estudi revela que tant les densitats ajustades com les calculades a nivell ab initio es comporten de manera totalment anàloga. En la segona part d'aquest treball es va poder estendre el mètode a molècules més grans, de fins a uns 2500 àtoms. Finalment, s'aplica part de la filosofia del MEDLA.
Sabent que la densitat electrònica decau ràpidament en allunyar-se dels nuclis, el seu càlcul pot ser obviat a grans distàncies d'aquests. D'aquesta manera es va proposar particionar l'espai i calcular les funcions ajustades de cada àtom tan sols en una regió petita que envolta l'àtom en qüestió. Duent a terme aquest procés, es disminueix el temps de càlcul i el procés esdevé lineal amb el nombre d'àtoms presents en la molècula tractada.
En el tema dedicat a la sobreposició molecular es tracta la creació d'un algorisme, així com la seva implementació en forma de programa, batejat Topo-Geometrical Superposition Algorithm (TGSA), que proporcioni aquells alineaments que coincideixen amb la intuïció química. El resultat és un programa informàtic, codificat en Fortran 90, el qual alinea les molècules per parelles considerant tan sols nombres i distàncies atòmiques. La total absència de paràmetres teòrics permet desenvolupar un mètode de sobreposició molecular general, que proporcioni una sobreposició intuïtiva de manera ràpida i amb poca intervenció de l'usuari. L'ús principal del TGSA s'ha dedicat a calcular semblances per al seu ús posterior en QSAR, les quals majoritàriament no corresponen al valor que s'obtindria d'emprar la regla de la màxima semblança, sobretot si hi ha àtoms pesats en joc.
Finalment, en l'últim tema, dedicat a la semblança quàntica en el marc del QSAR, es tracten tres aspectes diferents:
- Ús de matrius de semblança. Aquí intervé l'anomenada matriu de semblança, calculada a partir de les semblances per parelles d'entre un conjunt de molècules. Aquesta matriu és emprada posteriorment, degudament tractada, com a font de descriptors moleculars per a estudis QSAR. Dins d'aquest àmbit s'han fet diversos estudis de correlació d'interès farmacològic i toxicològic, així com de diverses propietats físiques.
- Aplicació de l'energia d'interacció electró-electró, assimilada com a una forma d'autosemblança.
Aquesta modesta contribució consisteix breument a prendre el valor d'aquesta magnitud i, per analogia amb la notació de l'autosemblança molecular quàntica, assimilar-la com a cas particular d'aquesta mesura. Aquesta energia d'interacció s'obté fàcilment a partir de programari mecanoquàntic i esdevé ideal per a fer un primer estudi preliminar de correlació, on s'utilitza aquesta magnitud com a únic descriptor.
- Càlcul d'autosemblances, on la densitat ha estat modificada per a augmentar el paper d'un substituent. Treballs previs amb densitats de fragments, tot i donar molt bons resultats, manquen de cert rigor conceptual en aïllar un fragment, suposadament responsable de l'activitat molecular, de la totalitat de l'estructura molecular, tot i que les densitats associades a aquest fragment ja difereixen pel fet de pertànyer a esquelets amb diferents substitucions. Un procediment per a omplir aquest buit que deixa la simple separació del fragment, considerant així la totalitat de la molècula (calculant-ne l'autosemblança) però evitant al mateix temps valors d'autosemblança no desitjats provocats per àtoms pesats, és l'ús de densitats de forats de Fermi, els quals es troben definits al voltant del fragment d'interès. Aquest procediment modifica la densitat de manera que es troba majoritàriament concentrada a la regió d'interès, però alhora permet obtenir una funció densitat que es comporta matemàticament igual que la densitat electrònica regular, podent-se així incorporar dins del marc de la semblança molecular. Les autosemblances calculades amb aquesta metodologia han portat a bones correlacions amb àcids aromàtics substituïts, podent així donar una explicació al seu comportament.
Des d'un altre punt de vista, també s'han fet contribucions conceptuals.
S'ha implementat una nova mesura de semblança, la d'energia cinètica, que consisteix a prendre la recentment desenvolupada funció densitat d'energia cinètica, la qual, en comportar-se matemàticament igual que les densitats electròniques regulars, s'ha incorporat en el marc de la semblança. A partir d'aquesta mesura s'han obtingut models QSAR satisfactoris per a diferents conjunts moleculars. Dins de l'aspecte del tractament de les matrius de semblança, s'ha implementat l'anomenada transformació estocàstica com a alternativa a l'ús de l'índex de Carbó. Aquesta transformació de la matriu de semblança permet obtenir una nova matriu no simètrica, la qual pot ser posteriorment tractada per a construir models QSAR. / The present work, although framed within the theory of Molecular Quantum Similarity Measures (MQSM), falls into three clearly defined areas:
- The creation of Molecular IsoDensity COntours (MIDCOs) from fitted electronic densities.
- The development of a molecular superposition method as an alternative to the maximal-similarity rule.
- Quantitative Structure-Activity Relationships (QSAR).
The objective in the field of MIDCOs is the application of fitted density functions, initially developed to reduce the computational cost of MQSM calculations, to the generation of MIDCOs. A graphical comparison is carried out between densities fitted to different basis sets and densities derived from ab initio calculations. The visual analogy between the density representations, together with the fully comparable similarity measures calculated previously, supports the use of such fitted densities. Beyond the initial purpose, two further studies were carried out to complement the simple density representations: a curvature analysis and the extension to macromolecules.
The first study not only verifies the visual similarity between MIDCOs but also checks the coherence of their behaviour at the curvature level, making it possible to observe inflexion points in the representations and the regions where the density is convex or concave. It reveals that fitted and ab initio densities behave in the same way. In the second study, the method was extended to larger molecules of up to 2500 atoms. Finally, some of the MEDLA philosophy is applied: knowing that the electronic density decays rapidly away from the nuclei, its calculation at large distances from the nearest nucleus can be avoided. A partition of space was therefore proposed, in which the fitted density of each atom is evaluated only in its vicinity. With this procedure, the computation time is reduced and the whole process becomes linear in the number of atoms of the studied molecule.
The chapter devoted to molecular superposition describes the creation of an algorithm, along with its practical implementation, called the Topo-Geometrical Superposition Algorithm (TGSA), which produces alignments that coincide with chemical intuition. The result is a computer program, coded in Fortran 90, which aligns molecules in pairs considering only atomic numbers and coordinates. The complete absence of theoretical parameters allows a general superposition method to be developed that provides an intuitive superposition quickly and with little user-supplied information. TGSA has mainly been used to provide alignments for computing similarity measures, which in general do not coincide with the values obtained from the maximal-similarity rule, especially if heavy atoms are present. Finally, in the last subject studied, devoted to quantum similarity in the QSAR field, three different scopes are treated:
1) Usage of similarity matrices.
Here, the so-called similarity matrix, calculated from the pairwise similarities in a molecular set, is later, after suitable manipulation, used as a source of molecular descriptors for QSAR studies. Within this framework, several correlation studies have been carried out involving relevant pharmacological properties, as well as toxicity and physical properties.
2) Application of the electron-electron repulsion energy. This simple contribution consists of taking the value of this magnitude and, by analogy with the self-similarity notation, treating it as a particular case of that measure. This interaction energy is easily obtained from quantum-chemical software and is ideal for a preliminary correlation study in which it is used as a single descriptor.
3) Calculation of self-similarities in which the density has been modified to emphasize a particular substituent. Previous studies using density fragments, even though they provide valuable results, lack conceptual rigour in isolating a fragment, supposedly responsible for a particular molecular response, from the whole molecular structure, even though the densities associated with that fragment already differ because they belong to a common skeleton with different substitutions. A procedure is proposed that fills the gap between the use of density fragments, which avoids the heavy-atom effect, and the whole molecular density, which allows a self-similarity to be computed. It consists of using Fermi hole density functions, where the holes are defined and averaged around a particular fragment. This procedure modifies the density so that it is mostly concentrated in the region of interest, while still yielding a density function that behaves mathematically in the same way as a regular electronic density; hence it can be included in the similarity framework.
The self-similarities computed with this procedure provide good correlations with substituted aromatic acids, offering an explanation for their acidic behaviour.
From another point of view, conceptual contributions have also been made. A new, kinetic-energy-based similarity measure has been implemented, which takes the recently developed kinetic energy density function; since this function mathematically behaves like a regular density function, it has been incorporated into the similarity framework. Satisfactory QSAR models derived from this new measure have been obtained for several molecular systems. Within the similarity-matrices field, a new scaling called the stochastic transformation has been implemented as an alternative to the Carbó index. This transformation of the similarity matrix yields a new non-symmetric matrix, which can later be used as a source of descriptors to build QSAR models.
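The Carbó index and the row-stochastic scaling of a similarity matrix mentioned above can be sketched with one-dimensional Gaussian densities standing in for molecular electron densities; the three "molecules" below are toy placeholders:

```python
import numpy as np

def overlap(mu_a, s_a, mu_b, s_b):
    """Analytic overlap Z_AB = integral of rho_A * rho_B dx for two
    normalized 1-D Gaussian 'densities' (a toy stand-in for molecules)."""
    var = s_a**2 + s_b**2
    return np.exp(-((mu_a - mu_b) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)

def carbo_index(mu_a, s_a, mu_b, s_b):
    """Carbó similarity index C_AB = Z_AB / sqrt(Z_AA * Z_BB), in (0, 1]."""
    z_ab = overlap(mu_a, s_a, mu_b, s_b)
    z_aa = overlap(mu_a, s_a, mu_a, s_a)
    z_bb = overlap(mu_b, s_b, mu_b, s_b)
    return z_ab / np.sqrt(z_aa * z_bb)

# Three toy "molecules": an identical pair (index 1) and a distant one.
mols = [(0.0, 1.0), (0.0, 1.0), (5.0, 1.0)]
Z = np.array([[carbo_index(*a, *b) for b in mols] for a in mols])

# Stochastic transformation (as described in the abstract): scale each row
# of the similarity matrix to sum to 1, giving a non-symmetric matrix whose
# rows can serve as molecular descriptors.
S = Z / Z.sum(axis=1, keepdims=True)
```

Identical densities give an index of exactly 1, well-separated ones an index near 0, and the row scaling breaks the symmetry of the original matrix, as the abstract notes.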
|