111 |
Treize Etudes Pour L'Orchestre
Constantinidis, MariaSilvia Castillo, 01 January 2008 (has links)
Treize Etudes Pour L'Orchestre is a thirteen-movement symphonic work for full contemporary orchestra. The purpose of this work has been to develop a sonic exploration of textural possibilities through the orchestral medium. The motivic materials of the whole piece were first used in smaller pieces for solo piano, for piano and cello, and for two pianos; these smaller pieces are included in the appendix of this work. The orchestral work is not an orchestration of the smaller pieces but an expansion of their material into different textural studies. Preparation for this work included the study of twelve different bird sounds, first recorded and later transcribed musically to create the thematic and secondary materials of the piece; the production of fabrics of sound representing color spectra and intensity through sound tapestries; and the sonic representation of water, a starry dark night, and the jungle. Treize Etudes Pour L'Orchestre is formally a through-composed work. The different musical materials created as the motivic unity of the whole work are developed throughout it by means of a variety of compositional devices and techniques, including Schoenberg's Klangfarbenmelodie, Messiaen's "Langage Musical", Ives' quadraphonic effect, Samuel Adler's sound curtain technique, and the use of folk-like materials, all within the parameters of acoustic instrumentation.
|
112 |
Funcions densitat i semblança molecular quàntica: nous desenvolupaments i aplicacions (Density functions and quantum molecular similarity: new developments and applications)
Gironés Torrent, Xavier, 10 May 2002 (has links)
The present work, although framed within the theory of Molecular Quantum Similarity Measures (MQSM), branches into three clearly defined areas:
- Creation of Molecular IsoDensity COntours (MIDCOs) from fitted electron densities.
- Development of a molecular superposition method as an alternative to the maximal-similarity rule.
- Quantitative Structure-Activity Relationships (QSAR).

The objective in the MIDCO field is the application of fitted density functions, originally devised to reduce the computational cost of MQSM, to the generation of MIDCOs. A graphical comparison is carried out between densities fitted to different basis sets and densities obtained from ab initio calculations. The visual agreement between the fitted and ab initio representations across the range of isodensity values, together with the previously computed similarity measures, which are fully comparable, supports the use of these fitted functions. Beyond this initial purpose, two complementary studies were carried out: a curvature analysis and an extension to macromolecules. The first verifies not only the visual similarity of the MIDCOs but also the coherence of their curvature behaviour, making it possible to observe inflection points in the density representations and to see graphically the regions where the density is concave or convex; it shows that fitted and ab initio densities behave in an entirely analogous way. In the second study the method was extended to larger molecules of up to about 2500 atoms. Finally, part of the MEDLA philosophy is applied: since the electron density decays rapidly away from the nuclei, its calculation can be avoided at large distances from them. The space is therefore partitioned, and the fitted functions of each atom are evaluated only within a small region surrounding that atom. With this procedure the computation time decreases and the process becomes linear in the number of atoms of the treated molecule.

The chapter devoted to molecular superposition deals with the creation of an algorithm, and its implementation as a program named the Topo-Geometrical Superposition Algorithm (TGSA), designed to provide alignments that coincide with chemical intuition. The result is a computer program, coded in Fortran 90, which aligns molecules pairwise using only atomic numbers and coordinates. The complete absence of theoretical parameters yields a general superposition method that produces intuitive alignments quickly and with little user intervention. TGSA has mainly been used to compute similarities for subsequent QSAR studies; these values generally do not coincide with those obtained from the maximal-similarity rule, especially when heavy atoms are involved.

Finally, in the last topic, devoted to Quantum Similarity within the QSAR framework, three different aspects are treated:
1) Use of similarity matrices. The so-called similarity matrix, computed from the pairwise similarities within a molecular set, is subsequently, after suitable manipulation, used as a source of molecular descriptors for QSAR studies. Within this framework several correlation studies of pharmacological and toxicological interest, as well as of various physical properties, have been carried out.
2) Application of the electron-electron repulsion energy, treated as a form of self-similarity. This modest contribution consists of taking the value of this magnitude and, by analogy with the molecular quantum self-similarity notation, regarding it as a particular case of that measure. This interaction energy is easily obtained from quantum-chemistry software and is well suited to a preliminary correlation study in which it is used as a single descriptor.
3) Calculation of self-similarities in which the density has been modified to enhance the role of a particular substituent. Previous work with fragment densities, although it gave very good results, lacked a certain conceptual rigour in isolating a fragment, supposedly responsible for the molecular activity, from the rest of the molecular structure, even though the densities associated with such fragments already differ because they belong to skeletons with different substitutions. A procedure that fills the gap left by simply separating the fragment, so that the whole molecule is considered (its self-similarity is computed) while unwanted self-similarity contributions from heavy atoms are avoided, is the use of Fermi-hole densities defined and averaged around the fragment of interest. This procedure modifies the density so that it is mostly concentrated in the region of interest, yet still yields a density function that behaves mathematically like the ordinary electron density and can therefore be incorporated into the similarity framework. The self-similarities computed with this methodology lead to good correlations for substituted aromatic acids, providing an explanation of their acidic behaviour.

Conceptual contributions have also been made. A new similarity measure based on the kinetic energy has been implemented: the recently developed kinetic-energy density, which behaves mathematically like the ordinary electron density, is incorporated into the similarity framework, and satisfactory QSAR models derived from this measure have been obtained for several molecular sets. Within the treatment of similarity matrices, the so-called stochastic transformation has been implemented as an alternative to the Carbó index; this transformation of the similarity matrix yields a new non-symmetric matrix, which can subsequently be used as a source of descriptors to build QSAR models.
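As a concrete illustration of two quantities that recur in this abstract, the Carbó index and the stochastic transformation of a similarity matrix, the short Python sketch below normalizes a toy precomputed similarity matrix Z. It is an illustration only, not code from the thesis: the 3x3 matrix is invented, the Carbó index is taken as C_AB = Z_AB / sqrt(Z_AA Z_BB), and the stochastic transformation is assumed to be a simple row normalization of Z, which indeed yields a non-symmetric matrix usable as a source of QSAR descriptors.

```python
import numpy as np

def carbo_index(Z):
    """Carbo index C_AB = Z_AB / sqrt(Z_AA * Z_BB) from a similarity matrix Z."""
    d = np.sqrt(np.diag(Z))
    return Z / np.outer(d, d)

def stochastic_transform(Z):
    """Assumed stochastic transformation: divide every row by its sum.

    The result is non-symmetric and its rows/columns can serve as
    molecular descriptors for QSAR, as described above."""
    return Z / Z.sum(axis=1, keepdims=True)

# Toy 3x3 similarity matrix (symmetric, positive, diagonal = self-similarities).
Z = np.array([[4.0, 1.2, 0.8],
              [1.2, 3.5, 1.0],
              [0.8, 1.0, 2.9]])

C = carbo_index(Z)           # values in (0, 1], diagonal exactly 1
S = stochastic_transform(Z)  # rows sum to 1, S is no longer symmetric
print(C)
print(S.sum(axis=1))         # -> [1. 1. 1.]
```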
|
113 |
Experimental and modeling study of heterogeneous ice nucleation on mineral aerosol particles and its impact on a convective cloud / Étude expérimentale et de modélisation de la nucléation hétérogène de la glace sur les particules d'aérosol minérales et son impact sur un nuage convectif
Hiron, Thibault, 29 September 2017 (has links)
One of the main challenges in understanding the evolution of Earth's climate lies in understanding ice formation processes and their role in the formation and evolution of tropospheric clouds. A newly built humidity-controlled cold stage allows the simultaneous observation of up to 200 monodispersed droplets of suspensions containing K-feldspar particles, known to be very active ice-nucleating particles. The ice nucleation efficiencies of the individual residual particles were compared for the different freezing modes, and the relationship between immersion ice nuclei and deposition ice nuclei was investigated. The results showed that the same ice-active sites are responsible for the nucleation of ice in the immersion and deposition modes. The atmospheric implications of the experimental results are discussed using Descam (Flossmann et al., 1985), a 1.5-d bin-resolved microphysics model, in a case study aiming to assess the role of the different ice nucleation pathways in the dynamical evolution of the CCOPE convective cloud (Dye et al., 1986). Four mineral aerosol types (K-feldspar, kaolinite, illite and quartz) were considered for immersion freezing, contact freezing and deposition nucleation, with explicit Ice Nucleation Active Site (INAS) density parameterizations. In sensitivity studies, the different aerosol types and nucleation modes were treated separately and in competition to assess their relative importance. Immersion freezing on K-feldspar was found to have the most pronounced impact on the dynamical evolution and precipitation of a convective cloud.
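The INAS density parameterizations mentioned above are commonly applied through the singular description of immersion freezing, in which the frozen fraction of a monodisperse droplet population follows f_ice(T) = 1 - exp(-n_s(T)·A), with A the mineral surface area per droplet. The sketch below is a generic illustration under that assumption; the exponential form of n_s(T) and its coefficients are placeholders, not the K-feldspar parameterization used in this work.

```python
import numpy as np

def ns_exponential(T_celsius, a=1.0, b=5.0):
    """Placeholder INAS density parameterization n_s(T) in m^-2.

    Exponential in supercooling; a and b are illustrative values,
    NOT the K-feldspar coefficients used in the thesis."""
    return np.exp(a * (-T_celsius) + b)

def frozen_fraction(T_celsius, area_per_droplet_m2):
    """Singular description of immersion freezing: f_ice(T) = 1 - exp(-n_s(T) * A)."""
    return 1.0 - np.exp(-ns_exponential(T_celsius) * area_per_droplet_m2)

T = np.arange(-5.0, -31.0, -1.0)   # supercooled temperatures in degrees Celsius
A = 1e-11                          # assumed mineral surface area per droplet, m^2
print(np.column_stack((T, frozen_fraction(T, A))))
```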
|
114 |
Optimal tests for symmetry
Cassart, Delphine, 01 June 2007 (has links)
In this work, we propose parametric and nonparametric testing procedures that are locally and asymptotically optimal in the sense of Hajek and Le Cam, for three models of asymmetry. The construction of asymmetry models is a research topic that has seen considerable development in recent years, and obtaining optimal tests (for three different models) is an essential step towards putting them into practice. Our approach rests on Le Cam's theory on the one hand, to obtain the asymptotic normality properties on which the construction of the optimal parametric tests is based, and on Hajek's theory on the other hand, which, through an invariance principle, yields the nonparametric procedures.

We consider two classes of asymmetric univariate distributions, one based on an Edgeworth expansion (described in Chapter 1), and the other built by using different scale parameters for positive and negative values (the Fechner model, described in Chapter 2). The elliptical asymmetry model studied in the last chapter is a multivariate generalization of the model of Chapter 2. For each of these models, we propose tests of the hypothesis of symmetry with respect to a fixed centre, and then with respect to an unspecified centre.

After describing the model for which we construct the optimal procedures, we establish the local asymptotic normality property. From this result we are able to construct the locally and asymptotically optimal parametric tests. These tests, however, are only valid if the underlying density f is correctly specified; they therefore have the merit of determining the parametric efficiency bounds, but are difficult to apply in practice. We therefore adapt these tests so that the hypotheses of symmetry with respect to a fixed or unspecified centre can be tested when the underlying density is treated as a nuisance parameter. The resulting tests remain locally and asymptotically optimal under f while remaining valid under a broad class of densities.

Using the invariance properties of the submodel identified by the null hypothesis, we obtain signed-rank tests that are locally and asymptotically optimal under f and valid under a wide class of densities. In particular, we present tests based on normal scores (van der Waerden tests), which are optimal under Gaussian assumptions while remaining valid when that assumption fails. In order to compare the performance of the parametric and nonparametric tests presented, we compute the asymptotic relative efficiencies of the nonparametric tests with respect to the pseudo-Gaussian tests under a wide class of non-Gaussian densities, and we report some simulations.

Doctorat en sciences, Orientation statistique (Doctorate in Sciences, Statistics orientation).
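As an illustration of the normal-score (van der Waerden) signed-rank tests mentioned above, the sketch below implements a generic textbook-style test of symmetry about a fixed centre theta0: the statistic sums the signs of X_i - theta0 weighted by normal scores of the ranks of |X_i - theta0| and is standardized by its conditional null standard deviation. This is only an assumed, simplified version of such a test, not the optimal procedures constructed in the thesis.

```python
import numpy as np
from scipy.stats import norm, rankdata

def vdw_signed_rank_test(x, theta0=0.0):
    """Normal-score (van der Waerden) signed-rank test of symmetry about theta0.

    Generic construction: T = sum_i sign(x_i - theta0) * a(R_i), with
    a(r) = Phi^{-1}((1 + r/(n+1)) / 2) and R_i the rank of |x_i - theta0|.
    Under symmetry the signs are independent fair coin flips given the ranks,
    so T / sqrt(sum_i a(R_i)^2) is approximately standard normal."""
    d = np.asarray(x, dtype=float) - theta0
    d = d[d != 0.0]                     # drop exact ties with the centre
    n = d.size
    ranks = rankdata(np.abs(d))         # ranks of the absolute deviations
    scores = norm.ppf((1.0 + ranks / (n + 1.0)) / 2.0)
    z = np.sum(np.sign(d) * scores) / np.sqrt(np.sum(scores ** 2))
    pvalue = 2.0 * norm.sf(abs(z))
    return z, pvalue

rng = np.random.default_rng(0)
sym = rng.standard_normal(200)                # symmetric about 0
asym = rng.lognormal(0.0, 1.0, 200) - 1.0     # median 0 but asymmetric about 0
print(vdw_signed_rank_test(sym))              # should usually not reject
print(vdw_signed_rank_test(asym))             # should usually reject
```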
|
115 |
Numerical analysis and multi-precision computational methods applied to the extant problems of Asian option pricing and simulating stable distributions and unit root densities
Cao, Liang, January 2014 (has links)
This thesis considers new methods that exploit recent developments in computer technology to address three extant problems in the area of Finance and Econometrics. The problem of Asian option pricing has endured for the last two decades in spite of many attempts to find a robust solution across all parameter values. All recently proposed methods are shown to fail when computations are conducted using standard machine precision because as more and more accuracy is forced upon the problem, round-off error begins to propagate. Using recent methods from numerical analysis based on multi-precision arithmetic, we show using the Mathematica platform that all extant methods have efficacy when computations use sufficient arithmetic precision. This creates the proper framework to compare and contrast the methods based on criteria such as computational speed for a given accuracy. Numerical methods based on a deformation of the Bromwich contour in the Geman-Yor Laplace transform are found to perform best provided the normalized strike price is above a given threshold; otherwise methods based on Euler approximation are preferred. The same methods are applied in two other contexts: the simulation of stable distributions and the computation of unit root densities in Econometrics. The stable densities are all nested in a general function called a Fox H function. The same computational difficulties as above apply when using only double-precision arithmetic but are again solved using higher arithmetic precision. We also consider simulating the densities of infinitely divisible distributions associated with hyperbolic functions. Finally, our methods are applied to unit root densities. Focusing on the two fundamental densities, we show our methods perform favorably against the extant methods of Monte Carlo simulation, the Imhof algorithm and some analytical expressions derived principally by Abadir. Using Mathematica, the main two-dimensional Laplace transform in this context is reduced to a one-dimensional problem.
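The role of arithmetic precision described above can be mimicked in a few lines with Python's mpmath library (the thesis itself works on the Mathematica platform). The sketch inverts a toy Laplace transform with a known closed-form inverse at several working precisions; the transform F(s) = 1/(s+1) is a stand-in, not the Geman-Yor transform, and the availability of mpmath's invertlaplace routine with the Talbot method is assumed.

```python
import mpmath as mp

# Toy transform with a known inverse: F(s) = 1/(s + 1)  <->  f(t) = exp(-t).
F = lambda s: 1 / (s + 1)

for dps in (15, 30, 60):        # decimal digits of working precision
    mp.mp.dps = dps
    t = mp.mpf(1)
    approx = mp.invertlaplace(F, t, method='talbot')  # numerical inversion
    exact = mp.exp(-t)
    print(dps, mp.nstr(abs(approx - exact), 5))       # absolute error at each precision
```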
|
116 |
Modélisation quantochimiques des forces de dispersion de London par la méthode des phases aléatoires (RPA) : développements méthodologiques / Quantum chemical studies of London dispersion forces by the random phase approximation (RPA): methodological developments
Mussard, Bastien, 13 December 2013 (has links)
This thesis presents developments of the random phase approximation (RPA) in the context of range-separated theories. We report advances in the RPA formalism in general, and in particular in the "dielectric matrix" formulation of RPA, which is explored systematically. We summarize work on the RPA equations with localized orbitals, notably the development of localized virtual orbitals called "projected oscillatory orbitals" (POO). A program has been written to compute quantities such as the exchange hole and the response function on real-space grids (parallelepipedic or "DFT"-type), and some of these visualizations are shown. In real space, we propose an adaptation of the effective energy denominator approximation (EED), originally developed in reciprocal space in solid-state physics. The analytical gradients of the RPA correlation energies in the range-separated context are also derived; the Lagrangian formalism developed here allows an all-in-one derivation of the short- and long-range terms that appear in the gradient expressions, which show an interesting parallel. Applications are presented, including geometry optimizations at the RSH-dRPA-I and RSH-SOSEX levels for a set of 16 small molecules, and the calculation and visualization of correlated densities at the RSH-dRPA-I level.
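For orientation, one standard way of writing the direct-RPA correlation energy in the dielectric-matrix language discussed above is the adiabatic-connection fluctuation-dissipation expression below, with chi_0 the non-interacting response function and v the (possibly long-range) interaction; this is the generic textbook form and is not claimed to be the exact working equation of the thesis.

```latex
E_c^{\mathrm{dRPA}}
  = \frac{1}{2\pi} \int_0^{\infty} \mathrm{d}\omega \,
    \operatorname{Tr}\!\left[
      \ln\!\bigl(1 - \chi_0(i\omega)\, v\bigr) + \chi_0(i\omega)\, v
    \right]
```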
|
117 |
Temporal Variations in the Compliance of Gas Hydrate Formations
Roach, Lisa Aretha Nyala, 20 March 2014 (has links)
Seafloor compliance is a non-intrusive geophysical method sensitive to the shear modulus of the sediments below the seafloor. A compliance analysis requires the computation of the frequency-dependent transfer function between the vertical stress produced at the seafloor by an ultra-low-frequency passive source, infragravity waves, and the resulting displacement, which is related to velocity through the frequency. The displacement of the ocean floor depends on the elastic structure of the sediments, and the compliance function is tuned to different depths, i.e., a change in the elastic parameters at a given depth is sensed by the compliance function at a particular frequency. In a gas hydrate system, the magnitude of the stiffness is a measure of the quantity of gas hydrates present. Gas hydrates contain immense stores of greenhouse gases, making them relevant to climate change science, and represent an important potential alternative source of energy. Bullseye Vent is a gas hydrate system located in an area that has been studied intensively for over two decades, and research results suggest that this system is evolving over time.
A partnership with NEPTUNE Canada allowed for the investigation of this possible evolution. This thesis describes a compliance experiment configured for NEPTUNE Canada's seafloor observatory, and the failure of that experiment. It also describes the use of 203 days of simultaneously logged pressure and velocity time-series data, measured by a Scripps differential pressure gauge and a Güralp CMG-1T broadband seismometer, respectively, on NEPTUNE Canada's seismic station, to evaluate variations in sediment stiffness near Bullseye. The evaluation yielded a (−4.49 × 10^−3 ± 3.52 × 10^−3) % change in the transfer function relative to that of 3rd October 2010, which represents a 2.88 % decrease in the stiffness of the sediments over the period. This thesis also outlines a new algorithm for calculating the static compliance of isotropic layered sediments.
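The frequency-dependent transfer function described above can be estimated from simultaneously recorded pressure and vertical-velocity series with standard cross-spectral methods. The sketch below is a generic illustration, not the processing chain of the thesis: it converts velocity to displacement by dividing by i*omega in the frequency domain, forms T(f) as the ratio of the pressure-displacement cross-spectrum to the pressure auto-spectrum, and restricts attention to a nominal infragravity band; the sampling rate, band limits and synthetic inputs are placeholders.

```python
import numpy as np
from scipy.signal import csd, welch

def compliance_transfer_function(pressure, velocity, fs, nperseg=4096):
    """Estimate T(f) = S_pd(f) / S_pp(f) between seafloor pressure and
    vertical displacement, with displacement obtained from velocity as
    D(f) = V(f) / (i * 2 * pi * f). Generic cross-spectral (H1) estimate."""
    f, S_pp = welch(pressure, fs=fs, nperseg=nperseg)
    f, S_pv = csd(pressure, velocity, fs=fs, nperseg=nperseg)
    omega = 2.0 * np.pi * f
    with np.errstate(divide='ignore', invalid='ignore'):
        S_pd = S_pv / (1j * omega)      # velocity spectrum -> displacement spectrum
    return f, S_pd / S_pp

# Example with synthetic data (placeholders, not NEPTUNE Canada records):
fs = 1.0                                # assumed 1 Hz sampling
rng = np.random.default_rng(1)
p = rng.standard_normal(200_000)        # stand-in pressure series
v = rng.standard_normal(200_000)        # stand-in vertical-velocity series
f, T = compliance_transfer_function(p, v, fs)
band = (f > 0.003) & (f < 0.03)         # nominal infragravity band, assumed
print(np.abs(T[band]).mean())
```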
|