21

Efektivní schémata digitálních podpisů / Efficient Digital Signature Schemes

Varga, Ondrej January 2011 (has links)
Digital signatures, which carry over the properties of classical handwritten signatures, are used to protect the content of documents that could otherwise be modified during transmission over an insecure channel. Cryptographic techniques address the security and protection of the communicating parties. Identity verification, message integrity, credibility, ownership of documents, and the secure transmission of information over an unsecured channel are all handled by secure communications based on a Public Key Infrastructure, which relies on digital signatures. Digital signatures are therefore widely used today to secure data exchanged over unsecured channels. The aim of this master's thesis is to familiarize readers with the technological aspects of digital signatures, together with their advantages and disadvantages. As long as digital signatures remain in use, they will have to be improved and modified to withstand increasingly sophisticated attacks. This thesis describes proposals for new efficient digital signature schemes and compares them with current ones; it also examines their implications for computationally weak devices and for deployment over low-speed transmission channels. After an explanation of cryptography and its basic concepts, digital signatures are introduced. The first chapter describes the possible formats and architecture of a digital signature. The second part covers current digital signature schemes and their properties. Chapter 3 presents proposals for new efficient digital signature schemes and compares them to those currently in use. In the practical part (Chapter 4), implementations of two efficient digital signature schemes, written in C# on the .NET platform, are presented as part of a client-server application. The last chapter provides a comparison and analysis of the implemented schemes.
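The basic workflow the thesis builds on, signing a document with a private key and verifying it with the matching public key, can be illustrated with a short sketch. The example below uses Python and the third-party cryptography package with ECDSA purely for illustration; the thesis itself implements its schemes in C# on .NET, and its proposed schemes are not reproduced here.

```python
# Hedged sketch: signing and verifying a document with ECDSA,
# using the third-party "cryptography" package (assumed available).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Key generation (done once by the signer).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

document = b"contract text sent over an insecure channel"

# Sign: the signature binds the signer's key to this exact content.
signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))

# Verify: any modification of the document or signature raises InvalidSignature.
try:
    public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("document was altered or signature is forged")
```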
22

Analyse de la qualité des signatures manuscrites en-ligne par la mesure d'entropie / Quality analysis of online signatures based on entropy measure

Houmani, Nesma 13 January 2011 (has links)
Cette thèse s'inscrit dans le contexte de la vérification d'identité par la signature manuscrite en-ligne. Notre travail concerne plus particulièrement la recherche de nouvelles mesures qui permettent de quantifier la qualité des signatures en-ligne et d'établir des critères automatiques de fiabilité des systèmes de vérification. Nous avons proposé trois mesures de qualité faisant intervenir le concept d’entropie. Nous avons proposé une mesure de qualité au niveau de chaque personne, appelée «Entropie personnelle», calculée sur un ensemble de signatures authentiques d’une personne. L’originalité de l’approche réside dans le fait que l’entropie de la signature est calculée en estimant les densités de probabilité localement, sur des portions, par le biais d’un Modèle de Markov Caché. Nous montrons que notre mesure englobe les critères habituels utilisés dans la littérature pour quantifier la qualité d’une signature, à savoir: la complexité, la variabilité et la lisibilité. Aussi, cette mesure permet de générer, par classification non supervisée, des catégories de personnes, à la fois en termes de variabilité de la signature et de complexité du tracé. En confrontant cette mesure aux performances de systèmes de vérification usuels sur chaque catégorie de personnes, nous avons trouvé que les performances se dégradent de manière significative (d’un facteur 2 au minimum) entre les personnes de la catégorie «haute Entropie» (signatures très variables et peu complexes) et celles de la catégorie «basse Entropie» (signatures les plus stables et les plus complexes). Nous avons ensuite proposé une mesure de qualité basée sur l’entropie relative (distance de Kullback-Leibler), dénommée «Entropie Relative Personnelle» permettant de quantifier la vulnérabilité d’une personne aux attaques (bonnes imitations). Il s’agit là d’un concept original, très peu étudié dans la littérature. La vulnérabilité associée à chaque personne est calculée comme étant la distance de Kullback-Leibler entre les distributions de probabilité locales estimées sur les signatures authentiques de la personne et celles estimées sur les imitations qui lui sont associées. Nous utilisons pour cela deux Modèles de Markov Cachés, l'un est appris sur les signatures authentiques de la personne et l'autre sur les imitations associées à cette personne. Plus la distance de Kullback-Leibler est faible, plus la personne est considérée comme vulnérable aux attaques. Cette mesure est plus appropriée à l’analyse des systèmes biométriques car elle englobe en plus des trois critères habituels de la littérature, la vulnérabilité aux imitations. Enfin, nous avons proposé une mesure de qualité pour les signatures imitées, ce qui est totalement nouveau dans la littérature. Cette mesure de qualité est une extension de l’Entropie Personnelle adaptée au contexte des imitations: nous avons exploité l’information statistique de la personne cible pour mesurer combien la signature imitée réalisée par un imposteur va coller à la fonction de densité de probabilité associée à la personne cible. Nous avons ainsi défini la mesure de qualité des imitations comme étant la dissimilarité existant entre l'entropie associée à la personne à imiter et celle associée à l'imitation. Elle permet lors de l’évaluation des systèmes de vérification de quantifier la qualité des imitations, et ainsi d’apporter une information vis-à-vis de la résistance des systèmes aux attaques. 
Nous avons aussi montré l’intérêt de notre mesure d’Entropie Personnelle pour améliorer les performances des systèmes de vérification dans des applications réelles. Nous avons montré que la mesure d’Entropie peut être utilisée pour : améliorer la procédure d’enregistrement, quantifier la dégradation de la qualité des signatures due au changement de plateforme, sélectionner les meilleures signatures de référence, identifier les signatures aberrantes, et quantifier la pertinence de certains paramètres pour diminuer la variabilité temporelle. / This thesis is focused on the quality assessment of online signatures and its application to online signature verification systems. Our work aims at introducing new quality measures quantifying the quality of online signatures and thus establishing automatic reliability criteria for verification systems. We proposed three quality measures involving the concept of entropy, widely used in Information Theory. We proposed a novel quality measure per person, called "Personal Entropy" calculated on a set of genuine signatures of such a person. The originality of the approach lies in the fact that the entropy of the genuine signature is computed locally, on portions of such a signature, based on local density estimation by a Hidden Markov Model. We show that our new measure includes the usual criteria of the literature, namely: signature complexity, signature variability and signature legibility. Moreover, this measure allows generating, by an unsupervised classification, 3 coherent writer categories in terms of signature variability and complexity. Confronting this measure to the performance of two widely used verification systems (HMM, DTW) on each Entropy-based category, we show that the performance degrade significantly (by a factor 2 at least) between persons of "high Entropy-based category", containing the most variable and the least complex signatures and those of "low Entropy-based category", containing the most stable and the most complex signatures. We then proposed a novel quality measure based on the concept of relative entropy (also called Kullback-Leibler distance), denoted « Personal Relative Entropy » for quantifying person's vulnerability to attacks (good forgeries). This is an original concept and few studies in the literature are dedicated to this issue. This new measure computes, for a given writer, the Kullback-Leibler distance between the local probability distributions of his/her genuine signatures and those of his/her skilled forgeries: the higher the distance, the better the writer is protected from attacks. We show that such a measure simultaneously incorporates in a single quantity the usual criteria proposed in the literature for writer categorization, namely signature complexity, signature variability, as our Personal Entropy, but also the vulnerability criterion to skilled forgeries. This measure is more appropriate to biometric systems, because it makes a good compromise between the resulting improvement of the FAR and the corresponding degradation of FRR. We also proposed a novel quality measure aiming at quantifying the quality of skilled forgeries, which is totally new in the literature. Such a measure is based on the extension of our former Personal Entropy measure to the framework of skilled forgeries: we exploit the statistical information of the target writer for measuring to what extent an impostor’s hand-draw sticks to the target probability density function. 
In this framework, the quality of a skilled forgery is quantified as the dissimilarity existing between the target writer’s own Personal Entropy and the entropy of the skilled forgery sample. Our experiments show that this measure allows an assessment of the quality of skilled forgeries of the main online signature databases available to the scientific community, and thus provides information about systems’ resistance to attacks. Finally, we also demonstrated the benefit of using our Personal Entropy measure for improving the performance of online signature verification systems in real applications. We show that the Personal Entropy measure can be used to: improve the enrolment process, quantify the quality degradation of signatures due to the change of platforms, select the best reference signatures, identify the outlier signatures, and quantify the relevance of time function parameters in the context of temporal variability.
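The Personal Relative Entropy described above is, at its core, a Kullback-Leibler distance between distributions estimated on genuine signatures and on skilled forgeries. The sketch below is a deliberately simplified Python illustration: plain histograms over a single simulated feature stand in for the Hidden Markov Model density estimation used in the thesis, and all data are invented.

```python
# Toy sketch of the Kullback-Leibler "Personal Relative Entropy" idea:
# compare discretized feature distributions of genuine signatures vs.
# skilled forgeries for one writer. The thesis estimates local densities
# with Hidden Markov Models; a simple histogram stands in for that step.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)

# Hypothetical 1-D dynamic feature (e.g. pen speed) sampled from signatures.
genuine_speed = rng.normal(1.0, 0.2, size=2000)   # stable genuine samples
forgery_speed = rng.normal(1.1, 0.4, size=2000)   # skilled forgeries

bins = np.linspace(0.0, 2.5, 40)
p, _ = np.histogram(genuine_speed, bins=bins, density=True)
q, _ = np.histogram(forgery_speed, bins=bins, density=True)
eps = 1e-9  # avoid division by zero in empty bins

# KL(genuine || forgeries): the smaller it is, the closer the forgeries
# come to the writer's own distribution, i.e. the more vulnerable the writer.
kl = entropy(p + eps, q + eps)
print(f"Personal Relative Entropy (toy estimate): {kl:.3f}")
```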
23

Analysis Of Electronic Signature In Turkey From The Legal And Economic Perspectives And The Awareness Level In The Country

Iskender, Gokhan 01 August 2006 (has links) (PDF)
As with other information technologies, the best way to obtain efficient results from electronic signature applications is to integrate them into the legal and economic systems and to raise society's awareness of the technology. This thesis performs legal and economic analyses of the electronic signature in Turkey and measures the level of awareness in society. The analyses show that the electronic signature is not yet legally well established in Turkey, even though its legal basis is harmonised with that of the European Union; that it is expensive in practice, even though its economic rate of return is high; and that the level of awareness in society, measured in this study with a 20-question test, is not very high.
24

Zařízení pro napodobení statických a dynamických vlastností písma / Device for Imitation of Static and Dynamic Handwriting Characteristics

Pawlus, Jan January 2019 (has links)
This project deals with designing and assembling a system for the imitation of static and dynamic handwriting characteristics. The design covers a special pen used to capture the handwriting characteristics, the processing of these characteristics, and their imitation with a 3D printer modified for this purpose. The topic is interesting because research in this specific field, including a real demonstration of how a signature can be forged from dynamic handwriting characteristics without a forger's hand, barely exists. Preventing forgery requires knowing the attack well, which is a clear motivation for this project.
25

Performance analysis of multimodal biometric fusion

Almayyan, Waheeda January 2012 (has links)
Biometrics is a constantly evolving technology that has been widely used in many official and commercial identification applications. In recent years, biometric-based authentication techniques have received increased attention due to growing security concerns. Most biometric systems currently in use employ a single biometric trait; such systems are called unibiometric systems. Despite considerable advances in recent years, authentication based on a single trait still faces challenges such as noisy data, restricted degrees of freedom, intra-class variability, non-universality, spoof attacks, and unacceptable error rates. Some of these challenges can be addressed by designing a multimodal biometric system. Multimodal biometric systems are those which utilize, or are capable of utilizing, more than one physiological or behavioural characteristic for enrolment, verification, or identification. In this thesis, we propose a novel fusion approach at a hybrid level between iris and online signature traits. Online signature and iris authentication techniques have been employed in a range of biometric applications. Besides improving accuracy, fusing the two biometrics has several advantages, such as increasing population coverage, deterring spoofing, and reducing enrolment failure. In this doctoral dissertation, we make a first attempt to combine online signature and iris biometrics. We principally explore the fusion of iris and online signature biometrics and their potential application as biometric identifiers. To address this issue, investigations are carried out into the relative performance of several statistical data fusion techniques for integrating the information in both unimodal and multimodal biometrics. We compare the results of the multimodal approach with those of the individual online signature and iris authentication approaches. This dissertation describes research into the feature and decision fusion levels in multimodal biometrics.
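As a rough illustration of score-level fusion (a simpler strategy related to, but not identical with, the hybrid fusion studied in the thesis), the following Python sketch normalises scores from two hypothetical matchers and combines them with a weighted sum. The scores, weight, and threshold are illustrative assumptions.

```python
# Generic sketch of score-level fusion for two biometric matchers
# (iris + online signature): min-max normalisation followed by a weighted sum.
import numpy as np

def min_max_normalise(scores: np.ndarray) -> np.ndarray:
    """Map raw matcher scores to [0, 1]."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse(iris_scores, signature_scores, w_iris=0.6):
    """Weighted-sum fusion of two normalised score vectors."""
    iris_n = min_max_normalise(np.asarray(iris_scores, dtype=float))
    sig_n = min_max_normalise(np.asarray(signature_scores, dtype=float))
    return w_iris * iris_n + (1.0 - w_iris) * sig_n

# Hypothetical scores for five verification attempts.
fused = fuse([0.91, 0.40, 0.75, 0.30, 0.88], [0.80, 0.55, 0.60, 0.20, 0.95])
decisions = fused >= 0.5  # accept/reject threshold chosen for illustration
print(fused.round(2), decisions)
```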
26

CCFS cryptographically curated file system

Goldman, Aaron David 07 January 2016 (has links)
The Internet was originally designed to be a next-generation phone system that could withstand a Soviet attack. Today, we ask the Internet to perform tasks that no longer resemble phone calls in the face of threats that no longer resemble Soviet bombardment. However, we have come to rely on names that can be subverted at every level of the stack or simply be allowed to rot by their original creators. It is possible for us to build networks of content that serve the content distribution needs of today while withstanding the hostile environment that all modern systems face. This dissertation presents the Cryptographically Curated File System (CCFS), which offers five properties that we feel a modern content distribution system should provide. The first property is Strong Links, which maintains that only the owner of a link can change the content to which it points. The second property, Permissionless Distribution, allows anyone to become a curator without dependence on a naming or numbering authority. Third, Independent Validation arises from the fact that the object seeking affirmation need not choose the source of trust. Connectivity, the fourth property, allows any curator to delegate and curate the right to alter links. Each curator can delegate the control of a link and that designee can do the same, leaving a chain of trust from the original curator to the one who assigned the content. Lastly, with the property of Collective Confidence, trust does not need to come from a single source, but can instead be an aggregate affirmation. Since CCFS embodies all five of these properties, it can serve as the foundational technology for a more robust Web. CCFS can serve as the base of a web that performs the tasks of today’s Web, but also may outperform it. In the third chapter, we present a number of scenarios that demonstrate the capacity and potential of CCFS. The system can be used as a publication platform that has been re-optimized within the constraints of the modern Internet, but not the constraints of decades past. The curated links can still be organized into a hierarchical namespace (e.g., a Domain Naming System (DNS)) and de jure verifications (e.g., a Certificate Authority (CA) system), but also support social, professional, and reputational graphs. This data can be distributed, versioned, and archived more efficiently. Although communication systems were not designed for such a content-centric system, the combination of broadcasts and point-to-point communications are perfectly suited for scaling the distribution, while allowing communities to share the burdens of hosting and maintenance. CCFS even supports the privacy of friend-to-friend networks without sacrificing the ability to interoperate with the wider world. Finally, CCFS does all of this without damaging the ability to operate search engines or alert systems, providing a discovery mechanism, which is vital to a usable, useful web. To demonstrate the viability of this model, we built a research prototype. The results of these tests demonstrate that while the CCFS prototype is not ready to be used as a drop-in replacement for all file system use cases, the system is feasible. CCFS is fast enough to be usable and can be used to publish, version, archive, and search data. Even in this crude form, CCFS already demonstrates advantages over previous state-of-the-art systems. When the Internet was designed, there were relatively fewer computers that were far weaker than the computers we have now. 
They were largely connected to each other over reliable connections. When the Internet was first created, computing was expensive and propagation delay was negligible. Since then, propagation delay has not improved along a Moore's Law curve. Now, latency has come to dominate all other costs of retrieving content; specifically, the propagation time has come to dominate the latency. In order to improve the latency, we are paying more for storage, processing, and bandwidth. The only way to improve propagation delay is to move the content closer to the destination. In order to have the content close to the demand, we store multiple copies and search multiple locations, thus trading off storage, bandwidth, and processing for lower propagation delay. The computing world should re-evaluate these trade-offs because the situation has changed. We need an Internet that is designed for the technologies used today, rather than the tools of the 20th century. CCFS, which accepts this trade-off in favor of lower propagation delay, is better suited to 21st-century technologies. Although CCFS is not preferable in all situations, it can still offer tremendous value. Better robustness, performance, and democracy make CCFS a contribution to the field. Robustness comes from the cryptographic assurances provided by the five properties of CCFS. Performance comes from the locality of content. Democracy arises from the lack of a centralized authority that may grant the right of Free Speech only to those who espouse rhetoric compatible with its ideals. Combined, this model for a cryptographically secure, content-centric system provides a novel contribution to the state of communications technology and information security.
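The Strong Links property, under which only the owner of a link can change the content it points to, can be sketched as a content hash signed by the curator's key. The Python example below is a minimal illustration of that idea using Ed25519 from the cryptography package; it is not CCFS's actual record format or API.

```python
# Minimal sketch of a "strong link": the curator signs (link name, content hash),
# so only the holder of the curator's private key can repoint the link.
# This illustrates the property only, not the real CCFS data structures.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

curator_key = Ed25519PrivateKey.generate()
curator_pub = curator_key.public_key()

def publish(name: bytes, content: bytes):
    digest = hashlib.sha256(content).digest()      # content address
    signature = curator_key.sign(name + digest)    # binds name -> digest
    return {"name": name, "digest": digest, "sig": signature}

def verify(link) -> bool:
    try:
        curator_pub.verify(link["sig"], link["name"] + link["digest"])
        return True
    except InvalidSignature:
        return False

link = publish(b"docs/readme", b"hello, curated world")
print(verify(link))                                # True
link["digest"] = hashlib.sha256(b"tampered").digest()
print(verify(link))                                # False: only the curator can repoint
```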
27

Digital signatures

Swanepoel, Jacques Philip December 2015 (has links)
Thesis (PhD)--Stellenbosch University, 2015 / AFRIKAANSE OPSOMMING : In hierdie verhandeling stel ons 'n nuwe strategie vir outomatiese handtekening-verifikasie voor. Die voorgestelde raamwerk gebruik 'n skrywer-onafhanklike benadering tot handtekening-modellering en is dus in staat om bevraagtekende handtekeninge, wat aan enige skrywer behoort, te bekragtig, op voorwaarde dat minstens een outentieke voorbeeld vir vergelykingsdoeleindes beskikbaar is. Ons ondersoek die tradisionele statiese geval (waarin 'n bestaande pen-op-papier handtekening vanuit 'n versyferde dokument onttrek word), asook die toenemend gewilde dinamiese geval (waarin handtekeningdata outomaties tydens ondertekening m.b.v. gespesialiseerde elektroniese hardeware bekom word). Die statiese kenmerk-onttrekkingstegniek behels die berekening van verskeie diskrete Radontransform (DRT) projeksies, terwyl dinamiese handtekeninge deur verskeie ruimtelike en temporele funksie-kenmerke in die kenmerkruimte voorgestel word. Ten einde skrywer-onafhanklike handtekening-ontleding te bewerkstellig, word hierdie kenmerkstelle na 'n verskil-gebaseerde voorstelling d.m.v. 'n geskikte digotomie-transformasie omgeskakel. Die klassifikasietegnieke, wat vir handtekening-modellering en -verifikasie gebruik word, sluit kwadratiese diskriminant-analise (KDA) en steunvektormasjiene (SVMe) in. Die hoofbydraes van hierdie studie sluit twee nuwe tegnieke, wat op die bou van 'n robuuste skrywer-onafhanklike handtekeningmodel gerig is, in. Die eerste, 'n dinamiese tydsverbuiging digotomie-transformasie vir statiese handtekening-voorstelling, is in staat om vir redelike intra-klas variasie te kompenseer, deur die DRT-projeksies voor vergelyking nie-lineêr te belyn. Die tweede, 'n skrywer-spesifieke verskil-normaliseringstrategie, is in staat om inter-klas skeibaarheid in die verskilruimte te verbeter deur slegs streng relevante statistieke tydens die normalisering van verskil-vektore te beskou. Die normaliseringstrategie is generies van aard in die sin dat dit ewe veel van toepassing op beide statiese en dinamiese handtekening-modelkonstruksie is. Die stelsels wat in hierdie studie ontwikkel is, is spesifiek op die opsporing van hoë-kwaliteit vervalsings gerig. Stelselvaardigheid-afskatting word met behulp van 'n omvattende eksperimentele protokol bewerkstellig. Verskeie groot handtekening-datastelle is oorweeg. In beide die statiese en dinamiese gevalle vaar die voorgestelde SVM-gebaseerde stelsel beter as die voorgestelde KDA-gebaseerde stelsel. Ons toon ook aan dat die stelsels wat in hierdie studie ontwikkel is, die meeste bestaande stelsels wat op dieselfde datastelle geëvalueer is, oortref. Dit is selfs meer belangrik om daarop te let dat, wanneer hierdie stelsels met bestaande tegnieke in die literatuur vergelyk word, ons aantoon dat die gebruik van die nuwe tegnieke, soos in hierdie studie voorgestel, konsekwent tot 'n statisties beduidende verbetering in stelselvaardigheid lei. / ENGLISH ABSTRACT : In this dissertation we present a novel strategy for automatic handwritten signature verification. The proposed framework employs a writer-independent approach to signature modelling and is therefore capable of authenticating questioned signatures claimed to belong to any writer, provided that at least one authentic sample of said writer's signature is available for comparison.
We investigate both the traditional off-line scenario (where an existing pen-on-paper signature is extracted from a digitised document) as well as the increasingly popular on-line scenario (where the signature data are automatically recorded during the signing event by means of specialised electronic hardware). The utilised off-line feature extraction technique involves the calculation of several discrete Radon transform (DRT) based projections, whilst on-line signatures are represented in feature space by several spatial and temporal function features. In order to facilitate writer-independent signature analysis, these feature sets are subsequently converted into a dissimilarity-based representation by means of a suitable dichotomy transformation. The classification techniques utilised for signature modelling and verification include quadratic discriminant analysis (QDA) and support vector machines (SVMs). The major contributions of this study include two novel techniques aimed towards the construction of a robust writer-independent signature model. The first, a dynamic time warping (DTW) based dichotomy transformation for off-line signature representation, is able to compensate for reasonable intra-class variability by non-linearly aligning DRT-based projections prior to matching. The second, a writer-specific dissimilarity normalisation strategy, improves inter-class separability in dissimilarity space by considering only strictly relevant dissimilarity statistics when normalising the dissimilarity vectors belonging to a specific individual. This normalisation strategy is generic in the sense that it is equally applicable to both off-line and on-line signature model construction. The systems developed in this study are specifically aimed towards skilled forgery detection. System proficiency estimation is conducted using a rigorous experimental protocol. Several large signature corpora are considered. In both the off-line and on-line scenarios, the proposed SVM-based system outperforms the proposed QDA-based system. We also show that the systems proposed in this study outperform most existing systems that were evaluated on the same data sets. More importantly, when compared to state-of-the-art techniques currently employed in the literature, we show that the incorporation of the novel techniques proposed in this study consistently results in a statistically significant improvement in system proficiency.
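The first contribution above relies on dynamic time warping (DTW) to align DRT-based projections non-linearly before they are compared. The following Python sketch computes a textbook DTW distance between two simulated 1-D projections; the thesis's full DTW-based dichotomy transformation and writer-specific normalisation are not reproduced.

```python
# Compact dynamic time warping (DTW) distance between two 1-D feature
# sequences, as a stand-in for the non-linear alignment of DRT projections
# described above.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, match (classic DTW recursion).
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

# Two hypothetical DRT projections of the same signature, slightly misaligned.
p1 = np.sin(np.linspace(0, np.pi, 50))
p2 = np.sin(np.linspace(0, np.pi, 60)) * 1.05
print(f"DTW dissimilarity: {dtw_distance(p1, p2):.3f}")
```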
28

Dynamic Behavioral Analysis of Malicious Software with Norman Sandbox

Shoemake, Danielle 05 August 2010 (has links)
Current signature-based Anti-Virus (AV) detection approaches take, on average, two weeks from discovery to definition update release to AV users. In addition, these signatures get stale quickly: AV products miss between 25% and 80% of new malicious software within a week of not updating. This thesis researches and develops a detection/classification mechanism for malicious software through statistical analysis of dynamic malware behavior. Several characteristics of each behavior type were stored and analyzed, such as the DLL names of called functions, function parameters, exception thread IDs, exception opcodes, pages accessed during faults, port numbers, connection types, and IP addresses. Behavioral data was collected via Norman Sandbox for storage and analysis. We set out to determine which statistical measures and metrics can be collected for use in the detection and classification of malware. We conclude that our logging and cataloging procedure is a potentially viable method for creating behavior-based malicious software detection and classification mechanisms.
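To make the logging and cataloging idea concrete, the sketch below turns a hypothetical sandbox behavior log into a simple count-based feature vector of the kind a statistical classifier could consume. The event fields and port whitelist are invented for illustration and do not reflect Norman Sandbox's actual output schema.

```python
# Hedged sketch: turning a (hypothetical) sandbox behaviour log into a
# count-based feature vector. Field names are illustrative only.
from collections import Counter

sample_log = [
    {"type": "api_call", "dll": "kernel32.dll", "function": "CreateFileW"},
    {"type": "api_call", "dll": "ws2_32.dll", "function": "connect"},
    {"type": "network", "port": 4444, "proto": "tcp", "ip": "203.0.113.9"},
    {"type": "exception", "opcode": "0xCC"},
    {"type": "api_call", "dll": "kernel32.dll", "function": "WriteProcessMemory"},
]

def featurise(events):
    """Count behaviour categories and a few simple suspicious indicators."""
    features = dict(Counter(e["type"] for e in events))
    features["distinct_dlls"] = len({e["dll"] for e in events if "dll" in e})
    features["uncommon_ports"] = sum(
        1 for e in events if e.get("port") not in (None, 80, 443, 53)
    )
    return features

print(featurise(sample_log))
```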
29

Derivation of the human cell cycle transcriptional signature

Giotti, Bruno January 2017 (has links)
Duplication of the genome and successful mitotic cell division require the coordinated activity of hundreds of proteins. Many are known, but a complete list of the components of the cell cycle machinery is still lacking. This thesis describes a series of data-driven analyses to assemble a comprehensive list of genes induced during the human cell cycle. To start with, a meta-analysis of previous transcriptomics studies revealed a larger number of cell cycle genes consistently expressed across multiple human cell types than previously reported. Following this observation, the cell cycle transcriptome was further investigated with the generation of a new time-course microarray dataset on normal human dermal fibroblasts (NHDF) undergoing synchronised cell division. Network cluster analysis of these data identified transcripts whose expression was associated with different stages of cell cycle progression. Co-expression of these transcripts was then analysed using a complementary dataset that included genome-wide promoter expression of a wide range of human primary cells. This resulted in the identification of a core set of 545 cell cycle genes, mainly associated with G1/S to M phases, which showed a high degree of co-expression across all cell types. Expression of 75% of these genes was also found to be conserved in mouse, as revealed by the analysis of a new microarray experiment generated from mouse fibroblasts. Gene Ontology and motif enrichment analysis validated the list, with significant enrichments for terms and transcription factor binding sites linked with cell cycle biology. Toward a better interpretation of these 545 genes, a meticulous manual annotation exercise was carried out. Unsurprisingly, the majority of these genes were known to be involved in S- and M-phase-associated processes; however, 50 genes were functionally uncharacterised. A subset of 36 of these were then taken forward for subcellular localisation assays. These studies were performed by transfection of human embryonic kidney cells (HEK293T) with GFP-tagged cDNA clones, leading to the finding of four uncharacterised proteins co-localising with the centrosome, a crucial organelle for normal cell cycle progression. This thesis represents an attempt at documenting the genes actively transcribed, and therefore likely involved, in the processes associated with the cell cycle, hence providing a comprehensive catalogue of its key components. In so doing, I have also identified a significant number of new genes likely to contribute to this central process, which is vital in health and disease.
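The network cluster analysis step, which groups transcripts by the similarity of their expression profiles over the time course, can be illustrated with a toy example. The Python sketch below simulates a small genes-by-time-points matrix, builds a correlation-based dissimilarity, and cuts a hierarchical clustering into two groups; the actual analysis used microarray data and dedicated network clustering tools, which are not reproduced here.

```python
# Toy sketch of co-expression clustering on a time-course expression matrix
# (genes x time points). The data are simulated, not the NHDF measurements.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
time_points = np.linspace(0, 2 * np.pi, 12)   # 12 samples over one simulated cycle

# 20 simulated genes: half peak early (G1/S-like), half peak late (G2/M-like).
early = np.sin(time_points) + rng.normal(0, 0.2, (10, 12))
late = np.sin(time_points - np.pi / 2) + rng.normal(0, 0.2, (10, 12))
expr = np.vstack([early, late])

corr = np.corrcoef(expr)                      # gene-by-gene Pearson correlation
dist = 1.0 - corr                             # correlation -> dissimilarity
condensed = squareform(dist, checks=False)    # condensed form expected by linkage
clusters = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(clusters)                               # two co-expression groups
```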
30

Modeling of Seismic Signatures of Carbonate Rock Types

Jan, Badr H. December 2009 (has links)
Carbonate reservoirs of different rock types have wide ranges of porosity and permeability, creating zones with different reservoir quality and flow properties. This research addresses how seismic technology can be used to identify different carbonate rock types for characterization of reservoir heterogeneity. I also investigated which seismic methods can help delineate thin high-permeability (super-k) layers that cause early water breakthroughs that severely reduce hydrocarbon recovery. Based on data from a Middle East producing field, a typical geologic model is defined, including a seal, a thin fractured layer, grainstone, and wackestone. Convolutional, finite difference, and fluid substitution modeling methods are used to understand the seismic signatures of carbonate rock types. Results show that the seismic reflections from the seal/fractured-layer interface and the fractured-layer/grainstone interface cannot be resolved with conventional seismic data. However, seismic reflection amplitudes from interfaces between different carbonate rock types within the reservoir are strong enough to be identified on seismic data, compared with reflections from both the top and bottom interfaces of the reservoir. The seismic reflection amplitudes from the fractured-layer/grainstone and the grainstone/wackestone interfaces are 17% and 23% of those from the seal/fractured-layer interface, respectively. By using AVO analysis, it may be possible to predict the presence of the fractured layer. It is observed that the seismic reflection amplitude resulting from the interference between the reflections from the overburden/seal and seal/fractured-layer interfaces does not change with offset. The thin super-k layer can also be identified using the fluid substitution method and time-lapse seismic analysis. This analysis shows that the layer has a 5% increase in acoustic impedance after the oil in it is fully replaced by injected water. This causes an 11% decrease and an 87% increase in seismic reflection amplitudes from the seal/fractured-layer interface and the fractured-layer/grainstone interface after fluid substitution, respectively. These results show that it is possible to predict carbonate rock types, including thin super-k layers, using their seismic signatures, when different seismic techniques are used together, such as synthetic wave modeling, AVO, and time-lapse analysis. In future work, the convolutional model, AVO analysis, and fluid substitution could be applied to real seismic data for field verification and production monitoring.
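The convolutional modeling mentioned above amounts to computing normal-incidence reflection coefficients, R = (Z2 - Z1) / (Z2 + Z1), from the acoustic impedances of successive layers and convolving them with a source wavelet. The Python sketch below does exactly that for an invented four-layer impedance profile; the impedance values, layer thicknesses, and wavelet frequency are illustrative assumptions, not the field data used in the thesis.

```python
# Hedged sketch of 1-D convolutional seismic modelling: normal-incidence
# reflection coefficients from layer impedances, convolved with a Ricker wavelet.
import numpy as np

def ricker(frequency_hz: float, dt: float, length_s: float = 0.128) -> np.ndarray:
    t = np.arange(-length_s / 2, length_s / 2, dt)
    a = (np.pi * frequency_hz * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.001                                      # 1 ms sampling
# Illustrative acoustic impedance profile: seal, thin fractured layer,
# grainstone, wackestone (values and thicknesses are invented).
impedance = np.concatenate([
    np.full(100, 9.0e6), np.full(10, 7.5e6),
    np.full(120, 8.0e6), np.full(120, 8.6e6),
])

# Reflection coefficient at each interface: R = (Z2 - Z1) / (Z2 + Z1).
rc = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])
trace = np.convolve(rc, ricker(30.0, dt), mode="same")
print(f"max |R| = {np.abs(rc).max():.3f}, trace samples = {trace.size}")
```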
