431
An economic analysis of crude oil exploration in Saskatchewan and Alberta
Kamsari, Haul, 28 February 2005
The international market for crude oil and natural gas is well established and very competitive. Knowledge about costs is important in understanding the current position of producers within the industry. From the producers' perspective, the lower their costs, the more profitable they will be at a given price of crude.
This thesis focuses on an economic analysis of crude oil exploration in Saskatchewan and Alberta. In a competitive market, the producers require estimates of finding costs in both regions. The public policies that are designed to encourage crude exploration also rely heavily on reliable estimates of these costs.
The results show that Saskatchewan's per-unit finding cost is significantly lower than Alberta's in spite of the geological differences between the two provinces. The finding costs are estimated using a methodology (Uhler 1979) that is widely accepted in the economic literature on non-renewable resources. The results support the hypothesis that finding costs in both regions are increasing and, except for the last six years of the analysis, the argument that these costs will converge in the long run.
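To make the notion of a per-unit finding cost concrete, the sketch below computes the simple ratio of exploration expenditure to reserve additions on hypothetical figures; Uhler's (1979) methodology is econometric and considerably more involved than this illustration.

```python
# Minimal sketch of a per-unit finding cost (hypothetical figures; Uhler's (1979)
# approach estimates finding costs econometrically, not with this simple ratio).
exploration_expenditure = {  # millions of dollars per year (hypothetical)
    2000: 120.0, 2001: 135.0, 2002: 150.0,
}
reserve_additions = {  # millions of barrels discovered per year (hypothetical)
    2000: 40.0, 2001: 38.0, 2002: 35.0,
}

for year in sorted(exploration_expenditure):
    cost_per_barrel = exploration_expenditure[year] / reserve_additions[year]
    print(f"{year}: finding cost ~ {cost_per_barrel:.2f} $/barrel")

# A rising per-unit cost over time is consistent with the depletion effect the
# thesis tests for: cheaper pools tend to be found first.
```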
432
Making sense of the mess: do CDS's help?
Esau, Heidi Marie, 12 April 2010
In a firm-level matched sample of 499 firms, we examine the information flow between stocks and credit default swaps (CDSs) over the period January 2004 to December 2008. Our study confirms the general finding of previous studies that information generally flows from the equity market to the CDS market. However, for a much smaller number of firms we find that information also flows from the CDS to its stock. A major advantage of our sample period is that it allows us to examine the information flow before and during the crisis. This paper makes two contributions. First, we document that the number of firms for which information flows from the CDS to its stock increases almost tenfold during the crisis. The crisis is often referred to as a credit crisis, so this finding is consistent with what is expected of CDSs. The major contribution of this paper is that it identifies the firm-specific factors that influence the information flow across the two markets. We show that characteristics such as asset size, profitability, and industry, amongst others, play an important role in determining information flow.
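The abstract does not name the econometric test used, but lead-lag information flow between two markets is commonly examined with Granger-causality-style regressions. The sketch below runs such a test on synthetic series purely as an illustration of the idea, not as the thesis's specification.

```python
# Hedged sketch: test whether stock returns lead CDS spread changes with a
# Granger-causality test.  Data are synthetic; the thesis's exact method and
# variable definitions may differ.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
stock_returns = rng.normal(size=n)
# CDS spread changes partly driven by lagged stock returns, so the test should
# detect information flow from the stock to the CDS.
cds_changes = 0.4 * np.roll(stock_returns, 1) + rng.normal(scale=0.5, size=n)
cds_changes[0] = rng.normal()

# Column order is [effect, cause]: does the stock Granger-cause the CDS?
data = np.column_stack([cds_changes, stock_returns])
results = grangercausalitytests(data, maxlag=2)
p_value = results[2][0]["ssr_ftest"][1]   # p-value of the F-test at lag 2
print(f"H0: stock does not Granger-cause CDS -> p = {p_value:.4f}")
```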
433
Structure Pattern Analysis Using Term Rewriting and Clustering Algorithm
Fu, Xuezheng, 27 June 2007
Biological data are accumulating at a fast pace. However, raw data are generally difficult to understand and are not useful unless we unlock the information hidden in them. Knowledge and information can be extracted as the patterns or features buried within the data. Thus data mining, which aims at uncovering underlying rules, relationships, and patterns in data, has emerged as one of the most exciting fields in computational science. In this dissertation, we develop efficient approaches to the structure pattern analysis of RNA and protein three-dimensional structures. The major techniques used in this work include term rewriting and clustering algorithms. First, a new approach is designed to study the interaction of RNA secondary structure motifs using the concept of term rewriting. Second, an improved K-means clustering algorithm is proposed to estimate the number of clusters in data, and a new distance descriptor is introduced for the appropriate representation of three-dimensional structure segments of RNA and proteins. The experimental results show improvements in the determination of the number of clusters in data, evaluation of RNA structure similarity, RNA structure database search, and understanding of the protein sequence-structure correspondence.
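The improved K-means algorithm and the distance descriptor are the thesis's own contributions; the sketch below only illustrates the generic task of estimating the number of clusters by scoring K-means partitions over a range of k (here with the silhouette criterion), not the method actually proposed.

```python
# Hedged sketch: run K-means for several values of k and pick the k with the
# best silhouette score.  The thesis's improved K-means and structure-segment
# distance descriptor are not reproduced; the data below are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Synthetic stand-in for structure-segment feature vectors with 3 groups.
segments = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 4)) for c in (0, 3, 6)])

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(segments)
    scores[k] = silhouette_score(segments, labels)

best_k = max(scores, key=scores.get)
print("estimated number of clusters:", best_k)
```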
434
Die Entdeckung des Elementes 91 durch Kasimir Fajans und Oswald Göhring im Jahr 1913 und die Namensgebung durch Otto Hahn und Lise Meitner 1918 / The discovery of element no. 91 by Kasimir Fajans and Oswald Göhring in 1913 and its naming by Otto Hahn and Lise Meitner in 1918
Niese, Siegfried, 21 February 2013
In 1913 Kasimir Fajans and Oswald Göhring discovered element 91 as its short-lived isotope 234mPa and named it brevium (Bv). The discovery followed directly from the radioactive displacement law found by Alexander S. Russell, Frederick Soddy and Fajans, according to which the thorium-like daughter product of uranium known as UX had to contain an unknown radioelement chemically similar to tantalum, as predicted by Dimitri Mendeleev's periodic system. In 1918, while searching for the long-lived mother substance of actinium, Otto Hahn and Lise Meitner found the long-lived isotope of brevium (231Pa), which they named protactinium. Although they described it as an isotope of brevium, in the following years they were usually credited not only with naming element 91 but also with its discovery.
435
Automatic Stability Checking for Large Analog Circuits
Mukherjee, Parijat, 14 March 2013
Small-signal stability has always been an important concern for analog designers. Recent advances such as the Loop Finder algorithm allow designers to detect and identify local, potentially unstable return loops without the need to identify and add breakpoints. However, this method suffers from extremely high time and memory complexity and thus cannot be scaled to very large analog circuits. In this research work, we first take an in-depth look at the Loop Finder algorithm to identify certain key enhancements that can be made to overcome these shortcomings. We next propose pole discovery and impedance computation methods that address these shortcomings by exploring only a certain region of interest in the s-plane. The reduced time and memory complexity obtained via the new methodology allows us to extend automatic stability checking to much larger circuits than was previously possible.
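As a rough illustration of pole-based stability checking only (not the Loop Finder algorithm or the pole-discovery method developed here), the sketch below computes the poles of a small linearized state-space model as eigenvalues of its A matrix and flags those falling in a right-half-plane region of interest.

```python
# Hedged sketch: for a linearized small-signal model dx/dt = A x + B u, the poles
# are the eigenvalues of A; instability corresponds to poles with positive real
# part.  This is only the textbook check, restricted to a rectangular region of
# interest in the s-plane; it is not the thesis's pole-discovery algorithm.
import numpy as np

A = np.array([[-1.0, 2.0, 0.0],
              [0.0, 0.5, 1.0],     # the 0.5 entry yields one unstable pole
              [0.0, 0.0, -3.0]])

poles = np.linalg.eigvals(A)

re_max, im_max = 10.0, 1e6        # region of interest (assumed bounds)
in_region = [p for p in poles if 0 < p.real <= re_max and abs(p.imag) <= im_max]

print("all poles:", np.round(poles, 3))
print("potentially unstable poles in region of interest:", np.round(in_region, 3))
```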
436
Improving Scalability and Efficiency of ILP-based and Graph-based Concept Discovery Systems
Mutlu, Alev, 01 July 2013
Concept discovery is the problem of finding definitions of a target relation in terms of other relations given as background knowledge. Inductive Logic Programming (ILP)-based and graph-based approaches are two competing approaches to the concept discovery problem. Although ILP-based systems have long dominated the area, graph-based systems have recently gained popularity as they overcome certain shortcomings of ILP-based systems. While having applications in numerous domains, ILP-based concept discovery systems still suffer from scalability and efficiency issues, which generally arise from the large search spaces such systems build. In this work we propose memoization-based and parallelization-based methods that modify the search space construction and evaluation steps of ILP-based concept discovery systems to overcome these problems.
In this work we propose three memoization-based methods, called Tabular CRIS, Tabular CRIS-wEF, and Selective Tabular CRIS. In these methods, evaluation queries are stored in look-up tables for later use. While preserving some core functions in common, each proposed method improves the efficiency and scalability of its predecessor by introducing constraints on what kind of evaluation queries to store in look-up tables and for how long.
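A minimal sketch of the memoization idea, a look-up table keyed by the evaluation query, is given below; the constraints that distinguish Tabular CRIS, Tabular CRIS-wEF, and Selective Tabular CRIS are not reproduced.

```python
# Hedged sketch of memoizing evaluation queries: results are cached in a look-up
# table keyed by the query text, so repeated queries met during search-space
# construction are answered without re-evaluation.  The CRIS variants further
# constrain what is cached and for how long; that logic is not reproduced here.
evaluation_cache = {}

def evaluate_query(query, evaluate_fn):
    """Return the (possibly cached) result of an evaluation query."""
    if query not in evaluation_cache:
        evaluation_cache[query] = evaluate_fn(query)   # expensive call, e.g. a coverage test
    return evaluation_cache[query]

# Example with a stand-in evaluator that just measures the query string.
first = evaluate_query("daughter(X,Y) :- parent(Y,X)", len)
again = evaluate_query("daughter(X,Y) :- parent(Y,X)", len)  # served from the cache
print(first, again, len(evaluation_cache))
```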
The proposed parallelization method, called pCRIS, parallelizes the search space construction and evaluation steps of ILP-based concept discovery systems in a data-parallel manner. The proposed method introduces policies to minimize the redundant work and waiting time among the workers at synchronization points.
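The sketch below shows data-parallel evaluation of a batch of candidate queries with a worker pool; pCRIS's scheduling and synchronization policies are not reproduced.

```python
# Hedged sketch of data-parallel query evaluation: candidate queries are split
# across workers, evaluated independently, and gathered at a synchronization
# point.  The pCRIS policies for reducing redundant work and waiting time at
# such points are not reproduced here.
from multiprocessing import Pool

def evaluate(query):
    # Stand-in for an expensive coverage/evaluation computation.
    return query, sum(ord(c) for c in query) % 97

if __name__ == "__main__":
    candidate_queries = [f"clause_{i}" for i in range(100)]
    with Pool(processes=4) as pool:
        results = pool.map(evaluate, candidate_queries)   # barrier: all workers finish here
    print(results[:3])
```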
Graph-based approaches were first introduced to the concept discovery domain to handle the so-called local plateau problem. They have recently gained more popularity in concept discovery systems as they provide a convenient environment for representing relational data and are able to overcome certain shortcomings of ILP-based concept discovery systems. Graph-based approaches can be classified into structure-based approaches and path-finding approaches. The first class needs to employ expensive algorithms such as graph isomorphism to find frequently appearing substructures. Methods that fall into the second class need to employ sophisticated indexing mechanisms to find the frequently appearing paths that connect nodes of interest. In this work, we also propose a hybrid method for graph-based concept discovery which requires neither costly substructure matching algorithms nor path indexing mechanisms. The proposed method builds the graph in such a way that similar facts are grouped together, and paths that eventually turn out to be concept descriptors are built while the graph is constructed.
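As a generic illustration of path-based concept discovery over a fact graph (not the hybrid construction proposed in the thesis), the sketch below builds a graph from ground facts and enumerates paths between two constants of interest as candidate descriptors.

```python
# Hedged sketch: represent ground facts as labelled edges of a graph and treat
# paths between constants of interest as candidate concept descriptors.  The
# thesis's hybrid method groups similar facts during construction; that grouping
# is not reproduced here.  Facts are toy examples.
import networkx as nx

facts = [
    ("ann", "parent", "bob"),
    ("bob", "parent", "carol"),
    ("ann", "married_to", "dave"),
]

g = nx.DiGraph()
for subject, relation, obj in facts:
    g.add_edge(subject, obj, relation=relation)

# Candidate descriptors connecting "ann" and "carol".
for path in nx.all_simple_paths(g, source="ann", target="carol"):
    relations = [g.edges[u, v]["relation"] for u, v in zip(path, path[1:])]
    print(path, relations)
```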
437
Investigation of Schizophrenia-Related Genes and Pathways Through Genome-Wide Association Studies
Dom, Huseyin Alper, 01 January 2013
Schizophrenia is a complex mental disorder that is commonly characterized by deterioration of intellectual processes and emotional responses and affects 1% of any given population. SNPs are single-nucleotide changes that take place in DNA sequences and account for the major share of genomic variation. In this study, our goal was to identify SNPs as genomic markers that are related to schizophrenia and to investigate the genes and pathways identified through the analysis of these SNPs. Genome-wide association studies (GWAS) analyse the whole genome of case and control groups to identify genetic variations and search for related markers such as SNPs. GWAS are the most common method for investigating the genetic causes of a complex disease such as schizophrenia, because conventional linkage studies are not sufficient. Out of 909,622 SNPs, analysis of the dbGaP schizophrenia genotyping data identified 25,555 SNPs with a p-value below 5x10-5. Next, a combined p-value approach to identify associated genes and pathways and AHP-based prioritization to select biologically relevant SNPs with high statistical association were applied using the METU-SNP software. 6,000 SNPs had an AHP score above 0.4; these mapped to 2,500 genes suggested to be associated with schizophrenia and related conditions. In addition to previously described neurological pathways, pathway and network analysis showed enrichment of two pathways.
Melanogenesis and vascular smooth muscle contraction pathways were found to be highly associated with schizophrenia. We have also shown that these pathways can be organized into one biological network, which might have a role in the molecular etiology of schizophrenia. Overall, the analysis revealed two novel candidate genes, SOS1 and GUCY1B3, that may be related to schizophrenia.
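To make the filtering and gene-level combination steps concrete, the sketch below filters SNPs by a significance threshold and combines per-gene p-values with Fisher's method, a common choice; METU-SNP's actual combined p-value procedure and AHP scoring are not reproduced, and the data are hypothetical.

```python
# Hedged sketch: filter SNPs by a significance threshold and combine the p-values
# of SNPs mapping to the same gene with Fisher's method.  METU-SNP's combined
# p-value and AHP-based prioritization are more elaborate and are not reproduced
# here; SNP ids and p-values below are hypothetical.
from scipy.stats import combine_pvalues

snp_pvalues = {            # SNP id -> association p-value (hypothetical)
    "rs0001": 2e-6, "rs0002": 4e-5, "rs0003": 0.3, "rs0004": 1e-7,
}
snp_to_gene = {"rs0001": "SOS1", "rs0002": "SOS1",
               "rs0003": "GUCY1B3", "rs0004": "GUCY1B3"}

threshold = 5e-5
selected = {snp: p for snp, p in snp_pvalues.items() if p < threshold}
print("SNPs passing the threshold:", sorted(selected))

gene_pvals = {}
for gene in set(snp_to_gene.values()):
    pvals = [p for snp, p in snp_pvalues.items() if snp_to_gene[snp] == gene]
    stat, combined = combine_pvalues(pvals, method="fisher")
    gene_pvals[gene] = combined
print("gene-level combined p-values:", gene_pvals)
```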
438
Design and Performance Evaluation of Service Discovery Protocols for Vehicular Networks
Abrougui, Kaouther, 28 September 2011
Intelligent Transportation Systems (ITS) are gaining momentum among researchers. ITS encompasses several technologies, including wireless communications, sensor networks, data and voice communication, and real-time driving assistance systems. These state-of-the-art technologies are expected to pave the way for a plethora of vehicular network applications. In fact, we have recently witnessed a growing interest in Vehicular Networks from both the research community and industry. Several potential applications of Vehicular Networks are envisioned, such as road safety and security, traffic monitoring, and driving comfort, to mention just a few. It is critical that the existence of convenience or driving-comfort services does not negatively affect the performance of safety services. In essence, the dissemination of safety services or the discovery of convenience applications requires communication among service providers and service requesters over constrained bandwidth resources. Therefore, service discovery techniques for vehicular networks must use the available common resources efficiently.
In this thesis, we focus on the design of bandwidth-efficient and scalable service discovery protocols for Vehicular Networks. Three types of service discovery architectures are introduced: infrastructure-less, infrastructure-based, and hybrid architectures.
Our proposed algorithms are network-layer based: service discovery messages are integrated into the routing messages for lightweight discovery. Moreover, our protocols exploit channel diversity for efficient service discovery. We describe our algorithms and discuss their implementation. Finally, we present the main results of the extensive set of simulation experiments used to evaluate their performance.
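To illustrate the piggybacking idea only, a routing beacon can carry an optional service advertisement field, as in the sketch below; the field names and channel-selection rule are illustrative assumptions, not the thesis's actual message format.

```python
# Hedged sketch of piggybacking service discovery on routing messages: a routing
# beacon optionally carries a small service advertisement, so no separate
# discovery flood is needed.  All field names and values are illustrative
# assumptions, not the protocol formats defined in the thesis.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ServiceAdvertisement:
    service_id: str          # e.g. "road-hazard-alert" (hypothetical)
    provider_id: str
    channel: int             # service channel chosen for channel diversity

@dataclass
class RoutingBeacon:
    sender_id: str
    position: Tuple[float, float]    # (x, y) in metres
    sequence_no: int
    service_ad: Optional[ServiceAdvertisement] = None   # piggybacked, may be absent

beacon = RoutingBeacon(
    sender_id="vehicle-42",
    position=(1200.0, 85.5),
    sequence_no=17,
    service_ad=ServiceAdvertisement("road-hazard-alert", "rsu-7", channel=2),
)
print(beacon)
```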
439
Statistical Learning in Drug Discovery via Clustering and Mixtures
Wang, Xu, January 2007
In drug discovery, thousands of compounds are assayed to detect activity against a biological target. The goal of drug discovery is to identify compounds that are active against the target (e.g. inhibit a virus). Statistical learning in drug discovery seeks to build a model that uses descriptors characterizing molecular structure to predict biological activity. However, the characteristics of drug discovery data can make it difficult to model the relationship between molecular descriptors and biological activity. Among these characteristics are the rarity of active compounds, the large volume of compounds tested by high-throughput screening, and the complexity of molecular structure and its relationship to activity.
This thesis focuses on the design of statistical learning algorithms/models and their applications to drug discovery. The two main parts of the thesis are an algorithm-based statistical method and a more formal model-based approach. Both approaches can facilitate and accelerate the process of developing new drugs. A unifying theme is the use of unsupervised methods as components of supervised learning algorithms/models.
In the first part of the thesis, we explore a sequential screening approach, Cluster Structure-Activity Relationship Analysis (CSARA). Sequential screening integrates high-throughput screening with mathematical modeling to sequentially select the best compounds. CSARA is a cluster-based, algorithm-driven method. To gain further insight into this method, we use three carefully designed experiments to compare predictive accuracy with Recursive Partitioning, a popular structure-activity relationship analysis method. The experiments show that CSARA outperforms Recursive Partitioning. Comparisons include problems with many descriptor sets and situations in which many descriptors are not important for activity.
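A rough sketch of the cluster-based sequential screening idea follows; CSARA's actual selection rules are not reproduced, and the data, cluster count, and probe size are hypothetical.

```python
# Hedged sketch of cluster-based sequential screening: cluster the compound
# library on descriptors, assay a few compounds per cluster, and carry the
# remaining members of clusters that produced hits into the next round.
# CSARA's actual selection rules are not reproduced; all data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
descriptors = rng.normal(size=(1000, 10))          # compound descriptor matrix
activity = rng.random(1000) < 0.02                 # rare actives (~2%)

n_clusters = 20
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(descriptors)

follow_up = []
for cluster in range(n_clusters):
    members = np.where(labels == cluster)[0]
    probe = rng.choice(members, size=min(5, members.size), replace=False)
    if activity[probe].any():                      # the probed subset produced a hit
        follow_up.extend(np.setdiff1d(members, probe))

print("compounds selected for the next screening round:", len(follow_up))
```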
In the second part of the thesis, we propose and develop constrained mixture discriminant analysis (CMDA), a model-based method. The main idea of CMDA is to model the distribution of the observations given the class label (e.g. active or inactive class) as a constrained mixture distribution, and then use Bayes' rule to predict the probability of being active for each observation in the testing set. Constraints are used to deal with the otherwise explosive growth of the number of parameters with increasing dimensionality. CMDA is designed to solve several challenges in modeling drug data sets, such as multiple mechanisms, the rare-target problem (i.e. imbalanced classes), and the identification of relevant subspaces of descriptors (i.e. variable selection).
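The prediction step can be sketched as follows with toy class-conditional mixtures built from products of univariate Gaussians, in the spirit of the CMDA1 model described in the next paragraph; the parameters are hypothetical and not fitted by the thesis's constrained estimator.

```python
# Hedged sketch of the CMDA-style prediction step: model each class-conditional
# density as a mixture (here, products of univariate Gaussians) and apply
# Bayes' rule to score the probability of being active.  Parameters are
# hypothetical, not estimates from the thesis's method.
import numpy as np
from scipy.stats import norm

def mixture_density(x, weights, means, sds):
    """Mixture of product-of-univariate-Gaussian components evaluated at x."""
    comps = [w * np.prod(norm.pdf(x, loc=m, scale=s))
             for w, m, s in zip(weights, means, sds)]
    return sum(comps)

# Hypothetical mixtures over 3 descriptors: two components for the active class,
# one for the inactive class.
active = dict(weights=[0.6, 0.4], means=[np.zeros(3), np.ones(3)], sds=[np.ones(3), np.ones(3)])
inactive = dict(weights=[1.0], means=[np.full(3, 2.0)], sds=[np.full(3, 1.5)])
prior_active = 0.02                                   # rare-target prior

x = np.array([0.2, -0.1, 0.4])
num = prior_active * mixture_density(x, **active)
den = num + (1 - prior_active) * mixture_density(x, **inactive)
print("P(active | x) =", num / den)                   # Bayes' rule
```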
We focus on the CMDA1 model, in which univariate densities form the building blocks of the mixture components. Due to the unboundedness of the CMDA1 log-likelihood function, it is easy for the EM algorithm to converge to degenerate solutions. A special multi-step EM algorithm is therefore developed and explored via several experimental comparisons. Using the multi-step EM algorithm, the CMDA1 model is compared to model-based clustering discriminant analysis (MclustDA). The CMDA1 model is either superior to or competitive with the MclustDA model, depending on which model generates the data. The CMDA1 model has better performance than the MclustDA model when the data are high-dimensional and unbalanced, an essential feature of the drug discovery problem.
An alternative approach to the problem of degeneracy is penalized estimation. By introducing a group of simple penalty functions, we consider penalized maximum likelihood estimation of the CMDA1 and CMDA2 models. This strategy improves the convergence of the conventional EM algorithm and helps avoid degenerate solutions. Extending techniques from Chen et al. (2007), we prove that the PMLEs of the two-dimensional CMDA1 model can be asymptotically consistent.
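As a schematic of how a variance penalty guards against degeneracy, the sketch below adds a term that explodes as any component variance shrinks toward zero; the penalty form and tuning constant are illustrative assumptions, not the specific penalty functions used in the thesis or in Chen et al. (2007).

```python
# Hedged sketch: a penalized log-likelihood for a univariate Gaussian mixture adds
# a term that tends to -infinity as any component standard deviation tends to 0,
# so degenerate EM solutions are discouraged.  The penalty form and constant are
# illustrative assumptions, not the thesis's penalty functions.
import numpy as np
from scipy.stats import norm

def penalized_loglik(x, weights, means, sds, a_n=1.0):
    dens = sum(w * norm.pdf(x, loc=m, scale=s) for w, m, s in zip(weights, means, sds))
    loglik = np.sum(np.log(dens))
    s2 = np.var(x)                              # sample variance as a reference scale
    penalty = a_n * sum(s2 / s**2 + np.log(s**2 / s2) for s in sds)
    return loglik - penalty                     # penalty dominates as any s -> 0

x = np.random.default_rng(3).normal(size=200)
print(penalized_loglik(x, weights=[0.5, 0.5], means=[-1.0, 1.0], sds=[1.0, 1.0]))
```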
440
Aiding Human Discovery of Out-of-the-Moment Handwriting Recognition Errors
Stedman, Ryan, January 2009
Handwriting recognizers frequently misinterpret digital ink input, requiring human verification of recognizer output to identify and correct errors before that output can be used with any confidence in its correctness. Technologies like Anoto pens can make this error discovery and correction task more difficult, because verification of recognizer output may occur many hours after data input, creating an "out-of-the-moment" verification scenario. This difficulty can increase the number of recognition errors missed by users during verification. To increase the accuracy of human-verified recognizer output, methods of aiding users in the discovery of handwriting recognition errors need to be created. While this need has been recognized by the research community, no published work exists examining this problem.
This thesis explores the problem of creating error discovery aids for handwriting recognition. Design possibilities for such aids are explored, and concrete designs for error discovery aids are presented. Evaluations are performed on a set of these proposed discovery aids, showing that the visual proximity aid improves user performance in error discovery. Following the evaluation of the discovery aids proposed in this thesis, the one discovery aid that has been proposed in the literature, confidence highlighting, is explored in detail and its potential as a discovery aid is highlighted. A technique is then presented, complementary to error discovery aids, that allows a system to monitor and respond to user performance in error discovery. Finally, a set of implications for the design of verification interfaces for handwriting recognition is derived from the presented work.
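The confidence-highlighting idea can be sketched very simply, as below; the recognizer scores and threshold are hypothetical and this is not the interface evaluated in the thesis.

```python
# Hedged sketch of confidence highlighting: recognized words whose recognizer
# confidence falls below a threshold are flagged for closer human inspection
# during out-of-the-moment verification.  Scores and threshold are hypothetical.
recognized = [            # (recognized word, recognizer confidence)
    ("meeting", 0.97), ("tmorrow", 0.41), ("at", 0.99), ("3pm", 0.62),
]
THRESHOLD = 0.70

for word, confidence in recognized:
    marker = " <-- check" if confidence < THRESHOLD else ""
    print(f"{word:10s} confidence={confidence:.2f}{marker}")
```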