  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Essays on the dynamic relationship between different types of investment flow and prices

OH, Natalie Yoon-na, Banking & Finance, Australian School of Business, UNSW January 2005 (has links)
This thesis presents three related essays on the dynamic relationship between different types of investment flow and prices in the equity market. These studies attempt to provide greater insight into the evolution of prices by investigating not "what moves prices" but "who moves prices", utilising a unique database from the Korean Stock Exchange. The first essay investigates the trading behaviour and performance of online equity investors in comparison to other investors on the Korean stock market. Whilst the use of online resources for trading is becoming increasingly prevalent in financial markets, the literature on the role of online investors and their impact on prices is limited. The main finding arising from this essay supports the claim that online investors are noise traders at an aggregate level. Whereas foreigners show distinct trading patterns as a group, with consensus on the direction of market movements, online investors show no such patterns. The essay concludes that online investors do not trade on clear information signals and introduce noise into the market. Direct performance and market-timing measures further show that online investors are the worst performers and market timers, whereas foreign investors consistently show outstanding performance and market-timing ability. Domestic mutual funds in Korea have not been extensively researched. The second essay analyses mutual fund activity and the relation between stock market returns and mutual fund flows in Korea. Although regulatory authorities have been cautious about introducing competing funds, contractual-type mutual funds have not been cannibalised by the US-style corporate mutual funds that started trading in 1998. Negative feedback trading is observed between stock market returns and mutual fund flows, measured as net trading volumes using stock purchase and sales volumes.
It is predominantly returns that drive flows, although stock purchases contain information about returns, partially supporting the price pressure hypothesis. After controlling for declining markets, the results suggest Korean equity fund managers tend to swing indiscriminately between increasing purchases and increasing sales in times of rising market volatility, possibly viewing volatility as an opportunity to profit and defying the mean-variance framework that predicts investors should retract from the market as volatility increases. Mutual funds respond indifferently to wide dispersions in investor beliefs. The third essay focuses on the conflicting issue of home bias by looking at the impact on domestic prices of foreign trades relative to locals using high frequency data from the Korean Stock Exchange (KSE). This essay extends the work of Choe, Kho and Stulz (2004) (CKS) in three ways. First, it analyses the post-Asian financial crisis period, whereas CKS (2004) analyse the crisis (1996-98) period. Second, this essay adopts a modified version of the CKS method to better capture the aggregate behaviour of each investor-type by utilising the participation ratio in comparison to the CKS method. Third, this essay does not limit investigation to intra-day analysis but extends to daily analysis up to 50 days to observe the effect of intensive trading activity in a longer horizon than the CKS study. In contrast to the CKS findings, this paper finds that foreigners have a short-lived private information advantage over locals and trades by foreigners have a larger impact on prices using intra-day data. However, assuming investors buy-hold for up to 50 days, the local individuals provide a greater impact and more profitable returns than foreigners. Superior performance is documented for buys rather than sells.
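The abstract does not specify which market-timing measure the first essay uses, so the following is only an illustrative sketch of one standard choice, a Treynor-Mazuy-style regression, in which a positive coefficient on the squared market return suggests timing skill. All series and parameter values below are synthetic.

```python
import numpy as np

def treynor_mazuy(excess_port, excess_mkt):
    """Fit r_p = alpha + beta*r_m + gamma*r_m^2 by least squares.

    gamma > 0 is conventionally read as evidence of market-timing ability.
    """
    X = np.column_stack([np.ones_like(excess_mkt), excess_mkt, excess_mkt ** 2])
    coef, *_ = np.linalg.lstsq(X, excess_port, rcond=None)
    return coef  # (alpha, beta, gamma)

# Synthetic investor-group returns with built-in timing skill (gamma = 5).
rng = np.random.default_rng(0)
rm = rng.normal(0.0, 0.02, 500)                          # market excess returns
rp = 0.001 + 1.1 * rm + 5.0 * rm ** 2 + rng.normal(0, 0.001, 500)
alpha, beta, gamma = treynor_mazuy(rp, rm)
```

On such data the fitted `gamma` comes out positive, mimicking how a "good timer" group like the foreign investors described above would score.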

An Investigation of Price Discovery in Taiwan Stock Index Futures: Intraday and Weekly Patterns

王凱蒂, Wang, Kai-Ti Unknown Date (has links)
This study examines the price discovery relationship between the Taiwan weighted stock index and the domestic index futures contract. The sample period runs from 1 September 1998 to 31 December 1999, using 5-minute intraday futures and spot observations from each trading day. The methods employed include the ADF unit-root test, cointegration tests, an error-correction model (ECM), and impulse response analysis with variance decomposition. The same analysis is also applied separately to each of the six trading days (Monday through Saturday) to examine whether results differ across days. The study reaches the following conclusions: 1. Under the ADF unit-root test, both the futures and spot series are I(1). 2. The cointegration tests show that Taiwan index futures and the spot index are cointegrated, i.e. they share a long-run equilibrium relationship; this holds for the full sample and for each individual trading day. 3. Incorporating the cointegrating relationship into the ECM shows that, for the full sample, both futures and spot adjust toward the previous period's equilibrium error, but futures adjust faster and more significantly. For individual trading days the result differs: futures still move toward equilibrium, but the spot shows no such adjustment except on Fridays. 4. On the lead-lag relationship: for the full sample (four lags), futures lead the spot by about 15 minutes while the spot leads futures by about 20 minutes, so causality is not unidirectional. This feedback relationship also holds on each day from Monday through Saturday, with lead-lag times of roughly 15 to 20 minutes, except that on Mondays futures do not appear to lead the spot. 5. In the impulse response and variance decomposition analysis, most of the volatility in each series stems from its own variation, but futures explain a larger share of the spot's forecast-error variance than the spot explains of the futures'. The impulse response functions give a similar picture: futures have the larger impact on the spot, with effects lasting about 15 to 20 minutes.
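The ECM step described above can be sketched with the two-step Engle-Granger procedure: first regress spot on futures to obtain the equilibrium error, then regress spot changes on the lagged error. This is an illustrative reconstruction on synthetic cointegrated series, not the thesis's actual estimation, and it omits the lagged-difference terms a full ECM would include.

```python
import numpy as np

def engle_granger_adjustment(spot, fut):
    """Two-step Engle-Granger sketch.

    Step 1: cointegrating regression spot_t = a + b*fut_t, residual = equilibrium error.
    Step 2: d_spot_t = c + g*ect_{t-1}; g < 0 means spot corrects toward equilibrium.
    """
    X = np.column_stack([np.ones_like(fut), fut])
    ab, *_ = np.linalg.lstsq(X, spot, rcond=None)
    ect = spot - X @ ab                               # equilibrium error series
    d_spot = np.diff(spot)
    Z = np.column_stack([np.ones(len(d_spot)), ect[:-1]])
    cg, *_ = np.linalg.lstsq(Z, d_spot, rcond=None)
    return cg[1]                                      # adjustment speed g

# Synthetic pair: futures follow a random walk, spot is cointegrated with it.
rng = np.random.default_rng(1)
fut = np.cumsum(rng.normal(0, 1, 2000))
spot = 0.5 + fut + rng.normal(0, 0.5, 2000)
speed = engle_granger_adjustment(spot, fut)
```

Because the synthetic spot series is tied to the futures series, the estimated adjustment coefficient is negative, matching the "moves toward equilibrium" reading in the abstract.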

Modelling User Tasks and Intentions for Service Discovery in Ubiquitous Computing

Ingmarsson, Magnus January 2007 (has links)
<p>Ubiquitous computing (Ubicomp) is proliferating. Computational devices, ever growing in number, are now at users' disposal throughout the physical environment, while simultaneously being effectively invisible. Consequently, service discovery is a significant challenge. Services may, for instance, be physical, such as printing a document, or virtual, such as communicating information. Existing solutions, such as Bluetooth and UPnP, address part of the issue, specifically low-level physical interconnectivity. Still absent are solutions for high-level challenges, such as connecting users with appropriate services. To provide appropriate service offerings, service discovery in Ubicomp must take the users' context, tasks, goals, intentions, and available resources into consideration. The high-level service-discovery issue can be divided into two parts: inadequate service models, and insufficient common-sense models of human activities.</p><p>This thesis contributes to service discovery in Ubicomp by arguing that a new layer is required to meet these high-level challenges. Furthermore, the thesis presents a prototype implementation of this new service-discovery architecture and model. The architecture consists of a hardware layer, an ontology layer, and a common-sense layer; this work addresses the latter two. Accordingly, the implementation is divided into two parts: Oden and Magubi. Oden addresses the issue of inadequate service models through a combination of service ontologies and logical reasoning engines, while Magubi addresses the issue of insufficient common-sense models of human activities by combining common-sense models with rule engines.
The synthesis of these two stages enables the system to reason about services, devices, and user expectations, as well as to make suitable connections to satisfy the users' overall goal.</p><p>Designing common-sense models and service ontologies for a Ubicomp environment is a non-trivial task. Despite this, we believe that, done correctly, at least part of the knowledge might be reusable across situations. With the ability to reason about services and human activities, it is possible to decide if, how, and where to present services to users. The solution is intended to off-load users in diverse Ubicomp environments and to provide more relevant service discovery.</p> / Report code: LiU-Tek-Lic-2007:14.
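The internals of Oden and Magubi are not given in the abstract; purely as a toy sketch of the idea of matching a user's task to services whose resource requirements the environment can satisfy, one might write the following. The service names, task labels, and resource fields are all invented for illustration; the thesis uses ontologies and rule engines, not a flat dictionary.

```python
# Hypothetical service descriptions: what each service needs and which task it supports.
SERVICES = {
    "print_document": {"requires": {"printer"},   "supports_task": "produce_hardcopy"},
    "display_slides": {"requires": {"projector"}, "supports_task": "present"},
    "send_message":   {"requires": {"network"},   "supports_task": "communicate"},
}

def discover(task, available_resources):
    """Return services that support the inferred task and whose
    resource requirements are met by the current environment."""
    return sorted(
        name for name, svc in SERVICES.items()
        if svc["supports_task"] == task and svc["requires"] <= available_resources
    )

# A user about to give a talk, in a room with a projector and a network.
hits = discover("present", {"projector", "network"})
```

The point of the sketch is the filtering step: a high-level layer rules services in or out by reasoning about tasks and context, rather than merely enumerating reachable devices as Bluetooth or UPnP would.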

"Lite udda och inte riktigt som andra" ("A little odd and not quite like the others"): a thematic study of how the motifs of alienation and the search for identity are depicted in Inger Edelfeldt's novels

Sellin, Anna January 2007 (has links)
<p>The main purpose of this study is to analyse how the themes of alienation and the search for identity are portrayed by the Swedish author Inger Edelfeldt. I have applied Rita Felski's theories concerning feminist novels of self-discovery, in which the development of female identity is the central question. As Edelfeldt's writing comprises literature for young readers as well as adults, I have included material from both genres. I have also made use of Ulla Lundqvist's theories about Swedish juvenile books when examining the main characters' feelings of alienation and their search for identity.</p><p>My analysis shows that reading the material as feminist novels of self-discovery reveals the pervading themes of alienation, love, friendship, and psychological development. The genre-crossing tendency of Edelfeldt's writing shows in that identity crisis and the search for identity are important issues in all of her novels, regardless of the protagonist's age. Finally, I show that by rejecting the heterosexual love-story narrative, Edelfeldt's novels put the woman's own psychological development in focus.</p>

From shape-based object recognition and discovery to 3D scene interpretation

Payet, Nadia 12 May 2011 (has links)
This dissertation addresses a number of inter-related and fundamental problems in computer vision. Specifically, we address object discovery, recognition, segmentation, and 3D pose estimation in images, as well as 3D scene reconstruction and scene interpretation. The key ideas behind our approaches include using shape as a basic object feature and using structured-prediction modeling paradigms to represent objects and scenes. In this work, we make a number of new contributions in both computer vision and machine learning. We address the vision problems of shape matching, shape-based mining of objects in arbitrary image collections, context-aware object recognition, monocular estimation of 3D object poses, and monocular 3D scene reconstruction using shape from texture. Our work on shape-based object discovery is the first to show that meaningful objects can be extracted from a collection of arbitrary images, without any human supervision, by shape matching. We also show that spatial repetition of objects in images (e.g., windows on a building facade, or cars lined up along a street) can be used for 3D scene reconstruction from a single image. These topics have not previously been addressed in the literature. The dissertation also presents new algorithms and object representations for the aforementioned vision problems. We fuse two traditionally distinct modeling paradigms, Conditional Random Fields (CRF) and Random Forests (RF), into a unified framework referred to as (RF)^2. We also derive theoretical error bounds on estimating distribution ratios with a two-class RF, which are then used to derive theoretical performance bounds for a two-class (RF)^2. A thorough experimental evaluation of all aspects of our approaches is presented. In general, the experiments demonstrate that we outperform the state of the art on the benchmark datasets without increasing complexity or supervision in training.
/ Graduation date: 2011 / Access restricted to the OSU Community at author's request from May 12, 2011 - May 12, 2012
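The (RF)^2 derivation itself is not reproduced in the abstract, but the underlying trick of estimating a distribution ratio with a two-class random forest can be sketched: label samples from p as class 1 and samples from q as class 0, train a classifier, and read the ratio off its class probabilities, since with balanced classes p(x)/q(x) = P(y=1|x)/P(y=0|x). The sketch below uses scikit-learn's `RandomForestClassifier` on synthetic Gaussians; it illustrates the general technique, not the dissertation's specific bounds.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def density_ratio(samples_p, samples_q, x_eval):
    """Estimate p(x)/q(x) with a two-class RF (equal sample sizes assumed)."""
    X = np.vstack([samples_p, samples_q])
    y = np.r_[np.ones(len(samples_p)), np.zeros(len(samples_q))]
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    prob = rf.predict_proba(x_eval)[:, 1].clip(1e-6, 1 - 1e-6)
    return prob / (1.0 - prob)          # P(y=1|x) / P(y=0|x)

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, (2000, 1))     # p = N(0, 1)
q = rng.normal(2.0, 1.0, (2000, 1))     # q = N(2, 1)
ratios = density_ratio(p, q, np.array([[0.0], [2.0]]))
```

At x = 0 the estimated ratio exceeds 1 (p is denser there) and at x = 2 it falls below 1, which is the qualitative behaviour the true ratio e^{2-2x} dictates.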

Internet Fish

LaMacchia, Brian A. 01 August 1996 (has links)
I have invented "Internet Fish," a novel class of resource-discovery tools designed to help users extract useful information from the Internet. Internet Fish (IFish) are semi-autonomous, persistent information brokers; users deploy individual IFish to gather and refine information related to a particular topic. An IFish will initiate research, continue to discover new sources of information, and keep tabs on new developments in that topic. As part of the information-gathering process, the user interacts with his IFish to find out what it has learned, answer questions it has posed, and make suggestions for guidance. Internet Fish differ from other Internet resource-discovery systems in that they are persistent, personal, and dynamic. As part of the information-gathering process, IFish conduct extended, long-term conversations with users as they explore. They incorporate deep structural knowledge of the organization and services of the net, and are also capable of on-the-fly reconfiguration, modification, and expansion. Human users may dynamically change an IFish in response to changes in the environment, or an IFish may initiate such changes itself. IFish maintain internal state, including models of their own structure, behavior, and information environment, and of their users; these models permit an IFish to perform meta-level reasoning about its own structure. To facilitate rapid assembly of particular IFish, I have created the Internet Fish Construction Kit. This system provides enabling technology for the entire class of Internet Fish tools; it facilitates both the creation of new IFish and the addition of new capabilities to existing ones. The Construction Kit includes a collection of encapsulated heuristic knowledge modules that may be combined in mix-and-match fashion to create a particular IFish; interfaces to new services written with the Construction Kit may be immediately added to "live" IFish.
Using the Construction Kit I have created a demonstration IFish specialized for finding World-Wide Web documents related to a given group of documents. This "Finder" IFish includes heuristics that describe how to interact with the Web in general, explain how to take advantage of various public indexes and classification schemes, and provide a method for discovering similarity relationships among documents.
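The mix-and-match composition of heuristic modules described above can be sketched in miniature: an agent holds a list of scoring heuristics, new heuristics can be attached to a "live" instance, and candidates are ranked by their combined scores. The class, heuristic functions, and example documents are all invented for illustration; the real Construction Kit's modules are far richer.

```python
class IFish:
    """Toy sketch of an IFish assembled from pluggable heuristic modules.

    Each heuristic maps a candidate document to a relevance score; the
    fish ranks candidates by the sum of all module scores.
    """
    def __init__(self, *heuristics):
        self.heuristics = list(heuristics)

    def add_module(self, heuristic):
        """Attach a new capability to a 'live' fish, mimicking on-the-fly expansion."""
        self.heuristics.append(heuristic)

    def evaluate(self, candidates):
        scores = {c: 0.0 for c in candidates}
        for h in self.heuristics:
            for c in candidates:
                scores[c] += h(c)
        return sorted(candidates, key=scores.get, reverse=True)

# Hypothetical heuristics for a "Finder"-style fish.
keyword = lambda doc: 1.0 if "fish" in doc else 0.0      # topical match
length  = lambda doc: 0.1 * min(len(doc), 10)            # mild length prior

finder = IFish(keyword)
finder.add_module(length)                                # live reconfiguration
ranked = finder.evaluate(["internet fish paper", "cat photos", "x"])
```

The design point being illustrated is that capabilities live in independent modules, so extending the fish never requires touching its core ranking loop.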

Automated Discovery of Pedigrees and Their Structures in Collections of STR DNA Specimens Using a Link Discovery Tool

Haun, Alex Brian 01 May 2010 (has links)
In instances of mass fatality, such as plane crashes, natural disasters, or terrorist attacks, investigators may encounter hundreds or thousands of DNA specimens representing victims. For example, during the January 2010 Haiti earthquake, entire communities were destroyed, resulting in the loss of thousands of lives. With such a large number of victims, the discovery of family pedigrees is possible, but it often requires the manual application of analytical methods that are tedious, time-consuming, and expensive. The method presented in this thesis allows for automated pedigree discovery by extending the Link Discovery Tool (LDT), a graph visualization tool designed for discovering linkages in large criminal networks. The proposed algorithm takes advantage of spatial clustering of graphs of DNA specimens to discover pedigree structures in large collections of specimens, saving both time and money in the identification process.
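The thesis's actual LDT-based algorithm is not detailed in the abstract; as a simplified sketch of the core grouping idea, one can treat specimens as graph nodes, connect pairs whose kinship score clears a threshold, and report connected components as putative pedigrees. The scores and threshold below are invented, and real STR analysis would use kinship likelihood ratios rather than arbitrary numbers.

```python
def pedigrees(n, kinship_edges, threshold):
    """Group n specimens into putative pedigrees via union-find.

    kinship_edges: iterable of (i, j, score); edges at or above the
    threshold merge specimens into one connected component.
    """
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving keeps trees shallow
            i = parent[i]
        return i

    for i, j, score in kinship_edges:
        if score >= threshold:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Hypothetical pairwise kinship scores among 6 specimens.
edges = [(0, 1, 9.5), (1, 2, 8.1), (3, 4, 7.7), (2, 3, 0.2), (4, 5, 0.1)]
fams = pedigrees(6, edges, threshold=1.0)
```

On these toy scores the sketch yields three components: one three-member family, one pair, and one unrelated specimen.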

WASP : Lightweight Programmable Ephemeral State on Routers to Support End-to-End Applications

Martin, Sylvain 07 November 2007 (has links)
We present WASP (World-friendly Active packets for ephemeral State Processing), a novel active-networks architecture that enables ephemeral storage of information on routers in order to ease synchronisation and co-operation among distributed applications. We aimed at a design compatible with modern router hardware and with network operators' goals: our solution has to scale with the number of interfaces on the device and to support throughput of several Gbps. Throughout this thesis we searched for the best trade-off between features (platform flexibility) and guarantees (platform safety), with as little performance sacrifice as possible. We picked the Ephemeral State Processing (ESP) router, developed by K. Calvert's team at the University of Kentucky, as a starting point and extended it with our own virtual processor (VPU) to offer higher flexibility to the network programmer. The VPU is a minimalist bytecode interpreter that manipulates the content of the router's "ephemeral state store" according to a microprogram carried in packets. It ultimately allows the microprogram to drop or forward the packet on any router, acting as a remotely programmable filter around an unmodified IP routing core. We developed two implementations of WASP: a "reference" module for the Linux kernel and, based on that prototype experience, a WASP filter application for the IXP2400 network processor that proves the feasibility of our platform at higher speed. We extensively tested both implementations against their ESP counterpart in order to estimate the overhead of our approach. High-speed tests on the IXP were also performed to ensure WASP's robustness, and proved rich in lessons for future development on programmable network devices. The nature of WASP makes it a platform of choice for detecting properties of the network along a given path.
Thanks to per-flow variables (even if ephemeral) and its ability to sustain custom processing at wire-speed, we can for instance implement lightweight measurement of QoS parameters or enforce application-specific congestion control. We have however opted -- in the context of this thesis -- for a focus on another use of the platform: using the ephemeral state to advertise and detect members of distributed applications (e.g. grid computing or peer-to-peer systems) in a purely decentralised way. To evaluate the benefits of this approach, we propose a model of a peer-to-peer community where peers try and join former neighbours, and we show through simulations how efficiency and quality of user experience evolve with the presence of more WASP routers in the network.
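WASP's real instruction set and store semantics are not given in the abstract, so the following is only a toy sketch of the architecture's shape: a key-value store whose entries expire, and a packet-carried microprogram that the router interprets to reach a DROP or FORWARD verdict. The opcodes and the flow key are invented for illustration.

```python
import time

class EphemeralStore:
    """Key-value store whose entries expire after a short lifetime (seconds)."""
    def __init__(self, lifetime=10.0):
        self.lifetime, self.data = lifetime, {}

    def get(self, key, default=0):
        val, born = self.data.get(key, (default, time.monotonic()))
        return default if time.monotonic() - born > self.lifetime else val

    def put(self, key, val):
        self.data[key] = (val, time.monotonic())

def run_vpu(program, store):
    """Interpret a tiny microprogram against the router's ephemeral store.

    Opcodes (invented, not WASP's real ISA): LOAD key, ADD n, STORE key,
    JLT limit -> FORWARD while the accumulator is below the limit, else DROP.
    """
    acc = 0
    for op, *args in program:
        if op == "LOAD":
            acc = store.get(args[0])
        elif op == "ADD":
            acc += args[0]
        elif op == "STORE":
            store.put(args[0], acc)
        elif op == "JLT":
            return "FORWARD" if acc < args[0] else "DROP"
    return "FORWARD"

store = EphemeralStore()
# Microprogram: count this flow's packets on the router; drop after 3 have passed.
prog = [("LOAD", "flow42"), ("ADD", 1), ("STORE", "flow42"), ("JLT", 4)]
verdicts = [run_vpu(prog, store) for _ in range(5)]
```

The per-flow counter lives only in the router's ephemeral store, so the filtering behaviour vanishes by itself once the flow goes quiet, which is the property that makes such state operator-friendly.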

Structure-Based Virtual Screening : New Methods and Applications in Infectious Diseases

Jacobsson, Micael January 2008 (has links)
A drug discovery project typically starts with a pharmacological hypothesis: that the modulation of a specific molecular biological mechanism would be beneficial in the treatment of the targeted disease. In a small-molecule project, the next step is to identify hits, i.e. molecules that can effect this modulation. These hits are subsequently expanded into hit series, which are optimised with respect to pharmacodynamic and pharmacokinetic properties, through medicinal chemistry. Finally, a drug candidate is clinically developed into a new drug. This thesis concerns the use of structure-based virtual screening in the hit identification phase of drug discovery. Structure-based virtual screening involves using the known 3D structure of a target protein to predict binders, through the process of docking and scoring. Docking is the prediction of potential binding poses, and scoring is the prediction of the free energy of binding from those poses. Two new methodologies, based on post-processing of scoring results, were developed and evaluated using model systems. Both methods significantly increased the enrichment of true positives. Furthermore, correlation was observed between scores and simple molecular properties, and identified as a source of false positives in structure-based virtual screening. Two target proteins, Mycobacterium tuberculosis ribose-5-phosphate isomerase, a potential drug target in tuberculosis, and Plasmodium falciparum spermidine synthase, a potential drug target in malaria, were subjected to docking and virtual screening. Docking of substrates and products of ribose-5-phosphate isomerase led to hypotheses on the role of individual residues in the active site. Additionally, virtual screening was used to predict 48 potential inhibitors, but none was confirmed as an inhibitor or binder to the target enzyme. For spermidine synthase, structure-based virtual screening was used to predict 32 potential active-site binders. 
Seven of these were confirmed to bind in the active site.
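The abstract's two quantitative notions, enrichment of true positives and score/property correlation as a false-positive source, can be made concrete with a sketch. The enrichment-factor formula below is the standard one; the size-normalisation helper is only an illustrative stand-in for the thesis's actual post-processing methods, which are not described in the abstract.

```python
def enrichment_factor(scores, actives, fraction=0.01):
    """EF = actives found in the top fraction / actives expected at random.

    scores: dict molecule -> docking score (lower = predicted tighter binder).
    """
    n_top = max(1, int(len(scores) * fraction))
    ranked = sorted(scores, key=scores.get)           # best (lowest) scores first
    hits = sum(1 for mol in ranked[:n_top] if mol in actives)
    expected = len(actives) * n_top / len(scores)
    return hits / expected

def size_normalised_score(score, n_heavy_atoms):
    """Divide the score by heavy-atom count to damp the score/size
    correlation flagged above as a source of false positives (illustrative)."""
    return score / n_heavy_atoms

# Tiny worked example with invented scores: one known active ("a") out of four.
scores = {"a": -10.0, "b": -9.0, "c": -1.0, "d": 0.0}
ef = enrichment_factor(scores, actives={"a"}, fraction=0.25)
```

Here the single active ranks first, so screening the top 25% recovers it and the enrichment factor is 4, i.e. four times better than random selection.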

Knowledge Discovery In Microarray Data Of Bioinformatics

Kocabas, Fahri 01 June 2012 (has links) (PDF)
This thesis analyzes major microarray repositories and presents a metadata framework both to address current issues and to support the main operations of knowledge discovery, sharing, integration, and exchange. The proposed framework is demonstrated in a case study on real data and can be used for other high-throughput repositories in the biomedical domain. Not only is the number of microarray experiments increasing, but the size and complexity of the results also rise in response to biomedical inquiries. Experiment results are most significant when examined in batches and placed in a biological context. There have been standardization initiatives on content, object model, exchange format, and ontology; however, each has its own proprietary information space. There are backlogs, and data cannot be exchanged among the repositories. A format and data-management standard is needed. We introduce a metadata framework, comprising metadata cards and semantic nets, to make experiment results visible, understandable, and usable. They are encoded in standard syntax-encoding schemes and represented in XML/RDF. They can be integrated with other metadata cards and semantic nets, and can be queried, exchanged, and shared. We demonstrate the performance and potential benefits with a case study on a microarray repository. This study does not replace any existing repository product; rather, a metadata framework of this kind is required to manage such huge data. We show that backlogs can be reduced, and that complex knowledge-discovery queries and exchange of information become possible, with this metadata framework.
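The thesis's metadata-card schema is not reproduced in the abstract; as a minimal sketch of what "represented in XML/RDF" can look like, the snippet below serialises one experiment description with Python's standard library, using Dublin Core element names. The accession URI and field values are hypothetical.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)

def metadata_card(uri, fields):
    """Build a minimal RDF/XML description of one experiment record."""
    rdf = ET.Element(f"{{{RDF}}}RDF")
    desc = ET.SubElement(rdf, f"{{{RDF}}}Description", {f"{{{RDF}}}about": uri})
    for name, value in fields.items():
        ET.SubElement(desc, f"{{{DC}}}{name}").text = value
    return ET.tostring(rdf, encoding="unicode")

card = metadata_card(
    "http://example.org/experiment/42",            # hypothetical accession URI
    {"title": "Liver microarray series", "creator": "Example Lab"},
)
```

Because the output is plain RDF/XML, such cards can be merged, queried with standard RDF tooling, and exchanged between repositories, which is the interoperability point the abstract makes.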
