321

Structural Credit Risk Models: Estimation and Applications

Lovreta, Lidija 26 May 2010 (has links)
Credit risk is associated with the potential failure of borrowers to fulfil their obligations. The main interest of financial institutions is therefore to measure and manage credit risk accurately on a quantitative basis. This doctoral thesis, entitled "Structural Credit Risk Models: Estimation and Applications", focuses on the practical usefulness of structural credit risk models, which establish an explicit link between credit risk and economic fundamentals and consequently allow a broad range of applications. Specifically, the thesis explores the information on credit risk embodied in the stock market and in the market for credit derivatives (the CDS market) on the basis of these structural models.

The first chapter studies the relative informational content of the stock and CDS markets with respect to credit risk. The analysis focuses on two key questions: which of these markets provides more timely information about credit risk, and which factors determine the informational content of the respective credit risk indicators, i.e. stock-market-implied credit spreads versus CDS spreads. The data set covers 94 companies (40 European, 32 US and 22 Japanese) over the period 2002-2004. The main conclusions are the time-varying nature of credit risk price discovery, a stronger cross-market relationship and greater stock market leadership at higher levels of credit risk, and a higher probability of CDS market leadership during periods of severe credit deterioration.

The second chapter concentrates on the estimation of the latent variables of structural models. It proposes a new iterative algorithm which, based on the log-likelihood function for the time series of equity prices, provides pseudo maximum likelihood estimates of the default barrier and of the value, volatility, and expected return of the firm's assets. The procedure allows credit risk to be estimated using only readily available stock market information and is tested empirically in terms of CDS spread estimation. In contrast to the standard ML approach, the proposed method ensures that the estimated default barrier always falls within reasonable bounds. Moreover, theoretical credit spreads based on the pseudo ML estimates yield the lowest CDS pricing errors when compared with the other choices usually considered for the default barrier: the standard ML estimate, the endogenous barrier, KMV's default point, and the principal value of debt.

The third and final chapter provides further evidence on the performance of the pseudo maximum likelihood procedure and addresses the presence of a non-default component in CDS spreads. Specifically, it analyses the effect of demand-supply imbalance, an important aspect of liquidity in a market where the number of protection buyers frequently outstrips the number of sellers. The data set is extended to 163 non-financial companies (92 European and 71 North American) over the period 2002-2008. After controlling for fundamentals, reflected through theoretical stock-market-implied credit spreads, demand-supply imbalance factors turn out to be important in explaining short-run CDS movements, especially during periods of credit stress. These results illustrate that CDS spreads reflect not only the price of credit protection but also a premium for the anticipated cost to protection sellers of unwinding their positions.
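The estimation problem tackled in the second chapter can be illustrated with a small sketch. The code below implements the classical iterative (KMV/Duan-style) inversion of the Merton model, in which unobserved asset values are backed out from observed equity prices and the asset volatility and expected return are re-estimated until convergence. This is a simplified stand-in rather than the thesis's pseudo maximum likelihood algorithm: it fixes the default barrier at the face value of debt instead of estimating it, and the function names, daily sampling assumption and root-bracketing interval are illustrative assumptions.

```python
# Simplified Merton-model asset estimation by iterated inversion of the equity
# pricing equation (a stand-in for the pseudo-ML procedure described above).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def merton_equity(V, K, r, sigma_V, T):
    """Equity value as a European call on firm assets V with strike K (debt)."""
    d1 = (np.log(V / K) + (r + 0.5 * sigma_V ** 2) * T) / (sigma_V * np.sqrt(T))
    d2 = d1 - sigma_V * np.sqrt(T)
    return V * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def estimate_assets(equity, K, r, T=1.0, tol=1e-6, max_iter=100):
    """Back out daily asset values, asset volatility and expected asset return
    from a series of equity prices, iterating until the volatility converges."""
    dt = 1.0 / 252.0                                          # daily data assumed
    equity = np.asarray(equity, dtype=float)
    sigma_V = np.std(np.diff(np.log(equity))) / np.sqrt(dt)   # initial guess
    V, mu = equity + K, 0.0
    for _ in range(max_iter):
        # invert E = merton_equity(V) for V, date by date
        V_new = np.array([brentq(lambda v: merton_equity(v, K, r, sigma_V, T) - E,
                                 E, E + 10.0 * K) for E in equity])
        log_ret = np.diff(np.log(V_new))
        sigma_new = np.std(log_ret) / np.sqrt(dt)
        mu = np.mean(log_ret) / dt + 0.5 * sigma_new ** 2     # drift of assets
        if abs(sigma_new - sigma_V) < tol:
            return V_new, sigma_new, mu
        sigma_V, V = sigma_new, V_new
    return V, sigma_V, mu

# Example on synthetic equity prices
prices = 100.0 * np.exp(np.cumsum(np.random.default_rng(1).normal(0.0, 0.01, 250)))
V_hat, sigma_hat, mu_hat = estimate_assets(prices, K=80.0, r=0.03)
print(round(sigma_hat, 4), round(mu_hat, 4))
```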
322

Semantic Service Discovery With Heuristic Relevance Calculation

Ozyonum, Muge 01 February 2010 (has links) (PDF)
This thesis presents a semantically aided search mechanism for web services and RESTful services that makes use of an ontology. The mechanism relates method names and input and output parameters for ontology-guided matching, and returns results whose relevance reflects the degree of matching. The mechanism is demonstrated in an experimental domain, tourism and travel, for which an ontology was created to support a set of existing web services.
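A minimal sketch of this kind of heuristic relevance calculation is given below: a query is compared with a service description on its name and its input/output parameter concepts, and ontology-based matches score higher than misses. The tiny tourism ontology, the weights and the field names are illustrative assumptions, not the mechanism actually implemented in the thesis.

```python
# Heuristic relevance score combining name match and ontology-guided parameter match.
ONTOLOGY = {"Hotel": "Accommodation", "Hostel": "Accommodation",
            "Flight": "Transport", "Train": "Transport"}   # child -> parent concept

def concept_match(a, b):
    """1.0 for an exact concept match, 0.5 if one concept subsumes the other, else 0."""
    if a == b:
        return 1.0
    if ONTOLOGY.get(a) == b or ONTOLOGY.get(b) == a:
        return 0.5
    return 0.0

def param_score(query_params, service_params):
    """Average best-match score of each queried parameter against the service's."""
    if not query_params:
        return 1.0
    best = [max((concept_match(q, s) for s in service_params), default=0.0)
            for q in query_params]
    return sum(best) / len(best)

def relevance(query, service, w_name=0.4, w_in=0.3, w_out=0.3):
    """Weighted combination of name, input and output matching degrees."""
    name = 1.0 if query["name"].lower() in service["name"].lower() else 0.0
    return (w_name * name
            + w_in * param_score(query["inputs"], service["inputs"])
            + w_out * param_score(query["outputs"], service["outputs"]))

# Example: ranking two tourism-domain services against one query
query = {"name": "book", "inputs": ["Hotel"], "outputs": ["Reservation"]}
services = [
    {"name": "bookAccommodation", "inputs": ["Accommodation"], "outputs": ["Reservation"]},
    {"name": "searchFlight", "inputs": ["Flight"], "outputs": ["Itinerary"]},
]
for s in sorted(services, key=lambda s: relevance(query, s), reverse=True):
    print(s["name"], round(relevance(query, s), 2))
```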
323

Management company's role & effectiveness in community building

Ng, Lin-chu, Julie., 吳蓮珠. January 1998 (has links)
Published or final version / Housing Management / Master of Housing Management
324

La rhétorique des origines dans l'Histoire de la Nouvelle-France de Marc Lescarbot /

Lachance, Isabelle January 2004 (has links)
The Histoire de la Nouvelle-France (1609, 1611, 1612, 1617, 1618) by Marc Lescarbot (c. 1570-1641) is read as a symbolic foundation for the young colony of Port-Royal, Acadia (Annapolis, Nova Scotia), a construct that functions as a valid genesis for French America (thus, "New France" in the title refers specifically to this habitation as well as to the men who contributed to its making). Chapter I offers a reading of the work's abundant paratext and identifies the topics at stake in the unfavourable rumours about the Acadian expeditions and about the lieutenant of Port-Royal, Jean de Biencourt, sieur de Poutrincourt. It also explores the subjective marks disseminated in the paratext that build up the historian's ethos, which serves as proof of the validity of his object, and examines the metadiscursive comments on the writing of history and their incidence on the referentiality of the work. Chapter II compares the compilation of travel accounts contained in the Histoire with its sources. This comparison shows how the alteration of these accounts by travellers, who themselves recorded the results of their American expeditions, reinforces the stereotyped dichotomy between the man of letters and the man of action, two functions assigned respectively to Lescarbot and Poutrincourt in the Histoire. The ordering of this compilation, and the organisation of its parts according to a diegetic logic, shape specific places where a tension emerges between a reliable discourse, intended for a readership interested in the actual conditions of a colonial establishment, and the production of a textual "coating" aimed at attracting the courtly readership to which the Jesuits, who challenged Poutrincourt's colonial project, addressed their requests. In Chapter III, which confronts the written and cartographic representations of Port-Royal, this tension is even more manifest.
325

Discovery and evaluation of direct acting antivirals against hepatitis C virus

Abdurakhmanov, Eldar January 2015 (has links)
Until recently, the standard therapy for hepatitis C has been interferon and ribavirin. Such treatment has only 50% efficacy and is not well tolerated. The emergence of new drugs has increased treatment efficacy to 90%. Despite this achievement, success is limited because the virus mutates rapidly, giving rise to drug-resistant forms, and most new drugs were developed to treat genotype 1 infections. Thus, development of new potent antivirals is needed and drug discovery against hepatitis C continues. In this thesis, a FRET-based protease assay was used to evaluate new pyrazinone-based NS3 protease inhibitors that are structurally different from the newly approved drugs and those currently in development. Several compounds in this series showed good potencies in the nanomolar range against NS3 proteases from genotypes 1 and 3 and the drug-resistant variant R155K. We expect that these compounds can be further developed into drug candidates with activity against the above-mentioned enzyme variants. Using SPR technology, we analyzed the interaction mechanisms and characteristics of allosteric inhibitors targeting NS5B polymerases from genotypes 1 and 3. The compounds exhibited different binding mechanisms and displayed low affinity for NS5B from genotype 3. In order to evaluate the activity and inhibitors of the NS5B polymerase, we established an SPR-based assay, which enables the monitoring of polymerization and its inhibition in real time. This assay can readily be implemented for the discovery of inhibitors targeting HCV. An SPR-based fragment screening approach has also been established: a screen of a fragment library was performed to identify novel scaffolds that can serve as starting points for the development of new allosteric inhibitors against the NS5B polymerase. Selected fragments will be further elaborated to generate new potent allosteric drug candidates. Overall, alternative approaches have been successfully developed and applied to the discovery of potential lead compounds targeting two important HCV drug targets.
326

Effective Characterization of Sequence Data through Frequent Episodes

Ibrahim, A January 2015 (has links) (PDF)
Pattern discovery is an important area of data mining, referring to a class of techniques designed for extracting interesting patterns from data. A pattern is a local structure that captures correlations and dependencies present in the elements of the data. In general, pattern discovery is about finding all patterns of 'interest' in the data, and a popular measure of interestingness for a pattern is its frequency of occurrence. The problem of frequent pattern discovery is thus to find all patterns in the data whose frequency of occurrence exceeds some user-defined threshold. Frequency, however, is not the only measure of interest, and other measures and techniques for finding interesting patterns exist. This thesis is concerned with the efficient discovery of inherent patterns from long sequence (temporally ordered) data. Mining of such sequentially ordered data is called temporal data mining, and the temporal patterns discovered from large sequential data are called episodes. More specifically, this thesis explores efficient methods for finding small and relevant subsets of episodes from sequence data that best characterize the data. The thesis also discusses methods for comparing datasets based on comparing the sets of patterns that represent them. The data in the frequent episode discovery framework is abstractly viewed as a single long sequence of events. Here an event is a tuple (Ei, ti), where Ei is referred to as an event-type (taking values from a finite alphabet) and ti is the time of occurrence. The events are ordered in non-decreasing order of their times of occurrence. The pattern of interest in such a sequence is called an episode, which is a collection of event-types with a partial order defined over it. This thesis focuses on a special type of episode called the serial episode, in which a total order is defined over the collection of event-types representing the episode. An occurrence of an episode is essentially a subset of events from the data whose event-types match the set of event-types associated with the episode and whose order of occurrence conforms to the underlying partial order of the episode. The frequency of an episode is some measure of how often it occurs in the event stream; many different notions of frequency have been defined in the literature. Given a frequency definition, the goal of frequent episode discovery is to unearth all episodes whose frequency exceeds a user-defined threshold. The size of an episode is the number of event-types in it. An episode β is called a subepisode of another episode α if the collection of event-types of β is a subset of the corresponding collection of α and the event-types of β satisfy the same partial order relationships present among the corresponding event-types of α. The set of all episodes can be arranged in a partial order lattice, where each level i contains episodes of size i and the partial order is the subepisode relationship. In general, there are two approaches for mining frequent episodes, based on the way this lattice is traversed. The first is to traverse the lattice in a breadth-first manner and is called the Apriori approach; the other is the Pattern-growth approach, in which the lattice is traversed in a depth-first manner.
Different frequency notions exist for episodes, and many Apriori-based algorithms have been proposed for mining frequent episodes under them. However, Pattern-growth based methods do not exist for many of these frequency notions. The first part of the thesis proposes new Pattern-growth methods for discovering frequent serial episodes under two frequency notions, the non-overlapped frequency and the total frequency. Special cases, in which additional conditions called span and gap constraints are imposed on the occurrences of the episodes, are also considered. The proposed methods, in general, consist of two steps: a candidate generation step and a counting step. The candidate generation step involves finding potential frequent episodes; this is done by following the general Pattern-growth approach of traversing the lattice of all episodes depth-first. The second step, the counting step, involves counting the frequencies of the episodes. The thesis presents efficient methods for counting the occurrences of serial episodes using occurrence windows of subepisodes, for both the non-overlapped and the total frequency. The relative advantages of Pattern-growth approaches over Apriori approaches are also discussed. Detailed simulation results show the effectiveness of this approach on a host of synthetic and real data sets: the proposed methods are highly scalable and efficient in runtime compared with existing Apriori approaches. One of the main issues in frequent pattern mining is the huge number of frequent patterns returned by discovery methods, irrespective of the approach taken. The second part of this thesis addresses this issue and discusses methods for selecting a small subset of relevant episodes from event sequences. A few approaches for finding a small subset of patterns have been discussed in the literature. One set of methods is information-theoretic, searching for the patterns that provide maximum information. Another approach is summarization schemes based on the Minimum Description Length (MDL) principle: the data is encoded using a subset of patterns (which forms the model for the data) and their occurrences, and the subset of patterns with the greatest encoding efficiency is the best representative model for the data. The MDL principle takes into account both the encoding efficiency of the model and the model complexity. A method called Constrained Serial episode Coding (CSC) is proposed based on the MDL principle, which returns a highly relevant, non-redundant and small subset of serial episodes, together with an encoding scheme in which both the model representation and the encoding of the data are efficient. An interesting feature of this algorithm for isolating a small set of relevant episodes is that it does not need a user-specified frequency threshold. The effectiveness of the method is shown on two types of data. The first is data obtained from a detailed simulator for a reconfigurable coupled conveyor system, which consists of different intersecting paths through which packages flow. Mining such data can unearth the main paths of package flow, which is useful for remote monitoring and visualization of the system.
On this data, it is shown that the proposed method is able to return highly consistent sub-paths, in the form of serial episodes, with great encoding efficiency as compared to other known related sequence summarization schemes, like SQS and GoKrimp. The second type of data consists of a collection of multi-class sequence datasets. It is shown that the episodes selected by the proposed method form good features in classification. The proposed method is compared with SQS and GoKrimp, and it is shown that the episodes selected by this method help in achieving better classification results as compared to other methods. The third and final part of the thesis discusses methods for comparing sets of patterns representing different datasets. There are many instances when one is interested in comparing datasets. For example, in streaming data, one is interested in knowing whether the characteristics of the data are the same or have changed significantly. In other cases, one may simply like to compare two datasets and quantify the degree of similarity between them. Often, data are characterized by a set of patterns as described above. Comparing sets of patterns representing datasets gives information about the similarity/dissimilarity between the datasets. However, not many measures exist for comparing sets of patterns. This thesis proposes a similarity measure for comparing sets of patterns which in turn aids in the comparison of different datasets. First, a kernel for comparing two patterns, called the Pattern Kernel, is proposed. This kernel is proposed for three types of patterns: serial episodes, sequential patterns and itemsets. Using this kernel, a Pattern Set Kernel is proposed for comparing different sets of patterns. The effectiveness of this kernel is shown in classification and change detection. The thesis concludes with a summary of the main contributions and some suggestions for extending the work presented here.
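As an illustration of the non-overlapped frequency notion used above, the sketch below counts non-overlapped occurrences of a serial episode with a single greedy pass over the event sequence. It is a minimal stand-in for the thesis's counting algorithms, which track occurrence windows of subepisodes inside a Pattern-growth search, and it ignores span and gap constraints; the data layout is an assumption.

```python
# Greedy one-pass count of non-overlapped occurrences of a serial episode.
def non_overlapped_frequency(events, episode):
    """events: list of (event_type, time) pairs in non-decreasing time order.
    episode: tuple of event-types, e.g. ('A', 'B', 'C') for A -> B -> C.
    Returns the number of non-overlapped occurrences found by a greedy scan."""
    count, pos = 0, 0            # pos = index of the next episode event to match
    for etype, _t in events:
        if etype == episode[pos]:
            pos += 1
            if pos == len(episode):   # one complete occurrence
                count += 1
                pos = 0               # start tracking the next occurrence
    return count

# Example: the serial episode A -> B occurs twice without overlap in this stream
stream = [('A', 1), ('C', 2), ('B', 3), ('A', 4), ('A', 5), ('B', 6)]
print(non_overlapped_frequency(stream, ('A', 'B')))   # -> 2
```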
327

Comunicação direta entre dispositivos usando o modelo centrado em conteúdo

Floôr, Igor Maldonado 13 November 2015 (has links)
The popularization of mobile devices capable of communicating via wireless network technologies allows us to consider scenarios in which these devices interact with each other autonomously. The envisioned communications would occur in a P2P fashion, as each device could simultaneously provide and consume services. A mechanism for dynamically discovering nearby devices and the services they offer would be necessary. Although a few existing applications already provide direct interaction among devices, they are purpose-specific and rely on pre-configured information to identify other devices. A service-oriented architecture (SOA), based on HTTP requests and the REST or SOAP protocols, is commonly used in this type of communication; however, automatically finding the available services is still challenging, since service discovery is usually based exclusively on the service name, which is not very flexible. This work proposes a new model for direct interaction between computing devices. To facilitate service discovery and selection, we propose a content-centric model in which interactions are defined according to an object's type and the action to be applied to it. The proposed approach can work atop existing discovery protocols, based on extensible metadata fields and on existing service data. Our proposal is evaluated with respect to i) the viability of direct communication between nearby devices, even when carried by users or associated with vehicles; ii) the proposed service discovery and matching using the content-centric approach; and iii) the effectiveness of a middleware to support the development of generic applications for direct device communication. Simulation results show that the proposed model is viable. A preliminary implementation of the middleware was also evaluated, and the results show that spontaneous, opportunistic, service-based interactions among devices can be achieved for different types of services.
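A minimal sketch of the content-centric interaction idea described above is given below: services discovered through an existing protocol are registered with metadata describing the content type they handle and the actions they support, and a requester looks them up by a (content, action) pair rather than by service name. The class and field names are hypothetical and not taken from the middleware developed in the thesis.

```python
# Content-centric lookup of services by (content type, action) instead of by name.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceDescription:
    name: str                       # conventional service name (e.g. from SSDP/mDNS)
    content_type: str               # what kind of object the service operates on
    actions: List[str] = field(default_factory=list)   # what it can do with it

class ContentCentricDirectory:
    """In-memory stand-in for the discovery layer: services found via an
    existing protocol are registered here together with their metadata."""
    def __init__(self):
        self._services: List[ServiceDescription] = []

    def register(self, desc: ServiceDescription) -> None:
        self._services.append(desc)

    def find(self, content_type: str, action: str) -> List[ServiceDescription]:
        return [s for s in self._services
                if s.content_type == content_type and action in s.actions]

# Example: two nearby devices offer display and printing of images
directory = ContentCentricDirectory()
directory.register(ServiceDescription("photo-frame", "image/jpeg", ["display"]))
directory.register(ServiceDescription("printer-42", "image/jpeg", ["print"]))
matches = directory.find("image/jpeg", "print")
print([s.name for s in matches])    # -> ['printer-42']
```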
328

Falso positivo na performance dos fundos de investimento com gestão ativa no Brasil: mensurando sorte dos gestores nos alfas estimados

Jesus, Marcelo de 01 February 2011 (has links)
This study investigates, for the period 2002-2009, the impact of luck on the performance of managers of actively managed Brazilian equity mutual funds that outperform their benchmark. To that end, a recent method, the False Discovery Rate (FDR) approach, is used to test this impact empirically. To measure luck and the lack of it precisely, i.e. the frequency of false positives (Type I errors) in the tails of the cross-sectional t-distribution associated with the funds' alphas, the approach is applied to measure, in aggregate, the skill of managers of actively managed equity funds in Brazil. The FDR approach offers a simple and objective method for estimating the proportions of skilled funds (with a positive alpha), zero-alpha funds, and unskilled funds (with a negative alpha) in the population. Applying the FDR technique, the study finds that the majority of funds are zero-alpha, followed by truly unskilled funds, with only a small proportion of truly skilled funds.
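The FDR logic described above can be sketched as follows: from the cross-section of fund alpha t-statistics, the proportion of zero-alpha funds is estimated with a Storey-type estimator, and the counts of significantly positive and negative alphas are then corrected for the "lucky" and "unlucky" funds expected among them. The tuning parameter lambda and the significance level below are illustrative assumptions, not the values used in the study.

```python
# Sketch of FDR-based separation of skilled, zero-alpha and unskilled funds from
# the cross-section of alpha t-statistics (Storey-type pi0 estimator).
import numpy as np
from scipy.stats import norm

def fund_proportions(t_stats, lam=0.5, gamma=0.10):
    """t_stats: alpha t-statistics, one per fund. Returns estimated proportions
    of zero-alpha, truly skilled and truly unskilled funds in the population."""
    t = np.asarray(t_stats, dtype=float)
    p = 2.0 * (1.0 - norm.cdf(np.abs(t)))              # two-sided p-values
    pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))     # share of zero-alpha funds
    sig_pos = np.mean((p < gamma) & (t > 0))           # significant positive alphas
    sig_neg = np.mean((p < gamma) & (t < 0))           # significant negative alphas
    lucky = pi0 * gamma / 2.0                          # expected false discoveries per tail
    return {"zero_alpha": pi0,
            "skilled": max(0.0, sig_pos - lucky),
            "unskilled": max(0.0, sig_neg - lucky)}

# Example on simulated t-statistics: mostly zero-alpha funds plus a few bad ones
rng = np.random.default_rng(0)
t_stats = np.concatenate([rng.standard_normal(180),        # zero-alpha funds
                          rng.normal(-2.5, 1.0, 20)])      # truly unskilled funds
print(fund_proportions(t_stats))
```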
329

Using P2P approach for resource discovery in Grid Computing

Shah, ShairBaz January 2007 (has links)
One of the fundamental requirements of Grid computing is an efficient and effective resource discovery mechanism. Resource discovery involves finding the appropriate resources required by user applications. Various resource discovery mechanisms have been proposed in recent years, ranging from centralized to hierarchical information-server approaches. Most techniques based on these approaches have scalability and fault-tolerance limitations. To overcome these limitations, peer-to-peer based discovery mechanisms are proposed.
330

Ranking And Classification of Chemical Structures for Drug Discovery : Development of Fragment Descriptors And Interpolation Scheme

Kandel, Durga Datta January 2013 (has links) (PDF)
Deciphering the activity of chemical molecules against a pathogenic organism is an essential task in the drug discovery process. Virtual screening, in which a few plausible molecules are selected from a large set for further processing using computational methods, has become an integral part of this process and complements expensive and time-consuming in vivo and in vitro experiments. To this end, it is essential to extract features from molecules which on the one hand are relevant to the biological activity under consideration and on the other are suitable for designing fast and robust algorithms. These features or representations, derived in numerical form either from physicochemical properties or from molecular structures, are known as descriptors. In this work we develop two new molecular-fragment descriptors based on a critical analysis of existing descriptors. This development is primarily guided by the notion of coding degeneracy and by the ordering induced by the descriptor on the fragments. The first descriptor is derived from the simple graph representation of the molecule and attempts to encode the topological features, or connectivity pattern, in a hierarchical way without discriminating atom or bond types. The second descriptor extends the first by weighting the atoms (vertices) according to the bonding pattern, valence state and type of the atom. The usefulness of these indices is tested by ranking and classifying molecules in two previously studied large heterogeneous data sets with regard to their anti-tubercular and other antibacterial activity. This is achieved by developing a scoring function based on clustering with the new descriptors. Clusters are obtained by ordering the descriptors of training-set molecules and identifying the regions which come (almost) exclusively from active or inactive molecules. To test the activity of a new molecule, the overlap of its descriptors with those clusters (interpolation) is weighted. Our results are superior to previous studies: we obtain better classification performance using only structural information, while previous studies used both structural features and some physicochemical parameters. This makes our model simpler, more interpretable and less vulnerable to statistical problems such as chance correlation and overfitting. With a focus on predictive modeling, we have carried out rigorous statistical validation. The new descriptors primarily use topological information in a hierarchical way. This can have significant implications for the design of new bioactive molecules (inverse QSAR, combinatorial library design), which is plagued by combinatorial explosion due to the use of a large number of descriptors. While the combinatorial generation of molecules with desirable properties is still a problem to be satisfactorily solved, our model has the potential to reduce the number of degrees of freedom and thereby the complexity.
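The clustering-based scoring idea sketched in this abstract can be illustrated roughly as follows: fragment descriptors that occur (almost) exclusively in active or in inactive training molecules are collected into two groups, and a test molecule is scored by the overlap of its own fragments with them. The descriptors are treated as opaque strings and the purity threshold is an illustrative assumption; the thesis builds its descriptors hierarchically from the molecular graph and uses a more refined interpolation scheme.

```python
# Rough sketch of cluster-based scoring with fragment descriptors: fragments that
# occur (almost) exclusively in active or in inactive training molecules form two
# groups, and a test molecule is scored by its overlap with those groups.
from collections import Counter

def build_fragment_classes(active_mols, inactive_mols, purity=0.9):
    """Each molecule is a set of fragment descriptors (opaque strings here).
    Returns the fragments dominated by actives and those dominated by inactives."""
    act = Counter(f for m in active_mols for f in m)
    ina = Counter(f for m in inactive_mols for f in m)
    active_frags, inactive_frags = set(), set()
    for f in set(act) | set(ina):
        share_active = act[f] / (act[f] + ina[f])
        if share_active >= purity:
            active_frags.add(f)
        elif share_active <= 1.0 - purity:
            inactive_frags.add(f)
    return active_frags, inactive_frags

def score(molecule, active_frags, inactive_frags):
    """Positive score suggests activity, negative suggests inactivity."""
    if not molecule:
        return 0.0
    return (len(molecule & active_frags) - len(molecule & inactive_frags)) / len(molecule)

# Toy example with made-up fragment descriptors
actives = [{"c1ccccc1", "C=O", "N-H"}, {"c1ccccc1", "N-H"}]
inactives = [{"C-Cl", "C=O"}, {"C-Cl"}]
act_frags, inact_frags = build_fragment_classes(actives, inactives)
print(round(score({"c1ccccc1", "C-Cl", "N-H"}, act_frags, inact_frags), 3))   # -> 0.333
```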
