281

Robust and efficient intrusion detection systems

Gupta, Kapil Kumar January 2009
Intrusion detection systems are now an essential component in the overall network and data security arsenal. With the rapid advancement in network technologies, including higher bandwidths and the ease of connectivity of wireless and mobile devices, the focus of intrusion detection has shifted from simple signature-matching approaches to detecting attacks based on analyzing contextual information which may be specific to individual networks and applications. As a result, anomaly and hybrid intrusion detection approaches have gained significance. However, present anomaly and hybrid detection approaches suffer from three major setbacks: limited attack detection coverage, a large number of false alarms, and inefficiency in operation.

In this thesis, we address these three issues by introducing efficient intrusion detection frameworks and models which are effective in detecting a wide variety of attacks and which result in very few false alarms. Additionally, using our approach, attacks can not only be accurately detected but also identified, which helps to initiate effective intrusion response mechanisms in real time. Experiments performed on the benchmark KDD 1999 data set and two additional data sets collected locally confirm that layered conditional random fields are particularly well suited to detect attacks at the network level, and that user session modeling using conditional random fields can effectively detect attacks at the application level.

We first introduce the layered framework with conditional random fields as the core intrusion detector. Layered conditional random fields can be used to build scalable and efficient network intrusion detection systems which are highly accurate in attack detection. We show that our systems can operate either at the network level or at the application level and perform better than other well-known approaches for intrusion detection. Experimental results further demonstrate that our system is robust to noise in training data and handles noise better than other systems such as decision trees and naive Bayes. We then introduce our unified logging framework for audit data collection and perform user session modeling using conditional random fields to build real-time application intrusion detection systems. We demonstrate that our system can effectively detect attacks even when they are disguised within normal events in a single user session. Using our user session modeling approach based on conditional random fields also results in early attack detection. This is desirable since intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of an attack.
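As a toy illustration of the core modeling idea (and not the layered framework developed in the thesis), the sketch below labels each event in a connection sequence as normal or attack with a linear-chain conditional random field. It uses the third-party sklearn-crfsuite package, and the feature names and records are invented for the example.

```python
# Toy sketch only -- not the thesis's layered framework. A linear-chain CRF
# assigns a label to every event in a connection sequence; features and data
# below are invented for illustration.
import sklearn_crfsuite

def features(event: dict) -> dict:
    # One feature dict per event in the sequence
    return {"service": event["service"],
            "flag": event["flag"],
            "bytes_bucket": "high" if event["src_bytes"] > 1000 else "low"}

train_sequences = [
    [{"service": "http", "flag": "SF", "src_bytes": 230},
     {"service": "http", "flag": "SF", "src_bytes": 310}],
    [{"service": "private", "flag": "S0", "src_bytes": 0},
     {"service": "private", "flag": "S0", "src_bytes": 0}],
]
train_labels = [["normal", "normal"], ["attack", "attack"]]

X = [[features(e) for e in seq] for seq in train_sequences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, train_labels)
print(crf.predict(X))   # per-event labels for each sequence
```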
282

Essays on testing conditional independence

Huang, Meng. January 2009
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed August 11, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 134-136).
283

Conditional random fields for noisy text normalisation

Coetsee, Dirko 2014
Thesis (MScEng) -- Stellenbosch University, 2014.

The increasing popularity of microblogging services such as Twitter means that more and more unstructured data is available for analysis. The informal language usage in these media presents a problem for traditional text mining and natural language processing tools. We develop a pre-processor to normalise this noisy text so that useful information can be extracted with standard tools. A system consisting of a tokeniser, out-of-vocabulary token identifier, correct candidate generator, and N-gram language model is proposed. We compare the performance of generative and discriminative probabilistic models for these different modules. The effect of normalising the training and testing data on the performance of a tweet sentiment classifier is investigated. A linear-chain conditional random field, which is a discriminative model, is found to work better than its generative counterpart for the tokenisation module, achieving a 0.76% character error rate compared to 1.41% for the finite state automaton. For the candidate generation module, however, the generative weighted finite state transducer works better, getting the correct clean version of a word right 36% of the time on the first guess, while the discriminatively trained hidden alignment conditional random field only achieves 6%. The use of a normaliser as a pre-processing step does not significantly affect the performance of the sentiment classifier.
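Since the tokenisation results above are reported as character error rates, a small self-contained sketch of that metric may help. It is illustrative only, not code from the thesis, and the example strings are invented.

```python
# Illustrative sketch only: character error rate of a normaliser output against
# a reference clean string, the metric quoted above (0.76% vs 1.41%).
def levenshtein(a: str, b: str) -> int:
    """Minimum number of character insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def character_error_rate(hypothesis: str, reference: str) -> float:
    return levenshtein(hypothesis, reference) / len(reference)

print(character_error_rate("c u 2moro", "see you tomorrow"))
```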
284

Conditional many-body dynamics and quantum control of ultracold fermions and bosons in optical lattices coupled to quantized light

Mazzucchi, Gabriel January 2016
We study the atom-light interaction in the fully quantum regime, focusing on off-resonant light scattering into a cavity from ultracold atoms trapped in an optical lattice. Because of the global coupling between the atoms and the light modes, observing the photons leaking from the cavity allows quantum nondemolition (QND) measurement of quantum correlations of the atomic ensemble, distinguishing between different quantum states. Moreover, the detection of the photons perturbs the quantum state of the atoms via the so-called measurement backaction. This effect constitutes an unusual additional dynamical source in a strongly correlated many-body system, and it is able to compete efficiently with the system's intrinsic short-range dynamics. This competition becomes possible due to the ability to change the spatial profile of a global measurement at a microscopic scale comparable to the lattice period, without the need for single-site addressing. We demonstrate nontrivial dynamical effects such as large-scale multimode oscillations and the breakup and protection of strongly interacting fermion pairs. We show that measurement backaction can be exploited to realize quantum states with spatial modulations of the density and magnetization, thus overcoming the usual requirement of strong interatomic interactions. We propose detection schemes for implementing antiferromagnetic states and density waves, and we demonstrate that such long-range correlations cannot be realized with local addressing. Finally, we describe how to stabilize these emerging phases with the aid of quantum feedback. Such a quantum optical approach introduces into many-body physics novel processes, objects, and methods of quantum engineering, including the design of many-body entangled environments for open systems, and it is easily extendable to other systems promising for quantum technologies.
285

Sparse Bayesian Time-Varying Covariance Estimation in Many Dimensions

Kastner, Gregor 18 September 2016
Dynamic covariance estimation for multivariate time series suffers from the curse of dimensionality. This renders parsimonious estimation methods essential for conducting reliable statistical inference. In this paper, the issue is addressed by modeling the underlying co-volatility dynamics of a time series vector through a lower-dimensional collection of latent time-varying stochastic factors. Furthermore, we apply a Normal-Gamma prior to the elements of the factor loadings matrix. This hierarchical shrinkage prior effectively pulls the factor loadings of unimportant factors towards zero, thereby increasing parsimony even more. We apply the model to simulated data as well as daily log-returns of 300 S&P 500 stocks and demonstrate the effectiveness of the shrinkage prior in obtaining sparse loadings matrices and more precise correlation estimates. Moreover, we investigate predictive performance and discuss different choices for the number of latent factors. In addition to being a stand-alone tool, the algorithm is designed to act as a "plug and play" extension for other MCMC samplers; it is implemented in the R package factorstochvol.
Series: Research Report Series / Department of Statistics and Mathematics
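As a rough sketch of the kind of latent factor stochastic volatility model described, one common formulation is given below; the notation is assumed for illustration and not taken from the paper.

```latex
% Sketch (assumed notation): m observed return series driven by r << m latent
% factors, each variance following its own stochastic volatility process.
\[
\mathbf{y}_t = \boldsymbol{\Lambda}\,\mathbf{f}_t + \boldsymbol{\varepsilon}_t,
\quad
\mathbf{f}_t \sim \mathcal{N}\bigl(\mathbf{0},\operatorname{diag}(e^{h_{1t}},\dots,e^{h_{rt}})\bigr),
\quad
\boldsymbol{\varepsilon}_t \sim \mathcal{N}\bigl(\mathbf{0},\operatorname{diag}(e^{\bar h_{1t}},\dots,e^{\bar h_{mt}})\bigr),
\]
\[
h_{jt} = \mu_j + \phi_j\,(h_{j,t-1} - \mu_j) + \sigma_j \eta_{jt},
\qquad \eta_{jt}\sim\mathcal{N}(0,1),
\]
\[
\operatorname{Cov}(\mathbf{y}_t \mid \boldsymbol{\Lambda}, h_t)
  = \boldsymbol{\Lambda}\operatorname{diag}\bigl(e^{h_{1t}},\dots,e^{h_{rt}}\bigr)\boldsymbol{\Lambda}^{\top}
  + \operatorname{diag}\bigl(e^{\bar h_{1t}},\dots,e^{\bar h_{mt}}\bigr).
\]
```

In this setup the Normal-Gamma shrinkage prior is placed on the individual elements of the loadings matrix, which is what pulls the loadings of unimportant factors towards zero.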
286

Predicting Uncertainty in Financial Markets: An empirical study on ARCH-class models' ability to estimate Value at Risk

Nybrant, Arvid, Rundberg, Henrik January 2018
Value at Risk has over the last couple of decades become one of the most widely used measures of market risk. Several methods to compute this measure have been suggested. In this paper, we evaluate the use of the GARCH(1,1), EGARCH(1,1) and APARCH(1,1) models for estimating this measure under the assumption that the conditional error distribution is, respectively, normal, t, skewed t and NIG. For each model, the 95% and 99% one-day Value at Risk are computed using rolling out-of-sample forecasts for three equity indices. These forecasts are evaluated with Kupiec's test for unconditional coverage and Christoffersen's test for conditional coverage. The results imply that the models generally perform well. The APARCH(1,1) model seems to be the most robust, but the GARCH(1,1) and EGARCH(1,1) models also provide accurate predictions. The results indicate that the assumption about the conditional distribution matters more for 99% than for 95% Value at Risk. Generally, a leptokurtic distribution appears to be a sound choice for the conditional distribution.
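As a hedged illustration of the evaluation step (not code from the thesis), the sketch below implements Kupiec's unconditional coverage test: it checks whether the observed share of VaR violations is consistent with the nominal level. The violation series is simulated.

```python
# Illustrative sketch only: Kupiec's unconditional coverage (POF) test for a
# Value-at-Risk backtest. Data below are simulated, not from the thesis.
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations: np.ndarray, alpha: float) -> tuple[float, float]:
    """LR test that the observed VaR violation rate equals alpha.

    violations : boolean array, True where the realized loss exceeded the VaR
    alpha      : nominal violation probability (e.g. 0.01 for 99% VaR)
    Assumes at least one violation and at least one non-violation.
    """
    T = violations.size              # number of out-of-sample forecasts
    x = int(violations.sum())        # number of observed violations
    pi_hat = x / T                   # empirical violation rate
    ll_null = (T - x) * np.log(1 - alpha) + x * np.log(alpha)
    ll_alt = (T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr = -2.0 * (ll_null - ll_alt)   # asymptotically chi-squared, 1 df
    return lr, chi2.sf(lr, df=1)

# Example: 1000 forecasts of 99% VaR with 16 violations
rng = np.random.default_rng(0)
hits = np.zeros(1000, dtype=bool)
hits[rng.choice(1000, 16, replace=False)] = True
print(kupiec_pof(hits, alpha=0.01))
```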
287

Voicing Conditional Forgiveness

January 2011
The current study is the first qualitative investigation aimed solely at understanding what it means to communicate conditional forgiveness in serious romantic relationships. Conditional forgiveness is forgiveness that has been offered with the stipulation that the errant behavior cease. It is a provocative topic because some argue genuine forgiveness is not conditional, but recent discoveries associating its use with severe transgressions and relational deterioration suggest it is a critical site for investigation. This inductive analysis of open-ended data from 201 anonymous surveys identified both distinctions between and intersections of conditional forgiveness, forgiveness, and reconciliation. A relational dialectics analysis revealed that reconcilable-irreconcilable was the overarching tension for conditional forgivers; six additional tensions were also discovered: individual identity-couple identity, safety-risk, certainty-uncertainty, mercy-justice, heart-mind, and expression-suppression. Of particular intrigue, the current analysis supports the previous discovery of implicit conditional forgiveness--suppressing conditions, sometimes in response to physical and substance abuse. Ultimately, the current analysis contributes to the enduring conversation aimed at understanding the communication and pursuit of forgiveness and reconciliation. It addresses one of the basic instincts and paradoxes of existing with others--the balance between vulnerability and protection.
Dissertation/Thesis. M.A. Communication Studies, 2011.
288

Extração de informações de conferências em páginas web

Garcia, Cássio Alan January 2017
Choosing the most suitable conference to submit a paper to is a task that depends on various factors: (i) the topic of the paper needs to be among the topics of interest of the conference; (ii) submission deadlines need to be compatible with the time necessary for writing the paper; (iii) conference location and registration costs; and (iv) the quality or impact of the conference. These factors, allied to the existence of thousands of conferences, make the search for the right event very time consuming, especially when researching in a new area. To help researchers find conferences, this work presents a method developed to retrieve and extract data from conference web sites. This is a challenging task, mainly because each conference has its own site with its own layout. We propose CONFTRACKER, which combines the identification of the URLs of conferences listed in the Qualis Table with the extraction of their submission deadlines. Information extraction is carried out independently of the conference, of the page's layout, and of how the dates are presented (formatting and labels). To evaluate the proposed method, we carried out experiments with real web data from Computer Science conferences. The results show that CONFTRACKER outperformed a baseline method based on the position of labels and dates. Finally, the extracted data are stored in a database that can be queried through an online interface.
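The sketch below is purely illustrative and not the CONFTRACKER implementation: it shows one simple way to pull labelled deadline dates out of conference-page text with regular expressions. The label and date patterns are hypothetical examples.

```python
# Illustrative sketch only -- not the CONFTRACKER implementation. Extracts
# dates that share a line with a deadline-like label; patterns are examples.
import re
from datetime import datetime

DATE_PATTERNS = [
    ("%B %d, %Y", r"[A-Z][a-z]+ \d{1,2}, \d{4}"),  # e.g. "March 15, 2017"
    ("%d %B %Y",  r"\d{1,2} [A-Z][a-z]+ \d{4}"),    # e.g. "15 March 2017"
    ("%Y-%m-%d",  r"\d{4}-\d{2}-\d{2}"),            # e.g. "2017-03-15"
]
LABEL = re.compile(r"(submission|paper|abstract)\s+deadline", re.IGNORECASE)

def extract_deadlines(page_text: str) -> list[datetime]:
    """Return dates appearing on the same line as a deadline-like label."""
    found = []
    for line in page_text.splitlines():
        if not LABEL.search(line):
            continue
        for fmt, pattern in DATE_PATTERNS:
            for match in re.findall(pattern, line):
                try:
                    found.append(datetime.strptime(match, fmt))
                except ValueError:
                    pass  # matched the shape but not a real calendar date
    return found

print(extract_deadlines("Paper submission deadline: March 15, 2017"))
```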
289

Assessing the contribution of GARCH-type models with realized measures to BM&FBovespa stock allocation

Boff, Tainan de Bacco Freitas January 2018
In this work we perform an extensive backtesting study whose main goal is to assess the performance of global minimum variance (GMV) portfolios built on volatility forecasting models that make use of high-frequency (as opposed to daily) data. The study is based on a broad intradaily financial dataset comprising 41 assets listed on the BM&FBOVESPA from 2009 to 2017. We evaluate volatility forecasting models that are inspired by the ARCH literature but also include realized measures: the GARCH-X, the High-Frequency Based Volatility (HEAVY) and the Realized GARCH models. Their performance is benchmarked against portfolios built on the sample covariance matrix, covariance matrix shrinkage methods and DCC-GARCH, as well as against the naive (equally weighted) portfolio and the Ibovespa index. Since the nature of this work is multivariate, and in order to make the estimation of large covariance matrices feasible, we resort to the Dynamic Conditional Correlation (DCC) specification. We use three rebalancing schemes (daily, weekly and monthly) and four different sets of constraints on portfolio weights. The performance assessment relies on economic measures such as annualized portfolio returns, annualized volatility, Sharpe ratio, maximum drawdown, Value at Risk, Expected Shortfall and turnover. We also account for transaction costs. As a conclusion, for our dataset the use of intradaily returns (sampled every 5 and 10 minutes) does not enhance the performance of GMV portfolios.
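For context, the sketch below computes the closed-form unconstrained global minimum variance weights from an estimated covariance matrix, which is the allocation rule such a backtest evaluates; the covariance matrix is a made-up example, not thesis data.

```python
# Illustrative sketch only: unconstrained global minimum variance (GMV) weights
# w = Sigma^{-1} 1 / (1' Sigma^{-1} 1) from an estimated covariance matrix.
import numpy as np

def gmv_weights(cov: np.ndarray) -> np.ndarray:
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # Sigma^{-1} 1 without explicit inversion
    return w / w.sum()

cov = np.array([[0.040, 0.006, 0.004],
                [0.006, 0.090, 0.010],
                [0.004, 0.010, 0.160]])
print(gmv_weights(cov))              # weights sum to one; short sales allowed
```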
290

Lunar cycles of reproduction in the clown anemonefish Amphiprion percula: individual-level strategies and population-level patterns

Seymour, Jeremiah R. 23 April 2018
Lunar cycles of reproduction are a widespread phenomenon in marine invertebrates and vertebrates. It is common practice to infer the adaptive value of this behavior from the population-level pattern. This practice may be flawed if individuals within the population are employing different reproductive strategies. Here, we capitalize on a long-term field study and a carefully controlled laboratory experiment with individually identifiable clown anemonefish, Amphiprion percula, to investigate the individual reproductive strategies underlying population-level patterns of reproduction. The field data reveal that A. percula exhibit a lunar cycle of reproduction at the population level. Further, the field data reveal that there is naturally occurring variation among and within individuals in the number of times they reproduce per month. The laboratory experiment reveals that the number of times individuals reproduce per month depends on their food availability. Individuals are employing a conditional strategy, breeding once, twice or three times per month depending on resource availability. Breaking down the population-level pattern by reproductive tactic, we show that each reproductive tactic has its own non-random lunar cycle of reproduction. Considering the adaptive value of these cycles, we suggest that all individuals, regardless of tactic, may avoid reproducing around the new moon. Further, individuals may avoid breeding in synchrony with each other because of negative frequency-dependent selection at the time of settlement. Most importantly, we conclude that determining what individuals are doing is a critical step toward understanding the adaptive value of lunar cycles of reproduction.
