  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

High Returns and Low Volatility: The Case for Mid-Cap Stocks

Lynch, Ryan 01 May 2018
This study examines excess risk-adjusted returns generated by mid-cap firms with an average market equity between $2.4 billion and $5.5 billion in 2017. Researchers have heavily studied the small-firm effect since its identification in the early 1980s, leading investors to overweight small-cap securities. Additional investment in the small-cap segment caused the small-cap anomaly to weaken. This study finds that excess returns of small-cap firms compared to mid-cap firms are not statistically significant in the periods 1946–2017 and 1982–2017. However, mid-cap firms generate significantly higher 3-year average returns relative to small- and large-cap firms after the initial identification of the small-cap anomaly (1982–2017). Further, mid-cap securities generate a higher risk-adjusted return after the small-cap anomaly was identified. This study hypothesizes that the mid-cap anomaly results from mid-caps' greater growth potential relative to large-caps while still being large enough to weather economic storms. This study also hypothesizes that non-size-related factors have the largest impact on the mid-cap segment. The results support the existence of a mid-cap anomaly; however, they suggest the anomaly is not a result of the growth potential of firms within the segment. Additionally, the results suggest non-size-related factors such as book-to-market and operating profitability have the smallest impact on mid-cap securities. Therefore, this study concludes that the excess returns generated by mid-cap securities represent a true anomaly that is not dependent upon non-size-related factors.
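The risk-adjusted comparison this abstract describes is conventionally expressed as a Sharpe-style ratio: mean excess return divided by return volatility. A minimal illustrative sketch follows; the return series and segment names are hypothetical, not the study's data.

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return per unit of return volatility (sample std dev)."""
    excess = [r - risk_free for r in returns]
    return statistics.fmean(excess) / statistics.stdev(excess)

# Hypothetical annual returns for two size segments (illustration only).
mid_cap   = [0.12, 0.09, -0.03, 0.15, 0.08]
small_cap = [0.18, -0.10, 0.02, 0.22, -0.05]

# A higher ratio means more return per unit of volatility -- the sense in
# which the study compares segments after the small-cap anomaly was identified.
print(sharpe_ratio(mid_cap) > sharpe_ratio(small_cap))  # -> True
```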
162

Improving Internet security via large-scale passive and active DNS monitoring

Antonakakis, Emmanouil Konstantinos 04 June 2012
The Domain Name System (DNS) is a critical component of the Internet. DNS provides the ability to map human-readable and memorable domain names to machine-level IP addresses and other records. These mappings lie at the heart of the Internet's success and are essential for the majority of core Internet applications and protocols. The critical nature of DNS means that it is often the target of abuse. Cyber-criminals rely heavily upon the reliability and scalability of the DNS protocol to serve as an agile platform for their illicit operations. For example, modern malware and Internet fraud techniques rely upon DNS to locate their remote command-and-control (C&C) servers through which new commands from the attacker are issued, to serve as exfiltration points for information stolen from the victims' computers, and to manage subsequent updates to their malicious toolset. The research described in this thesis scientifically addresses problems in the area of DNS-based detection of illicit operations. In detail, this research studies new methods to quantify and track dynamically changing reputations for DNS based on passive network measurements. The research also investigates methods for the creation of early warning systems for DNS. These early warning systems enable the research community to identify emerging threats (e.g., new botnets and malware infections) across the DNS hierarchy in a timelier manner.
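A toy illustration of the kind of feature-based domain reputation scoring the abstract gestures at. The features, weights, and thresholds below are invented for illustration; the thesis's actual systems are statistical models trained on large-scale passive DNS measurements, not a fixed weighted sum.

```python
# Toy reputation score combining simple passive-DNS features into [0, 1],
# where higher means more suspicious. All weights/thresholds are illustrative.
def reputation_score(num_ips, num_countries, avg_ttl_seconds, domain_age_days):
    fluxiness = min(num_ips / 50.0, 1.0)               # many IPs: fast-flux suspicion
    spread    = min(num_countries / 10.0, 1.0)         # geographically scattered hosting
    short_ttl = 1.0 if avg_ttl_seconds < 300 else 0.0  # very short TTLs aid agility
    youth     = 1.0 if domain_age_days < 30 else 0.0   # newly registered domain
    return 0.35 * fluxiness + 0.25 * spread + 0.2 * short_ttl + 0.2 * youth

benign  = reputation_score(num_ips=2,  num_countries=1, avg_ttl_seconds=3600, domain_age_days=2000)
suspect = reputation_score(num_ips=40, num_countries=8, avg_ttl_seconds=60,   domain_age_days=5)
print(suspect > benign)  # -> True
```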
163

ONTO-Analyst: An Extensible Method for the Identification and the Visualization of Anomalies in Ontologies

João Paulo Orlando 21 August 2017
The Semantic Web is an extension of the World Wide Web in which information has explicit meaning, allowing computers and people to work in cooperation. In order to explicitly define meaning, ontologies are used to structure information. As more scientific fields adopt Semantic Web technologies, more complex ontologies are needed. Moreover, the quality assurance and management of ontologies are undermined as the ontologies increase in size and complexity. One of the causes of these difficulties is the existence of problems, also called anomalies, in the structure of the ontologies. These anomalies range from subtle problems, such as poorly designed concepts, to more serious ones, such as inconsistencies. The identification and elimination of anomalies can reduce an ontology's size and make it easier to understand. However, methods to identify anomalies found in the literature do not provide anomaly visualizations, and many do not work on OWL ontologies or are not user-extensible. For these reasons, a new method for anomaly identification and visualization, the ONTO-Analyst, was created. It allows ontology developers to automatically identify anomalies, using SPARQL queries, and visualize them as graph images. The method uses a proposed ontology, the METAdata description For Ontologies/Rules (MetaFOR), to describe the structure of other ontologies, and SPARQL queries to identify anomalies in this description. Once identified, the anomalies can be presented as graph images. A system prototype, the ONTO-Analyst, was created in order to validate this method, and it was tested on a representative set of ontologies through the verification of representative anomalies. The prototype tested 18 types of anomalies, taken from the scientific literature, on a set of 608 OWL ontologies from four major public repositories and two articles. The system detected 4.4 million anomaly occurrences in the 608 ontologies: 3.5 million occurrences of a single type and 900 thousand distributed across 11 other types. These anomalies occurred in various parts of the ontologies, such as classes, object properties, and data properties. In a second test, a case study was performed on the visualizations generated by the ONTO-Analyst prototype for the anomalies found in the first test. Visualizations of 11 different anomaly types were automatically generated. The prototype showed that each visualization presented the elements involved in the anomaly and that at least one possible solution could be deduced from the visualization. These results demonstrate that the method can efficiently find anomaly occurrences in a representative set of OWL ontologies and that the visualizations aid in understanding and correcting the anomalies found. To extend the types of detectable anomalies, users can write new SPARQL queries.
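The method described above expresses each anomaly check as a SPARQL query over a MetaFOR description of the ontology. As a library-free stand-in, the sketch below checks one simple, hypothetical anomaly pattern (a declared class with no rdfs:label) directly over a list of triples; it is not one of the thesis's 18 anomaly types, just an illustration of the query-as-anomaly-pattern idea.

```python
# Stand-in for a SPARQL anomaly query: flag owl:Class declarations that
# carry no rdfs:label, working directly over (subject, predicate, object)
# triples instead of a SPARQL engine.
OWL_CLASS  = "owl:Class"
RDFS_LABEL = "rdfs:label"

triples = [
    ("ex:A", "rdf:type", OWL_CLASS),
    ("ex:A", RDFS_LABEL, "A"),
    ("ex:B", "rdf:type", OWL_CLASS),   # declared but never labeled -> flagged
]

def unlabeled_classes(triples):
    classes = {s for s, p, o in triples if p == "rdf:type" and o == OWL_CLASS}
    labeled = {s for s, p, o in triples if p == RDFS_LABEL}
    return sorted(classes - labeled)

print(unlabeled_classes(triples))  # -> ['ex:B']
```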
164

Extended Pontecorvo mechanism

Zavanin, Eduardo Marcio, 1989- 19 May 2006
Advisor: Marcelo Moraes Guzzo / Master's dissertation - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin / Abstract: This project aims at developing a mechanism that provides a possible solution to the reactor antineutrino anomaly and the Gallium anomaly. Relaxing Pontecorvo's hypothesis by allowing the mixing angles that compose a flavor state to take different values, it is possible to explain the phenomenon of neutrino/antineutrino disappearance at short baselines through a free parameter. To confront the proposed mechanism, we also perform a careful analysis of some experimental limits obtained by particle accelerators and identify a possible energy dependence of this free parameter. Adopting this energy dependence for the free parameter, we can accommodate the large majority of experimental data in neutrino physics within a single model. / Master's degree in Physics
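For context, the standard two-flavor Pontecorvo hypothesis that this work relaxes gives a survival probability P = 1 - sin²(2θ) sin²(1.267 Δm² L/E). A minimal sketch of the standard formula (the parameter values below are illustrative, not the thesis's fit):

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Standard two-flavor survival probability
    P = 1 - sin^2(2 theta) * sin^2(1.267 * dm2 * L / E),
    with dm2 in eV^2, L in km and E in GeV."""
    return 1.0 - sin2_2theta * math.sin(1.267 * dm2_ev2 * L_km / E_GeV) ** 2

# Illustrative reactor-like setting: ~3 MeV antineutrino at 1 km baseline.
p = survival_probability(sin2_2theta=0.09, dm2_ev2=2.4e-3, L_km=1.0, E_GeV=0.003)
print(0.0 <= p <= 1.0)  # -> True
```

The extended mechanism of the thesis departs from this formula by letting the mixing angles entering a flavor state differ, introducing a free parameter; that generalization is not reproduced here.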
165

Detecting Non-Natural Objects in a Natural Environment using Generative Adversarial Networks with Stereo Data

Gehlin, Nils, Antonsson, Martin January 2020
This thesis investigates the use of Generative Adversarial Networks (GANs) for detecting images containing non-natural objects in natural environments, and whether the introduction of stereo data can improve performance. The state-of-the-art GAN-based anomaly detection method presented by A. Berg et al. [5] (BergGAN) was the base of this thesis. By modifying BergGAN to accept not only three-channel input but also four- and six-channel input, it was possible to investigate the effect of introducing stereo data into the method. The input to the four-channel network was an RGB image and its corresponding disparity map, and the input to the six-channel network was a stereo pair consisting of two RGB images. The three datasets used in the thesis were constructed from a dataset of aerial video sequences provided by SAAB Dynamics, where the scene was mostly wooded areas. The datasets were divided into training and validation data, where the latter was used for the performance evaluation of the respective networks. The evaluation method suggested in [5] was used in the thesis: each sample was scored on the likelihood of it containing anomalies, Receiver Operating Characteristics (ROC) analysis was then applied, and the area under the ROC curve was calculated. The results showed that BergGAN was able to successfully detect images containing non-natural objects in natural environments using the dataset provided by SAAB Dynamics. The adaptation of BergGAN to also accept four and six input channels increased the performance of the method, showing that there is information in stereo data that is relevant for GAN-based anomaly detection. There was, however, no substantial performance difference between the network trained with two RGB images and the one trained with an RGB image and its corresponding disparity map.
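The area under the ROC curve used in the evaluation above equals the probability that a randomly chosen anomalous sample is scored higher than a randomly chosen normal one (the Mann-Whitney statistic). A minimal sketch of that computation; the score lists are invented for illustration.

```python
# AUC as the Mann-Whitney rank statistic over anomaly scores: the fraction of
# (anomalous, normal) pairs where the anomalous sample scores higher
# (ties count half).
def roc_auc(scores_normal, scores_anomalous):
    wins = 0.0
    for a in scores_anomalous:
        for n in scores_normal:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(scores_anomalous) * len(scores_normal))

normal    = [0.1, 0.2, 0.25, 0.3]   # scores of anomaly-free validation samples
anomalous = [0.28, 0.6, 0.7]        # scores of samples with non-natural objects
print(roc_auc(normal, anomalous))   # 11 of 12 pairs ranked correctly
```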
166

Anomaly Detection in Wait Reports and its Relation with Apache Cassandra Statistics

Madhu, Abheyraj Singh, Rapolu, Sreemayi January 2021
Background: Apache Cassandra is a highly scalable distributed system that can handle large amounts of data through several nodes/virtual machines grouped together as Apache Cassandra clusters. When one such node in an Apache Cassandra cluster is down, there is a need for a tool or an approach that can identify the failed virtual machine by analyzing the data generated by each of the virtual machines in the cluster. Manual analysis of this data is tedious and can be quite strenuous. Objectives: The objective of the thesis is to identify, build, and evaluate a solution that can detect and report the behaviour of the erroneous or failed virtual machine by analyzing the data generated by each virtual machine in an Apache Cassandra cluster. In the study, we analyzed two specific data sources from each virtual machine, i.e., the wait reports and Apache Cassandra statistics, and proposed a tool named AnoDect to realize this objective. The tool was built using input provided by the technical support team at Ericsson through interviews and was also evaluated by them to assess its reliability, usability, and usefulness in an industrial setting. Methods: A case study methodology was piloted at Ericsson, and semi-structured interviews were conducted to identify the key features in the data along with the functionalities AnoDect needs to perform to assist the CIL team (technical support team at Ericsson) in rectifying the erroneous virtual machine in the cluster. An experimental evaluation and a static user evaluation were conducted as part of the case study evaluation, where the experimental evaluation identified the best technique for AnoDect's anomaly detection in wait reports and the static evaluation assessed AnoDect's reliability and usability once deployed for use.
Results: From the feedback provided by the CIL team through the questionnaire, it has been observed that the results provided by the tool are quite satisfactory, in terms of usability and reliability of the tool.
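AnoDect's internals are not described in the abstract; as an illustrative baseline for the kind of per-node anomaly detection it performs on wait reports, the sketch below flags a metric that deviates strongly from a node's own history using a z-score. The metric values and threshold are invented for illustration.

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """True if `current` lies more than `threshold` sample standard
    deviations away from the mean of `history`."""
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history)
    return abs(current - mu) / sigma > threshold

# Hypothetical per-node wait times (ms) from past reports; a stalled node's
# sudden spike stands out against its own history.
waits_ms = [12, 14, 11, 13, 12, 15, 13, 12]
print(is_anomalous(waits_ms, 60))  # -> True
print(is_anomalous(waits_ms, 13))  # -> False
```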
167

PRAAG Algorithm in Anomaly Detection

Zhang, Dongyang January 2016
Anomaly detection has been one of the most important applications of data mining, widely applied in industries like finance, medicine, telecommunications, and even manufacturing. In many scenarios, data arrive as large-volume streams, so it is preferable to analyze the data without storing all of them. In other words, the key is to improve the space efficiency of algorithms, for example by extracting a statistical summary of the data. In this thesis, we study the PRAAG algorithm, a collective anomaly detection algorithm based on quantile features of the data, so its space efficiency essentially depends on that of the quantile algorithm. Firstly, the thesis investigates quantile summary algorithms that provide quantile information about a dataset without storing all the data points. Then, we implement the selected algorithms and run experiments to test their performance. Finally, the report focuses on experiments with PRAAG to understand how the parameters affect performance and to compare it with other anomaly detection algorithms. In conclusion, the GK algorithm provides a more space-efficient way to estimate quantiles than simply storing all data points. Also, PRAAG is effective in terms of True Prediction Rate (TPR) and False Prediction Rate (FPR) compared with a baseline algorithm, CUSUM. In addition, there are many possible improvements to be investigated, such as parallelizing the algorithm.
168

Edge-based blockchain enabled anomaly detection for insider attack prevention in Internet of Things

Tukur, Yusuf M., Thakker, Dhaval, Awan, Irfan U. 31 March 2022
Internet of Things (IoT) platforms are responsible for overall data processing in the IoT system. This ranges from analytics and big data processing to gathering all sensor data over time to analyze and produce long-term trends. However, this comes with a prohibitively high demand for resources such as memory, computing power and bandwidth, which the highly resource-constrained IoT devices lack, making it difficult to send data to the platforms for efficient operation. This results in poor availability and a risk of data loss due to a single point of failure should the cloud platforms suffer attacks. The integrity of the data can also be compromised by an insider, such as a malicious system administrator, without leaving traces of their actions. To address these issues, we propose in this work an edge-based blockchain-enabled anomaly detection technique to prevent insider attacks in IoT. The technique first employs the power of edge computing to reduce latency and bandwidth requirements by taking processing closer to the IoT nodes, hence improving availability and avoiding a single point of failure. It then leverages aspects of sequence-based anomaly detection, while integrating the distributed edge with a blockchain that offers smart contracts to perform detection and correction of abnormalities in incoming sensor data. Evaluation of our technique using real IoT system datasets showed that it remarkably achieved the intended purpose, while ensuring the integrity and availability of the data, which is critical to IoT success. / Petroleum Technology Development Fund (PTDF) Nigeria, Grant/Award Number: PTDF/ED/PHD/TYM/858/16
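One idea the abstract leans on is that blockchain-style hash chaining makes silent, after-the-fact edits by an insider detectable: each record commits to its predecessor, so rewriting one reading breaks every later hash. The sketch below is a minimal stand-in for that property, not the authors' implementation.

```python
import hashlib

def chain(records):
    """Link records so each entry stores a hash over (previous hash + record)."""
    prev, out = "genesis", []
    for r in records:
        h = hashlib.sha256((prev + r).encode()).hexdigest()
        out.append((r, h))
        prev = h
    return out

def verify(chained):
    """Recompute the chain; any in-place edit breaks verification."""
    prev = "genesis"
    for r, h in chained:
        if hashlib.sha256((prev + r).encode()).hexdigest() != h:
            return False
        prev = h
    return True

ledger = chain(["temp=21", "temp=22", "temp=35"])
print(verify(ledger))                  # -> True
ledger[1] = ("temp=19", ledger[1][1])  # insider silently rewrites a reading
print(verify(ledger))                  # -> False
```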
169

Detection and localization of link-level network anomalies using end-to-end path monitoring

Salhi, Emna 13 February 2013
The aim of this thesis is to come up with cost-efficient, accurate and fast schemes for link-level network anomaly detection and localization. It has been established that to detect all potential link-level anomalies, a set of paths that covers all links of the network must be monitored, whereas to localize all potential link-level anomalies, a set of paths that can distinguish between all links of the network pairwise must be monitored. Either end-node of each monitored path must be equipped with a monitoring device. Most existing link-level anomaly detection and localization schemes operate in two steps. The first step selects a minimal set of monitor locations that can detect/localize any link-level anomaly. The second step selects a minimal set of monitoring paths between the selected monitor locations such that all links of the network are covered/pairwise distinguishable. However, such stepwise schemes do not consider the interplay between the conflicting optimization objectives of the two steps, which results in suboptimal consumption of network resources and biased monitoring measurements. One of the objectives of this thesis is to evaluate and reduce this interplay. To this end, one-step anomaly detection and localization schemes that jointly select monitor locations and the paths to be monitored are proposed. Furthermore, we demonstrate that the already established condition for anomaly localization is sufficient but not necessary. A necessary and sufficient condition that reduces the localization cost drastically is established. The problems are shown to be NP-hard. Scalable and near-optimal heuristic algorithms are proposed.
170

Topics in neutrino physics

Zavanin, Eduardo Marcio, 1989- 17 March 2017
Advisor: Marcelo Moraes Guzzo / Doctoral thesis (PhD) - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin / Abstract: The objective of this work is to study an alternative mechanism, other than the hypothesized sterile neutrino, to solve the reactor antineutrino anomaly, the Gallium anomaly and the LSND anomaly. We will also understand how to fit this mechanism into the theory of particle physics through non-standard interactions. In addition, we will study neutrinoless double beta decay and set constraints on the effective Majorana neutrino mass. Furthermore, we will understand the limits that the ECHo experiment will provide for direct measurements of the neutrino mass / PhD in Sciences (Physics) / FAPESP 2013/02518-7 / CAPES 1189631/2013
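For reference, the effective Majorana mass constrained by neutrinoless double beta decay searches such as those discussed above is the standard combination of the light neutrino masses weighted by the squared electron-row mixing-matrix elements (this is the textbook definition, not a result specific to this thesis):

```latex
m_{\beta\beta} \;=\; \Bigl|\,\sum_{i=1}^{3} U_{ei}^{2}\, m_{i}\Bigr|
```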