91

Logghantering : En undersökning av logghantering och logghanteringssystem / Log management: A study of log management and log management systems

Flodin, Anton January 2016 (has links)
This research includes a review of the log management of the company Telia and a comparison of the two log management systems Splunk and ELK. The review of the company's log management shows that log messages are stored in files on a hard drive that can be accessed through the network, and that the log messages are system-specific. ELK is able to ingest log messages of different formats simultaneously; in Splunk, the upload process has to be repeated for each log message format. Both systems store log messages in a file system on the hard drive of the server where they are installed. In networks that involve multiple servers, ELK distributes the log messages between the servers, which reduces the workload of performing searches and storing large amounts of data. Splunk can also reduce the workload in such networks by using forwarders that send the log messages to one or more central servers, which store them. Searches of log messages in Splunk are performed through a graphical interface; searches in ELK are performed through a REST API, which external systems can also use to retrieve search results. Splunk likewise has a REST API that external systems can use to retrieve search results. The research revealed that ELK had a lower search time than Splunk; however, no method was found to measure the indexing time of ELK, so no comparison could be made with respect to Splunk's indexing time. Future work should investigate whether the indexing time of ELK can be measured, include more log management systems in the study to find further suitable candidates for Telia, and run performance tests in a network with multiple servers to draw conclusions about performance in practice.
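Both systems expose their search functionality over REST, as the abstract notes. Below is a minimal sketch of what a search against Elasticsearch (the storage and search engine in ELK) can look like via its `_search` endpoint; the host, index name, and field name are hypothetical placeholders, not Telia's actual setup.

```python
# Minimal sketch of a search against Elasticsearch's REST API (the "E" in ELK).
# Host, index name, and the 'message' field are hypothetical placeholders.
import requests

ES_HOST = "http://localhost:9200"   # assumed local Elasticsearch instance
INDEX = "app-logs"                  # hypothetical index of ingested log messages

query = {
    "query": {"match": {"message": "error"}},  # full-text match on a log field
    "size": 10,                                # return at most 10 hits
}

resp = requests.post(f"{ES_HOST}/{INDEX}/_search", json=query, timeout=10)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"].get("message"))
```

External systems can call the same endpoint, which is the property the thesis highlights when comparing the two systems' REST APIs.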
92

Zvýšení bezpečnosti nasazením SIEM systému v prostředí malého poskytovatele internetu / Security Enhancement Deploying SIEM in a Small ISP Environment

Bělousov, Petr January 2019 (has links)
This master's thesis focuses on enhancing security in a small internet service provider's environment by deploying a SIEM system. Available systems are compared and evaluated against the requirements of the commissioning company. The SIEM deployment project is designed, implemented, and evaluated for the company's unique environment.
93

A structured approach to selecting the most suitable log management system for an organization

Kristiansson Herrera, Lucas January 2020 (has links)
With the advent of digitalization, a typical organization today contains an ecosystem of servers, databases, and other components, and these systems can produce large volumes of log data daily. By using a log management system (LMS) to collect, structure, and analyze these log events, an organization can improve its services. The primary intent of this thesis is to construct a decision model that aids organizations in finding the LMS that best fits their needs. To construct such a model, a number of log management products, both proprietary and open source, are investigated. Furthermore, good practices for handling log data are investigated through various papers and books on the subject. The result is a decision model that an organization can use when preparing for, implementing, maintaining, and choosing an LMS. The decision model attempts to quantify various properties such as product features, but the LMSs it suggests should mostly be seen as a decision basis. To make the decision model more comprehensive and usable, more products should be included in the model, and other factors that could play a part in finding a suitable LMS should be investigated.
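The abstract describes a decision model that quantifies product properties into a ranking. A minimal sketch of one common way to do this, a weighted-sum score, follows; the criteria, weights, and per-product scores are illustrative assumptions, not the thesis's actual model.

```python
# Generic weighted-sum scoring sketch for ranking log management systems.
# Criteria, weights, and scores are illustrative, not the thesis's model.
criteria_weights = {"cost": 0.30, "scalability": 0.25, "features": 0.25, "support": 0.20}

products = {
    "LMS-A": {"cost": 4, "scalability": 5, "features": 3, "support": 4},
    "LMS-B": {"cost": 5, "scalability": 3, "features": 4, "support": 3},
}

def total_score(scores: dict) -> float:
    """Weighted sum of per-criterion scores on a 1-5 scale."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Highest-scoring LMS first; as the abstract stresses, a decision basis,
# not a verdict.
for name in sorted(products, key=lambda p: total_score(products[p]), reverse=True):
    print(name, round(total_score(products[name]), 2))
```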
94

Angehrn-Siu type effective base point freeness for quasi-log canonical pairs / 擬対数的標準対に対するアンゲールン-シウ型の有効自由性

Liu, Haidong 25 September 2018 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Science / Kō No. 21328 / Ri-haku No. 4424 / 新制||理||1635 (University Library) / Department of Mathematics and Mathematical Analysis, Graduate School of Science, Kyoto University / (Chief examiner) Professor 並河 良典, Professor 上 正明, Professor 森脇 淳 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
95

How to Build a Log Cabin by the Post-and-Beam Method

Conlee, Robert Michael 05 1900 (has links)
The primary purpose of this study is to give simple and detailed instructions for building a log cabin by the post-and-beam method. The data were gathered from three sources: (1) library research, (2) interviews with experienced builders of cabins, and (3) personal experience in cabin construction. A step-by-step guide for building a cabin is given in Chapters II and III, which explain in depth how to construct each section of the cabin, from laying the foundation to putting on the finishing touches. It is believed that any serious builder can follow the directions and construct his own log cabin for less than one-third the cost of a similar commercially built cabin.
96

A System for Automatic Information Extraction from Log Files

Chhabra, Anubhav 15 August 2022 (has links)
The development of technology and of data-driven systems and applications is constantly revolutionizing our lives. We are surrounded by digitized systems and solutions that are transforming and making our lives easier, and the criticality and complexity behind these systems are immense. To satisfy users and keep up with business needs, these digital systems must offer high availability and minimal downtime and must mitigate cyber attacks. System monitoring thus becomes an integral part of the lifecycle of a digital product or system. System monitoring often includes monitoring and analyzing the logs that systems output, which record the events occurring within a system. The first step in log analysis is generally to understand and segregate the various logical components within a log line, a task termed log parsing. Traditional log parsers use regular expressions and human-defined grammar to extract information from logs. Human experts are required to create, maintain, and update the database of these regular expressions and rules, and they must keep up with the pace at which new products, applications, and systems are developed and deployed, as each unique application or system has its own logs and logging standards. Logs from new sources tend to break existing systems because none of the expressions match the signature of the incoming logs. For these reasons, traditional log parsers are time-consuming, hard to maintain, error-prone, and not scalable. Machine learning based methodologies, on the other hand, can automate the log parsing process with little intervention from human experts. NERLogParser is one such solution: it uses a Bidirectional Long Short-Term Memory (BiLSTM) architecture to frame log parsing as a Named Entity Recognition (NER) problem. There have been recent advancements in the Natural Language Processing (NLP) domain with the introduction of architectures such as the Transformer and Bidirectional Encoder Representations from Transformers (BERT), but these techniques have not been applied to information extraction from log files, leaving a clear research gap for experimenting with recent advanced deep learning architectures. This thesis extensively compares different machine learning based log parsing approaches that frame log parsing as a NER problem. We compare 14 approaches: three traditional word-based methods (Naive Bayes, Perceptron, and Stochastic Gradient Descent); a graphical model (Conditional Random Fields, CRF); a pre-trained sequence-to-sequence model for log parsing (NERLogParser); an attention-based sequence-to-sequence model (Transformer neural network); three neural language models (BERT, RoBERTa, and DistilBERT); two traditional ensembles; and three cascading classifiers formed from the individual classifiers above. We evaluate the NER approaches using an evaluation framework that offers four different evaluation schemes, which not only help compare the NER approaches but also help assess the quality of the extracted information. The primary goal of this research is to evaluate the NER approaches on logs from new and unseen sources; to the best of our knowledge, no study in the literature evaluates NER methodologies in such a context. Evaluating NER approaches on unseen logs reveals the robustness and generalization capabilities of the various methodologies. For the experimentation, we use In-Scope and Out-of-Scope datasets that originate from entirely different sources and are mutually exclusive: the In-Scope dataset is used for training, validation, and testing, whereas the Out-of-Scope dataset is used purely to evaluate robustness and generalization. To better deal with logs from unknown sources, we propose the Log Diversification Unit (LoDU), a unit of our system that performs log augmentation and enrichment, making the NER approaches more robust towards new and unseen logs. We segregate our final results on a use-case basis, since different NER approaches may suit different applications. Overall, traditional ensembles perform best in parsing the Out-of-Scope log files, but they may not be the best option for real-time applications; if the trade-off between performance and throughput must be balanced, cascading classifiers are the go-to solution.
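To make the NER framing concrete, here is a minimal sketch of a log line annotated with token-level BIO labels; the label set and tokenization are illustrative assumptions, not the thesis's exact scheme.

```python
# Sketch: framing log parsing as token-level NER with BIO labels.
# Label set and whitespace tokenization are illustrative assumptions.
log_line = "Jul 12 10:32:01 server1 sshd[2541]: Failed password for root from 10.0.0.5"

tokens = log_line.split()
labels = [
    "B-TIMESTAMP", "I-TIMESTAMP", "I-TIMESTAMP",         # Jul 12 10:32:01
    "B-HOST",                                            # server1
    "B-SERVICE",                                         # sshd[2541]:
    "B-MESSAGE", "I-MESSAGE", "I-MESSAGE", "I-MESSAGE",  # Failed password for root
    "I-MESSAGE",                                         # from
    "B-IP",                                              # 10.0.0.5
]

# A trained sequence labeller (BiLSTM, CRF, BERT, ...) predicts one label per
# token; grouping contiguous B-/I- spans recovers the structured fields.
for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
```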
97

Log Frequency Analysis for Anomaly Detection in Cloud Environments

Bendapudi, Prathyusha January 2024 (has links)
Background: Log analysis has proven highly beneficial for monitoring system behaviour, detecting errors and anomalies, and predicting future trends in systems and applications. However, as these systems and applications continuously evolve, the amount of log data generated grows rapidly, and with it the manual effort invested in log analysis for error detection and root cause analysis. While research into reducing this manual effort is ongoing, this thesis introduces a new approach to automated log analysis, based on the temporal patterns of logs in a particular system environment, that can reduce manual effort considerably. Objectives: The main objective of this research is to identify temporal patterns in logs using clustering algorithms, extract the outlier logs that do not adhere to any time pattern, and further analyse them to check whether these outlier logs help detect errors and identify their root causes. Methods: Design science research was used to fulfil the objectives of the thesis, as the work required intermediary results and an iterative, responsive approach. The first part of the thesis consisted of building an artifact that identifies temporal patterns in logs of different log types using the DBSCAN clustering algorithm. After the patterns were identified and the outlier logs extracted, interviews were conducted in which system experts manually analysed the outlier logs, provided insights, and validated the log frequency analysis. Results: Running the clustering algorithm on logs of different log types produced clusters representing temporal patterns in most of the files. Some log files have no time patterns, indicating that not all log types adhere to a fixed time pattern. The interviews with system experts on the outlier logs yielded promising results, indicating that log frequency analysis does help reduce the manual effort involved in log analysis for error detection and root cause analysis. Conclusions: The results show that most of the logs in the given cloud environment adhere to time frequency patterns, and that analysing these patterns and their outliers makes error detection and root cause analysis easier in the given cloud environment.
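As a minimal sketch of the clustering step, the following applies scikit-learn's DBSCAN to log event times of day and flags the noise points as outliers; the synthetic data and parameters are assumptions, not the thesis's artifact or configuration.

```python
# Sketch: DBSCAN over log event times of day; points labelled -1 (noise)
# are the outlier logs that adhere to no temporal pattern. All data and
# parameters are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Seconds since midnight for each log event (synthetic stand-in data).
backup_job = rng.normal(3 * 3600, 300, size=200)    # pattern around 03:00
noon_report = rng.normal(12 * 3600, 600, size=200)  # pattern around 12:00
stragglers = rng.uniform(0, 24 * 3600, size=5)      # no pattern
times = np.concatenate([backup_job, noon_report, stragglers]).reshape(-1, 1)

labels = DBSCAN(eps=900, min_samples=10).fit_predict(times)  # eps = 15 min
outliers = times[labels == -1]
print(f"{len(outliers)} events fall outside every temporal cluster")
```

The outliers extracted this way are what the system experts then analyse manually.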
98

On a turbo decoder design for low power dissipation

Fei, Jia 21 July 2000 (has links)
A new coding scheme called "turbo coding" has generated tremendous interest in channel coding for digital communication systems due to its high error-correcting capability. Two key innovations in turbo coding are parallel concatenated encoding and iterative decoding. A soft-in soft-out component decoder can be implemented using the maximum a posteriori (MAP) or the maximum likelihood (ML) decoding algorithm. While the MAP algorithm offers better performance than the ML algorithm, its computation is complex and not suitable for hardware implementation. The log-MAP algorithm, which performs the necessary computations in the logarithm domain, greatly reduces hardware complexity. With the proliferation of battery-powered devices, power dissipation, along with speed and area, is a major concern in VLSI design. In this thesis, we investigated a low-power design of a turbo decoder based on the log-MAP algorithm. Our turbo decoder has two component log-MAP decoders, which perform the decoding process alternately. Two major ideas for low-power design are employing a variable number of iterations during the decoding process and shutting down inactive component decoders. The number of iterations during decoding is determined dynamically according to the channel condition to save power. When a component decoder is inactive, the clocks and spurious inputs to the decoder are blocked to reduce power dissipation. We followed the standard-cell design approach to design the proposed turbo decoder. The decoder was described in VHDL and then synthesized to measure the performance of the circuit in area, speed, and power. Our decoder achieves good performance in terms of bit error rate, and the two proposed methods significantly reduce power dissipation and energy consumption. / Master of Science
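The log-MAP algorithm stays in the logarithm domain by replacing ln(e^a + e^b) with the max* (Jacobian logarithm) operation. A minimal sketch follows, together with the cheaper max-log approximation often used when hardware cost matters; the function names are illustrative.

```python
# Sketch of the max* (Jacobian logarithm) operator at the heart of log-MAP
# decoding: ln(e^a + e^b) computed without leaving the logarithmic domain.
import math

def max_star(a: float, b: float) -> float:
    """Exact: ln(e^a + e^b) = max(a, b) + ln(1 + exp(-|a - b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a: float, b: float) -> float:
    """Max-log-MAP approximation: drop the correction term (cheaper in
    hardware, at a small cost in bit error rate)."""
    return max(a, b)

print(max_star(1.0, 2.0))         # ~2.3133
print(max_star_approx(1.0, 2.0))  # 2.0
```

In hardware, the correction term ln(1 + exp(-|a - b|)) is typically read from a small lookup table rather than computed.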
99

Log-Periodic Loop Antennas

Kim, Jeong I. 13 August 1999 (has links)
The Log-Periodic Loop Antenna with Ground Reflector (LPLA-GR) is investigated as a new type of antenna, which provides wide bandwidth, broad beamwidth, and high gain. This antenna has smaller transverse dimensions (by a factor of 2/pi) than a log-periodic dipole antenna with comparable radiation characteristics. Several geometries with different parameters are analyzed numerically using the ESP code, which is based on the method of moments. An LPLA-GR with 6 turns and a cone angle of 30° offers the most promising radiation characteristics. This antenna yields 47.6% gain bandwidth and 12 dB gain according to the numerical analysis. The LPLA-GR also provides linear polarization and unidirectional patterns. Three prototype antennas were constructed and measured in the Virginia Tech Antenna Laboratory. Far-field patterns and input impedance were measured over a wide range of frequencies, and the measured results agree well with the calculated results. Because of its wide bandwidth, high gain, and small size, the LPLA is expected to find applications as a feed for reflector antennas, as a detector on EMC scattering ranges, and as a mobile communication antenna. / Master of Science
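One way to read the 2/pi factor, assuming one-wavelength loop elements: a resonant loop of circumference λ has diameter λ/π, while a resonant half-wave dipole spans λ/2, so the transverse dimensions compare as

```latex
% Transverse size of a one-wavelength loop versus a half-wave dipole.
\[
  d_{\text{loop}} = \frac{\lambda}{\pi}, \qquad
  l_{\text{dipole}} = \frac{\lambda}{2}, \qquad
  \frac{d_{\text{loop}}}{l_{\text{dipole}}}
    = \frac{\lambda/\pi}{\lambda/2} = \frac{2}{\pi} \approx 0.64 .
\]
```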
100

Nutrient diagnosis of orange crops applying compositional data analysis and machine learning techniques

Yamane, Danilo Ricardo. January 2018 (has links)
Advisor: Arthur Bernardes Cecílio Filho / Abstract: Efficient nutrient management is crucial to attaining high fruit productivity. Results of tissue analysis on orange crops are commonly interpreted using critical nutrient concentration ranges (CNCR) and the Diagnosis and Recommendation Integrated System (DRIS). Nevertheless, both methods ignore the inherent properties of the compositional data class, failing to account adequately for nutrient interactions and for varietal influence on the plant ionome. Effective modeling tools are therefore needed to rectify biases and incorporate genetic effects on nutrient composition. The objective of this study was to develop an accurate diagnostic approach to evaluating the nutritional status across orange (Citrus sinensis) canopy varieties using compositional data analysis and machine learning algorithms. We collected 716 foliar samples from fruit-bearing shoots in plots of non-irrigated commercial orange orchards ("Valencia", "Hamlin", "Pera", "Natal", "Valencia Americana" and "Westin") distributed across São Paulo state (Brazil), analyzed N, S, P, K, Ca, Mg, B, Cu, Zn, Mn and Fe, and measured fruit yields. Sound nutrient balances were computed as isometric log-ratios (ilr). Discriminant analysis of ilr values differentiated the nutrient profiles of canopy varieties, indicating plant-specific ionomes. Diagnostic accuracy of nutrient balances reached 88% about a cutoff yield of 60 Mg ha-1 using ilrs and a k-nearest neighbors classification, allowing the development of reliable nutritional standards at high fruit... (Complete abstract: click electronic access below) / Doctorate
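As a minimal sketch of the ilr-plus-k-nearest-neighbors pipeline, the following computes pivot ilr coordinates for toy leaf compositions and classifies a new sample against a yield cutoff; the compositions, class labels, and balance design are illustrative assumptions, not the study's standards.

```python
# Sketch: isometric log-ratio (ilr) coordinates plus k-NN classification for
# nutrient diagnosis. Compositions, yield classes, and the pivot balance
# design are illustrative stand-ins for the study's actual standards.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ilr_pivot(x: np.ndarray) -> np.ndarray:
    """Pivot ilr coordinates of one closed composition x (parts sum to 1):
    z_i = sqrt(i/(i+1)) * ln(geomean(x_1..x_i) / x_{i+1}), i = 1..D-1."""
    D = x.size
    z = np.empty(D - 1)
    for i in range(1, D):
        gm = np.exp(np.mean(np.log(x[:i])))  # geometric mean of first i parts
        z[i - 1] = np.sqrt(i / (i + 1)) * np.log(gm / x[i])
    return z

# Toy leaf compositions (N, P, K, Ca, Mg as proportions summing to 1) and
# yield classes relative to a hypothetical 60 Mg/ha cutoff.
comps = np.array([
    [0.50, 0.05, 0.25, 0.15, 0.05],
    [0.45, 0.07, 0.28, 0.14, 0.06],
    [0.60, 0.03, 0.20, 0.12, 0.05],
    [0.40, 0.08, 0.30, 0.16, 0.06],
])
labels = ["high", "high", "low", "high"]

Z = np.array([ilr_pivot(c) for c in comps])
clf = KNeighborsClassifier(n_neighbors=3).fit(Z, labels)

new_leaf = np.array([0.48, 0.06, 0.26, 0.14, 0.06])
print(clf.predict(ilr_pivot(new_leaf).reshape(1, -1)))  # e.g. ['high']
```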
