651

Use of artificial neural networks (ANNs) to detect cyberattacks: A systematic literature review of how ANNs can be used to identify cyberattacks

Wongkam, Nathalie, Shameel, Ahmed Abdulkareem Shameel January 2023 (has links)
This research study investigates the application of machine learning (ML), specifically artificial neural networks (ANN), in network intrusion detection to identify and prevent cyber-attacks. The study employs a systematic literature review to compile and analyse relevant research, aiming to offer insights and guidance for future studies. The research questions explore the effectiveness of machine learning algorithms in detecting and mitigating network attacks, as well as the challenges associated with using ANN. The methodology involves conducting a structured search, selection, and review of scientific articles. The findings demonstrate the effective utilization of machine learning algorithms, particularly ANN, in combating cyber-attacks. The study also highlights challenges related to ANN's sensitivity to network traffic disturbances and the increased requirements for substantial data and computational power. The study provides valuable guidance for developing reliable and cost-effective solutions based on ANN for network intrusion detection. By synthesizing and analysing existing research, the study contributes to a deeper understanding of the practical application of machine learning algorithms, specifically ANN, in the realm of cybersecurity. This contributes to knowledge development and provides a foundation for future research in the field. The significance of the study lies in promoting the development of effective solutions for detecting and preventing network attacks.
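As a hedged illustration of the kind of ANN-based intrusion detection this review surveys, the sketch below trains a small feed-forward network on labeled traffic features. The data are synthetic stand-ins (not from any reviewed study); real work would use flow-level features such as packet counts, byte counts and durations.

```python
# Minimal sketch (not from any reviewed paper): a small feed-forward ANN
# classifying network traffic as benign or attack on synthetic tabular features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=42)          # imbalanced, like real traffic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                           random_state=42)

# ANNs are sensitive to feature scale, so standardize before training.
scaler = StandardScaler().fit(X_tr)
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=42)
ann.fit(scaler.transform(X_tr), y_tr)

print(classification_report(y_te, ann.predict(scaler.transform(X_te)),
                            target_names=["benign", "attack"]))
```

The class imbalance in the synthetic data mirrors one of the practical challenges the review raises: attack traffic is rare, so accuracy alone is a poor metric and per-class precision and recall matter.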
652

EXPLORING HEALTH WEBSITE USERS BY WEB MINING

Kong, Wei 07 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / With the continuous growth of health information on the Internet, providing user-oriented health services online has become a great challenge to health providers. Understanding the information needs of the users is the first step to providing tailored health services. The purpose of this study is to examine the navigation behavior of different user groups by extracting their search terms and to make some suggestions to restructure a website for more customized Web service. This study analyzed five months of daily access weblog files from one local health provider's website, discovered the most popular general topics and health-related topics, and compared the information search strategies of the patient/consumer and doctor groups. Our findings show that users are not searching for health information as much as was thought. The top two health topics that patients are concerned about are children's health and occupational health. Another topic that both user groups are interested in is medical records. Also, patients and doctors have different search strategies when looking for information on this website. Patients return to the previous page more often, while doctors usually go directly to the final page and then leave without coming back. As a result, some suggestions to redesign and improve the website are discussed; a more intuitive portal and more customized links for both user groups are suggested.
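A minimal sketch of the weblog processing step described above, extracting search terms from access-log entries. The log format and the "q=" query parameter are assumptions for illustration, not the health provider's actual configuration.

```python
# Sketch only: pull search terms out of Apache-style access-log lines.
# Assumes the site's search page passes the query in a "q=" parameter.
import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

sample_lines = [  # hypothetical entries standing in for the daily weblog files
    '10.0.0.1 - - [01/Mar/2007:10:00:01] "GET /search?q=flu+shot HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Mar/2007:10:02:13] "GET /search?q=children+health HTTP/1.1" 200 734',
    '10.0.0.3 - - [01/Mar/2007:10:05:42] "GET /departments/cardiology HTTP/1.1" 200 1290',
]

terms = Counter()
for line in sample_lines:
    m = re.search(r'"GET (\S+) HTTP', line)        # request path inside the entry
    if not m:
        continue
    query = parse_qs(urlparse(m.group(1)).query)
    for phrase in query.get("q", []):               # assumed search parameter
        terms.update(phrase.lower().split())

print(terms.most_common(10))
```

Aggregating the extracted terms per user group (for example by IP range or login role) is what allows the patient/consumer and doctor search strategies to be compared.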
653

Statnamic Lateral Load Testing and Analysis of a Drilled Shaft in Liquefied Sand

Bowles, Seth I. 02 December 2005 (has links) (PDF)
Three progressively larger statnamic lateral load tests were performed on a 2.59 m diameter drilled shaft foundation after the surrounding soil was liquefied using down-hole explosive charges. An attempt to develop p-y curves from strain data along the pile was made. Due to low quality and lack of strain data, p-y curves along the test shaft could not be reliably determined. Therefore, the statnamic load tests were analyzed using a ten degree-of-freedom model of the pile-soil system to determine the equivalent static load-deflection curve for each test. The equivalent static load-deflection curves had shapes very similar to that obtained from static load tests performed previously at the site. The computed damping ratio was 30%, which is within the range of values derived from the log decrement method. The computer program LPILE was then used to compute the load-deflection curves in comparison with the response from the field load tests. Analyses were performed using a variety of p-y curve shapes proposed for liquefied sand. The best agreement was obtained using the concave upward curve shapes proposed by Rollins et al. (2005) with a p-multiplier of approximately 8 to account for the increased pile diameter. P-y curves based on the undrained strength approach and the p-multiplier approach with values of 0.1 to 0.3 did not match the measured load-deflection curve over the full range of deflections. These approaches typically overestimated resistance at small deflections and underestimated the resistance at large deflections indicating that the p-y curve shapes were inappropriate. When the liquefied sand was assumed to have no resistance, the computed deflection significantly overestimated the deflections from the field tests.
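For context on the damping figure quoted above, the standard logarithmic-decrement relations (a textbook formulation, not reproduced from the thesis) are:

```latex
\delta = \frac{1}{n}\ln\frac{x_i}{x_{i+n}}, \qquad
\zeta = \frac{\delta}{\sqrt{4\pi^{2}+\delta^{2}}}
```

where x_i and x_{i+n} are peak displacements n cycles apart and ζ is the damping ratio; a ζ of 0.30 corresponds to a rapid decay of successive displacement peaks.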
654

Seismic and Well Log Attribute Analysis of the Jurassic Entrada/Curtis Interval Within the North Hill Creek 3D Seismic Survey, Uinta Basin, Utah, A Case History

O'Neal, Ryan J. 18 July 2007 (has links) (PDF)
3D seismic attribute analysis of the Jurassic Entrada/Curtis interval within the North Hill Creek (NHC) survey has been useful in delineating reservoir quality eolian-influenced dune complexes. Amplitude, average reflection strength and spectral decomposition appear to be most useful in locating reservoir quality dune complexes, outlining their geometry and possibly displaying lateral changes in thickness. Cross sectional views displaying toplap features likely indicate an unconformity between Entrada clinoforms below and Curtis planar beds above. This relationship may aid the explorationist in discovering this important seismic interval. Seismic and well log attribute values were cross plotted and have revealed associations between these data. Cross plots are accompanied by regression lines and R2 values which support our interpretations. Although reservoir quality dune complexes may be delineated, the Entrada/Curtis play appears to be mainly structural. The best producing wells in the survey are associated with structural or stratigraphic relief and the thickest Entrada/Curtis intervals. Structural and stratigraphic traps are not always associated with laterally extensive dune complexes. Time structure maps as well as isochron maps have proven useful in delineating the thickest and/or gas prone portions of the Entrada/Curtis interval as well as areas with structural and stratigraphic relief. We have observed that the zones of best production are associated with low gamma ray (40-60 API) values. These low values are associated with zones of high amplitude. Thus, max peak amplitude as a seismic attribute may delineate areas of higher sand content (i.e. dune complexes) whereas zones of low amplitude may represent areas of lower sand content (i.e. muddier interdune or tidal flat facies). Lack of significant average porosity does not seem to be related to a lack of production. In fact, the best producing wells have been drilled in Entrada/Curtis intervals where average porosity is near 4 %. There are however zones within the upper portion of the Entrada/Curtis that are 40 ft. (12.2 m) thick and have porosities between 14% and 20%. By combining derived attribute maps with observed cross plot relationships, it appears that the best producing intervals within the Entrada/Curtis are those associated with high amplitudes, API values from 40-60 and structural relief.
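A hedged sketch of the cross-plot workflow described above: regressing a well-log value against a seismic attribute and reporting the fit and R². The arrays are synthetic stand-ins, not data from the NHC survey.

```python
# Illustrative only: cross-plot a seismic attribute against a well-log value
# and report the least-squares fit and R^2, mirroring the cross plots above.
import numpy as np

rng = np.random.default_rng(0)
gamma_ray = rng.uniform(40, 120, size=30)                        # synthetic API values
amplitude = 1.8e4 - 120.0 * gamma_ray + rng.normal(0, 900, 30)   # synthetic amplitudes

slope, intercept = np.polyfit(gamma_ray, amplitude, deg=1)
predicted = slope * gamma_ray + intercept
ss_res = np.sum((amplitude - predicted) ** 2)
ss_tot = np.sum((amplitude - amplitude.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"amplitude = {slope:.1f} * GR + {intercept:.1f},  R^2 = {r_squared:.2f}")
```

A negative slope with a reasonable R², as in this synthetic case, is the kind of relationship the abstract describes between low gamma-ray (sand-prone) zones and high amplitude.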
655

An Enhanced Data Model and Tools for Analysis and Visualization of Levee Simulations

Griffiths, Thomas Richard 15 March 2010 (has links) (PDF)
The devastating levee failures associated with Hurricanes Katrina and Rita, and the more recent Midwest flooding, placed a spotlight on the importance of levees and our dependence on them to protect life and property. In response to levee failures associated with the hurricanes, Congress passed the Water Resources Development Act of 2007, which established a National Committee on Levee Safety. The committee was charged with developing recommendations for a National Levee Safety Program. The Secretary of the Army was charged with the establishment and maintenance of a National Levee Database (NLD). The National Levee Database is a critical tool in assessing and improving the safety of the nation's levees. However, the NLD data model, established in 2007, lacked a structure to store seepage and slope stability analyses – vital information for assessing the safety of a levee. In response, the Levee Analyst was developed in 2008 by Dr. Norm Jones and Jeffrey Handy. The Levee Analysis Data Model was designed to provide a central location, compatible with the National Levee Database, for storing large amounts of levee seepage and slope stability analytical data. The original Levee Analyst geoprocessing tools were created to assist users in populating, managing, and analyzing Levee Analyst geodatabase data. In an effort to enhance the Levee Analyst and provide greater accessibility to levee data, this research expanded the Levee Analyst to include modifications to the data model and additional geoprocessing tools that archive GeoStudio SEEP/W and SLOPE/W simulations as well as export the entire Levee Analyst database to Google Earth. Case studies were performed to demonstrate the new geoprocessing tools' capabilities and the compatibility between the National Levee Database and the Levee Analyst database. A number of levee breaches were simulated to prototype the enhancement of the Levee Analyst to include additional feature classes, tables, and geoprocessing tools. This enhancement would allow Levee Analyst to manage, edit, and export two-dimensional levee breach scenarios.
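A minimal sketch of the Google Earth export idea: writing levee analysis points to a KML file. The field names and coordinates are hypothetical, and the actual Levee Analyst tools are ArcGIS geoprocessing scripts rather than this bare standard-library version.

```python
# Sketch only: export levee analysis points to KML for viewing in Google Earth.
# Station names, coordinates and factors of safety are made-up example values.
import xml.etree.ElementTree as ET

points = [  # (name, longitude, latitude, factor of safety)
    ("Station 10+00", -90.051, 29.972, 1.42),
    ("Station 12+50", -90.048, 29.974, 1.18),
]

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
doc = ET.SubElement(kml, "Document")
for name, lon, lat, fos in points:
    pm = ET.SubElement(doc, "Placemark")
    ET.SubElement(pm, "name").text = name
    ET.SubElement(pm, "description").text = f"Slope stability FS = {fos}"
    ET.SubElement(ET.SubElement(pm, "Point"), "coordinates").text = f"{lon},{lat},0"

ET.ElementTree(kml).write("levee_analysis.kml", xml_declaration=True, encoding="UTF-8")
```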
656

Summer Watering Patterns of Mule Deer and Differential Use of Water by Bighorn Sheep, Elk, Mule Deer, and Pronghorn in Utah

Shields, Andrew V. 06 December 2012 (has links) (PDF)
Changes in the abundance and distribution of free (drinking) water can influence wildlife in arid regions. In the western USA, free water is considered by wildlife managers to be important for bighorn sheep (Ovis canadensis), elk (Cervus elaphus), mule deer (Odocoileus hemionus), and pronghorn (Antilocapra americana). Nonetheless, we lack information on the influence of habitat and landscape features surrounding water sources, including wildlife water developments, and how these features may influence use of water by the sexes differently. Consequently, a better understanding of differential use of water by the sexes could inform the conservation and management of those ungulates and of water resources in their habitats. We deployed remote cameras at water sources to document water source use. For mule deer specifically, we monitored all known water sources on one mountain range in western Utah during summer from 2007 to 2011 to document the frequency and timing of water use and the number of water sources used by males and females, and to estimate population size from individually identified mule deer. Male and female mule deer used different water sources but visited water sources at similar frequencies. On average, mule deer used 1.4 water sources and changed water sources once per summer. Additionally, most wildlife water developments were used by both sexes. We also randomly sampled 231 water sources with remote cameras in a clustered-sampling design throughout Utah in 2006 and from 2009 to 2011. In association with camera sampling at water sources, we measured several site- and landscape-scale features around each water source to identify patterns in ungulate use informative for managers. We used model selection to identify features surrounding water sources that were related to visitation rates for male and female bighorn sheep, elk, mule deer, and pronghorn. Top models for each species were different, but supported models for males and females of the same species generally included similar covariates, although with varying strengths. Our results highlight the differing use of water sources by the sexes. This information will help guide managers when siting and reprovisioning wildlife water developments meant to benefit those species, and when prioritizing natural water sources for preservation or enhancement.
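As a hedged sketch of the model-selection step described above, the snippet below fits two candidate Poisson GLMs for visitation counts and compares them by AIC. The covariate names and data are invented, and the thesis's actual candidate models and software are not specified here; information-criterion comparison is simply a common way this kind of selection is done.

```python
# Sketch only: compare candidate covariate sets for visitation rates with AIC.
# Covariate names and data are hypothetical, not the thesis's actual variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "visits": rng.poisson(5, 100),                 # camera detections per sampling period
    "elevation": rng.uniform(1200, 2400, 100),     # site elevation (m)
    "dist_cover": rng.uniform(0, 500, 100),        # distance to escape cover (m)
})

candidates = {
    "elevation only": "visits ~ elevation",
    "elevation + cover": "visits ~ elevation + dist_cover",
}
for label, formula in candidates.items():
    fit = smf.glm(formula, data=df, family=sm.families.Poisson()).fit()
    print(f"{label}: AIC = {fit.aic:.1f}")
```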
657

ML enhanced interpretation of failed test result

Pechetti, Hiranmayi January 2023 (has links)
This master's thesis addresses the problem of classifying test failures in Ericsson AB's BAIT test framework, specifically distinguishing between environment faults and product faults. The project aims to automate the initial defect classification process, reducing manual work and facilitating faster debugging. The significance of this problem lies in the potential time and cost savings it offers to Ericsson and other companies utilizing similar test frameworks. By automating the classification of test failures, developers can quickly identify the root cause of an issue and take appropriate action, leading to improved efficiency and productivity. To solve this problem, the thesis employs machine learning techniques. A dataset of test logs is utilized to evaluate the performance of six classification models: logistic regression, support vector machines, k-nearest neighbors, naive Bayes, decision trees, and XGBoost. Precision and macro F1 scores are used as evaluation metrics to assess the models' performance. The results demonstrate that all models perform well in classifying test failures, achieving high precision values and macro F1 scores. The decision tree and XGBoost models exhibit perfect precision scores for product faults, while the naive Bayes model achieves the highest macro F1 score. These findings highlight the effectiveness of machine learning in accurately distinguishing between environment faults and product faults within the BAIT framework. Developers and organizations can benefit from the automated defect classification system, reducing manual effort and expediting the debugging process. The successful application of machine learning in this context opens up opportunities for further research and development in automated defect classification algorithms.
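A minimal sketch of the evaluation described above, comparing several scikit-learn classifiers on a binary environment-fault vs product-fault task using precision and macro F1. The data are synthetic, not Ericsson's test logs, and XGBoost is omitted to keep the sketch dependency-light.

```python
# Sketch only (synthetic data): compare classifiers with precision and macro F1,
# mirroring the environment-fault vs product-fault evaluation described above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, f1_score

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                           random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: precision={precision_score(y_te, pred):.2f}  "
          f"macro-F1={f1_score(y_te, pred, average='macro'):.2f}")
```

Macro F1 averages the per-class F1 scores, which is why it is a sensible companion to precision when one fault class is rarer than the other.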
658

Assessment of Cross Laminated Timber Markets for Hardwood Lumber

Adhikari, Sailesh 25 September 2020 (has links)
The goal of this study was to assess the potential of using hardwood lumber in cross-laminated timber (CLT) manufacturing. The goal was achieved by addressing four specific objectives. The first objective was to collect CLT manufacturers' perspectives on using hardwood lumber in their current manufacturing setups. The second objective was to determine hardwood sawmills' current ability to produce structural grade lumber (SGHL) from low-value logs as a product mix, through a survey of hardwood lumber producers in the US. The third objective was to conduct a log yield study of SGHL production from yellow poplar (YP) logs to produce 6" and 8" width SGHL to match the PRG 320 requirements. The fourth objective was to determine the production cost of CLTs using SGHL and compare it with that of CLTs manufactured from southern yellow pine (SYP). The results suggest that all three CLT manufacturers visited and interviewed had sufficient technology to produce hardwood CLTs. The production of hardwood CLTs was mainly limited by the quality and quantity of lumber available. The hardwood sawmill survey results indicated that, currently, less than 10% of the sawmills had all the resources required to produce SGHL. The current ability of the sawmills was measured based on the resources necessary to begin SGHL production. Forty percent of the sawmills would require an investment in sawing technology to saw SGHL, 70% would require employing a certified lumber grader, and 80% would require a planer to surface lumber. Another significant finding was the sawmills' willingness to collaborate with other sawmills and lumber manufacturers. More than 50% of sawmills were open to potential collaboration with other stakeholders if necessary, which is crucial to commercializing SGHL for a new market. The log yield study of yellow poplar demonstrated that mixed-grade lumber production, which converts lumber from lower-quality zones into SGHL, yields higher lumber volume for sawmills and at the same time reduces lower-grade lumber volume. On average, SGHL production increased lumber volume by more than 6% compared to producing only National Hardwood Lumber Association (NHLA) grade lumber, when 65% of the lumber was converted to SGHL. The volume of lower lumber grades, 2 Common and below, decreased from an average of 85% to less than 30% when producing SGHL as a product mix with NHLA grade lumber. More than 95% of the SGHL observed in this study fell into the Number 3 and Better lumber grades. At the estimated lumber values, producing 2x6 and 2x8 SGHL together with NHLA grade lumber as a product mix from a log generated higher revenue for all log groups except the 13" diameter logs. A lower percentage of higher-grade lumber was observed for the 13" diameter logs than for the other log groups in this experiment, which resulted in lower revenue. The production cost of CLTs was determined from lumber value for 40' x 10' plain panels made with different lumber-grade combinations of yellow poplar and, for comparison, from southern yellow pine lumber alone. Production cost was determined by assuming that lumber value contributes 40% of the total production cost of CLTs. The 3-ply CLT panels manufactured from YP, using S. Selects lumber in the major direction and No. 1 grade lumber in the minor direction, had a production cost of $662.56 per cubic meter, compared with only $643.10 when SYP lumber was used, at the referenced lumber values. This study concludes that CLT panels from YP cost 3-7% more than SYP CLTs at the referenced lumber values. / Ph.D.
/ This research aims to expand hardwood lumber consumption in the US by evaluating the opportunity to manufacture cross-laminated timber (CLTs). First, CLT manufacturers were visited to assess their current capacity to process hardwood lumber. The results suggest that all three manufacturers had sufficient technology to produce hardwood CLTs, and that production was mainly limited by the quality and quantity of lumber available. Commercially, hardwood can be used in CLT manufacturing if it qualifies for structural applications. Hardwood lumber must meet the minimum requirements for structural applications to manufacture structural grade CLTs, so we surveyed hardwood sawmills to determine whether they have the resources required to manufacture structural grade hardwood lumber (SGHL). Only ten percent of the sawmills had the technology required to produce SGHL without additional investment. Producing SGHL also needs to generate more revenue for hardwood sawmills, so we conducted a log yield study to determine how the revenue structure of sawmill operations would change under mixed-grade lumber production. At the estimated lumber values, producing 2x6 and 2x8 SGHL together with 1-inch National Hardwood Lumber Association (NHLA) grade lumber as a product mix from logs generated higher revenue for all log groups except the 13" diameter logs. Finally, the production cost of SGHL from the log yield study was evaluated and used to estimate the cost of producing CLTs, assuming lumber contributes 40% of total production cost and sawmills take a 15% profit margin, and the result was compared with southern yellow pine CLTs. The results indicate that yellow poplar CLTs cost 3-7% more than southern yellow pine CLTs at the referenced lumber values. This study concludes that hardwood lumber can be used in CLT manufacturing, so there is an opportunity for hardwood sawmills to expand their market. The first step toward commercial production of hardwood CLTs is to produce SGHL at commercial scale, given that sawmills can benefit from these new products in the current lumber market and can meet the minimum requirements for CLT raw material.
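As a worked version of the cost arithmetic above, using the 40% lumber-share assumption stated in the abstracts (the remaining cost categories are not broken out here):

```latex
\text{total CLT cost} = \frac{\text{lumber cost}}{0.40}, \qquad
\frac{662.56~\$/\mathrm{m}^3~(\text{YP})}{643.10~\$/\mathrm{m}^3~(\text{SYP})} \approx 1.03
```

That is, this particular yellow poplar layup comes out about 3% more expensive than its southern yellow pine counterpart, at the low end of the quoted 3-7% range.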
659

Discover patterns within train log data using unsupervised learning and network analysis

Guo, Zehua January 2022 (has links)
With the development of information technology in recent years, log analysis has gradually become a hot research topic. However, manual log analysis requires specialized knowledge and is a time-consuming task. Therefore, more and more researchers are searching for ways to automate log analysis. In this project, we explore methods for train log analysis using natural language processing and unsupervised machine learning. Multiple language models are used to extract word embeddings: one is the traditional TF-IDF representation, and the other three are the popular transformer-based model BERT and its variants DistilBERT and RoBERTa. In addition, we compare two unsupervised clustering algorithms, DBSCAN and Mini-Batch k-means. The silhouette coefficient and the Davies-Bouldin score are used to evaluate clustering performance, and the metadata of the train logs is used to verify the effectiveness of the unsupervised methods. Apart from unsupervised learning, network analysis is applied to the train log data to explore the connections between the patterns, which are identified by train control system experts. Network visualization and centrality analysis are used to analyze the relationships among the patterns and, in graph-theoretic terms, their importance. Overall, this project provides a feasible direction for log analysis and processing in the future.
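A hedged sketch of the embedding-plus-clustering pipeline described above, using TF-IDF features, Mini-Batch k-means and the two internal metrics named in the abstract. The toy messages stand in for real train logs, and a BERT-style encoder would replace the TF-IDF/SVD step in the transformer variants.

```python
# Sketch only (toy messages): TF-IDF embeddings, Mini-Batch k-means clustering,
# and the silhouette and Davies-Bouldin evaluation metrics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

logs = [
    "brake controller timeout on unit 12",
    "brake controller timeout on unit 07",
    "door sensor fault detected left side",
    "door sensor fault detected right side",
    "traction power restored after dip",
    "traction power restored after interruption",
]

tfidf = TfidfVectorizer().fit_transform(logs)                     # sparse term weights
embeddings = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)

labels = MiniBatchKMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)
print("silhouette:", silhouette_score(embeddings, labels))
print("Davies-Bouldin:", davies_bouldin_score(embeddings, labels))
```

Higher silhouette and lower Davies-Bouldin values both indicate tighter, better-separated clusters, which is why the two metrics are reported together.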
660

Supervised Failure Diagnosis of Clustered Logs from Microservice Tests / Övervakad feldiagnos av klustrade loggar från tester på mikrotjänster

Strömdahl, Amanda January 2023 (has links)
Pinpointing the source of a software failure based on log files can be a time-consuming process. Automated log analysis tools are meant to streamline such processes and can be used for tasks like failure diagnosis. This thesis evaluates three supervised models for failure diagnosis of clustered log data. The goal of the thesis is to compare the performance of the models on industry data, as a way to investigate whether the chosen ML techniques are suitable in the context of automated log analysis. A Random Forest, an SVM and an MLP are generated from a dataset of 194 failed executions of tests on microservices, each of which resulted in a large collection of logs. The models are tuned with random search and compared in terms of precision, recall, F1-score, hold-out accuracy and 5-fold cross-validation accuracy. The hold-out accuracy is calculated as a mean over 50 hold-out data splits, and the cross-validation accuracy is computed separately from a single set of folds. The results show that the Random Forest scores highest in terms of mean hold-out accuracy (90%), compared to the SVM (86%) and the Neural Network (85%). The mean cross-validation accuracy is highest for the SVM (95%), closely followed by the Random Forest (94%), and lastly the Neural Network (85%). The precision, recall and F1-scores are stable and consistent with the hold-out results, although the precision results are slightly higher than the other two measures. According to this evaluation, the Random Forest has the overall highest performance on the dataset when considering the hold-out and cross-validation accuracies, as well as the fact that it has the lowest complexity and thus the shortest training time of the solutions considered. All in all, the results of the thesis demonstrate that supervised learning is a promising approach to automating log analysis.
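A minimal sketch of the evaluation protocol described above: mean accuracy over repeated hold-out splits plus a single 5-fold cross-validation run for a Random Forest, an SVM and an MLP. The data are synthetic (sized like the 194 executions mentioned) and the random-search hyperparameter tuning is omitted.

```python
# Sketch only (synthetic data): repeated hold-out accuracy and 5-fold CV
# for the three model families compared above. Tuning is left out.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_classification(n_samples=194, n_features=30, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
}
for name, model in models.items():
    holdout = []
    for seed in range(50):                              # 50 hold-out splits, as above
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.25, stratify=y, random_state=seed)
        holdout.append(model.fit(X_tr, y_tr).score(X_te, y_te))
    cv = cross_val_score(model, X, y, cv=5)             # single 5-fold CV estimate
    print(f"{name}: hold-out={np.mean(holdout):.2f}, 5-fold CV={cv.mean():.2f}")
```

Averaging over many hold-out splits reduces the variance that a single train/test split would have on a dataset of only 194 samples, which is why the thesis reports both estimates.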
