1 |
Desarrollo de un software de monitoreo y predicción en tiempo real de incidencias ambientales para data center de telecomunicaciones
Gallegos Sánchez, Carlos Alberto; Huachin Herrera, Carlos Edson 26 February 2018 (has links)
During the last two decades, equipment embodying new technological advances has improved telecommunication services. This equipment operates 24 hours a day inside a data center. The failure of a device can mean the failure of a service, with economic losses. One of the main causes of incidents is environmental variables (temperature, humidity or dew point). This project presents a real-time failure prediction application, capable of predicting failures and providing reaction time before an incident occurs.
The implemented software has two modes: online and simulation. The online mode is an application that extracts information from environmental sensors; it was deployed at Bitel Telecom, which expressed its approval with a letter of recommendation. The simulation mode runs on a web server, publishing results to the internet so they can be monitored from a mobile device or PC. It has two sub-modes, random and manual, in which the algorithm's parameters can be adjusted to the company's requirements. After several tests, it was concluded that failure prediction will help avoid incidents and generate economic savings. / Tesis
|
2 |
A network traffic analysis tool for the prediction of perceived VoIP call quality
Maritz, Gert Stephanus Herman 12 1900 (has links)
Thesis (MScEng)--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: The perceived quality of Voice over Internet Protocol (VoIP) communication
relies on the network which is used to transport voice packets between the end
points. Variable network characteristics such as bandwidth, delay and loss are critical
for real-time voice traffic and are not always guaranteed by networks. It is
important for network service providers to determine the Quality of Service (QoS)
they provide to their customers. The solution proposed here is to predict the perceived
quality of a VoIP call in real time, using network statistics.
The main objective of this thesis is to develop a network analysis tool, which
gathers meaningful statistics from network traffic. These statistics will then be
used for predicting the perceived quality of a VoIP call. This study includes the
investigation and deployment of two main components. Firstly, to determine call
quality, it is necessary to extract the voice streams from captured network traffic.
The extracted sound files can then be analysed by various VoIP quality models to
determine the perceived quality of a VoIP call.
The second component is the analysis of network characteristics. Loss, delay
and jitter are all known to influence perceived call quality. These characteristics
are, therefore, determined from the captured network traffic and compared with
the call quality. Using the statistics obtained by the repeated comparison of the
call quality and network characteristics, a network specific algorithm is generated.
This Non-Intrusive Quality Prediction Algorithm (NIQPA) uses basic characteristics
such as time of day, delay, loss and jitter to predict the quality of a real-time VoIP call quickly in a non-intrusive way. The realised algorithm for each network
will differ, because every network is different.
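A widely used non-intrusive baseline for exactly this mapping is the simplified ITU-T G.107 E-model, which converts delay and loss into an estimated MOS. The sketch below uses the commonly quoted simplified-model coefficients (R0 = 93.2, and Bpl = 25.1 for G.711 with packet-loss concealment); it is a point of comparison, not the thesis's network-specific NIQPA:

```python
def mos_from_network_stats(delay_ms, loss_pct, codec_ie=0.0, bpl=25.1):
    """Simplified ITU-T G.107 E-model: map one-way delay and random
    packet loss to an estimated MOS (1.0-4.5)."""
    # Delay impairment Id (simplified form; extra penalty above 177.3 ms)
    id_ = 0.024 * delay_ms
    if delay_ms > 177.3:
        id_ += 0.11 * (delay_ms - 177.3)
    # Effective equipment impairment Ie_eff for a codec with
    # packet-loss robustness factor Bpl (25.1 ~ G.711 with PLC)
    ie_eff = codec_ie + (95.0 - codec_ie) * loss_pct / (loss_pct + bpl)
    r = 93.2 - id_ - ie_eff
    # R-factor to MOS conversion (ITU-T G.107 Annex B)
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# A clean network should score near the model's ceiling,
# and quality should fall as delay and loss grow.
good = mos_from_network_stats(delay_ms=20, loss_pct=0.0)
bad = mos_from_network_stats(delay_ms=400, loss_pct=5.0)
```

Because the model is closed-form, it can be evaluated per call in real time, which is the same design constraint NIQPA targets; the thesis's contribution is learning the mapping per network rather than using fixed coefficients.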
Prediction results can then be used to adapt either the network (more bandwidth,
packet prioritising) or the voice stream (error correction, change VoIP codecs)
to assure QoS.
|
3 |
Effects of Investor Sentiment Using Social Media on Corporate Financial Distress
Hoteit, Tarek 01 January 2015
The mainstream quantitative models in the finance literature were ineffective in detecting possible bankruptcies during the 2007 to 2009 financial crisis. Coinciding with the same period, various researchers suggested that sentiments in social media can predict future events. The purpose of the study was to examine the relationship between investor sentiment within social media and the financial distress of firms. Grounded in the social amplification of risk framework, which views the media as an amplifying channel for risk events, the central hypothesis of the study was that investor sentiments in social media could predict the level of financial distress of firms. Third quarter 2014 financial data and 66,038 public postings on the social media website Twitter were collected for 5,787 publicly held firms in the United States for this study. The Spearman rank correlation was applied, using the Altman Z-Score for measuring financial distress levels in corporate firms and the Stanford natural language processing algorithm for detecting sentiment levels in social media. The findings from the study suggested a non-significant relationship between investor sentiments in social media and corporate financial distress and, hence, did not support the research hypothesis. However, the model developed in this study for analyzing investor sentiments and corporate distress in firms is both original and extensible for future research and is also accessible as a low-cost solution for financial market sentiment analysis.
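The two measures named in the abstract are standard and can be sketched directly; the Altman coefficients below are the published 1968 values for public manufacturing firms, and the Spearman implementation omits tie correction for brevity:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Altman (1968) Z-score for public manufacturing firms.
    Z < 1.81 is the 'distress' zone, Z > 2.99 the 'safe' zone."""
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def spearman_rho(xs, ys):
    """Spearman rank correlation; assumes no ties, for brevity."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

In the study's setting, `xs` would hold per-firm Z-scores and `ys` the per-firm sentiment scores; a rho near zero corresponds to the non-significant relationship the study reports.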
|
4 |
Improving computational predictions of Cis-regulatory binding sites in genomic data
Rezwan, Faisal Ibne January 2011 (has links)
Cis-regulatory elements are the short regions of DNA to which specific regulatory proteins bind, and these interactions subsequently influence the level of transcription for associated genes, by inhibiting or enhancing the transcription process. It is known that much of the genetic change underlying morphological evolution takes place in these regions, rather than in the coding regions of genes. Identifying these sites in a genome is a non-trivial problem. Experimental (wet-lab) methods for finding binding sites exist, but all have some limitations regarding their applicability, accuracy, availability or cost. On the other hand, computational methods for predicting the position of binding sites are less expensive and faster. Unfortunately, however, these algorithms perform rather poorly, some missing most binding sites and others over-predicting their presence. The aim of this thesis is to develop and improve computational approaches for the prediction of transcription factor binding sites (TFBSs) by integrating the results of computational algorithms and other sources of complementary biological evidence. Previous related work involved the use of machine learning algorithms for integrating predictions of TFBSs, with particular emphasis on the use of the Support Vector Machine (SVM). This thesis has built upon, extended and considerably improved this earlier work. Data from two organisms was used here. Firstly the relatively simple genome of yeast was used. In yeast, the binding sites are fairly well characterised and they are normally located near the genes that they regulate. The techniques used on the yeast genome were also tested on the more complex genome of the mouse. The regulatory mechanisms of the eukaryotic species mouse are known to be considerably more complex, and it was therefore interesting to investigate the techniques described here on such an organism.
The initial results were, however, not particularly encouraging: although a small improvement on the base algorithms could be obtained, the predictions were still of low quality. This was the case for both the yeast and mouse genomes. However, when the negatively labeled vectors in the training set were changed, a substantial improvement in performance was observed. The first change was to choose regions of the mouse genome distal from any gene (more than 4000 base pairs away) as regions not containing binding sites. This produced a major improvement in performance. The second change was simply to use randomised training vectors, which contained no meaningful biological information, as the negative class. This gave some improvement on the yeast genome, but had a very substantial benefit for the mouse data, considerably improving on the aforementioned distal negative training data. In fact the resulting classifier was finding over 80% of the binding sites in the test set, and moreover 80% of the predictions were correct. The final experiment used an updated version of the yeast dataset, using more recent, state-of-the-art algorithms and more recent TFBS annotation data. Here it was found that using randomised or distal negative examples once again gave very good results, comparable to those obtained on the mouse genome. Another source of negative data was tried for this yeast data, namely vectors taken from intronic regions. Interestingly, this gave the best results.
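The key experimental change, replacing biologically derived negative examples with randomised vectors, can be sketched as follows. The feature dimension and scores are toy values (per-algorithm TFBS scores assumed normalised to [0, 1)), and the resulting (X, y) set would be fed to an SVM as in the thesis:

```python
import random

def make_training_set(positive_vectors, n_negative, seed=0):
    """Pair real positives (per-algorithm TFBS scores over known
    binding sites) with randomised negative vectors carrying no
    biological signal, the negative-set choice that gave the
    largest improvement on the mouse data."""
    rng = random.Random(seed)
    dim = len(positive_vectors[0])
    negatives = [[rng.random() for _ in range(dim)]
                 for _ in range(n_negative)]
    X = positive_vectors + negatives
    y = [1] * len(positive_vectors) + [-1] * n_negative
    return X, y

# Toy positives: scores from three hypothetical base algorithms.
positives = [[0.9, 0.8, 0.95], [0.85, 0.9, 0.9]]
X, y = make_training_set(positives, n_negative=4)
```

The appeal of this negative class is that it is cheap and unbiased: it makes no claim about where binding sites are absent, only about what biologically meaningless input looks like.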
|
5 |
Computational methods for protein-protein interaction identification
Ziyun Ding (7817588) 05 November 2019 (has links)
Understanding protein-protein interactions (PPIs) in a cell is essential for learning protein functions, pathways, and mechanisms of disease. This dissertation introduces computational methods to predict PPIs. In the first chapter, the history of identifying protein interactions and some experimental methods are introduced. Because interacting proteins share similar functions, protein function similarity can be used as a feature to predict PPIs. The NaviGO server was developed for biologists and bioinformaticians to visualize gene ontology relationships and quantify their similarity scores. Furthermore, the computational features used to predict PPIs are summarized. This will help researchers from the computational field understand the rationale for extracting biological features, and also help researchers with a biology background understand the computational work. After reviewing these computational features, a computational prediction method for identifying large-scale PPIs was developed and applied to Arabidopsis, maize, and soybean on a whole-genome scale. Novel predicted PPIs were provided and grouped by prediction confidence level, which can be used as testable hypotheses to guide biologists' experiments. Since affinity chromatography combined with mass spectrometry introduces many false PPIs, the computational method was combined with mass spectrometry data to aid the identification of high-confidence PPIs at large scale. Lastly, remaining challenges of computational PPI prediction methods and future work are discussed.
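Two of the ideas above, function-similarity features and confidence-tiered predictions, can be sketched minimally; the Jaccard measure and the tier cut-offs are illustrative stand-ins, not NaviGO's actual scoring or the dissertation's thresholds:

```python
def go_jaccard(terms_a, terms_b):
    """Function-similarity feature for a protein pair: Jaccard index
    over their Gene Ontology annotations. NaviGO offers richer
    semantic-similarity scores; this simple overlap measure just
    illustrates the idea of a functional feature."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def confidence_level(score):
    """Group a predicted PPI into a tier usable as a testable
    hypothesis; cut-offs here are illustrative only."""
    if score >= 0.9:
        return "high"
    if score >= 0.7:
        return "medium"
    return "low"
```

In practice the similarity feature would be one column in a larger feature vector (alongside expression correlation, domain co-occurrence, and so on) consumed by the trained classifier.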
|
6 |
Perceptual Criterion Based Rate Control And Fast Mode Search For Spatial Intra Prediction In Video Coding
Nagori, Soyeb 05 1900 (has links)
This thesis dwells on two important problems in the field of video coding; namely rate control and spatial domain intra prediction. While the former is applicable generally to most video compression standards, the latter applies to recent advanced video compression standards such as H.264, VC1 and AVS.
Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel rate and buffer size constraints. Rate control has an important bearing on the picture quality of encoded video. Typically, a quality metric such as Peak Signal-to-Noise ratio (PSNR) or weighted signal-to-noise ratio (WSNR) is chosen out of convenience. However neither metric is a true measure of perceived video quality.
A few researchers have attempted to derive rate control algorithms combining standard PSNR with ad-hoc perceptual metrics of video quality. The concept of using perceptual criteria for video coding was introduced in [7] within the context of perceptual adaptive quantization. In this work, quantization noise levels were adjusted such that more noise was allowed where it was less visible (busy and textured areas) while sensitive areas (typically flat and low-detail regions) were finely quantized. Macro-blocks were classified into low-detail, texture and edge areas by a classifier that studied the variance of sub-blocks within a macro-block (MB). The rate models were trained from training sets of pre-classified video. One drawback of the above scheme, as with standard PSNR, was that neither accounts for the perceptual effect of motion. The work in [8] achieved this by assigning higher weights to the regions of the image experiencing the most motion. Also, the center of the image and objects in the foreground are perceived as more important than the sides.
However, attempts to use perceptual metrics for video quality have been limited by the accuracy of the chosen metrics. In recent years, new and improved metrics of subjective quality have been invented and their statistical accuracy has been studied in a formal manner. Particularly interesting is the work undertaken by the ITU and the Video Quality Experts Group (VQEG). VQEG conducted two phases of testing; in the first phase, several algorithms were tested but none were found to be very accurate, in fact none proved any more accurate than a PSNR-based metric. In the second phase of testing a few years later, a few new algorithms were experimented with, and it was concluded that four of these achieved results good enough to warrant their standardization as part of ITU-T Recommendation J.144. These experiments are referred to as the FR-TV (Full Reference Television) phase-II evaluations. ITU-T J.144 does not explicitly identify a single algorithm but provides guidelines on the selection of appropriate techniques to objectively measure subjective video quality. It describes four reference algorithms as well as PSNR. Amongst the four, the NTIA General Video Quality Model (VQM) [11] is the best performing and has been adopted by the American National Standards Institute (ANSI) as North American standard T1.801.03. NTIA's approach has been to focus on defining parameters that model how humans perceive video quality. These parameters have been combined using linear models to produce estimates of video quality that closely approximate subjective test results. The NTIA General Video Quality Model (VQM) has been proven to have strong correlation with subjective quality.
In the first part of the thesis, we apply metrics motivated by the NTIA-VQM model within a rate control algorithm to maximize perceptual video quality. We derive perceptual weights using key NTIA parameters to influence the QP value used to decide the degree of quantization. Our experiments demonstrate that a perceptually motivated TMN-8 rate control in an H.263 encoder results in perceivable quality improvements over a baseline TMN-8 rate control algorithm that uses a PSNR metric. Our experimental results on a set of 11 sequences show an average reduction of 6% in bit-rate using the proposed algorithm for the same perceptual quality as standard TMN-8.
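As a toy illustration of how a perceptual weight can steer the quantiser, the function below lowers QP for perceptually important macroblocks and raises it elsewhere. Only the clipping range (H.263 allows QP 1-31) is taken from the standard; the mapping itself is an assumption for illustration, not the thesis's PRC-II formula:

```python
def perceptually_weighted_qp(base_qp, weight, qp_min=1, qp_max=31):
    """Scale the H.263 quantisation parameter by a perceptual weight:
    macroblocks judged perceptually important (weight > 1) get a
    lower QP (finer quantisation), low-importance ones a higher QP.
    Illustrative mapping; PRC-II derives its weights from
    NTIA-VQM-motivated parameters within the TMN-8 framework."""
    qp = int(round(base_qp / weight))
    return max(qp_min, min(qp_max, qp))
```

A rate controller using such a rule still has to rebalance the bit budget across macroblocks, since lowering QP in weighted regions spends bits that must be recovered elsewhere in the frame.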
The second part of our thesis work deals with spatial domain intra prediction used in advance video coding standard such as H.264. The H.264 Advanced Video coding standard [36] has been shown to achieve video quality similar to older standards such as MPEG2 and H.263 at nearly half the bit-rate. Generally, this compression improvement is attributed to several new tools that were introduced in H.264 – including spatial intra prediction, adaptive block size for motion compensation, in-loop de-blocking filter, context adaptive binary arithmetic coding (CABAC), and multiple reference frames.
While the new tools allow better coding efficiency, they also introduce additional computational complexity at both the encoder and decoder ends. We are especially concerned here with the impact of intra prediction on the computational complexity of the encoder. H.264 reference implementations such as JM [29] search through all allowed intra-prediction "modes" in order to find the optimal mode. While this approach yields the optimal prediction mode, it comes at an extremely heavy computational cost. Hence there is a lot of interest in well-motivated algorithms that reduce the computational complexity of the search for the best prediction mode, while retaining the quality advantages of full-search Intra4x4.
We propose a novel algorithm to reduce the complexity of full search by exploiting our knowledge of the source statistics. Specifically, we analyze the transform-domain energy distribution of the original 4x4 block in different directions and use the results of our analysis to eliminate unlikely modes and reduce the search space for the optimal Intra mode. Experimental results show that the proposed algorithm achieves quality metrics (PSNR) similar to full search at nearly a third of the complexity.
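The mode-pruning idea can be illustrated with a small sketch. The code below estimates a 4x4 block's dominant orientation from pixel-difference energies and keeps only the most plausible prediction directions; the thesis analyses energies in the transform domain, so this pixel-domain version is an illustrative stand-in, not the proposed algorithm:

```python
def likely_intra_modes(block, keep=2):
    """Prune the H.264 intra-4x4 mode search using source statistics:
    low pixel variation down the columns favours vertical prediction
    (mode 0), low variation along the rows favours horizontal
    (mode 1), and DC (mode 2) is always kept as a fallback."""
    n = 4
    col_var = sum((block[r][c] - block[r - 1][c]) ** 2
                  for r in range(1, n) for c in range(n))
    row_var = sum((block[r][c] - block[r][c - 1]) ** 2
                  for r in range(n) for c in range(1, n))
    ranked = sorted([(col_var, 0), (row_var, 1)])  # lower energy first
    modes = [m for _, m in ranked][:keep]
    if 2 not in modes:
        modes.append(2)
    return modes

vertical_stripes = [[10, 50, 10, 50]] * 4                     # rows identical
horizontal_stripes = [[10] * 4, [50] * 4, [10] * 4, [50] * 4]  # columns identical
```

The rate-distortion search then runs only over the returned candidates, which is where the roughly threefold complexity reduction reported above would come from.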
This thesis has four chapters and is organized as follows. In the first chapter we introduce the basics of video encoding, present existing work in the area of perceptual rate control, and briefly introduce the TMN-8 rate control algorithm; at the end we introduce spatial domain intra prediction. In the second chapter we explain the challenges in combining NTIA perceptual parameters with the TMN-8 rate control algorithm. We examine the perceptual features used by NTIA from a video compression perspective and explain how the perceptual metrics capture typical compression artifacts. We next present a two-pass perceptual rate control (PRC-II) algorithm. Finally, we list experimental results on a set of video sequences showing an average 6% bit-rate reduction using PRC-II rate control over standard TMN-8 rate control. Chapter 3 contains part II of our thesis work, on spatial domain intra prediction. We start by reviewing existing work in intra prediction and then present the details of our proposed intra prediction algorithm and experimental results. We finally conclude this thesis in chapter 4 and discuss directions for future work on both our proposed algorithms.
|
7 |
Promoter Prediction In Microbial Genomes Based On DNA Structural Features
Rangannan, Vetriselvi 04 1900 (has links) (PDF)
Promoter region is the key regulatory region, which enables the gene to be
transcribed or repressed by anchoring RNA polymerase and other transcription factors, but it is difficult to determine experimentally. Hence in silico identification of promoters is crucial in order to guide experimental work and to pinpoint the key region that controls the transcription initiation of a gene. Analysis of various genome sequences in the vicinity of experimentally identified transcription start sites (TSSs) in prokaryotic as well as eukaryotic genomes had earlier indicated that they have several structural features in common, such as lower stability, higher curvature and less bendability, when compared with their neighboring regions. In this thesis work, the variation observed in these DNA sequence-dependent structural properties has been used to identify and delineate promoter regions from other genomic regions. Since the number of bacterial genomes being sequenced is increasing very rapidly, it is crucial to have procedures for rapid and reliable annotation of their functional elements such as promoter regions, which control the expression of each gene or each transcription unit of the genome. The thesis work addresses this requirement and presents the step-by-step protocols followed to obtain a generic promoter prediction method applicable across organisms. Each paragraph below gives an overall idea of the thesis organization into chapters.
An overview of prokaryotic transcriptional regulation, the structural polymorphism adopted by the DNA molecule, and its impact on transcriptional regulation is given in the introductory chapter of this thesis (chapter 1).
Standardization of promoter prediction methodology - Part I
Based on the difference in stability between neighboring upstream and downstream regions in the vicinity of experimentally determined transcription start sites, a promoter prediction algorithm has been developed to identify prokaryotic promoter sequences in whole genomes. The average free energy (E) over known promoter sequences and the difference (D) between E and the average free energy over random sequence generated from the downstream regions of known TSSs (REav) are used to search for promoters in genomic sequences. Using these cutoff values to predict promoter regions across the entire E. coli genome, a reliability of 70% was achieved when the predicted promoters were cross-verified against the 960 transcription start sites (TSSs) listed in the EcoCyc database. Reliable promoter prediction was obtained when these genome-specific threshold values were used to search for promoters in the whole E. coli genome sequence. Annotation of the whole E. coli genome for promoter regions was carried out with 49% accuracy.
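The stability criterion at the heart of this method can be sketched numerically. The window scores below use SantaLucia's (1998) unified nearest-neighbour stacking free energies; the thesis derives its own genome-specific thresholds, so the cut-off here is only a placeholder:

```python
# SantaLucia (1998) unified nearest-neighbour stacking free energies,
# Delta-G at 37 C in kcal/mol (less negative = less stable duplex).
NN_DG37 = {
    "AA": -1.00, "TT": -1.00, "AT": -0.88, "TA": -0.58,
    "CA": -1.45, "TG": -1.45, "GT": -1.44, "AC": -1.44,
    "CT": -1.28, "AG": -1.28, "GA": -1.30, "TC": -1.30,
    "CG": -2.17, "GC": -2.24, "GG": -1.84, "CC": -1.84,
}

def avg_free_energy(seq):
    """Average stacking free energy of an uppercase sequence window."""
    steps = [NN_DG37[seq[i:i + 2]] for i in range(len(seq) - 1)]
    return sum(steps) / len(steps)

def looks_like_promoter(upstream, downstream, d_cutoff=0.2):
    """Core test of the method: a promoter-containing upstream window
    should be less stable (less negative energy) than the region
    downstream of the TSS. d_cutoff stands in for the genome-specific
    threshold D derived in the thesis."""
    return avg_free_energy(upstream) - avg_free_energy(downstream) >= d_cutoff
```

An AT-rich upstream window (e.g. one containing TATAAT-like elements) scores markedly less negative than a GC-rich downstream window, which is exactly the stability difference the algorithm exploits.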
Reference
Rangannan, V. and Bansal, M. (2007) Identification and annotation of promoter regions in microbial genome sequences on the basis of DNA stability. J Biosci, 32, 851-862.
Standardization of promoter prediction methodology - Part II
In this chapter, it is demonstrated that while promoter regions are in general less stable than the flanking regions, their average free energy varies depending on the GC composition of the flanking genomic sequence. Therefore, a set of free energy threshold values (TSS-based threshold values) was obtained from genomic DNA with varying GC content in the vicinity of experimentally identified TSSs. These threshold values have been used as generic criteria for predicting promoter regions in the E. coli, B. subtilis and M. tuberculosis genomes, using the in-house developed tool 'PromPredict'. On applying it to predict promoter regions corresponding to the 1144 and 612 experimentally validated TSSs in E. coli (genome %GC: 50.8) and B. subtilis (genome %GC: 43.5), sensitivities of 99% and 95% and precision values of 58% and 60%, respectively, were achieved. For the limited data set of 81 TSSs available for M. tuberculosis (65.6% GC), a sensitivity of 100% and precision of 49% were obtained.
Reference
Rangannan, V. and Bansal, M. (2009) Relative stability of DNA as a generic
criterion for promoter prediction: whole genome annotation of microbial
genomes with varying nucleotide base composition. Mol Biosyst, 5, 1758-1769.
Standardization of promoter prediction methodology - Part III
In this chapter, the promoter prediction algorithm and the threshold values have been improved to predict promoter regions on a large scale over 913 microbial genome sequences. The average free energy (AFE) values for the promoter regions as well as their downstream regions are found to differ depending on their GC content, even with respect to translation start sites (TLSs) from the 913 microbial genomes. The TSS-based cut-off values derived in the previous chapter do not cover the extreme GC-bins (taken at 5% intervals). Hence, threshold values have been derived from a subset of translation start sites (TLSs) from all microbial genomes, categorized based on their GC content. Interestingly, the cut-off values derived with respect to the TSS data set and the TLS data set are very similar for the intermediate GC-bins. Therefore, the TSS-based cut-off values have been combined with the TLS-based cut-off values (denoted TSS-TLS based cut-off values) to predict promoters over the complete genome sequences. An average recall value of 72% (which indicates the percentage of protein- and RNA-coding genes with predicted promoter regions assigned to them) and a precision of 56% are achieved over the 913-microbial-genome dataset. These predicted promoter regions have been given a reliability level (low, medium, high, very high and highest) based on the difference in their relative average free energy, which can help users design their experiments with more confidence by using the predictions with higher reliability levels.
Reference
Rangannan, V. and Bansal, M. (2010) High Quality Annotation of Promoter
Regions for 913 Bacterial Genomes. Bioinformatics, 26, 3043-3050.
Web applications
PromBase : The predicted promoter regions for 913 microbial genomes were deposited into a public domain database called PromBase, which can serve as a valuable resource for comparative genomics studies of general genomic features and also help experimentalists rapidly access the annotation of the promoter regions in any given genome. This database is freely accessible via the World Wide Web at http://nucleix.mbu.iisc.ernet.in/prombase/.
EcoProm : EcoProm is a database that identifies and displays the potential promoter regions corresponding to EcoCyc-annotated TSSs and genes. It also displays predictions for the whole genomic sequence of E. coli. EcoProm is available at http://nucleix.mbu.iisc.ernet.in/ecoprom/index.htm.
PromPredict : The generic promoter prediction methodology described in the previous chapters has been implemented in the tool 'PromPredict', available at http://nucleix.mbu.iisc.ernet.in/prompredict/prompredict.html.
Analysing the DNA structural characteristics of prokaryotic promoter sequences for their predictive power
Sequence-dependent structural properties and their variation in genomic DNA are important in controlling several crucial processes such as transcription, replication, recombination and chromatin compaction. In chapter 6, a quantitative analysis of sequence motifs as well as sequence-dependent structural properties, such as curvature, bendability and stability, in the upstream regions of TSSs and TLSs from E. coli, B. subtilis and M. tuberculosis has been carried out in order to assess their predictive power for promoter regions. The correlation between these structural properties and GC content has also been investigated. Our results show that AFE values (stability) give finer discrimination than %GC in identifying promoter regions, and stability proved to be the best structural property for delineating promoter regions from non-promoter regions.
These DNA structural properties were also analysed in human promoter sequences and were observed to correlate with the inactivation status of X-linked genes in the human genome. Since this deviates from the main theme of the thesis, the analysis has been included as appendix A.
General conclusion
Stability is the ubiquitous DNA structural property of promoter regions, and it provides finer discrimination for promoter prediction than directly using %GC content. Based on the relative stability of DNA, a generic promoter prediction algorithm has been developed and applied to predict promoter regions on a large scale over 913 microbial genome sequences. Analysis of the predicted regions across organisms showed the algorithm's predictive performance to be highly reliable.
|
8 |
Decision-making in Highway Autonomous Driving Combined with Prediction Algorithms / Beslutsfattande inom motorvägsautonom körning i kombination med förutsägelsealgoritmer
Chen, Jingsheng January 2022
Over the past two decades, autonomous driving technology has made tremendous breakthroughs. With this technology, human drivers can take their hands off the wheel in many scenarios and let the vehicle drive itself. Highway scenarios involve fewer disturbances than urban scenarios, so autonomous driving there is much simpler to implement and can be accomplished very well with a rule-based approach. However, a significant drawback of the rule-based approach compared to human drivers is that it is difficult to capture the intent of surrounding vehicles in hand-designed algorithm logic, whereas human drivers perform this intent analysis easily. Therefore, in this research work, we introduce a prediction module upstream of the autonomous driving decision-making module, so that the decision-maker has richer input information and can better optimize its output using the intent of surrounding vehicles. The evaluation of the final results confirms that our proposed approach helps optimize rule-based autonomous driving decisions.
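The architecture described, a prediction module feeding a rule-based decision-maker, can be sketched as follows; the thresholds and action set are illustrative assumptions, not the thesis's implementation:

```python
from dataclasses import dataclass

@dataclass
class Track:
    gap_m: float            # longitudinal gap to the lead vehicle
    predicted_cut_in: bool  # intent flag from the upstream prediction module

def decide(front, left_lane_free, safe_gap_m=40.0):
    """Rule-based highway decision that consumes the upstream
    prediction: a predicted cut-in triggers a reaction before the
    gap actually closes. Thresholds are illustrative only."""
    if front.gap_m < safe_gap_m or front.predicted_cut_in:
        return "change_left" if left_lane_free else "decelerate"
    return "keep_lane"
```

The benefit claimed in the abstract shows up in the first branch: without the `predicted_cut_in` input, the rule set could only react once `gap_m` has already shrunk below the safety threshold.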
|