  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

'n Masjienleerbenadering tot woordafbreking in Afrikaans (A machine-learning approach to hyphenation in Afrikaans)

Fick, Machteld 06 1900 (has links)
Text in Afrikaans / The aim of this study was to determine the level of success achievable with a purely pattern-based approach to hyphenation in Afrikaans. The machine learning techniques artificial neural networks, decision trees and the TEX algorithm were investigated, since they can be trained with patterns of letters from word lists for syllabification and decompounding. A lexicon of Afrikaans words was extracted from a corpus of electronic text. To obtain lists for syllabification and decompounding, words in the lexicon were syllabified and compound words were decomposed into their constituent parts. From each list of ±183 000 words, ±10 000 words were reserved as testing data and the rest was used as training data. A recursive algorithm for decompounding was developed. In this algorithm, all words matching a reference list (the lexicon) are extracted by string matching from the beginning and end of words; splitting points are then determined from the word lengths in the reassembled combinations of first and last words. The algorithm was extended by addressing the shortcomings of this basic procedure. Artificial neural networks and decision trees were trained, and variations of both techniques were examined to find the optimal syllabification and decompounding models. Patterns for the TEX algorithm were generated with the OPatGen program.
Testing showed that the TEX algorithm performed best on both syllabification and decompounding, with 99.56% and 99.12% accuracy, respectively. It can therefore be used for hyphenation in Afrikaans with little risk of hyphenation errors in printed text. The performance of the artificial neural network was lower but still acceptable, with 98.82% and 98.42% accuracy for syllabification and decompounding, respectively. The decision tree, with 97.91% accuracy on syllabification and 90.71% on decompounding, was found to be too risky for either task. A combined algorithm was developed in which words are first decompounded with the TEX algorithm and then syllabified with both the TEX algorithm and the neural network, combining the results. This algorithm made 1.3% fewer errors than the TEX algorithm but missed more hyphens. A test on published Afrikaans text showed the risk of hyphenation errors to be ±0.02% for text assumed to average ten words per line. / Decision Sciences / D. Phil. (Operational Research)
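The decompounding step described above (extract candidate first and last words from a reference lexicon by string matching, then choose a split point from the word lengths) can be sketched roughly as follows. This is a toy illustration with an invented mini-lexicon and a simple length-balance rule, not the author's implementation:

```python
def decompound(word, lexicon, min_len=3):
    """Return a plausible split of `word` into two lexicon words, or None.

    Loosely mirrors the described idea: match candidate first and last
    parts against a reference list, then choose a split point based on
    word length (here: the most balanced split).
    """
    candidates = []
    for i in range(min_len, len(word) - min_len + 1):
        head, tail = word[:i], word[i:]
        if head in lexicon and tail in lexicon:
            candidates.append((head, tail))
    if not candidates:
        return None
    # Prefer the split closest to the middle of the word.
    return min(candidates, key=lambda ht: abs(len(ht[0]) - len(ht[1])))

# Invented mini-lexicon for illustration.
lexicon = {"boek", "rak", "water", "val", "boekrak"}
print(decompound("boekrak", lexicon))   # ('boek', 'rak')
print(decompound("waterval", lexicon))  # ('water', 'val')
```

A real system would recurse on the parts and handle linking morphemes, which this sketch omits.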
132

Exploiting Prior Information in Parametric Estimation Problems for Multi-Channel Signal Processing Applications

Wirfält, Petter January 2013 (has links)
This thesis addresses a number of problems, all related to parameter estimation in sensor array processing. The unifying theme is that some of these parameters are known before the measurements are acquired. We thus study how to improve the estimation of the unknown parameters by incorporating knowledge of the known ones; exploiting this knowledge successfully has the potential to dramatically improve the accuracy of the estimates. For covariance matrix estimation, we exploit the fact that the true covariance matrix is Kronecker and Toeplitz structured, and we devise a method to ensure that the estimates possess this structure. Additionally, we show that the proposed estimator outperforms the state of the art when the number of samples is low, and that it is efficient in the sense that the variance of the estimates attains the Cramér-Rao lower bound (CRB). In the direction-of-arrival (DOA) scenario, there are different types of prior information: first, we study the case when the locations of some of the emitters in the scene are known. We then turn to cases with additional prior information, i.e., when it is known that some (or all) of the source signals are uncorrelated. As it turns out, knowledge of some DOAs combined with this latter form of prior knowledge is especially beneficial, giving estimators that are dramatically more accurate than the state of the art. We also derive the corresponding CRBs and show that, under quite mild assumptions, the estimators are efficient. Finally, we investigate the frequency estimation scenario, where the data is a one-dimensional temporal sequence which we model as a spatial multi-sensor response. The line-frequency estimation problem is studied when some of the frequencies are known; through experimental data we show that our approach can be beneficial.
The second frequency estimation paper explores the analysis of pulse spin-locking data sequences, which are encountered in nuclear resonance experiments. By introducing a novel modeling technique for such data, we develop a method for estimating the parameters of interest in the model. The technique is significantly faster than previously available methods and provides accurate estimation results. / QC 20131115
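As one elementary flavour of structure-enforcing covariance estimation, the Toeplitz part alone can be imposed by averaging the diagonals of an initial estimate (the Frobenius-norm projection onto Toeplitz matrices). The sketch below shows only that step, on an invented 3×3 matrix; the thesis additionally exploits Kronecker structure and a more refined estimator, which are not reproduced here:

```python
def toeplitz_project(R):
    """Project a square matrix onto the set of Toeplitz matrices by
    replacing each diagonal with its mean (Frobenius-norm projection)."""
    n = len(R)
    # Group entries by diagonal offset (col - row) and average them.
    diag_entries = {}
    for i in range(n):
        for j in range(n):
            diag_entries.setdefault(j - i, []).append(R[i][j])
    diag_mean = {k: sum(v) / len(v) for k, v in diag_entries.items()}
    return [[diag_mean[j - i] for j in range(n)] for i in range(n)]

# Invented, slightly non-Toeplitz sample covariance estimate.
R = [[1.0, 0.5, 0.1],
     [0.3, 1.2, 0.6],
     [0.2, 0.4, 0.9]]
T = toeplitz_project(R)
# Every diagonal of T is now constant, e.g. T[0][1] == T[1][2].
```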
133

Procedural Generation of Levels with Controllable Difficulty for a Platform Game Using a Genetic Algorithm / Procedurell generering av banor med kontrollerbar svårighetsgrad till ett platformspel med hjälp av en genetisk algoritm

Classon, Johan, Andersson, Viktor January 2016 (has links)
This thesis describes the implementation and evaluation of a genetic algorithm (GA) for procedurally generating levels with controllable difficulty for a motion-based 2D platform game. Manually creating content can be time-consuming, and it may be desirable to automate the process with an algorithm, using Procedural Content Generation (PCG). An algorithm was implemented and then refined iteratively through user tests. The resulting algorithm is considered a success and shows that using GAs for this kind of PCG is viable. An algorithm able to control the difficulty of its output was achieved, though further user tests would allow more refinement. To use a GA for this purpose, one should identify the elements that affect difficulty, incorporate them in the fitness function, and test generated content to ensure that the fitness function correctly evaluates solutions with regard to the desired output.
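The recipe in the last sentence (put the difficulty-affecting elements into the fitness function, then evolve toward a target) can be sketched as a minimal GA. The level encoding as a list of gap widths, the squared-gap difficulty measure, and all parameter values are invented for illustration and are not the thesis's design:

```python
import random

def fitness(level, target_difficulty):
    """Score a level (list of gap widths) by closeness of its summed
    difficulty to the target; toy difficulty measure (wider gap = harder)."""
    difficulty = sum(g * g for g in level)
    return -abs(difficulty - target_difficulty)

def evolve(target, length=8, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 5) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda lv: fitness(lv, target), reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.2:              # mutation
                child[rng.randrange(length)] = rng.randint(0, 5)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda lv: fitness(lv, target))

best = evolve(target=60)  # evolve a level with difficulty near 60
```

In the thesis the fitness would instead score playtested difficulty elements; the skeleton (encode, score, select, cross, mutate) is the same.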
135

Neurala nätverk för självkörande fordon : Utforskande av olika tillvägagångssätt / Neural Networks for Autonomous Vehicles : An Exploration of Different Approaches

Hellner, Simon, Syvertsson, Henrik January 2021 (has links)
Artificial Neural Networks (ANN) have a broad area of application and are growing increasingly relevant, not least in the field of autonomous vehicles. Meta-algorithms are used to train the networks, which can control a vehicle using several kinds of input data. In this project we examined two meta-algorithms: a genetic algorithm (GA), and gradient descent with backpropagation (GD & BP). We also examined two types of input to the ANN: distance sensors and line detection. We explain the theory behind the methods we tried to implement. We did not succeed in using GD & BP to train ANNs to control vehicles, but we describe our attempts. We did, however, succeed in using GA to train ANNs using a combination of distance sensors and line detection as input. In summary, we managed to train ANNs to control vehicles using two kinds of input, and we encountered interesting problems along the way.
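When a GA trains a network, as above, it evolves a flat vector of weights rather than computing gradients. A toy sketch of such a genome-driven forward pass, with invented layer sizes (3 sensor inputs, 4 hidden units, 2 outputs) and invented sensor values, could look like this:

```python
import math
import random

def forward(genome, sensors):
    """Tiny feedforward net whose weights and biases are read off a flat
    genome (the representation a GA would mutate); illustrative only."""
    w = iter(genome)
    hidden = [math.tanh(sum(s * next(w) for s in sensors) + next(w))
              for _ in range(4)]
    return [math.tanh(sum(h * next(w) for h in hidden) + next(w))
            for _ in range(2)]  # e.g. steering and throttle

genome_size = 4 * (3 + 1) + 2 * (4 + 1)  # 26 weights and biases
rng = random.Random(0)
genome = [rng.uniform(-1, 1) for _ in range(genome_size)]
steer, throttle = forward(genome, sensors=[0.9, 0.2, 0.7])
```

A GA would score each genome by driving performance in simulation and evolve the population, exactly as in any other GA; backpropagation never touches the weights.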
136

Semi - analytické výpočty a spojitá simulace / Semi - analytical computations and continuous systems simulation

Kopřiva, Jan January 2014 (has links)
The thesis deals with the speedup and accuracy of numerical computation, especially in the solution of differential equations. Algorithms that meet these conditions are called semi-analytical. One way to accelerate the computation of differential equations is parallelization. The parallelization presented here is based on transforming the numerical solution into a residue number system, which is extended to floating-point computation. A new algorithm for modulo multiplication is also proposed. Since applications in differential calculus are the main goal, numerical integration with the modified Euler, Runge-Kutta and Taylor series methods in the residue number system is discussed. Further possibilities and extensions of the implemented residue number system are mentioned at the end.
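The core idea behind the residue number system mentioned above is that arithmetic splits into independent, carry-free channels over pairwise coprime moduli, with the Chinese Remainder Theorem recovering the result. A minimal integer-only sketch (the moduli are chosen arbitrarily; the thesis's floating-point extension is not attempted here):

```python
from math import prod

MODULI = (3, 5, 7)  # pairwise coprime; dynamic range = 3 * 5 * 7 = 105

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # Channel-wise multiplication: no carries cross between moduli,
    # so each channel could run on separate hardware in parallel.
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(MODULI)
    total = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        total += ri * Mi * pow(Mi, -1, mi)  # modular inverse (Py 3.8+)
    return total % M

a, b = 9, 11
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % 105
```

Results are only exact while they stay inside the dynamic range (here 105), which is why real RNS designs pick many larger moduli.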
137

Detection and localization of cough from audio samples for cough-based COVID-19 detection / Detektion och lokalisering av hosta från ljudprover för hostbaserad COVID-19-upptäckt

Krishnamurthy, Deepa January 2021 (has links)
Since February 2020, the world has been in a COVID-19 pandemic [1]. Researchers around the globe are working to develop fast, reliable, non-invasive testing methodologies, and one key direction of research is to utilize coughs and their corresponding vocal biomarkers for the diagnosis of COVID-19. In this thesis, we propose a fast, real-time cough detection pipeline that can be used to detect and localize coughs in audio samples. The core of the pipeline uses the yolo-v3 model [2] from the vision domain to localize coughs in audio spectrograms by treating them as objects; this outcome is then transformed to localize the boundaries of cough utterances in the input signal. The system is evaluated on coughs from the CoughVid dataset [3]. Furthermore, the pipeline is compared with other existing algorithms such as tinyyolo-v3 to test for better localization and classification. The average precision (AP@0.5) of the yolo-v3 and tinyyolo-v3 models is 0.67 and 0.78, respectively. Based on the AP values, tinyyolo-v3 performs better than yolo-v3 by at least 10%, and thanks to its computational advantage its inference time was also found to be 2.4 times faster than the yolo-v3 model in our experiments. This work is considered novel and significant for the detection and localization of cough in an audio stream. Finally, MFCC features are extracted from the resulting cough events and classifiers are trained to predict whether a cough indicates COVID-19 or not. The performance of different classifiers was compared, and random forest outperformed the other models with a precision of 83.04%. The results suggest the classifier is promising; in future work, the model should be trained on a clinically approved dataset and tested for reliability before use in a clinical setup.
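Mapping a detector's box on a spectrogram back to time boundaries in the signal, as the pipeline above must do, reduces to simple frame arithmetic. The sketch below assumes YOLO-style normalized box coordinates and an invented hop size; it is not the thesis's actual code:

```python
def box_to_seconds(x_center, width, n_cols, hop_s):
    """Map a normalized box (x-center and width in [0, 1], as YOLO-style
    detectors report) on a spectrogram with `n_cols` frames spaced
    `hop_s` seconds apart to (start, end) times of the cough event."""
    start_col = (x_center - width / 2) * n_cols
    end_col = (x_center + width / 2) * n_cols
    return start_col * hop_s, end_col * hop_s

# A box covering the middle fifth of a 10 s clip (1000 frames, 10 ms hop).
start, end = box_to_seconds(x_center=0.5, width=0.2, n_cols=1000, hop_s=0.01)
print(round(start, 2), round(end, 2))  # 4.0 6.0
```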
138

利用混合模型估計風險值的探討 (A study of Value-at-Risk estimation using mixture models)

阮建豐 Unknown Date (has links)
Value at Risk (VaR) is usually calculated under the assumption that the underlying asset return is normally distributed, but this assumption is often inconsistent with the actual distribution of asset returns. Many researchers have found that actual return distributions have fat tails: extreme events occur far more often than the normal assumption implies, so VaR computed under normality measures true losses poorly. The thesis discusses three methods for simulating the return distribution and estimating VaR at a given confidence level: the historical simulation method, the variance-covariance method, and a mixture normal model. The parameters of the mixture normal model are estimated with quasi-Bayesian maximum likelihood estimation (MLE) and the EM algorithm. Three VaR evaluation methods, the back test, the forward test, and a binomial test, are then used to compare the three estimation approaches. The empirical results show that: 1. The return distribution exhibits a clearly fat left tail at the 1% critical probability. 2. The mixture normal distribution captures this fat left tail more accurately than the other two methods. 3. The kurtosis of the mixture normal model is close to that of the actual return distribution, confirming that the model captures the leptokurtosis phenomenon. Key words: Value at Risk, VaR, fat tail, historical simulation method, variance-covariance method, mixture normal distribution, quasi-Bayesian MLE, EM algorithm, back test, forward test, leptokurtosis
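A minimal sketch of fitting a two-component mixture normal with EM and then reading a 1% VaR off the fitted mixture by bisecting its CDF is shown below. The synthetic "returns", the initialization, and all parameter values are invented, and the thesis's quasi-Bayesian variant of the MLE is not reproduced:

```python
import math
import random
import statistics

def em_two_normals(data, iters=60):
    """Plain EM for a two-component normal mixture: equal initial means,
    deliberately different initial scales so the components separate
    into a calm and a volatile regime."""
    m = statistics.mean(data)
    s = statistics.stdev(data)
    mu, sigma, pi = [m, m], [0.5 * s, 2.0 * s], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each x.
        resp = []
        for x in data:
            dens = [pi[k] / (sigma[k] * math.sqrt(2 * math.pi))
                    * math.exp(-(x - mu[k]) ** 2 / (2 * sigma[k] ** 2))
                    for k in (0, 1)]
            total = dens[0] + dens[1]
            resp.append([d / total for d in dens])
        # M-step: responsibility-weighted means, variances, mixing weights.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = math.sqrt(max(var, 1e-12))
            pi[k] = nk / len(data)
    return pi, mu, sigma

rng = random.Random(0)
# Synthetic fat-tailed "returns": mostly calm days, occasionally volatile.
data = [rng.gauss(0, 1) if rng.random() < 0.9 else rng.gauss(0, 4)
        for _ in range(2000)]
pi, mu, sigma = em_two_normals(data)

def mixture_cdf(x):
    return sum(p * 0.5 * (1 + math.erf((x - m) / (s * math.sqrt(2))))
               for p, m, s in zip(pi, mu, sigma))

# 1% VaR: bisect for the point where the fitted mixture CDF hits 0.01.
lo, hi = min(data), max(data)
for _ in range(60):
    mid = (lo + hi) / 2
    if mixture_cdf(mid) < 0.01:
        lo = mid
    else:
        hi = mid
var_1pct = mid
```

The fitted mixture assigns the fat tail to the wide component, so its 1% quantile sits well below the one a single normal with the same overall variance would give.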
139

Investigating the Use of Digital Twins to Optimize Waste Collection Routes : A holistic approach towards unlocking the potential of IoT and AI in waste management / Undersökning av användningen av digitala tvillingar för optimering av sophämtningsrutter : Ett holistiskt tillvägagångssätt för att ta del av potentialen för IoT och AI i sophantering

Medehal, Aarati January 2023 (has links)
Solid waste management is a global issue that affects everyone. The management of waste collection routes is a critical challenge in urban environments, primarily due to inefficient routing. This thesis investigates the use of real-time virtual replicas, namely Digital Twins, to optimize waste collection routes. By leveraging the capabilities of digital twins, the study aims to improve the effectiveness and efficiency of waste collection operations. The gap the study addresses therefore lies at the intersection of smart cities, Digital Twins, and waste collection routing. The research methodology comprises three key components. First, an exploration of five widely used metaheuristic algorithms provides a qualitative understanding of their applicability to vehicle routing and, by extension, waste collection route optimization. Building on this foundation, a simple smart routing scenario for waste collection is presented, highlighting the limitations of a purely Internet of Things (IoT)-based approach. The findings from this demonstration motivate the need for a more data-driven and intelligent solution, leading to the introduction of the Digital Twin concept. Subsequently, a twin framework is developed that encompasses the technical anatomy and methodology required to create and use Digital Twins to optimize waste collection, considering factors such as real-time data integration, predictive analytics, and optimization algorithms. The outcome of this research contributes to the growing concept of smart cities and paves the way toward practical implementations that revolutionize waste management and create a sustainable future.
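One baseline that live fill-level data makes possible (visit only bins above a fill threshold, in greedy nearest-neighbour order from the depot) can be sketched as follows. The bin data and threshold are invented, and the thesis surveys metaheuristics rather than prescribing this particular heuristic:

```python
import math

def plan_route(depot, bins, threshold=0.7):
    """Greedy nearest-neighbour route over bins whose reported fill
    level meets a threshold; a simple baseline enabled by real-time
    sensor data, illustrative only."""
    todo = [b for b in bins if b["fill"] >= threshold]
    route, pos = [], depot
    while todo:
        nxt = min(todo, key=lambda b: math.dist(pos, b["xy"]))
        route.append(nxt["id"])
        pos = nxt["xy"]
        todo.remove(nxt)
    return route

# Invented bin positions and fill levels.
bins = [
    {"id": "A", "xy": (0, 5), "fill": 0.90},
    {"id": "B", "xy": (2, 1), "fill": 0.40},  # below threshold: skipped
    {"id": "C", "xy": (1, 1), "fill": 0.80},
    {"id": "D", "xy": (4, 4), "fill": 0.75},
]
print(plan_route(depot=(0, 0), bins=bins))  # ['C', 'A', 'D']
```

A metaheuristic (or a digital twin replaying predicted fill levels) would improve on this greedy tour, but the threshold filter already shows how live data shrinks the routing problem.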
