  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Application of machine learning to construct advanced NPC behaviors in Unity 3D. / Tillämpning av maskininlärning för skapande av avancerade NPC-beteenden i Unity 3D.

Håkansson, Carl, Fröberg, Johan January 2021 (has links)
Machine learning has long been widely used in computer games, and it has been proven to create better experiences and well-balanced challenges for players. In 2017, the game engine Unity released the ML-Agents toolkit, which provides several machine learning algorithms, together with examples and a user-friendly development environment, free to the public. This has made it simpler for developers to explore what is possible with machine learning in games. In many cases, a developer has spent a lot of time on a specific place in a game and would like players to visit that area. The location can also be important for the gameplay, and the developer wants to steer players there without them feeling forced. This thesis investigates whether it is possible to create a smart agent in a modern game engine like Unity that can affect the route a player takes through a level. The results show that this is entirely possible, with a high success rate, for a simple environment, but that it takes considerable time and effort to make it work in an advanced environment with several agents. Experiments with a randomized environment, aimed at creating a general agent usable in many situations, were also conducted, but no successful agent could be produced this way within the timeframe of the work.
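The steering behaviour described above ultimately comes down to a reward signal for the trained agent. Below is a minimal sketch of one plausible reward-shaping scheme; the function name, constants, and 2D distance model are illustrative assumptions, not the thesis's actual ML-Agents code (which would be written in C# inside Unity):

```python
def step_reward(player_pos, target_pos, prev_player_dist):
    """Reward the steering agent when the player moves closer to the target area."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    player_dist = dist(player_pos, target_pos)
    # Dense shaping term: positive when the player closed distance this step,
    # minus a small time penalty that discourages stalling.
    reward = (prev_player_dist - player_dist) - 0.001
    # Terminal bonus once the player reaches the target area.
    if player_dist < 1.0:
        reward += 1.0
    return reward, player_dist
```

The shaping term would be computed at each decision step during training; the terminal radius and time penalty are tuning knobs, which matches the thesis's observation that advanced environments require substantial tuning effort.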
2

AI-Based Intrusion Detection Systems to Secure Internet of Things (IoT)

Otoum, Yazan 20 September 2022 (has links)
The Internet of Things (IoT) is comprised of numerous devices, including sensors and actuators, connected through wired or wireless networks. The number of IoT applications has recently increased dramatically, spanning Smart Homes, the Internet of Vehicles (IoV), the Internet of Medical Things (IoMT), Smart Cities, and Wearables. IoT Analytics has reported that the number of connected devices was expected to grow 18% to 14.4 billion in 2022 and to reach 27 billion by 2025. Security is a critical issue in today's IoT due to the nature of the architecture, the types of devices, the different methods of communication (mainly wireless), and the volume of data being transmitted over the network. Furthermore, security will become even more important as the number of connected devices increases. However, devices can protect themselves and detect threats with an Intrusion Detection System (IDS). An IDS typically uses one of two approaches: anomaly-based or signature-based. In this thesis, we define the problems and the particular requirements of securing IoT environments, and we propose a Deep Learning (DL) anomaly-based model with optimal feature selection to detect the different potential attacks in IoT environments. We then compare the performance results with other works on similar tasks. We also employ reinforcement learning to combine the two IDS approaches (anomaly-based and signature-based), enabling the model to detect known and unknown IoT attacks and to classify recognized attacks into five classes: Denial of Service (DoS), Probe, User-to-Root (U2R), Remote-to-Local (R2L), and Normal traffic. We have also shown the effectiveness of two trending machine-learning techniques, Federated and Transfer Learning (FL/TL), over traditional centralized Machine and Deep Learning (ML/DL) algorithms.
Compared with traditional learning approaches, our proposed models improve performance, increase learning speed, reduce the amount of data that needs to be trained on, and preserve user data privacy. The proposed models are implemented using three benchmark datasets generated by the Canadian Institute for Cybersecurity (CIC): NSL-KDD, CICIDS2017, and CSE-CIC-IDS2018. The performance results were evaluated with several metrics, including Accuracy, Detection Rate (DR), False Alarm Rate (FAR), Sensitivity, Specificity, F-measure, and training and fine-tuning times.
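The evaluation metrics listed above all derive from the binary confusion matrix (attack = positive, normal = negative). A minimal sketch of how they are computed, for illustration only, not the thesis's evaluation code:

```python
def ids_metrics(tp, fp, tn, fn):
    """Standard IDS metrics from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    detection_rate = tp / (tp + fn)      # a.k.a. recall / sensitivity
    false_alarm_rate = fp / (fp + tn)    # normal traffic flagged as attack
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * detection_rate / (precision + detection_rate)
    return {"accuracy": accuracy, "dr": detection_rate,
            "far": false_alarm_rate, "specificity": specificity,
            "f1": f_measure}

# Hypothetical example: 90 attacks caught, 10 missed,
# 5 false alarms out of 100 normal flows.
m = ids_metrics(tp=90, fp=5, tn=95, fn=10)
```

A high DR with a low FAR is the usual target for the multi-class case reported in the thesis; per-class metrics would be computed one-vs-rest in the same way.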
3

MULTIMODAL SPATIAL-TEMPORAL DATA FUSION TECHNIQUES FOR ENHANCING FIELD CROP BIOMASS ESTIMATION IN PRECISION AGRICULTURE

Kevin Tae Sup Lee (18824575) 17 June 2024 (has links)
This study introduces a methodology wherein daily values are linearly interpolated to achieve uniform temporal resolution across various data sets, including spectral and environmental information. This approach facilitates further analysis using machine learning techniques to estimate biomass. The proposed Best Friend Frame (B.F.F.) data set integrates Unmanned Aerial Systems (UAS) data, weather data, weather indices, soil hydrological group classifications, and topographic information. Two different biomass estimations were created to enhance versatility: one averaged per management practice and another averaged per physical experimental plot size. Additionally, SuperDove satellite data were combined with the same environmental data as the UAS data.

UAS flights were conducted at the ACRE field in 2022 and 2023. The UAS data were captured at a height of 30 meters, yielding a ground sample distance of 2 cm/pixel per flight. Satellite data were sourced from the Planet SuperDove product. The images were processed using Crop Image Extraction (CIE) and calibrated with Vegetation Index Derivation (VID). Spatial resolution was defined as the experimental plot size per year per crop type (soybean or corn). Topographic data were derived from Indiana LiDAR data, and soil information was obtained from the USDA SSURGO dataset.

The B.F.F. framework can be utilized with various models to identify which environmental inputs influence biomass accumulation throughout the growing season.
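The daily linear interpolation at the core of the B.F.F. data set can be sketched in a few lines. A pure-Python illustration under the assumption of integer day indices and one scalar measurement per observation day (e.g. a vegetation index sampled on UAS flight dates):

```python
def interpolate_daily(days, values):
    """days: sorted observation days (ints); values: measurements on those days.
    Returns one linearly interpolated value per day, days[0]..days[-1] inclusive."""
    out = []
    for d in range(days[0], days[-1] + 1):
        # Find the pair of observations bracketing day d.
        i = 0
        while days[i + 1] < d:
            i += 1
        d0, d1 = days[i], days[i + 1]
        v0, v1 = values[i], values[i + 1]
        if d == d0:
            out.append(v0)          # exact observation day
        else:
            t = (d - d0) / (d1 - d0)
            out.append(v0 + t * (v1 - v0))
    return out
```

Applied per feature, this yields the uniform daily grid on which all modalities (spectral, weather, soil, topography) can be fused.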
4

Advanced Algorithms for Classification and Anomaly Detection on Log File Data : Comparative study of different Machine Learning Approaches

Wessman, Filip January 2021 (has links)
Background: A problematic area in today's large-scale distributed systems is the exponentially growing amount of log data. Finding anomalies by observing and monitoring this data with manual human inspection becomes progressively more challenging, complex, and time-consuming, yet it is vital for keeping these systems available around the clock. Aim: The main objective of this study is to determine which Machine Learning (ML) algorithms are most suitable and whether they can live up to the needs and requirements regarding optimization and efficiency in log data monitoring, including which specific steps of the overall problem can be improved by using these algorithms for anomaly detection and classification on real, provided data logs. Approach: An initial pre-study was conducted; logs were collected and then preprocessed with the log parsing tool Drain and regular expressions. The approach consisted of a combination of K-Means + XGBoost and, respectively, Principal Component Analysis (PCA) + K-Means + XGBoost. These were trained, tested, and individually evaluated with different metrics against two datasets: a server data log and an HTTP access log. Results: Both approaches performed very well on both datasets, able to classify, detect, and make predictions on log data events with high accuracy, high precision, and low calculation time. It was further shown that, when applied without the dimensionality reduction step (PCA), the prediction model's results were slightly better, by a few percent. As for prediction time, there was marginal to no difference between running with and without PCA. Conclusions: Overall, the differences between the results with and without PCA are very small, but in essence it is better not to use PCA and instead apply the original data to the ML models. The models' performance generally depends heavily on the data being applied: its initial preprocessing steps, size, and structure, which affect the calculation time most of all.
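The PCA stage of the second pipeline can be sketched directly in NumPy; the K-Means and XGBoost stages that follow it in the original work are not shown, and this is an illustrative reduction step rather than the thesis's code:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]          # top variance directions
    return Xc @ components
```

The reduced matrix would then be fed to K-Means for clustering and XGBoost for classification; the thesis's finding is that skipping this step and using the original features gives slightly better predictions at essentially the same prediction time.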
5

A Machine Learning Approach that Integrates Clinical Data and PTM Proteomics Identifies a Mechanism of ACK1 Activation and Stabilization in Cancer

Loku Balasooriyage, Eranga Roshan Balasooriya 08 August 2022 (has links)
Identification of novel cancer driver mutations is crucial for targeted cancer therapy, yet it is a difficult task, especially for low-frequency drivers. To identify cancer driver mutations, we developed a machine learning (ML) model to predict cancer hotspots. Here, we applied the ML program to 32 non-receptor tyrosine kinases (NRTKs) and identified 36 potential cancer driver mutations, with high-probability mutations in 10 genes, including ABL1, ABL2, JAK1, JAK3, and ACK1. ACK1 is a member of the poorly understood ACK family of NRTKs, which also includes TNK1. Although ACK1 is an established oncogene and a high-interest therapeutic target, the exact mechanism of ACK1 regulation is largely unknown, and there is still no ACK1 inhibitor in clinical use. The ACK kinase family has a unique domain arrangement, most notably a predicted ubiquitin association (UBA) domain at its C-terminus. While the presence of a functional UBA domain on a kinase is unique to the ACK family, the role of the UBA domain on ACK1 is unknown. Interestingly, the ML program identified the ACK1 mutation p633fs*, which truncates the Mig6 homology region (MHR) and UBA domains, as a cancer driver mutation. Our data suggest that the ACK1 UBA domain helps activate full-length ACK1 through induced proximity. It also acts as a mechanism of negative feedback by tethering ACK1 to ubiquitinated cargo that is ultimately degraded. Indeed, our preliminary data suggest that truncation of the ACK1 UBA stabilizes ACK1 protein levels, which results in spontaneous ACK1 oligomerization and activation. Furthermore, our data suggest that removal of the MHR domain hyperactivates ACK1. Thus, our data provide a model to explain how human mutations in ACK1 convert the kinase into an oncogenic driver. In conclusion, our data reveal a mechanism of ACK1 activation and potential strategies to target the kinase in cancer.
6

Creating a semantic segmentation machine learning model for sea ice detection on radar images to study the Thwaites region

Fuentes Soria, Carmen January 2022 (has links)
This thesis presents a deep learning tool able to identify ice in radar images from the sea-ice environment of the Thwaites glacier outlet. The project is motivated by the threatening situation of the Thwaites glacier, which has been increasing its mass loss rate during the last decade. This is of concern considering the large mass of ice held by the glacier, which, in case of melting, could increase the mean sea level by more than +65 cm [1]. The algorithm developed in this work is intended to help with the generation of navigation charts and the identification of icebergs in future stages of the project, outside the scope of this thesis. The data used for this task are ICEYE's X-band radar images from the Thwaites sea-ice environment, the target area to be studied. The corresponding ground truth for each of the samples has been manually generated by identifying the ice and icebergs present in each image. Additional data processing includes tiling, to increase the number of samples, and augmentation, done by horizontal and vertical flips of a random number of tiles. The proposed tool performs semantic segmentation on radar images, classifying the class "Ice". It is built on a deep learning Convolutional Neural Network (CNN) model, trained with prepared ICEYE radar images. The model reaches F1 values higher than 89% on images of the target area (Thwaites sea-ice environment) and is able to generalize to different regions of Antarctica, reaching F1 = 80%. A potential alternative version of the algorithm is proposed and discussed. This alternative scores F1 values higher than 95% for images of the target environment and F1 = 87% for the image of the different region. However, it cannot yet be confirmed as the final algorithm, as it needs further verification.
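For binary segmentation masks, the F1 metric reported above reduces to a few lines (it is equivalent to the Dice coefficient). A minimal sketch over flattened 0/1 pixel lists, for illustration rather than the thesis's evaluation code:

```python
def f1_score(pred, truth):
    """F1 for binary masks given as flat 0/1 sequences of equal length."""
    tp = sum(p and t for p, t in zip(pred, truth))          # ice predicted, ice present
    fp = sum(p and not t for p, t in zip(pred, truth))      # ice predicted, no ice
    fn = sum(not p and t for p, t in zip(pred, truth))      # ice missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Computed per image over the "Ice" class, this is the quantity behind the reported 89%, 95%, and 80% figures.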
7

Machine Learning in Detecting Auditory Sequences in Magnetoencephalography Data: Research Project in Computational Modelling and Simulation

Shaikh, Mohd Faraz 17 November 2022 (has links)
Does your brain replay your recent life experiences while you are resting? An open question in neuroscience is which events our brain replays, and whether there is any correlation between the replay and the duration of the event. In this study I investigated this question using Magnetoencephalography data from an active listening experiment. Magnetoencephalography (MEG) is a non-invasive neuroimaging technique used to study brain activity and understand brain dynamics in perception and cognitive tasks, particularly in the fields of speech and hearing. It records the magnetic field generated in our brains to detect brain activity. I built a machine learning pipeline which uses part of the experiment data to learn the sound patterns and then predicts the presence of sounds in the later part of the recordings, in which the participants sat idle and no sound was played. The aim of the study was to test for replay of learned sound sequences in the post-listening period. I used a classification scheme to identify patterns in MEG responses to different sound sequences in the post-task period. The study concluded that the sound sequences can be identified and distinguished above the theoretical chance level, which establishes the validity of our classifier. Further, the classifier could predict the sound sequences in the post-listening period with very high probability, but more evidence is needed to validate the model's results on the post-listening period.
8

PHYSICS-GUIDED MACHINE LEARNING APPLICATIONS FOR AIR TRAFFIC CONTROL

Hong-Cheol Choi (18937627) 08 July 2024 (has links)
The Air Traffic Management (ATM) system encompasses complex and safety-critical operations which are mainly managed by Air Traffic Controllers (ATCs) and pilots to ensure safety and efficiency. This air traffic operation becomes more complex and challenging as demands continue to increase. Indeed, the demand for air transport is expected to increase by an average of 4.3% annually over the next 20 years, and the projected number of flights is expected to reach around 90 million by 2040 [1]. This continuous growth of demand can lead to an excessive workload for both ATCs and pilots, thereby resulting in the degradation of the ATM system. To effectively respond to this problem, a lot of effort has been put into developing decision support tools. This dissertation explores and focuses on the development of algorithms for decision support tools in air traffic control, emphasizing specific desirable properties essential for tasks such as tracking the position of aircraft and monitoring air traffic. The primary focus of this dissertation is to combine a data-driven model and a physics-based model systematically, thereby addressing the limitations of previous works in trajectory prediction and anomaly detection. Through a literature review, important properties, including real-time applicability, interpretability, and feasibility, are identified and pursued for practical applications. These properties are integrated into the proposed algorithms, which combine data-driven and physics-based models to address dynamic air traffic scenarios effectively. To meet the requirement of real-time applicability, the algorithms are designed to be computationally efficient and adaptable to continuously changing conditions, ensuring timely provision of immediate information and near-instantaneous responses to assist ATCs. Subsequently, interpretability allows controllers to understand the reasoning behind the algorithm's predictions. This is facilitated by the use of attention mechanisms and explicit physics-based guidance, making the predictions more intuitive and understandable. In addition, anomaly detection algorithms provide human-readable decision boundaries for flight states for a clear understanding. Lastly, feasibility ensures that the algorithms generate realistic aircraft trajectory predictions based on current flight states and air traffic conditions. This is achieved by physics-guided machine learning which leverages both data-driven and physics-based approaches, accounting for the aircraft dynamics and uncertainties. Moreover, practical and operational considerations are integrated into the algorithms for real-world applications. This includes developing anomaly detection models that are adaptable to dynamic trajectory patterns to address the complexities of flexible area navigation airspace. Additionally, to reduce the workload of ATCs, providing immediate advisories for anomaly resolution and arrival sequencing is targeted by learning from historical data. By considering these properties with practical considerations, the dissertation presents a suite of algorithms that can effectively support human operators for air traffic control.
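One common structure for such a physics-guided model, sketched at a very high level: a simple dynamics core (here, constant velocity) carries the bulk of the prediction while a learned component supplies only a residual correction. Everything below is an illustrative assumption, not the dissertation's actual model:

```python
def physics_prediction(pos, vel, dt):
    """Constant-velocity propagation of an (x, y) position over dt."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def hybrid_prediction(pos, vel, dt, residual_model):
    """Physics core plus a (learned) residual correction."""
    px, py = physics_prediction(pos, vel, dt)
    rx, ry = residual_model(pos, vel, dt)  # data-driven correction term
    return (px + rx, py + ry)

# With a zero residual the hybrid reduces exactly to the physics model;
# in practice the residual would come from a trained network.
def zero_residual(pos, vel, dt):
    return (0.0, 0.0)
```

The appeal of this split for the interpretability and feasibility goals above is that the physics term is always inspectable and dynamically plausible, while the learned part is confined to a bounded correction.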
9

AI-Driven Operational Efficiency & AI Adoption in Real Estate in Sweden / AI-driven operationell effektivitet och AI-adoptering inom fastighetsbranschen i Sverige

Tayefeh, Sam, Niklasson, Anton January 2024 (has links)
Artificial intelligence (AI) has gained tremendous popularity in recent years, influencing the majority of industry sectors worldwide with its automation, generative, and analytical abilities. However, the real estate industry has been slow to adapt compared to others. This cautious approach is due to worries about costs, integrating new systems, and keeping data secure. As a result, real estate firms often take their time to adapt to these changes in a rapidly evolving market. This study investigates the challenges and opportunities for the use of AI in Sweden's real estate market. It is a qualitative study based on existing literature and interviews with representatives from 11 well-known Swedish companies connected to the real estate industry in different ways. The collected data provide an overview of the present level of AI application, outlining both the challenges that the industry faces and the opportunities for technological adaptation. The study dives deeper into these integration problems, highlighting important roadblocks such as cultural skepticism, reluctance to change, and worries about data protection. These issues highlight the complexity of incorporating new technologies into traditional real estate procedures, emphasizing the need for a nuanced approach to technology adoption. Several strategic recommendations are made, including encouraging strategic collaborations, instituting strong data security measures, and undertaking ongoing training programs to improve workforce proficiency. These measures are intended to make AI integration more seamless and to fully realize its potential in the industry. Overall, the thesis argues that AI can improve the operational efficiency of Sweden's real estate market; however, attaining its full potential requires overcoming these hurdles through strategic interventions and cultural changes.
10

Automatic Text Ontological Representation and Classification via Fundamental to Specific Conceptual Elements (TOR-FUSE)

Razavi, Amir Hossein 16 July 2012 (has links)
In this dissertation, we introduce a novel text representation method used mainly for text classification purposes. The presented representation method is initially based on a variety of closeness relationships between pairs of words in text passages within the entire corpus. This representation is then used as the basis for our multi-level lightweight ontological representation method (TOR-FUSE), in which documents are represented based on their contexts and the goal of the learning task. The method is unlike traditional representation methods, in which all documents are represented solely based on their constituent words, totally isolated from the goal they are represented for. We believe choosing the correct granularity of representation features is an important aspect of text classification. Interpreting data in a more general dimensional space, with fewer dimensions, can convey more discriminative knowledge and decrease the level of learning perplexity. The multi-level model allows data interpretation in a more conceptual space, rather than one containing only scattered words occurring in texts. It aims to extract the knowledge tailored for the classification task by automatically creating a lightweight ontological hierarchy of representations. In the last step, we train a tailored ensemble learner over a stack of representations at different conceptual granularities. The final result is a mapping and a weighting of the targeted concept of the original learning task over a stack of representations and the granular conceptual elements of its different levels (a hierarchical mapping instead of a linear mapping over a vector). Finally, the entire algorithm is applied to a variety of general text classification tasks, and its performance is evaluated in comparison with well-known algorithms.
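The initial closeness relationships between word pairs can be sketched as windowed co-occurrence counts. A minimal illustration, where the window size and the whitespace tokenization are assumptions rather than the dissertation's exact definition:

```python
from collections import Counter

def cooccurrence_counts(passages, window=3):
    """Count unordered word pairs appearing within `window` positions of each other."""
    counts = Counter()
    for passage in passages:
        words = passage.lower().split()
        for i, w in enumerate(words):
            # Pair w with the words that follow it inside the window.
            for v in words[i + 1:i + window]:
                counts[tuple(sorted((w, v)))] += 1
    return counts
```

A matrix built from such counts over the whole corpus would be the raw material that the higher, more conceptual levels of the TOR-FUSE hierarchy then generalize.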
