About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Artificiell intelligens som ett beslutsstöd inom mammografi : En kvalitativ studie om radiologers perspektiv på icke-tekniska utmaningar / Artificial intelligence as a decision support in mammography : A qualitative study about radiologists' perspectives on non-technical challenges

Klingvall, Emelie January 2020 (has links)
Artificial intelligence (AI) has become more commonly used to support people when making decisions. Machine learning (ML) is a sub-area of AI that has become more frequently used in healthcare. Patient data is increasing in healthcare, and an AI system can help process this growing amount of data, which in turn can support decision aids that help doctors. AI technology is becoming more common in radiology, and specifically in mammography, as a decision support. The use of AI technology in mammography has many benefits, but there are also challenges that are not connected to the technology. Non-technical challenges are important to consider and review in order to establish a successful practice. The purpose of this thesis is therefore to examine non-technical challenges in using AI as a decision support in mammography from a radiological perspective. Radiologists with experience in mammography were interviewed in order to increase knowledge about their views on this use. The results identified and developed the non-technical challenges under the themes responsibility, human abilities, acceptance, education/knowledge, and collaboration. The study also found indications that, within these themes, some non-technical challenges and associated aspects are more prominent than others. The study increases knowledge of radiologists' views on the use of AI and contributes to future research for all the actors involved. Future research can address these non-technical challenges even before the technology is implemented, to reduce the risk of complications.
12

Classify part of day and snow on the load of timber stacks : A comparative study between partitional clustering and competitive learning

Nordqvist, My January 2021 (has links)
In today's society, companies are trying to find ways to utilize all the data they have, which holds valuable information and insights for making better decisions. This includes data used to keep track of timber that flows between forest and industry. The growth of Artificial Intelligence (AI) and Machine Learning (ML) has enabled the development of ML models to automate the measurement of timber on timber trucks, based on images. However, to improve the results there is a need to extract information from unlabeled images in order to determine weather and lighting conditions. The objective of this study is to perform an extensive comparison of methods for classifying unlabeled images into the categories daylight, darkness, and snow on the load. A comparative study between partitional clustering and competitive learning is conducted to investigate which method gives the best results in terms of different clustering performance metrics. It also examines how dimensionality reduction affects the outcome. The algorithms K-means and Kohonen Self-Organizing Map (SOM) are selected for the clustering. Each model is investigated with respect to the number of clusters, size of dataset, clustering time, clustering performance, and manual samples from each cluster. The results indicate a noticeable clustering performance discrepancy between the algorithms concerning the number of clusters, dataset size, and manual samples. The use of dimensionality reduction led to shorter clustering time but slightly worse clustering performance. The evaluation results further show that the clustering time of Kohonen SOM is significantly higher than that of K-means.
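
A minimal sketch (not from the thesis) of the comparison described above: clustering image feature vectors with K-means and a Kohonen SOM after optional PCA-based dimensionality reduction. The feature vectors, SOM grid size, and cluster count are illustrative assumptions; it uses scikit-learn and the minisom package.

```python
# Sketch: compare K-means (partitional clustering) with a Kohonen SOM
# (competitive learning) on image feature vectors. Requires scikit-learn
# and the 'minisom' package; all sizes below are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from minisom import MiniSom

rng = np.random.default_rng(0)
features = rng.random((1000, 128))          # stand-in for image features

# Optional dimensionality reduction (the study found it speeds clustering up).
reduced = PCA(n_components=16).fit_transform(features)

# Partitional clustering: K-means with k = 3 (daylight, darkness, snow).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(reduced)
print("K-means silhouette:", silhouette_score(reduced, kmeans.labels_))

# Competitive learning: a small SOM grid; each unit acts as a prototype.
som = MiniSom(3, 1, reduced.shape[1], sigma=0.5, learning_rate=0.5,
              random_seed=0)
som.train_random(reduced, num_iteration=5000)
som_labels = np.array([som.winner(x)[0] for x in reduced])
print("SOM silhouette:", silhouette_score(reduced, som_labels))
```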
13

Digitalisation of Predetermined Motion Time Systems : An Investigation Towards Automated Time Setting Processes

Gans, Jesper January 2023 (has links)
Time setting in production operations is necessary to properly takt and balance the flow of assembly and logistics. Time setting activities are also crucial to achieve an optimised, healthy and ergonomic assembly and logistics operation. But time setting is seldom done at a detailed enough level before being deployed on the shop floor, which necessitates further time setting work to make it reflect the work actually carried out and to fit it to the local production area. There is also a need to redo the time setting whenever a process or product changes. Nowadays, time setting is often performed using largely manual methods with Predetermined Motion Time Systems (PMTS), sometimes with the aid of digital tools that replace pen and paper but that otherwise work practically the same way the method has since its inception in the first half of the 20th century. This is a process that requires skill, experience and often much time, but is also monotonous and repetitive. To aid the time setting process and bring PMTS into Industry 5.0, a digitalised, smart tool is proposed in which video feeds a computer program that performs the movement classification and time setting accurately and faster than current manual processes can achieve. However, the needs, challenges, and general function of such a system are not well researched in the literature. This thesis thus delivers an analysis of the current state of the time setting process at a large multinational truck manufacturer with production sites in Sweden and abroad, an overview of technologies for a digitalised, smart PMTS, and a conceptual framework for analysing production tasks using a digitalised, smart system. The framework is then partially implemented in a demonstrator to showcase the usefulness of the system and how it would work in practice.
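
A minimal sketch (not from the thesis) of the time-setting step such a tool would automate: mapping classified motions to predetermined times. The 1 TMU = 0.036 s conversion is the standard MTM convention, but the motion labels, per-motion TMU values, and classifier output below are illustrative placeholders.

```python
# Sketch: turn a sequence of video-classified motions into a standard time.
# 1 TMU = 0.036 s (the MTM convention); the per-motion TMU values below are
# illustrative placeholders, not real MTM-1 table entries.
TMU_TO_SECONDS = 0.036

motion_times_tmu = {          # hypothetical PMTS lookup table
    "reach": 10.0,
    "grasp": 2.0,
    "move": 12.0,
    "position": 20.0,
    "release": 2.0,
}

def time_setting(classified_motions: list[str]) -> float:
    """Sum predetermined times for a classified motion sequence, in seconds."""
    total_tmu = sum(motion_times_tmu[m] for m in classified_motions)
    return total_tmu * TMU_TO_SECONDS

# Output of a (hypothetical) video movement classifier for one work cycle:
cycle = ["reach", "grasp", "move", "position", "release"]
print(f"Standard time: {time_setting(cycle):.2f} s")
```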
14

An innovative internet of things solution to control real-life autonomous vehicles

Wahl, Roger L. 06 1900 (has links)
M. Tech. (Department of Information Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology. / This research was initiated because of a global increase in congestion on roads and the consequent increase in the rate of fatalities on both national and international roads. Annually, 1.3 million people are killed on the roads globally, and millions are injured. It was estimated that 2.4 million people would be killed in road traffic accidents annually by 2030, and in South Africa, over 14 000 deaths were reported in 2016. A study undertaken by the American Automobile Association Foundation for Traffic Safety (AAAFTS), established in 1947 to conduct research and address growing highway safety issues, found that motorcar accidents, on average, cost the United States $300 billion per annum. In the same vein, the World Health Organisation (WHO) asserted in its 2013 Global Status Report on Road Safety that by 2020, traffic accidents would become the third leading cause of death globally. In the organisation's 2015 report, South Africa was listed as having one of the highest road fatality rates in the world, averaging 27 per 100 000 people. Cognisance of these statistics, which describe wanton loss of life and serious economic implications, among other reasons, led to the development of autonomous vehicles (AVs), such as Google's and Uber's driverless taxis and Tesla's autonomous vehicle. Companies have invested in self-driving prototypes, and they bolster this investment with continuous research to rectify imperfections in the technologies and to enable the deployment of AVs on conventional roads. This research aimed to address issues surrounding the systems communication concept, and focused on a novel approach to the routing facet of AVs: exploring the mechanisms of the virtual system of packet switching and applying these same principles to route autonomous vehicles. This implies that automated vehicles depart from a source address and arrive at a pre-determined destination address in a manner analogous to packet switching in computer networking, where a data packet is allotted a source and a destination address as it traverses the Open Systems Interconnection (OSI) model prior to dissemination through the network. This research aimed to develop an IoT model that reduces road congestion by means of a cost-effective and reliable method of routing AVs and lessens dependency on vehicle-to-vehicle (V2V) communication with its heavy and costly sensor equipment and GPS, all of which can malfunction under certain conditions. At the same time, as safety remains the foremost concern, the concept aimed to reduce the human factor to a considerable degree. The researcher demonstrated this by designing a computer-simulated Internet of Things (IoT) model of the concept. Experimental research in the form of a computer simulation was adopted as the most appropriate research approach. A prototype was developed containing the algorithms that simulated the theoretical model of IoT vehicular technology. The merits of the constructed prototype were analysed and discussed, and the results obtained from the implementation exercise were shared. Analysis was conducted to verify the arguments and assumptions underlying the theory, and the outcome of the research (an IoT model encompassing vehicular wireless technologies) shows how the basic concept of packet switching can be assimilated as an effective mechanism to route large-scale autonomous vehicles within the IoT milieu, culminating in an effective commuter operating system. Controlled routing will invariably save the traveller time, provide independence to those who cannot drive, and decrease the greenhouse effect, whilst the packet switching characteristic offers greater overall security. In addition, the implications of this research will require a workforce to supplement new growth opportunities.
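
A minimal sketch (not from the thesis) of the routing analogy described above: each vehicle is treated like a packet with a source and a destination address, and each intersection forwards it to the next hop the way a router would. The road graph, weights, and addresses are illustrative assumptions.

```python
# Sketch: route a "vehicle packet" hop by hop through a road graph, the way
# a router forwards a packet toward its destination address. The graph,
# weights, and addresses are made up for illustration.
import heapq

road_graph = {  # intersection -> {neighbour: travel cost}
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "C": 5, "D": 10},
    "C": {"A": 2, "B": 5, "D": 3},
    "D": {"B": 10, "C": 3},
}

def next_hop(node: str, dest: str) -> str:
    """Dijkstra from `node`; return the first hop on the cheapest path."""
    best = {node: (0.0, None)}                 # node -> (cost, first hop)
    queue = [(0.0, node, None)]
    while queue:
        cost, cur, first = heapq.heappop(queue)
        if cur == dest:
            return first
        for nbr, w in road_graph[cur].items():
            nc = cost + w
            if nbr not in best or nc < best[nbr][0]:
                hop = first if first is not None else nbr
                best[nbr] = (nc, hop)
                heapq.heappush(queue, (nc, nbr, hop))
    raise ValueError("destination unreachable")

# A vehicle "packet" with source and destination addresses:
vehicle, pos, dest = {"id": "AV-1"}, "A", "D"
while pos != dest:
    hop = next_hop(pos, dest)                  # each intersection forwards it
    print(f"{vehicle['id']}: {pos} -> {hop}")
    pos = hop
```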
15

Deep Image Processing with Spatial Adaptation and Boosted Efficiency & Supervision for Accurate Human Keypoint Detection and Movement Dynamics Tracking

Chao Yang Dai (14709547) 31 May 2023 (has links)
This thesis aims to design and develop a spatial adaptation approach, based on spatial transformers, to improve the accuracy of human keypoint recognition models. We have studied different model types and design choices to gain an accuracy increase over models without spatial transformers and analyzed how spatial transformers increase the accuracy of predictions. A neural network called Widenet has been leveraged as a specialized network for providing the parameters for the spatial transformer. Further, we have evaluated methods to reduce the model parameters, as well as a strategy to enhance the learning supervision, to further improve the performance of the model. Our experiments and results have shown that the proposed deep learning framework can effectively detect human keypoints compared with the baseline methods. Also, we have reduced the model size without significantly impacting the performance, and the enhanced supervision has improved the performance. This study is expected to greatly advance the deep learning of human keypoints and movement dynamics.
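
A minimal sketch (not from the thesis) of the standard spatial transformer mechanism the abstract builds on: a small localization network predicts an affine transform that is used to resample the input before keypoint prediction. The layer sizes are illustrative, and a generic CNN stands in for the thesis's Widenet localization network.

```python
# Sketch of a spatial transformer module (Jaderberg et al. style) in PyTorch:
# a localization net predicts a 2x3 affine matrix, and the input is resampled
# with that transform before keypoint prediction. Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.loc = nn.Sequential(                    # localization network
            nn.Conv2d(channels, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(10, 6),
        )
        # Initialize to the identity transform so training starts stably.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.loc(x).view(-1, 2, 3)           # per-image affine params
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)  # warped input

x = torch.randn(4, 3, 128, 128)                      # batch of images
print(SpatialTransformer()(x).shape)                 # torch.Size([4, 3, 128, 128])
```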
16

Data Augmentation GUI Tool for Machine Learning Models

Sharma, Sweta 30 October 2023 (has links)
The industrial production of semiconductor assemblies is subject to high requirements. As a result, several tests of component quality are needed. In the long run, manual quality assurance (QA) is often associated with higher expenditures. Using a technique based on machine learning, some of these tests may be carried out automatically. Deep neural networks (NNs) have proven very effective in a diverse range of computer vision applications. Especially convolutional neural networks (CNNs), a subset of NNs, are an effective tool for image classification. Deep NNs have the disadvantage of requiring a significant quantity of training data to reach excellent performance. When the dataset is too small, a phenomenon known as overfitting can occur. Massive amounts of data cannot be supplied in certain contexts, such as the production of semiconductors. This is especially true given the relatively low number of rejected components in this field. In order to prevent overfitting, a variety of image augmentation methods may be applied to artificially create training images. However, many of those methods cannot be used in certain fields due to their inapplicability. For this thesis, Infineon Technologies AG provided images of a semiconductor component generated by an ultrasonic microscope. The images comprise a sufficient number of good components and a minority of rejected ones, with good components defined as those deemed to have passed quality control and rejected components as those that contain a defect and did not pass quality control. The accomplishment of the project, the efficacy with which it is carried out, and its level of quality may depend on a number of factors; selecting the appropriate tools is one of the most important of these, because it enables significant time and resource savings while also producing the best results. We demonstrate a data augmentation graphical user interface (GUI) tool for image processing, a domain in which data augmentation is widely used. Using this approach, the dataset size has been increased while maintaining the accuracy-time trade-off and optimizing the robustness of deep learning models. The purpose of this work is to develop a user-friendly tool that incorporates traditional, advanced, and smart data augmentation, image processing, and machine learning (ML) approaches. More specifically, the techniques mainly used are zooming, rotation, flipping, cropping, GANs, fusion, histogram matching, autoencoders, image restoration, compression, etc. The thesis focuses on implementing and designing a MATLAB GUI for data augmentation and ML models. It was carried out for Infineon Technologies AG in order to address a challenge that all semiconductor industries experience. The key objective is not only to create an easy-to-use GUI, but also to ensure that its users do not need advanced technical experience to operate it. The GUI may run on its own as a standalone application, which may be deployed anywhere for the purposes of data augmentation and classification. The objective is to streamline the working process and make it easy to complete the quality assurance job even for those who are not familiar with data augmentation, machine learning, or MATLAB. In addition, the research investigates the benefits of data augmentation and image processing, as well as the possibility that these factors might contribute to an improvement in the accuracy of AI models.
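
A minimal sketch (not from the thesis, which implements a MATLAB GUI) of the classical augmentations listed above, written in Python with torchvision purely for illustration; the file name and transform parameters are assumptions.

```python
# Sketch: classical data augmentation (zoom/crop, rotation, flip) of the kind
# the GUI bundles. The thesis implements this in MATLAB; this torchvision
# version only illustrates the operations. Parameters are arbitrary.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),               # rotation
    transforms.RandomHorizontalFlip(p=0.5),              # flipping
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)), # zooming + cropping
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

image = Image.open("component.png").convert("RGB")       # hypothetical scan
# Generate several augmented variants of one scarce "rejected" sample:
variants = [augment(image) for _ in range(8)]
for i, v in enumerate(variants):
    v.save(f"component_aug_{i}.png")
```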
17

Extraction of gating mechanisms from Markov state models of a pentameric ligand-gated ion channel

Karalis, Dimitrios January 2021 (has links)
GLIC is a proton-gated pentameric ligand-gated ion channel (pLGIC) found in the cell membrane of the prokaryote Gloeobacter violaceus. GLIC is the prokaryotic homolog of several receptors found in the nervous system of many eukaryotic organisms. These receptors are targets for the development of pharmaceutical drugs that interfere with the gating of these channels - such drugs include anesthetics and stimulants. Understanding the mechanism of a drug's target is a high priority for the development of a novel medicine. However, eukaryotic pLGICs are complex to analyse, because some of them are heteromeric, have more domains, and undergo post-translational modifications (PTMs). GLIC, on the other hand, has a simpler structure, and it is enough to study the structure of only one subunit - since all subunits are identical. Several possible gating mechanisms have been proposed by the scientific community, but the complete gating of GLIC remains unclear. The goal of this project is to apply machine learning (ML) to discover novel gating mechanisms by computational approaches. The starting data was extracted from previous research where computational tools like unbiased molecular dynamics (MD), elastic network-driven Brownian dynamics (eBDIMS), and Markov state models (MSMs) were used. With those tools, the protein was simulated as wild-type and with a gain-of-function mutation at two different pH values. Five macrostates were constructed: two open, two closed, and an intermediate. In this project another ML tool was used: KL divergence. This tool was used to score the difference between the distance distributions of one open and one closed macrostate. The starting data was used to create a tensor that stored all residue-residue distances. Each residue pair had its own metadata, which in turn was used to yield the distance distributions of all five pre-built MSMs. The average KL scores between the two states of interest were then calculated and used to filter out residue pairs with overlapping distance distributions. To make sure that the residues within a pair can interact with each other, all residue pairs with very large minimum and average distances were filtered out as well. The residue pairs that remained were then evaluated across all five macrostates for further study. Important novel mechanisms discovered in this project through both the KL divergence and the macrostate distributions involve the M2-M3 loop of one subunit and both the β8-β9 loop/N-terminal β9 strand and the pre-M1/N-terminal M1 region of the neighboring subunit. The β8-β9 loop (Loop F) showed high KL scores with the β1-β2 and β6-β7 (Pro-loop) loops, as well as decreasing distances upon the channel's opening. Other notable gating mechanisms involve the pairing of residues from the β4-β5 loop (Loop A) with residues from the strands β1 and β6, as well as the kink of the pore-lining helix. KL divergence proved a valuable tool for filtering the available data, and the novel mechanisms can prove useful both to the academic community, which seeks to unravel the complete gating mechanism of GLIC, and to pharmaceutical companies searching for new binding sites within the molecule for new drugs.
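
A minimal sketch (not from the thesis) of the filtering step described above: estimating a residue pair's distance distributions in an open and a closed macrostate and scoring their difference with KL divergence. The synthetic distances and histogram binning are illustrative assumptions.

```python
# Sketch: KL divergence between a residue pair's distance distributions in an
# open and a closed macrostate. High scores flag pairs whose distributions
# barely overlap; the synthetic distances stand in for MD/MSM data.
import numpy as np

rng = np.random.default_rng(1)
open_dists = rng.normal(loc=12.0, scale=1.0, size=5000)    # Å, open state
closed_dists = rng.normal(loc=9.5, scale=1.2, size=5000)   # Å, closed state

def kl_divergence(p_samples, q_samples, bins=50, eps=1e-10):
    """D_KL(P || Q) from samples, via shared-support histograms."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi), density=True)
    p, q = p + eps, q + eps            # avoid log(0) on empty bins
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

score = kl_divergence(open_dists, closed_dists)
print(f"KL(open || closed) = {score:.2f}")   # keep pair if above a cutoff
```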
18

Two New Applications of Tensors to Machine Learning for Wireless Communications

Bhogi, Keerthana 09 September 2021 (has links)
With the increasing number of wireless devices and the phenomenal amount of data that is being generated by them, there is a growing interest in the wireless communications community to complement the traditional model-driven design approaches with data-driven machine learning (ML)-based solutions. However, managing the large-scale multi-dimensional data to maintain the efficiency and scalability of the ML algorithms has been a challenge. Tensors provide a useful framework to represent multi-dimensional data in an integrated manner by preserving relationships in data across different dimensions. This thesis studies two new applications of tensors to ML for wireless communications where the tensor structure of the concerned data is exploited in novel ways. The first contribution of this thesis is a tensor learning-based low-complexity precoder codebook design technique for a full-dimension multiple-input multiple-output (FD-MIMO) system with a uniform planar antenna (UPA) array at the transmitter (Tx) whose channel distribution is available through a dataset. Represented as a tensor, the FD-MIMO channel is further decomposed using a tensor decomposition technique to obtain an optimal precoder which is a function of the Kronecker product (KP) of two low-dimensional precoders, each corresponding to the horizontal and vertical dimensions of the FD-MIMO channel. From the design perspective, we have made contributions in deriving a criterion for optimal product precoder codebooks using the obtained low-dimensional precoders. We show that this product codebook design problem is an unsupervised clustering problem on a Cartesian Product Grassmann Manifold (CPM), where the optimal cluster centroids form the desired codebook. We further simplify this clustering problem to a K-means algorithm on the low-dimensional factor Grassmann manifolds (GMs) of the CPM which correspond to the horizontal and vertical dimensions of the UPA, thus significantly reducing the complexity of precoder codebook construction when compared to the existing codebook learning techniques. The second contribution of this thesis is a tensor-based bandwidth-efficient gradient communication technique for federated learning (FL) with convolutional neural networks (CNNs). Concisely, FL is a decentralized ML approach that allows an ML model to be trained jointly at the server using the data generated by the distributed users, coordinated by the server, by sharing only the local gradients with the server and not the raw data. Here, we focus on efficient compression and reconstruction of convolutional gradients at the users and the server, respectively. To reduce the gradient communication overhead, we compress the sparse gradients at the users to obtain their low-dimensional estimates using a compressive sensing (CS)-based technique and transmit them to the server for joint training of the CNN. We exploit the natural tensor structure offered by the convolutional gradients to demonstrate the correlation of a gradient element with its neighbors. We propose a novel prior for the convolutional gradients that captures the described spatial consistency along with their sparse nature in an appropriate way. We further propose a novel Bayesian reconstruction algorithm based on the Generalized Approximate Message Passing (GAMP) framework that exploits this prior information about the gradients. Through numerical simulations, we demonstrate that the developed gradient reconstruction method improves the convergence of the CNN model.
/ Master of Science / The increase in the number of wireless and mobile devices has led to the generation of massive amounts of multi-modal data at the users in various real-world applications, including wireless communications. This has led to an increasing interest in machine learning (ML)-based data-driven techniques for communication system design. The native setting of ML is centralized, where all the data is available on a single device. However, the distributed nature of the users and their data has also motivated the development of distributed ML techniques. Since the success of ML techniques is grounded in their data-based nature, there is a need to maintain the efficiency and scalability of the algorithms to manage the large-scale data. Tensors are multi-dimensional arrays that provide an integrated way of representing multi-modal data. Tensor algebra and tensor decompositions have enabled the extension of several classical ML techniques to tensor-based ML techniques in various application domains such as computer vision, data mining, image processing, and wireless communications. Tensor-based ML techniques have been shown to improve the performance of ML models because of their ability to leverage the underlying structural information in the data. In this thesis, we present two new applications of tensors to ML for wireless applications and show how the tensor structure of the concerned data can be exploited and incorporated in different ways. The first contribution is a tensor learning-based precoder codebook design technique for full-dimension multiple-input multiple-output (FD-MIMO) systems, where we develop a scheme for designing low-complexity product precoder codebooks by identifying and leveraging a tensor representation of the FD-MIMO channel. The second contribution is a tensor-based gradient communication scheme for a decentralized ML technique known as federated learning (FL) with convolutional neural networks (CNNs), where we design a novel bandwidth-efficient gradient compression-reconstruction algorithm that leverages a tensor structure of the convolutional gradients. The numerical simulations in both applications demonstrate that exploiting the underlying tensor structure in the data provides significant gains in their respective performance criteria.
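
A minimal sketch (not from the thesis) of the Kronecker-product precoder structure described in the first contribution: a uniform planar array precoder factored into vertical and horizontal components. Array sizes and steering angles are illustrative assumptions.

```python
# Sketch: Kronecker-product (KP) precoder structure for a uniform planar
# array. A full precoder for an 8x4 UPA is the Kronecker product of a
# vertical and a horizontal precoder, so codebooks can be designed (e.g.,
# clustered with K-means) in the two small factor spaces instead of the
# large product space. Sizes and steering angles are illustrative.
import numpy as np

def steering(n: int, angle: float) -> np.ndarray:
    """Unit-norm ULA steering vector, n antennas, half-wavelength spacing."""
    v = np.exp(1j * np.pi * np.arange(n) * np.sin(angle))
    return v / np.linalg.norm(v)

w_v = steering(8, 0.3)          # vertical precoder, 8 antenna rows
w_h = steering(4, -0.1)         # horizontal precoder, 4 antenna columns
w = np.kron(w_v, w_h)           # full UPA precoder, length 32

print(w.shape)                           # (32,)
print(np.allclose(np.linalg.norm(w), 1)) # KP of unit vectors stays unit norm
```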
19

Prediction Models for TV Case Resolution Times with Machine Learning / Förutsägelsemodeller för TV-fall Upplösningstid med maskininlärning

Javierre I Moyano, Borja January 2023 (has links)
TV distribution and streaming of video content over the Internet rely on complex networks, including Content Delivery Networks (CDNs), cables and end-point user devices, and are therefore prone to issues appearing at different levels of the network that end up affecting the final customer's TV services. When a problem affects the customer and prevents a proper TV delivery service on the devices used for streaming, the issue is reported through a call, a TV case is opened, and the company's customer handling agents supervise it to solve the problem as soon as possible. The goal of this research work is to present an ML-based solution that predicts the Resolution Times (RTs) of TV cases for each TV delivery service type, that is, how long the cases will take to be solved. The approach taken to provide meaningful results consisted in utilizing four Machine Learning (ML) algorithms to create 480 models for each of the two scenarios. The results revealed that Random Forest (RF) and, especially, Gradient Boosting Machine (GBM) performed exceptionally well. Surprisingly, hyperparameter tuning did not significantly improve the RT predictions as expected. Challenges included the initial data preprocessing and some uncertainty in the hyperparameter tuning approaches. Thanks to these predicted times, the company is now able to better inform its customers of how long a problem is expected to last until it is resolved. This real case scenario also considers how the company processes the available data and manages the problem. The research work consists of, first, a literature review on the prediction of Trouble Ticket (TT) resolution times and customer churn in telecommunication companies, as well as a study of the company's available data for the problem. The research then focuses on analysing the provided dataset for the experimentation, preprocessing this data according to industry standards and, finally, producing the predictions and analysing the obtained performance metrics. The proposed solution is designed to offer an improved resolution for the company's specified task. Future work could involve increasing the number of TV cases per service to improve the results, and exploring the link between resolution times and customer churn decisions.
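
A minimal sketch (not from the thesis) of the modelling approach described above: training Random Forest and Gradient Boosting regressors to predict case resolution times and comparing them by mean absolute error. The features and data are synthetic stand-ins.

```python
# Sketch: predict TV-case resolution times (hours) with the two best-
# performing model families from the study, RF and GBM. The synthetic
# features stand in for the company's real case data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 5, n),        # service type
    rng.integers(0, 24, n),       # hour the case was opened
    rng.integers(0, 3, n),        # priority
])
y = 4 + 2 * X[:, 2] + rng.exponential(3, n)   # synthetic resolution time

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{type(model).__name__}: MAE = {mae:.2f} h")
```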
20

AI inom radiologi, nuläge och framtid / AI in radiology, now and the future

Täreby, Linus, Bertilsson, William January 2023 (has links)
This essay presents the results of a qualitative study aimed at gaining a deeper understanding of the use of artificial intelligence (AI) in radiology, its potential impact on the profession, and how it is used today. Through three interviews with individuals working in radiology, data collection focused on identifying the positive and negative aspects of AI in radiology, as well as its potential consequences for the profession. The results show a general acceptance of AI in radiology and of its ability to improve diagnostic processes and streamline work. At the same time, there is some concern that AI may replace humans and reduce the need for human judgment. This essay provides a basic understanding of how AI is used in radiology and its possible future consequences.
