
Using a bio-metric feedback device to enhance player experience in horror games

Hedlund, Ted, Norrlind, Olof January 2023 (has links)
This paper investigates whether a biofeedback device can increase a player's experience of thrill and suspense in a horror game. To facilitate this, two versions of the same horror game were created, both connected to a heart rate monitor. The difference between the two versions was that in one, the core game elements were controlled by the player's heart rate, attempting to keep the player in a constant state of suspense and thereby enhance the experience and thrill. The two versions were then play-tested by users, who had no insight into which version they were testing; afterward, a questionnaire was administered to ascertain each tester's emotional responses. The collected data was then analyzed, and a pattern could be observed: testers preferred the version of the game that was controlled by the heart rate. This result, backed by previous studies, shows that using a biofeedback device to feed even just the heart rate into a game has a marked positive effect on player experience. Still, additional research with a larger control group is needed to obtain more accurate results.
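The thesis does not publish its implementation, so the following is a minimal sketch of how such a heart-rate-driven loop might work; the target band, step size, and the mapping to game elements are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): adapt horror-game intensity
# so the player's measured heart rate stays inside a target "suspense" band.
TARGET_LOW, TARGET_HIGH = 90, 120  # assumed bpm band for sustained suspense

def update_intensity(intensity: float, heart_rate_bpm: float) -> float:
    """Nudge game intensity (0..1) toward keeping the heart rate in band."""
    if heart_rate_bpm < TARGET_LOW:
        return min(1.0, intensity + 0.05)   # calm player: raise the tension
    if heart_rate_bpm > TARGET_HIGH:
        return max(0.0, intensity - 0.05)   # overwhelmed player: back off
    return intensity                        # in band: hold steady

def apply_intensity(intensity: float) -> dict:
    """Map intensity onto the kind of core elements the abstract mentions."""
    return {
        "ambient_volume": 0.3 + 0.7 * intensity,
        "monster_spawn_rate": 0.1 + 0.9 * intensity,
        "lighting_level": 1.0 - 0.6 * intensity,
    }
```

Called once per game tick with the latest monitor reading, a loop like this realizes the "constant suspense" behaviour the study compares against the uncontrolled version.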

Sharing to learn and learning to share : Fitting together metalearning and multi-task learning

Upadhyay, Richa January 2023 (has links)
This thesis focuses on integrating learning paradigms that 'share to learn,' i.e., Multi-task Learning (MTL), and 'learn (how) to share,' i.e., meta learning. MTL involves learning several tasks simultaneously within a shared network structure so that the tasks can mutually benefit each other's learning. Meta learning, better known as 'learning to learn,' is an approach to reducing the amount of time and computation required to learn a novel task by leveraging knowledge accumulated over the course of numerous training episodes of various tasks.

The learning process in the human brain is innate and natural; even before birth, it is capable of learning and memorizing. As a consequence, humans do not learn everything from scratch, and because they are naturally capable of effortlessly transferring their knowledge between tasks, they quickly learn new skills. Humans naturally tend to believe that similar tasks have (somewhat) similar solutions or approaches, so sharing knowledge from a previous activity makes it feasible to learn a new task quickly in a few tries. For instance, the skills acquired while learning to ride a bike are helpful when learning to ride a motorbike, which is, in turn, helpful when learning to drive a car. This natural learning process, which involves sharing information between tasks, has inspired several research areas in Deep Learning (DL), such as transfer learning, MTL, meta learning, Lifelong Learning (LL), and many more, to create similar neurally weighted algorithms. These information-sharing algorithms exploit the knowledge gained from one task to improve the performance of another related task, but they vary in terms of what information they share, when they share it, and why.

This thesis focuses particularly on MTL and meta learning, and presents a comprehensive explanation of both learning paradigms. A theoretical comparison of the two demonstrates that the strengths of one can outweigh the constraints of the other; this work therefore aims to combine MTL and meta learning to attain the best of both worlds. The main contribution of this thesis is Multi-task Meta Learning (MTML), an integration of MTL and meta learning. As gradient- (or optimization-) based meta learning follows an episodic approach to train a network, we propose multi-task learning episodes to train an MTML network. The basic idea is to train a multi-task model using bi-level meta-optimization so that, when a new task is added, it can learn in fewer steps and perform at least as well as traditional single-task learning on the new task. The MTML paradigm is demonstrated on two publicly available datasets, NYU-v2 and Taskonomy, for which four tasks are considered: semantic segmentation, depth estimation, surface normal estimation, and edge detection. This work presents a comparative empirical analysis of MTML against single-task and multi-task learning, where it is evident that MTML excels on most tasks.

The future direction of this work includes developing efficient and autonomous MTL architectures by exploiting the concepts of meta learning. The main goal will be to create a task-adaptive MTL, where meta learning may learn to select layers (or features) from the shared structure for every task, because not all tasks require the same high-level, fine-grained features from the shared network. This can be seen as another way of combining MTL and meta learning, and it will also introduce modular learning into the multi-task architecture.
Furthermore, this work can be extended to include multi-modal multi-task learning, which will help to study the contributions of each input modality to various tasks.
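The abstract does not spell out the optimization in code, but a first-order sketch of "multi-task learning episodes" inside a bi-level loop could look as follows; the model returning a dict of per-task outputs, the task-to-loss mapping, and the first-order approximation are all assumptions of this sketch, not the thesis's published method.

```python
# Schematic MTML episode (an assumption, not the thesis code): the inner
# loop adapts a copy of the shared multi-task network on a support batch,
# the outer loop updates the original weights from the query-batch loss.
import copy
import torch

def mtml_episode(model, losses, x_s, ys, x_q, yq, outer_opt,
                 inner_lr=1e-2, inner_steps=1):
    """`model(x)` is assumed to return {task: output}; `losses` maps each
    task name to its loss function; (x_s, ys) / (x_q, yq) are the support
    and query splits of one multi-task episode."""
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                       # inner (adaptation) loop
        loss = sum(fn(adapted(x_s)[t], ys[t]) for t, fn in losses.items())
        inner_opt.zero_grad(); loss.backward(); inner_opt.step()

    query_loss = sum(fn(adapted(x_q)[t], yq[t]) for t, fn in losses.items())
    grads = torch.autograd.grad(query_loss, adapted.parameters())
    outer_opt.zero_grad()                              # outer (meta) update,
    for p, g in zip(model.parameters(), grads):        # first-order variant
        p.grad = g
    outer_opt.step()
    return float(query_loss)
```

The intent mirrors the abstract: after enough such episodes over segmentation, depth, normals, and edges, the shared weights should adapt to a newly added task in fewer steps.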

The Performance, Interoperability and Integration of Distributed Ledger Technologies

Palm, Emanuel January 2019 (has links)
In the wake of the financial crisis of 2008, Bitcoin emerged as a radical new alternative to the fiat currencies of the traditional banking sector. Through the use of a novel kind of probabilistic consensus algorithm, Bitcoin proved it possible to guarantee the integrity of a digital currency by relying on network majority votes instead of trusted institutions. By showing that it was technically feasible to, at least to some extent, replace the entire banking sector with computers, it prompted many significant actors to ask what else this new technology could help automate. A subsequent, seemingly inevitable, wave of efforts produced a multitude of new distributed ledger systems, architectures and applications, all somehow attempting to leverage distributed consensus algorithms to replace trusted intermediaries facilitating value ownership, transfer and regulation.

In this thesis, we scrutinize distributed ledger technologies in terms of how they could help facilitate the digitization of contractual cooperation, especially in the context of the supply chain and manufacturing industries. Concretely, we consider them from three distinct technical perspectives: (1) performance, (2) interoperability and (3) integration. Voting systems, with or without probabilistic mechanisms, require significant time and resources to operate, which makes it relevant to investigate how the costs of running those systems can be mitigated. In particular, we consider how a blockchain, a form of distributed ledger, can be pruned in order to reduce disk space requirements. Furthermore, no technical system that is part of a larger business is an island; it must be able to interoperate with other systems to maximize the opportunity for automation. For this reason, we also consider how transparent message translation between systems could be facilitated, and present a formalism for expressing the syntactic structure of message payloads. Finally, we propose a concrete architecture, the Exchange Network, that models contractual interactions as negotiations about token exchanges rather than as function invocations and state machine transitions, which we argue lowers the barrier to compatibility with conventional legal and business practices.

Even if no more trusted institutions could be replaced by any forthcoming distributed ledger technologies, we believe that contractual interactions becoming more digital would increase the opportunity for using computers to monitor, assist or even directly participate in the negotiation, management and tracking of business agreements, which we see as more than enough to warrant the cost of further developing the technology. Such computer involvement may not just save time and reduce costs, but could also enable new kinds of computer-driven economies. In the long run, this may enable new levels of resource optimization, not just within large organizations, but also in smaller companies, or even the homes of families and individuals.
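As a concrete illustration of the pruning perspective, here is a minimal sketch (not the system developed in the thesis) of discarding old block bodies while keeping every header, so the hash chain stays verifiable while disk usage stops growing with full history:

```python
# Minimal pruning sketch: block bodies older than `keep_last` are dropped,
# headers are kept so the chain of hashes can still be checked end to end.
import hashlib
import json

def block_hash(header: dict) -> str:
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def prune(chain: list, keep_last: int) -> list:
    """chain: list of {'header': {..., 'prev': <hash>}, 'body': [...]}."""
    cutoff = len(chain) - keep_last
    return [b if i >= cutoff else {"header": b["header"], "body": None}
            for i, b in enumerate(chain)]

def headers_intact(chain: list) -> bool:
    """Integrity survives pruning: only headers enter the hash check."""
    return all(chain[i]["header"]["prev"] == block_hash(chain[i - 1]["header"])
               for i in range(1, len(chain)))
```

Real pruning schemes must also decide what happens to state referenced by pruned bodies; the sketch only shows why header retention preserves verifiability.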

Attribute-based Approaches for Secure Data Sharing in Industry

Chiquito, Alex January 2022 (has links)
The Industry 4.0 revolution relies heavily on data to generate value, innovation, new services, and to optimize current processes [1]. Technologies such as the Internet of Things (IoT), machine learning, and digital twins depend directly on data to bring value and innovation to both discrete manufacturing and process industries. The origin of data may vary from sensor data to financial statements, and even strictly confidential user or business data. In data-driven ecosystems, collaboration between different actors is often needed to provide services such as analytics, logistics, predictive maintenance, and process improvement. Data therefore cannot be considered a corporate-internal asset only; it needs to be shared among organizations in a data-driven ecosystem for it to be used as a strategic resource for creating desired values, innovations, or process improvements [2]. When sharing business-critical and sensitive data, access to the data needs to be accurately controlled to prevent leakage to unauthorized users and organizations.

Access control is a mechanism to control the actions of users over objects, e.g., reading, writing, and deleting files, accessing data, writing to registers, and so on. This thesis studies one of the latest access control mechanisms, Attribute-Based Access Control (ABAC), for industrial data sharing. ABAC emerged as an evolution of the industry-wide used Role-Based Access Control, introducing the idea of attributes for creating access policies, rather than manually assigned roles or ownerships, which enables expressive, fine-granular access control policies. Furthermore, this thesis presents approaches to implementing ABAC in industrial IoT data sharing applications, with special focus on the manageability and granularity of the attributes and policies.

The thesis also studies the implications of outsourced data storage on third-party cloud servers for access control over shared data, and explores how to integrate cryptographic techniques and paradigms into data access control. In particular, the combination of ABAC and Attribute-Based Encryption (ABE) is investigated to protect privacy over not-fully-trusted domains. In this, important research gaps are identified. / Arrowhead Tools
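To make the contrast with role-based control concrete, here is a minimal ABAC sketch (not the thesis's implementation; all attribute names are invented for illustration):

```python
# Hedged ABAC sketch: access is decided by evaluating attribute predicates
# over (subject, object, action) instead of checking a manually assigned role.
def abac_permit(subject: dict, obj: dict, action: str, policies: list) -> bool:
    """Grant access if any policy predicate holds for this request."""
    return any(rule(subject, obj, action) for rule in policies)

# Example policy: a partner organisation may read sensor data belonging to
# a project it participates in (attribute names are assumptions).
policies = [
    lambda s, o, a: (a == "read"
                     and o.get("type") == "sensor_data"
                     and o.get("project") in s.get("projects", ())),
]

subject = {"org": "PartnerCo", "projects": {"arrowhead"}}
obj = {"type": "sensor_data", "project": "arrowhead", "owner": "FactoryAB"}
print(abac_permit(subject, obj, "read", policies))    # True
print(abac_permit(subject, obj, "delete", policies))  # False
```

Because the policy references attributes rather than identities, adding a new partner requires no new role definitions, which is the manageability gain the thesis builds on.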

  • Auto-scaling Prediction using Machine Learning Algorithms : Analysing Performance and Feature Correlation

Ahmed, Syed Saif, Arepalli, Harshini Devi January 2023 (has links)
Despite its drawbacks, Covid-19 has recently contributed to highlighting the significance of cloud computing. The great majority of enterprises and organisations have shifted to a hybrid mode that enables users or workers to access their work environment from any location. This made it possible for businesses to save on-premises costs by moving their operations to the cloud, and it has become essential to allocate resources effectively, especially through predictive auto-scaling. Although many algorithms have been studied regarding predictive auto-scaling, further analysis and validation need to be done. The objectives of this thesis are to implement machine-learning algorithms for predicting auto-scaling and to compare their performance on common grounds. The secondary objective is to find connections amongst features within the dataset and evaluate their correlation coefficients.

The methodology adopted for this thesis is experimentation, chosen so that the auto-scaling algorithms can be tested in practical situations and their results compared to identify the best algorithm using the selected metrics. This experiment can assist in determining whether the algorithms operate as predicted. Metrics such as Accuracy, F1-Score, Precision, Recall, Training Time and Root Mean Square Error (RMSE) are calculated for the chosen algorithms: Random Forest (RF), Logistic Regression, Support Vector Machine and Naive Bayes Classifier. The correlation coefficients of the features in the data are also measured, which helped in increasing the accuracy of the machine learning model.

In conclusion, the features related to our target variable (CPU usage, p95_scaling) often had high correlation coefficients compared to other features. The relationships between these variables could potentially be influenced by other variables that are unrelated to the target variable. Also, from the experimentation, it can be seen that the optimal algorithm for determining how cloud resources should be scaled is the Random Forest Classifier.
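A sketch of this kind of experiment for one of the four algorithms might look as follows; the CSV file and column names are assumptions, since the dataset is not described in detail in the abstract:

```python
# Hedged sketch: correlate features with the scaling target, then train and
# score a Random Forest classifier the way the abstract describes.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("autoscaling_metrics.csv")        # hypothetical file
print(df.corr(numeric_only=True)["p95_scaling"])   # feature correlations

X = df.drop(columns=["p95_scaling"])
y = df["p95_scaling"]                              # scale-out decision label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("f1:", f1_score(y_te, pred, average="weighted"))
```

Dropping features whose correlation with the target is near zero before training is one way such coefficients can "help in increasing the accuracy," as the abstract notes.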

Analys av Accesspunkters placering : Utveckling av verktyg för mätning av signalstyrka med Heatmap funktion / Analyzing the Placement of Access Points : Development of a Signal Strength Measuring Tool with a Heatmap Function

Oprea, Alexander, Bäckrud, Joel January 2023 (has links)
No description available.

Active Assurance in Kubernetes

Wennerström, William January 2021 (has links)
No description available.

A Rule-based approach for detection of spatial object relations in images

Afzal, Wahaj January 2023 (has links)
Deep learning and computer vision are becoming a part of everyday objects and machines. The involvement of artificial intelligence in humans' daily lives opens doors to new opportunities and research. This involvement motivates improving upon existing research on spatial relations and coming up with a more generic and robust algorithm that detects 2-D and 3-D spatial relations, using RGB and RGB-D images to help with more complex relations such as 'on' or 'in' as well. The suggested methods are tested on a dataset with animated and real objects, where the number of objects per image varies from at least 4 to at most 10. The size and orientation of the objects also differ in every image.
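The abstract does not list the rules themselves, so the following is a hedged sketch of what rule-based checks over 2-D bounding boxes could look like; the box format and the tolerance value are assumptions:

```python
# Boxes are (x_min, y_min, x_max, y_max) in image coordinates, y grows down.
def left_of(a, b) -> bool:
    return a[2] <= b[0]                 # a ends before b starts along x

def above(a, b) -> bool:
    return a[3] <= b[1]                 # a's bottom edge is above b's top

def on(a, b, tol=5) -> bool:
    """Crude 2-D proxy for 'on': a rests near b's top edge, overlapping in x."""
    overlaps_x = a[0] < b[2] and b[0] < a[2]
    touches_top = abs(a[3] - b[1]) <= tol
    return overlaps_x and touches_top

cup, table = (40, 20, 60, 52), (0, 50, 200, 120)
print(on(cup, table))                   # True: the cup sits on the table
```

Purely 2-D rules are ambiguous for occluded or containing objects, which is exactly where the RGB-D depth channel becomes necessary for relations such as 'in'.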

Examining Difficulties in Weed Detection

Ahlqvist, Axel January 2022 (has links)
Automatic detection of weeds could be used for more efficient weed control in agriculture. In this master's thesis, weed detectors have been trained and examined on data collected by RISE to investigate whether an accurate weed detector could be trained on the collected data. When only using annotations of the weed class Creeping thistle for training and evaluation, a detector achieved a mAP of 0.33. When using four classes of weed, a detector was trained with a mAP of 0.07. The performance was worse than in a previous study also dealing with weed detection. Hypotheses for why the performance was lacking were examined; experiments indicated that the problem could not fully be explained by the model being underfitted, nor by the objects' backgrounds being too similar to the foreground, nor by the quality of the annotations being too low. The performance was better when training the model with as much data as possible than when only selected segments of the data were used.

  • Mitigating garbage collection in Java microservices : How garbage collection affects Java microservices and how it can be handled

Ericson, Amanda January 2021 (has links)
Java is one of the more recent programming languages that, at runtime, frees applications from manual memory management by using automatic garbage collector (GC) threads, albeit at the cost of stop-the-world pauses that halt the whole application. Since the initial GC algorithms, new collectors have been developed to improve the performance of Java applications. Still, memory-related errors occur, and developers struggle to pick the correct GC for each specific case. Since the concept of microservices was established, the benefits of using them over a monolithic system have been brought to attention, but there are still problems to solve, some associated with garbage collectors.

In this study, the performance of garbage collectors is evaluated and compared in a microservice environment. The measurements were conducted in a Java Spring Boot application using Docker and a docker-compose file to simulate a microservice environment. The application outputted log files that were parsed into reports, which were used as a basis for the analysis. The tests were conducted both with and without a database connection.

Final evaluations show that one GC does not fit all application environments. ZGC and Shenandoah GC proved to perform very well at lowering latency, although they were not able to handle the microservice environment as well as CMS. ZGC was not able to handle the database connection tests at all, while CMS performed unexpectedly well. Finally, the study highlights the importance of balancing memory and hardware usage when choosing which GC to use for each specific case.
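As an illustration of the measurement step, a hedged sketch of parsing pause times out of JDK unified GC logs (produced with, e.g., `java -Xlog:gc:gc.log ...`) might look like this; the regex assumes the JDK 9+ log shape `[...][info][gc] GC(n) Pause ... <d>ms` and may need adjusting per collector and JDK version:

```python
# Hedged sketch: summarize GC pauses from a unified-logging gc.log file.
import re
import statistics

PAUSE = re.compile(r"\[gc[^\]]*\].*Pause.*?(\d+\.\d+)ms")

def pause_stats(log_path: str) -> dict:
    with open(log_path) as f:
        pauses = [float(m.group(1))
                  for line in f
                  if (m := PAUSE.search(line))]
    return {"count": len(pauses),
            "mean_ms": statistics.fmean(pauses),
            "max_ms": max(pauses)}

print(pause_stats("gc.log"))  # hypothetical log from one collector's run
```

Comparing such summaries across runs started with different collectors (e.g., `-XX:+UseZGC` or `-XX:+UseShenandoahGC`) is one way to reproduce the kind of latency comparison the study describes.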
