  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Machine Learning-Assisted Performance Assurance

Helali Moghadam, Mahshid January 2020 (has links)
With the growing involvement of software systems in our lives, the assurance of performance, as an important quality characteristic, rises to prominence for the success of software products. Performance testing, preservation, and improvement all contribute to the realization of performance assurance. Common approaches to the challenges in testing, preserving, and improving performance mainly rely on performance models, system models, or source code. Although modeling provides deep insight into system behavior, drawing a well-detailed model is challenging, and artifacts such as models and source code might not always be available. These issues motivate the use of model-free machine learning techniques, such as model-free reinforcement learning, to address the related challenges in performance assurance. With reinforcement learning, if the optimal policy (way) of achieving the intended objective in a performance assurance process can be learnt by the acting system (e.g., the tester system), then the objective can be accomplished without detailed performance models. Furthermore, the learnt policy can later be reused in similar situations, which improves efficiency by saving computation time while reducing the dependency on models and source code. In this thesis, our research goal is to develop adaptive and efficient performance assurance techniques that meet the intended objectives without access to models and source code. We propose three model-free learning-based approaches to tackle the challenges: efficient generation of performance test cases, runtime performance (response time) preservation, and performance improvement in terms of makespan (completion time) reduction. We demonstrate the efficiency and adaptivity of our approaches through experimental evaluations conducted on research prototype tools, i.e., simulation environments that we developed or tailored for our problems, in different application areas.
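The abstract does not include code; as a rough, hedged illustration of the model-free idea it describes, the sketch below uses tabular Q-learning to tune the workload on a simulated system until a response-time threshold is reached. The environment, state abstraction, reward shaping, and all parameters are illustrative assumptions, not the thesis's tool.

```python
import random
from collections import defaultdict

# Minimal sketch (not the thesis's tool): a tabular Q-learning agent adjusts
# workload intensity on a simulated system-under-test until the observed
# response time reaches an error threshold, i.e. it "generates" a stress test
# case without a performance model. All names and parameters are assumptions.

ACTIONS = [-10, 0, +10]          # change in number of concurrent users
TARGET_RT = 2.0                  # response-time error threshold (seconds)


def observe_response_time(users):
    """Stand-in for the simulated system under test."""
    return 0.02 * users + random.gauss(0, 0.05)


def run_episode(q, epsilon=0.2, alpha=0.5, gamma=0.9, max_steps=50):
    users = 10
    for _ in range(max_steps):
        state = users // 10                      # coarse state abstraction
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))   # explore
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
        users = max(1, users + ACTIONS[a])
        rt = observe_response_time(users)
        reward = -abs(TARGET_RT - rt)            # closer to the threshold is better
        next_state = users // 10
        best_next = max(q[(next_state, i)] for i in range(len(ACTIONS)))
        q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
        if rt >= TARGET_RT:                      # objective reached: test case found
            return users
    return None


q_table = defaultdict(float)
for _ in range(200):
    workload = run_episode(q_table)
print("workload reaching the threshold:", workload)
```

The learnt Q-table plays the role of the reusable policy the abstract refers to: in a later, similar test session it could be loaded instead of being learnt from scratch.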
132

Discovering what makes news tweets popular when controlling for content / Undersöker vad som gör tweets populära när man tar hänsyn till innehållet

Christensson, Martin, Holmgren, William January 2021 (has links)
Twitter is one of the largest social networks, with over 330 million active users, so being able to create tweets that spread further means a message can reach more people. It is also a social platform widely used by news networks to share news and is the main source of news for many people. Twitter also has an API that researchers can use to easily extract data from the website. This, in combination with the reasons above, has made Twitter a hot research topic. This study has, to the best of the authors' knowledge, introduced a novel approach to analyzing Twitter data. It focuses on tweets containing links to news articles and groups these into clusters based on the contents of those news articles. Tweets that share near-identical news articles are grouped into clone sets, which makes it possible to analyze only tweets that share the same content. This eliminates content as a factor that could affect popularity and allows a better understanding of the underlying factors that make a tweet popular. While only subtle differences were found when controlling for content (e.g., regardless of whether we control for content, followers, following, and whether a user is verified were the most important predictive factors), the approach provided new insights into the timing of when tweets are posted. Tweets posted early accounted for a great majority of total retweets as well as the most successful tweet, while tweets posted late accounted for a great majority of the least successful tweets. The methodology of controlling for content gave interesting insights, and the authors believe it deserves further attention in similar research.
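As a hedged sketch of the clone-set idea described above (not the study's actual pipeline), the snippet below groups tweets by a hash of the linked article and fits a simple model within each group, so that content is held fixed while user features such as followers, following, and verification vary. The field names and the use of scikit-learn are assumptions for illustration.

```python
import hashlib

from sklearn.ensemble import RandomForestRegressor

# Illustrative sketch: tweets linking to (near-)identical articles fall into
# the same clone set via a content hash, and a per-group model then ranks the
# user features that predict retweets once content no longer varies.

tweets = [
    {"article_text": "Election results announced ...", "followers": 1200,
     "following": 300, "verified": 1, "retweets": 45},
    {"article_text": "Election results announced ...", "followers": 90,
     "following": 410, "verified": 0, "retweets": 2},
    {"article_text": "New climate report published ...", "followers": 5000,
     "following": 150, "verified": 1, "retweets": 310},
]


def clone_key(article_text):
    """Tweets whose articles hash identically end up in the same clone set."""
    return hashlib.sha1(article_text.strip().lower().encode()).hexdigest()


clone_sets = {}
for t in tweets:
    clone_sets.setdefault(clone_key(t["article_text"]), []).append(t)

for key, group in clone_sets.items():
    if len(group) < 2:       # a clone set of one cannot control for content
        continue
    X = [[t["followers"], t["following"], t["verified"]] for t in group]
    y = [t["retweets"] for t in group]
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    print(key[:8], dict(zip(["followers", "following", "verified"],
                            model.feature_importances_.round(2))))
```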
133

Digital Power of Attorney for authorization in industrial cyber-physical systems

Vattaparambil Sudarsan, Sreelakshmi January 2021 (has links)
In the age of digitization, many cyber-physical systems (CPS) are semi-autonomous and have sufficient power and resources to perform tasks on behalf of users. This thesis defines an authorization technique to transfer the power of legitimate users to trusted CPS or IoT devices, allowing the device to sign or access resources on behalf of the user. The authorization technique is based on a digital Power of Attorney (PoA), a self-contained document generated by the user (principal) and sent to the agent (trusted device). A Power of Attorney contains a timestamp that makes it invalid after a period of time predefined by the principal. The agent who receives the PoA does not require a separate account; instead, it uses the principal's account with limited features. The thesis studies and analyzes other delegation-based and subgranting-based authorization techniques, such as the OAuth standard. There are certain similarities and differences between OAuth and PoA, which are analyzed based on metrics such as protocol flow, communication type, token format, and expiration control. Considering the benefits and challenges of both OAuth and PoA, this thesis combines the two techniques and proposes a multilevel subgranting system. The conceptual architecture, protocol flow, design overview, PoA format, use case scenarios, and implementation details of the proposed system are presented. The system is implemented based on an industrial CPS use case scenario. The results are qualitatively analyzed and also quantitatively evaluated based on the metric of computational time. Future work includes security analysis, result evaluation and comparison of findings with respect to OAuth and other delegation-based authorization standards, implementation of the PoA-based authorization technique from scratch, and integration with frameworks such as Arrowhead.
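The thesis defines its own PoA format, which the abstract does not reproduce. The following is a minimal sketch assuming an HMAC-signed, time-limited JSON document; all field names and the signing scheme are chosen purely for illustration.

```python
import hashlib
import hmac
import json
import time

# Minimal sketch of a self-contained, time-limited PoA document, assuming an
# HMAC-signed JSON structure. The real PoA format is defined in the thesis;
# every field name and the signing scheme here are illustrative assumptions.

SECRET = b"principal-signing-key"          # placeholder key material


def issue_poa(principal, agent, scopes, valid_for_s=300):
    body = {
        "principal": principal,            # user granting the power
        "agent": agent,                    # trusted CPS/IoT device
        "scopes": scopes,                  # limited features on the principal's account
        "issued_at": int(time.time()),
        "expires_at": int(time.time()) + valid_for_s,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}


def verify_poa(poa):
    payload = json.dumps(poa["body"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, poa["signature"]):
        return False                       # tampered document
    return time.time() < poa["body"]["expires_at"]   # invalid once the period elapses


poa = issue_poa("alice", "robot-42", ["sign_work_order"], valid_for_s=60)
print("accepted:", verify_poa(poa))
```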
134

UNDERSTANDING DIGITAL TWIN: A SYSTEMATIC MAPPING STUDY

Guzina, Luka January 2021 (has links)
A digital twin is a virtual representation of manufacturing elements such as personnel, products, assets and process definitions: a living model that continuously updates and changes as its physical counterpart changes, representing status, working conditions, product geometries and resource states in a synchronous manner [1]. The digital representation provides both the elements and the dynamics of how a physical part operates and lives throughout its life cycle. In recent years, the digital twin has caught the attention of many researchers, who have investigated its adoption in various fields. In this thesis, we report on the planning, execution and results of a systematic mapping study that examines the current application of the digital twin, its research relevance, application domains, enabling technologies and perceived benefits. We start from an initial set of 675 publications and, through a rigorous selection process, obtain a final set of 29 primary studies. Using a classification framework, we extract the relevant data and analyse it both quantitatively and qualitatively, using vertical and orthogonal analysis techniques. This work aims to investigate the current achievements of the digital twin, focusing on the technologies it uses, its applications, the benefits it offers, and publication details.
135

Omarbetning utav PHP-skript och databasintegritet / Revision of PHP-scripts and database integrity

Mellberg, Simon January 2019 (has links)
This report describes the work carried out at Techsam AB to eliminate, or at least reduce, the risk of data being lost or altered. The purpose of the project is to identify problematic areas and fix the current system; a problematic area is any place where an insertion into or an update of the database is made. The project's main task is to further develop the system that is already in production. Database integrity is an important topic in the project, and the possibilities for checking data grow the closer to the end of the production chain the data gets; therefore, other solutions have been introduced at the beginning of production. Those solutions are visual aids and warnings that help the user make correct decisions. Since the system is only used in-house, there are no external requirements on appearance or security; the requirements concern functionality and integrity towards the database. The appearance has not been altered.
136

Automatisk uppgradering av Cisco IOS / Automatic upgrading of Cisco IOS

Sjöström, Frej January 2020 (has links)
No description available.
137

Anomaly Detection using Machine Learning Approaches in HVDC Power System

Borhani, Mohammad January 2020 (has links)
No description available.
138

Testing AI-democratization: What are the lower limits of text generation using artificial neural networks?

Kinde, Lorentz January 2019 (has links)
Artificial intelligence is an area of technology that is growing rapidly. Considering its increasing influence in society, how available is it? This study attempts to create a web content summarizer using generative machine learning. Several concepts and technologies are explored, most notably sequence-to-sequence models, transfer learning and recurrent neural networks. The study concludes that creating a purely generative summarizer is unfeasible at a hobbyist level due to hardware restrictions, showing that slightly more advanced machine learning techniques are still unavailable to non-specialized individuals. The reasons why are investigated in depth in an extensive theoretical section, which first explains how neural networks work, then natural language processing at large, and finally how to create a generative recurrent artificial neural network. Ethical and societal concerns regarding machine-learning text generation are also discussed, along with alternative approaches to solving the task at hand.
139

Verifying Deadlock-Freedom for Advanced Interconnect Architectures

Meng, Wang January 2020 (has links)
Modern advanced interconnects, such as those orchestrated by the ARM AMBA AXI protocol, can suffer fatal deadlocks in the connections between masters and slaves if transactions are not properly arranged. There is existing research on deadlock problems in on-chip bus systems, as well as methods to avoid the deadlocks that can arise. This project aims to verify the situations in which deadlock can occur and the countermeasures against those deadlocks. In this thesis, the ARM AMBA AXI protocol and the countermeasures are modelled in NuSMV. Based on these models, we verify that non-trivial cycles of transactions can cause deadlocks, and that some bus techniques can mitigate the deadlock problems efficiently. The results from model checking several instances of the protocol and the corresponding countermeasures show that these techniques can indeed avoid deadlocks.
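The thesis performs its verification with NuSMV models of the AXI protocol; purely as an intuition for the "non-trivial cycles of transactions" it mentions, the sketch below reduces the question to cycle detection in a master-slave wait-for graph. The graph, the example, and the reduction itself are simplifying assumptions rather than the thesis's model.

```python
# Rough illustration only: the thesis checks AXI models in NuSMV, whereas this
# sketch reduces the "non-trivial cycle of transactions" condition to cycle
# detection in a wait-for graph between masters and slaves.

def has_cycle(graph):
    """Iterative DFS with colouring: True if the directed graph has a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}
    for start in graph:
        if colour[start] != WHITE:
            continue
        stack = [(start, iter(graph[start]))]
        colour[start] = GREY
        while stack:
            node, children = stack[-1]
            for child in children:
                if colour.get(child, WHITE) == GREY:
                    return True              # back edge => cycle => possible deadlock
                if colour.get(child, WHITE) == WHITE:
                    colour[child] = GREY
                    stack.append((child, iter(graph.get(child, []))))
                    break
            else:
                colour[node] = BLACK
                stack.pop()
    return False


# Master M1 waits on slave S1, which is blocked behind M2's outstanding
# transaction to S2, which in turn waits on M1: a non-trivial cycle.
wait_for = {
    "M1": ["S1"],
    "S1": ["M2"],
    "M2": ["S2"],
    "S2": ["M1"],
}
print("deadlock possible:", has_cycle(wait_for))

# A countermeasure that breaks the S2 -> M1 dependency (e.g. limiting
# outstanding transactions or reordering) removes the cycle.
wait_for["S2"] = []
print("deadlock possible:", has_cycle(wait_for))
```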
140

Virtual Machine Placement in Cloud Environments

Li, Wubin January 2012 (has links)
With the emergence of cloud computing, computing resources (i.e., networks, servers, storage, applications, and services) are provisioned as metered, on-demand services over networks, and can be rapidly allocated and released with minimal management effort. In the cloud computing paradigm, the virtual machine is one of the most commonly used resource carriers in which business services are encapsulated. Virtual machine placement optimization, i.e., finding optimal placement schemes for virtual machines and reconfiguring them as the environment changes, becomes a challenging issue. The primary contribution of this licentiate thesis is the development and evaluation of our combinatorial optimization approaches to virtual machine placement in cloud environments. We present models for dynamic cloud scheduling via migration of virtual machines in multi-cloud environments, and for virtual machine placement for predictable and time-constrained peak loads in single-cloud environments. The studied problems are encoded in a mathematical modeling language and solved using a linear programming solver. In addition to scientific publications, this work also contributes software tools (in the EU-funded project OPTIMIS) that demonstrate the feasibility and characteristics of the presented approaches.
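The abstract states that the placement problems are encoded in a mathematical modeling language and solved with a linear programming solver, without naming the tools here. As a hedged illustration of that formulation style, the sketch below encodes a toy single-cloud placement (packing VMs onto as few hosts as possible within CPU capacity) as a 0-1 integer program with the PuLP library; the data, variable names and the choice of PuLP are assumptions, not the thesis's artifacts.

```python
import pulp

# Toy 0-1 integer program in the spirit of the thesis's formulation style,
# not its actual model: pack VMs onto as few hosts as possible without
# exceeding CPU capacity.

vms = {"vm1": 2, "vm2": 4, "vm3": 3}         # CPU cores demanded by each VM
hosts = {"h1": 8, "h2": 4}                   # CPU cores available per host

prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)

# x[v, h] = 1 if VM v is placed on host h; y[h] = 1 if host h is switched on.
x = pulp.LpVariable.dicts("x", [(v, h) for v in vms for h in hosts], cat="Binary")
y = pulp.LpVariable.dicts("y", hosts, cat="Binary")

# Objective: minimise the number of hosts in use.
prob += pulp.lpSum(y[h] for h in hosts)

# Each VM is placed on exactly one host.
for v in vms:
    prob += pulp.lpSum(x[(v, h)] for h in hosts) == 1

# Host capacity is respected, and only powered-on hosts may carry VMs.
for h in hosts:
    prob += pulp.lpSum(vms[v] * x[(v, h)] for v in vms) <= hosts[h] * y[h]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (v, h), var in x.items():
    if var.value() == 1:
        print(f"{v} -> {h}")
```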
