  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

HTTP Load Balancing Performance Evaluation of HAProxy, NGINX, Traefik and Envoy with the Round-Robin Algorithm

Johansson, Alfred January 2022 (has links)
Operating a popular website is a challenging task. Users not only expect services to always be available, but also good performance in the form of fast response times. To achieve high availability and avoid performance problems, which can be linked to user satisfaction and financial losses, the ability to balance web server traffic between servers is an important aspect. This study aims to evaluate performance aspects of popular open-source load balancing software working at the HTTP layer. The study includes the well-known load balancers HAProxy and NGINX, but also Traefik and Envoy, which have more recently become popular by offering native integration with container orchestrators. To find performance differences, an experiment was designed with two load scenarios using Apache JMeter to measure request throughput and response times with a varying number of simulated users. The experiment consistently showed performance differences between the software in both scenarios. HAProxy had the best overall performance in both scenarios, and it handled the test cases with 1000 users, where the other load balancers began generating a large proportion of failed connections, significantly better. NGINX was the slowest across all test cases from both scenarios. Averaging the results from both load scenarios, excluding tests at the highest concurrency level (1000 users), Traefik performed 24% better, Envoy 27% better and HAProxy 36% better than NGINX.
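The round-robin strategy evaluated above can be sketched in a few lines; the backend addresses below are hypothetical, and real balancers such as HAProxy layer health checks and connection handling on top of this rotation:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Assign each incoming request to the next backend in a fixed rotation."""

    def __init__(self, backends):
        self._cycle = cycle(backends)

    def next_backend(self):
        # Each call advances the rotation by one server.
        return next(self._cycle)

# Hypothetical backend pool for illustration.
backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
lb = RoundRobinBalancer(backends)

# Six requests cover the pool exactly twice, in order.
assignments = [lb.next_backend() for _ in range(6)]
```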
222

RVSingle: A general purpose power efficient RISC-V for FPGAs

Shen, YuYang January 2023 (has links)
With the increasing need for low-cost, power-efficient computing units, RISC-V as an open-standard Instruction Set Architecture (ISA) is becoming more and more popular in industry. There are multiple open-source RISC-V soft processors, such as cva6, VEGA and NOEL-V, but those processors share a common problem: each can only be implemented on a specific FPGA development platform. This thesis introduces a new processor design with compatibility in mind, so that it is not limited to a certain development platform but can be used on multiple different platforms as long as they meet the basic requirements. The processor is a single-stage design without any pipeline. It is used to evaluate the power efficiency of the architecture, and it has a unique feature to enable or disable the RISC-V Compressed (RVC) instruction subset in order to understand its impact on power efficiency. It is simple in architecture but still fully supports the RV64IC instruction set. Because it uses the RISC-V architecture, the processor can in the future be easily extended to adopt more RISC-V instruction subsets.
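The RVC toggle described above hinges on how RISC-V encodes instruction length: per the ISA specification, an instruction whose two lowest bits are not 0b11 is a 16-bit compressed instruction. A minimal Python sketch of that decode rule (an illustration, not the thesis RTL):

```python
def rvc_length(halfword):
    """Return the instruction length in bytes from its first 16 bits.

    RISC-V marks standard 32-bit instructions with 0b11 in the two
    lowest bits; anything else (with longer formats ignored here)
    is a 16-bit compressed (RVC) instruction.
    """
    return 2 if (halfword & 0b11) != 0b11 else 4

# 0x4501 encodes c.li a0, 0 (compressed); 0x00000513 encodes addi a0, x0, 0.
compressed = rvc_length(0x4501)
full = rvc_length(0x00000513 & 0xFFFF)
```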
223

Face Identification Using Eigenfaces and LBPH : A Comparative Study

JAMI, DEVI DEEPSHIKHA, KAMBHAM, NANDA SRIRAAM January 2023 (has links)
Background: With the rise of digitalization, there has been an increasing need for secure and effective identification solutions, particularly in the realm of voting systems. Facial biometric technology has emerged as a potential solution to combat fraud and improve the transparency and security of the voting process. Two well-known facial identification algorithms, Local Binary Pattern Histograms (LBPH) and Eigenfaces, have been extensively used in computer vision for facial identification. However, their effectiveness in the context of a smart voting system is still a matter of debate. Objectives: The aim of this project is to compare the effectiveness of the LBPH and Eigenfaces algorithms in the development of a smart voting system using the Haar cascade for face detection. The objective is to identify the more suitable of the two algorithms, considering factors such as lighting conditions and the facial expressions of the individuals being identified, and to evaluate them using metrics such as accuracy, precision, recall, and F1 score. Methods: The project compares the facial identification algorithms using the Haar cascade for face detection. Both the LBPH and Eigenfaces algorithms are implemented and evaluated in a complex environment similar to a polling station. The algorithms are trained and tested on a dataset of facial images with varying lighting conditions and facial expressions, and their performance is compared using accuracy, precision, recall, and F1 score. Results: The results indicate that the LBPH algorithm performs better than Eigenfaces in terms of accuracy and overall performance, including when tested with faces and objects in low-light conditions.
Conclusions: The comparison of the LBPH and Eigenfaces algorithms using the Haar cascade for face detection reveals that LBPH is the more suitable approach. Such comparisons of facial identification algorithms can contribute to a more reliable and secure voting system, helping to ensure the integrity of the voting process, and the evaluation metrics used in this project can be applied to future research in facial identification.
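As a rough illustration of what LBPH computes per pixel (a sketch, not the thesis implementation, which would typically rely on OpenCV's LBPH recognizer), the basic 3x3 Local Binary Pattern code compares each neighbour with the centre pixel:

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern code for the centre pixel.

    Each of the 8 neighbours is thresholded against the centre and
    the resulting bits are packed clockwise from the top-left into
    one 8-bit code. LBPH then histograms these codes over image cells.
    """
    c = patch[1][1]
    # Clockwise neighbour order starting at the top-left corner.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(order):
        if patch[r][col] >= c:
            code |= 1 << bit
    return code

# Small invented intensity patch for illustration.
patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
```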
224

Predicting Cryptocurrency Prices with Machine Learning Algorithms: A Comparative Analysis

Gudavalli, Harsha Nanda, Kancherla, Khetan Venkata Ratnam January 2023 (has links)
Background: Due to its decentralized nature and the opportunity for substantial gains, cryptocurrency has become a popular investment. However, the highly unpredictable and volatile nature of the cryptocurrency market poses a challenge for investors looking to predict price movements and make profitable investments. Time series analysis, which recognizes trends and patterns in previous price data to create forecasts about future price movements, is one of the prominent and effective techniques for price prediction. Integrating machine learning (ML) techniques and technical indicators with time series analysis can enhance prediction accuracy significantly. Objectives: The objective of this thesis is to identify an effective ML algorithm for making long-term predictions of Bitcoin prices, by developing prediction models and using the technical indicators Relative Strength Index (RSI), Exponential Moving Average (EMA), and Simple Moving Average (SMA) as input for these models. Method: A Systematic Literature Review (SLR) was employed to identify effective ML algorithms for making long-term predictions of cryptocurrency prices, followed by an experiment on the identified algorithms. The selected algorithms are trained and tested using the technical indicators RSI, EMA, and SMA, calculated from historic price data over the period May 2017 to May 2023 taken from the CoinGecko API. The models are then evaluated using various metrics, and the effect of the indicators on the performance of the prediction models is determined using permutation feature importance and correlation analysis. Results: The SLR identified the ML algorithms Random Forest (RF), Gradient Boosting (GB), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) as effective algorithms for the experiment.
Of these, LSTM was found to be the most accurate model, based on its Root Mean Square Error (RMSE) score (0.01083), Mean Square Error (MSE) score (0.00011), Coefficient of Determination (R2) score (0.80618), Time-Weighted Average (TWAP) score (0.40507), and Volume-Weighted Average (VWAP) score (0.35660). Permutation feature importance and correlation analysis further showed that the moving averages EMA and SMA had a greater impact on the performance of all the prediction models than RSI. Conclusion: Prediction models were built using the ML algorithms identified through the literature review, trained and tested on a dataset built from the CoinGecko data with the technical indicators as input features. The LSTM prediction model was found to be the most accurate of the chosen algorithms based on the RMSE, R2, TWAP, and VWAP scores obtained.
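The three technical indicators used as model inputs can be sketched as follows (an illustrative simplification; production implementations typically use Wilder's smoothing for RSI and rolling windows over the full series):

```python
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def ema(prices, n):
    """Exponential moving average, seeded with the first price."""
    k = 2 / (n + 1)  # standard EMA smoothing factor
    value = prices[0]
    for p in prices[1:]:
        value = p * k + value * (1 - k)
    return value

def rsi(prices, n=14):
    """Relative Strength Index over the last n price changes."""
    deltas = [b - a for a, b in zip(prices[:-1], prices[1:])][-n:]
    gains = sum(d for d in deltas if d > 0) / n
    losses = sum(-d for d in deltas if d < 0) / n
    if losses == 0:
        return 100.0  # only gains: maximally overbought reading
    return 100 - 100 / (1 + gains / losses)
```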
225

How is it possible to calculate IT security effectiveness?

Kivimaa, Kristjan January 2022 (has links)
In the IT security world, there is a lack of available, reliable systems for measuring security levels/posture. Existing approaches lack a range of quantitative measurements as well as easy and fast deployment, which potentially affects companies of all sizes. Readily available security standards provide qualitative security levels, but not quantitative, easily comparable results. This deficiency makes it hard for companies to evaluate their security posture accurately, and the absence of security metrics makes it complicated for customers to select the appropriate measures for the particular security level needed. The research question for this research project is: "How is it possible to calculate IT security effectiveness?". The aim of this research is to develop a reference model that supports the IT security team and the business side in making reasoned and optimal decisions about IT security with a reasonable number of man-hours, and to use this model to calculate and optimize the security posture, and the spending on security measures, of a major university and a small CSP (Cloud Service Provider). In this Graded Security Expert System (GSES), also known as the Graded Security Reference Model (GSRM), the quantitative metrics of the graded security approach are used to express the relations between security goals, security confidence and security costs. What makes this model unique is the option to reuse previous customers' security templates/models, cutting the implementation time from 500+ man-hours to as low as 50 man-hours. A first customer's 500+ man-hours will likewise be cut down to 50+ man-hours in the second year of implementing the expert system. The GSRM was developed using a combination of theoretical methods and design science research.
The model is based on InfoSec (information security) activities and InfoSec spending from the previous year (cost and effectiveness), gathered from expert opinions. By implementing GSRM, users can obtain quantitative security levels, which no other model or standard provides. GSRM delivers very detailed and accurate (according to the university's IT security team) effectiveness levels per spending bracket. GSRM was created as a graded security reference model on the CoCoViLa platform, and it is unique in providing quantitative results corresponding to a company's security posture. Freely available models and standards either provide vague quantitative security posture information or are extremely complicated to use, e.g. BIS/ISKE (no longer supported). The GSRM turns the theories presented in the literature review into a functional, graphical model. The GSRM was used with detailed data from the university (15,000+ users), and its IT security team (all members with 10+ years of IT security experience) concluded that the model is reasonably simple to implement and modify, and that the results are precise and easily understandable. It was also observed that the business side had no problems understanding the results, and very few explanatory remarks were needed.
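The graded-security idea of trading spending against confidence can be caricatured as a small optimization problem; the goals, costs and confidence values below are invented for illustration and are not GSRM's actual data:

```python
from itertools import product

# Hypothetical graded-security table: for each security goal,
# (cost, confidence) at graded levels 0..2. Numbers are invented.
GOALS = {
    "confidentiality": [(0, 0.2), (10, 0.6), (25, 0.9)],
    "integrity":       [(0, 0.3), (8, 0.7), (20, 0.95)],
    "availability":    [(0, 0.25), (12, 0.65), (30, 0.9)],
}

def best_plan(budget):
    """Exhaustively choose one level per goal to maximise total
    confidence without exceeding the budget (fine for tiny tables)."""
    best = (None, -1.0)
    names = list(GOALS)
    for levels in product(range(3), repeat=len(names)):
        cost = sum(GOALS[n][l][0] for n, l in zip(names, levels))
        conf = sum(GOALS[n][l][1] for n, l in zip(names, levels))
        if cost <= budget and conf > best[1]:
            best = (dict(zip(names, levels)), conf)
    return best
```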
226

Digitalized contract definition and negotiations for the agreement of rights and obligations in electronic auctions

Chiquito, Eric January 2022 (has links)
Negotiations of different kinds are used to trade goods and services. Within these, the creation of a signed agreement or contract that is binding for the agreeing parties also helps gather evidence that can be used in case of disputes and for adjudication. Traditionally, contracts are established as paper agreements signed by all the involved parties and by a law enforcement entity that ensures their legality in a court of law. These contracts have evolved with the introduction of Information Technology (IT), where the negotiation of goods and services is mainly virtual and/or automated; the consistency and processing speed of computers allow negotiations to be more efficient than ever. Digitalized negotiations allow for auctioning systems that provide a mechanism to efficiently match demand and supply in the exchange of goods and services. Such auctioning systems allow multiple users to compete against one another, iteratively or non-iteratively, to achieve allocative efficiency. Lately, digitalized auctions are implemented on Blockchain systems using Smart Contracts to achieve decentralization. These are implemented as digital scripts that may encode any set of rules written as code, with the validity of the code enforced by the Blockchain's consensus mechanism. Such Smart Contract computations, however, tend to be expensive to execute and are limited by the block size. This thesis studies the creation of digitized negotiation protocols and contract definitions following the needs of traditional trading and auctioning systems. We investigate the use of Ricardian Contracts for the flexible representation of rights and obligations of entities in the context of the circular economy, in both single- and multi-attribute auctions. We analyze the implications of digitized agreements in the context of data sharing.
Furthermore, we analyze how usage control policies can be represented in Ricardian Contracts in the context of intellectual property protection, compliance with regulations, and digital rights management. Finally, we analyze the properties that a system supporting the mentioned models should have and how to implement it in the context of distributed auctioning systems, by contrasting the available state of the art. The main contributions of the thesis are: (1) the creation of a multi-attribute auctioning protocol for the circular economy that implements Ricardian Contracts for the representation of rights and obligations; (2) a method to negotiate obligations and access provisions with multi-level Ricardian Contracts, and to automatically enforce those provisions with access control; (3) a state-of-the-art analysis of distributed and decentralized auctioning systems, in which the key properties of auctioning systems are identified and evaluated against current implementations.
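A multi-attribute auction of the kind described resolves a winner by scoring bids across weighted attributes; a minimal sketch with hypothetical bidders and weights (the thesis protocol additionally binds the outcome into a Ricardian Contract):

```python
def score_bid(bid, weights):
    """Weighted additive score of a multi-attribute bid; each
    attribute is assumed pre-normalised so that higher is better."""
    return sum(weights[a] * bid[a] for a in weights)

def winner(bids, weights):
    """Return the bidder whose offer maximises the weighted score."""
    return max(bids, key=lambda name: score_bid(bids[name], weights))

# Invented bids: price utility, quality, recyclability, each in 0..1.
bids = {
    "alice": {"price": 0.9, "quality": 0.5, "recyclability": 0.4},
    "bob":   {"price": 0.6, "quality": 0.9, "recyclability": 0.8},
}
weights = {"price": 0.5, "quality": 0.3, "recyclability": 0.2}
```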
227

Electrical Energy consumption prediction for Schools

Movva, Venkata Sreenadh January 2022 (has links)
This thesis is part of the master's programme in data science at LTU. The core objective is to build models that can make short-term predictions of electrical energy consumption based on historical consumption data. With the increasing demand for electricity, forecasting electricity consumption is important and must become more accurate and closer to actual values. As part of this thesis, three different time series forecasting models are studied and experimented with. The first is an ensemble of the Facebook Prophet and XGBoost models, the second is a deep learning model using Long Short-Term Memory (a recurrent neural network), and the third is based on a convolutional neural network. The performance of these three models is discussed, and needed improvements are also mentioned. The models are trained on data from 2014-2019, and their predictions are evaluated on 2020 data. As 2020 was at the height of the COVID-19 pandemic, offices were closed, which has an impact on model performance and evaluation; these impacts are also highlighted. The Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology is followed in this thesis.
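Whichever model is used, time-series evaluation hinges on a chronological split like the 2014-2019/2020 one described above (never a shuffled split, which would leak the future into training); a minimal sketch with invented yearly figures:

```python
def split_by_year(records, test_year):
    """Chronological split: everything before test_year trains,
    test_year evaluates. records is a list of (year, value) pairs."""
    train = [(y, v) for y, v in records if y < test_year]
    test = [(y, v) for y, v in records if y == test_year]
    return train, test

def mae(pred, actual):
    """Mean absolute error of a forecast against observations."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

# Invented annual consumption values (MWh) for illustration.
records = [(2014, 210.0), (2015, 215.0), (2016, 220.0),
           (2017, 228.0), (2018, 231.0), (2019, 240.0),
           (2020, 200.0)]
train, test = split_by_year(records, 2020)
```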
228

An Evaluation of TensorFlow as a Programming Framework for HPC Applications / En undersökning av TensorFlow som ett utvecklingsramverk för högpresterade datorsystem

Chien, Wei Der January 2018 (has links)
In recent years deep learning, a branch of machine learning, has gained increasing popularity due to its extensive applications and performance. At the core of these applications is dense matrix-matrix multiplication. Graphics Processing Units (GPUs) are commonly used in the training process due to their massively parallel computation capabilities. In addition, specialized low-precision accelerators have emerged to specifically address tensor operations. Software frameworks such as TensorFlow have also emerged to increase the expressiveness of neural network model development. In TensorFlow, computation problems are expressed as computation graphs, where the nodes of a graph denote operations and the edges denote data movement between operations. With an increasing number of heterogeneous accelerators that might co-exist on the same cluster system, it has become increasingly difficult for users to program efficient and scalable applications. TensorFlow provides a high level of abstraction, and operations of a computation graph can easily be placed on a device through a high-level API. In this work, the usability of TensorFlow as a programming framework for HPC applications is reviewed. We give an introduction to TensorFlow as a programming framework and paradigm for distributed computation. Two sample applications are implemented in TensorFlow: tiled matrix multiplication and a conjugate gradient solver for large linear systems. We illustrate how such problems can be expressed as computation graphs for distributed computation. We perform scalability tests, comment on performance scaling results, and quantify how TensorFlow can take advantage of HPC systems by micro-benchmarking communication performance. Through this work, we show that TensorFlow is an emerging and promising platform well suited for a particular class of problems that require very little synchronization.
/ In recent years deep learning, a type of machine learning, has become popular because of its applications and performance. The most important component of these techniques is matrix multiplication. Graphics processing units (GPUs) are commonly used when training artificial neural networks, owing to their massively parallel computation capacity. In addition, specialized low-precision accelerators that specifically compute matrix multiplication have been developed. Many development frameworks have emerged to help programmers manage artificial neural networks. In TensorFlow, computation problems are expressed as a computation graph: a node represents a computation operation and an edge represents data flow between operations. Since different accelerators with different system architectures must be programmed, programming high-performance systems has become increasingly difficult. TensorFlow offers a high level of abstraction and simplifies the programming of high-performance computations; accelerators are programmed by placing operations within the graph on different accelerators through an API. This work examines the usability of TensorFlow as a programming framework for high-performance computing applications. We present TensorFlow as a programming framework for distributed computation. We implement two common applications in TensorFlow, a solver for linear equation systems using the conjugate gradient method and block matrix multiplication, and illustrate how these problems can be expressed in computation graphs for distributed computation. We experiment with and comment on methods to demonstrate how TensorFlow can exploit HPC hardware, testing both scalability and efficiency and micro-benchmarking communication performance. Through this work we show that TensorFlow is an emerging and promising platform well suited for a certain type of problem that requires minimal synchronization.
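The tiled matrix multiplication studied in the thesis decomposes the result into blocks whose partial products can be assigned to different devices; a plain-Python sketch of the blocking (in TensorFlow each tile product would become a graph node placed on a device):

```python
def tiled_matmul(A, B, tile=2):
    """Blocked matrix multiply: C is accumulated tile by tile,
    mirroring how a computation graph could assign each tile's
    partial product to a different device."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i0 in range(0, n, tile):          # tile rows of C
        for j0 in range(0, p, tile):      # tile columns of C
            for k0 in range(0, m, tile):  # accumulate partial products
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, p)):
                        for k in range(k0, min(k0 + tile, m)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```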
229

Evaluating the Single Sign-On Protocol OpenID Connect for an Electronic Document Signature Service From a Security Perspective / En utvärdering av Single Sign-On-protokollet OpenID Connect  för en elektronisk dokumentunderskrifttjänst från ett säkerhetsperspektiv

Thor, Ludvig January 2022 (has links)
Today, there is an increasing demand for services that authenticate users on the internet. One example of an authentication protocol is OpenID Connect, used by, for example, Google to provide single sign-on functionality to millions of users. Since this demand is growing and more companies are implementing the protocol, there is also a need to ensure that the protocol is implemented in a way that protects against adversaries attacking the services. This paper aims to provide guidelines to those implementing the protocol, examining several attacks that can be performed against it: Cross-Site Request Forgery (CSRF) attacks, Mix-Up attacks, passive web attacks, and Distributed Denial of Service attacks. It is found that how one chooses to implement the protocol can greatly affect security and the protocol's susceptibility to attacks. Among other findings, implementers of the protocol should incorporate state variables to protect against CSRF attacks, and services must utilize a secure HTTPS connection to protect sensitive data. A recommendation is made for how a federation with Relying Parties and OpenID Providers can be set up to further improve security.
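The recommended state-variable defence against CSRF can be sketched as follows (an illustrative outline, not a complete OpenID Connect client; `session` is assumed to be a server-side, per-user store):

```python
import secrets

def start_auth(session):
    """Generate an unguessable state value, bind it to the user's
    session, and return it for inclusion in the authorization request."""
    state = secrets.token_urlsafe(32)
    session["oidc_state"] = state
    return state

def handle_callback(session, returned_state):
    """Reject the callback unless the returned state echoes the stored
    one (constant-time compare), defeating CSRF on the redirect URI.
    The stored value is consumed so it cannot be replayed."""
    expected = session.pop("oidc_state", None)
    if expected is None or not secrets.compare_digest(expected, returned_state):
        raise ValueError("state mismatch: possible CSRF")
    return True
```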
230

Machine learning based control of small-scale autonomous data centers

Brännvall, Rickard January 2020 (has links)
The low-latency requirements of 5G are expected to increase the demand for distributed data storage and computing capabilities in the form of small-scale data centers (DCs) located at the edge, near the interface between mobile and wired networks. These edge DCs will likely be of modular and standardized designs, although configurations, local resource constraints, environments and load profiles will vary, thereby increasing the diversity of DC infrastructure. Autonomy and energy efficiency are key objectives for the design, configuration and control of such data centers. Edge DCs are (by definition) decentralized and should continue operating without human intervention in the presence of disturbances such as intermittent power failures, failing components and overheating. Automatic control is also required for efficient use of renewable energy, batteries and the available communication, computing and data storage capacity. These objectives demand data-driven models of the internal thermal and electric processes of an autonomous edge DC, since the resources required to manually define and optimize the models for each DC would be prohibitive. In this thesis, machine learning methods implemented in a modular design are evaluated for thermal control of such modular DCs. Experiments with small server clusters are presented, performed in order to investigate which parameters are important in the design of advanced control strategies for autonomous edge DCs. Furthermore, recent transfer learning results are discussed to understand how to develop data-driven models that can be deployed to modular DCs in varying configurations and environmental contexts without training from scratch. The first study demonstrates how a data-driven thermal model for a small cluster of servers can be calibrated to sensor data and used to construct a model predictive controller for the server cooling fan.
The experimental investigation of cooling fan control continues in the next study, which explores operational sweet spots and energy-efficient holistic control strategies. The machine learning based controller from the first study is then re-purposed to maintain environmental conditions in an exhaust chamber favourable for drying apples, as part of a practical study of how excess heat produced by computation can be used in the food processing industry. A fourth study describes the RISE EDGE lab, a test bed for small data centers, built with the intention to explore and evaluate related technologies for micro-grids with renewable energy and batteries, 5G connectivity and coolant storage. Finally, the last work presented develops the model from the first study towards an application for thermal-based load balancing.
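The model-predictive idea in the first study can be caricatured with a one-step toy model: predict the next temperature from load and fan speed, then pick the cheapest fan level that keeps the prediction within limits. Coefficients and limits below are invented for illustration, not calibrated to the thesis data:

```python
def predict_temp(temp, load, fan, a=0.8, b=0.5):
    """Toy linear thermal model: heat added in proportion to compute
    load, removed in proportion to fan speed (illustrative coefficients)."""
    return temp + a * load - b * fan

def choose_fan(temp, load, limit, fan_levels=range(0, 11)):
    """One-step predictive control: pick the lowest fan level whose
    predicted next temperature stays at or under the limit, i.e. the
    least cooling energy that satisfies the thermal constraint."""
    for fan in fan_levels:
        if predict_temp(temp, load, fan) <= limit:
            return fan
    return max(fan_levels)  # saturate at full speed if infeasible
```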
